Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up a comment by its ID, or browse the random samples below.
- "your own 2 hands will always be more capable of stopping a accident or prevent o…" (`ytc_UgxHsD4fV…`)
- "This is setting a precedent in AI: For every "Good Actor" leaving the field it i…" (`ytc_UgymFB3xG…`)
- "Humanity,truth, even the description of humanity today is completely a lie ,and …" (`ytc_UgwLgk_yQ…`)
- "By the way found an easy cheat you let Ai write. Go to the free tutoring let the…" (`ytr_Ugxr_ttBK…`)
- "AI could never get someone's style right because if fails to notice little detai…" (`ytc_UgxgIfEzU…`)
- "They should stop the whole A.i, atomic bomb and all this. Zen is the way…" (`ytc_UgxkeOm5k…`)
- "Seriously bad idea for humanity. A robot already went rogue and killed four scie…" (`ytc_UgzsCSl0p…`)
- "Interesting. What if you are just really asking AI for "basic" code i.e. array s…" (`ytr_UgxVYkxCy…`)
Comment
The idea is that something like that could be a solution against competing human tribes (and as a consequence possibly also future competing AI tribes) which fuels the dangerous unchecked AI race. So that humanity can be united in developing safe AI. Barreling ahead towards possible extinction is his worry, not that he wants a human world dictatorship.
Source: youtube · Topic: AI Governance · Posted: 2025-07-02T13:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | industry_self |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
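This table is a per-comment view of one record from the batch response below. A sketch of how such a record might be rendered; the function name and the separate `coded_at` argument are illustrative, not the tool's actual code:

```python
def render_coding_result(record: dict, coded_at: str) -> str:
    """Format one coded record as the markdown table shown above."""
    rows = [
        ("Responsibility", record["responsibility"]),
        ("Reasoning", record["reasoning"]),
        ("Policy", record["policy"]),
        ("Emotion", record["emotion"]),
        ("Coded at", coded_at),  # coding timestamp, tracked separately here
    ]
    header = ["| Dimension | Value |", "|---|---|"]
    return "\n".join(header + [f"| {dim} | {val} |" for dim, val in rows])
```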
Raw LLM Response
```json
[
  {"id":"ytr_UgyCLBEJAZHeRl4oaYV4AaABAg.AK-wckndrYCAK3sdOPNRza","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"},
  {"id":"ytr_UgzMwA-2As4laPuRoVp4AaABAg.AJz3mGo7qjuAK598EIrC3o","responsibility":"distributed","reasoning":"contractualist","policy":"liability","emotion":"fear"},
  {"id":"ytr_UgyENOH579RxGauK-Fd4AaABAg.AJxQi3GpjsJAJyNJyA82pX","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"},
  {"id":"ytr_UgxGcR7_R7Qvr8MeJN54AaABAg.AJxDAzLcnKoAK53ZbbSD8N","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytr_UgzQ4G3PwmHBfIFbOoV4AaABAg.AJwsdfgS75tAJwtX_TBG5C","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
  {"id":"ytr_UgwRzp1RMgsGaXXIHYR4AaABAg.AJwRrAGjn4UAK4EOuziUyG","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytr_UgxZT6LuDvkTW0mRrNt4AaABAg.AJwL-E2cvILAK4FBe6qO1x","responsibility":"distributed","reasoning":"consequentialist","policy":"industry_self","emotion":"fear"},
  {"id":"ytr_Ugz8kXJ54R54LGC4pDl4AaABAg.AJw8sFdpsn3AK4L8pyijZP","responsibility":"distributed","reasoning":"consequentialist","policy":"industry_self","emotion":"fear"},
  {"id":"ytr_UgxuKqS5SR4qTNHORi94AaABAg.AJuhcfLiaW9AJv0uOzmZHv","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytr_UgyCLi-a9qrVIhXsxQB4AaABAg.AJu8hia4-F-AJv5cYcGsPA","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"mixed"}
]
```
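Since the model returns a plain JSON array, each batch has to be parsed and every record checked against the coding scheme before it is stored. A minimal sketch; the allowed category sets are inferred from the values visible on this page, not an exhaustive codebook:

```python
import json

# Allowed values per dimension, inferred from the responses shown
# above -- an assumption, not the project's full codebook.
ALLOWED = {
    "responsibility": {"user", "developer", "company", "distributed", "ai_itself", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "contractualist", "unclear"},
    "policy": {"regulate", "liability", "industry_self", "none", "unclear"},
    "emotion": {"fear", "outrage", "indifference", "mixed"},
}

def parse_batch(raw: str) -> list[dict]:
    """Parse one raw LLM response and validate every coded record."""
    records = json.loads(raw)
    for record in records:
        for dim, allowed in ALLOWED.items():
            value = record.get(dim)
            if value not in allowed:
                raise ValueError(f"{record.get('id')}: bad {dim!r} value {value!r}")
    return records
```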