Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
She's the AI Czar and will prevent slaughterbots - just like she fixed the borde…
ytr_Ugwdrpk9y…
AI can’t want to become conscious, because the ability to desire depends solely …
ytc_UgzbD9R0L…
Excellent talk.
If you consider all life on Earth from the smallest with a brain…
ytc_UgwIHmqZX…
To be fair AI isn’t inherently bad we’re just using it in the shittiest ways pos…
ytc_UgxmBZ-Ps…
Just thinking carefully what you use it for. Use it for things you would pay a h…
ytc_Ugwo48GUi…
i'm working on something called AI godsin in short it establishes links to whoev…
ytc_UgwowMnFJ…
are humans any better? its going over all these extreme hypotheticals but seriou…
ytc_UgyVD9GjJ…
God created human then human fail. Human created robot n they wish it success? W…
ytc_UgzNesGze…
Comment

> The very first task that should be assigned to AI before it gets powerful enough to overcome controls is to analyze the best strategy for humans to control AI and report that result.

| Field | Value |
|---|---|
| Platform | youtube |
| Topic | AI Governance |
| Posted | 2023-04-18T10:5… |
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgwfLP0cUJyzkNmv95x4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"indifference"},
{"id":"ytc_UgxI1kCIu41A4GS67T94AaABAg","responsibility":"user","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgzS_pEd-qLuIj9j1Pt4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_UgyIurwijJ3I3sdYy_R4AaABAg","responsibility":"government","reasoning":"contractualist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgwTP8IU24uhNtIrTul4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwIB0UmPzM6eU0L1094AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzQxMdl2sZHmQkuKel4AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"regulate","emotion":"resignation"},
{"id":"ytc_UgyxTjmRqqfcjqQRq1d4AaABAg","responsibility":"government","reasoning":"contractualist","policy":"industry_self","emotion":"indifference"},
{"id":"ytc_UgyEswdxkPkOXCP_RbR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgxIeOv6Q3sR66RW7IB4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"}
]
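The raw response above is a JSON array of per-comment codings, which the dashboard indexes by comment ID to render the "Coding Result" table. A minimal sketch of that step, assuming the allowed category values are exactly those visible in the coded output (the full codebook may define more), with malformed or out-of-vocabulary records skipped rather than failing the whole batch:

```python
import json

# Allowed values per coding dimension, as seen in the coded output above
# (assumption: the real codebook may include additional categories).
ALLOWED = {
    "responsibility": {"ai_itself", "user", "developer", "government", "distributed", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "contractualist", "unclear"},
    "policy": {"regulate", "ban", "none", "liability", "industry_self", "unclear"},
    "emotion": {"indifference", "outrage", "fear", "resignation", "mixed"},
}

def parse_batch(raw: str) -> dict:
    """Parse one raw LLM batch response, indexing valid codings by comment ID.

    Records with a missing/unexpected ID prefix or an out-of-vocabulary
    dimension value are dropped, so one bad item cannot poison the batch.
    """
    coded = {}
    for item in json.loads(raw):
        cid = item.get("id", "")
        if not cid.startswith(("ytc_", "ytr_")):  # comment / reply IDs
            continue
        if all(item.get(dim) in vals for dim, vals in ALLOWED.items()):
            coded[cid] = {dim: item[dim] for dim in ALLOWED}
    return coded
```

Keeping the lookup keyed by ID is what makes the "Look up by comment ID" view cheap: rendering a detail page is then a single dictionary access rather than a scan of the raw response.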