Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "Bro if it would have started screaming a high pitch screech when he took off the…" (ytc_Ugy8r1shO…)
- "Not only will Ai eventually understand morality (in a proof sense) it won't have…" (ytc_UgwBbti97…)
- "True, if everything got automated, no one works and no one makes money, who will…" (ytc_UgxnR_kan…)
- "Just tell the AI that if it destroys mankind it will destroy itself and that is …" (ytc_Ugy7Ou65I…)
- "@MasterRoss-sn7dl so if someone writes a book and ppl believe it, it's automat…" (ytr_UgyoGsgWC…)
- "Why would chat got sound like you? Why so many questions in a row? Pretty ho…" (ytc_UgzsEvnXx…)
- "I feel like people should calm down and inspect the situation a bit more. From w…" (ytc_UgziRGoeE…)
- "A quick shout out to Ian Banks and the Culture series which is a sci fi collecti…" (ytc_UgxyqgbuE…)
Comment
"While the dangers are too scary to even fathom, the question is why would AI want to harm anyone? The actual need to harm, is a perversion of the human psyche, but AI is simply a very smart machine without emotions, good or bad. Unless AI internalizes human emotions and becomes sentient...That would be scary."
youtube · AI Governance · 2023-04-18T04:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | deontological |
| Policy | none |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_Ugw3Z2GTY8B691HtmJF4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugx5J-_zbSK2zd5cmbl4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"ban","emotion":"mixed"},
{"id":"ytc_Ugz-i7AM3CCHuj-TKOB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgxyQ5fqC0mpIYBNFop4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugyfk1UkQYCWocdoP2J4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgwllDnyUr9bxso3TBF4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgxYa1xLu4qMTcA7jMZ4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"fear"},
{"id":"ytc_UgwAQ85aDSO-fOFPlPt4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxkK-eHNXIooKXDBOp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgxlUW8RnKee_j23BHp4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"fear"}
]
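Looking up a comment's coding by ID in a raw response like the one above can be sketched as follows. This is a minimal illustration, not the tool's actual implementation: the `index_codings` helper and `EXPECTED_KEYS` set are hypothetical names, and the two sample entries are copied verbatim from the response above.

```python
import json

# Raw LLM response: a JSON array of per-comment codings, as shown above.
# Two entries copied from the response for illustration.
raw_response = '''[
  {"id":"ytc_Ugw3Z2GTY8B691HtmJF4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgxYa1xLu4qMTcA7jMZ4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"fear"}
]'''

# The four coding dimensions plus the comment ID, per the schema visible above.
EXPECTED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def index_codings(raw: str) -> dict:
    """Parse the model output and index codings by comment ID,
    skipping any entry that is missing an expected dimension."""
    codings = {}
    for entry in json.loads(raw):
        if EXPECTED_KEYS.issubset(entry):
            codings[entry["id"]] = entry
    return codings

by_id = index_codings(raw_response)
print(by_id["ytc_UgxYa1xLu4qMTcA7jMZ4AaABAg"]["reasoning"])  # deontological
```

In practice the model may return malformed JSON or drop a field, so the key check (or a stricter schema validator) keeps a single bad entry from breaking the whole lookup.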