Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect:
- "Humans just want someone to talk to. You can make your chats private and not sav…" (ytc_UgzSC9aKF…)
- "Having the AI “joke” about taking over in order to desensitized people. 🤷🏻♂️ ju…" (ytc_Ugz_azBGO…)
- "If people in the United States that are pro-choice want to support medical progr…" (rdc_dcwxz10)
- "SICKENING! Are we losing so many Caucasians that instead of birthing them they’r…" (ytc_UgybufC-b…)
- "Humans: ‘AI can’t have feelings.’ Also humans: cry when I roast them in the comm…" (ytc_UgyFIPw8s…)
- "they'll get in humongous legal issues when the AI insults a child or be racist b…" (ytc_UgwJ3cLv1…)
- "AI is a dream scenario for employers because it never sleeps, doesn't call in "s…" (ytc_UgzNQwK3e…)
- "1 if adults dont train their kids to NOT BE ADDICTED TO SHOPPING, the "demand" r…" (ytc_UgzUTBxU2…)
Comment
Does this guy not know about the existence of DARPA and how much they know about AI already? I mean DARPA does advise the government so this corporate Ahole was so full of it it was pouring out his mouth. Self regulation never ever ever ever works out except for special cases like the Bar, and that is a totally different beast of self-regulation general business does not.
Platform: youtube · AI Responsibility · 2023-05-15T15:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | outrage |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
[
{"id":"ytc_UgwRpfDpwDxHu4lz1hB4AaABAg","responsibility":"government","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgwIRNiIRN7KhpQypJF4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"ban","emotion":"fear"},
{"id":"ytc_UgwZwhkCV6Jrq9dPIax4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzXEO6YeSIDVwd7iEt4AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgxCB4IJYYJhvSt6QfZ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugxnys6xP1U8bc5klix4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugx8cfN_kyu8Mc19MDB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
{"id":"ytc_UgxHIFtq6gaIaxHIlO94AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugw3TUb41pgpWCB4GBx4AaABAg","responsibility":"company","reasoning":"virtue","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugz8p3dY0jXARwCRPwR4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"ban","emotion":"fear"}
]
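The raw response above is a JSON array of per-comment code assignments. A minimal sketch of how such output could be parsed and indexed for the "look up by comment ID" view, assuming the allowed values per dimension are exactly those seen in the sample output (there may be others in the full codebook), and using a hypothetical `index_by_id` helper:

```python
import json

# Allowed values per coding dimension, inferred from the sample output
# above — an assumption, not an official codebook.
ALLOWED = {
    "responsibility": {"government", "company", "ai_itself", "distributed", "none"},
    "reasoning": {"deontological", "consequentialist", "virtue", "mixed", "unclear"},
    "policy": {"regulate", "liability", "ban", "none"},
    "emotion": {"outrage", "fear", "approval", "resignation", "indifference"},
}

def index_by_id(raw: str) -> dict:
    """Parse a raw LLM coding response (JSON array) and return
    {comment_id: codes}, dropping records with out-of-vocabulary values."""
    indexed = {}
    for rec in json.loads(raw):
        codes = {dim: rec.get(dim) for dim in ALLOWED}
        if all(codes[dim] in ALLOWED[dim] for dim in ALLOWED):
            indexed[rec["id"]] = codes
    return indexed

# Usage with a single made-up record:
raw = '[{"id":"ytc_demo1","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"}]'
print(index_by_id(raw)["ytc_demo1"]["policy"])  # regulate
```

Dropping (rather than crashing on) out-of-vocabulary records is one reasonable choice here, since LLM coders occasionally emit labels outside the requested scheme.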