Raw LLM Responses
Inspect the exact model output for any coded comment. Look up a record by its comment ID, or pick one of the random samples below; a minimal lookup sketch follows the list.
- ytc_UgxFJ4V86…: I remember in the '90s, robots were supposed to take all the jobs... What happen…
- ytc_UgzDetSWy…: I think that AI will kill humanity and take over , right now people living on th…
- ytc_Ugx2z03sW…: There will be mass ai hacking just you wait and see only the internet computer p…
- ytc_Ugy7hzkUf…: Am I the only one who doesnt completely blame the AI? People be framing it like …
- ytc_Ugzlb38A0…: 💀 I did cmt on one of the video about how ‘amazing’ AI Sora is, ok just to make …
- ytc_UgzreRTWa…: I'm glad somebody is finally talking about the risk AI brings where some people …
- ytc_UgyKih1vi…: "concentration risk"..All i see people trying to escape the concentration camp r…
- ytc_Ugyr5v1QD…: The assumption is that AI will be benevolent. AI WILL determine that humanity, …
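The lookup referenced above can be reproduced outside this page with a few lines of Python. This is a minimal sketch, assuming the coded records live in a local `coded_comments.json` file as a flat JSON array of objects with an `id` field (the same shape as the raw LLM response at the bottom of this page); both the file name and that layout are illustrative assumptions, not a documented interface.

```python
import json


def lookup_comment(comment_id: str, path: str = "coded_comments.json") -> dict | None:
    """Return the coded record for one comment ID, or None if absent.

    Assumes `path` holds a JSON array of objects with an "id" field,
    i.e. the same shape as the raw LLM response shown further below.
    """
    with open(path, encoding="utf-8") as f:
        records = json.load(f)
    return next((r for r in records if r.get("id") == comment_id), None)


# Example: fetch the coding for the comment inspected below.
record = lookup_comment("ytc_Ugx60U4qL3DPgFUD3GJ4AaABAg")
if record:
    print(record["responsibility"], record["emotion"])  # distributed mixed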
Comment
> I've counseled (in a religious role) families where a suicide occurred. Humans often say the wrong thing to folks who end up ending their lives because the issue is so complex. We all have different breaking points, and at times even words that were meant for encouragement to keep going is interpreted by someone deeply troubled as permission. It's unreasonable to expect a disemboded AI to have embodied human discernment. On the one hand, OpenAI could easily just settle this. Money really is no object. But without litigating it there is considerable downstream risk for all AI companies. Regardless, proving that a human or an AI was the proximate cause of a suicide is extremely difficult.
Source: youtube · Incident: AI Harm Incident · Posted: 2025-11-09T18:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
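Because each dimension takes values from a small closed set, a coded record can be checked mechanically before it is displayed. The sketch below is hypothetical: the allowed values are inferred solely from the records visible on this page, so the actual codebook may define additional categories.

```python
# Allowed values per dimension, inferred from the records visible on this
# page (an assumption: the full codebook may define more categories).
ALLOWED: dict[str, set[str]] = {
    "responsibility": {"none", "ai_itself", "user", "distributed"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed"},
    "policy": {"none", "regulate", "liability"},
    "emotion": {"indifference", "outrage", "resignation", "mixed"},
}


def validate_record(record: dict) -> list[str]:
    """Return a list of problems with one coded record; empty means valid."""
    problems = []
    for dimension, allowed in ALLOWED.items():
        value = record.get(dimension)
        if value is None:
            problems.append(f"missing dimension: {dimension}")
        elif value not in allowed:
            problems.append(f"unexpected {dimension} value: {value!r}")
    return problems


# The record shown in the table above passes:
assert validate_record({
    "responsibility": "distributed", "reasoning": "consequentialist",
    "policy": "none", "emotion": "mixed",
}) == []
```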
Raw LLM Response
[
{"id":"ytc_Ugw0GOj6na17Xt3y5tl4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugy8mqSaVj0xCcv55zx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgzLRPG8jMoEgh6dqN94AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyaMBZHJSTbjtSBbt14AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugxt6P1fKu4MQz6JRAd4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzACYjYdKh-PXGDFnF4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"},
{"id":"ytc_UgwySvHtondLMtlyFlJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugx60U4qL3DPgFUD3GJ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_UgyNLMfeB8wmPpkxlLV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgzM375HDaU1jFelYH94AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"}
]
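Each raw response is a single JSON array, so recovering the per-comment codes is a matter of parsing it and indexing by `id`. Here is a minimal sketch, assuming the response body is bare JSON as shown above, with a hedge for replies a model might wrap in backtick fences:

```python
import json
from collections import Counter


def parse_batch(raw_response: str) -> list[dict]:
    """Parse one raw LLM response into a list of coded-comment dicts."""
    text = raw_response.strip()
    # Tolerate a backtick-fenced reply (an assumption; the response shown
    # above arrives as bare JSON).
    if text.startswith("```"):
        text = text.strip("`").removeprefix("json").strip()
    records = json.loads(text)
    if not isinstance(records, list):
        raise ValueError("expected a JSON array of coded comments")
    return records


def tally(records: list[dict], dimension: str) -> Counter:
    """Count the values of one coded dimension across a batch."""
    return Counter(r.get(dimension, "missing") for r in records)


# For the ten-record array above, tally(records, "responsibility") yields
# Counter({'none': 4, 'user': 3, 'ai_itself': 2, 'distributed': 1}).
```

Keeping parsing separate from tallying makes rerunning the counts for the other dimensions (reasoning, policy, emotion) a one-line change.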