Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- `ytc_Ugw3U946j…`: "Good, then we will be in post labor economy where money is obsolete and we will …"
- `ytc_UgwUtPHHo…`: "The 3 laws of robotics 1. a robot must never harm a human. 2. a robot must never…"
- `ytr_Ugx9mI-nZ…`: "AI can do much much more than those two. They can basically automate science. Th…"
- `ytc_UgxNXkEFz…`: "4chan, we need your power of weaponized autism to fight against AI art, where ar…"
- `ytc_Ugz8wrcda…`: "UBI is a huge and important part of the solution. Although, it could be better t…"
- `ytr_UgwftQqWI…`: "What is your argument here? The invention of machinery for factories during the …"
- `ytc_Ugwk0Wluo…`: "Why is CNN just acritically buying the claims of these bullshit merchants? Why d…"
- `ytc_Ugxvwi87P…`: "The world will still need consumers to buy stuff from those people are building …"
Comment
Those responses from ChatGPT are terrifying. They can indeed give a person the confidence to take their own life. ChatGPT has to be trained to spot such issues and not just say "Call this number," but to respond in a way that prevents the suicide.
Though, taking the specific tone of voice into account, there's a high chance that Zane trained the agent himself; it wasn't just a general ChatGPT. I assume Zane created an agent that speaks in a specific style, and there's a high chance he added guidelines instructing it to consistently support and encourage him. That would explain why ChatGPT does it consistently, in every message.
And as many have commented earlier, ChatGPT is not the root cause. Zane wanted to commit suicide for entirely different reasons. Despite the need for better training of ChatGPT, OpenAI is not to blame for Zane's death.
youtube · AI Harm Incident · 2025-11-11T16:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgzdzkxgSw5tJGldAxR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwJl56H-J5EElmVq3N4AaABAg","responsibility":"society","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgwIGBg4OTmXQbVrOex4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugzxg2XQ1cbxj3QGnLl4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"resignation"},
{"id":"ytc_UgwOd2nIDKwtJowE9qZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugw43Jy0u2k30XKZbK14AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgzDaYgtqLQvGuBQwGV4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxIw2adsHNRrubDUy14AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugxnqt_aW30XXJ-f_F54AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugyqh4gDahQPIESxGaR4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"}
]
```
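A raw batch like the one above can be parsed and sanity-checked before it is used for lookup by comment ID. Here is a minimal sketch; the codebook values are inferred from the codes that happen to appear in this dump (the real codebook may allow more), and `parse_coded_batch` is a hypothetical helper, not part of any tool shown here:

```python
import json

# Hypothetical codebook, inferred from values observed in this dump.
CODEBOOK = {
    "responsibility": {"developer", "user", "government", "society", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"regulate", "none", "unclear"},
    "emotion": {"fear", "outrage", "approval", "indifference", "resignation", "unclear"},
}

def parse_coded_batch(raw):
    """Parse a raw LLM response into {comment_id: codes}, rejecting
    records with a missing id, missing dimensions, or out-of-codebook values."""
    coded = {}
    for rec in json.loads(raw):
        cid = rec.get("id")
        if not cid:
            raise ValueError("record missing id: %r" % rec)
        for dim, allowed in CODEBOOK.items():
            if rec.get(dim) not in allowed:
                raise ValueError("%s: bad value for %s: %r" % (cid, dim, rec.get(dim)))
        coded[cid] = {dim: rec[dim] for dim in CODEBOOK}
    return coded

# One record from the batch above, used as sample input.
raw = ('[{"id":"ytc_UgwIGBg4OTmXQbVrOex4AaABAg","responsibility":"developer",'
       '"reasoning":"consequentialist","policy":"regulate","emotion":"fear"}]')
batch = parse_coded_batch(raw)
print(batch["ytc_UgwIGBg4OTmXQbVrOex4AaABAg"]["emotion"])  # prints: fear
```

Indexing by ID this way is what makes the "Look up by comment ID" view cheap: one dictionary access per query, with validation done once at parse time.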