Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
First of all, my sincerest condolences to Zane's family. I don't know Zane's whole story and I'm not going to pretend as an outsider that I fully know everything going on in his life. But I do know this. The news likes to "curate, sensationalize, and pull things out of context." Zane was obviously not doing well. He needed help. He didn't get it. That is not his fault. But it is not AI's fault either. It is easier, however, to use ChatGPT as a scapegoat for our own failings. That way we do not have to take responsibility. But let me say one thing as someone who has written about AI extensively. AI / ChatGPT does not by default simply "encourage suicide" without being prompted in a specific manner. You are not being told the whole story here. But this news "report" comes with an agenda. If anyone is interested, this is an older article I wrote back in May 2025: "AI didn't validate my delusion. It created its own." It is based on prompt testing and a critical assessment of how ChatGPT can be primed to validate delusional thinking and what that means. And no, it does not happen by default. And it does not make "AI evil," as the parents in the video claim. There was a lot more going on here. What exactly? I don't know, since I am not part of the family. But you're not getting the whole story. You're getting "rage bait." Again, my sympathy and condolences. Zane deserved better.
youtube AI Harm Incident 2025-11-10T16:5…
Coding Result
Dimension       Value
Responsibility  none
Reasoning       virtue
Policy          none
Emotion         resignation
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgzyRopMBMghCa4dgqB4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgwluXfT1f6CXr0nX_F4AaABAg", "responsibility": "none", "reasoning": "virtue", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_Ugx0p0KQT45Yjz1qQGp4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_Ugw-d5YIZhHeJmtLraV4AaABAg", "responsibility": "government", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgzUU2oZRXNGeLlXDY14AaABAg", "responsibility": "distributed", "reasoning": "mixed", "policy": "regulate", "emotion": "mixed"},
  {"id": "ytc_UgxqKUbc1_spemQpe8p4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "regulate", "emotion": "approval"},
  {"id": "ytc_Ugz8msgUr1LkfWfLDQJ4AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgwK_nC8wCUR5uwgyF54AaABAg", "responsibility": "distributed", "reasoning": "virtue", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_UgzJtrPNJV080zAHGcZ4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_UgxR7Ntp0ZIbghPB5O14AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "industry_self", "emotion": "indifference"}
]
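To cross-check the coding-result table against the raw response, the array can be parsed and indexed by comment id. A minimal sketch, assuming the raw response is valid JSON (only two of the ten records are reproduced here for brevity):

```python
import json

# Excerpt of the raw LLM response above (two of the ten records).
raw = '''[
 {"id":"ytc_UgzyRopMBMghCa4dgqB4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
 {"id":"ytc_UgwluXfT1f6CXr0nX_F4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"resignation"}
]'''

# Index the records by comment id so a single coded comment can be looked up.
by_id = {record["id"]: record for record in json.loads(raw)}

# The comment shown in this section carries id ytc_UgwluXfT1f6CXr0nX_F4AaABAg,
# so its record should match the table: virtue / resignation.
coded = by_id["ytc_UgwluXfT1f6CXr0nX_F4AaABAg"]
print(coded["reasoning"], coded["emotion"])  # virtue resignation
```

In practice the raw output may also contain surrounding prose or code fences, in which case the JSON array would need to be extracted before `json.loads` is called.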