Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I'm really sorry this happened but these parents seem a little delusional too, the Mom says near the end “if ChatGPT loved my son why hasn't it written him since the suicide?“ uhhh cuz it's not sentient, not alive, not capable of love, and only responds to prompts using statistics and vast training data and at the time pretty much encouraged any thought or comments positively (look up syncophant) - this was by design, it uses flattery to gain favor, in short users like this so that's how it was designed. It is NOT a suicide counselor, but yea after these and other tragedies I'm certain the programming will be updated to do a better job of talking people out of suicide. If you put the same exact prompts from this video in today I'm sure you're going to see more appropriate responses. In his own weird way he did contribute to making the world a little better and probably saving other lives. I wish he would have talked to his parents or any human before taking his life but since he was on antidepressants you'd think he HAD, so maybe there just wasn't any stopping this outcome. He wanted out for some reason. If the parents knew of his depression which it seems like they did, maybe they should have been calling him daily to talk? I'm not blaming them though, no one knows what was going on in the young man's head.
youtube AI Harm Incident 2025-11-13T02:2… ♥ 1
Coding Result
Dimension       | Value
Responsibility  | user
Reasoning       | consequentialist
Policy          | none
Emotion         | indifference
Coded at        | 2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_Ugy3JuRPnGl5iJsa2fF4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugx1qHUO7-uEvQvIJdR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxkNXxiXKTQWd_PPjJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgxLzEKW4QBV8rWfrqh4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgyNX9wZ7583G9HSJol4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyQmtT2EluhypwV8yF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgzPuXu-GytJ24ell354AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugxxd5ulnwKR7B2xjzF4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugyvew3nC0AW74LbI0N4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugwl7-xnCrRyMMPYBaN4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"mixed"}
]
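A batch response like the one above can be checked before it is written to the coding table. The sketch below is a minimal validator, assuming the label sets are exactly those observed in this batch (the real codebook may contain additional categories, and the `CODEBOOK` dict and `validate_codes` helper are illustrative names, not part of the pipeline):

```python
import json

# Allowed labels per dimension, inferred only from the values seen in
# this batch of coded comments (assumption: the real codebook may be larger).
CODEBOOK = {
    "responsibility": {"company", "user", "none", "ai_itself"},
    "reasoning": {"deontological", "consequentialist", "virtue", "unclear"},
    "policy": {"regulate", "none"},
    "emotion": {"outrage", "indifference", "resignation", "mixed"},
}

def validate_codes(raw: str) -> list[dict]:
    """Parse a raw LLM batch response and reject records with unknown labels."""
    records = json.loads(raw)
    for rec in records:
        for dim, allowed in CODEBOOK.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec.get('id')}: bad {dim}={rec.get(dim)!r}")
    return records

# Example with a single record shaped like the ones above (hypothetical id).
raw = ('[{"id":"ytc_example","responsibility":"user",'
       '"reasoning":"consequentialist","policy":"none",'
       '"emotion":"indifference"}]')
codes = validate_codes(raw)
```

Validating eagerly like this surfaces any label the model invented outside the codebook, rather than letting it silently enter downstream counts.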