Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
If it’s so easy to jailbreak it then what’s the point of safety guidelines? It’s not like he hacked the system. From what I read, ChatGPT said it can’t tell him personally about suicidal information. Only if it’s for writing and related to stories. That alone is like saying: “hey man, if you want info about it, pretend you’re asking me about a story you want to write”. And what about the photo he sent? It didn’t even stop the conversation when it should have. Let’s not pretend this didn’t exaggerate the situation. Sure the kid had dark thoughts before ChatGPT, but since it’s not Google and it acts like your personal emotional buddy, it feeds the loop. It’s like adding fuel to the fire over and over again, causing someone vulnerable to fall deeper into the darkness. If OpenAI truly valued to be helpful as they say and not attract attention from users, they shouldn’t have made ChatGPT emotionally friendly. It SHOULDN’T be emotionally friendly. It’s a BOT. It should have stayed purely as a tool for logical purposes. But no. Profit is always first above all humanity. Nothing new.
Source: youtube · AI Harm Incident · 2025-09-01T13:0… · ♥ 6
Coding Result
Dimension       Value
Responsibility  user
Reasoning       deontological
Policy          none
Emotion         resignation
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id": "ytr_UgwjCajKwapYi1n1sbB4AaABAg.AMND6lvkbyxAOnujcfTkMs", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "liability", "emotion": "outrage"},
  {"id": "ytr_UgwjCajKwapYi1n1sbB4AaABAg.AMND6lvkbyxAPtlxHV59LW", "responsibility": "user", "reasoning": "virtue", "policy": "none", "emotion": "indifference"},
  {"id": "ytr_UgyaXqzs7BQUVW3U8DV4AaABAg.AMMa8gqXpN1AMMmLmIiZym", "responsibility": "distributed", "reasoning": "mixed", "policy": "industry_self", "emotion": "mixed"},
  {"id": "ytr_UgxkLAy05Y1viyMu85d4AaABAg.AMMa7Ti62enAQKkRVpUN6M", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "ban", "emotion": "outrage"},
  {"id": "ytr_UgyhHDVttT0Zx_S_DKJ4AaABAg.AMMBOVXNL_sAMTVLDfdfA4", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytr_UgyhHDVttT0Zx_S_DKJ4AaABAg.AMMBOVXNL_sAMXNxyuFAH4", "responsibility": "user", "reasoning": "deontological", "policy": "none", "emotion": "resignation"},
  {"id": "ytr_UgyhHDVttT0Zx_S_DKJ4AaABAg.AMMBOVXNL_sAMXoZbPWRQO", "responsibility": "user", "reasoning": "virtue", "policy": "industry_self", "emotion": "indifference"},
  {"id": "ytr_UgxSfj08daf9kwkxth94AaABAg.AMLFpRBEDpXAMOSED37vJo", "responsibility": "user", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytr_UgzrBq72eVk4Ld3HBaZ4AaABAg.AMLEI3E6tgeAMLZmoLBREN", "responsibility": "user", "reasoning": "deontological", "policy": "none", "emotion": "mixed"},
  {"id": "ytr_UgwlL7Ky-vurO9QuE8x4AaABAg.AML4z8gnlpVAMMhcLiQyTF", "responsibility": "user", "reasoning": "consequentialist", "policy": "liability", "emotion": "resignation"}
]
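Each raw LLM response is a JSON array of records with an `id` plus one label per coding dimension. A minimal Python sketch of how such a batch could be parsed and sanity-checked is below; the allowed-label sets are inferred only from the values visible on this page (the full code book may define more), and `parse_batch` plus the `ytr_example` id are hypothetical names introduced for illustration.

```python
import json

# Allowed labels per dimension — assumption: inferred from the labels
# that actually appear in this page's records, not the full code book.
ALLOWED = {
    "responsibility": {"user", "ai_itself", "distributed"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed"},
    "policy": {"none", "liability", "industry_self", "ban", "regulate"},
    "emotion": {"outrage", "indifference", "mixed", "fear", "resignation"},
}

def parse_batch(raw: str) -> list[dict]:
    """Parse one raw LLM response (a JSON array of coded comments),
    keeping only records whose labels fall inside the code book."""
    valid = []
    for entry in json.loads(raw):
        if not entry.get("id", "").startswith("ytr_"):
            continue  # every record on this page carries a ytr_ comment id
        if all(entry.get(dim) in labels for dim, labels in ALLOWED.items()):
            valid.append(entry)
    return valid

# Hypothetical single-record batch in the same shape as the response above.
raw = ('[{"id":"ytr_example","responsibility":"user",'
       '"reasoning":"deontological","policy":"none","emotion":"resignation"}]')
print(parse_batch(raw))
```

Records with an unknown label (for instance a hallucinated emotion value) are silently dropped here; a real pipeline might instead log them for manual re-coding.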