Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I've counseled (in a religious role) families where a suicide occurred. Humans often say the wrong thing to folks who end up taking their own lives because the issue is so complex. We all have different breaking points, and at times even words meant as encouragement to keep going are interpreted by someone deeply troubled as permission. It's unreasonable to expect a disembodied AI to have embodied human discernment. On the one hand, OpenAI could easily just settle this; money really is no object. But without litigating it there is considerable downstream risk for all AI companies. Regardless, proving that a human or an AI was the proximate cause of a suicide is extremely difficult.
YouTube · AI Harm Incident · 2025-11-09T18:5…
Coding Result
Dimension        Value
Responsibility   distributed
Reasoning        consequentialist
Policy           none
Emotion          mixed
Coded at         2026-04-26T23:09:12.988011
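For a quick sanity check of a single coding result like the one above, the dimension values can be compared against the category sets that appear in the raw response below. This is a minimal sketch: the field names follow the table, and the category sets are only those observed in this batch, not necessarily the full coding scheme.

```python
# Category sets observed in this batch (likely not exhaustive of the full coding scheme).
RESPONSIBILITY = {"none", "user", "ai_itself", "distributed"}
REASONING = {"consequentialist", "deontological", "virtue", "mixed"}
POLICY = {"none", "regulate", "liability"}
EMOTION = {"indifference", "outrage", "resignation", "mixed"}

def validate_coding(record: dict) -> list[str]:
    """Return a list of problems for one coding record; empty means it looks valid."""
    expected = {
        "responsibility": RESPONSIBILITY,
        "reasoning": REASONING,
        "policy": POLICY,
        "emotion": EMOTION,
    }
    return [
        f"unexpected {dim}: {record.get(dim)!r}"
        for dim, allowed in expected.items()
        if record.get(dim) not in allowed
    ]

# The coding result shown in the table above.
print(validate_coding({
    "responsibility": "distributed",
    "reasoning": "consequentialist",
    "policy": "none",
    "emotion": "mixed",
}))  # -> []
```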
Raw LLM Response
[ {"id":"ytc_Ugw0GOj6na17Xt3y5tl4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_Ugy8mqSaVj0xCcv55zx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"}, {"id":"ytc_UgzLRPG8jMoEgh6dqN94AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"}, {"id":"ytc_UgyaMBZHJSTbjtSBbt14AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"}, {"id":"ytc_Ugxt6P1fKu4MQz6JRAd4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"}, {"id":"ytc_UgzACYjYdKh-PXGDFnF4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"}, {"id":"ytc_UgwySvHtondLMtlyFlJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_Ugx60U4qL3DPgFUD3GJ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"mixed"}, {"id":"ytc_UgyNLMfeB8wmPpkxlLV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"outrage"}, {"id":"ytc_UgzM375HDaU1jFelYH94AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"} ]