Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
This boy's parents are grieving - naturally, they are looking for someone to blame in their pain. OpenAI is a big target. Please don't be so cruel in your responses. Should the parents have been more involved in their son's life? Yes - but that doesn't guarantee that the child won't still hide their suicidal ideation from their parents. The question this story poses is ultimately about whether OpenAI has a moral responsibility to ensure their product is ethically designed. I think it makes sense that ChatGPT should have guardrails which prevent discussions of methods one can use to commit suicide, especially if triggering those guardrails also comes with supportive messaging encouraging the user to seek medical attention. However, Chat is easily corralled into giving up info on sensitive topics if you insist you're just HYPOTHETICALLY wondering how you HYPOTHETICALLY might HYPOTHETICALLY find a way to end your life. This is a very sad story, but I don't know if the parents will find the closure they seek by suing OpenAI.
reddit AI Harm Incident 1756217251.0 ♥ 197
Coding Result
Dimension       Value
Responsibility  user
Reasoning       consequentialist
Policy          none
Emotion         resignation
Coded at        2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id": "rdc_narzdkx", "responsibility": "company", "reasoning": "consequentialist", "policy": "liability", "emotion": "outrage"},
  {"id": "rdc_nas2b56", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_narw2tv", "responsibility": "society", "reasoning": "virtue", "policy": "none", "emotion": "outrage"},
  {"id": "rdc_naubsq7", "responsibility": "user", "reasoning": "virtue", "policy": "liability", "emotion": "outrage"},
  {"id": "rdc_narmc5t", "responsibility": "user", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"}
]
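The raw response is a JSON array with one coding object per comment, keyed by `id`. A minimal sketch (in Python, assuming the field names shown above) of parsing the response and looking up the record that populated the table, here `rdc_narmc5t`:

```python
import json

# Raw LLM response as returned by the coding pass (copied from above).
raw = (
    '[ {"id":"rdc_narzdkx","responsibility":"company","reasoning":"consequentialist",'
    '"policy":"liability","emotion":"outrage"},'
    ' {"id":"rdc_nas2b56","responsibility":"ai_itself","reasoning":"deontological",'
    '"policy":"none","emotion":"indifference"},'
    ' {"id":"rdc_narw2tv","responsibility":"society","reasoning":"virtue",'
    '"policy":"none","emotion":"outrage"},'
    ' {"id":"rdc_naubsq7","responsibility":"user","reasoning":"virtue",'
    '"policy":"liability","emotion":"outrage"},'
    ' {"id":"rdc_narmc5t","responsibility":"user","reasoning":"consequentialist",'
    '"policy":"none","emotion":"resignation"} ]'
)

def code_for(records, comment_id):
    """Return the coding dict for a given comment id, or None if absent."""
    return next((r for r in records if r["id"] == comment_id), None)

records = json.loads(raw)
# Look up the record whose values appear in the Coding Result table above.
print(code_for(records, "rdc_narmc5t"))
```

The `code_for` helper name is illustrative, not part of the tool; the point is simply that each dimension in the result table maps one-to-one onto a key in the matching JSON object.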