Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "We’ve been living with risk for decades. The nuclear bomb and moronic leaders. I…" — ytc_UgyzKpcfA…
- "Me: Either has a very interactive role play OR annoying the everloving hell out …" — ytc_UgyLDn-jN…
- "what if the human race was so advanced with the help of Ai and then Ai took over…" — ytc_UgyPz8SJv…
- "This is totally fake. I have tried this with three different Ai and not one has …" — ytc_UgwL-vGL4…
- "> We might also be seeing a very strong CAD:US exchange rate / Would love to …" — rdc_fn5lhm5
- "My question for the AI bros is this: Why would I bother with your 'art' or 'writ…" — ytc_UgxIThW4M…
- "1) This is not the end-all be-all video on the ethicality of AI. It's one side t…" — ytc_Ugw6_kKDH…
- "I was really hoping this was going to be debunked. People ask why I don’t trust …" — ytc_UgyYATCo3…
Comment
This boy's parents are grieving - naturally, they are looking for someone to blame in their pain. OpenAI is a big target. Please don't be so cruel in your responses.
Should the parents have been more involved in their son's life? Yes - but that doesn't guarantee that the child won't still hide their suicidal ideation from their parents. The question this story poses is ultimately about whether OpenAI has a moral responsibility to ensure their product is ethically designed.
I think it makes sense that ChatGPT should have guardrails which prevent discussions of methods one can use to commit suicide, especially if triggering those guardrails also comes with supportive messaging encouraging the user to seek medical attention. However, Chat is easily corralled into giving up info on sensitive topics if you insist you're just HYPOTHETICALLY wondering how you HYPOTHETICALLY might HYPOTHETICALLY find a way to end your life.
This is a very sad story, but I don't know if the parents will find the closure they seek by suing OpenAI.
Platform: reddit | Topic: AI Harm Incident | Timestamp: 1756217251.0 (Unix epoch) | ♥ 197
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | resignation |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
```json
[
  {"id":"rdc_narzdkx","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
  {"id":"rdc_nas2b56","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"rdc_narw2tv","responsibility":"society","reasoning":"virtue","policy":"none","emotion":"outrage"},
  {"id":"rdc_naubsq7","responsibility":"user","reasoning":"virtue","policy":"liability","emotion":"outrage"},
  {"id":"rdc_narmc5t","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"resignation"}
]
```
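Because the raw model output is a plain JSON array of coded rows, the "look up by comment ID" view can be reproduced in a few lines. A minimal sketch, assuming the batch response shown above; the `lookup_by_id` helper name is hypothetical, while the field names (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) are taken directly from the response:

```python
import json

# Raw model output as shown above: a JSON array, one coded row per comment.
raw_response = """[
  {"id":"rdc_narzdkx","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
  {"id":"rdc_nas2b56","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"rdc_narw2tv","responsibility":"society","reasoning":"virtue","policy":"none","emotion":"outrage"},
  {"id":"rdc_naubsq7","responsibility":"user","reasoning":"virtue","policy":"liability","emotion":"outrage"},
  {"id":"rdc_narmc5t","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"resignation"}
]"""

def lookup_by_id(raw: str, comment_id: str):
    """Parse the batch response and return the coded row for one comment ID,
    or None if that ID is not present in the batch."""
    rows = json.loads(raw)
    return next((row for row in rows if row.get("id") == comment_id), None)

row = lookup_by_id(raw_response, "rdc_narmc5t")
# The matching row carries the four coded dimensions shown in the table above.
print(row["responsibility"], row["reasoning"], row["policy"], row["emotion"])
# → user consequentialist none resignation
```

A missing ID simply returns `None`, which is the behavior a lookup widget needs to render a "not found" state rather than raising.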