Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
ChatGPT needs to have safeguards, stop guards, something to combat suicide. Something to stop or at least talk them down from suicide. It sounds more like a suicide machine instead of help. ALL AI chatbots need to have suicide prevention built in. This is just sad and yes, all the tech bros need to be held accountable for this BS!
youtube AI Harm Incident 2025-11-08T00:0…
Coding Result
Dimension        Value
Responsibility   developer
Reasoning        consequentialist
Policy           regulate
Emotion          outrage
Coded at         2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id": "ytc_UgyU1qAcZLt9XIEoyTR4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgzJpNpSQvJheGPl1dd4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgybTxE74saEuWog7mJ4AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_Ugzv7yXPLf4Scbi1nLd4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgwA4rmXetM1fVgyMnN4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "liability", "emotion": "mixed"},
  {"id": "ytc_UgyEGfz1ZYgkA1z3Tfh4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgxKvQ5hwnOpYrkURj14AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgwS7G_IXTvrJLKgzhl4AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugw2NKP57F5KAjyGJ9d4AaABAg", "responsibility": "none", "reasoning": "virtue", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgwaaN4aujVV9dLjZ1h4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "ban", "emotion": "fear"}
]
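Each record in the raw batch response pairs a comment id with values for the four coded dimensions. A minimal sketch of how such a response could be parsed and sanity-checked before use is shown below; the allowed value sets here are assumptions inferred only from this sample batch, and the real codebook may define additional categories.

```python
import json

# Assumed value sets, inferred from the sample batch above;
# the actual codebook may allow more categories per dimension.
SCHEMA = {
    "responsibility": {"none", "developer", "user", "ai_itself", "distributed", "company"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed"},
    "policy": {"none", "liability", "regulate", "ban"},
    "emotion": {"outrage", "fear", "resignation", "indifference", "mixed"},
}

def validate_batch(raw: str) -> list[dict]:
    """Parse a raw LLM batch response, keeping only well-formed records.

    A record is kept when it has an "id" and every dimension holds one
    of the values listed in SCHEMA.
    """
    records = json.loads(raw)
    valid = []
    for rec in records:
        if "id" not in rec:
            continue  # unidentifiable record: cannot be joined back to a comment
        if all(rec.get(dim) in allowed for dim, allowed in SCHEMA.items()):
            valid.append(rec)
    return valid
```

A record with an out-of-vocabulary value (for example, a hallucinated emotion label) is silently dropped here; a production pipeline might instead log it for re-coding.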