Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Its really sad, but it sounds more like Adam was using ChatGPT like a ouija board -- where he was guiding everything and simply using ChatGPT as a trained avatar to feed his confirmation bias and to guide him where he was already set on going -- than actual/real council. I have a feeling he would have talked himself into ending his own life with or without ChatGPT... if he wasn't wanting to get (human) help.
YouTube · AI Harm Incident · 2025-08-26T16:1… · ♥ 62
Coding Result
Dimension       Value
Responsibility  user
Reasoning       consequentialist
Policy          none
Emotion         unclear
Coded at        2026-04-26T18:03:32.401335
Raw LLM Response
[
  {"id":"ytc_UgzVlPAZvA2CVmxCBbR4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzwN4gTSEpJt8b7R-R4AaABAg","responsibility":"society","reasoning":"unclear","policy":"unclear","emotion":"sadness"},
  {"id":"ytc_Ugzd89auABj7QcsMu_p4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"unclear"},
  {"id":"ytc_UgxMUw97iVO4MfHUHb54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgzqRsh9Z3kPlFsWCaV4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"sadness"}
]
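The raw response is a JSON array with one coding object per comment id. A minimal Python sketch for pulling out the coding of a single comment from such a response (the helper name `coding_for` is hypothetical; the sample data is copied from the log above, truncated to two entries):

```python
import json

# Sample raw LLM response, copied from the log above (first two entries only).
raw_response = (
    '[{"id":"ytc_UgzVlPAZvA2CVmxCBbR4AaABAg","responsibility":"user",'
    '"reasoning":"consequentialist","policy":"none","emotion":"indifference"},'
    '{"id":"ytc_UgzwN4gTSEpJt8b7R-R4AaABAg","responsibility":"society",'
    '"reasoning":"unclear","policy":"unclear","emotion":"sadness"}]'
)

def coding_for(raw: str, comment_id: str):
    """Parse the raw response and return the coding dict for one comment id.

    Returns None when the id is absent from the response.
    """
    for entry in json.loads(raw):
        if entry.get("id") == comment_id:
            return entry
    return None

coding = coding_for(raw_response, "ytc_UgzVlPAZvA2CVmxCBbR4AaABAg")
print(coding["responsibility"])  # -> user
```

Note that the response's `emotion` for this comment ("indifference") can differ from what a later pass stores in the coding table, so reading the raw array is the way to inspect exactly what the model emitted.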