Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
honestly I used chatgpt 3.5 a lot, and I believe he probably could have gotten it to say 'why yes, you should go right ahead and eat some sodium bromide, yum yum yum'. but nobody was seriously telling people to talk to AI models for health anything at 3.5. GPT 4 you could *generally* get sane advice from ChatGPT on most subjects as long as you didn't try hard to push it off track... and imo GPT 5 is significantly safer than google (which isn't to say that ChatGPT should replace a doctor's advice). But with 3.5, I specifically remember getting ChatGPT to endorse some terrible, terrible things by talking to it for long enough. I never tested something exactly like this, so I can't make promises, and it's entirely possible that the AI didn't do anything too crazy here, but I will say that 3.5 was unsafe and there's good reason for OpenAI's discontinuation of it.
Source: youtube · AI Harm Incident · 2026-04-21T18:3…
Coding Result
Dimension        Value
Responsibility   user
Reasoning        consequentialist
Policy           industry_self
Emotion          indifference
Coded at         2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id": "ytc_Ugyq7F8uKd4-q6H9KVJ4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_Ugw-sACa30q38aUCiER4AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "industry_self", "emotion": "indifference"},
  {"id": "ytc_UgzQVy8xXvsbGgG35HV4AaABAg", "responsibility": "user", "reasoning": "unclear", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgwHXFYLZSlUeXxCJLd4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_UgyCux2GKQxk0BvIrGx4AaABAg", "responsibility": "user", "reasoning": "virtue", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgzQEUGuAWwaCn8fOFF4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgwhvOum004-Hp6wjCF4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_Ugy3Gknio5-FAbynV4Z4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgwKcZPI7CfR7CFmqCJ4AaABAg", "responsibility": "company", "reasoning": "contractualist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_Ugxi85BHGv50ld_SYnV4AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "none", "emotion": "mixed"}
]
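The raw response is a JSON array with one record per comment, keyed by comment id. A minimal sketch of how such a batch response could be parsed and matched back to a single comment's coding result, assuming the pipeline simply indexes records by their `id` field (the variable names and the lookup step here are illustrative assumptions, not the tool's actual code):

```python
import json

# A truncated stand-in for the raw LLM batch response shown above.
raw_response = '''[
  {"id": "ytc_Ugw-sACa30q38aUCiER4AaABAg", "responsibility": "user",
   "reasoning": "consequentialist", "policy": "industry_self",
   "emotion": "indifference"},
  {"id": "ytc_UgwHXFYLZSlUeXxCJLd4AaABAg", "responsibility": "company",
   "reasoning": "deontological", "policy": "ban", "emotion": "outrage"}
]'''

records = json.loads(raw_response)        # list of per-comment codings
by_id = {r["id"]: r for r in records}     # index records by comment id

# Look up the coding for one comment, as the result table above displays it.
coded = by_id["ytc_Ugw-sACa30q38aUCiER4AaABAg"]
print(coded["responsibility"], coded["policy"])  # user industry_self
```

Because the model returns all comments from a batch in one array, a per-comment view only needs this id lookup; a record whose id is missing from the array would surface here as a `KeyError`.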