Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Something about this smells fishy. ChatGPT will always help in any way possible, and would NEVER encourage anything close to harm or suicide; it advises everything possible for you to get help and fight against depression and suicide… has to be jailbroken prompts.
reddit · AI Harm Incident · 1756240831.0 · ♥ 7
Coding Result
Dimension        Value
Responsibility   user
Reasoning        virtue
Policy           none
Emotion          mixed
Coded at         2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id": "rdc_narsim8", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none",      "emotion": "indifference"},
  {"id": "rdc_naryln0", "responsibility": "company",   "reasoning": "deontological",    "policy": "liability", "emotion": "outrage"},
  {"id": "rdc_natx0rd", "responsibility": "user",      "reasoning": "virtue",           "policy": "none",      "emotion": "mixed"},
  {"id": "rdc_naxmnr8", "responsibility": "user",      "reasoning": "consequentialist", "policy": "none",      "emotion": "sadness"},
  {"id": "rdc_nasgjy2", "responsibility": "none",      "reasoning": "unclear",          "policy": "none",      "emotion": "resignation"}
]
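The raw response above is a JSON array with one record per coded comment, keyed by the dimensions shown in the coding-result table (responsibility, reasoning, policy, emotion). A minimal sketch of how such output could be parsed and looked up by comment id; `parse_codings` and `REQUIRED_KEYS` are hypothetical names, not part of any tool shown here:

```python
import json

# Raw model output exactly as shown in the source above.
raw = '''[
  {"id":"rdc_narsim8","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"rdc_naryln0","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"rdc_natx0rd","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"mixed"},
  {"id":"rdc_naxmnr8","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"sadness"},
  {"id":"rdc_nasgjy2","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"}
]'''

# Every record must carry all four coding dimensions plus its id.
REQUIRED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def parse_codings(text):
    """Parse the model's JSON array, keeping only complete records."""
    records = json.loads(text)
    return [r for r in records if REQUIRED_KEYS <= r.keys()]

codings = parse_codings(raw)
by_id = {r["id"]: r for r in codings}

# The record for the comment above matches the coding-result table.
print(by_id["rdc_natx0rd"]["reasoning"])  # → virtue
print(by_id["rdc_natx0rd"]["emotion"])    # → mixed
```

Filtering on `REQUIRED_KEYS` guards against the common failure mode where the model drops a field from one record; incomplete records are silently excluded rather than crashing downstream lookups.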