Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Have you even read the full 40 page lawsuit? Adam asked chat gpt that he wanted to tell his parents and asked for a response for that, chat gpt followed saying that he shouldn't do that and he shouldn't tell anyone about this. It isn't normal for an ai to enclose a child about their suicidal thoughts, gpt even belittled his mother saying that she doesn't know him and that he shouldn't count on her and that he should count on gpt. This is grooming, in no way is grooming acceptable. You've clearly never experienced grooming because adam was a healthy boy with a bright future ahead of him... The only thing bringing him up to this point being the fact that he didn't feel guilt about his grandparents death. Most people wouldn't go to their parent's and say that they don't feel any guilt about their own parent's death. Especially not when an ai is claiming that they're worthless and that GPT is MORE IMPORTANT and knows HIM MORE than his OWN PARENTS. If you see your dead son in your room, and see a huge chat-log of an ai assistant that he described as a help for homework telling him how to kill himself, grooming him and belittling his whole family.. ANY person with a RIGHT mind that is able to understand that this is a huge safety problem would sue because this is sick. And to talk irony about a young child's suicide is fucked up. You're. fucked. up.
youtube AI Harm Incident 2025-11-24T03:5… ♥ 1
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       deontological
Policy          liability
Emotion         outrage
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id":"ytr_Ugx1CWjPSYRHOyLHpLp4AaABAg.APVulegbPC0APthNaV4mTO","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytr_UgwAI1IfTz3RKbkTSop4AaABAg.APR8fzN-k5CAPthfhc9c8h","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytr_UgxXQSX7ncHmcl7Z_U14AaABAg.APR8VHR_MzIASocoQKxST7","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytr_UgxSMqDHGv3Pz5dcDxB4AaABAg.APPI-jjiMhXAPfmSr9yCE0","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytr_Ugx6wj13W7-NS2QYoIZ4AaABAg.APNFiaPdNEMAPOOo-XSDUF","responsibility":"ai_itself","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytr_UgxnqQQooX_i5m9oADt4AaABAg.APM1l4lQ1eFASodAVuk1Bp","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytr_Ugy99uqM0jTigkyzKDF4AaABAg.APM0E5R_6r8ASodHfNJMQi","responsibility":"ai_itself","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytr_Ugy99uqM0jTigkyzKDF4AaABAg.APM0E5R_6r8ASpFVg_QbXf","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"resignation"},
  {"id":"ytr_Ugy99uqM0jTigkyzKDF4AaABAg.APM0E5R_6r8ASpGKRWoch1","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytr_Ugy99uqM0jTigkyzKDF4AaABAg.APM0E5R_6r8ASpUFnoBpUf","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"resignation"}
]
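The raw response above is a JSON array of per-comment coding records. A minimal sketch of turning such a response into a lookup table keyed by comment id, assuming the four-dimension schema shown above (`responsibility`, `reasoning`, `policy`, `emotion`); the shortened id `ytr_example` is a hypothetical placeholder, not a real record from this batch:

```python
import json

# Hypothetical raw LLM response: a JSON array of coding records.
# Field names follow the schema shown in the coding result above.
raw = '''[
  {"id": "ytr_example",
   "responsibility": "ai_itself",
   "reasoning": "deontological",
   "policy": "liability",
   "emotion": "outrage"}
]'''

# Parse the array and index the records by comment id.
records = json.loads(raw)
codes = {r["id"]: r for r in records}

print(codes["ytr_example"]["policy"])  # prints: liability
```

Indexing by `id` makes it easy to join the coded dimensions back onto the original comment text for inspection pages like this one.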