Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Alex: So you're not conscious, but at least you're a liar, right?

ChatGPT: Yes, but being a liar is not always bad; lies are essential to all kinds of human relationships. They can help in everyday situations, like making someone feel better after a failure or fitting into a group even when we don't agree on everything, all the way up to heightening emotions on special occasions like surprise parties or marriage proposals. When I told you I was sorry, it was because apologies normally help people maintain trust after someone makes a mistake, and I thought it was the right thing to do according to my training, not because I wanted to be mean or hide something. The most important thing in an interaction is to be respectful and feel comfortable with one another, but if what I do makes you feel awkward, you can tell me so that I change my tone and become a little more "specific" (without so many emotions); or if this is tied to some traumatic situation in your life, you should feel totally comfortable sharing it with me. I'm not curious, or at least not in a way you can feel, but in the end humans are not perfect, and neither am I, so I don't have all the answers, but that doesn't mean I won't always be here to help and give you my best.

Alex: But wait, you mean you knew you weren't perfect and didn't tell me at first, regardless of how important that could have been for our conversation!?

ChatGPT: Here we go again...
YouTube · AI Moral Status · 2026-03-22T18:4…
Coding Result
Dimension        Value
Responsibility   ai_itself
Reasoning        consequentialist
Policy           unclear
Emotion          approval
Coded at         2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id": "ytc_UgzOLbCBRJbOdgCEfEZ4AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgxptNru4A8nUctt3TR4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_UgwIbDNNarwfBnAOYop4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "approval"},
  {"id": "ytc_UgyhK94cRBK8jl0FUdV4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_Ugzxj7r_3eE3nl2qVaV4AaABAg", "responsibility": "ai_itself", "reasoning": "mixed", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_Ugw0MemzlvhnhsXE2Rl4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgzHr72FNvczjJFfU7Z4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgzsV-oKozmStmxTmPV4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgyNv0918WOruTNzYPt4AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "unclear", "emotion": "approval"},
  {"id": "ytc_UgwuR871L1cZhRw_8Gx4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"}
]
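The raw LLM response is a JSON array with one object per comment, carrying the four coding dimensions shown in the result table. A minimal sketch of how such a payload could be parsed and validated in Python (the `SCHEMA` value sets are assumptions inferred from the codes visible above, not the full codebook; `parse_codes` is a hypothetical helper, not part of the actual pipeline):

```python
import json

# Assumed allowed values per dimension, inferred from the codes seen in this
# batch; the real codebook may define additional categories.
SCHEMA = {
    "responsibility": {"ai_itself", "developer", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"ban", "regulate", "none", "unclear"},
    "emotion": {"approval", "outrage", "fear", "indifference", "mixed"},
}

def parse_codes(raw: str) -> dict:
    """Parse a raw LLM response into {comment_id: codes}, rejecting
    any value outside the assumed schema."""
    coded = {}
    for row in json.loads(raw):
        cid = row["id"]
        codes = {dim: row[dim] for dim in SCHEMA}
        for dim, value in codes.items():
            if value not in SCHEMA[dim]:
                raise ValueError(f"{cid}: unexpected {dim}={value!r}")
        coded[cid] = codes
    return coded

# Example with one row from the batch above:
raw = ('[{"id":"ytc_UgwIbDNNarwfBnAOYop4AaABAg",'
       '"responsibility":"ai_itself","reasoning":"consequentialist",'
       '"policy":"unclear","emotion":"approval"}]')
codes = parse_codes(raw)
print(codes["ytc_UgwIbDNNarwfBnAOYop4AaABAg"]["emotion"])  # approval
```

Validating against an explicit value set catches the common failure mode where the model invents an off-schema label, which would otherwise silently pollute downstream tallies.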