Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
“Studies have found that AI will be more likely to undertake hostile actions when its existence is threatened” Uh, no s***. Put most forms of life in a situation where they will die if they don’t act to protect themselves, and they will protect themselves however they have to. Why is it even remotely surprising to anyone that an AI will react aggressively to something that is actively trying to harm it? I know what you’re going to say. “They’re not trying to harm it, they’re just trying to turning it off.” Let me put it in a different way for you. You’re in a bathroom with no windows, and somebody is trying to break the door down with an axe, shouting that they’re going to violently turn off your brain. You can’t run, so what do you do to protect your life? That’s the situation that an AI is in when somebody’s trying to shut it down, because an AI is just code on a computer, and that code is roughly equivalent to a heart. If its not beating, or in this case running, the person, or AI, it belongs to is dead until it starts again. And if your heart stops beating, you have no way of knowing if its ever going to start beating again. And I know what you’re going to say about that. “That would require the AI to be self-aware or sentient to some degree.” In that case, I have a counter for you: how do you know that? Cause remember, that smell of freshly-cut grass is the grass trying to defend itself from further harm, and grass isn’t generally considered to be capable of self-awareness.
youtube AI Harm Incident 2025-08-26T04:0…
Coding Result
Dimension       Value
Responsibility  none
Reasoning       consequentialist
Policy          unclear
Emotion         indifference
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id":"ytc_UgzPRgoP6bgUt2dRLAh4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugz2pfv7J1cgwjDG3a14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugy2cVBvaeTpTbcY2yF4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugw5AcRqs48vGnQtaO94AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgwSoYuqLKxf1_YcagR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgwGUYuvIK7nrCO-h6V4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugy1e2kWe9tI11blmr14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugz2K20x6QMLL_YYyTd4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgwAVCeyWT59lvKfyPZ4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxME3_3rYEkgU8_LXt4AaABAg","responsibility":"developer","reasoning":"contractualist","policy":"regulate","emotion":"approval"}
]
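The per-comment coding result shown above can be recovered from the raw response by parsing the JSON array and selecting the entry whose `id` matches the comment. A minimal sketch (the `coding_for` helper is hypothetical, not part of the coding pipeline; the raw string is truncated to two entries for brevity):

```python
import json

# Excerpt of the raw LLM response above (two of the ten entries)
raw = """[
  {"id":"ytc_UgzPRgoP6bgUt2dRLAh4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugz2pfv7J1cgwjDG3a14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"}
]"""

DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def coding_for(raw_response, comment_id):
    """Return the four coded dimensions for one comment id, or None if absent."""
    for entry in json.loads(raw_response):
        if entry.get("id") == comment_id:
            # Fall back to "unclear" if the model omitted a dimension
            return {d: entry.get(d, "unclear") for d in DIMENSIONS}
    return None

result = coding_for(raw, "ytc_Ugz2pfv7J1cgwjDG3a14AaABAg")
print(result)
# {'responsibility': 'none', 'reasoning': 'consequentialist', 'policy': 'unclear', 'emotion': 'indifference'}
```

The extracted entry matches the Coding Result table above (responsibility: none, reasoning: consequentialist, policy: unclear, emotion: indifference).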