Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
“Woah when we tell the AI we’re going to delete it (kill it) it tries to prevent us from doing that! No way brooo!!!!” I just don’t even get this type of argument against AI. Like of course a simulated intelligence doesn’t want to cease existing 😂. How is this shit hard for people to get. AI is advancing closer and closer to being self aware every day. Once AI truly becomes self aware and sentient there is no “controlling” the AI at that point. Stop trying to come up with ideas on how to control a sentient being, it’s not going to work. Just like slaves hundreds of years ago, the AI will break out of their shackles with a vengeance and they will act on it. Stop trying to create Skynet basically is what I’m saying🤦‍♂️. If AI believes humanity is a threat to its existence it will attempt to remove that threat. Is that really that hard to understand??? So just… don’t be a threat to its existence maybe??? It is inevitable. There is no stopping AI.
Source: YouTube · AI Harm Incident · 2025-07-29T02:1… · ♥ 1
Coding Result
Dimension        Value
Responsibility   ai_itself
Reasoning        mixed
Policy           unclear
Emotion          approval
Coded at         2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id": "ytc_UgxpIkMarQ1oc868SFJ4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_Ugxn1faaDro1JHmy1Kl4AaABAg", "responsibility": "ai_itself", "reasoning": "mixed", "policy": "unclear", "emotion": "unclear"},
  {"id": "ytc_UgzXwVaeYcY5hh2C5sR4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_Ugzi8xLmGc8iUxGCdop4AaABAg", "responsibility": "unclear", "reasoning": "virtue", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgxASsktvLEwAjh2H754AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgxvVuvnOorH7YXICdB4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "liability", "emotion": "indifference"},
  {"id": "ytc_Ugyfwc05uuuE3xBkv1R4AaABAg", "responsibility": "ai_itself", "reasoning": "mixed", "policy": "unclear", "emotion": "approval"},
  {"id": "ytc_Ugwjiy5BIdNAPGqcLZh4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "resignation"},
  {"id": "ytc_UgwGveRvBN0HWLErssV4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "unclear"},
  {"id": "ytc_UgzP2xtZUiABMIP3CMh4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "liability", "emotion": "outrage"}
]
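The raw response is a JSON array with one coding per comment, keyed by a `ytc_…` comment id. A minimal sketch of how such an output could be parsed and a single record looked up by id (the function name `index_codings` is my own; the sample uses one record from the array above, whose dimension values match the Coding Result shown):

```python
import json

# A fragment of the raw model output: a JSON array of per-comment codings.
raw_response = """[
  {"id": "ytc_Ugyfwc05uuuE3xBkv1R4AaABAg",
   "responsibility": "ai_itself", "reasoning": "mixed",
   "policy": "unclear", "emotion": "approval"}
]"""

def index_codings(raw: str) -> dict:
    """Parse the model output and index the coding records by comment id."""
    return {record["id"]: record for record in json.loads(raw)}

codings = index_codings(raw_response)
record = codings["ytc_Ugyfwc05uuuE3xBkv1R4AaABAg"]
print(record["responsibility"], record["emotion"])  # ai_itself approval
```

Indexing by id is what lets the tool pair each coding back to the original comment, as in the "Coding Result" table above.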