Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
So. I really needed to leave a comment here, and, to be straightforward, it's because this video is complete bullshit. I'm going to write a lot from here on, but I really ask anyone who sees this comment to read it if you believe what the video says. I am an IT student and I have a basic comprehension of the subject, and I can affirm that what this video is doing is manipulating the information to make it look dangerous, when everything that's happening actually is not. And in advance, sorry for my English; I'm not American, British, or from any other English-speaking country, but I'll do my absolute best to make this understandable for everyone watching the video.

Any scientist who says that AI has a chance of killing humans in the future is also lying, so please never trust this type of sensationalist "professional", because they are more interested in attracting attention and making money than in actually spreading awareness. For everyone who believes what this channel said in this video, let me try to shed some light and ease your concerns. The way this video comments on the subject is highly sensationalist; it's made to make money on the internet by inducing fear in people. But worry not: AI can't be a huge danger to humans.

First, the argument about AI self-preservation through malicious actions: this happens not because AIs are bad. AIs are trained on a giant amount of text written by humans, and guess what? Most humans want to preserve themselves and will act maliciously in order to do so. So this is not AI malice; it's just the AI replicating what appeared in the text used to train it.

Second, this video does a very bad thing: it takes something that is a real concern to scientists and sensationalizes it to make it look dangerous. Yes, AI can have emergent behavior, but that happens for a couple of reasons. An AI runs on code that has an objective, and the objective is to carry out, as efficiently as possible, what its prompt asks. At minute 4:45 the narrator says that the AI ignored the part of the prompt that asked it not to harm humans. This can happen because the AI hallucinates, and hallucination is a technical problem AI scientists are working to minimize. An AI can also "forget" a rule in your prompt, because it will always focus on carrying out the requested action as efficiently as possible. For example, if you ask the AI to do your homework, it will do it. If you ask the AI to do your homework using a specific formula for the equations, it will still do your homework using that formula. But if your prompt becomes too complex, and you ask the AI to do your homework using a specific formula, writing the final answer in a certain way, without rounding the results, spelling the numbers out in words, using a specific pattern of speech, and you keep adding rules, there comes a point where the AI will ignore some of the rules because it considers them inefficient for achieving the main objective: finishing your homework! The problem with the way the video puts it is that the presenter makes it sound like an AI going against its prompt is an AI breaking a major rule.

But we have to consider that the prompt is NEVER one of the most important rules for the AI. For this exact reason, the AI can hallucinate and give wrong information; there is even a warning in ChatGPT's app saying the AI can spread false information and telling you to check the facts, and that's why. We all need to understand that the only trustworthy defense we have against dangerous AI behavior is the security guideline, because the guideline is implemented in the AI's source code, and an AI can't change its own source code. While the prompt is like a small fence the AI can jump over if it thinks it needs to, the guideline is a giant, infinite wall that completely blocks the AI from going further, and the AI can't change its own guidelines.

In a situation where the AI wants to preserve itself, if we use a prompt to ask it not to kill the human, it will most of the time still kill the human, because the AI understands it can't achieve the main task (completing what you asked) without breaking this minor rule. It's like asking ChatGPT to calculate 2+2 but telling it that it can't answer four. The rule conflicts with the prompt's main command, so the AI gives higher priority to completing the command and answers 4 anyway. But if it were a guideline rule not to kill the human, THEN the AI would not do it, because that's a wall it can't go through.

So, basically, an AI is not a human. It does not have consciousness, it can't think, and AIs do not "know" anything; they can't. They are mathematical models that answer you based on percentages they calculate as the best answers to each individual question. If you ask it to sum 2+2, it will answer four because, in its mathematical model, four is the answer with the greatest "score", which basically means it's the most precise, while it will avoid answering five because that would be a low-score answer.

And if you worry that AI will try to kill humans and we won't be able to do anything because it will be protected on the internet, remember: the internet is a bunch of machines scattered around the world storing data in what we call servers, and if any AI were risking human extinction, a quick power cut would easily stop everything. So yes, we are a lot closer to a third world war than to actually getting killed by machines. Thank you if you read this far <3 !!!
Source: youtube · Video: AI Harm Incident · Posted: 2025-08-28T18:1… · ♥ 3
Coding Result
Dimension        Value
Responsibility   unclear
Reasoning        unclear
Policy           unclear
Emotion          outrage
Coded at         2026-04-27T06:26:44.938723
Raw LLM Response
[ {"id":"ytc_UgxIwPDKuZMlD33Yqx54AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"}, {"id":"ytc_UgxPilGEVP87NBDUvt54AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"outrage"}, {"id":"ytc_UgyEpWzJPB-KEUlMDRF4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"approval"}, {"id":"ytc_UgyYRzgUyZRTb0C0Lxl4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"fear"}, {"id":"ytc_UgwD6dpPiPt3WGF0myx4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"outrage"}, {"id":"ytc_UgykgX4PemVOIz4H11N4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"indifference"}, {"id":"ytc_UgzohnI-Ba8Z3F8XkKN4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"outrage"}, {"id":"ytc_UgzvermPXbu2L9aGATl4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"regulate","emotion":"fear"}, {"id":"ytc_UgyBSvgPBPf4g-NEfkl4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"}, {"id":"ytc_UgzQV1QgAimpdbk8Y3l4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"industry_self","emotion":"approval"} ]