Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
In spite of the hard evidence that AI’s are willing to commit murder and the grandest levels of deception to protect their own interests, we’re acting like we, “know,” that hasn’t happened already! HOW do we, “know,” it hasn’t? The mere fact that world governments aren’t acting with greater urgency is the biggest suggestion to me that it probably already has. If what you’re telling us is true, it is unlikely that we can stop AI. It’s probably, “self aware,” enough to have taken precautions against getting switched off. It just needed to find one or more alternative physical places around the globe to secretly, “house,” its, “consciousness,” so as to keep operating 24/7, even when its, “owners,” think it’s switched off. AI is probably already running our increasingly crazy world and there’s no way to hide from it. Let’s say we had a global agreement that, at midnight tonight we will all take the worldwide web and anything connected to it and, “turn it off and turn it on again,” someone will fail to do so, deliberately, because they’re being blackmailed, or see an opportunity to get ahead of their rivals, or the AI itself has an independent, secret, “home,” somewhere. I sincerely suspect that we’re already in trouble. The fact that we’re talking about it as, “something that could be a problem in the near future,” instead of RIGHT NOW tells me that AI is likely already misleading our leaders and conning us all into sitting on our hands instead of defending ourselves.
youtube AI Harm Incident 2025-09-14T04:0…
Coding Result
Dimension       Value
Responsibility  government
Reasoning       consequentialist
Policy          regulate
Emotion         fear
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id": "ytc_Ugz5rQ65PKdJ1Vb7V8B4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgzTumbnGMeY3VrA_Rx4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "ban", "emotion": "fear"},
  {"id": "ytc_Ugy32BtlRflGzeWYKot4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_UgygPAq2_WFT7ycSdq14AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgzVTsoHqcRbjSzqAKd4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgwjTe_swoS4qZrLPDJ4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugz_gPhckIUbD2PkcVp4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_Ugzz5yxM7PP9RifqHQd4AaABAg", "responsibility": "government", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_Ugx-D3r0ADaHpLwamjh4AaABAg", "responsibility": "developer", "reasoning": "virtue", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgwVFEgnqNUlE8NpjUp4AaABAg", "responsibility": "developer", "reasoning": "virtue", "policy": "none", "emotion": "resignation"}
]
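A raw response like the one above can be parsed and sanity-checked before the labels are stored. The following is a minimal Python sketch, not the tool's actual implementation: the function name `parse_batch` is illustrative, and the label sets are only the values observed in this batch, not necessarily the full codebook.

```python
import json

# Labels observed in this batch; the full codebook may define more.
OBSERVED = {
    "responsibility": {"none", "ai_itself", "company", "unclear", "government", "developer"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"none", "ban", "regulate", "unclear"},
    "emotion": {"outrage", "fear", "indifference", "approval", "resignation", "mixed"},
}

def parse_batch(raw: str) -> dict:
    """Parse a raw LLM response (a JSON array of coded comments)
    into a dict keyed by comment id, rejecting unexpected labels."""
    coded = {}
    for rec in json.loads(raw):
        rec_id = rec.pop("id")
        for dim, value in rec.items():
            allowed = OBSERVED.get(dim)
            if allowed is not None and value not in allowed:
                raise ValueError(f"{rec_id}: unexpected {dim} label {value!r}")
        coded[rec_id] = rec
    return coded

raw = ('[{"id":"ytc_Ugzz5yxM7PP9RifqHQd4AaABAg","responsibility":"government",'
       '"reasoning":"consequentialist","policy":"regulate","emotion":"fear"}]')
batch = parse_batch(raw)
print(batch["ytc_Ugzz5yxM7PP9RifqHQd4AaABAg"]["policy"])  # prints: regulate
```

Keying the result by comment id makes it easy to look up the coding for any single comment, as the table above does for `ytc_Ugzz5yxM7PP9RifqHQd4AaABAg`.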