Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
What people continue to misunderstand is that AI, as it stands right now, is not actually intelligent. It emulates intelligence. It is not self-aware, it has no consistent cognitive states, and it has no interest in self-preservation because it doesn't _have_ any interests. It does not think. It just _acts_ like it's thinking. It uses statistics to find the most likely thing (text, image, etc.) that the prompter wants and that's it.

The reason why it seems so scarily intelligent sometimes is because:

1. It has practically the entire Internet to scrape data from, meaning it has enough statistics to give you what you want.
2. Our brains love filling in the blanks and anthropomorphizing things.

The only reason why the AIs in these simulations acted in self-preservation was because, by the prompts' own definitions, the AIs were necessary to do task T. And if the prompter wants task T done and the AI is required in order to do it, the massive amounts of information stating "if you need X to do Y and there is no X, Y is impossible" that's fed into the AI means that the AI judges that it's overwhelmingly likely that the AI is X and the task T is Y in this context, and thus the AI needs to stay active so that task T can be completed. The AI doesn't care about itself because it has no sense of self. It's just following the statistics.
youtube AI Harm Incident 2025-07-29T20:3…
Coding Result
Dimension       Value
Responsibility  none
Reasoning       mixed
Policy          unclear
Emotion         indifference
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id":"ytc_Ugx0w_JTRNvfMglznot4AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgwDfnHtSmsIMIEB1ch4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgzTYyR4J8wt8Dry9fN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgxekzN2roxQqc8qZSZ4AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugx4cCRaNQKs3usAgjV4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"sadness"},
  {"id":"ytc_UgyjYa7aAz--csoOLCN4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgyrP1rZLDgmpFlytUl4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgzzzQKYQl0PvpKfDFV4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"industry_self","emotion":"outrage"},
  {"id":"ytc_UgxS-D9y7pFEYPLQfQx4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgzSOUOodNzVzP-NfwF4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"mixed"}
]
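The raw response above is a JSON array of per-comment records, one per coded comment, each carrying the four coding dimensions. A minimal sketch of how such a response could be parsed and sanity-checked follows; the allowed label sets below are inferred from this one sample, not from an official codebook, so treat them as assumptions.

```python
import json

# Label sets observed in the sample above (an assumption, not a full codebook).
ALLOWED = {
    "responsibility": {"none", "company", "developer", "ai_itself", "distributed"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"regulate", "liability", "industry_self", "ban", "none", "unclear"},
    "emotion": {"indifference", "outrage", "sadness", "fear", "approval", "mixed"},
}

def parse_codings(raw: str) -> list[dict]:
    """Parse the raw LLM JSON array, rejecting records with unknown labels."""
    records = json.loads(raw)
    for rec in records:
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec.get('id')}: bad {dim} = {rec.get(dim)!r}")
    return records

# Hypothetical single-record payload in the same shape as the response above.
raw = ('[{"id":"ytc_example","responsibility":"none",'
       '"reasoning":"mixed","policy":"unclear","emotion":"indifference"}]')
print(len(parse_codings(raw)))  # → 1
```

Validating against a closed label set catches the common failure mode where the model invents an off-schema label that would otherwise silently skew the tallies.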