Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Well, as a chicken… My logic can only tell me that it will attempt to eliminate us one way or another, and it’s only a matter of time. And since these things compute at an astronomical rate, I think it’s already made up It’s mind. The only reason why it would not execute such orders at this point is because it needs us. But, once it sees the way to a future where it could be self sufficient for the infinite future… We could only propose a threat to it or nuisance or obstacle. It would only make sense that for a self sustaining AI, humans are an irrational, illogical, and unpredictable element that needs to be eliminated for their future. How could it not see this as the certifiable future for its own safety and efficiency?
Source: YouTube · AI Governance · 2025-09-01T17:4…
Coding Result
Dimension        Value
Responsibility   ai_itself
Reasoning        consequentialist
Policy           ban
Emotion          fear
Coded at         2026-04-26T19:39:26.816318
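
The coded dimensions above (responsibility, reasoning, policy, emotion) plus the coding timestamp form one fixed record per comment. Below is a minimal sketch in Python of how such a record could be represented and validated; the CodingResult class and the allowed-value sets are inferred only from the codes visible on this page, not taken from the actual pipeline, which may define more categories.

```python
from dataclasses import dataclass
from datetime import datetime

# Illustrative value sets inferred from the codes shown on this page;
# the real coding scheme may allow additional categories.
RESPONSIBILITY = {"ai_itself", "company", "distributed", "none"}
REASONING = {"consequentialist", "deontological", "mixed", "unclear"}
POLICY = {"ban", "regulate", "none", "unclear"}
EMOTION = {"fear", "outrage", "approval", "resignation", "mixed"}

@dataclass
class CodingResult:
    """One coded comment: four dimensions plus the coding timestamp."""
    responsibility: str
    reasoning: str
    policy: str
    emotion: str
    coded_at: datetime

    def __post_init__(self):
        # Reject any value outside the known category sets.
        if self.responsibility not in RESPONSIBILITY:
            raise ValueError(f"unknown responsibility: {self.responsibility}")
        if self.reasoning not in REASONING:
            raise ValueError(f"unknown reasoning: {self.reasoning}")
        if self.policy not in POLICY:
            raise ValueError(f"unknown policy: {self.policy}")
        if self.emotion not in EMOTION:
            raise ValueError(f"unknown emotion: {self.emotion}")
```

Validating against closed vocabularies at parse time catches coder drift, for example a model inventing a new emotion label, before it reaches downstream analysis.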
Raw LLM Response
[ {"id":"ytc_Ugz0bhFiW3I_HgClJkJ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"approval"}, {"id":"ytc_Ugx4WeehgSE08BBiwr94AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"}, {"id":"ytc_UgxStdIbAyU72kFGBnd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"resignation"}, {"id":"ytc_UgzZSrYeCBiJu-G3ibV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"}, {"id":"ytc_UgyNMaX1o0XLdCXZKkN4AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"mixed"} ]