Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
AI monitoring AI is probably the only way we can possibly accomplish alignment. If we're really talking about superintelligence that goes way way beyond human thinking, it would be able to, if it so desired, cook up strategies to get around any limitations we placed on it that we would never think of in a million years. And what would be going on inside its head would be so complex that the idea that we could control what it desires (long term) doesn't make much sense to me. So what do we tend to do to solve the problem of individual agents having too much power? We split that power into a bunch of individuals instead and force them to talk to each other until everyone agrees. We would also likely instruct them to make their conversations public and slow it down for us so people can understand why decisions were made. It wouldn't be AI 1 and AI 2, and if they disagree with each other they try to blow each other up or something, it'd probably look like a digital congress. Only with agents more focused on solving problems than getting re-elected. That way, if one agent turned malignant, there'd be a bunch of other ones around it that haven't and could contain it before it got out of hand. Is it perfect? Nah, but that's just how it goes; the future is never guaranteed, AI or no.
youtube · AI Governance · 2026-04-19T06:1…
Coding Result
Dimension        Value
Responsibility   ai_itself
Reasoning        consequentialist
Policy           liability
Emotion          fear
Coded at         2026-04-27T06:24:53.388235
Raw LLM Response
[{"id":"ytc_Ugz5jbuha1-I164DsJh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},{"id":"ytc_Ugw-sTNEXSrXCXAmHhR4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},{"id":"ytc_UgwmwVuds5i3X6FnBVd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},{"id":"ytc_UgzyQbgxbRFDhzXVM5J4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},{"id":"ytc_Ugz83HIF1EedUc9bDT14AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"approval"},{"id":"ytc_UgweA8wOWhlAXOPlmvF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},{"id":"ytc_UgxmjNaTnTxW4OixeG94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},{"id":"ytc_UgxzNDrYxOiF8NW5S614AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"mixed"},{"id":"ytc_UgzIVmJM4oa1gDXILBN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},{"id":"ytc_UgwcWBmMFEHhRinAEWB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"outrage"}]