Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
A lot of the disagreement here comes from collapsing very different questions into one. This isn’t really about whether AGI will be conscious, benevolent, malicious, or “smart enough to take over.” Those are philosophical or speculative questions. The practical risk surface is simpler and already here: We are building systems that reason, coordinate, and act across other systems, and we are doing so without making authority, causality, or responsibility first-class architectural constraints.

History shows we never ship zero-bug systems. That’s fine. The real failure mode isn’t bugs — it’s irreversible action without reconstructable cause.

If a system:
• can trigger real-world actions
• can do so faster than human review
• can interact with other agents and tools
• and cannot produce tamper-evident proof of why it acted
then safety discussions about “alignment” are premature.

Receipts-native, append-only, verifiable decision trails don’t make intelligence safe. They make governance survivable. They ensure that when something goes wrong — and it will — the causal chain survives the failure.

This isn’t about trusting humans more, trusting AI less, or hoping consciousness saves us. It’s about refusing to ship systems where power silently accumulates. You don’t need perfect control. You need bounded authority, detectable violations, and recoverable reality. Everything else is theater.
youtube AI Governance 2026-01-05T06:3…
Coding Result
Dimension       Value
Responsibility  distributed
Reasoning       consequentialist
Policy          regulate
Emotion         fear
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[ {"id":"ytc_UgyoPwsLfxb1aJL6LvB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}, {"id":"ytc_Ugzc8AzvzYhkXN7DKhl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}, {"id":"ytc_Ugz8GPgTIxzD1-d-v7h4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}, {"id":"ytc_UgxNonqihkYrzE46LgV4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"}, {"id":"ytc_Ugydvff3stD2l6XRpMZ4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"none","emotion":"approval"}, {"id":"ytc_UgwwEg7zc4Z8xPVjIwJ4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"none","emotion":"approval"}, {"id":"ytc_Ugz1Gf2yGxX411qzTr94AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_Ugx4nFIe8E6Y3YDZpJZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"}, {"id":"ytc_UgwbKu20HIzmoIHxmQN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}, {"id":"ytc_Ugygs2oMH6UnQ8kJDHt4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"} ]