Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
How is this even considered a “debate”? The central issue at hand (AI posing an existential threat to the future of our species) is not for one moment here ever explored in terms of specifics. Simply saying “trust me bro, we’re all gonna die” or “trust me bro everything’s gonna be fine” is pointless if they don’t get into practical examples. It’s obviously not that difficult to conjure up examples. Handing the keys to an automated nuclear response could of course be catastrophic if something went awry. Brian Christian illustrates how this actually happened during the Cold War in his well-written book “The Alignment Problem” (spoiler alert: humans overrode the automated system before nuclear annihilation ensued - and we’re all still here commenting on a debate where no one got this far and simply argued theoretical boogeymen nonsense). Max for one is clearly insincere (or possibly just deluded) stating out of the gate that it’s inevitable that anything a human can do, the magical-messiah-AGI can do better (trust me bro). LeCun doesn’t fare much better stating that we always work in scale - first mice, then cats, humans etc. Considering that we can’t even develop an algorithm capable of matching a simple ant’s pathfinding / avoidance skills - let alone its will to survive - speaks volumes. One thing they do get right in this discussion (it’s not a debate) is the repeated references to power / control. When the hype engine is exhausted and another AI winter sets in, these guys will all have laughed their way to the bank. Kudos all around for the sleight of hand 😂
YouTube · AI Governance · 2023-07-12T03:3… · ♥ 1
Coding Result
Dimension        Value
Responsibility   none
Reasoning        consequentialist
Policy           unclear
Emotion          outrage
Coded at         2026-04-27T06:24:59.937377
Raw LLM Response
[ {"id":"ytc_Ugwnw3SYzESwHw7Z8554AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"}, {"id":"ytc_UgzlXkPIN3oROn36zXx4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"}, {"id":"ytc_Ugz-ZQO-Blc8svSRkt94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}, {"id":"ytc_Ugw8lL4YbPdBN_CVmPF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}, {"id":"ytc_UgyafhT2kXn14bl6Uup4AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"outrage"}, {"id":"ytc_UgwHGK7BEBnXJnf-dit4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"}, {"id":"ytc_UgyXJPnVb7uy5-4xFG14AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"fear"}, {"id":"ytc_Ugx-StT6n7J5xA2FQZV4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_UgwQQ2YTaUZu7EcQbnF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}, {"id":"ytc_UgwUm2-SGyL-jywMKvp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"} ]