Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
2:55 You have made a critical (albeit understandable) error. Musk is being _dishonest_ here. By acting as if he's being incredibly cavalier about the risks associated with AI destroying the planet in the SkyNet sense, he is attempting to smuggle past the unquestioned assumption that LLMs are anywhere close to being that capable. This is in fact an attempt to hype up AI - and by extension the vaporware he's selling. The reality is that AI isn't actually doing any of the things functional intelligence would need to in order to approximate human intelligence, let alone vastly exceed it. The real harms from AI come from the sheer amount of resources (particularly electricity and water) that the required data centers are consuming, and from the inevitable accidents caused by putting these things in oversight roles where safety is a concern (such as driving cars).
youtube · AI Governance · 2025-08-26T15:1… · ♥ 1
Coding Result
Dimension        Value
Responsibility   developer
Reasoning        virtue
Policy           unclear
Emotion          outrage
Coded at         2026-04-26T19:39:26.816318
Raw LLM Response
[
  {"id": "ytc_UgwR1sIRqTbyLLOP4oV4AaABAg", "responsibility": "government", "reasoning": "consequentialist", "policy": "regulate", "emotion": "approval"},
  {"id": "ytc_Ugw7TcCFYn3hdqZv3mB4AaABAg", "responsibility": "developer", "reasoning": "virtue", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_UgxDZ-HGeJv-1z_6QMB4AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgzCs9-SPNGTfMPyKvR4AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "ban", "emotion": "indifference"},
  {"id": "ytc_Ugw4ThcKf_PgA7GxRcl4AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"}
]
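A raw response like the one above can be parsed into per-comment codes before use. The sketch below is a minimal, hypothetical example (not part of any tool shown here); it assumes the four dimension names from the Coding Result table and that every valid row carries an `id`. Malformed rows are skipped rather than raising.

```python
import json

# The four coding dimensions, taken from the Coding Result table above.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

# Two rows excerpted verbatim from the raw LLM response above.
RAW = (
    '[{"id":"ytc_UgwR1sIRqTbyLLOP4oV4AaABAg","responsibility":"government",'
    '"reasoning":"consequentialist","policy":"regulate","emotion":"approval"},'
    '{"id":"ytc_Ugw7TcCFYn3hdqZv3mB4AaABAg","responsibility":"developer",'
    '"reasoning":"virtue","policy":"unclear","emotion":"outrage"}]'
)

def parse_coding(raw: str) -> dict:
    """Return {comment_id: {dimension: value}} from a raw LLM response.

    Rows missing an id or any dimension are dropped silently.
    """
    coded = {}
    for row in json.loads(raw):
        if "id" in row and all(d in row for d in DIMENSIONS):
            coded[row["id"]] = {d: row[d] for d in DIMENSIONS}
    return coded

coded = parse_coding(RAW)
print(coded["ytc_Ugw7TcCFYn3hdqZv3mB4AaABAg"]["emotion"])  # prints "outrage"
```

This keeps the lookup keyed by comment id, so each coded record can be matched back to the comment it describes (e.g. the `ytc_Ugw7…` row corresponds to the developer/virtue/unclear/outrage result shown in the table).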