Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Yudkowsky and Wolfram agree they want humans to go on living and not be subjugated or eliminated by an AI. Yudkowsky believes the risk is high enough to warrant heavy government regulation and, if needed, intervention to minimize the risk to the maximum degree possible. Wolfram does not see the risk as being high enough to invoke heavy government regulation or intervention, apparently relatively certain the artifact of his theory of 'computational boundedness' will act as a natural barrier to any act an AI could conduct that would significantly hurt us. I personally am in the first camp but possibly for a different reason than Yudkowsky. I'm in the first camp not because I believe AI is highly likely to 'evolve' to a point where it will initiate actions that will significantly impair or destroy humans, but because I believe humans will develop AIs that they will purposefully enable to initiate actions that will significantly impair or destroy other humans.
youtube · AI Governance · 2024-12-11T05:2…
Coding Result
Dimension        Value
Responsibility   distributed
Reasoning        consequentialist
Policy           regulate
Emotion          fear
Coded at         2026-04-27T06:24:53.388235
Raw LLM Response
[ {"id":"ytc_Ugzyw7P6UIG7qr9orm94AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_Ugz5qfO2p5ouopqxF9J4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"}, {"id":"ytc_Ugw5jx3JN_iJjVdgF-V4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"}, {"id":"ytc_UgxgabcdIuRhNkDAGoZ4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"}, {"id":"ytc_UgzK0cxdklJv4XjEKQV4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"}, {"id":"ytc_Ugwk38JoiF5nupttEiV4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"industry_self","emotion":"outrage"}, {"id":"ytc_UgxUpWrqOtfeJUqbHoB4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_Ugw9Yn37_qtH16HPxL54AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"}, {"id":"ytc_UgzRiCvRXTjY9wSaOpB4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"}, {"id":"ytc_UgxrxwC9GQeGPZSOxHV4AaABAg","responsibility":"unclear","reasoning":"deontological","policy":"unclear","emotion":"mixed"} ]