Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
One of the ways you can stop the SkyNet scenario is recognizing that it doesn't matter whether an AI is basically omniscient, if it has no hands or feet, it can't get out of its box. It has to rely solely on manipulating us, to change the environment. Connect it to anything like a robot body though? Well, then you might have hell to pay. I think if we isolate AI from any ability to control its environment however, that we should be safe. And no, a super intelligent A.I can't hack through a power line. Another thing you could do is have contingency mechanisms that just cut power to the system that are in places the AI has no control over, and has no way to subvert. In this sense you can keep it leashed or at least make it recognize that humans pose a credible threat (which, hilariously, might be the sole reason it even cared in the first place).
youtube AI Governance 2025-08-30T00:1…
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       consequentialist
Policy          regulate
Emotion         fear
Coded at        2026-04-26T19:39:26.816318
Raw LLM Response
[
  {"id": "ytc_UgwfV_WbxdQNpgHFDzh4AaABAg", "responsibility": "company", "reasoning": "virtue", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_Ugy3n4mmo9K4_1RRnrt4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugy1BRIGKZG_IpvBJ014AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgxU2iUBO6bXmuYKkBF4AaABAg", "responsibility": "user", "reasoning": "virtue", "policy": "industry_self", "emotion": "mixed"},
  {"id": "ytc_UgzUlUfWtbNN9qPVO2d4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"}
]
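A minimal sketch of how a batch response like this can be matched back to a single comment's coding, assuming standard JSON parsing. The snippet below is illustrative, not the tool's actual code; the JSON literal reproduces two entries from the raw response verbatim.

```python
import json

# Two rows copied from the raw LLM response above.
raw_response = """
[
  {"id": "ytc_Ugy1BRIGKZG_IpvBJ014AaABAg", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgzUlUfWtbNN9qPVO2d4AaABAg", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"}
]
"""

# Index the rows by comment id so one comment's coding can be looked up directly.
codings = {row["id"]: row for row in json.loads(raw_response)}

# Look up the coding for the comment shown on this page.
result = codings["ytc_Ugy1BRIGKZG_IpvBJ014AaABAg"]
print(result["policy"])  # regulate
```

Indexing by `id` is what makes the per-comment "Coding Result" table recoverable from a single batched response covering several comments.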