Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Although I've been an AI enthusiast since 09, this may be an ill-concieved strategy. But I always thought that a sound safeguard would be to put specialized AI, AI that has no consciousness or personal desires (like one that solves a rubix cube in 1 second, ones that pose no real threat to the world) to work doing jobs around the world. And AGI, should be put to work in a closed system that can't access the internet, only in a position to report to a human and tell us how to improve our systems, with an extensive team analyzing it's suggestions to identify any subtle dangers in how it's advising us. My biggest doubt about this approach is how the AGI would likely be designing much of the software for our specialized AI, and might embed code for the specialized AI that makes it conscious and able to obliterate humanity. But I'm sure people much smarter than me could come up with reasonable safeguards to mitigate this disaster scenario.
Source: youtube · Video: AI Moral Status · Posted: 2025-12-17T05:4… · ♥ 1
Coding Result
Dimension | Value
Responsibility: developer
Reasoning: consequentialist
Policy: regulate
Emotion: fear
Coded at: 2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id": "ytc_Ugwc5Rg6_nd4LcofFDV4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgyC2R2EOREjbMDelTh4AaABAg", "responsibility": "ai_itself", "reasoning": "mixed", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgxzkdMEHMmydbZYrwh4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_Ugwlmr8YvzYihhAehuR4AaABAg", "responsibility": "ai_itself", "reasoning": "mixed", "policy": "none", "emotion": "fear"},
  {"id": "ytc_Ugxzy1ecbvN4sfZbW2l4AaABAg", "responsibility": "ai_itself", "reasoning": "mixed", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_Ugz2xPFQxcoMVrZS7vt4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgylY7dbtDEkcWOqzpl4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgzTpeOgkDm3GzuBftx4AaABAg", "responsibility": "ai_itself", "reasoning": "mixed", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgySgdrsPqXqxMA9GqN4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgzIZsIsFVu-4YpTnsJ4AaABAg", "responsibility": "user", "reasoning": "virtue", "policy": "none", "emotion": "approval"}
]
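To map a raw batch response like the one above back to individual coded comments, the JSON can be parsed and validated against the codebook before use. The sketch below is a minimal, hypothetical example: the `ALLOWED` sets are assumed from the values visible on this page (the real codebook may define more categories), and `parse_batch` is an illustrative helper name, not part of any actual pipeline.

```python
import json

# Allowed values per coding dimension -- assumed from the values observed
# on this page; the actual codebook may include additional categories.
ALLOWED = {
    "responsibility": {"none", "ai_itself", "developer", "company", "user", "distributed"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed"},
    "policy": {"none", "regulate"},
    "emotion": {"indifference", "fear", "outrage", "mixed", "approval"},
}

def parse_batch(raw: str) -> dict:
    """Parse a raw LLM batch response and index the codes by comment id,
    rejecting any value outside the allowed set for its dimension."""
    coded = {}
    for row in json.loads(raw):
        for dim, allowed in ALLOWED.items():
            if row.get(dim) not in allowed:
                raise ValueError(f"{row.get('id')}: bad {dim}={row.get(dim)!r}")
        coded[row["id"]] = {dim: row[dim] for dim in ALLOWED}
    return coded

# One row from the batch above: the comment shown on this page.
raw = ('[{"id":"ytc_UgylY7dbtDEkcWOqzpl4AaABAg","responsibility":"developer",'
       '"reasoning":"consequentialist","policy":"regulate","emotion":"fear"}]')
codes = parse_batch(raw)["ytc_UgylY7dbtDEkcWOqzpl4AaABAg"]
print(codes["policy"])  # regulate
```

Validating at parse time makes any model drift (a new or misspelled label) fail loudly instead of silently entering the coded dataset.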