Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
They always bring up "you can't turn it off" and that it's "silly" or "stupid" to even think that... Is it? Is it really? No matter how sophisticated or intelligent an A.I. is, it is just 0s and 1s on a server. Dr. Yampolskiy says, "it will be too smart and already have backups".... Backups where? A super-A.I. will require a LOT of space on servers. It can't just be "backed up", or even moved easily.... But even if it WERE backed up, it would be on another set of servers. Servers can be blown up. 😅 They require electricity.  All you have to do to control a super-A.I. is not give it agency. A.I., even super-A.G.I., should remain a "tool". If it's too powerful, then only give it limited read-only access to the internet (or not at all) and put bunker-buster bombs under the servers, on separate, hard-wired circuits. Use it as a tool, consult it, just don't give it agency. Whether it's "Terminator", "Colossus", or "I Robot", the mistake people make is that they give the A.I. agency. The other thing missing from most of these conversations is that they underestimate people. Not everyone is going to "go along" with this. There are already modern Luddites protesting A.I. If it gets to a certain point, a few humans WILL take action. There will be attacks on data centers, and/or the electrical grid supplying them with power. People have watched too many Science Fiction movies, including possibly Dr. Yampolskiy. A.I. can't simply "escape" through the internet. 🤣 The amount of data and code that would make up a super-intelligent A.I. is enormous. I am not personally a Luddite; I love technology. A.G.I. can be created and used to solve major problems, perhaps cure cancer, world hunger, etc. ....But it should remain a "tool" and be controlled.  By all means, "consult" Super-A.G.I. for solutions, but keep it confined to a "room"/building, without agency... And with built-in fail-safes. 
It's amazing to me that these very smart people, having these conversations, like Steven and Roman, make this way more complicated than it is........
youtube AI Governance 2025-12-06T16:5…
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       deontological
Policy          liability
Emotion         mixed
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_UgwV47l4X3LUQPOB2et4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgwudC2x64ptrNp-5fh4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugxhh_9NnrvMIlqx3v54AaABAg","responsibility":"developer","reasoning":"deontological","policy":"industry_self","emotion":"mixed"},
  {"id":"ytc_UgwFsZBpdbjf3LsAZ-p4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgyoppBEF5_U08psUK94AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgydW1lnHd6Cj4HhQjV4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"resignation"},
  {"id":"ytc_Ugx15o8BADR2yTjJxS14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugz7OGhKz-ChmKx62Ux4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"mixed"},
  {"id":"ytc_UgwBm1eHyNb4jHjM7GB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"regulate","emotion":"mixed"},
  {"id":"ytc_Ugw1OTGdxb_lPxb8wud4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"ban","emotion":"outrage"}
]
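A raw batch response like the one above pairs each comment ID with its four coding dimensions. The sketch below shows one plausible way to parse such a response and look up the coding for a single comment; the variable names and the single-entry sample payload are illustrative, not part of the actual pipeline.

```python
import json

# Hypothetical sample payload mirroring the raw response format above:
# a JSON array of objects, each carrying a comment "id" plus the four
# coding dimensions (responsibility, reasoning, policy, emotion).
raw_response = """[
  {"id": "ytc_Ugz7OGhKz-ChmKx62Ux4AaABAg",
   "responsibility": "developer",
   "reasoning": "deontological",
   "policy": "liability",
   "emotion": "mixed"}
]"""

# Index the parsed rows by comment ID for O(1) lookup.
codings = {row["id"]: row for row in json.loads(raw_response)}

# Retrieve the coding for one comment and read off a dimension.
coding = codings["ytc_Ugz7OGhKz-ChmKx62Ux4AaABAg"]
print(coding["policy"])   # liability
print(coding["emotion"])  # mixed
```

Indexing by ID before lookup (rather than scanning the list per query) keeps per-comment retrieval constant-time, which matters if the same batch response is consulted once per displayed comment.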