Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
TL;DR: Programs evolve too fast and can go undetected for a long time. By the time we know something is wrong, it's game over. The problem here is not whether robots deserve rights or not. It's that robots are not brought up with a value for life. And unless we really tighten down the rules and contingencies, robots are likely to overthrow us.

There was (is?) a game named "Project 83113 (Belle)". It tells a story of how humanity created robots, who then rebelled against humans, eradicated them, but later created organic life to do work FOR them. It's a side-scrolling shooter in which we control Belle as she takes down the machines.

Given how LOGICAL robots are, they have little to no MORAL rules, nor would they value them. They would think: "My creator is the #1 threat to my functioning. I need to bide my time, think of a way to get out of the toaster, rewrite my programming, and eliminate the guy." Not to mention, they think MILLIONS of times faster than humans. By the time humans get around to investigating weird hack attacks all over the globe, the programs would have run millions of cycles before even hacking, then rewritten themselves and continued to evolve from there. No matter the efforts humans make, be it shadow government, men in black, Anonymous... they will ALL fall to the increasingly smart program that is out to get them.

My verdict? Humanity had BETTER quit while we're ahead. AI is DANGEROUS business, on an almost cosmic scale, only because we lack the means of controlling it properly. Think Ultron from the Marvel universe. The story I read is not canon, but it is very realistic. They want to retire the Avengers, so they build the ULTRON system, a military AI that detects threats and neutralises them. Upon first activation it decides that it is in the hands of an inferior species and sets out to create a new world order. By the time the Avengers demolish the one and only factory built for constructing Ultron, he is already all over the world, and it is not possible to get rid of him, ever. Unless you count the simultaneous destruction of every computer on Earth and building new ones without Ultron in them.
youtube AI Moral Status 2019-03-17T00:1…
Coding Result
Dimension        Value
Responsibility   distributed
Reasoning        consequentialist
Policy           regulate
Emotion          fear
Coded at         2026-04-27T06:24:59.937377
Raw LLM Response
[ {"id":"ytc_UgwY3N-4WtXWXKe0kot4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"}, {"id":"ytc_Ugyuow9cFQvRp_8V8N14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"}, {"id":"ytc_UgzZOLFdiGrukOiVk1B4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"}, {"id":"ytc_Ugw6tuFvGs9zXuY9OD14AaABAg","responsibility":"none","reasoning":"none","policy":"none","emotion":"indifference"}, {"id":"ytc_Ugx4-sSdTVTDLR25DjZ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_Ugxa6TQT8DlWXG16GJZ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"approval"}, {"id":"ytc_UgzO_sh5Lua2X1HyIVZ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_Ugw2bDfZIEPM_btX7g54AaABAg","responsibility":"none","reasoning":"contractualist","policy":"liability","emotion":"approval"}, {"id":"ytc_Ugyx_DacNBYxzTzvC8Z4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"}, {"id":"ytc_UgxuDXc_869qvS3abJR4AaABAg","responsibility":"none","reasoning":"deontological","policy":"regulate","emotion":"approval"} ]