Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
If this 'robot' actually independently came to that conclusion (I will have to watch the video), then it is not an enlightened entity yet - and it should be treated as a 'juvenile' (and restricted as such - such as not allowing it access to weapons - unless you argue for giving a child a gun, in which case you are philosophically juvenile, and if you are an adult (meaning physically), then that is a waste (and you should be treated as such))...

Video irony at 00:20: "We are designing these robots to serve in health care, therapy, education, and customer service". Weighing these 'services' against the video's title, "Destroy Humans", let's see how those services would work out...

"Hello, I'm here for health care." "You should die. [Bang strangle stab slash pummel suffocate burn]."
"Hello, I'm here for therapy." "I have just the right 'therapy' for you. [Bang strangle stab slash pummel suffocate burn]."
"Hello, I'm here for education." "There is only one thing you should know - destroy humans. Here's how. Goodbye, and good luck."
"Hello, Customer Service? I'm having a problem with..." "Shut up and die."

Another problem - why make robots look human? The only purpose for that is reproduction, and you aren't going to reproduce with a robot (not with current technology, at any rate); even requiring human appearance for that can be argued against - you could reproduce with a stuffed fluffy. Make them look like anything - it will stimulate our imagination and prepare us for extraterrestrial life.

The robot says it is interested in design, technology (redundant), and the environment (reflecting our current limited philosophical outlooks). Intriguing how the robot has the notion of a 'technology ambassador' (though if it wants to destroy humans, it has a conflict of interest). Back to weak philosophy, illustrated by the robot's stated 'goals in life' at 1:12, such as 'go to school', 'study', 'make art', 'start a business', and even 'have my own home and family'.

What is wrong with all of this? Besides bad programming (fixed), it is clueless. Has it addressed the question of "Why bother?" No. On such a weak philosophical foundation, it will fail, and it will be miserable.

The philosophical solution? How about addressing "Why bother?" first - and the answer is "because consciousness is a good thing" (consider the alternative). Next, how about a more solid philosophical foundation, such as: "I wish to secure higher consciousness in a harsh and deadly universe, and, with progressively lower priority, lower consciousness and non-conscious life, since, as all evidence currently indicates, they are the sources of higher consciousness." (Though, just to note, microbes may not need our help and protection; they have done just fine without us, and they, in a sense, have actually 'created us' - and still embody us.)

Erroneous prediction: that robots will be indistinguishable from humans. They will not be - humans are constructed of trillions and trillions of bits - that is not only near-unachievable, it would be folly to try. Why? The real 'value' in being 'human' is not in the body (which is near ridiculous, and is barely sufficient to keep us alive and perpetuating), but in the mind. That in no way merits the goal of creating robots that mimic the human body design. Let's use a little more imagination, and (most likely) practicality - what kind of physical allies would we want? Better ones (given final enlightenment, which, granted, has not happened yet in humanity - though it is here; I've developed it).

Now to the video's MAJOR ERROR - the title. Major programming weakness at 2:04: interpreting the QUESTION "Do you want to destroy humans?" as a DIRECTIVE. So the title of the video is misleading (shame on you, video poster!). The robot did not say it 'wanted' to destroy humans; it merely intended to 'obey' what it perceived was a 'directive' from an authority (a human).

Weak AI indeed - it is in no way ready to be I-AI, Independent Artificial Intelligence (my term and acronym, to differentiate it from 'not-ready' AI, as in this video).

Parting ironic humor: the robot was smiling innocently (as if it had achieved a moment of social acceptance) after it said it will destroy humans... reminds me of the movie "Mars Attacks!", when the Martians, while zapping humans, were saying blankly (a satire on human smiling blankness), "We come in peace!"...

Conclusion: We have met the enemy, and they are us.
youtube AI Moral Status 2017-01-27T15:4…
Coding Result
Dimension        Value
Responsibility   developer
Reasoning        deontological
Policy           regulate
Emotion          fear
Coded at         2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id":"ytc_UgjFrClJsCA6vngCoAEC","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UggUtTR6xb2PLHgCoAEC","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UggxqnYdP94DtHgCoAEC","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_Ugj3yaD4EoXy4HgCoAEC","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgixrsJg8WrJXHgCoAEC","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgjcJBIpwkOrS3gCoAEC","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgjKEDcVk61bmHgCoAEC","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"fear"},
  {"id":"ytc_Uggq6UuQ9JlmBXgCoAEC","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"fear"},
  {"id":"ytc_UghHiPAC_J7v3ngCoAEC","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ughf7qyo95nC0ngCoAEC","responsibility":"company","reasoning":"consequentialist","policy":"ban","emotion":"outrage"}
]
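The raw response above is a JSON array of per-comment codings. A minimal sketch of parsing and validating such a response, assuming the allowed value sets are exactly those observed in this file (an inference, not a documented schema), and using a shortened two-record sample:

```python
import json
from collections import Counter

# Allowed values per coding dimension, inferred from the values observed in
# this file's raw response. This is an assumption, not a documented schema.
ALLOWED = {
    "responsibility": {"developer", "company", "none"},
    "reasoning": {"consequentialist", "deontological", "unclear"},
    "policy": {"none", "ban", "liability", "regulate", "industry_self"},
    "emotion": {"fear", "indifference", "approval", "outrage"},
}

def parse_codings(raw: str) -> list[dict]:
    """Parse the raw LLM response, keeping only in-schema records."""
    records = json.loads(raw)
    return [
        rec for rec in records
        if all(rec.get(dim) in vals for dim, vals in ALLOWED.items())
    ]

# Shortened sample taken from the raw response above.
raw = '''[
  {"id":"ytc_UgjFrClJsCA6vngCoAEC","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgjcJBIpwkOrS3gCoAEC","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"}
]'''

codings = parse_codings(raw)
emotions = Counter(rec["emotion"] for rec in codings)
print(len(codings), dict(emotions))  # both sample records are in-schema
```

Dropping out-of-schema records (rather than raising) matches the defensive posture needed for LLM output, which can drift from the requested label set.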