Raw LLM Responses

Inspect the exact model output for each coded comment.

Comment
The idea of machines becoming completely autonomous and gaining self awareness is always interesting because people usually fall into two different categories when presented with this topic: sympathetic and non-sympathetic. The sympathetic side of this discussion puts forth that "once machines gain our level of conscience, or greater, what then? Will they help us like we designed or will they wipe us out for centuries of slavery, offence, murder?" These questions stem from our view of ourselves as we look back upon our own history to see things like slavery, torture, murder, genocide, etc. The sympathetic side essentially, with or without knowing it, puts themselves in place of these fictitious super computers and asks "what would I feel if I was subject to that?" I would consider myself less sympathetic to these mechanical minds because I see that between the computer I'm using and the mega processors being developed, it's all man made. One big reason we question the morality of using animals for various purposes, aside from the sympathy explained above, is because we've been sharing the earth with them as long as we've existed. They can also express what we perceive as emotion. You're usually able to tell when a dog is hurt or happy. With a machine, robot, AI, behavior has to be built in the hardware and programmed in the software. It's all man-made. If you come across a machine that acts human, that appears to have feelings, it will be because a human programmed it that way. If you come across a robot that wishes destruction upon humanity, it will be because a human, somehow, taught it that hatred. If Skynet is to rise, it will be no beings fault but our own. Machines are nothing but tools, and up until humanity is gone and robots are tasked with carrying on human society, they will never truly have rights, nor should they.
youtube AI Moral Status 2017-02-23T14:5…
Coding Result
Dimension       Value
Responsibility  none
Reasoning       mixed
Policy          unclear
Emotion         mixed
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id":"ytc_UgiLDZDsluuX7ngCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_UghO27xPtF4OL3gCoAEC","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"mixed"},
  {"id":"ytc_UgiyMwZ_7WU5mHgCoAEC","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugh-nIhLVlynuHgCoAEC","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"approval"},
  {"id":"ytc_Ugh6GzVlcqfQxHgCoAEC","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Uggd7HuqJgAx-XgCoAEC","responsibility":"distributed","reasoning":"mixed","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgiVAEnmcJsth3gCoAEC","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgjS4PQpHaKB33gCoAEC","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgjZof-spcqFxngCoAEC","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytc_UggrO82HB4K0HHgCoAEC","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"}
]
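A raw batch response like the one above can be parsed and sanity-checked before its values are accepted as codings. The following is a minimal sketch, not the tool's actual pipeline: `validate_codings` is a hypothetical helper, and the allowed value sets are inferred only from the records visible in this batch (the real codebook may define additional categories).

```python
import json

# Allowed values per dimension -- inferred from this batch alone (assumption;
# the actual codebook may permit more categories than are shown here).
ALLOWED = {
    "responsibility": {"none", "ai_itself", "developer", "distributed"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"ban", "liability", "regulate", "none", "unclear"},
    "emotion": {"fear", "mixed", "approval", "resignation", "indifference"},
}

def validate_codings(raw: str) -> list[dict]:
    """Parse a raw LLM response and check every record against the schema."""
    records = json.loads(raw)
    for rec in records:
        # Comment IDs in this dataset carry a "ytc_" prefix.
        if not rec.get("id", "").startswith("ytc_"):
            raise ValueError(f"unexpected id: {rec.get('id')!r}")
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec['id']}: bad {dim} value {rec.get(dim)!r}")
    return records

# Example: the record that produced the coding result shown above.
raw = ('[{"id":"ytc_UgiyMwZ_7WU5mHgCoAEC","responsibility":"none",'
       '"reasoning":"mixed","policy":"unclear","emotion":"mixed"}]')
coded = validate_codings(raw)
print(coded[0]["policy"])  # -> unclear
```

Validating before ingestion catches the common failure mode of LLM coders: a syntactically valid JSON response containing an off-schema label, which would otherwise surface later as an "unclear" or blank cell in the dashboard.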