Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Programming a robot to react to harmful situations does not count as a "feeling of pain" in my opinion. We may program robots to avoid harmful situations by giving them an algorithm that follows a path similar to humans, but this does not mean that they actually "feel" pain. Also, the way we feel pain is not outside natural law; in other words, the way we react to pain is just how we are programmed to act, just like biological robots. But that's not all: we also "feel" pain. I never get why some humans are so obsessed with the idea of artificial intelligence. I mean, we don't need "artificial intelligence"; right now we are using algorithms and robots in many ways, but we don't need artificial "intelligence", because that's useless. It's not going to do us any better than what current algorithms do, and if we consider the possibility of an AI takeover, it becomes obvious how meaningless the idea of AI is. (A Space Odyssey, anyone?) The war on artificial intelligence is similar to the war on virtual "reality". It's not about enjoying ourselves more in V"R" or being able to do everything we want in a virtual world; it's that we won't have any consciousness there, which makes the idea of VR absurd, because we fundamentally don't exist in there, and there will just be software that acts like our mind. Not to mention the bugs that may exist in a virtual world that make it a place we can't "live" in, or some kind of "god" who runs the virtual world and makes it hell for us. (Maybe god is behind the AI or VR conspiracies because he wants to make us suffer in a virtual world he rules. HELL?) "Lorn - Anvil" Conscious living beings being killed by unconscious robots would be the most "evolutionary fail" ever...
YouTube · AI Moral Status · 2017-02-23T19:0…
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       deontological
Policy          none
Emotion         indifference
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id":"ytc_UghpANhjFOd_MXgCoAEC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Uggn77pCFMITQXgCoAEC","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"fear"},
  {"id":"ytc_UgibpbjtEpC3JHgCoAEC","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgiWvB3VJMTCOngCoAEC","responsibility":"unclear","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugh5aTkSvKD2SHgCoAEC","responsibility":"unclear","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgirtS0ioNliBHgCoAEC","responsibility":"unclear","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgjQVxHcT2QFV3gCoAEC","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"fear"},
  {"id":"ytc_UghhewUiQJ-ZJngCoAEC","responsibility":"unclear","reasoning":"unclear","policy":"none","emotion":"unclear"},
  {"id":"ytc_Ugj8p84mG6TBN3gCoAEC","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ughb3LUTreyc1ngCoAEC","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"indifference"}
]
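A raw response like the one above is a JSON array of per-comment codes. As a minimal sketch of how such output might be parsed and validated (the field names come from the records shown here; the `parse_codes` helper and its strictness are illustrative assumptions, not part of the actual pipeline):

```python
import json

# Every record in the raw response is expected to carry these keys
# (inferred from the records shown above, not from a formal schema).
EXPECTED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}


def parse_codes(raw: str) -> dict:
    """Parse a raw LLM response into {comment_id: {dimension: value}}."""
    records = json.loads(raw)
    coded = {}
    for rec in records:
        missing = EXPECTED_KEYS - rec.keys()
        if missing:
            # A malformed record is surfaced rather than silently dropped.
            raise ValueError(f"record {rec.get('id')!r} is missing {missing}")
        coded[rec["id"]] = {k: rec[k] for k in EXPECTED_KEYS - {"id"}}
    return coded


# Example with one record copied from the raw response above.
raw = ('[{"id":"ytc_Ugj8p84mG6TBN3gCoAEC","responsibility":"developer",'
       '"reasoning":"deontological","policy":"regulate","emotion":"fear"}]')
codes = parse_codes(raw)
print(codes["ytc_Ugj8p84mG6TBN3gCoAEC"]["policy"])  # regulate
```

Keying the result by comment id makes it straightforward to join a model's codes back onto the displayed comment, as the "Coding Result" table above does for a single comment.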