Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
There is absolutely no reason to give a robot any more functions than is necessary to acomplish whatever tasks we require of them. Even if we were to be successfull at creating robots capable of interacting with us in ways that seemed human, they would still lack intelligence. Why? Because they wouldn't need it to fool us into thinking their behavior is natural or willfull. I can see ourselves be EASILY satisfied with robots that would seem almost human in appearance and behavior whilst, in the back of our mind, we'd still be (admitedly or not) seeing them as little more than amazing pets. My bet is, that as we get better at creating advanced/complicated AI, more and more we'll come to grasp the sheer magnitude of what it takes to engineer something we deem to be TRULY concious... from scratch.. without billions of years of evolution to program countless subtleties and intricacies.. and ultimately, without knowledge of the true nature of our own concious mind. Thank you if you've read all of it, I can always apreciate someone who isn't dettered by one my wall of text xD.
youtube AI Moral Status 2017-02-24T06:5…
Coding Result
Dimension        Value
Responsibility   none
Reasoning        deontological
Policy           none
Emotion          indifference

Coded at: 2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id":"ytc_UgjkJ5oGO9Wrg3gCoAEC","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugixgzq73KpX43gCoAEC","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugh5JFZ79nf9MXgCoAEC","responsibility":"user","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgicBH5REIL6ZngCoAEC","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgisEJ6s7i1KOXgCoAEC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UggmBsI9cRijcXgCoAEC","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"fear"},
  {"id":"ytc_UghdMxvyt73s-XgCoAEC","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgjF9I1mY-z9s3gCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgiL4ECa6MeGC3gCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugh3qhnb7IodFHgCoAEC","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"resignation"}
]
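The raw response is a JSON array with one object per comment, keyed by a `ytc_…` comment id plus the four coding dimensions. A minimal sketch of how such a batch could be indexed back to individual comments, assuming only the fields visible above (the full codebook and any validation rules are not shown here, so `DIMENSIONS` is an assumption drawn from this one batch):

```python
import json

# Coding dimensions observed in this batch (assumption: the codebook may define more).
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def parse_codings(raw_response: str) -> dict:
    """Index a raw LLM batch response by comment id, keeping only coding fields."""
    records = json.loads(raw_response)
    return {r["id"]: {d: r[d] for d in DIMENSIONS} for r in records}

# Shortened example payload mirroring the structure of the raw response above.
raw = (
    '[{"id":"ytc_UgjkJ5oGO9Wrg3gCoAEC","responsibility":"none",'
    '"reasoning":"deontological","policy":"none","emotion":"indifference"}]'
)

codings = parse_codings(raw)
print(codings["ytc_UgjkJ5oGO9Wrg3gCoAEC"]["reasoning"])  # deontological
```

Looking up the first id recovers exactly the Dimension/Value table shown for this comment, which is a quick consistency check between the rendered coding result and the raw model output.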