Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Just being a devil's advocate... how about we just... don't make them? So we don't have to burden ourselves with the ethical questions? If they had rights, would they fight in wars? What if only them fought in wars? Would it then just be two or more countries playing a strategy game? Or imagine a hacker getting into your robot house maid. I mean they could spy on you and make the robot want to kill you or malfunction in a way that would do so... ever played watch dogs/2? Maybe because we can do something doesn't mean we should. Isn't there a point where it's laziness over connivence as well? Why can't I make my own damn toast or look up my own recipes or stock and keep track of the food in my own fridge? Maybe I'm paranoid and foreseeing a portal or westworld or iRobot type of scenario? Is my argument founded in meaningful opinion to you internet?
Source: YouTube — AI Moral Status — 2017-04-23T02:3…
Coding Result
Dimension       Value
Responsibility  distributed
Reasoning       consequentialist
Policy          ban
Emotion         fear
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id":"ytc_UghZxim60h8djXgCoAEC","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgjWFrifyXFxLngCoAEC","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugg6_68H1uxBc3gCoAEC","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_UggpPoRogRJoJngCoAEC","responsibility":"distributed","reasoning":"contractualist","policy":"liability","emotion":"approval"},
  {"id":"ytc_UgiDTskh2rn2yHgCoAEC","responsibility":"distributed","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgjGaC_PeYYcrXgCoAEC","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugg16N0dkIPH9XgCoAEC","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgizKQfBDOEQFXgCoAEC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_Ugg9hqGfomYCEngCoAEC","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"indifference"},
  {"id":"ytc_UgglqXCxOme6MXgCoAEC","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"}
]
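When auditing many dumps like this one, the raw array can be parsed and sanity-checked programmatically rather than by eye. A minimal sketch, assuming the JSON structure shown above; note the ALLOWED label sets are inferred only from the values visible in this dump (the real codebook may define more categories), and the array here is abbreviated to two of the ten records for illustration:

```python
import json

# Two records copied verbatim from the raw response above (abbreviated).
raw = """[
  {"id":"ytc_UgiDTskh2rn2yHgCoAEC","responsibility":"distributed",
   "reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgjGaC_PeYYcrXgCoAEC","responsibility":"developer",
   "reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]"""

# Hypothetical label sets, inferred from the labels that appear in this dump.
ALLOWED = {
    "responsibility": {"developer", "distributed", "ai_itself", "none"},
    "reasoning": {"consequentialist", "deontological", "contractualist", "unclear"},
    "policy": {"ban", "regulate", "liability", "none"},
    "emotion": {"fear", "outrage", "approval", "indifference"},
}

def valid_ids(records):
    """Return ids of records whose labels all fall within ALLOWED."""
    return [
        rec["id"]
        for rec in records
        if all(rec.get(dim) in labels for dim, labels in ALLOWED.items())
    ]

records = json.loads(raw)
print(valid_ids(records))  # both sample records pass the check
```

A check like this catches the common failure mode where the model invents an off-schema label (e.g. a misspelled category), which would otherwise silently corrupt the coded table.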