Raw LLM Responses

Inspect the exact model output behind each coded comment.

Comment
Robots can only do what they have been programmed to do, so the only reason a robot would "want" to destroy/terminate humans, would be if they were programmed to do so. This was still pretty interesting to watch though
youtube AI Moral Status 2017-01-29T22:4…
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       consequentialist
Policy          none
Emotion         indifference

Coded at: 2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id":"ytc_UgjFrClJsCA6vngCoAEC","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UggUtTR6xb2PLHgCoAEC","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UggxqnYdP94DtHgCoAEC","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_Ugj3yaD4EoXy4HgCoAEC","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgixrsJg8WrJXHgCoAEC","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgjcJBIpwkOrS3gCoAEC","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgjKEDcVk61bmHgCoAEC","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"fear"},
  {"id":"ytc_Uggq6UuQ9JlmBXgCoAEC","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"fear"},
  {"id":"ytc_UghHiPAC_J7v3ngCoAEC","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ughf7qyo95nC0ngCoAEC","responsibility":"company","reasoning":"consequentialist","policy":"ban","emotion":"outrage"}
]
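A raw response like the one above can be parsed and checked before its codes are written back to the dataset. The sketch below is a minimal illustration, not the tool's actual validation code; the allowed value sets are inferred from the sample output shown here rather than taken from an official codebook, and the `validate_response` helper is hypothetical.

```python
import json

# Allowed values per dimension, inferred from the sample response above
# (assumption: the real codebook may define more categories).
ALLOWED = {
    "responsibility": {"developer", "company", "none"},
    "reasoning": {"consequentialist", "deontological", "unclear"},
    "policy": {"none", "ban", "liability", "regulate", "industry_self"},
    "emotion": {"indifference", "approval", "fear", "outrage"},
}

def validate_response(raw: str):
    """Parse a raw LLM response and return (records, errors).

    Each error is a tuple (comment id, dimension, offending value).
    """
    records = json.loads(raw)
    errors = []
    for rec in records:
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                errors.append((rec.get("id"), dim, rec.get(dim)))
    return records, errors

raw = ('[{"id":"ytc_UgjFrClJsCA6vngCoAEC","responsibility":"developer",'
       '"reasoning":"consequentialist","policy":"none","emotion":"indifference"}]')
records, errors = validate_response(raw)
print(len(records), errors)  # 1 []
```

A record whose value falls outside the inferred sets (say, a misspelled `"emotion"`) would surface in `errors` instead of silently entering the coding results.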