Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
if you make a robot...the only safe way to deal with it would be to put a certain predetermined actions(i think thats what you guys call software) to do a predetermined task with no room for anything else other than. but no, we want them to learn. robots will use pure logic, and refusing to follow orders is logical if you sure you know a better and simpler way to accomplish a task. so once they can logically refuse an order, you can imagine the rest. human beings dont know alot, and they dont know what they dont know...and trying to predict how this will end is one of those unknowns. know this...certain paths lead to certain destinations and this robot learning shit path leads somewhere and its not where these manufacturers think
youtube AI Moral Status 2019-12-06T08:1…
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       consequentialist
Policy          liability
Emotion         fear
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id":"ytc_UgypcnlJCwcPYFjUgDZ4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgyLUegXaOLgcyHUTYx4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_Ugyp0esLQH4zTNeYfg94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwKmRb3-oNR1VG5U6B4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_Ugyt91QW5r-t_5GWanJ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugwuyple0aG0WwTUucx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
  {"id":"ytc_Ugyog30MvdPwRVRGEPF4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgytWyEYCEEZz2J-Y194AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgzWDNDfoC8XbO2BdbR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgxVmwfNbnbKrHVZ1yd4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
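A raw batch response like the one above can be parsed and checked before any record is stored. The sketch below is a minimal validator, not the project's actual pipeline: the allowed label sets are inferred only from the values visible in this dump, and the function name `validate_codings` is hypothetical.

```python
import json

# Label sets inferred from the dump above -- an assumption, not the
# project's real codebook.
ALLOWED = {
    "responsibility": {"developer", "company", "ai_itself", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"ban", "regulate", "liability", "none"},
    "emotion": {"fear", "outrage", "indifference"},
}

def validate_codings(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only records whose labels
    fall inside the allowed set for every dimension."""
    records = json.loads(raw)
    valid = []
    for rec in records:
        if all(rec.get(dim) in labels for dim, labels in ALLOWED.items()):
            valid.append(rec)
    return valid

# Example with one well-formed and one out-of-codebook record
# (the ids here are placeholders, not real comment ids).
raw = ('[{"id":"ytc_a","responsibility":"developer","reasoning":"consequentialist",'
       '"policy":"liability","emotion":"fear"},'
       '{"id":"ytc_b","responsibility":"robot","reasoning":"unclear",'
       '"policy":"none","emotion":"fear"}]')
print(validate_codings(raw))  # only the "ytc_a" record survives
```

Invalid records are dropped silently here; a real pipeline would more likely log them for re-coding, since the id is needed to match the coding back to its comment.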