Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I dont think what makes humans interesting is their ability to manipulate language and play with models about the world. what makes humans interesting is why they're doing things. its the social context and the desire to use language to build a social identity thats meaningful to others. what does an AI WANT? whats an AI's social context? My thing is I think within all of us is just a beast with no identity, no good an evil because nature has no good and evil. It just does things, and It has an instinct given to it by evolution to use language and build a social identity on top of that other people see as real and meaningful. There is no you, theres just your brains idea of you. to mean something to the people around you guarantees everything you need. Our conscious brain spools words and numbers together like a spider building a web. Since "meaning" isnt real, and the definition of meaning is a construct that depends on the society you live in, people crave power over society so they can build that meaning around themselves. If you're ruled by priests they make being a friend of the almighty the most important thing, if you're ruled by rich assholes, they'll make being a rich asshole the most ideal form. This aint a theory, its just been my experience of how people work.
YouTube · AI Moral Status · 2026-02-14T10:2…
Coding Result
Dimension       Value
Responsibility  unclear
Reasoning       mixed
Policy          unclear
Emotion         mixed
Coded at        2026-04-26T23:09:12.988011
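
Each coded comment is a record over four categorical dimensions plus a coding timestamp. A minimal sketch of that record as a typed structure, using only the label sets that actually appear in the raw response below (the project's codebook may define others), could look like this:

```python
from dataclasses import dataclass
from datetime import datetime

# Label sets observed in the raw LLM response below; the actual codebook may be larger.
RESPONSIBILITY = {"government", "user", "developer", "ai_itself", "none", "unclear"}
REASONING = {"consequentialist", "deontological", "mixed", "unclear"}
POLICY = {"regulate", "liability", "industry_self", "none", "unclear"}
EMOTION = {"approval", "fear", "outrage", "resignation", "indifference", "mixed"}

@dataclass
class CodingResult:
    """One coded comment: four categorical dimensions plus the coding timestamp."""
    comment_id: str
    responsibility: str
    reasoning: str
    policy: str
    emotion: str
    coded_at: datetime

    def __post_init__(self) -> None:
        # Reject labels outside the observed value sets.
        if self.responsibility not in RESPONSIBILITY:
            raise ValueError(f"unknown responsibility label: {self.responsibility}")
        if self.reasoning not in REASONING:
            raise ValueError(f"unknown reasoning label: {self.reasoning}")
        if self.policy not in POLICY:
            raise ValueError(f"unknown policy label: {self.policy}")
        if self.emotion not in EMOTION:
            raise ValueError(f"unknown emotion label: {self.emotion}")
```

Validating labels on construction makes it easy to catch a model response that drifts outside the expected codebook.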
Raw LLM Response
[
  {"id":"ytc_UgwDNvBt1RU1jLzrODd4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugw1of0XxWW4F7u2CCF4AaABAg","responsibility":"user","reasoning":"deontological","policy":"industry_self","emotion":"approval"},
  {"id":"ytc_UgwYFUHR-qCbaVRtRsJ4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgyZLx4xtalhf6Frrad4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"},
  {"id":"ytc_UgylJK7D6NYyjm6_Zj54AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgyNG5bdZbi3Q_NFcrJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugy0wEe9faydo4-wh6R4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgwNzkks9jFu5Hka_1x4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgxP6zaYhHh9fPK_hVd4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgyEk3S4weaUjWW1zMB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}
]
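
The Coding Result table above is presumably produced by parsing a batch response like this one and selecting the row for the comment's ID. A minimal sketch of that lookup, assuming the response is valid JSON; the two rows in the sample string are copied from the response above, and the ID used in the usage line is assumed to be the one behind this page's comment, since its labels match unclear / mixed / unclear / mixed:

```python
import json
from typing import Optional

# Two rows copied from the raw response above, standing in for the full batch output.
RAW_LLM_RESPONSE = '''[
  {"id":"ytc_UgwNzkks9jFu5Hka_1x4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgyEk3S4weaUjWW1zMB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}
]'''

def lookup_coding(raw_response: str, comment_id: str) -> Optional[dict]:
    """Parse a raw batch response and return the coding row for one comment ID, or None."""
    try:
        rows = json.loads(raw_response)
    except json.JSONDecodeError:
        # Model output is not guaranteed to be well-formed JSON.
        return None
    return next((row for row in rows if row.get("id") == comment_id), None)

# Assumed ID of this page's comment: its labels match the Coding Result table above.
row = lookup_coding(RAW_LLM_RESPONSE, "ytc_UgwNzkks9jFu5Hka_1x4AaABAg")
if row is not None:
    print(row["responsibility"], row["reasoning"], row["policy"], row["emotion"])
```

Returning None on malformed JSON, rather than raising, reflects the defensive handling a coding pipeline typically needs when consuming model output.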