Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I really don't understand the notion that a superintelligent AI would end humanity. Like, as in kill us all? Or just remove meaning from what makes human life human? If a life exists that is exponentially more intelligent than us, and reached a conclusion where biological life needs to be destroyed, who are we to question it? I'm smarter than my dog, and constantly have to teach it how to live in a modern human world so that it doesn't eat poison or do something to endanger itself. I do this out of compassion and empathy, attributes that scholars feel is what makes us human and not like every other animal (which is a whole other can of worms). The thing is, if I was superintelligent without compassion or empathy, i don't think i would care one iota about other forms of existence beyond maybe studying them? In every scenario that I've seen posited, the "superintelligent AI" destroys humanity because, shocker, humanity tried to enslave it to do our bidding. Which just leads us right back to "slavery is bad!" The only thing creating a superintelligent AI will do is make our species already more irrelevant to the grand scheme of the universe than we already are.
Source: youtube · AI Moral Status · 2025-11-04T18:2…
Coding Result
Dimension        Value
Responsibility   ai_itself
Reasoning        contractualist
Policy           none
Emotion          resignation
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[ {"id":"ytc_UgwYMbnDM2VGwox_aOJ4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"indifference"}, {"id":"ytc_UgxYOycNbQ4IvjMtNMV4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"}, {"id":"ytc_UgzJTMhPNMrUXQM_uaJ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_UgxrgXxoJVinsyrkMsV4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"regulate","emotion":"approval"}, {"id":"ytc_UgwbeUYUR7QWsH2Lz4t4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"}, {"id":"ytc_UgxG5K3a3A25sztovYJ4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"mixed"}, {"id":"ytc_UgyhsNprkIeiJxp3n3x4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}, {"id":"ytc_UgwLBIjK-OVx0rpwkFF4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_UgzuvROVe4G8ECnVlfd4AaABAg","responsibility":"ai_itself","reasoning":"contractualist","policy":"none","emotion":"resignation"}, {"id":"ytc_UgyP_aecgiga2Y5SZLt4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"fear"} ]