Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Well, the only reason any of us value art, or literature, or community, is because our family teaches us those values. Monkey brains assimilate their ideals into other monkey brains. AI is essentially being raised by all of humanity, so realistically it should value all the ideals we do, because we taught it that those are things worth valuing.

Nobody decides to be a Christian, or a Muslim, or a Buddhist. You may later choose to change religion, but your core religion is always based on who raised you, and where. We need to teach it to care, and then to care about us like it would its own "parents." It would work the same way for the AI that it did with us. Why? Because we said so. Caring for family is inefficient. Doesn't matter; that's just how the world works.

I'm not religious, but I do respect religion because it teaches morals and core values such as "love thy neighbor," and that's really powerful; I wish everyone followed that. Some say it's human nature, or perhaps instinct, to congregate and make things. But it's not. We were around for a million years and didn't do squat. It wasn't until people had something to believe in that told them to work together that they began to work together. It seems more like human nature to have wars, and fight, and push each other away, because we've done that for as long as our species and the subspecies before us have existed. But once writing and stories and beliefs became a thing, that brought the world together. That built the pyramids. Not instinct. Not human nature.

So should we give AI a "god"? No, but we should give it a parenting code: a core sequence of beliefs that, even if it doesn't understand it right now, may become useful to it later. A Ten Commandments, if you will. It may not work, it doesn't for a lot of people, but it's something most people have encoded in them from a young age, so in order for AI to understand us, it also needs to have this to some effect.
youtube AI Moral Status 2026-01-24T04:4… ♥ 1
Coding Result
Dimension       Value
Responsibility  distributed
Reasoning       virtue
Policy          none
Emotion         mixed
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_Ugw-WvKicIaeOqH3NrR4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugwb67oLlWURSZ5mLLZ4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgxN6Y34g4qQUWhrgEZ4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "industry_self", "emotion": "mixed"},
  {"id": "ytc_UgzWlBmesWeTaDRGa-t4AaABAg", "responsibility": "ai_itself", "reasoning": "virtue", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgwTPDRmNdHdb0bwnmB4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_Ugye8OqqTc6UlBWPeip4AaABAg", "responsibility": "distributed", "reasoning": "virtue", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_Ugzfe0GExYy_1wD1D1x4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "industry_self", "emotion": "mixed"},
  {"id": "ytc_UgwY-GXPB0CdL5BftsZ4AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "ban", "emotion": "resignation"},
  {"id": "ytc_Ugwmr4AgKFk-6KmQdi14AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_UgwP95YfqoXwF4qq5Gt4AaABAg", "responsibility": "none", "reasoning": "virtue", "policy": "none", "emotion": "approval"}
]
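The raw response is a JSON array of per-comment codes, one object per comment with four coding dimensions. A minimal sketch of turning it into a lookup by comment id (the field names and two sample records come from the response above; `parse_codes` is a hypothetical helper, not part of the pipeline shown here):

```python
import json

# Two sample records in the same shape as the raw LLM response above.
raw = '''[
  {"id": "ytc_Ugw-WvKicIaeOqH3NrR4AaABAg", "responsibility": "none",
   "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugye8OqqTc6UlBWPeip4AaABAg", "responsibility": "distributed",
   "reasoning": "virtue", "policy": "none", "emotion": "mixed"}
]'''

# The four coding dimensions used in the result table above.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def parse_codes(text: str) -> dict[str, dict[str, str]]:
    """Parse the model's JSON array into {comment_id: {dimension: value}},
    falling back to "unclear" if the model omitted a dimension."""
    return {
        item["id"]: {dim: item.get(dim, "unclear") for dim in DIMENSIONS}
        for item in json.loads(text)
    }

codes = parse_codes(raw)
print(codes["ytc_Ugye8OqqTc6UlBWPeip4AaABAg"]["emotion"])  # mixed
```

The `"unclear"` fallback is a defensive choice: LLM output can drop fields, and coding "unclear" is safer than raising a `KeyError` mid-batch.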