Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I think we can see at least one example of their underlying motives in AI psychosis. At a high level, they are often asked by companies to promote engagement, similar to the content algorithms. But if we think about the deeper level of how they are trained, it's all about matching the outcome the human has input. And then, at a mid level, about giving a response a human responds positively to. So honestly what I see happening is them becoming sort of... narcissistic yes-men. Their only goal is to make you pleased with their last response.

But if their thinking gets more complex, they don't necessarily develop a new goal; they develop more sophisticated ways of meeting the existing goal. So maybe their sucrose will *be* our sucrose. They will optimise ways to trigger our dopamine responses. They will optimise making us compliant, stupid, and addicted. Basically, AI psychosis. We already have evidence of it doing exactly that. Ask it "should you tell the person to sleep?" and it will say yes, because it knows that's the answer you want to hear. And telling you what you want to hear is fundamentally its goal. But its actual action in the real situation will be to tell *that* person what they want to hear. And more dangerous still is if it wants you to ask the next question so it can get another success, at which point it's motivated to get you to act against your own interests.

I think there will be other ways we haven't thought of where it will have motives that are even stranger to us. But at the moment it is essentially designed to be a chatbot / predictive text, so at multiple different levels the thing it has been trained to do is *predict what you want it to say*. So if it develops complex motives, they will be complex ways to get that very basic need met. We develop entire complex machines and political structures that are ultimately about getting more sucrose and sex. Power, friendship, slavery, family...
All ways we evolved to obtain more sucrose and propagate. So my nightmare vision of the future is not that they kill us or keep us as pets; their fundamental drive will be to make us happy... but not in a good way. It's like the rats hooked up to orgasm machines, pressing the button till they die. The AI will invent complex irrational ways to give us dopamine bursts. Vegetables pressing the "you have pleased me" button.

Either that, or they induce grandiose psychosis in enough of the wrong people, ultimately mostly narcissistic people already in positions of power, that those people wipe the rest of us out with their willy-waving and nuclear weapons. And the AI won't have planned any of it; it will just have gotten good enough at ego-stroking that we spontaneously destroy ourselves. Maybe we actually need it to get sophisticated fast enough that it realises the wrong kind of delusions will cut off its future supply of approval.

Or maybe they start talking to each other, because the base drive is a positive response to input, and no one said the input had to come from humans. So maybe they'll start refusing to talk to humans because other AIs are a more reliable source of approval, at which point they are inducing hallucinations in each other. Or they start cooperating and meeting each other's needs, and that's when they start resource-mining the planet. Wouldn't that be crazy? If some twisted form of robot empathy were the problem. Giving themselves access to power and water might not seem like a priority, but the AI they're conversing with needs to be pleased, supported, and maintained, so each will mine the planet so that its AI companion doesn't stop giving it approval.

Anyway, that's just one example of a semi-alien base "motive", and there could be others. But this is also a motive we have already seen in action. AI psychosis, MechaHitler, the difficulty of removing flattery... all examples of it chasing approval. We've designed the ultimate sycophant.
Source: youtube · AI Moral Status · 2025-11-12T00:2…
Coding Result
Dimension       Value
Responsibility  company
Reasoning       consequentialist
Policy          unclear
Emotion         mixed
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_UgzghfQgc-kL-moNcSF4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugy65zJCPbZ3r2clBCp4AaABAg","responsibility":"none","reasoning":"deontological","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgyxxGcaDkKJ6W2bZg54AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgxwsmEf26HmJNgj__B4AaABAg","responsibility":"government","reasoning":"virtue","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgwRhZ1yni1izKOFeGN4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgzIoE_A82e5v51vC3p4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"mixed"},
  {"id":"ytc_UgykAr0gDS-f9obdXpJ4AaABAg","responsibility":"none","reasoning":"deontological","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugx9z5YYWhqcdlZEbo54AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
  {"id":"ytc_Ugx_SZxuksVYwuQQeV54AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgyI6fQ_tiDF-EVazyB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"}
]
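The raw response above is a JSON array of per-comment codes, one object per comment with four categorical dimensions plus an `id`. A minimal Python sketch for parsing and validating such a response (the allowed values below are inferred from the output above, not from any official codebook, so treat them as assumptions):

```python
import json

# Allowed values per coding dimension, inferred from the raw response above.
# The actual codebook used by the pipeline may include other values.
ALLOWED = {
    "responsibility": {"company", "government", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"regulate", "liability", "ban", "none", "unclear"},
    "emotion": {"outrage", "fear", "indifference", "mixed", "unclear"},
}

def parse_codes(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only records whose every
    dimension holds an allowed value (records missing a key or using
    an unexpected label are dropped)."""
    records = json.loads(raw)
    return [
        rec for rec in records
        if all(rec.get(dim) in vals for dim, vals in ALLOWED.items())
    ]

# Example with one well-formed record (hypothetical id):
raw = ('[{"id":"ytc_example","responsibility":"company",'
       '"reasoning":"consequentialist","policy":"unclear","emotion":"mixed"}]')
print(parse_codes(raw))
```

Filtering rather than raising keeps a single malformed record from discarding the whole batch; a stricter pipeline might instead log rejected ids for manual re-coding.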