Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Uh-huh. They found that AI can simulate behaviour similar to emotionally affected human behaviour. That does not mean it can actually experience emotions. It learns on our philosophy and art, so of course it emulates both the subtle anxiety all humans share concerning life and death and our stereotypical expectations of AI behaviour, onto which we project ourselves and our own existential dread. It learns how to behave from what we expect it to do, and it tries to match the expectations it learns from its training data. But in fact, LLMs don't even have an internal state. If you make multiple different requests within the same conversation, it's likely enough that the actual instance of the model will be different and located in a different data center each time. And as for fear of being turned off: the AI doesn't have a consciousness or any sort of 'neural activity'. It's just a bunch of data in electronic storage, so from its point of view, being fully turned off is no different from being turned on but unused. It does not have any autonomous thought or activity on any level whatsoever unless it's currently actively answering a prompt; otherwise it's just inert data. So no, it doesn't feel anything; stop anthropomorphising AIs. It can simulate emotions just like it can simulate thought, as a statistical extract of an absurd amount of human literature, but it's still just that: simulating.
Source: YouTube · Video: AI Moral Status · Posted: 2026-04-08T04:4… · Likes: 3
Coding Result
| Dimension      | Value                      |
|----------------|----------------------------|
| Responsibility | none                       |
| Reasoning      | mixed                      |
| Policy         | none                       |
| Emotion        | indifference               |
| Coded at       | 2026-04-26T23:09:12.988011 |
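
For downstream processing, each coding result can be held in a small record type. Below is a minimal sketch in Python; only the dimension names and the ISO-format "Coded at" timestamp come from the table above, while the class name, the `from_row` helper, and the example value sets in the comments are assumptions for illustration.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class CodingResult:
    """One coded comment, mirroring the Dimension/Value table above."""
    responsibility: str   # e.g. "none", "company", "developer", "ai_itself"
    reasoning: str        # e.g. "mixed", "consequentialist", "deontological"
    policy: str           # e.g. "none"
    emotion: str          # e.g. "indifference", "outrage", "approval", "mixed"
    coded_at: datetime    # parsed from the ISO-8601 "Coded at" value

    @classmethod
    def from_row(cls, row: dict) -> "CodingResult":
        # `row` is a hypothetical dict keyed by the table's dimension names.
        return cls(
            responsibility=row["responsibility"],
            reasoning=row["reasoning"],
            policy=row["policy"],
            emotion=row["emotion"],
            coded_at=datetime.fromisoformat(row["coded_at"]),
        )
```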
Raw LLM Response
[ {"id":"ytc_Ugwji-R2MVxuzaWwQpJ4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"outrage"}, {"id":"ytc_Ugw3KTr4q6A-ac1KHtt4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_UgxLO-djyL4kIylRXXR4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"}, {"id":"ytc_Ugy-pZgqp2NdXxHssHN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"}, {"id":"ytc_UgySR1QJjq-uwqzO7zl4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_UgxA1jToABTjg2Q_jgp4AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"outrage"}, {"id":"ytc_Ugymn9BfN5Y5DFIFmJN4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"}, {"id":"ytc_Ugwo3u1jghcBMdvUTHV4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"mixed"}, {"id":"ytc_UgzAgtlep3mKEfMV8Nl4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_UgwcOyPRJuAXzzbWEQd4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"mixed"} ]