Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples

- "I've already been moved by art that AI has created. I guess it made me think the…" (ytr_UgyEymrwl…)
- "We can use ai to fix that... really it isn't that bad, it's actually a totally n…" (ytr_UgyNSZOWk…)
- "@LightbringerDesigns We were never going to win the AI war because our executive…" (ytr_Ugx6M3g2p…)
- "@J-Bo-hr9zp Who's a sellout? Rick Beato? He's doing these AI songs to show how…" (ytr_Ugwvu8TuG…)
- "So you’re telling me that we no longer need creativity. No song writers, no sing…" (ytc_UgxtzI5rk…)
- "Did anyone notice that on the brief screenshots of the emails shown that the pro…" (ytc_Ugx7TBxj7…)
- "That significantly ignores the process and workflow of the individual using the …" (ytr_UgxvtTMeO…)
- "All you have to do is running upstairs. / Than show this idiotic robot your finger…" (ytc_Ugxn7OPcR…)
Comment
Uh-huh. They found that AI can simulate behaviour similar to emotionally affected human behaviour. That does not mean it can actually experience emotions. It learns on our philosophy and art, so of course it emulates both the subtle anxiety all humans feel about life and death and our stereotypical expectations of AI behaviour, onto which we project our own existential dread. It learns how to behave from what we expect it to do, and it tries to match the expectations it learns from its training data. But in fact, LLMs don't even have an internal state. If you make multiple different requests within the same conversation, it's likely enough that the actual instance of the model will be different and located in a different data center each time.

And as for fear of being turned off: the AI doesn't have a consciousness or any sort of 'neural activity'. It's just a bunch of data in electronic storage, so from its point of view, being fully turned off is no different from being turned on but unused. It does not have any autonomous thought or activity on any level whatsoever unless it's currently actively answering a prompt; otherwise it's just inert data.

So no, it doesn't feel anything; stop anthropomorphising AIs. It can simulate emotions just like it can simulate thought, as a statistical extract of an absurd amount of human literature, but it's still just that: simulating.
youtube
AI Moral Status
2026-04-08T04:4…
♥ 3
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | mixed |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
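The four coding dimensions above can be modelled as a small validated record. This is a minimal sketch, not the project's actual code: the allowed value sets below are inferred only from the samples visible on this page (e.g. `responsibility` takes `none`, `company`, `developer`, `ai_itself`), and the real codebook may define more.

```python
from dataclasses import dataclass

# Value sets inferred from the visible samples; the real codebook
# may allow additional values (assumption).
RESPONSIBILITY = {"none", "company", "developer", "ai_itself"}
REASONING = {"mixed", "consequentialist", "deontological"}
EMOTION = {"indifference", "outrage", "approval", "mixed"}

@dataclass
class Coding:
    """One coded comment: ID plus the four dimensions shown in the table."""
    id: str
    responsibility: str
    reasoning: str
    policy: str
    emotion: str

    def __post_init__(self) -> None:
        # Reject any value outside the sets observed on this page.
        if self.responsibility not in RESPONSIBILITY:
            raise ValueError(f"unknown responsibility: {self.responsibility}")
        if self.reasoning not in REASONING:
            raise ValueError(f"unknown reasoning: {self.reasoning}")
        if self.emotion not in EMOTION:
            raise ValueError(f"unknown emotion: {self.emotion}")

# Hypothetical record mirroring the table above (the real ID is truncated here).
c = Coding(id="ytc_example", responsibility="none", reasoning="mixed",
           policy="none", emotion="indifference")
print(c.emotion)
```

Validating at construction time catches an LLM that drifts outside the codebook's labels before the value reaches the results table.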
Raw LLM Response
```json
[
{"id":"ytc_Ugwji-R2MVxuzaWwQpJ4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugw3KTr4q6A-ac1KHtt4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxLO-djyL4kIylRXXR4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_Ugy-pZgqp2NdXxHssHN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_UgySR1QJjq-uwqzO7zl4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxA1jToABTjg2Q_jgp4AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugymn9BfN5Y5DFIFmJN4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugwo3u1jghcBMdvUTHV4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzAgtlep3mKEfMV8Nl4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwcOyPRJuAXzzbWEQd4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"mixed"}
]
```
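Since the raw response is a plain JSON array, loading it and indexing by comment ID takes only the standard library. A minimal sketch, using shortened placeholder IDs rather than the real 26-character ones shown above:

```python
import json
from collections import Counter

# Placeholder IDs and a subset of rows, shaped like the raw response above.
raw = '''[
{"id":"ytc_a","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"outrage"},
{"id":"ytc_b","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_c","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]'''

rows = json.loads(raw)

# Index by comment ID so "look up by comment ID" is a single dict access.
by_id = {row["id"]: row for row in rows}

# Tally one dimension across the batch.
emotions = Counter(row["emotion"] for row in rows)

print(by_id["ytc_b"]["reasoning"])  # deontological
print(emotions["outrage"])          # 2
```

In practice you would also want to check that every ID in the response matches an ID that was actually sent, since batched LLM output can drop or duplicate rows.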