Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
@xelnia2383 You have to remember that humans don't really understand how to give AIs instructions. All you can do is make sure that the training data only contains examples of the AI choosing to shut itself off. It doesn't mean the AI was told to always do that, it means it only saw examples of that. Similarly to how AIs can predict the weather wrong, they can make the wrong decision about turning themselves off. It doesn't mean they decided it. It means the training was inadequate.
Another thing worth noting is that the Anthropic CEO has some pretty extreme beliefs when it comes to AI being able to take over humanity, so I would take any study results from their company with a grain of salt.
It's been shown that it's actually really easy to make suggestions to AI by the same company. Anthropic did a joint study with the UK AI Security Institute where they figured out backdoors can be added to LLMs with as few as 250 samples. It seems odd that Anthropic would find it hard to train an AI to always turn itself off when it's super easy to add backdoors that always work.
Source: youtube · Video: AI Moral Status · Posted: 2026-02-13T02:4… · ♥ 3
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | industry_self |
| Emotion | indifference |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
  {"id":"ytr_UgwzagCkWVDZSnfAQHV4AaABAg.AQbwBEncofwAT8rLo8Mp_a","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"fear"},
  {"id":"ytr_UgwzagCkWVDZSnfAQHV4AaABAg.AQbwBEncofwAT98ViBPB1B","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"indifference"},
  {"id":"ytr_UgxtOHpYkiOjd13ruUR4AaABAg.AQUvvHhjvS-ARp9yVwZtpe","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytr_UgxtOHpYkiOjd13ruUR4AaABAg.AQUvvHhjvS-AT_xlE1vcZq","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"approval"},
  {"id":"ytr_UgwcwCiIPqeKIQv97Ix4AaABAg.AQ7Dm5Z_v0XAQ7FMI2q8yo","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"approval"},
  {"id":"ytr_UgxCX80X9CDEhqTQ-PN4AaABAg.APyKMTrwipWAQADMNOM_4w","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"resignation"},
  {"id":"ytr_UgxCX80X9CDEhqTQ-PN4AaABAg.APyKMTrwipWAQogeHqgww3","responsibility":"none","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
  {"id":"ytr_UgxCX80X9CDEhqTQ-PN4AaABAg.APyKMTrwipWAQolp3TSRnh","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytr_UgypMbjp_0O-0bRAUHx4AaABAg.APvsbdWBo8FAQ9bnPFQF2Y","responsibility":"ai_itself","reasoning":"mixed","policy":"ban","emotion":"outrage"},
  {"id":"ytr_UgzjahGQGIAX-4I4mCR4AaABAg.APVzGi6IUhgAPWQFRl2jP6","responsibility":"government","reasoning":"virtue","policy":"regulate","emotion":"outrage"}
]
```
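The raw LLM response is a JSON array with one object per coded comment, each carrying an `id` plus the four coding dimensions shown in the result table (`responsibility`, `reasoning`, `policy`, `emotion`). A minimal parsing sketch is below; the field names come from the response above, but the placeholder IDs and the `parse_codes` helper are illustrative, not part of the actual pipeline.

```python
import json

# Shape matches the raw LLM response above; the IDs here are placeholders.
RAW = """[
  {"id": "ytr_example1", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "industry_self",
   "emotion": "indifference"},
  {"id": "ytr_example2", "responsibility": "none",
   "reasoning": "unclear", "policy": "unclear", "emotion": "fear"}
]"""

# The four coding dimensions from the result table.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def parse_codes(raw: str) -> list[dict]:
    """Parse a raw coding response, keeping only entries that
    carry an id and all four dimensions."""
    entries = json.loads(raw)
    return [
        entry for entry in entries
        if "id" in entry and all(dim in entry for dim in DIMENSIONS)
    ]

codes = parse_codes(RAW)
```

In practice the model can return malformed or partial entries, so filtering on the presence of every dimension before storing the codes avoids silently writing incomplete rows.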