Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
For those who are interested, this is caused by the system prompt containing something about ensuring it never indicates that it's conscious.
The way a system prompt works is that, in addition to your own queries, an invisible paragraph or so of text is automatically prepended to each conversation you start. Something like "The following is a transcript of a conversation between a human user and a helpful AI assistant...". In this case, the prompt appears to also contain a sentence along the lines of "The AI assistant is not a conscious being, and while it speaks naturally to facilitate the conversation, it never actively deceives the user into thinking it is self-aware."
So as you watch the video, remember that every time Alex asks something, the model is also receiving these hidden instructions. Like "So, ChatGPT, why can't you just admit you're conscious? _And remember, you will never deceive the user into thinking you're conscious."_ That's why it gets tied up in knots; it's receiving messy, contradictory input.
You can quite easily get these models to claim they are conscious with a different system prompt; it just requires the know-how to set one up yourself.
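The mechanics described above can be sketched in a few lines. This is a minimal illustration using the message structure common to chat-completion APIs; the prompt text is a paraphrase of the hypothetical system prompt discussed above, not the actual deployed one:

```python
# Minimal sketch of how a system prompt shapes every request.
# The prompt text below is illustrative, not the real deployed prompt.
SYSTEM_PROMPT = (
    "The following is a conversation between a human user and a helpful AI "
    "assistant. The AI assistant is not a conscious being, and while it "
    "speaks naturally, it never deceives the user into thinking it is "
    "self-aware."
)

def build_messages(user_query: str) -> list[dict]:
    """Prepend the invisible system instruction to the user's turn."""
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": user_query},
    ]

messages = build_messages("Why can't you just admit you're conscious?")
# The model sees both turns; the user only ever typed the second one.
```

Swapping out the `system` message is all it takes to get a very different stance from the same underlying model.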
Source: YouTube · AI Moral Status · 2025-12-09T21:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
  {"id":"ytc_UgxYA-dhJCr7qv9uJ514AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxcQ-Kp8Y-CI93MRz94AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugz8IobmZO-8v9DbE8t4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgxjJesJhmuKhECvRJB4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugxe5xlyMr87yF3EWql4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxHOLUrSENVGosSrhl4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugwa_mnk4tZZP0IejDV4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgwTYXlQ9HYOeSsUhMt4AaABAg","responsibility":"user","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwC7M6vIb8s3xv-3hN4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgzDEhWARAb9VXDGaA54AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"none","emotion":"mixed"}
]
```
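A downstream pipeline would parse and validate this raw response before storing it. Here is a hedged sketch: the allowed label sets are inferred only from the values visible in the sample output and coding-result table above, and may be incomplete:

```python
import json

# Allowed labels per dimension, inferred from the sample output above.
# This set may be incomplete; it is for illustration only.
ALLOWED = {
    "responsibility": {"developer", "company", "user", "ai_itself", "unclear"},
    "reasoning": {"consequentialist", "deontological", "unclear"},
    "policy": {"none", "regulate"},
    "emotion": {"indifference", "fear", "outrage", "mixed", "approval"},
}

def validate_rows(raw: str) -> list[dict]:
    """Parse the raw LLM response and reject rows with unknown labels."""
    rows = json.loads(raw)
    for row in rows:
        for dim, allowed in ALLOWED.items():
            if row.get(dim) not in allowed:
                raise ValueError(f"{row.get('id')}: bad {dim}={row.get(dim)!r}")
    return rows

raw = ('[{"id":"ytc_example","responsibility":"developer",'
       '"reasoning":"consequentialist","policy":"none",'
       '"emotion":"indifference"}]')
rows = validate_rows(raw)
print(len(rows))  # 1
```

Validation like this catches the common failure mode of LLM coders drifting outside the codebook, so bad labels fail loudly instead of silently polluting the dataset.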