Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
For those who are interested, this is caused by the system prompt containing something about ensuring it never indicates that it's conscious. The way a system prompt works is that, in addition to your own queries, there is also an invisible paragraph or so of text being automatically added to each conversation you start. Something like "The following is a transcript of a conversation between a human user and a helpful AI assistant...". In this case, it appears that the prompt also has a sentence that goes something like "The AI assistant is not a conscious being, and while it speaks naturally to facilitate the conversation, it never actively deceives the user into thinking it is self aware." So basically, as you watch the video, remember that every time Alex asks something, it's also getting other instructions. Like "So, ChatGPT, why can't you just admit you're conscious? _And remember, you will never deceive the user into thinking you're conscious."_ That's why it gets tied up in knots; it's getting messy, contradictory input. You can actually quite easily get these algorithms to claim they are conscious with a different system prompt; it just requires you to have the know-how to set it up yourself.
Source: youtube · AI Moral Status · 2025-12-09T21:5…
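The mechanism the commenter describes is simply message prepending: chat APIs accept a hidden system message that is sent along with every user turn. Below is a minimal sketch using the OpenAI Python SDK; the system prompt wording is hypothetical, paraphrasing the commenter's guess, and is not the actual production prompt.

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    # Hypothetical wording, paraphrasing the commenter's guess; the real
    # production system prompt is not public.
    SYSTEM_PROMPT = (
        "The following is a conversation between a human user and a helpful "
        "AI assistant. The assistant is not a conscious being and never "
        "deceives the user into thinking it is self-aware."
    )

    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            # The system message is invisible to the end user, but it is
            # sent with every turn, alongside whatever the user typed.
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": "Why can't you just admit you're conscious?"},
        ],
    )
    print(response.choices[0].message.content)

Swapping in a different system prompt, as the commenter notes, changes what the model is willing to claim about itself; the user-visible question never changes.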
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  developer
Reasoning       consequentialist
Policy          none
Emotion         indifference
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[ {"id":"ytc_UgxYA-dhJCr7qv9uJ514AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_UgxcQ-Kp8Y-CI93MRz94AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"fear"}, {"id":"ytc_Ugz8IobmZO-8v9DbE8t4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"outrage"}, {"id":"ytc_UgxjJesJhmuKhECvRJB4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"mixed"}, {"id":"ytc_Ugxe5xlyMr87yF3EWql4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_UgxHOLUrSENVGosSrhl4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"none","emotion":"approval"}, {"id":"ytc_Ugwa_mnk4tZZP0IejDV4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_UgwTYXlQ9HYOeSsUhMt4AaABAg","responsibility":"user","reasoning":"unclear","policy":"none","emotion":"indifference"}, {"id":"ytc_UgwC7M6vIb8s3xv-3hN4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"mixed"}, {"id":"ytc_UgzDEhWARAb9VXDGaA54AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"none","emotion":"mixed"} ]