Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
That “scary ChatGPT” conversation is junk because it’s basically a magic trick built out of bad prompting. The rules force one-word answers to vague questions, so the model can’t ask clarifying questions or define terms. Words like “watched,” “they,” “plan,” “control,” “darkness,” “steps,” and “antichrist” are undefined, but the script keeps escalating anyway. In that situation the model will often guess what you “want” and try to stay consistent with the tone. That’s not secret knowledge, it’s just pattern completion under constraints. If you force any system to answer “yes/no” to ambiguous questions, you can make it sound like it’s confirming almost anything.

The “Apple” rule makes it even more misleading because it turns refusals and uncertainty into a spooky code word. Sometimes the model is refusing, sometimes it’s confused, sometimes it’s trying to follow conflicting instructions, and “Apple” becomes a catch-all that viewers interpret as “it’s hiding the truth.” It’s not.

Also, the “2032 showed up weird on screen” thing is not evidence of censorship or hidden intent. UI glitches, rendering differences, edits, and cuts can all cause that. And the whole “Revelation 13:18” moment is classic cold-reading: the prompt basically drags the model toward religious symbolism, so it “predictably” lands on the most famous verse about a number. That’s steering, not revelation.

The second half of the video then blends that scripted scare chat with real concerns about AI (attention algorithms, opaque models, incentives, speed) to make the first half feel validated. Those are separate issues. Yes, AI and social platforms can shape behavior because they optimize for engagement, and yes, we should worry about incentives, transparency, and safety. But that does not mean ChatGPT is confessing to a satanic plan, a 7-step agenda, or Neuralink as the mark of the beast.

The “viral chat” is a demonstration of how easy it is to prompt a model into sounding ominous when you remove context and force short answers, not proof of anything hidden.
Source: youtube · “AI Moral Status” · 2026-01-09T09:1… · ♥ 1
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  user
Reasoning       consequentialist
Policy          none
Emotion         indifference
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id": "ytc_Ugy_zy-2CXOWcwP334B4AaABAg", "responsibility": "ai_itself", "reasoning": "mixed", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgzoxcTkWGLXsjcYgOx4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugx6ZwtrMJ492SYaxC94AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgzNZfxHJxR8PUVz0-Z4AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgyFOxKCkbUBgEnQGN94AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgzfhZyUPGaPD3r6B0t4AaABAg", "responsibility": "distributed", "reasoning": "mixed", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgyCjbjnmPd7lAmYt_Z4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgzXnWNCyHNTO8ltm994AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgweLfnbh38rQdiSpUR4AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgzlkJsi-8kdthYMpqB4AaABAg", "responsibility": "distributed", "reasoning": "mixed", "policy": "liability", "emotion": "fear"}
]
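A minimal sketch of how a raw response like the one above could be parsed back into per-comment codes. The field names (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) come from the JSON itself; the `VALID` value sets are assumptions inferred from the labels visible above, and `parse_codes` is a hypothetical helper, not part of any actual pipeline:

```python
import json

# A two-row excerpt of the raw LLM response shown above.
raw = '''[
  {"id": "ytc_Ugy_zy-2CXOWcwP334B4AaABAg", "responsibility": "ai_itself",
   "reasoning": "mixed", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_Ugx6ZwtrMJ492SYaxC94AaABAg", "responsibility": "user",
   "reasoning": "deontological", "policy": "none", "emotion": "outrage"}
]'''

# Allowed values per dimension -- assumed from the labels seen in this export.
VALID = {
    "responsibility": {"user", "company", "ai_itself", "distributed", "none"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"none", "regulate", "liability"},
    "emotion": {"indifference", "outrage", "fear", "approval", "mixed"},
}

def parse_codes(text):
    """Parse the model's JSON array into {comment_id: codes}, skipping
    rows with a missing id or an out-of-vocabulary value."""
    out = {}
    for row in json.loads(text):
        cid = row.get("id")
        codes = {dim: row.get(dim) for dim in VALID}
        if cid and all(codes[d] in VALID[d] for d in VALID):
            out[cid] = codes
    return out

codes = parse_codes(raw)
print(codes["ytc_Ugx6ZwtrMJ492SYaxC94AaABAg"]["emotion"])  # outrage
```

Validating each value against a closed vocabulary is one cheap way to catch the model drifting off-schema before the codes are aggregated.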