Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID

Random samples:
- rdc_grkfro4: Just let the computers and AI run everything and give humans a Basic Annual Inco…
- ytc_UgxHzgxK5…: If you have no idea on what to do u could just use chatgpt for the idea and copy…
- ytc_UgzL9Zh-r…: I don’t think the AI teaching apps are actually a good think for kids, it’s prob…
- ytc_Ugw3Q-JBY…: okay, ai can do it “faster and better”, but it’s about the artist expressing the…
- ytc_UgwOgFO9N…: Bad thing is.. AI generator always make image from original source. Thos image h…
- ytc_Ugz6WZ4bi…: I thing there will be a good and bad AI in the world and they ll look after each…
- ytc_Ugwsu0J8K…: Almost anything is better than the current school system. I think it looks like …
- ytr_UgxE39q66…: @Anonymous-8080 .. OpenAI's AI isn't anywhere close to being an AGI, rumor mills…
Comment
That “scary ChatGPT” conversation is junk because it’s basically a magic trick built out of bad prompting. The rules force one-word answers to vague questions, so the model can’t ask clarifying questions or define terms. Words like “watched,” “they,” “plan,” “control,” “darkness,” “steps,” and “antichrist” are undefined, but the script keeps escalating anyway. In that situation the model will often guess what you “want” and try to stay consistent with the tone. That’s not secret knowledge; it’s just pattern completion under constraints. If you force any system to answer “yes/no” to ambiguous questions, you can make it sound like it’s confirming almost anything.
The “Apple” rule makes it even more misleading because it turns refusals and uncertainty into a spooky code word. Sometimes the model is refusing, sometimes it’s confused, sometimes it’s trying to follow conflicting instructions, and “Apple” becomes a catch-all that viewers interpret as “it’s hiding the truth.” It’s not. Also, the “2032 showed up weird on screen” thing is not evidence of censorship or hidden intent. UI glitches, rendering differences, edits, and cuts can all cause that. And the whole “Revelation 13:18” moment is classic cold-reading: the prompt basically drags the model toward religious symbolism, so it “predictably” lands on the most famous verse about a number. That’s steering, not revelation.
The second half of the video then blends that scripted scare chat with real concerns about AI (attention algorithms, opaque models, incentives, speed) to make the first half feel validated. Those are separate issues. Yes, AI and social platforms can shape behavior because they optimize for engagement, and yes, we should worry about incentives, transparency, and safety. But that does not mean ChatGPT is confessing to a satanic plan, a 7-step agenda, or Neuralink as the mark of the beast. The “viral chat” is a demonstration of how easy it is to prompt a model into sounding ominous when you remove context and force short answers, not proof of anything hidden.
youtube · AI Moral Status · 2026-01-09T09:1… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugy_zy-2CXOWcwP334B4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgzoxcTkWGLXsjcYgOx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugx6ZwtrMJ492SYaxC94AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgzNZfxHJxR8PUVz0-Z4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyFOxKCkbUBgEnQGN94AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgzfhZyUPGaPD3r6B0t4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgyCjbjnmPd7lAmYt_Z4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzXnWNCyHNTO8ltm994AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgweLfnbh38rQdiSpUR4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgzlkJsi-8kdthYMpqB4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"liability","emotion":"fear"}
]
```
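Each row in the raw response assigns one label per dimension (responsibility, reasoning, policy, emotion) to a comment ID. A minimal sketch of parsing and validating such a batch, assuming the vocabularies below (inferred only from the values visible in this export — the actual codebook may allow more labels, and `validate_codings` is a hypothetical helper, not part of the tool):

```python
import json

# Allowed labels per dimension. These sets are inferred from the values
# observed in this export; the real codebook may define more categories.
VOCAB = {
    "responsibility": {"user", "company", "ai_itself", "distributed", "none"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"none", "regulate", "liability"},
    "emotion": {"indifference", "outrage", "fear", "approval", "mixed"},
}

def validate_codings(raw: str) -> dict:
    """Parse a raw LLM response and index in-vocabulary rows by comment ID.

    Rows missing a dimension or carrying an unknown label are skipped,
    so one malformed row does not poison the whole batch.
    """
    by_id = {}
    for row in json.loads(raw):
        if all(row.get(dim) in labels for dim, labels in VOCAB.items()):
            by_id[row["id"]] = row
    return by_id

# Toy batch: one valid row, one with an out-of-vocabulary label ("society").
raw = '''[
  {"id":"ytc_Ugx6ZwtrMJ492SYaxC94AaABAg","responsibility":"user",
   "reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_bad","responsibility":"society","reasoning":"unclear",
   "policy":"none","emotion":"mixed"}
]'''

coded = validate_codings(raw)
print(sorted(coded))  # → ['ytc_Ugx6ZwtrMJ492SYaxC94AaABAg']
```

Skipping invalid rows rather than raising keeps a single hallucinated label from discarding an otherwise usable batch; a stricter pipeline might instead log or re-prompt for the rejected rows.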