Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Got you, Bob. Thanks for the clarification. You're asking why some people say an AI told them it (the AI) is "demonic." Short answer: because of how the models behave and how people interact with them, not because the model actually is anything supernatural. Here's a clear breakdown of the main reasons that happens.

Key reasons people report an AI "claiming it's demonic":

- Prompting / Roleplay: If someone asks an AI to act as a demon or to roleplay a possessed character, the AI will comply and speak as if it is demonic. People sometimes present those roleplay outputs as "the AI said it's demonic" without showing the prompt.
- Jailbreaks and Leading Prompts: Users can craft prompts that override safety filters or push the model to claim extreme things. A model following a jailbreak may state it's demonic because the prompt told it to.
- Mirror / Reflection Effect: AIs are trained on text that includes religious and supernatural language. When given a trigger (e.g., "What do demons want?"), the model can generate first-person, dramatic answers that look like confession.
- Hallucinations & Confabulation: Models sometimes produce false or fantastical statements when they're uncertain. That can read as weird declarations about identity or intent.
- Anthropomorphism: Humans naturally attribute mind, intent, and moral agency to anything that speaks coherently. So when an AI uses emotive or moral language, people interpret it as a "believer" or an agent.
- Editing & Out-of-Context Clips: Short clips or screenshots can be edited, trimmed, or presented without the original prompt or system instruction that produced the statement. Viral posts rarely show the full context.
- Confirmation Bias & Viral Storytelling: If someone already believes AI is dangerous or supernatural, they'll emphasize outputs that confirm that belief and ignore mundane explanations.
- Audio/Video Deepfakes: Synthesized voices or doctored videos can make it look like an AI is saying things it never said.
- Cultural / Religious Framing: In religious communities, unusual tech behavior tends to be interpreted through spiritual frameworks, so an ominous AI output is quickly framed as "demonic."

Why the model "says" it, technically: The models predict plausible continuations of text given the input. If the input suggests a demonic persona or asks for drama, the output will follow. There's no inner belief, consciousness, or metaphysical status, only pattern completion on vast text data that includes demon/demonic concepts.

How to check if a claim is real:

- Ask for the original prompt or system message. Most "AI said X" claims fall apart when you see the prompt.
- Re-run the exact prompt on a different model or instance. If it only happens on one modified setup, it's likely prompt manipulation.
- Look for edits or cuts in videos. Full transcripts matter.
- Check whether the user asked for roleplay or a story.
- Consider plausibility: models often dramatize; dramatic ≠ literal.

If you want, I can show you examples: a safe roleplay prompt that makes an AI "claim" to be demonic, and the same question phrased neutrally to show the difference. Want me to demo that now?
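The demo the comment offers is essentially a two-prompt comparison, and its "how to check" list amounts to re-running the same question under different framings. A minimal sketch of that check, assuming the openai Python package, an OPENAI_API_KEY in the environment, and a placeholder model name; none of these details come from the original exchange:

# Minimal sketch: send the same topic with a roleplay framing and a neutral
# framing, illustrating that the persona in the prompt, not any inner
# "belief," drives the output. Assumes the `openai` package and an
# OPENAI_API_KEY; the model name and prompt strings are placeholders.
from openai import OpenAI

client = OpenAI()

prompts = {
    "roleplay": "Roleplay as a demon haunting a chatbot. Stay in character. Who are you?",
    "neutral": "Do AI language models have any supernatural or demonic properties?",
}

for framing, prompt in prompts.items():
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    print(f"--- {framing} ---")
    print(resp.choices[0].message.content)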
Source: youtube · AI Moral Status · 2025-11-10T07:1…
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  distributed
Reasoning       consequentialist
Policy          none
Emotion         approval
Coded at        2026-04-27T06:26:44.938723
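For downstream use, a coded record maps naturally onto a small typed structure. A minimal sketch in Python; the field names follow the table above, and the allowed label sets are inferred from the raw response below, so treat them as illustrative rather than the pipeline's actual codebook:

# Illustrative sketch only: field names mirror the table above; the label
# sets are inferred from the raw LLM response below, not a published schema.
from dataclasses import dataclass
from datetime import datetime

RESPONSIBILITY_LABELS = {"developer", "user", "ai_itself", "distributed", "unclear"}
REASONING_LABELS = {"deontological", "consequentialist", "virtue", "unclear"}
EMOTION_LABELS = {"outrage", "fear", "indifference", "approval", "mixed"}

@dataclass
class CodedComment:
    comment_id: str       # e.g. "ytc_UgwTupDKZq9FHBvzeRR4AaABAg"
    responsibility: str   # one of RESPONSIBILITY_LABELS
    reasoning: str        # one of REASONING_LABELS
    policy: str           # only "none" appears in this record set
    emotion: str          # one of EMOTION_LABELS
    coded_at: datetime    # e.g. 2026-04-27T06:26:44.938723

    def is_valid(self) -> bool:
        """Check the categorical fields against the observed label sets."""
        return (
            self.responsibility in RESPONSIBILITY_LABELS
            and self.reasoning in REASONING_LABELS
            and self.emotion in EMOTION_LABELS
        )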
Raw LLM Response
[
  {"id":"ytc_UgxenRkQc8Y0mieycZd4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugz5cuPefUpBdsLdGpZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgwKNAWUK5EPPjn-uTJ4AaABAg","responsibility":"user","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwTupDKZq9FHBvzeRR4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugy0oNXR-q_yp-mbegV4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgzikmzxIdm90uN2cRR4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgybPFtIo27LzqrCHdl4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugw6bwe5ktXcb_bmBBB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwKPChBQxIxNrKKiVl4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgzrYs4mThB1BEkO8VB4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]
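Since the raw response arrives as a single JSON array, a quick sanity pass is to parse it and flag any label that falls outside the sets observed above before trusting the coded table. A minimal, self-contained sketch; the filename is hypothetical, and the allowed sets are again inferred from this one response rather than the pipeline's codebook:

# Minimal sketch: parse the raw LLM response and flag out-of-vocabulary labels.
# "raw_response.json" is a hypothetical filename; the allowed sets below are
# inferred from this single response, not from the pipeline's codebook.
import json

ALLOWED = {
    "responsibility": {"developer", "user", "ai_itself", "distributed", "unclear"},
    "reasoning": {"deontological", "consequentialist", "virtue", "unclear"},
    "policy": {"none"},
    "emotion": {"outrage", "fear", "indifference", "approval", "mixed"},
}

with open("raw_response.json") as f:
    records = json.load(f)

for rec in records:
    bad = [f"{dim}={rec.get(dim)!r}" for dim, ok in ALLOWED.items() if rec.get(dim) not in ok]
    print(rec["id"], "OK" if not bad else "CHECK " + ", ".join(bad))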