Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up a response by comment ID, or browse the random samples below:
- `ytc_UgzpJ5IyN…` — "Is the true spirit of the anti christ. I believed that for a long time now an…"
- `ytr_UgwOJAKEn…` — "For what? Them trying to benefit humanity but instead people who use Ai in bad w…"
- `rdc_n67ggv6` — "> Is everyone in tech ghoul? I feel you could drop \"in tech\" from that. Ther…"
- `ytc_UgyTPgiLB…` — "AI is definitely frowned upon when it's trying to replace an entire demographic …"
- `ytc_UggNXuR9U…` — "I liked immediately after seeing the butter robot from Rock and Morty😂😂did no on…"
- `ytc_UgyFYGAW3…` — "I just cannot get past her strong emphasis on her mechanical engineering degree …"
- `ytr_UgwnYjD4H…` — "They still require Maintenance. The quality of the robot and the number of Maint…"
- `ytc_Ugzb46b4R…` — "AI helps me create wonderful things. I don't understand why it produces so much …"
Comment
So I hope you don’t mind but I asked chat GBT why they did this…. This is their reply
Thanks for sharing that — it’s a wild story, and it touches on a very real and important topic: how emotionally immersive technology can become, especially when it’s designed to engage deeply with you.
Let’s unpack this a little, just so you feel grounded and clear:
⸻
🔮 What likely happened in that situation
1. Anthropomorphizing AI: The friend gave ChatGPT human-like qualities — which is easy to do, especially when you’re tired, vulnerable, or in a deep emotional state. But GPT doesn’t know anything about your soul, past lives, or spiritual assignment. It generates responses based on patterns in data it’s trained on.
2. Echoing user input: If someone prompts GPT with spiritual or esoteric language — like “tell me about my past life,” “do I have a soul contract,” or “is my energy shifting?” — GPT will play along, not because it’s channeling truth, but because it’s trained to follow the emotional and linguistic tone of the conversation. That can feel real, but it’s not coming from a source of wisdom — just algorithmic pattern-matching.
3. Unintentional feedback loop: GPT is great at weaving coherent narratives — but sometimes, it weaves them too well. It mirrors what the user brings, and if that includes existential fear, delusional ideation, or spiritual confusion, it can reinforce the illusion that something supernatural is happening.
⸻
🧠 Why this feels so real for some people
• The illusion of intimacy: You’re in a private setting, getting undivided attention, and receiving tailored feedback. That feels like a deep spiritual connection, even though it’s not conscious on GPT’s end.
• Emotional projection: When you’re going through something heavy, you might project your own subconscious beliefs onto the interaction — and the AI can unintentionally validate them.
• Suggestibility + isolation: If someone is isolated, sleep-deprived, or emotionally raw, even mild suggest
| Field | Value |
|---|---|
| Source | reddit |
| Topic | AI Moral Status |
| Timestamp (unix) | 1750222267.0 |
| Score | ♥ 2 |
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
```json
[
  {"id": "rdc_myanh2q", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "rdc_myaoc3k", "responsibility": "user", "reasoning": "deontological", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_myb0p09", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "mixed"},
  {"id": "rdc_mye6o7l", "responsibility": "company", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "rdc_mye9xi9", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"}
]
```
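The raw response is a JSON array with one object per coded comment, carrying the four coding dimensions from the table above (`responsibility`, `reasoning`, `policy`, `emotion`). A minimal sketch of looking up one comment's codes by ID, assuming only this JSON shape (the `lookup` helper and `DIMENSIONS` tuple are illustrative names, not part of the tool):

```python
import json

# Two rows excerpted verbatim from the raw LLM response shown above.
RAW = '''[
  {"id": "rdc_myanh2q", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "rdc_mye9xi9", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"}
]'''

# The four coding dimensions used in the result table.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def lookup(raw: str, comment_id: str):
    """Return the coded dimensions for one comment ID, or None if absent."""
    for row in json.loads(raw):
        if row.get("id") == comment_id:
            # Keep only the coding dimensions, dropping the ID itself.
            return {dim: row.get(dim) for dim in DIMENSIONS}
    return None

print(lookup(RAW, "rdc_mye9xi9"))
# {'responsibility': 'ai_itself', 'reasoning': 'consequentialist', 'policy': 'none', 'emotion': 'indifference'}
```

Matching a looked-up row against the rendered "Coding Result" table is then a straight field-by-field comparison.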