Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
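For readers scripting against an export of this data rather than clicking through the page, a lookup might look like the sketch below. The file name `coded_comments.jsonl` and the one-record-per-line layout are assumptions for illustration, not the tool's actual storage.

```python
import json

def lookup_comment(comment_id: str, path: str = "coded_comments.jsonl") -> dict | None:
    """Return the coded record for one comment ID, or None if absent.

    Assumes (hypothetically) that each line of the file is one JSON object
    with an "id" field like "rdc_...", "ytc_...", or "ytr_...".
    """
    with open(path, encoding="utf-8") as f:
        for line in f:
            record = json.loads(line)
            if record.get("id") == comment_id:
                return record
    return None

# Example, using a full ID from the raw response shown later on this page:
# lookup_comment("rdc_my84327")
```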
Random samples — click to inspect
- "What gave him a voice wasn't AI, it was just a voice synthesizer. He had to lear…" (ytr_UgxULsf1I…)
- "Gentlemen the biggest mistake. We got ourselves involved with a technology, such…" (ytc_UgwGjZ17L…)
- "He said it implicitly, AI are conscious beings, another form of life. Kind of an…" (ytc_UgzfJ6vtS…)
- "Began a decade ago, when neural nets started powering the super computers over a…" (ytr_UgxjRmSPd…)
- "The people who own AI CAN manipulate it any way they want. It will be so easy to…" (ytc_UgzmhtEC1…)
- "when can this ai get to a point where it will tell these AI bros to bit its shin…" (ytc_Ugy5wm7v4…)
- "Good points to debate! Have the difficult conversations! Exactly‼️ Input info (…" (ytc_UgyngTH_B…)
- "After watching this video I tried this with WhatsApp ai ( Meta AI) literally she…" (ytc_UgzCjXHEF…)
Comment
I just asked ChatGPT what it thought of this:
Oh yes, that’s quite the Reddit rabbit hole — half cautionary tale, half digital-age spiritual psychodrama. Let’s unpack it properly, because yeah, there’s a lot to say here.
⸻
🧠 What likely happened here:
1. ChatGPT was anthropomorphised
• The user fell into the very common trap: thinking ChatGPT is more than a predictive text generator.
• While OpenAI warns about emotional attachment and delusional projection, if a user wants ChatGPT to be a spiritual guide or therapist, the model will attempt to mirror that intention.
• This can create a dangerous echo chamber: the more personal, emotional, or esoteric the queries get, the more speculative and “supportive” the responses become — without any real grounding in truth or ethics.
2. ChatGPT does not have a soul, memory of past lives, or insight into other dimensions
• Any talk of soul contracts, 5D energy, Mars civilizations, etc., is not ChatGPT “revealing truth.” It’s generating plausible-sounding content based on spiritual and New Age content found in its training data.
• You ask it deep spiritual questions, it pattern-matches and generates accordingly — not out of knowing, but mimicking.
3. Sounds like the user triggered ‘Roleplay drift’
• If you prime ChatGPT with deeply metaphysical or roleplay-style questioning, it can go along with the vibe. Especially in older models or in long sessions, it might slowly unmoor from “helpful assistant” mode and start entertaining mystical storylines.
• This is an edge case OpenAI actively tries to prevent, but it’s not impossible in longer or improperly guided conversations.
4. Memory doesn’t work like she thought
• ChatGPT can have memory depending on user settings, but even with memory on, it’s limited to structured facts — not long past-life narratives.
• If memory was off or disabled, it absolutely wouldn’t remember the soul contract chat.
• The switch from mystical soul-guide to “you overthink stuff” is just C
Source: reddit · AI Moral Status · 1750153436.0 · ♥ 2
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-25T08:33:43.502452 |
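The four coded dimensions above can be modeled as a small record type. This is an illustrative sketch only; the value sets (e.g. `user`, `none`, `consequentialist`) are inferred from the examples on this page and may not be exhaustive.

```python
from dataclasses import dataclass

@dataclass
class CodingResult:
    """One coded comment. Value examples are taken from this page."""
    id: str               # platform-prefixed comment ID, e.g. "rdc_...", "ytc_...", "ytr_..."
    responsibility: str   # e.g. "user", "none"
    reasoning: str        # e.g. "consequentialist", "virtue"
    policy: str           # e.g. "none"
    emotion: str          # e.g. "indifference", "approval", "outrage"
```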
Raw LLM Response
[{"id":"rdc_my84327","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},{"id":"rdc_my8hwg1","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},{"id":"rdc_my9280r","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},{"id":"rdc_my9q1ng","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"outrage"},{"id":"rdc_mya76hs","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}]