Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples:
- "Elon musk needs to seriously chill out. This idea of 'well someone's gonna do it…" — ytc_UgzLmQDK4…
- "Hi there! It's interesting that you brought up the Greek alphabet and the letter…" — ytr_UgyVfhEVR…
- "AI should be used to facilitate quicker jumps between stages instead of an outri…" — ytc_UgyuAdqrt…
- "He cannot predict because he is not God. Yes I'm a firm believer in the Lord Je…" — ytc_Ugys8lQqG…
- "A.I. has been a blessing in our time, the progression is insane. Its helps a lot…" — ytc_Ugx4JXrD-…
- "1:11:01 This is where Soares and Yudkowski completely lose me. The biggest dange…" — ytc_UgzhtTk1Y…
- "The desire of power is just as simulated as feeling emotions is simulated, AI is…" — ytc_UgyCnzMgA…
- "Yeah, definitely, when musk was nine years old, computers became powerful enough…" — ytc_UgyOofyyf…
Comment
Yeah, "remembers too much junk" is exactly it. Decay isn't just context economy — it's character. Humans forget, so the AI should too.
On grounding — there is a memory-grounding instruction in the system prompt that tells the model to check its actual context before agreeing to "remember when X" assertions, and to gently push back if it has no record. Works most of the time. The "sometimes folds" thing in the post is when context blocks are summarized loosely enough that the model can't tell whether something actually happened or just feels plausible — that's where it gets agreeable to be helpful.
A dedicated world-fact layer is exactly the angle I haven't tried. Hard facts are scattered across character rules, scrapbook, place facts, and milestones right now. Pulling them into one queryable layer — true / false / unknown rather than soft summary — would give the model something firmer to push back from. Adding it to the list.
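As a rough illustration, that world-fact layer could be a tri-state store: each fact resolves to true, false, or unknown, so the model has something firm to answer from instead of a soft summary. This is a minimal hypothetical sketch, not the actual system described above; all names here are invented for the example.

```python
from typing import Optional


class WorldFacts:
    """Tri-state world-fact layer: True, False, or unknown (None).

    Hypothetical sketch only -- the real system keeps these facts
    scattered across character rules, scrapbook, place facts, and
    milestones; this pulls them into one queryable store.
    """

    def __init__(self) -> None:
        self._facts: dict[str, bool] = {}

    def assert_fact(self, key: str, value: bool) -> None:
        """Record a hard fact as true or false."""
        self._facts[key] = value

    def check(self, key: str) -> Optional[bool]:
        # Unknown facts return None rather than a guess, so the model
        # can decline to confirm a "remember when X" claim instead of
        # agreeing because it merely feels plausible.
        return self._facts.get(key)


facts = WorldFacts()
facts.assert_fact("visited_lighthouse", True)
print(facts.check("visited_lighthouse"))  # True
print(facts.check("won_the_lottery"))     # None (unknown, not false)
```

The point of the tri-state return is the `None` branch: a loose summary can't distinguish "this never happened" from "I have no record," while an explicit unknown gives the model a grounded reason to push back.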
reddit · Viral AI Reaction · 1777020805 · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | mixed |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
```json
[
{"id":"rdc_ohz8yx1","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"rdc_oh26w5y","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"rdc_oh13lhs","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"rdc_e13m18o","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"rdc_e14oe9g","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"}
]
```
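A batch response like this can be indexed by comment ID to drive the per-comment lookup and the dimension table shown above. A minimal sketch, assuming the JSON array shape above (field names are taken from the response itself; the parsing code is illustrative, not the tool's actual implementation):

```python
import json

# Two rows copied verbatim from the raw batch response above.
raw = '''[
{"id":"rdc_ohz8yx1","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"rdc_oh26w5y","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}
]'''

# Index coded rows by comment ID for constant-time lookup.
by_id = {row["id"]: row for row in json.loads(raw)}

row = by_id["rdc_ohz8yx1"]
print(row["reasoning"], row["emotion"])  # mixed indifference
```

Each dict then maps directly onto one "Coding Result" table: the keys become the dimension names and the values the coded labels.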