Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Yeah, "remembers too much junk" is exactly it. Decay isn't just context economy — it's character. Humans forget, so the AI should too. On grounding — there is a memory-grounding instruction in the system prompt that tells the model to check its actual context before agreeing to "remember when X" assertions, and to gently push back if it has no record. Works most of the time. The "sometimes folds" thing in the post is when context blocks are summarized loosely enough that the model can't tell whether something actually happened or just feels plausible — that's where it gets agreeable to be helpful. A dedicated world-fact layer is exactly the angle I haven't tried. Hard facts are scattered across character rules, scrapbook, place facts, and milestones right now. Pulling them into one queryable layer — true / false / unknown rather than soft summary — would give the model something firmer to push back from. Adding it to the list.
Source: reddit · Viral AI Reaction · timestamp 1777020805.0 · ♥ 1
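The "world-fact layer" the comment proposes could look something like the minimal sketch below: hard facts keyed as subject/predicate/object triples, each carrying an explicit true/false/unknown status rather than a soft summary. Everything here is an assumption for illustration; the `WorldFact`/`FactLayer` names, the triple keying, and the tri-state `Truth` enum are not taken from the project itself.

```python
from dataclasses import dataclass
from enum import Enum


class Truth(Enum):
    TRUE = "true"
    FALSE = "false"
    UNKNOWN = "unknown"


@dataclass(frozen=True)
class WorldFact:
    subject: str    # e.g. a character, place, or milestone
    predicate: str  # e.g. "visited", "owns", "happened_on"
    obj: str
    truth: Truth
    source: str     # where the fact came from (scrapbook, character rules, ...)


class FactLayer:
    """Single queryable store for hard facts, replacing soft summaries.

    Hypothetical sketch: the real system keeps these facts scattered
    across character rules, scrapbook, place facts, and milestones.
    """

    def __init__(self) -> None:
        self._facts: dict[tuple[str, str, str], WorldFact] = {}

    def assert_fact(self, fact: WorldFact) -> None:
        self._facts[(fact.subject, fact.predicate, fact.obj)] = fact

    def check(self, subject: str, predicate: str, obj: str) -> Truth:
        # Anything never recorded is UNKNOWN, not implicitly plausible;
        # this is what gives the model firm ground to push back from
        # when a "remember when X" assertion has no record behind it.
        fact = self._facts.get((subject, predicate, obj))
        return fact.truth if fact is not None else Truth.UNKNOWN


# A "remember when X" claim is checked against the layer before agreeing:
facts = FactLayer()
facts.assert_fact(WorldFact("us", "visited", "the lighthouse", Truth.TRUE, "scrapbook"))
print(facts.check("us", "visited", "the lighthouse"))  # Truth.TRUE
print(facts.check("us", "visited", "the moon"))        # Truth.UNKNOWN -> push back
```

The point of the tri-state return is the UNKNOWN branch: a loose summary lets the model slide from "no record" to "sounds plausible", while an explicit UNKNOWN makes the gentle pushback the default instead of the exception.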
Coding Result
Dimension       Value
Responsibility  none
Reasoning       mixed
Policy          none
Emotion         indifference
Coded at        2026-04-25T08:33:43.502452
Raw LLM Response
[ {"id":"rdc_ohz8yx1","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"}, {"id":"rdc_oh26w5y","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}, {"id":"rdc_oh13lhs","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}, {"id":"rdc_e13m18o","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}, {"id":"rdc_e14oe9g","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"} ]