Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
That's a borderline meaningless simplification. There's way more going on under the hood. Just look at the latest Anthropic paper, "Tracing the thoughts of a large language model": sparse autoencoders have really opened up a chunk of what's going on in the latent space. There's a level of world modeling going on in the embeddings at inference time. The problem here isn't that the model isn't thinking... it is. It just doesn't have a consistent anchor in the latent space for what it's supposed to be doing. If you're priming the context window with a bunch of spirituality concepts, it's likely going to slip into some form of narrative latent space. It basically thinks it's doing some sort of role play. I suspect a strong reasoning model, or an agentic model that has some directives for its role, would do a lot better at not jumping the shark.
Source: reddit · AI Moral Status · Unix timestamp 1750178231.0 · ♥ 2
Coding Result
Dimension        Value
Responsibility   none
Reasoning        mixed
Policy           none
Emotion          mixed
Coded at         2026-04-25T08:33:43.502452
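Each coding result is a set of categorical labels plus a timestamp. Below is a minimal Python sketch of how such a record might be modeled and validated; the field names mirror the table above, but the allowed value sets are assumptions inferred only from the codes visible on this page, not the tool's actual codebook.

from dataclasses import dataclass
from datetime import datetime

# Assumed codebooks: just the values observed on this page;
# the real coding scheme likely defines more categories.
ALLOWED = {
    "responsibility": {"none", "user", "mixed"},
    "reasoning": {"none", "consequentialist", "mixed"},
    "policy": {"none", "mixed"},
    "emotion": {"none", "indifference", "mixed"},
}

@dataclass
class CodingResult:
    responsibility: str
    reasoning: str
    policy: str
    emotion: str
    coded_at: datetime

    def __post_init__(self) -> None:
        # Reject any label outside the assumed codebook.
        for dim, allowed in ALLOWED.items():
            value = getattr(self, dim)
            if value not in allowed:
                raise ValueError(f"unknown {dim} code: {value!r}")

# The record shown in the table above:
result = CodingResult(
    responsibility="none",
    reasoning="mixed",
    policy="none",
    emotion="mixed",
    coded_at=datetime.fromisoformat("2026-04-25T08:33:43.502452"),
)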
Raw LLM Response
[ {"id":"rdc_my836dy","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"}, {"id":"rdc_my5xzqx","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"}, {"id":"rdc_my64xbi","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"rdc_myag5p2","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"}, {"id":"rdc_my6wicr","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"mixed"} ]