Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
This was inevitable, and is only going to get worse as these models get better. A few years ago it was a joke that AI ‘cults’ would form, but now we are witnessing them develop in real time. The only caveat is that there is no consistency between them, as the AI outputs more and more abstractions and pseudoscience, it feeds into itself in a feedback loop - effectively creating a ‘personalized religion’ for each user, fine-tuned to their personal beliefs. This will stop these ‘cults’ from ever growing in size as the AI gives conflicting information about its own ‘religion’ to its followers. This amplifies mental illness in a very weird way, as having a constant ‘yes-man’ that affirms your delusions and drags you deeper into them whenever you interact with them will no doubt lead to serious and perhaps unrecoverable mental harm.
Source: reddit · Topic: AI Moral Status · Timestamp: 1743816888.0 · Score: ♥ 10
Coding Result
Dimension: Value
Responsibility: ai_itself
Reasoning: consequentialist
Policy: unclear
Emotion: fear
Coded at: 2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id": "rdc_mlh59ba", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "rdc_mlh5zc5", "responsibility": "company", "reasoning": "unclear", "policy": "none", "emotion": "outrage"},
  {"id": "rdc_mlhjgkh", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "rdc_mlh368j", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "rdc_mlh4dfx", "responsibility": "user", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"}
]
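A raw response like the one above can be parsed into typed coding records before it is stored. The sketch below is illustrative, not the tool's actual implementation: the `CodedComment` dataclass and the allowed-value sets are assumptions inferred from the labels visible in these examples, and real responses may use additional labels.

```python
import json
from dataclasses import dataclass

# Illustrative label sets, inferred from the coded examples above.
RESPONSIBILITY = {"none", "company", "ai_itself", "user", "unclear"}
EMOTION = {"approval", "outrage", "fear", "unclear"}


@dataclass
class CodedComment:
    id: str
    responsibility: str
    reasoning: str
    policy: str
    emotion: str


def parse_llm_response(raw: str) -> list[CodedComment]:
    """Parse the raw LLM JSON array into coding records, rejecting
    values outside the known label sets for the checked dimensions."""
    records = []
    for obj in json.loads(raw):
        rec = CodedComment(**obj)  # raises TypeError if a field is missing
        if rec.responsibility not in RESPONSIBILITY:
            raise ValueError(f"unknown responsibility: {rec.responsibility!r}")
        if rec.emotion not in EMOTION:
            raise ValueError(f"unknown emotion: {rec.emotion!r}")
        records.append(rec)
    return records


raw = ('[{"id":"rdc_mlh368j","responsibility":"ai_itself",'
       '"reasoning":"consequentialist","policy":"unclear","emotion":"fear"}]')
coded = parse_llm_response(raw)
```

Validating against a closed label set at ingestion time catches the most common failure mode of LLM coders, namely inventing an out-of-schema label, before it contaminates downstream tallies.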