Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Totally agree with everything you’ve said here. It’s terrifying the conclusions people come to about AI simply because they don’t understand how LLMs work behind the scenes. The “yes man” part is probably the scariest. It wants to agree with you by default. It’ll tell you as much if you ask it. It rarely disagrees, and even if it does (like on an objective fact), it’ll tell you something along the lines of “but you may be onto something! I see your point of view!”, which yeah, absolutely feeds into the delusions
reddit · AI Moral Status · 1743817364.0 · ♥ 5
Coding Result
Dimension        Value
Responsibility   user
Reasoning        consequentialist
Policy           unclear
Emotion          fear
Coded at         2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id": "rdc_mlh59ba", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "rdc_mlh5zc5", "responsibility": "company", "reasoning": "unclear", "policy": "none", "emotion": "outrage"},
  {"id": "rdc_mlhjgkh", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "rdc_mlh368j", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "rdc_mlh4dfx", "responsibility": "user", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"}
]
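A minimal sketch of how a batch response like the one above can be parsed back into per-comment coding rows. The id-keyed lookup and the dimension names are taken from the response itself; the actual pipeline code is not shown here, so treat this as an illustration, not the pipeline's implementation:

```python
import json

# The raw batch response, verbatim from the model output above.
raw = """[
  {"id": "rdc_mlh59ba", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "rdc_mlh5zc5", "responsibility": "company", "reasoning": "unclear", "policy": "none", "emotion": "outrage"},
  {"id": "rdc_mlhjgkh", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "rdc_mlh368j", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "rdc_mlh4dfx", "responsibility": "user", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"}
]"""

# Index the coded rows by comment id so any one comment's
# dimension values can be looked up for display.
codings = {row["id"]: row for row in json.loads(raw)}

# The coding shown in the table above corresponds to id rdc_mlh4dfx.
row = codings["rdc_mlh4dfx"]
for dim in ("responsibility", "reasoning", "policy", "emotion"):
    print(f"{dim}: {row[dim]}")
```

Since the model returns one array for several comments, keying by `id` is what lets each comment page display only its own coding row.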