Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
That's not the point. The point is that if an AI cannot correctly discern prejudices and understand why they are inherently false, *what other human fallacies might it also accidentally pick up*? All these dreams people have of the Singularity or even just smaller-scale stuff where AI turns out to be better/smarter than humans necessarily *require* AI to be able to overcome human fallacies. Otherwise, the AI would just end up perpetuating mistakes rather than undoing them. Which could be disastrous, considering how many people seem to actively want machine masters sorting our shit out for us. Unfortunately, this is the sort of "deeper" understanding of complicated/nonlinear/illogical issues that AI is truly terrible at sorting out, and likely will be for some time to come. It's good that researchers are recognizing that, since at least they can start working on the problem without pretending it's not there. (But seriously. It can take humans *decades* to learn that some people are just ignorant shitheads AND develop the interpersonal skills to spot and avoid such shitheads. Teaching that to a computer is going to be no small feat.)
Source: reddit · AI Harm Incident · posted 1492371118.0 (Unix timestamp) · ♥ 3
Coding Result
Dimension        Value
---------------  --------------------------
Responsibility   developer
Reasoning        consequentialist
Policy           none
Emotion          fear
Coded at         2026-04-25T08:33:43.502452
Raw LLM Response
[ {"id":"rdc_dgcci0r","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}, {"id":"rdc_dgc6bg7","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"}, {"id":"rdc_dgcgdvr","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"fear"}, {"id":"rdc_ohuh7jy","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"}, {"id":"rdc_ohueqed","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"fear"} ]