Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
It's not that simple. Algorithms are not created in a vacuum in an ivory tower. There are a number of examples of bias being introduced in the algorithms and in the training data. [https://research.aimultiple.com/ai-bias/](https://research.aimultiple.com/ai-bias/) [https://www.newscientist.com/article/2166207-discriminating-algorithms-5-times-ai-showed-prejudice/](https://www.newscientist.com/article/2166207-discriminating-algorithms-5-times-ai-showed-prejudice/) are just a few examples. There is a growing body of scholarly research pointing this out as well. I am ***not*** suggesting these are examples of *intentional* bias, as in someone at Twitter deciding to favor images of young, white people over others. They are examples of bias that occur in the real world. Naturally, an unbiased algorithm will skew to what it learns, and thus over time your assertion can become true. But don't be quick to dismiss other forms of bias.
reddit AI Harm Incident 1628607613
Coding Result
| Dimension | Value |
| --- | --- |
| Responsibility | developer |
| Reasoning | mixed |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
```json
[
  {"id": "rdc_h8fpj0e", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "outrage"},
  {"id": "rdc_h8evwlc", "responsibility": "user", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_h8f0xlf", "responsibility": "developer", "reasoning": "mixed", "policy": "unclear", "emotion": "mixed"},
  {"id": "rdc_mw6ckfr", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "mixed"},
  {"id": "rdc_n0gx9iz", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"}
]
```
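As a minimal sketch, the raw LLM response above can be parsed and looked up by record id to recover the coded dimensions for a single comment. The id `rdc_h8f0xlf` used here is the entry whose values match the Coding Result table; the variable names are illustrative, not part of any pipeline API.

```python
import json

# Raw LLM response, copied verbatim from the export above.
raw = """[
  {"id": "rdc_h8fpj0e", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "outrage"},
  {"id": "rdc_h8evwlc", "responsibility": "user", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_h8f0xlf", "responsibility": "developer", "reasoning": "mixed", "policy": "unclear", "emotion": "mixed"},
  {"id": "rdc_mw6ckfr", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "mixed"},
  {"id": "rdc_n0gx9iz", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"}
]"""

# Parse the batch and index the records by their id for O(1) lookup.
records = json.loads(raw)
by_id = {r["id"]: r for r in records}

# Retrieve the coded dimensions for the comment shown in this export.
result = by_id["rdc_h8f0xlf"]
print(result["responsibility"], result["reasoning"], result["policy"], result["emotion"])
# → developer mixed unclear mixed
```

This matches the Coding Result table line for line, which is a quick sanity check that the table was derived from the third record in the batch.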