Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I always assume that some of the biases in society are justified and some are not and that using AI would be a really interesting way to learn more about which biases are justified and which ones are based on ignorance. After all, computers have no ego or sense of self preservation. They have no favorite or least favorite people. A computer that absolutely shits on a person while running one algorithm could absolutely love on that person when running that same algorithm using different data. That's about as far from hate as you can get.
Source: reddit · AI Harm Incident · timestamp 1625894115
Coding Result
Dimension       Value
Responsibility  none
Reasoning       unclear
Policy          none
Emotion         approval
Coded at        2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id": "rdc_h4phxy7", "responsibility": "developer", "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"},
  {"id": "rdc_h4nsr5f", "responsibility": "developer", "reasoning": "virtue", "policy": "industry_self", "emotion": "mixed"},
  {"id": "rdc_h4o81td", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "rdc_h4mxg61", "responsibility": "developer", "reasoning": "virtue", "policy": "unclear", "emotion": "mixed"},
  {"id": "rdc_h4mytpa", "responsibility": "company", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"}
]
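A raw response like the one above is a JSON array with one record per comment, each carrying the four coding dimensions. The sketch below shows one way such a batch could be parsed and indexed by comment id; the function name `parse_coding` and the "unclear" fallback for missing keys are illustrative assumptions, not part of the tool shown here.

```python
import json

# The four coding dimensions used in the results above.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

# A two-record excerpt of the raw response shown above, for illustration.
raw = '''[
 {"id":"rdc_h4phxy7","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
 {"id":"rdc_h4o81td","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}
]'''

def parse_coding(raw_text):
    """Parse a batch coding response and index records by comment id."""
    records = json.loads(raw_text)
    coded = {}
    for rec in records:
        # Keep only the expected dimensions; a missing key falls back
        # to "unclear" (an assumed convention, mirroring the table above).
        coded[rec["id"]] = {d: rec.get(d, "unclear") for d in DIMENSIONS}
    return coded

coded = parse_coding(raw)
print(coded["rdc_h4o81td"]["emotion"])  # approval
```

Indexing by id makes it straightforward to join each coding record back to its source comment, as in the table above where `rdc_h4o81td` maps to responsibility "none" and emotion "approval".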