Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
We're going to see both sides of this coin again and again:
a. Bad data / design leads to biased algorithms.
b. Good data / design leads to correctly biased algorithms.
I really worry that we don't have the appropriate nuance to have this discussion. Diagnosing which it is will be impossible for non-experts, and it follows almost along a reverse discrimination lawsuit: If we go the opposite way and say the model has to be, say, gender neutral, and a man loses out on, say, 10 points that by the math he should have gotten were the math done with fidelity, don't we then have the same problem we tried to correct?
Source: reddit · Topic: Cross-Cultural · Posted: 1539209309.0 (Unix time) · ♥ 5
Coding Result
Dimension       Value
Responsibility  unclear
Reasoning       unclear
Policy          unclear
Emotion         unclear
Coded at        2026-04-25T08:33:43.502452
Raw LLM Response
[{"id":"rdc_e7jm1ke","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},{"id":"rdc_e7jgcg1","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},{"id":"rdc_e7jcw1i","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"resignation"},{"id":"rdc_e7jva6y","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"indifference"},{"id":"rdc_e7jcktr","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"approval"})
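The all-"unclear" coding result above is consistent with a parse failure: the raw response closes its JSON array with `)` instead of `]`, so it is not valid JSON. A minimal sketch of how a coder could fall back to "unclear" on a malformed batch response (the function and constant names here are hypothetical, not the tool's actual API):

```python
import json

# Dimensions coded per comment, with the fallback value used on failure.
UNCLEAR = {"responsibility": "unclear", "reasoning": "unclear",
           "policy": "unclear", "emotion": "unclear"}

def code_for_comment(raw_response: str, comment_id: str) -> dict:
    """Extract one comment's coding from a raw batch LLM response.

    Returns all-"unclear" when the response is not valid JSON (e.g. the
    model closed the array with ')' rather than ']') or when the
    comment's id is missing from the parsed records.
    """
    try:
        records = json.loads(raw_response)
    except json.JSONDecodeError:
        return dict(UNCLEAR)
    for rec in records:
        if rec.get("id") == comment_id:
            # Keep only the coded dimensions; missing keys stay "unclear".
            return {k: rec.get(k, "unclear") for k in UNCLEAR}
    return dict(UNCLEAR)

# Malformed response (trailing ')' as in the raw output above) falls back.
malformed = ('[{"id":"rdc_e7jcw1i","responsibility":"developer",'
             '"reasoning":"deontological","policy":"none",'
             '"emotion":"resignation"})')
print(code_for_comment(malformed, "rdc_e7jcw1i"))
```

Because the whole batch response fails to parse, every comment in the batch is recorded as "unclear", matching the table shown for this comment.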