Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I understand the situation and I agree with your assertion. But I do have to ask a question: if we know the population is biased (hypothetically in some harmful way), should we actually allow an algorithm that reinforces that bias? (This is more a fun thought experiment than a question pertinent to the Twitter algorithm per se.)
reddit · AI Harm Incident · 1628619084 · ♥ 11
Coding Result
Dimension       Value
Responsibility  distributed
Reasoning       contractualist
Policy          unclear
Emotion         mixed
Coded at        2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id": "rdc_h8g4uu5", "responsibility": "developer",   "reasoning": "consequentialist", "policy": "liability", "emotion": "mixed"},
  {"id": "rdc_h8g9znv", "responsibility": "developer",   "reasoning": "consequentialist", "policy": "regulate",  "emotion": "outrage"},
  {"id": "rdc_h8f8jgl", "responsibility": "user",        "reasoning": "virtue",           "policy": "none",      "emotion": "indifference"},
  {"id": "rdc_h8fs867", "responsibility": "distributed", "reasoning": "contractualist",   "policy": "unclear",   "emotion": "mixed"},
  {"id": "rdc_h8g4uyh", "responsibility": "unclear",     "reasoning": "deontological",    "policy": "unclear",   "emotion": "resignation"}
]
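The raw response above is a single JSON array covering a batch of comments, so the coding result for any one comment can be recovered by indexing the array by `id`. A minimal sketch with Python's standard `json` module, using one entry from the batch above (only the field names shown in the response are assumed):

```python
import json

# Raw LLM response: a JSON array of coding results, one object per comment.
# Abbreviated here to the single record matching the comment shown above.
raw = '''[
  {"id": "rdc_h8fs867", "responsibility": "distributed",
   "reasoning": "contractualist", "policy": "unclear", "emotion": "mixed"}
]'''

# Index the batch by comment id so each coded comment can be looked up directly.
records = {r["id"]: r for r in json.loads(raw)}

coded = records["rdc_h8fs867"]
print(coded["responsibility"])  # distributed
print(coded["policy"])          # unclear
```

Indexing by `id` rather than scanning the list also surfaces duplicate or missing ids early, which is useful when verifying that a batched response covers every comment submitted.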