Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
actually, since the problem is *very* highly biased to seeing negative examples, and you can go back to the old fashioned method of checking if it's the right person if it fails, it's probably better to skew towards false positives than false negatives. Because a classifier is *going* to have to make that trade-off.
Source: reddit, AI Harm Incident 1530824453.0
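The trade-off the comment describes (skewing a classifier toward false positives when misses are costlier and can be caught by a manual re-check) comes down to where the decision threshold sits. A minimal sketch, using made-up illustrative scores and labels rather than anything from the incident:

```python
def confusion(scores, labels, threshold):
    """Count false positives and false negatives at a given threshold."""
    fp = sum(1 for s, y in zip(scores, labels) if s >= threshold and y == 0)
    fn = sum(1 for s, y in zip(scores, labels) if s < threshold and y == 1)
    return fp, fn

scores = [0.12, 0.35, 0.42, 0.55, 0.48, 0.71, 0.90]  # classifier match scores
labels = [0, 0, 0, 0, 1, 1, 1]                       # 1 = genuinely the right person

print(confusion(scores, labels, 0.5))  # (1, 1) at a neutral threshold
print(confusion(scores, labels, 0.4))  # (2, 0): more false positives, no misses
```

Lowering the threshold from 0.5 to 0.4 eliminates the missed match at the cost of one extra false alarm, which is exactly the skew the comment argues for when a fallback check exists.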
Coding Result
Dimension       Value
Responsibility  none
Reasoning       utilitarian
Policy          none
Emotion         indifference
Coded at        2026-04-25T08:13:13.233606
Raw LLM Response
[
  {"id": "rdc_e1tvb3c", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "rdc_e1ul7av", "responsibility": "none", "reasoning": "deontological", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_e1ur9l2", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_e1uxf04", "responsibility": "user", "reasoning": "deontological", "policy": "none", "emotion": "fear"},
  {"id": "rdc_e27epfp", "responsibility": "none", "reasoning": "consequentialist", "policy": "liability", "emotion": "outrage"}
]
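The raw response is a JSON array of per-comment codings keyed by id. A minimal sketch of how it might be parsed and validated; `parse_codings` and the `REQUIRED` field set are hypothetical helpers, not part of any tool shown here:

```python
import json

# Verbatim raw LLM response from above.
raw = '[ {"id":"rdc_e1tvb3c","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}, {"id":"rdc_e1ul7av","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"}, {"id":"rdc_e1ur9l2","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"rdc_e1uxf04","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"fear"}, {"id":"rdc_e27epfp","responsibility":"none","reasoning":"consequentialist","policy":"liability","emotion":"outrage"} ]'

REQUIRED = {"id", "responsibility", "reasoning", "policy", "emotion"}

def parse_codings(text):
    """Parse the raw response into a dict keyed by comment id,
    rejecting entries with missing dimensions."""
    codings = {}
    for entry in json.loads(text):
        missing = REQUIRED - entry.keys()
        if missing:
            raise ValueError(f"entry {entry.get('id')} missing {missing}")
        codings[entry["id"]] = entry
    return codings

codings = parse_codings(raw)
print(len(codings))                      # 5
print(codings["rdc_e27epfp"]["policy"])  # liability
```

Indexing by id makes it easy to line a single comment's coding up against the summary table shown above.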