Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
So, the issue is that this is how some of these models work: Generative Adversarial Networks have two parts, one that generates the fake images and one that tries to determine whether an image is a real example. The generative model optimizes itself to try to fool the discriminating model. So, to some degree, these models are already training themselves to fool AI.
reddit AI Harm Incident 1670633871.0 ♥ 3
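The adversarial setup the comment describes can be illustrated with a toy sketch. This is not the code behind any real image model: the "discriminator" is a one-parameter logistic classifier on scalars and the "generator" an affine map of noise, both standing in for the neural networks a real GAN would use. The distributions and parameter values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Discriminator: a logistic classifier on scalars (toy stand-in for a CNN).
# It outputs the probability that a sample is "real".
def discriminator(x, w=1.0, c=-2.0):
    return sigmoid(w * x + c)

# Generator: maps random noise z to candidate "fakes" (toy stand-in for a
# generative network).
def generator(z, a=0.5, b=1.0):
    return a * z + b

real = rng.normal(4.0, 0.5, size=64)   # samples from the true distribution
fake = generator(rng.normal(size=64))  # samples the generator invents

d_real = discriminator(real)
d_fake = discriminator(fake)

# Discriminator objective: minimize -[log D(real) + log(1 - D(fake))],
# i.e. call real samples real and generated samples fake.
d_loss = -np.mean(np.log(d_real) + np.log(1.0 - d_fake))

# Generator objective: minimize -log D(fake),
# i.e. it is rewarded precisely for fooling the discriminator.
g_loss = -np.mean(np.log(d_fake))
```

Training alternates updates on these two losses, which is the sense in which the generator is "already training itself to fool AI": its entire objective is to make an AI classifier misjudge its output.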
Coding Result
Dimension        Value
Responsibility   none
Reasoning        unclear
Policy           none
Emotion          indifference
Coded at         2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id": "rdc_izmyyis", "responsibility": "none", "reasoning": "unclear",          "policy": "none",     "emotion": "approval"},
  {"id": "rdc_izkuu2e", "responsibility": "none", "reasoning": "consequentialist", "policy": "none",     "emotion": "fear"},
  {"id": "rdc_izkwhsh", "responsibility": "none", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "rdc_izl1dfg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none",     "emotion": "resignation"},
  {"id": "rdc_izlqym7", "responsibility": "none", "reasoning": "unclear",          "policy": "none",     "emotion": "indifference"}
]