Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
It actually gets lots of details wrong -- for example the buttons and button-holes on the first guy's jacket do not line up. Humans are really tuned to automatically try to read text (e.g. see Stroop effect) so the errors in the text are much more cognitively obvious to us.
reddit · AI Moral Status · 1747834375.0 · ♥ 12
Coding Result
Dimension       Value
Responsibility  none
Reasoning       unclear
Policy          none
Emotion         mixed
Coded at        2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id": "rdc_mtgdu6k", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "rdc_mtgpu0j", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "mixed"},
  {"id": "rdc_mth4ltn", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_mtgybqj", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "mixed"},
  {"id": "rdc_mtgs7yl", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"}
]
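The raw response above is a batched JSON array: one object per coded comment, keyed by comment id, with one field per coding dimension. A minimal sketch of how such a response could be parsed and looked up by id (the helper name `codes_by_id` is hypothetical, not part of the pipeline shown here):

```python
import json

# A raw LLM response in the batched format shown above (two entries for brevity).
raw = (
    '[ {"id":"rdc_mtgdu6k","responsibility":"none","reasoning":"unclear",'
    '"policy":"none","emotion":"approval"},'
    ' {"id":"rdc_mtgpu0j","responsibility":"none","reasoning":"unclear",'
    '"policy":"none","emotion":"mixed"} ]'
)

def codes_by_id(raw_response: str) -> dict:
    """Index a batched coding response by comment id.

    Returns {comment_id: {dimension: value, ...}}, dropping the id field
    from each entry's dimension map.
    """
    entries = json.loads(raw_response)
    return {e["id"]: {k: v for k, v in e.items() if k != "id"} for e in entries}

codes = codes_by_id(raw)
print(codes["rdc_mtgpu0j"]["emotion"])  # → mixed
```

The coding-result table for this record corresponds to the `rdc_mtgpu0j` entry in the batch (emotion "mixed", reasoning "unclear").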