Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
It's not even about students. The benefit of being a student (and guardrails around it) is that ultimately they'll be graded and therefore know exactly what was wrong, why, and what the right answer is. This applies less to essays which AI excel at, but essays have always been iffy things to grade. The problem is in the real world, where there's no graders. Decisions will be made based on some AI output, the outcome won't be as expected, people won't know why, and they'll rationalize it or just not engage (more people than not don't do retros at my work or they'll do them, call out the results weren't as expected, and do nothing to mitigate what made the prediction wrong).
reddit · AI Jobs · 1738479404.0 · ♥ 5
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  distributed
Reasoning       consequentialist
Policy          regulate
Emotion         fear
Coded at        2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id": "rdc_magyaib", "responsibility": "user",        "reasoning": "deontological",    "policy": "none",      "emotion": "indifference"},
  {"id": "rdc_mailzln", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "regulate",  "emotion": "fear"},
  {"id": "rdc_mais8v4", "responsibility": "user",        "reasoning": "virtue",           "policy": "none",      "emotion": "outrage"},
  {"id": "rdc_mckt18d", "responsibility": "user",        "reasoning": "deontological",    "policy": "none",      "emotion": "approval"},
  {"id": "rdc_mak9s0m", "responsibility": "government",  "reasoning": "virtue",           "policy": "liability", "emotion": "fear"}
]
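The raw response is a JSON array with one object per coded comment, keyed by "id"; the comment above corresponds to the record with id "rdc_mailzln". A minimal sketch of recovering one comment's codes from such a batch (standard-library JSON parsing only; the variable names are illustrative, and the array is truncated to two records here):

```python
import json

# Raw batch response: one object per coded comment, keyed by "id".
raw = """[
  {"id": "rdc_magyaib", "responsibility": "user", "reasoning": "deontological",
   "policy": "none", "emotion": "indifference"},
  {"id": "rdc_mailzln", "responsibility": "distributed", "reasoning": "consequentialist",
   "policy": "regulate", "emotion": "fear"}
]"""

# Index the records by comment id for O(1) lookup.
records = {rec["id"]: rec for rec in json.loads(raw)}

# Look up the coding for the comment shown above.
coding = records["rdc_mailzln"]
print(coding["responsibility"], coding["policy"], coding["emotion"])
# distributed regulate fear
```

Indexing by id rather than array position makes the lookup robust to the model reordering or dropping records in a batch.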