Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
> Especially as AI hallucinations, if missed, can be introduced as part of the record that future AI models draw from. This is, I think, what people are missing, not just in this case but across a variety of fields. AI will generate bad results that other AI will then ingest and reinforce. It's a feedback loop that will especially apply to results that people want as results. In other words, AI is going to amplify attractive lies.
reddit · AI Jobs · 1753634157.0 · ♥ 931
Coding Result
| Dimension | Value |
| --- | --- |
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | fear |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
```json
[
  {"id": "rdc_n5i6ohq", "responsibility": "none",      "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "rdc_n5gu01t", "responsibility": "none",      "reasoning": "consequentialist", "policy": "none", "emotion": "outrage"},
  {"id": "rdc_n5gfmbw", "responsibility": "none",      "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "rdc_n5ge6mq", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "rdc_n5ghajz", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"}
]
```
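The coding-result table above is derived by pulling one record out of this JSON array by comment id. A minimal sketch of that lookup, assuming the raw response is always a JSON array of per-comment objects with the field names shown above (the `codes_for` helper name and the inline sample are illustrative, not part of the pipeline):

```python
import json

# Two records copied from the raw response above, used as sample input.
raw = '''[
  {"id": "rdc_n5ge6mq", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "rdc_n5ghajz", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"}
]'''

def codes_for(raw_response, comment_id):
    """Return the coding record for one comment id, or None if absent."""
    for record in json.loads(raw_response):
        if record.get("id") == comment_id:
            return record
    return None

print(codes_for(raw, "rdc_n5ge6mq")["emotion"])  # fear
```

Each dimension in the table (Responsibility, Reasoning, Policy, Emotion) then maps to one key of the returned record.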