Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I love how in the FAQs, the third thing is "What are hallucinations?" And it goes on to describe how Google's AI can be so confidently wrong. Then tells you how to Google without AI to really get the info you seek.
reddit · AI Surveillance · 1739633223.0 · ♥ 2
Coding Result
Dimension       Value
Responsibility  company
Reasoning       consequentialist
Policy          none
Emotion         approval
Coded at        2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id":"rdc_mcunn8m","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"rdc_mcwvgjz","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"rdc_mcxecld","responsibility":"company","reasoning":"unclear","policy":"none","emotion":"outrage"},
  {"id":"rdc_mrpxp74","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"rdc_mcxtqz3","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}
]
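Because the model codes comments in batches, the per-comment coding result shown above has to be pulled out of the batched JSON array by its comment id. A minimal sketch of that lookup in Python, using the raw response verbatim (the id `rdc_mcwvgjz` is the one whose coding result is displayed above):

```python
import json

# Raw LLM response, copied verbatim from the inspection view above.
raw = (
    '[ {"id":"rdc_mcunn8m","responsibility":"none","reasoning":"unclear",'
    '"policy":"none","emotion":"indifference"},'
    ' {"id":"rdc_mcwvgjz","responsibility":"company","reasoning":"consequentialist",'
    '"policy":"none","emotion":"approval"},'
    ' {"id":"rdc_mcxecld","responsibility":"company","reasoning":"unclear",'
    '"policy":"none","emotion":"outrage"},'
    ' {"id":"rdc_mrpxp74","responsibility":"company","reasoning":"consequentialist",'
    '"policy":"unclear","emotion":"fear"},'
    ' {"id":"rdc_mcxtqz3","responsibility":"none","reasoning":"unclear",'
    '"policy":"none","emotion":"approval"} ]'
)

# Parse the batch and index each coding by its comment id.
codings = json.loads(raw)
by_id = {c["id"]: c for c in codings}

# Look up the coding for the comment displayed above.
coding = by_id["rdc_mcwvgjz"]
print(coding["responsibility"], coding["reasoning"],
      coding["policy"], coding["emotion"])
# → company consequentialist none approval
```

The printed dimensions match the "Coding Result" table above, which is the point of this view: confirming that the stored coding agrees with the exact model output.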