Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
That's exactly what I'm talking about. That poster is projecting their understanding of what 'credible medical info' looks like, onto a series of text generated by the process of seeing which words follow each other in medical articles. Because they've convinced themselves that this is knowledge, they'll be extra-committed to its accuracy, because people invest something of themselves when they make these determinations and refuting it is like refuting a part of their identity. It's a problem, and I don't think the various AI think tanks are taking it at all seriously.
reddit AI Governance 1676250303.0 ♥ 29
Coding Result
Dimension       Value
Responsibility  unclear
Reasoning       unclear
Policy          unclear
Emotion         unclear
Coded at        2026-04-25T08:33:43.502452
Raw LLM Response
[{"id":"rdc_j8axiwc","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"}, {"id":"rdc_j8c2npe","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"}, {"id":"rdc_j8b7q20","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"}, {"id":"rdc_j8b96ba","responsibility":"user","reasoning":"mixed","policy":"none","emotion":"indifference"}, {"id":"rdc_j8auzxm","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"})
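One plausible reason every dimension above reads "unclear" is that the raw response is not valid JSON: the array closes with `)` instead of `]`, so a strict parser rejects the whole batch. A minimal sketch of how such a response might be validated, assuming a parser that falls back to `None` on malformed output (the function below is hypothetical, not the tool's actual code):

```python
import json

# The raw model output exactly as recorded above; note the final ")"
# where a closing "]" is required by JSON.
raw = ('[{"id":"rdc_j8axiwc","responsibility":"none","reasoning":"mixed",'
       '"policy":"none","emotion":"approval"}, '
       '{"id":"rdc_j8c2npe","responsibility":"none","reasoning":"mixed",'
       '"policy":"none","emotion":"indifference"}, '
       '{"id":"rdc_j8b7q20","responsibility":"user","reasoning":"deontological",'
       '"policy":"none","emotion":"outrage"}, '
       '{"id":"rdc_j8b96ba","responsibility":"user","reasoning":"mixed",'
       '"policy":"none","emotion":"indifference"}, '
       '{"id":"rdc_j8auzxm","responsibility":"none","reasoning":"consequentialist",'
       '"policy":"none","emotion":"approval"})')


def parse_codes(response: str):
    """Hypothetical validator: return the list of per-comment codes,
    or None if the model output is not well-formed JSON."""
    try:
        return json.loads(response)
    except json.JSONDecodeError:
        return None


codes = parse_codes(raw)
# codes is None here: the stray ")" invalidates the array, so no codes
# can be extracted and each dimension would fall back to "unclear".
```

Replacing the trailing `)` with `]` would let the same parser return all five code objects, which is one way to confirm that the mismatched bracket, not the field values, is what blocks extraction.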