Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
> > People have the library of Alexandria at their fingertips
>
> An LLM is not a database and contains no facts. It's a network of tokens and probabilities that can spit out things that often, but certainly not always, align with reality, but it's not any kind of verified reference like an encyclopedia or scholarly work. It's like asking a friend about something. Maybe they're remembering right, maybe they misheard someone, maybe they're relaying a rumor they heard, and maybe they're misremembering or hallucinating. Unlike a human though, an LLM doesn't have any type of declarative (factual) memory. Every output involves rolling the dice, even for the exact same question. Treating any LLM like an oracle of fact is just begging to be misled.
Source: reddit · AI Harm Incident · 1772722205.0 · ♥ 1
Coding Result
| Dimension | Value |
| --- | --- |
| Responsibility | none |
| Reasoning | deontological |
| Policy | none |
| Emotion | resignation |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
```json
[
  {"id": "rdc_o8qw4qh", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "rdc_o8s8q8a", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "rdc_o8qt9ix", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "mixed"},
  {"id": "rdc_o8rkrkx", "responsibility": "user", "reasoning": "deontological", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_o8sd8b6", "responsibility": "none", "reasoning": "deontological", "policy": "none", "emotion": "resignation"}
]
```
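The raw response is a JSON array with one coding record per comment, each carrying the four coding dimensions. A minimal sketch of extracting the record that matches the coding result shown above (the id `rdc_o8sd8b6` is taken from the response itself; the parsing approach here is an assumption, not the tool's actual pipeline):

```python
import json

# Raw LLM response as returned by the model: a JSON array of coding records.
raw = '''[
  {"id": "rdc_o8qw4qh", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "rdc_o8s8q8a", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "rdc_o8qt9ix", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "mixed"},
  {"id": "rdc_o8rkrkx", "responsibility": "user", "reasoning": "deontological", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_o8sd8b6", "responsibility": "none", "reasoning": "deontological", "policy": "none", "emotion": "resignation"}
]'''

records = json.loads(raw)

# Index the records by comment id so a single coded comment can be looked up.
by_id = {record["id"]: record for record in records}

# The record whose values populate the coding-result table for this comment.
rec = by_id["rdc_o8sd8b6"]
print(rec["reasoning"], rec["emotion"])  # deontological resignation
```

Validating each record against the expected set of dimension keys before accepting the response would guard against malformed model output.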