Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
AI is very, very good at giving you what you expect to see. Ask it to program something, and it will output code that appears correct. Ask it to review a long document, and it will output something that looks like a summary. I've heard it referred to as the rock problem. Take a picture of a rock. Ask AI what type of rock it is. It will tell you that it's identified the rock as blah blah blah, and give you details about that type if you wish. Is it correct? Well, most of us aren't geologists. We don't know. But it looks like what we expect to see an expert say. A lot of management exists in a world where they don't understand exactly what their subordinates are doing. They've relied on listening to people and judging how accurate it sounds. AI is like catnip to these people - its output sounds like what a skilled person would say. Combine this with the fact that AI companies are often at the grow-or-die stage of VC funding and, as such, tend to wildly oversell their capabilities. It's a perfect storm.
reddit AI Responsibility 1754848648.0 ♥ 98
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       consequentialist
Policy          none
Emotion         mixed
Coded at        2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id": "rdc_n7yxgpm", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "approval"},
  {"id": "rdc_n7yp39x", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_n86u1hz", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "mixed"},
  {"id": "rdc_n7z5f3z", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "mixed"},
  {"id": "rdc_n8b58to", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "outrage"}
]
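A raw batch response like the one above has to be parsed and validated before the per-comment codes can be stored. The sketch below shows one way to do that in Python; the `ALLOWED` value sets are inferred only from the values visible in this response, not from the project's actual codebook, and `parse_batch` is a hypothetical helper name, not part of the pipeline shown here.

```python
import json

# Allowed values per coding dimension. These sets are an assumption,
# reconstructed from the values visible in the raw response above;
# the real codebook may permit more values.
ALLOWED = {
    "responsibility": {"none", "ai_itself"},
    "reasoning": {"mixed", "consequentialist"},
    "policy": {"none"},
    "emotion": {"approval", "indifference", "mixed", "outrage"},
}

def parse_batch(raw: str) -> list[dict]:
    """Parse a raw LLM batch response and validate every record."""
    records = json.loads(raw)
    for rec in records:
        if "id" not in rec:
            raise ValueError("record is missing an 'id' field")
        for dim, allowed in ALLOWED.items():
            value = rec.get(dim)
            if value not in allowed:
                raise ValueError(
                    f"{rec['id']}: unexpected {dim} value {value!r}"
                )
    return records

raw = (
    '[{"id":"rdc_n7yxgpm","responsibility":"none",'
    '"reasoning":"mixed","policy":"none","emotion":"approval"}]'
)
print(parse_batch(raw)[0]["emotion"])  # approval
```

Validating eagerly like this surfaces malformed or off-codebook LLM output at ingest time, instead of letting a stray value silently enter the coded dataset.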