Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
there's a third shift that doesn't get mentioned: the brief quality problem. with slow execution you could get away with a vague spec and course-correct in review. with AI output arriving immediately, vagueness hits you on the first draft and you realize the spec was the actual bottleneck the whole time. AI didn't change the work — it made the upstream thinking visible.
Source: reddit · Viral AI Reaction · 1776961282.0 · ♥ 1
Coding Result
Dimension       Value
Responsibility  none
Reasoning       consequentialist
Policy          none
Emotion         indifference
Coded at        2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id": "rdc_oi1cf1o", "responsibility": "none", "reasoning": "mixed",            "policy": "none", "emotion": "approval"},
  {"id": "rdc_ohrvq21", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "rdc_ohrw3s0", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_oht1p76", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "rdc_ohugvys", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"}
]
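The raw response above is a JSON array with one code record per batched comment, each carrying the four coding dimensions plus an `id`. A minimal sketch of how such a response could be parsed and checked before use (the `DIMENSIONS` tuple and the id-based lookup are illustrative assumptions, not part of the tool's actual pipeline):

```python
import json

# Raw LLM response: a JSON array of code records, one per comment.
raw = """[
  {"id": "rdc_oi1cf1o", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "approval"},
  {"id": "rdc_ohrvq21", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "rdc_ohrw3s0", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_oht1p76", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "rdc_ohugvys", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"}
]"""

# Assumed dimension names, taken from the Coding Result table above.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

records = json.loads(raw)
for rec in records:
    missing = [d for d in DIMENSIONS if d not in rec]
    if missing:
        raise ValueError(f"record {rec.get('id')} missing dimensions: {missing}")

# Index by record id so one comment's codes can be looked up directly.
by_id = {rec["id"]: rec for rec in records}
print(by_id["rdc_ohrw3s0"]["emotion"])  # indifference
```

Validating the dimension keys before indexing guards against the common failure mode where the model drops or renames a field in one record of the batch.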