Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
This is a massive point that usually gets drowned out by the "intelligence" arms race. We’ve become so obsessed with O(1) reasoning speeds and context window sizes that we’ve completely decoupled capability from consequence.

The accountability gap is the real "black swan" of 2026. If a model makes a decision that causes systemic harm, the developers point to the weights, the users point to the prompt, and the corporation points to the TOS. We’ve essentially engineered a way to automate liability out of existence. It’s not just a technical problem; it’s a fundamental failure in how we define agency.
Source: reddit — Viral AI Reaction — timestamp 1776967302.0 — ♥ 3
Coding Result
Dimension        Value
Responsibility   distributed
Reasoning        consequentialist
Policy           regulate
Emotion          outrage
Coded at         2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id": "rdc_ohtyd15", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "rdc_ohuuqs0", "responsibility": "ai_itself", "reasoning": "mixed", "policy": "unclear", "emotion": "mixed"},
  {"id": "rdc_ohwdux6", "responsibility": "unclear", "reasoning": "consequentialist", "policy": "unclear", "emotion": "indifference"},
  {"id": "rdc_ohxwt96", "responsibility": "unclear", "reasoning": "consequentialist", "policy": "unclear", "emotion": "resignation"},
  {"id": "rdc_ohv3pc2", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"}
]
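The raw response is a JSON array with one record per coded comment. A minimal Python sketch of how such a batch can be parsed and indexed by record id (the association of the comment above with id rdc_ohv3pc2 is an inference from the matching Coding Result values, not something stated in the output):

```python
import json

# Raw batch response, copied verbatim from the model output above.
raw = """
[
  {"id":"rdc_ohtyd15","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"rdc_ohuuqs0","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"rdc_ohwdux6","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
  {"id":"rdc_ohxwt96","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"},
  {"id":"rdc_ohv3pc2","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"}
]
"""

# The four coding dimensions shown in the Coding Result table.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

# Index the batch by id so any coded comment can be looked up directly.
records = {r["id"]: r for r in json.loads(raw)}

# Sanity check: every record carries a value for every dimension.
for rec in records.values():
    assert all(dim in rec for dim in DIMENSIONS)

# Presumed record for the comment quoted above, matching the table
# (distributed / consequentialist / regulate / outrage).
coded = records["rdc_ohv3pc2"]
for dim in DIMENSIONS:
    print(f"{dim}: {coded[dim]}")
```

A lookup like this is also a cheap way to spot coding drift: if the table rendered in the UI ever disagrees with the record of the same id in the raw response, the discrepancy surfaces as a failed comparison rather than going unnoticed.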