Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
>both responses make sense given the stakes. Honestly, I do not think both responses make sense. While I understand the appeal of automation, I think it needs to be both practically and visibly sequestered. AI can be *extremely* useful as a tool in programming, but its error rate is significant, and likely always will be. Even with really minor tasks it sometimes just loses the plot, and with complex ones it can go in insane directions when given too much leeway, breaking everything in its way. So making sure people at least sign off on anything it does gives you a place to put the blame. If someone breaks something, it was not an AI doing its normal AI thing that caused the problem; it is the person who is letting the LLM *do their job for them.* Without some kind of accountability guardrail, it is just going to keep inserting itself into everything, building more and more technical debt until nothing is maintainable. Ideally these guardrails would be foundational to how these systems are deployed, but it has already been demonstrated that AI companies have zero ethical boundaries, and the companies that push them just want the hype. So until we have governments that feel like protecting their citizens, smaller organizations need to implement it at the policy level.
reddit · Viral AI Reaction · 1776613640.0 · ♥ 2
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       consequentialist
Policy          regulate
Emotion         fear
Coded at        2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id": "rdc_oh3ee80", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "rdc_oh3j1nf", "responsibility": "user", "reasoning": "deontological", "policy": "liability", "emotion": "indifference"},
  {"id": "rdc_ohf5qu9", "responsibility": "distributed", "reasoning": "deontological", "policy": "regulate", "emotion": "fear"},
  {"id": "rdc_oi2py6c", "responsibility": "company", "reasoning": "consequentialist", "policy": "industry_self", "emotion": "approval"},
  {"id": "rdc_oh3e3td", "responsibility": "user", "reasoning": "deontological", "policy": "none", "emotion": "indifference"}
]
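The raw response batches codings for several comments, while the table above shows only the one matching this comment's id. A minimal sketch of how a record can be matched back to its id (the `coding_for` helper is hypothetical, not part of the actual coding pipeline; the two sample records are taken verbatim from the response above):

```python
import json

# Two records copied from the raw LLM response above.
raw = ('[{"id":"rdc_oh3ee80","responsibility":"ai_itself",'
      '"reasoning":"consequentialist","policy":"regulate","emotion":"fear"},'
      '{"id":"rdc_oh3j1nf","responsibility":"user",'
      '"reasoning":"deontological","policy":"liability","emotion":"indifference"}]')

def coding_for(records, comment_id):
    """Return the coded dimensions for one comment id, or None if absent."""
    return next((r for r in records if r["id"] == comment_id), None)

records = json.loads(raw)
coded = coding_for(records, "rdc_oh3ee80")
print(coded["responsibility"], coded["emotion"])  # ai_itself fear
```

Looking up by id rather than by position keeps the mapping correct even if the model returns the batch in a different order.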