Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
The problem even with Opus 4.6 is that 1) it doesn't give an f about conventions (unless they're explicitly explained), and 2) it never refactors. Both of these lead to the codebase getting more and more entangled over time, and ultimately the LLM can't keep up with all of the hacks it has created. Cracks begin to appear, and because the control flow and execution paths are so overly complex, there's no easy way out except rewriting the whole thing from scratch. And I'm in no way saying that this is a bad process (software development would be iterative anyway, with LLMs or not). But I am saying that there's simply no way to get rid of the human developer just yet. Except if you're developing something trivial, like a todo app or a simple flight simulator that no one will ever play.
youtube AI Jobs 2026-02-08T14:0…
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       deontological
Policy          unclear
Emotion         indifference
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgxyTDzUWxd-iuB8RlV4AaABAg", "responsibility": "none",      "reasoning": "unclear",          "policy": "unclear",       "emotion": "approval"},
  {"id": "ytc_UgyzJVQ3LXdxwSOeAAB4AaABAg", "responsibility": "developer", "reasoning": "virtue",           "policy": "unclear",       "emotion": "indifference"},
  {"id": "ytc_UgwsVLZCEsuftd1bOSN4AaABAg", "responsibility": "none",      "reasoning": "consequentialist", "policy": "unclear",       "emotion": "indifference"},
  {"id": "ytc_Ugx8HqC9MjFBs0Wz5lF4AaABAg", "responsibility": "company",   "reasoning": "consequentialist", "policy": "unclear",       "emotion": "resignation"},
  {"id": "ytc_UgxFHzsdu0J1uxNv7Vh4AaABAg", "responsibility": "company",   "reasoning": "consequentialist", "policy": "none",          "emotion": "outrage"},
  {"id": "ytc_UgyJNlghgFdOBHTFcdx4AaABAg", "responsibility": "company",   "reasoning": "deontological",    "policy": "unclear",       "emotion": "outrage"},
  {"id": "ytc_Ugy3qiOmab-UcwxR4lR4AaABAg", "responsibility": "company",   "reasoning": "consequentialist", "policy": "none",          "emotion": "resignation"},
  {"id": "ytc_UgxJvCF8me8wOVNoAZV4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological",    "policy": "unclear",       "emotion": "indifference"},
  {"id": "ytc_Ugyq3uw8erjOwOyconZ4AaABAg", "responsibility": "ai_itself", "reasoning": "unclear",          "policy": "unclear",       "emotion": "frustration"},
  {"id": "ytc_Ugz4XNlFd5depzlP0i94AaABAg", "responsibility": "company",   "reasoning": "consequentialist", "policy": "industry_self", "emotion": "approval"}
]
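A response like the one above can be checked programmatically before it is accepted into the coded dataset. The following is a minimal sketch, assuming the schema inferred from the records on this page; the `validate` helper and the `ALLOWED` value sets are hypothetical (built only from the values that actually appear here) and would need to match the real codebook.

```python
import json
from collections import Counter

# Hypothetical allowed values per coding dimension, inferred from the
# records shown above; the real codebook may define more categories.
ALLOWED = {
    "responsibility": {"none", "developer", "company", "ai_itself"},
    "reasoning": {"unclear", "virtue", "consequentialist", "deontological"},
    "policy": {"unclear", "none", "industry_self"},
    "emotion": {"approval", "indifference", "resignation", "outrage", "frustration"},
}

def validate(record: dict) -> list:
    """Return a list of problems; an empty list means the record passes."""
    problems = []
    if not str(record.get("id", "")).startswith("ytc_"):
        problems.append("missing or malformed id")
    for field, allowed in ALLOWED.items():
        if record.get(field) not in allowed:
            problems.append(f"bad value for {field}: {record.get(field)!r}")
    return problems

# Parse a raw LLM response (shortened to one record for illustration)
# and reject it if any record fails validation.
raw = ('[{"id":"ytc_UgxyTDzUWxd-iuB8RlV4AaABAg","responsibility":"none",'
       '"reasoning":"unclear","policy":"unclear","emotion":"approval"}]')
records = json.loads(raw)
bad = {r["id"]: validate(r) for r in records if validate(r)}
assert not bad, f"rejected records: {bad}"

# Simple sanity summary over the accepted batch.
emotions = Counter(r["emotion"] for r in records)
print(emotions)
```

Validating each batch this way catches the usual LLM failure modes (a misspelled category, a dropped field, an id copied incorrectly) before they silently enter the analysis.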