Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I've been vibe coding my own program for the past few weeks and I'm also in a masters program for IT, so I have written code before and understand a bit about computers and programs. I cannot believe major corporations are allowing AI such as this to write code within their system architecture.. it's so miserable. I have claude max and it's just endlessly working in a circle, i will tell it exactly what was wrong and why it's logic is flawed, and it just ignores me and still writes that flawed logic. it constantly forgets the entire goal of the project/task and gets tunnel visioned on that immediate task without any consideration how it plays into the larger picture. especially if you use claude code rather than chat. i will tell it constantly, "please remember the larger picture and the goals of this project, keep them front of mind, every change needs to work within the larger system. then it just rushes through the most immediate task ive asked without any consideration. then i have to yell at it for being short sighted and tunnel visioned while ignoring all my actual ideas for the system architecture we are actually trying to build. ive found using claude chat to work through the logic and provide me with code and commands i can run in the terminal and codes i can actually read through and verify its logic and how it plays into the larger system is a much more effective way to use it even though it's slower, we can really break it down and think about every line of code, what it does and how it will play into the larger system.. which is a fairly effective tool. but very time consuming. i have to imagine for nearly any senior developer it would just slow them down to use it in such a way.
youtube AI Jobs 2026-03-27T13:1…
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  company
Reasoning       deontological
Policy          unclear
Emotion         outrage
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgxtK7seRzcQ6VEGZ5R4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgzDfg-reea5cUKMQC94AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_Ugw9FNZFgkKEcZ9YhQ94AaABAg", "responsibility": "company", "reasoning": "virtue", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_UgwYrsuqKgIj9VEf_3d4AaABAg", "responsibility": "unclear", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgxCFRK_mPEnIktKgpp4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_UgwxPb-L5xjPUetv01t4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_UgyvJwQm1uBD6o9KRDR4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_Ugyw4cxA96J28H1S9bt4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgzEe9bol_-jzLMc2PR4AaABAg", "responsibility": "company", "reasoning": "virtue", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgyY0ca8QLGEbHvZnT54AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "resignation"}
]
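The coding result shown above is extracted from this raw JSON array by matching the comment's id. A minimal sketch of how such a response might be parsed and sanity-checked (the field names come from the response itself; the allowed-value sets below are inferred only from the codes that appear in this batch, since the full codebook is not shown here):

```python
import json

# Two rows of the raw LLM response above, as an illustration
raw = '''[
  {"id": "ytc_UgxtK7seRzcQ6VEGZ5R4AaABAg", "responsibility": "company",
   "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgxCFRK_mPEnIktKgpp4AaABAg", "responsibility": "company",
   "reasoning": "deontological", "policy": "unclear", "emotion": "outrage"}
]'''

# Allowed values observed in this response; the real codebook
# may define additional categories for each dimension.
ALLOWED = {
    "responsibility": {"company", "ai_itself", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"none", "ban", "unclear"},
    "emotion": {"outrage", "resignation", "fear", "approval", "mixed"},
}

def validate(rows):
    """Return (id, dimension, value) tuples for any out-of-codebook codes."""
    errors = []
    for row in rows:
        for dim, allowed in ALLOWED.items():
            value = row.get(dim)
            if value not in allowed:
                errors.append((row.get("id"), dim, value))
    return errors

def lookup(rows, comment_id):
    """Find the coded row for a single comment id, or None if absent."""
    return next((r for r in rows if r.get("id") == comment_id), None)

rows = json.loads(raw)
print(validate(rows))                                   # [] when every code is valid
print(lookup(rows, "ytc_UgxtK7seRzcQ6VEGZ5R4AaABAg"))   # the matching coded row
```

Validating each batch this way catches malformed or off-codebook LLM output before the codes are tabulated, which is why "Raw LLM Response" is worth preserving alongside the summarized result.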