Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I'm extremely vocal about the downsides of AI development to my company's leadership team. I don't need them thinking we can move any faster or do this with less people. It is so rare for me to be able to use AI in my company's code base with success. Yes, we've seen small repos work well for a little bit, maybe a few months, and then it becomes too large for the context window, and now a repo completely created by claude that is a simple website is one of the most frustrating repos to deal with. Claude can no longer handle the development of this repo and the code is such garbage react and plain JS that no developer wants to deal with it.

Fwiw, I'm trying Opus 4.6 and Codex 5.1 lately. These respond so much faster but still are producing garbage code. It only gets worse with languages that don't do memory management for you; I've never seen worse C++ code since college. Sometimes it can help generate specific unit test cases I ask it to or create mock data for a schema I have, but it even fails at such a simple task like that. I can't trust AI, and the only people I hear speak positively of it are the salesmen for this technology and the out of touch leadership that buy this BS.
Source: reddit · AI Jobs · posted 1772508278.0 (Unix time) · ♥ 1
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  company
Reasoning       consequentialist
Policy          none
Emotion         fear
Coded at        2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id": "rdc_o8c99fs", "responsibility": "company", "reasoning": "virtue", "policy": "none", "emotion": "outrage"},
  {"id": "rdc_o8cefm7", "responsibility": "company", "reasoning": "deontological", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_o8cllbt", "responsibility": "company", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "rdc_o8cpuff", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "rdc_o8dctrl", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"}
]
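The raw response above is a JSON array with one record per coded comment, and the Coding Result table corresponds to the record whose dimensions match (here, "rdc_o8cllbt"). A minimal sketch of how the coded values can be recovered from the raw output, assuming the response parses as plain JSON (the record ids are taken verbatim from the output above, and the lookup key is an assumption for illustration):

```python
import json

# Raw model output as shown above: a JSON array, one record per coded comment.
raw = """[
  {"id": "rdc_o8c99fs", "responsibility": "company", "reasoning": "virtue", "policy": "none", "emotion": "outrage"},
  {"id": "rdc_o8cefm7", "responsibility": "company", "reasoning": "deontological", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_o8cllbt", "responsibility": "company", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "rdc_o8cpuff", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "rdc_o8dctrl", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"}
]"""

# Index the records by id so a single comment's coding can be looked up.
records = {r["id"]: r for r in json.loads(raw)}

# The record behind the Coding Result table above (id assumed for illustration).
coded = records["rdc_o8cllbt"]
print(coded["responsibility"], coded["reasoning"], coded["emotion"])
# -> company consequentialist fear
```

If the model ever returns malformed JSON, `json.loads` raises `json.JSONDecodeError`, which is a useful place to catch and flag comments that could not be coded.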