Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I literally said I have been using it to try and get it to code stuff. It writes great code *that does not work for purpose* until a human comes along and corrects it. The code is absolutely machine written, and I would say my current for-fun-almost-all-ai project is actually 99.9% machine written. But without that 0.1% the whole thing literally does not do what it is supposed to do at all, and the machine cannot figure out why. That is why I and others who are using it say that it needs a person who actually understands what is going on to be there babysitting it. With that assistance it is very capable. The problem is that the human element needs to learn too, and if the AI is doing almost all of the work via vibe-coding, the human will not be able to fix it. I am actually running into that problem at the moment, because the code is trying to do something really complex with a methodology I do not understand, and it does not work. But because I do not understand the methodology it is trying to use, I can't tell it where the problem is. So I am having to go and learn how to do the thing it is trying to do simply to tell it how to actually do it correctly for the situation it is trying to solve.
reddit AI Governance 1757784627.0 ♥ 1
Coding Result
Dimension       Value
Responsibility  none
Reasoning       unclear
Policy          none
Emotion         indifference
Coded at        2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id": "rdc_ne3bv30", "responsibility": "company", "reasoning": "mixed", "policy": "unclear", "emotion": "mixed"},
  {"id": "rdc_ne0tt4a", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "rdc_ne0yz7l", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_ne131q8", "responsibility": "user", "reasoning": "deontological", "policy": "none", "emotion": "mixed"},
  {"id": "rdc_nx3kwbo", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"}
]
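The raw response above is one JSON array covering a whole batch of comments, so a given comment's coding has to be looked up by its id. A minimal sketch of that lookup, assuming the response is well-formed JSON as shown (the function name `codings_by_id` is hypothetical, not part of the tool):

```python
import json

# Raw batch response, copied verbatim (reformatted) from the log above.
raw = '''[
  {"id": "rdc_ne3bv30", "responsibility": "company", "reasoning": "mixed", "policy": "unclear", "emotion": "mixed"},
  {"id": "rdc_ne0tt4a", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "rdc_ne0yz7l", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_ne131q8", "responsibility": "user", "reasoning": "deontological", "policy": "none", "emotion": "mixed"},
  {"id": "rdc_nx3kwbo", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"}
]'''

def codings_by_id(raw_response: str) -> dict:
    """Parse a raw batch response into a mapping of comment id -> coding dict."""
    return {item["id"]: item for item in json.loads(raw_response)}

codes = codings_by_id(raw)
# The coding shown in the table above corresponds to one id in the batch:
print(codes["rdc_ne0yz7l"]["emotion"])  # -> indifference
```

If the model returns malformed JSON, `json.loads` raises `json.JSONDecodeError`; a real pipeline would catch that and flag the batch for re-coding rather than crash.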