Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
The difference between fully hands-off and managed AI software development is whether you care about what is happening under the hood. If you only care whether the program works, hands-off is fine, at least at first. The problem with hands-off, multi-agent AI coding is that it often rewrites the entire program when you change something. That becomes expensive in the long run and can cause serious problems, so users take a big risk without knowing the consequences. The main reason is that a feature can be added to a program in many different ways, and the right implementation depends on the situation. For the AI to choose the best implementation, the documentation would have to be truly comprehensive and include hypothetical scenarios from the program's life cycle, at which point the whole speed advantage disappears. Personally, I use the 1. ask, 2. plan, 3. implement method mentioned here. Very often the ask phase takes longer than you might expect before the AI understands what I want, but overall it saves time. That said, I have built prototypes with the hands-off method myself, and it works great for that. I believe the future lies somewhere in between and both approaches will be used, probably even in the same project. But that requires further development of the current tools: for example, the base code could be built in more controlled ways, with customer customizations handled by a multi-agent AI supply chain.
youtube 2026-03-08T12:4…
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       consequentialist
Policy          none
Emotion         indifference
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[{"id":"ytc_Ugx1hwG8jz7n3yJxuPN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
 {"id":"ytc_Ugz7YUHRLGT7DF3u0MB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
 {"id":"ytc_UgyYAEMw2phKmpiuAnd4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
 {"id":"ytc_Ugxpkz7pNBSVtJPySVV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
 {"id":"ytc_Ugz-QN3JCKp6Sb41OA14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
 {"id":"ytc_UgzoSrFnMczjgsf2WVV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
 {"id":"ytc_UgyZ_tpXB9P1zyZG5i94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
 {"id":"ytc_UgxOp5fL57s3ZJ6BuXx4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"},
 {"id":"ytc_Ugw4tXNjmzp6h88rx7l4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
 {"id":"ytc_Ugw0UTjUwKMuAwzJoPZ4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"none","emotion":"outrage"}]
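A minimal sketch of how a raw batch response in the shape above could be parsed back into per-comment codes. The JSON shape (a list of objects with `id`, `responsibility`, `reasoning`, `policy`, `emotion`) follows the raw response; the function name `codes_by_id` and the missing-dimension check are assumptions, not part of the original tool.

```python
import json

# Hypothetical single-record excerpt in the same shape as the raw response.
raw_response = """[
  {"id": "ytc_UgyYAEMw2phKmpiuAnd4AaABAg",
   "responsibility": "developer", "reasoning": "consequentialist",
   "policy": "none", "emotion": "indifference"}
]"""

# The four coding dimensions shown in the result table above.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def codes_by_id(raw: str) -> dict:
    """Index coded records by comment id, rejecting incomplete records."""
    indexed = {}
    for rec in json.loads(raw):
        missing = [d for d in DIMENSIONS if d not in rec]
        if missing:
            raise ValueError(f"{rec.get('id')}: missing {missing}")
        indexed[rec["id"]] = {d: rec[d] for d in DIMENSIONS}
    return indexed

codes = codes_by_id(raw_response)
print(codes["ytc_UgyYAEMw2phKmpiuAnd4AaABAg"]["responsibility"])  # developer
```

Looking up a record by its `ytc_…` id is what links each line of the raw response back to the coded comment it annotates.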