Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
AI can guess, but it cannot be the source of truth. That is why I still prefer TDD: write the tests first, and let the AI generate the code. If there is a bug, the test was incomplete, and you have to refine or extend the test. Tests are the truth; start with them to specify what you want. They are the context the AI uses to generate the code. Context should not be only markdown files and chat prompts: the tests should be written by a human, with very clear acceptance criteria for what you want. Also configure your linting and other code-quality tools, which verify that the code really follows your guidelines. You can add markdown files for information that cannot be expressed in the tests and tooling. But will AI increase productivity as much as some people claim? I don't think so. The creative and thinking process takes a lot of time, as does writing down the specification (the tests). Writing the actual code is just a detail needed to make it work, but it should match what you thought out beforehand, and that has to be very clear. If it isn't clear, the AI will just guess and won't produce what you expect. All your thinking has to be written out for the AI, and that also costs time. So time spent writing code will decrease, while time spent writing prompts and context will increase. The total time you need may change somewhat, but you cannot do twice as much.
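The test-first workflow the comment describes can be sketched in a few lines. This is a hypothetical illustration, not from the source: the function `slugify` and its rules are invented examples. The point is the ordering: the human writes the tests (the acceptance criteria) first, and the implementation, whether AI-generated or not, only counts as done when it passes them.

```python
import re

# Step 2 (AI-generated or hand-written): the implementation.
# `slugify` is a hypothetical example function: turn a title into a URL slug.
def slugify(text: str) -> str:
    text = re.sub(r"[^a-z0-9]+", "-", text.lower())  # non-alphanumerics -> hyphen
    return text.strip("-")

# Step 1 (written first, by a human): tests are the spec and the truth.
def test_slugify():
    assert slugify("Hello World") == "hello-world"      # lowercase, hyphen-joined
    assert slugify("AI, Jobs & TDD!") == "ai-jobs-tdd"  # punctuation stripped
    assert slugify("a  --  b") == "a-b"                 # repeated separators collapsed

test_slugify()
```

If a bug later surfaces that these tests did not catch, the comment's advice is to extend the tests first, then regenerate or fix the code until they pass again.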
YouTube · AI Jobs · 2026-03-18T15:5…
Coding Result
Responsibility: none
Reasoning: consequentialist
Policy: none
Emotion: approval
Coded at: 2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id": "ytc_UgwUfs2JVQXz6_eT_Dp4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgxnhQ1jqNHVzHXa_yp4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "liability", "emotion": "indifference"},
  {"id": "ytc_UgxYKJpZhZA2NyDJs654AaABAg", "responsibility": "ai_itself", "reasoning": "mixed", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_UgwEdGljEssG3ptBBYx4AaABAg", "responsibility": "user", "reasoning": "virtue", "policy": "industry_self", "emotion": "fear"},
  {"id": "ytc_Ugzd06y2PTExOj9Apkt4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugw3ZONu-InUDq2zwzR4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgzE7Ylbb21u5lXRSJZ4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgzjPQiqs_2VQE8RhkV4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgzzCaYUb0FIX0vefGx4AaABAg", "responsibility": "none", "reasoning": "virtue", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgxJBNx58SI-2Qemwe14AaABAg", "responsibility": "none", "reasoning": "contractualist", "policy": "none", "emotion": "approval"}
]
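The raw response is a JSON array of per-comment codings, and the coding-result fields shown above correspond to one entry of that array, keyed by comment id. A minimal sketch of how such a record could be looked up (using only the first entry of the response above; the variable names are assumptions, not part of any pipeline):

```python
import json

# First record copied from the raw LLM response above.
raw = ('[{"id":"ytc_UgwUfs2JVQXz6_eT_Dp4AaABAg","responsibility":"none",'
       '"reasoning":"consequentialist","policy":"none","emotion":"approval"}]')

# Index the array by comment id, then pull out one coding.
codings = {row["id"]: row for row in json.loads(raw)}
record = codings["ytc_UgwUfs2JVQXz6_eT_Dp4AaABAg"]
print(record["reasoning"], record["emotion"])  # consequentialist approval
```

Indexing by id makes the lookup robust to the model returning the records in a different order than the comments were sent.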