Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
half true, half incredibly wrong ... agentic AI ... not there yet, horrible idea to use in business (never mind at scale for large projects). But AI is getting so good at coding, that if you keep the human in the loop you actually get incredible performance boosts. Still using the basic chat interface and looking at the code I copy & paste. Not having the AI create a mess in the background I will have to fix. I correct the moment it goes off the rails and my productivity has gone up by I guess somewhere between 100%-300% during coding (which is just a small part of what a developer does!!!). I really do not want to miss the AI, but I also really do not want to work for big corp in their system of how they deploy AI ... it would suck the fun out of the job/task. But the pendulum won't swing back ... they won't pay more again in the future, they will pay less and less (except for those exceptional minds at the top, they get more because they get the biggest boosts while helping AI to 'function'). And I have no doubt, that there are companies that use AI as a bargaining chip during negotiations ... but in most smaller companies AI is just boosting productivity and reducing the need - less need, more available programmers => lower salaries.
youtube AI Jobs 2026-02-08T10:1…
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  developer
Reasoning       mixed
Policy          industry_self
Emotion         approval
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgyAyBs_48q79LAFtPZ4AaABAg", "responsibility": "company",   "reasoning": "consequentialist", "policy": "regulate",      "emotion": "outrage"},
  {"id": "ytc_Ugyjqvq4qW8QRNF-9OZ4AaABAg", "responsibility": "none",      "reasoning": "unclear",          "policy": "none",          "emotion": "indifference"},
  {"id": "ytc_UgyHuJYhBQmtnb95gZh4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological",    "policy": "liability",     "emotion": "fear"},
  {"id": "ytc_Ugz_0-AM__B8pmOC9H54AaABAg", "responsibility": "developer", "reasoning": "mixed",            "policy": "industry_self", "emotion": "approval"},
  {"id": "ytc_UgwyuypgIMautTIw6oR4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological",    "policy": "none",          "emotion": "fear"},
  {"id": "ytc_UgyUQFTqObWkbfN9zBd4AaABAg", "responsibility": "company",   "reasoning": "virtue",           "policy": "ban",           "emotion": "outrage"},
  {"id": "ytc_UgxaMsnBZSoR_ahX8e94AaABAg", "responsibility": "none",      "reasoning": "unclear",          "policy": "none",          "emotion": "resignation"},
  {"id": "ytc_UgxA0HY_aptO0jlD0ix4AaABAg", "responsibility": "user",      "reasoning": "consequentialist", "policy": "none",          "emotion": "resignation"},
  {"id": "ytc_UgxKvm5gKfYdvUfVO7Z4AaABAg", "responsibility": "developer", "reasoning": "deontological",    "policy": "regulate",      "emotion": "fear"},
  {"id": "ytc_Ugy8EQNvaLvHnfvIfC54AaABAg", "responsibility": "company",   "reasoning": "consequentialist", "policy": "ban",           "emotion": "outrage"}
]
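The raw response above is a JSON array of coding records, one object per comment id. A minimal sketch of how such a batch response can be checked and matched back to a comment, assuming the field names shown above (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) and using one record copied from the dump; this is an illustrative helper, not part of the original pipeline:

```python
import json

# One record copied verbatim from the raw LLM response above.
raw = """[
  {"id": "ytc_Ugz_0-AM__B8pmOC9H54AaABAg",
   "responsibility": "developer", "reasoning": "mixed",
   "policy": "industry_self", "emotion": "approval"}
]"""

EXPECTED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

# Index records by comment id so a comment's coding can be looked up directly.
records = {r["id"]: r for r in json.loads(raw)}

# Validate that every record carries exactly the expected dimensions.
for rec in records.values():
    assert set(rec) == EXPECTED_KEYS, f"malformed record: {rec}"

# Fetch the coding for the comment shown in this view.
coding = records["ytc_Ugz_0-AM__B8pmOC9H54AaABAg"]
print(coding["responsibility"], coding["emotion"])  # developer approval
```

Indexing by `id` makes the lookup robust to the LLM returning records in a different order than the comments were sent.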