Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
After I spent around a day trying to get AI to generate a semi-passable wallpaper image, I regretted not spending the time improving my drawing skills instead. Sure, I wouldn't have gotten to that result right away, but the point is, I would have actually improved. The real long-term problem with AI is that it just doesn't scale well. When you look up a library's documentation online for the first time, it can take quite a while to figure out where to find it, what the latest version is, etc. But as you do it more, you get used to it, learn, improve, build abstractions (such as taking notes), and in general get much faster. And here is the problem with AI. You can spend months working with AI, but your coding speed will not improve. You won't get better. There are some things you can improve by getting the AI to understand you better, but that is extremely limited, because the AI is a black box, and extremely volatile. This is the main reason why I drastically lowered my AI usage. There are legitimately useful places for AI, especially when it comes to creating helpful learning examples or getting new ideas (when you've run out of your own), or as a rubber-ducking device. But its use must always be measured, and there's just no way that it is going to produce that much return on investment. If companies seriously want to improve their workflow and coding speed, they need to stop focusing on "moving fast" and instead focus more on building abstractions that work properly long-term: things like well-polished IDE plugins, testing tools, documentation, and deployment systems.
youtube AI Jobs 2026-02-08T09:5…
Coding Result
Dimension       Value
Responsibility  user
Reasoning       consequentialist
Policy          none
Emotion         resignation
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgyAyBs_48q79LAFtPZ4AaABAg", "responsibility": "company",   "reasoning": "consequentialist", "policy": "regulate",      "emotion": "outrage"},
  {"id": "ytc_Ugyjqvq4qW8QRNF-9OZ4AaABAg", "responsibility": "none",      "reasoning": "unclear",          "policy": "none",          "emotion": "indifference"},
  {"id": "ytc_UgyHuJYhBQmtnb95gZh4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological",    "policy": "liability",     "emotion": "fear"},
  {"id": "ytc_Ugz_0-AM__B8pmOC9H54AaABAg", "responsibility": "developer", "reasoning": "mixed",            "policy": "industry_self", "emotion": "approval"},
  {"id": "ytc_UgwyuypgIMautTIw6oR4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological",    "policy": "none",          "emotion": "fear"},
  {"id": "ytc_UgyUQFTqObWkbfN9zBd4AaABAg", "responsibility": "company",   "reasoning": "virtue",           "policy": "ban",           "emotion": "outrage"},
  {"id": "ytc_UgxaMsnBZSoR_ahX8e94AaABAg", "responsibility": "none",      "reasoning": "unclear",          "policy": "none",          "emotion": "resignation"},
  {"id": "ytc_UgxA0HY_aptO0jlD0ix4AaABAg", "responsibility": "user",      "reasoning": "consequentialist", "policy": "none",          "emotion": "resignation"},
  {"id": "ytc_UgxKvm5gKfYdvUfVO7Z4AaABAg", "responsibility": "developer", "reasoning": "deontological",    "policy": "regulate",      "emotion": "fear"},
  {"id": "ytc_Ugy8EQNvaLvHnfvIfC54AaABAg", "responsibility": "company",   "reasoning": "consequentialist", "policy": "ban",           "emotion": "outrage"}
]
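When inspecting raw model output like the array above, it helps to verify that every record carries the five coding dimensions before trusting the coded values. Below is a minimal sketch in Python; the field names are taken from the records shown here, but the function name and the exact validation rules are illustrative assumptions, not part of any particular pipeline.

```python
import json

# The five fields each coded record is expected to carry,
# as seen in the raw response above.
EXPECTED_FIELDS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def validate_raw_response(raw: str) -> list[dict]:
    """Parse a raw LLM response and check each record has the expected fields.

    Raises ValueError if the payload is not a JSON array or if any
    record is missing one of the coding dimensions.
    """
    records = json.loads(raw)
    if not isinstance(records, list):
        raise ValueError("expected a JSON array of coded records")
    for i, rec in enumerate(records):
        missing = EXPECTED_FIELDS - rec.keys()
        if missing:
            raise ValueError(f"record {i} is missing fields: {sorted(missing)}")
    return records

# Hypothetical single-record payload, shaped like the response above.
sample = (
    '[{"id":"ytc_example","responsibility":"user",'
    '"reasoning":"consequentialist","policy":"none","emotion":"resignation"}]'
)
records = validate_raw_response(sample)
print(records[0]["emotion"])  # resignation
```

A check like this catches truncated or malformed model output (a common failure mode when the model stops mid-array) before the records are loaded into the coding table.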