Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I think the people cheering in the comments are in for some disappointment. It's true that AI is very overhyped, but it's also true that even if the models were to get no better, which is very unlikely, their capabilities right now are probably enough to replace 20% or more of white-collar workers. For most people, AI is ChatGPT or the Gemini app on their phone, but go just one layer deeper into the Claude Code world, and you see terrifyingly capable workflows. Right now, there is a huge gap between what the state-of-the-art models are capable of, which is a lot, and what enterprise has figured out how to integrate, which is not much at all. This gap will close, maybe not in one year, maybe not even in three years... but it is going to happen fast. A lot of people expect AI to work like magic out-of-the-box, whereas in reality it is like any other tool. It's taken me a lot of effort to get it to the point where it reliably does what I need it to do, but now that it is there, it is barely indistinguishable from magic... and it's getting better all the time, both in terms of the underlying models and also the ecosystem of tools that improve the models' abilities and effectiveness. Even just taking the software engineering field, so many incredibly accomplished developers over the last six months have had the "oh god" moment where they've realised that current models, along with the right skills and tools, are better than them across many of the dimensions they work in. If it sounds like I'm excited by this, I'm not. It's terrifying. I just don't think it helps us to have our heads in the sand.
youtube AI Jobs 2026-02-09T09:1…
Coding Result
Dimension        Value
Responsibility   none
Reasoning        consequentialist
Policy           none
Emotion          resignation
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_Ugy_NNX8yuWvFbcL9Wl4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgzoB15R1hyPrrN9t_l4AaABAg","responsibility":"user","reasoning":"deontological","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgyJw7T5dqFhQTzPulR4AaABAg","responsibility":"company","reasoning":"virtue","policy":"industry_self","emotion":"approval"},
  {"id":"ytc_Ugzmmk-u4olrYJDlLid4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugyg_tZQuqKI71GL8PR4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgydXZIZONB4ZXK6Nq54AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgzH9egMsZLgpM1mveF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugyt7QbtVRua5nEllqJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugw9jt2xdD9xS07ivZV4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugwnt3uNH5kkT6tG0rB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"}
]
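The raw response is a JSON array of per-comment objects, each keyed by comment id with the four coded dimensions. A minimal sketch of how such a batch response could be parsed and indexed for lookup (the helper name `index_by_id` is hypothetical; the field names match the response shown above):

```python
import json

def index_by_id(raw_response: str) -> dict:
    """Parse a raw LLM batch response and index the coded
    dimensions (responsibility, reasoning, policy, emotion)
    by comment id."""
    rows = json.loads(raw_response)
    return {row["id"]: {k: v for k, v in row.items() if k != "id"}
            for row in rows}

# Two entries excerpted from the raw response above.
raw = '''[
  {"id":"ytc_Ugy_NNX8yuWvFbcL9Wl4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgzH9egMsZLgpM1mveF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"}
]'''

coded = index_by_id(raw)
print(coded["ytc_UgzH9egMsZLgpM1mveF4AaABAg"]["emotion"])  # resignation
```

Indexing by id makes it straightforward to join a coded row back to the original comment, as this page does for the comment displayed above.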