Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
26:47 Whether or not an AI agent is more productive in creating code depends on the engineer’s skill level and their ability to create great software themselves. I am a Principal Engineer and I myself find agents slow me down — they need too much direction, which takes longer in writing markdowns and checking AI outputs than writing the code myself. But let me be clear — that’s because I am focusing on the “essentials” of software, and I’m now pretty quick at the “accidentals” (per Fred Brooks 1986) anyway. If I don’t know the accidentals and the essentials don’t matter much, I’ll vibe something (a one-off tool to do something that won’t ever get into a product) because that’s faster than me doing it. However, engineers are not some homogeneous group — they vary considerably in hard and soft skills. So for some engineers, using an AI agent genuinely is more productive and can generally help them achieve better outcomes. This is why, in my teams, many are using agents. What I’m doing to help them is to teach them what is important in the software they create — what to focus on when instructing the agent, how to keep specifications tight, how to abstract up and focus on the essentials. Sadly, I do think we are making a bit of a rod for our backs, because using the agents means people aren’t growing in the same way, so they are also missing out on learning that would benefit them. Whether this turns into a “skill debt” depends on whether the providers of the tools can be sustained, or whether smaller but effective models can be run directly on a dev machine so that there is less reliance on a service.
youtube 2026-03-06T02:4…
Coding Result
Dimension        Value
Responsibility   user
Reasoning        consequentialist
Policy           unclear
Emotion          mixed
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgxZru76v_uBdKULcw14AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_Ugx745Pos2bYi4qJfvt4AaABAg", "responsibility": "unclear", "reasoning": "consequentialist", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgwkyEeBU8ionUV2eUJ4AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgyGNW-jCWTplJXn_qN4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgyOUEYv5w1PftWeJgd4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_UgxgOs0igbq2aropUhR4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgyR2expQ9MmGE1355l4AaABAg", "responsibility": "unclear", "reasoning": "deontological", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgyHhSzEfQ4GOqcuwHZ4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_Ugw5d1PAcLSYAo92jEx4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "approval"},
  {"id": "ytc_UgxnFBWix4zUtIO2Si54AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"}
]
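The raw response is a JSON array of per-comment codings, each keyed by a comment id. A minimal sketch of how such a batch could be parsed and a single comment's coding looked up by id (field names and ids are taken from the response above; the lookup approach itself is illustrative, not part of the pipeline):

```python
import json

# Excerpt of the raw LLM response: one JSON object per coded comment.
raw = """[
  {"id": "ytc_UgwkyEeBU8ionUV2eUJ4AaABAg", "responsibility": "user",
   "reasoning": "consequentialist", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgyOUEYv5w1PftWeJgd4AaABAg", "responsibility": "company",
   "reasoning": "deontological", "policy": "ban", "emotion": "outrage"}
]"""

# Index the batch by comment id so each coding can be matched
# back to the comment it describes.
codings = {rec["id"]: rec for rec in json.loads(raw)}

# Look up the coding shown in the table above.
coding = codings["ytc_UgwkyEeBU8ionUV2eUJ4AaABAg"]
print(coding["responsibility"], coding["emotion"])  # → user mixed
```

Indexing by id (rather than relying on array order) is what allows the batched response to be joined safely to the original comments, since the model may return records in any order.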