Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
This video contains some good points, but other parts of it are just downright wrong, to a degree that looks kind of embarrassing. So much of this is making generalizations based on models from 2024 or before, but Claude Code and modern Codex models are way, *way* better. Conflating them without being specific is shoddy. By the way, I agree completely with the point that it’s stupid to assume AI will replace humans. That is not the issue. The issue is that the video gives technically incorrect explanations and then uses them to justify its conclusions. Let’s take the argument around 10:20: “AI can hallucinate.” Sure, ok man: we all know that, that’s table stakes. Yes, the engineer using Claude on their production database was absolutely stupid. But your generalizations here simply don’t hold water, and they lean on technical tropes like “until we fix the reward system, we’ll never replace human engineers.” (Vague, unjustified…) You can make good points without resorting to broscience. The points about the dangers of replacing humans are good. The false technical generalizations are simply not it, though. No offense intended, love to see your videos and enthusiasm
youtube · AI Jobs · 2026-03-11T17:1…
Coding Result
Dimension        Value
Responsibility   none
Reasoning        mixed
Policy           none
Emotion          outrage
Coded at         2026-04-27T06:24:59.937377
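Each comment is coded on four fixed dimensions plus a timestamp. As a minimal sketch, assuming the label sets are limited to the values visible on this page (the real codebook may define more), the result could be modelled and validated like this:

```python
# Sketch of the coding schema as Python types. The four dimensions come
# from the table above; the label sets below are ONLY the values observed
# on this page and are an assumption about the full codebook.
from dataclasses import dataclass
from datetime import datetime

RESPONSIBILITY = {"none", "ai_itself", "company"}
REASONING = {"consequentialist", "deontological", "mixed"}
POLICY = {"none", "industry_self", "liability"}
EMOTION = {"fear", "indifference", "approval", "outrage", "resignation"}


@dataclass
class CodingResult:
    comment_id: str
    responsibility: str
    reasoning: str
    policy: str
    emotion: str
    coded_at: datetime

    def __post_init__(self) -> None:
        # Reject any label the model invents outside the known vocabulary.
        for field, allowed in (
            (self.responsibility, RESPONSIBILITY),
            (self.reasoning, REASONING),
            (self.policy, POLICY),
            (self.emotion, EMOTION),
        ):
            if field not in allowed:
                raise ValueError(f"unknown label: {field!r}")
```

Validating at construction time means an out-of-vocabulary label from the model fails loudly instead of silently entering the coded dataset.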
Raw LLM Response
[ {"id":"ytc_UgwNsc2j87xj_d6K9sp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}, {"id":"ytc_UgxaPOl4MKwV1_L1cmB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_Ugzgml00kvFFaYkij-R4AaABAg","responsibility":"company","reasoning":"mixed","policy":"industry_self","emotion":"approval"}, {"id":"ytc_UgwfBwlXj8E6cRIVHPF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"}, {"id":"ytc_Ugya5c2gsguTds5XTa54AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"outrage"}, {"id":"ytc_Ugzm7i1eNUUH_UCmy794AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"fear"}, {"id":"ytc_UgzrodSy5xUDiVgtajl4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_UgwvHT5uye2tdNn7zyh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"}, {"id":"ytc_Ugy24qDFGUO1e-lY_td4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"}, {"id":"ytc_Ugyan-UTjFf7BTpRkCd4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"outrage"} ]