# Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples:
- ytc_UgwB04mSG…: “It’s almost like AI supremacy is basically The One Ring. while everyone endeavor…”
- ytc_UgwKxShHu…: “4:05 ‘We aren’t X, we’re Y’; ‘it isn’t X, it’s Y’. This script was written by Ch…”
- ytc_UgwZboY-C…: “I do think there are some interesting debates to be had about what is considered…”
- ytc_UgwHIP-3f…: “My empathy prevents me from not treating things I become attached to with dignit…”
- rdc_czlaccg: “Because every corporation now embraces the old Decimation systems whom the inven…”
- ytc_UgxCrdBxG…: “This video truly moved me, thank you for cumulating your thought-through opinion…”
- ytc_UgwhBco4e…: “I feel like scrpited videos aren't a perfect analogy, since people do watch and …”
- ytc_UgwhapIfc…: “There is no program anywhere that can tell the difference between the low qualit…”
## Comment
This video contains some good points, but other parts of it are just downright wrong to a degree that it looks kind of embarrassing. So much of this is making generalizations based on models from 2024 or before, but Claude Code and modern Codex models are way, *way* better. Conflating them without being specific is shoddy.
By the way, I agree completely with the point that it’s stupid to assume AI will replace humans. That is not the issue. The issue is that the video offers technically incorrect explanations and then uses them to justify its conclusions. Let’s take the argument around 10:20: “AI can hallucinate.” Sure, ok man: we all know that, that’s table stakes. Yes, the engineer using Claude to engineer their production database was absolutely stupid. But your generalizations here simply don’t hold water, and they lean on technical tropes like “until we fix the reward system, we’ll never replace human engineers.” (Vague, unjustified…)
You can make good points without resorting to broscience. The points about dangers of replacing humans are good. The false technical generalizations are simply not it, though.
No offense intended, love to see your videos and enthusiasm
youtube · AI Jobs · 2026-03-11T17:1…
## Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | mixed |
| Policy | none |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:59.937377 |
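The dimensions in the table above can be captured in a small record type. This is a hypothetical sketch (the `CodedComment` class is not the tool's actual code), and the allowed category sets include only the values observed in this page's codings; the real scheme may define more:

```python
from dataclasses import dataclass

# Category values observed in this batch; the full coding scheme may define more.
RESPONSIBILITY = {"none", "ai_itself", "company"}
REASONING = {"consequentialist", "deontological", "mixed"}
POLICY = {"none", "industry_self", "liability"}
EMOTION = {"fear", "indifference", "approval", "outrage", "resignation"}

@dataclass
class CodedComment:
    """One coded comment, mirroring the dimensions in the result table."""
    id: str
    responsibility: str
    reasoning: str
    policy: str
    emotion: str

    def validate(self) -> bool:
        # A record is usable only if every dimension is a known category.
        return (self.responsibility in RESPONSIBILITY
                and self.reasoning in REASONING
                and self.policy in POLICY
                and self.emotion in EMOTION)
```

Validating each record this way catches the common failure mode where the model invents a category label outside the codebook.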
## Raw LLM Response
```json
[
{"id":"ytc_UgwNsc2j87xj_d6K9sp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgxaPOl4MKwV1_L1cmB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugzgml00kvFFaYkij-R4AaABAg","responsibility":"company","reasoning":"mixed","policy":"industry_self","emotion":"approval"},
{"id":"ytc_UgwfBwlXj8E6cRIVHPF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugya5c2gsguTds5XTa54AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugzm7i1eNUUH_UCmy794AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"fear"},
{"id":"ytc_UgzrodSy5xUDiVgtajl4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwvHT5uye2tdNn7zyh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugy24qDFGUO1e-lY_td4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_Ugyan-UTjFf7BTpRkCd4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"outrage"}
]
```
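A minimal sketch of how a raw response like the one above could be parsed and indexed for lookup by comment ID. The two records are copied from the array above; the `by_id` dict and the overall snippet are an assumption about usage, not the tool's actual implementation:

```python
import json

# Two records copied from the raw LLM response above, as a stand-in
# for the full array.
raw = '''[
{"id":"ytc_UgwNsc2j87xj_d6K9sp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugyan-UTjFf7BTpRkCd4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"outrage"}
]'''

records = json.loads(raw)
# Index by comment ID so any coded comment can be looked up directly.
by_id = {rec["id"]: rec for rec in records}

rec = by_id["ytc_UgwNsc2j87xj_d6K9sp4AaABAg"]
print(rec["emotion"])  # -> fear
```

In practice the model can also return malformed JSON, so a production version would wrap `json.loads` in error handling and validate each record's fields before indexing.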