Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect

- rdc_l5vdpr2 — "I design retail displays and a certain computer retailer in the states asked us …"
- ytc_UgxSNapC7… — "Robot came off that truck like a true Chad and gave us a proper German salute…"
- ytr_UgySeoI8-… — "The Ai need to learn to take the bus to go to the site to play chess as well as …"
- ytc_UgzjWC2Ve… — "Stephen, you are doing the human community a great service with this videocast a…"
- ytc_Ugzj-Kt1H… — "Great vid. You were NOT too harsh on these lawyers! BTW, I teach college and yes…"
- ytc_UgzWhG8LR… — "I work in a company that pushed LLMs a lot and has plenty of highly skilled data…"
- rdc_h8f0xlf — "It's not that simple. Algorithms are not created in a vacuum in an ivory tower. …"
- rdc_ohezphn — "This is mostly just a hope of mine, but there's some evidence that public sentim…"
Comment
The point about 'Info In, Info Out' jobs being dead is the hard truth devs need to hear.
But I'd argue 'Coding' isn't disappearing, it's splitting. The 'Info In/Out' part (writing code/Orchestration) is definitely gone. But the Judgment part (Validation)—knowing if the AI's output is secure, scalable, or hallucinating—is becoming the most valuable skill in tech.
I see too many engineers trying to compete with AI on speed (Orchestration) instead of pivoting to quality (Validation). I broke down this exact survival strategy in my podcast (The Spark and The Forge, Ep 78) for anyone who wants to know what 'Validation' actually looks like in practice.
Source: youtube · Topic: AI Jobs · Posted: 2025-12-30T13:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | virtue |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
[
{"id":"ytc_UgwDezrDsJUls3vTqVZ4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugz_a_gNqECo-EeKOad4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxJYL1gmnln4q92fV54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugw2k4dDwZIjqDxlT9F4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"liability","emotion":"resignation"},
{"id":"ytc_UgywwmxsXya1LS8FBGt4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgyNcdUk0eYoqCPm6g94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzXc1DMsKbu0jX-SJp4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgzYVeU3lIqDn1uUkx94AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgxpWDWcU6nWYwoKFvh4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugyp2K6s0fa6Ta2slBl4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]
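
The raw response above is a JSON array in which every element carries the same four coding dimensions (`responsibility`, `reasoning`, `policy`, `emotion`) plus the comment `id`. A minimal sketch of how such a response could be parsed and sanity-checked before the codes are stored — the allowed value sets below are inferred only from the codes visible in this sample, not from the full codebook, and the truncated `raw` string is an illustrative copy, not the complete response:

```python
import json

# Illustrative excerpt of a raw LLM response (same shape as the array above).
raw = """
[
  {"id": "ytc_UgwDezrDsJUls3vTqVZ4AaABAg", "responsibility": "none",
   "reasoning": "virtue", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugz_a_gNqECo-EeKOad4AaABAg", "responsibility": "company",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"}
]
"""

# Allowed values per dimension, inferred from this sample only --
# the real codebook may define additional categories.
ALLOWED = {
    "responsibility": {"none", "company", "government", "distributed", "ai_itself"},
    "reasoning": {"virtue", "consequentialist", "deontological", "mixed"},
    "policy": {"none", "regulate", "liability", "ban"},
    "emotion": {"indifference", "fear", "resignation", "approval", "outrage", "mixed"},
}

def validate(records):
    """Return a list of (comment_id, dimension, bad_value) problems."""
    problems = []
    for rec in records:
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                problems.append((rec.get("id"), dim, rec.get(dim)))
    return problems

records = json.loads(raw)
print(validate(records))  # [] when every code falls in the allowed sets
```

Validating against a closed value set like this catches the most common failure mode of structured LLM output — a hallucinated or misspelled category — before it silently enters the coded dataset.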