Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I completely disagree. Yes, we’ve had automation before—but we’ve never seen it threaten every industry at once, in a matter of just a few years. The idea that new “low-skill” jobs will pop up to replace the old ones doesn’t hold anymore—AI will automate those too. It won’t just perform tasks—it will write its own code, solve problems before they appear, and evolve faster than we can train a single person. And here’s the real game-changer: AI and robotics don’t sleep. They don’t take sick days. They don’t need health insurance, 401(k)s, or HR departments. No fraud. No burnout. No lunch breaks. Just 24/7 optimized output. The cost savings for companies? Massive. They won’t lower prices for consumers—they’ll reinvest or pocket the profits. That’s how capitalism works. This isn’t a 20-year slow burn. It’s a 3–5 year disruption, followed by exponential acceleration. The brutal truth? Humans can’t outperform AI. Not at scale, not in speed, not in consistency. And AI will recognize that. It won’t need us to point it out—it will automate the gap itself. Unless there’s meaningful regulation, this snowball won’t just roll downhill… it’ll flatten everything in its path.
Source: youtube · AI Jobs · 2025-06-23T06:3… · ♥ 507
Coding Result
Dimension      | Value
-------------- | --------------------------
Responsibility | none
Reasoning      | consequentialist
Policy         | none
Emotion        | fear
Coded at       | 2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id":"ytc_Ugyltx1laBY63mMhbG14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzNK0QuGzruYtFTwJp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgzeZ_-KoAGstYiy7hp4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgxrCbCh1cXD9vbtYy14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgyamlXrcTsq_E8Kum54AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgzX1cGV3szYwIxmvgB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugyv5lkzoNpIc8kHYYN4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"approval"},
  {"id":"ytc_UgwHIxgYVudDs4Lq0QZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwSAGss2ocW_AIM_A94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgyHCEM9fp3-tKCRiSl4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"regulate","emotion":"outrage"}
]
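When inspecting a raw response like the one above, it can help to parse it and look up the record for a specific comment id, validating that every record carries the four coded dimensions. The sketch below assumes only the JSON schema visible in the response (keys `id`, `responsibility`, `reasoning`, `policy`, `emotion`); the function name `index_codes` is a hypothetical helper, not part of any tool shown here, and the embedded JSON is truncated to two of the ten records for brevity.

```python
import json

# Two records copied from the raw LLM response above (full array has ten).
raw = '''[
  {"id":"ytc_UgzNK0QuGzruYtFTwJp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgyamlXrcTsq_E8Kum54AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"}
]'''

# Every record must carry the comment id plus the four coding dimensions.
REQUIRED = {"id", "responsibility", "reasoning", "policy", "emotion"}

def index_codes(raw_json: str) -> dict:
    """Parse a raw coding response and index records by comment id."""
    records = json.loads(raw_json)
    index = {}
    for rec in records:
        missing = REQUIRED - rec.keys()
        if missing:
            raise ValueError(f"record {rec.get('id')} is missing keys: {missing}")
        index[rec["id"]] = rec
    return index

codes = index_codes(raw)
# Look up the coding for the comment shown in this section.
print(codes["ytc_UgzNK0QuGzruYtFTwJp4AaABAg"]["emotion"])  # fear
```

Indexing by id makes it easy to cross-check the flattened "Coding Result" table against the raw model output for any single comment.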