Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Ironically, I asked AI … this is what was produced:

Say we get really capable AI (and especially AGI), a lot of “knowledge work” stops being scarce. The punchline: humans don’t become useless — but the value of being human shifts. Here’s what I think humans will still do (and why), even if AI gets scary-good.

Jobs that stick because they’re “human-trust” work

These are roles where the product is not just the outcome — it’s the relationship, accountability, and consent.

• Therapy, counselling, coaching, mediation, chaplaincy, social work: people don’t just want answers; they want to be held by a safe nervous system and a real human who carries responsibility.
• Teachers, mentors, youth workers: not for content delivery — for attunement, motivation, belonging, identity-building.
• Leaders/managers (the good kind): conflict, values, trust, morale, meaning, hard calls.
• Care roles: disability support, aged care, childcare, nursing — because touch, presence, and trust are not a “nice to have”.

AI can assist these jobs massively. But the human is the point.

Jobs that stick because they’re “in-the-world” work

Robots will grow, but reality is messy, wet, political, and full of edge cases.

• Trades: electrician, plumber, builder, mechanic, locksmith.
• Field work: emergency services, paramedics, firefighters, disaster response.
• Hands-on healthcare: procedures, bedside care, physical rehab.
• Hospitality and events: restaurants, festivals, travel experiences, guiding.

Even with robots: deployment, maintenance, liability, and the sheer chaos of real environments keep humans in the loop.

Jobs that stick because humans decide what “good” is

Even with perfect prediction, someone has to set goals and ethics.

• Governance & regulation: auditing AI, compliance, policy, standards.
• Safety and alignment work: testing systems, red-teaming, incident response.
• Courts / arbitration / investigations: society often requires a human decision-maker for legitimacy.
• Ethics roles embedded everywhere: health, education, disability, finance.

Jobs that stick because humans want human-made things

Scarcity flips. “Made by a person” becomes the luxury signal.

• Artists, writers, musicians, performers
• Craft and bespoke work: woodworking, ceramics, tailoring, tattooing
• Local creators + community businesses
• Experience designers: retreats, immersive theatre, boutique travel

People will still buy “real” the way they buy vinyl, handmade bread, live gigs.

New-ish jobs that expand

AGI doesn’t just delete work — it creates weird new categories:

• AI operators / workflow designers (people who can orchestrate AI to do real work)
• Human-AI translators (turn messy human needs into precise specs and back again)
• AI auditors (bias, safety, evidence integrity, “prove this output is grounded”)
• Personal systems designers (helping people build life scaffolding: routines, tools, supports)
• Community builders (online/offline): belonging becomes economic infrastructure

The uncomfortable truth

If AI gets very strong, the real risk isn’t “no jobs.” It’s:

• wealth concentrates (owners of capital + models win),
• middle jobs hollow out,
• and society has to decide whether we do UBI / reduced work weeks / new safety nets.

So yes, some people may not “need” jobs to survive if society redesigns the rules. If we don’t redesign them, we get instability.
YouTube · AI Governance · 2026-02-17T21:4…
Coding Result
Dimension       Value
Responsibility  none
Reasoning       consequentialist
Policy          industry_self
Emotion         approval
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_UgzBPTEfirV4HWzyfnd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_Ugx454tiBb8Jd2XoWbd4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgwZzBX80YZJcGswMxt4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgxgluA8GSvijojRNU54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
  {"id":"ytc_UgxsiWl6yveLGBJHZBl4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgyhmwX330zMsai4R7h4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgzvhDNjmF16OZxcQSd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugy4Kap1TExeG2fBlAx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgxfUDH0zQ4feaMxwEB4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgxV2bn20-0pngxW9SV4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"fear"}
]
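A raw response like the one above can be sanity-checked before its labels are trusted: parse the JSON, reject records with missing ids or out-of-vocabulary labels, and tally each dimension. This is a minimal sketch; the allowed label sets below are inferred only from the values visible in this dump, not from a documented codebook, and the function name is hypothetical.

```python
import json
from collections import Counter

# Label vocabularies inferred from this dump (assumption, not a documented schema).
ALLOWED = {
    "responsibility": {"none", "ai_itself", "government", "unclear"},
    "reasoning": {"consequentialist", "deontological", "unclear"},
    "policy": {"industry_self", "regulate", "none", "unclear"},
    "emotion": {"outrage", "mixed", "indifference", "approval", "fear"},
}


def validate_and_tally(raw: str) -> dict[str, Counter]:
    """Parse a raw LLM coding response and count labels per dimension.

    Raises ValueError on a record with no id or an unknown label, so bad
    model output fails loudly instead of silently skewing the counts.
    """
    records = json.loads(raw)
    tallies = {dim: Counter() for dim in ALLOWED}
    for rec in records:
        if "id" not in rec:
            raise ValueError(f"record missing id: {rec!r}")
        for dim, vocab in ALLOWED.items():
            label = rec.get(dim)
            if label not in vocab:
                raise ValueError(f"{rec['id']}: unexpected {dim} label {label!r}")
            tallies[dim][label] += 1
    return tallies


if __name__ == "__main__":
    raw = ('[{"id":"ytc_example","responsibility":"none",'
           '"reasoning":"consequentialist","policy":"industry_self",'
           '"emotion":"approval"}]')
    print(validate_and_tally(raw)["emotion"])  # Counter({'approval': 1})
```

Failing on unknown labels (rather than mapping them to "unclear") is a deliberate choice here: it surfaces prompt drift or schema changes at ingest time instead of inside downstream statistics.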