Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Thanks for sharing your insights sir! But here's my take on this: there's always gonna be things you need a human for. To deal with the AI's gaps. It's like dogs. If dogs could do everything else but pick up their own poop and bathe themselves, we'd still need humans for that. Same with AI: it'll handle almost everything, but there's always that part where humans step in.

So now you got 80 extra hours a month. What do you do with that? Travel. Create. Put energy into making the world better. "Jobs give meaning." Nah bro. We've just been conditioned to think that way. Meaning comes from contributing to something you care about, something you're passionate about.

And bro the French bulldog analogy is perfect. Can't believe you brought that up, because I had that dog analogy in my head before you said it. Despite their gaps, dogs and humans still exist together. So will humans and AI! With superintelligence, humans might end up in the position dogs are in with us. Almost like companions, caretakers, maybe even sidekicks. Dogs don't understand the concept of work. But we can. Dogs don't resent being dogs. They live, love, and thrive inside that relationship. If we reframe right, humans would be just fine: not as slaves to superintelligence, but as partners in a new kind of existence. There's no computing for soul.

Cons? Identity crisis. If your whole sense of self was tied to "my job = my worth," losing that will break people. Dependence. Just like dogs can't survive without humans, humans could become overly dependent on AI. Power dynamics. Who controls the AI? If it sees us as dogs, best case we're beloved pets. Worst case we're strays.

My final thoughts: AI won't make us dogs. It'll either make us gods or parasites, depending on what choices we make now. The "pet to AI" future only happens if we surrender our role as creators and governors.
youtube AI Governance 2025-09-04T12:3…
Coding Result
Dimension       Value
Responsibility  none
Reasoning       unclear
Policy          none
Emotion         approval
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgyZ8tMC_iqbPOFfAtl4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_Ugz_pxrzCH1EcQL0cxV4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugwk8b5su46CTSeOMfF4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgybjfcIC2yLrPpEni14AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgyR7pTGIw_dmjOdOLB4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgzbRDTdadWNPFQPkT94AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgyVb5LRcxwOVS0leaV4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugyy16gSnnDsJuigfCR4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgxXdTycB1yy7VH_vIt4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "ban", "emotion": "fear"},
  {"id": "ytc_UgztGwG7us49PyQQVKd4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "ban", "emotion": "outrage"}
]
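To inspect a coding like the table above, you can parse the raw LLM response (a JSON array of records, one per comment) and look up a comment by its id. This is a minimal sketch, assuming only the JSON shape shown above; the function name `coding_for` and the single-record sample are illustrative, not part of the original pipeline.

```python
import json

# Illustrative excerpt of a raw LLM response, matching the array shape
# shown above (one record per coded comment).
raw_response = """[
  {"id": "ytc_Ugwk8b5su46CTSeOMfF4AaABAg",
   "responsibility": "none", "reasoning": "unclear",
   "policy": "none", "emotion": "approval"}
]"""

def coding_for(raw: str, comment_id: str):
    """Return the coded dimensions for comment_id, or None if absent."""
    records = json.loads(raw)
    return next((r for r in records if r["id"] == comment_id), None)

coded = coding_for(raw_response, "ytc_Ugwk8b5su46CTSeOMfF4AaABAg")
print(coded["emotion"])  # approval
```

A lookup like this makes it easy to cross-check the rendered Coding Result against the exact model output for any coded comment.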