Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
“Train to be a plumber”: Manual trades are safe because robots lack dexterity? Autonomous inspection/repair bots are already crawling inside pipes (University of Sheffield’s Pipebots) and commercial sewer-maintenance robots are rolling out in cities worldwide. These are niche today, but they show that even “pipe work” is on the automation roadmap. 2025 is shaping up as the first year that humanoid factory workers (Atlas, Digit, Figure 01, Tesla Optimus) move from demos to paid pilots; investors expect a $38 billion humanoid-robot market by 2035. If a robot can stock shelves and move freight, servicing a water heater isn’t far behind. Physical dexterity is temporarily protective, but advising young people to avoid all cognitive work in favour of plumbing underestimates the speed of embodied AI.
“AI will cause mass joblessness”: Every general-purpose technology has shifted tasks faster than it eliminated work. After the ATM, bank-teller employment rose for a decade because branch formats changed and demand for financial services expanded. When a Fortune-500 call-centre rolled out a GPT-powered copilot, productivity rose 14 % and turnover fell. No net layoffs were observed during the study window—and low-skill workers benefited most. The World Economic Forum’s Future of Jobs 2023 survey of 803 companies projects 69 million new jobs and 83 million displaced by 2027—net shrinkage of just 2 %. Most employers see generative-AI adoption as a job creator when paired with reskilling. Only 12 % of U.S. jobs are at high automation risk; exposure is largest in well-paid metro occupations that can absorb churn through up-skilling. Large churn is coming, but “collapse of work” is not a foregone conclusion—especially if policy focuses on skills portability and wage insurance instead of blanket pessimism.
“Super-intelligent AI may wipe us out”: Andrew Ng, Yann LeCun, Demis Hassabis and other senior scientists publicly call extinction fears “over-hyped,” emphasising that today’s models still lack persistent planning, causal reasoning and agency. Training GPT-4-scale models already strains the global GPU supply and draws power comparable to a mid-size town. Cloning weights instantly (Hinton’s “digital immortality”) does not clone the datacentres, electricity or fine-tuned data needed to keep those copies useful. Interpretability techniques, constitutional fine-tuning and model “rulebooks” (Anthropic, Google DeepMind, OpenAI) are advancing faster than the systems themselves did a decade ago. That doesn’t eliminate risk, but it shows safety research can scale with capabilities. Existential risk is a research topic—just not the single most likely outcome. Betting society’s response on a worst-case estimate (Hinton’s own “10–20 % chance”) risks neglecting the 80-90 % space where opportunity and manageable hazards coexist.
“Regulation is too weak—especially in Europe”: Hinton is right that the EU AI Act exempts systems used exclusively for defence-sector purposes, but civil-market GPAI models, biometric surveillance, deep-fake tools and dual-use systems remain covered by strict risk tiers, mandatory transparency and multi-billion-euro fines. The Act also sets a precedent that other democracies (Japan, Brazil, Canada) are actively emulating, so a governance vacuum is not pre-ordained. Meanwhile, the newly adopted WHO Pandemic Agreement explicitly targets lab biosecurity and genetic-synthesis monitoring, closing some of the loopholes Hinton fears around AI-enabled bioweapons. The regulatory glass is half full; the debate should focus on closing defence loopholes, not on declaring the whole framework “crazy.”
Deep-fakes and targeted manipulation are exploding, and guardrails for political advertising lag badly. Autonomous weapons merit a Geneva-Convention-style treaty; here the EU defence opt-out is indeed a problem. Inequality will widen unless governments channel productivity gains into portable benefits, wage top-ups and continuous adult education. Geoff Hinton earned his place in AI history, but his most dramatic public statements lean toward worst-case speculation and overlook counter-trends already visible in the data. A more balanced agenda would accelerate both safety science and the deployment of beneficial applications (healthcare, climate, education); invest in human capital so workers ride the AI productivity wave instead of being swamped by it; and close genuine governance gaps (autonomous weapons, election integrity, bio-risk) without freezing broad innovation. If we do those things, the future workforce may include plumbers and prompt engineers, and plenty of jobs we haven’t named yet.
youtube
AI Governance
2025-07-13T11:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
{"id":"ytc_UgwI8401QxQtTVy7zYd4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_UgypXVPlfnDTfb_cRL94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxaBNGfZmBWGR4fHXp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzC-zxMMArpFOloiAZ4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
{"id":"ytc_Ugz5lgg4rDgNbxm5uQF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugyd_8xpVdVkQLYjoMt4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxP8m-6IR1DRdRcoe54AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgysoBVVJShlqXzrSkh4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"approval"},
{"id":"ytc_UgwuQlzIqwyKpaMMrXJ4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
{"id":"ytc_Ugwq2y586MupWhvbGCh4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"fear"}
]
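The raw response above is a JSON array with one record per comment: an `id` plus the four coded dimensions from the table (responsibility, reasoning, policy, emotion). A minimal sketch of how such a batch could be parsed and sanity-checked, assuming the allowed values are exactly those observed in the responses above (the value sets and the `validate_batch` helper are illustrative, not part of the actual coding pipeline):

```python
import json

# Allowed values per dimension, inferred only from the records shown above;
# the real codebook may define additional categories (assumption).
ALLOWED = {
    "responsibility": {"none", "developer", "company", "user", "ai_itself", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed"},
    "policy": {"none", "unclear"},
    "emotion": {"approval", "fear", "indifference", "outrage"},
}

def validate_batch(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only records whose coded
    values fall inside the allowed sets for every dimension."""
    records = json.loads(raw)
    valid = []
    for rec in records:
        if "id" not in rec:
            continue  # malformed row: no comment ID to join back on
        if all(rec.get(dim) in vals for dim, vals in ALLOWED.items()):
            valid.append(rec)
    return valid

# Example: the first record from the batch above.
raw = ('[{"id":"ytc_UgwI8401QxQtTVy7zYd4AaABAg","responsibility":"none",'
       '"reasoning":"mixed","policy":"none","emotion":"approval"}]')
print(len(validate_batch(raw)))  # 1
```

Filtering rather than raising keeps a single hallucinated label from discarding the whole batch; rejected records can then be re-queued for recoding.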