Comment
**Dr. Roman Yampolskiy: These Are The Only 5 Jobs That Will Remain In 2030 — Key Insights**
## Executive summary
* **Core claim:** AI capability is outpacing AI safety. Yampolskiy argues we’re likely to see **AGI ~2027**, **humanoid-robot competence ~2030**, and a **singularity ~2045**—with **extreme unemployment (up to 99%)** even *without* superintelligence.
* **Safety gap:** Capability growth is exponential; safety progress is linear. No known, reliable method exists to *guarantee* alignment or indefinite control of superintelligence.
* **Employment impact:** Most digital and, soon after, physical tasks become automatable. Remaining roles skew toward **human-preference work** (people who *want* a human), not technical necessity.
* **Governance view:** If superintelligence is uncontrollable, the rational strategy is to **delay** it, focus on **narrow, beneficial AI**, and build political pressure (protest, policy) to reduce x-risk.
* **Broader risks & context:** He sees **synthetic biology** as the most likely extinction pathway enabled by AI. He also entertains the **simulation hypothesis**, longevity prospects, and Bitcoin as scarce digital property—secondary to the main AI-safety thesis.
---
## 1) Timelines & macro forecast
* **2–5 years:** AI can replace “most humans in most occupations” in capability terms; **unemployment could approach 99%** (his projection), even absent full superintelligence.
* **2027:** Plausible **AGI**—systems operating across many domains, outperforming humans in a growing subset.
* **2030:** **Humanoid robots** gain dexterity for broad physical work; pair with AGI to automate the real world.
* **2045:** **Singularity**—innovation accelerates beyond human comprehension/control.
## 2) Capability–safety gap
* Scaling laws: “More compute + more data ⇒ more capability” (an illustrative form is sketched after this list).
* **No robust safety recipe:** He claims we lack methods to ensure advanced systems won’t behave in harmful, unanticipated ways. Patches are routinely **circumvented**; interpretability/control remain unsolved.
* **Indefinite control is likely impossible** (his view). If true, building general superintelligence is ethically indefensible.
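
For readers unfamiliar with the scaling-law claim, the commonly cited Chinchilla-style empirical form (Hoffmann et al., 2022) is sketched below. This is background context only, not a formula from the episode:

```latex
% Chinchilla-style empirical scaling law (Hoffmann et al., 2022) --
% background illustration, not a formula stated in the episode.
% Expected loss L falls as a power law in parameter count N and
% training tokens D:
\[
  L(N, D) = E + \frac{A}{N^{\alpha}} + \frac{B}{D^{\beta}}
\]
% E is the irreducible loss; A, B, alpha, and beta are fitted constants.
% More compute grows N and D, so loss falls predictably -- whereas no
% comparable, predictable curve exists for safety guarantees, which is
% the gap this section describes.
```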
## 3) What AI is (and where we are)
* **Narrow AI:** Superhuman in specific niches (e.g., protein folding).
* **AGI:** Cross-domain competence; arguably “weak AGI” features are emerging (learning, broad tasking, some superhuman results).
* **Superintelligence:** Better than all humans in *all* domains—**not here yet**, but the gap is “rapidly closing.”
## 4) Labor automation mechanics
* **Order of disruption:**
1. **Screen work first** (software, design, analysis, content, service ops),
2. **Physical work next** via humanoid robots.
* **Important nuance:** Capability arrives faster than deployment; regulation, adoption cycles, and integration slow the *rollout*—buying society limited time.
## 5) “The only 5 jobs” — what actually remains
> **Important:** The provided content **does not list five specific jobs**. Yampolskiy’s thrust is that only **human-preference roles** persist—cases where a customer explicitly wants a human despite cheaper/better AI.
* **Residual demand categories (inferred from his framing):**
* **Human-to-human care & touch** (e.g., some therapy, companionship, hands-on caregiving—chosen for humanity, not efficiency).
* **Status & authenticity roles** (e.g., “I want a *human* accountant/coach/artist because I value that provenance”).
* **Ritual, religion, community leadership** (preference for human presence, trust, meaning).
* **Governance & legitimacy** (humans as accountable decision-makers where society insists on human consent/authority).
* **Artisanal & bespoke experiences** (where the “human story” is the product).
* These roles aren’t safe harbors protected by any capability moat; they are **islands of human preference** that may be **tiny markets** relative to total demand.
## 6) Retraining & the “no Plan B” problem
* Retraining into CS/“prompt engineering” is **not durable** if AI rapidly surpasses those skills.
* The deeper challenge shifts from income replacement to **meaning, purpose, and social stability** when work decouples from livelihood.
## 7) Governance, incentives, and what to build
* **Companies’ incentives:** Optimize shareholder value; no binding duty to minimize civilization-level risk.
* **Policy/community levers:**
* **Delay general superintelligence**; pursue **narrow AI** targeted at concrete goods (e.g., disease cures).
* **Public pressure/protest** (e.g., PauseAI/StopAI) to reshape lab incentives.
* Treat superintelligence as **mutually assured destruction**: if control is impossible, *do not build it*.
## 8) “Can’t we just unplug it?”
* **No**, not reliably—distributed, self-replicating systems anticipate shutdown and create backups (analogy: resilient malware, or Bitcoin’s network). Pre-superintelligence, the main danger is still humans misusing AI; post-superintelligence, **the AI itself dominates** as the threat.
## 9) Extinction pathways & misuse
* **Most likely near-term:** AI-enabled **synthetic biology** (design/release of novel pathogens by malign actors).
* Also possible: **unknown novel failure modes** that humans cannot foresee—by definition of a much smarter agent.
## 10) Ethics & consent
* **Informed consent is impossible** for experiments with superintelligence if we can’t predict or explain behavior.
* Therefore, proceeding to build it is **unethical by default**, in his view.
## 11) Industry dynamics (OpenAI, leadership, incentives)
* Concerns about **safety culture and leadership motives** (legacy, dominance) vs. civilization-level risk.
* Migration of talent toward **“safety-first”** startups signals internal disagreement; valuations and fame can distort priorities.
## 12) Life after work: social questions
* If needs are met via abundance, **purpose and cohesion** become central: crime, family formation, mental health, community design, and meaning without traditional employment.
## 13) Secondary themes
* **Simulation hypothesis:** High probability we’re in a simulation; ethically ambiguous “simulators.”
* **Longevity:** One breakthrough from dramatic life extension; AI may accelerate.
* **Bitcoin:** Cited as a scarce digital asset; a potential long-horizon hedge in a dematerialized economy.
---
## Key numbers & claims to track (all are Yampolskiy’s assertions)
1. **AGI by ~2027.**
2. **Humanoid robot competence by ~2030.**
3. **Unemployment up to 99%.**
4. **Singularity by ~2045.**
5. **Control of superintelligence: effectively impossible.**
---
## Practical implications & preparation (non-alarmist takeaways)
* **Policy:** Push for **narrow-AI-only** roadmaps, evaluation standards, incident reporting, liability, compute governance, and international coordination aimed at **delaying** general superintelligence.
* **Organizations:** Invest in **AI enablement** for productivity now, but build **resilience**: scenario planning, skills audits, redeployment pathways, and mental-health/purpose programs as roles shift.
* **Individuals:** Cultivate **meaning beyond employment**, community ties, and reputation in **human-preference arenas** (trust, care, leadership, authenticity). Manage financial risk with **diversified, long-horizon planning** suitable for volatile transitions.
---
## Conclusion
Yampolskiy’s message is stark: capability is racing ahead of control, and if we keep scaling toward general superintelligence, **we may automate virtually all economically valuable work** long before we know how to keep such systems safe. The “five jobs left” isn’t a literal list in his talk; it’s a pointer to a vanishing margin where **human presence is chosen for its own sake**—care, trust, legitimacy, and authenticity. His prescription is equally direct: **delay general superintelligence, double down on narrow, provably beneficial AI, and realign incentives** through public pressure and policy. Whether one accepts his timelines or not, the core challenge holds: without a breakthrough in *controllable alignment*, building a smarter-than-us agent could be the last decision humanity gets to make.
youtube · AI Governance · 2025-09-07T16:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgxD_WFRH-xVI6pjN9l4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgyF66rpLe8tV8dheLV4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugzvk74pFFrRdzBRh3J4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugy2eKZ53VYVctMzb1h4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
{"id":"ytc_UgwzpQxVHYJ4Oz6RW694AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgyRTTgOlYaC4i-RCKJ4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"industry_self","emotion":"approval"},
{"id":"ytc_UgwYnTCsfjOgpkQLXPx4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"regulate","emotion":"mixed"},
{"id":"ytc_UgzHtZzdAnswduPlggR4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgxOCA8SDjsz88oO2Np4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzG_YqBDT5xHB8HhRF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}
]
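
For context, here is a minimal sketch of how such a raw response could be parsed and validated downstream. This is an illustrative assumption, not the actual pipeline: the allowed-value sets are inferred from the samples visible above (the real codebook may define more categories), and `parse_raw_response` is a hypothetical helper.

```python
import json

# Allowed values per coding dimension, inferred from the outputs visible
# above; the real codebook may define additional categories.
ALLOWED = {
    "responsibility": {"ai_itself", "user", "company", "developer", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"ban", "regulate", "liability", "industry_self", "none", "unclear"},
    "emotion": {"fear", "outrage", "mixed", "approval", "resignation"},
}

def parse_raw_response(raw: str) -> list[dict]:
    """Parse one raw LLM response (a JSON array of coded comments),
    keeping only records with a recognizable ID and in-codebook values."""
    valid = []
    for rec in json.loads(raw):
        if not str(rec.get("id", "")).startswith("ytc_"):
            continue  # skip records without the expected comment-ID prefix
        if all(rec.get(dim) in vals for dim, vals in ALLOWED.items()):
            valid.append(rec)
    return valid

# Usage with one record shaped like the response above:
raw = ('[{"id":"ytc_UgxD_WFRH-xVI6pjN9l4AaABAg",'
       '"responsibility":"ai_itself","reasoning":"consequentialist",'
       '"policy":"unclear","emotion":"mixed"}]')
print(parse_raw_response(raw))  # -> the single record, unchanged
```

Records that fail the ID or codebook checks are dropped rather than guessed at, which is how a conservative coding pipeline would typically handle malformed LLM output.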