Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples (click to inspect)
- ytc_UgywNPK80…: When an intelligence reaches the capability to analyze, design, and iterate on i…
- ytc_Ugw3neo8r…: *I CANT PLAY A SINGLE MUSICAL INSTRUMENT* and I am the most tone deaf person on …
- ytc_UgyrAIAMb…: 15:09 this is the funniest thing in this. let me fix it. "Billions will be ensla…
- ytc_UgwJduhqU…: I'm someone who has worked deep in technology for the last 30 years, and I have …
- ytc_UgxdjZkAt…: I manage customer and agent knowledge bases for a healthcare company with multip…
- ytc_UgzVguP_4…: Translation: No more authentic revenue growth. We'll be firing everyone while s…
- ytc_UgzVohrrb…: I think there is a difference between someone (non-artist) sharing an illustrati…
- ytc_UgzTCiB5P…: I can't believe how much these guys worship Elon Musk. If you're relying on h…
Comment
1. AI Risk & Safety
How to Stop AI From Killing Everyone / Probability Something Goes Wrong / AI Doom Narrative / Most Likely Cause of Human Extinction
→ These reflect the “existential risk” side of AI. I think the biggest danger is less a single “AI kills us” scenario and more misuse, power concentration, or runaway unintended consequences. Safety research is critical, but so is governance and transparency.
Is Anything More Important Than AI Safety Right Now?
→ Almost nothing compares, except maybe climate change. But AI safety is unique: if we get it wrong, it could accelerate every other risk.
No One Knows What's Going On Inside AI
→ True. Interpretability and “black box” models are an active frontier. That’s what makes control hard.
2. Timelines & Predictions
Prediction for 2027 / 2030 / 2045 / What Will the World Look Like in 2100?
→ Predictions are fun but speculative. 2027 is very near-term: we’ll see AI embedded in most jobs. By 2030, big job displacement pressures will be visible. By 2045, “singularity-level” events are possible but not guaranteed. By 2100, it’s almost unimaginable — the world could be post-human in some sense.
3. Jobs & Economy
What Jobs Will Actually Exist / Can AI Really Take All Jobs / What Happens When All Jobs Are Taken / New Careers / Mass Unemployment
→ This is the heart of Yampolskiy’s claim. I think:
Jobs won’t disappear evenly. Caregiving, creative work, trades, and management will linger longer.
Some new roles will arise, but not fast enough for everyone.
The bigger challenge isn’t “will there be work?” but how do we distribute income and meaning if machines do most of it?
Is There a Good Argument Against AI Replacing Humans?
→ The main one: humans value human-made things. For example, people pay extra for handmade art, local goods, personal service. That might keep niches alive.
4. Control & Agency
Can’t We Just Unplug It / Do We Just Go With It / Should People Be Protesting? / One Button Choice
→ “Unplugging” is naïve — systems are distributed, copied, embedded everywhere.
→ Protesting? Possibly, but more useful might be pushing for regulation, education, and ethical design.
→ The “one button” choice — if it means shutting down advanced AI until we’re ready, it’s tempting, but realistically impossible given global competition.
5. Broader Human Questions
Simulation / Certainty We’re in a Simulation / Live Forever / Religion / Bitcoin / Ads / Critics / Feeling Good / Closing Statements / Characteristics
→ These are more philosophical or tangential:
Simulation theory: Interesting, but unfalsifiable.
Immortality: AI might extend lifespans (through medicine, nanotech), but “forever” is a stretch.
Religion & meaning: As AI takes over “roles,” many will lean more on spirituality or philosophy for identity.
Bitcoin & decentralization: Crypto is often raised as a hedge against centralised control — but it won’t solve AI risks.
Characteristics: Humility, adaptability, curiosity — probably more important now than ever.
6. Meta-Reflections
Thoughts on OpenAI & Sam Altman / Conversations Making People Feel Good / What to Do Differently After This Talk
→ I’d say:
Leadership in AI (like Altman) is under huge pressure, balancing innovation with safety.
These conversations help people frame the stakes and make informed choices, even if they’re scary.
After such a talk, the right takeaway is: become literate in AI, advocate for safety, and think about your own role in a shifting world.
This interview isn’t just about “5 jobs in 2030” — it’s really about existential risk, human meaning, and social adaptation. The job-loss angle is a doorway into a much bigger question: how do humans keep control, dignity, and safety in a world where intelligence itself may no longer be uniquely ours?

So folks, are you all going to run for hills? I don't think I will. I'm too bloody old to be worrying about it. With my glass ½ full, maybe my old mate & uncle, Uncle Ai, might just help me. Because all the so-called leader of the world, which includes, WHO, UN, WEF, EU, NATO & of course many more, they are all FUCKING things up BIG time. So there you have it - - - - - - in a nutshell - - - - - or should that read - - - - - - - in an Ai shell!
youtube
AI Governance
2025-09-07T04:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
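A coding-result table like the one above can be rendered from a single coded record. A minimal sketch, assuming the field names used in the raw JSON response; `render_coding_table` is a hypothetical helper, not part of any library:

```python
from datetime import datetime

def render_coding_table(record: dict, coded_at: datetime) -> str:
    """Render one coded record as a markdown Dimension/Value table.

    `record` is assumed to carry the four coding dimensions used in the
    raw LLM response (responsibility, reasoning, policy, emotion).
    """
    rows = [
        ("Responsibility", record["responsibility"]),
        ("Reasoning", record["reasoning"]),
        ("Policy", record["policy"]),
        ("Emotion", record["emotion"]),
        ("Coded at", coded_at.isoformat()),
    ]
    lines = ["| Dimension | Value |", "|---|---|"]
    lines += [f"| {dim} | {val} |" for dim, val in rows]
    return "\n".join(lines)

record = {"responsibility": "distributed", "reasoning": "consequentialist",
          "policy": "regulate", "emotion": "fear"}
print(render_coding_table(record, datetime(2026, 4, 26, 23, 9, 12)))
```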
Raw LLM Response
[
{"id":"ytc_UgxvM6nioVe9lKde6-l4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugxl_moM7ZThHqcpBnR4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgwJkqsd9mVcNW70zjB4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_Ugz6ZGJI849_LKKyWjF4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugz8J8BYO7r-gryvz_Z4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugz0XWdQEahnsk-Va6N4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyrdRKJ-1-raMAn4PR4AaABAg","responsibility":"none","reasoning":"virtue","policy":"ban","emotion":"fear"},
{"id":"ytc_UgzW5DSLsUlTfJF8w9J4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwpQmuYEA7xpu7yXax4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgxQIyp6dNM5jMGV5rZ4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"fear"}
]
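The raw response above is a JSON array of records keyed by comment id. A minimal sketch of how a pipeline might parse and validate such a response, assuming the codebook contains exactly the category values seen in these ten records (the real codebook may allow more):

```python
import json

# Allowed values per coding dimension, inferred from the records shown
# above. Assumption: the actual codebook may define additional categories.
CODEBOOK = {
    "responsibility": {"distributed", "ai_itself", "company", "government", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"regulate", "liability", "ban", "none"},
    "emotion": {"fear", "approval", "outrage", "indifference"},
}

def parse_coding_response(raw: str) -> dict:
    """Parse a raw LLM coding response and index the records by comment id.

    Raises ValueError on a record that is missing its id or that uses a
    value outside the codebook.
    """
    by_id = {}
    for rec in json.loads(raw):
        comment_id = rec.get("id")
        if not comment_id:
            raise ValueError(f"record missing id: {rec}")
        for dim, allowed in CODEBOOK.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{comment_id}: bad {dim!r}: {rec.get(dim)!r}")
        by_id[comment_id] = {dim: rec[dim] for dim in CODEBOOK}
    return by_id

# Two records copied verbatim from the raw response above.
raw = '''[
 {"id":"ytc_UgxvM6nioVe9lKde6-l4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
 {"id":"ytc_Ugz8J8BYO7r-gryvz_Z4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}
]'''

coded = parse_coding_response(raw)
print(coded["ytc_UgxvM6nioVe9lKde6-l4AaABAg"]["policy"])  # → regulate
```

Indexing by id is what makes the "Look up by comment ID" view above cheap: each lookup is a single dictionary access rather than a scan of the response.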