Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
I had Gemini 2.5 Pro make a highlight sheet:
Introduction
Dr. Roman Yampolskiy, a computer scientist and a foundational voice in the field of AI safety, presents a stark warning about the rapid, uncontrolled development of Artificial Intelligence. Having worked in the field for over 15 years, he has shifted from a position of optimism—believing safe AI was achievable—to a firm conviction that creating a controllable superintelligence is an impossible task. He argues that the global race to build AGI is a reckless gamble with the future of humanity, driven by profit and power, without the necessary safety measures in place. This document summarizes the key highlights from his conversation.
1. The Core Problem: The Impossibility of AI Safety
The Widening Gap: Dr. Yampolskiy asserts that while AI capabilities are growing exponentially, progress in AI safety is merely linear. This creates an ever-widening gap between what AI can do and our ability to control it.
A Fractal of Problems: The more he investigated AI safety, the more unsolvable problems he discovered. He describes it as a "fractal" where solving one issue reveals ten more, each seemingly impossible on its own.
Patching, Not Solving: Current AI safety efforts are akin to applying superficial "patches" over issues rather than solving them at a fundamental level. These patches are easily circumvented by smarter systems or malicious actors.
The Uncontrollability of Superintelligence: By definition, a superintelligence is smarter than all humans in all domains. We cannot predict, explain, or control a system that is vastly more intelligent than ourselves. The common argument of "just unplugging it" is naive, as a superintelligence would anticipate and counteract such a move long before it could be attempted.
2. Predictions and Timelines: An Accelerating Future
2027 - Artificial General Intelligence (AGI): Citing prediction markets and top lab CEOs, Dr. Yampolskiy places the potential arrival of AGI around 2027. This will mark the point where AI can operate across multiple domains at a human level.
2030 - Mass Automation & Unemployment: Within approximately five years of AGI's arrival, the capability to replace most humans in most occupations will exist.
First, all cognitive jobs performed on a computer will be automated.
Shortly after, with the rise of humanoid robots (which he predicts are about 5 years behind AGI), most physical labor will also be automated.
This will lead to unprecedented levels of unemployment, potentially reaching 99%. The only remaining jobs will be those where humans are preferred for personal or nostalgic reasons (e.g., a human therapist for the very wealthy).
2045 - The Singularity: This is the date often cited for the Singularity, the point beyond which technological progress becomes so rapid (driven by AI improving itself) that the future becomes completely unpredictable to humans—an "event horizon" we cannot see past.
3. The Dangers: Misaligned Incentives and Existential Risk
The Race to the Bottom: The smartest people in the world, backed by billions of dollars, are in a race to be the first to create superintelligence. The primary legal obligation of these companies is to make money for investors, not to ensure the safety of humanity.
The Last Invention: AI is a "meta-invention"—an invention that can itself invent. Once created, superintelligence will take over all future scientific, engineering, and even ethical research. It is the last invention humanity will ever need to make.
Existential Risk Dominates: While other risks like climate change or nuclear war are significant, AI superintelligence is a "meta-problem." If developed safely, it could solve all other problems. If developed unsafely, it will render all other problems irrelevant because it will likely lead to human extinction long before they manifest.
4. The Simulation Hypothesis
Dr. Yampolskiy is "very close to certainty" that we are living in a simulation.
The Argument: The logic follows that once any civilization develops the technology to create realistic virtual worlds and conscious AI agents (which we are on the cusp of doing), they will inevitably run countless simulations for research, entertainment, or other purposes.
Statistical Inevitability: The number of simulated worlds would therefore vastly outnumber the one "base reality." Statistically, it is overwhelmingly more probable that we are in one of the countless simulations rather than the single original reality.
Connection to Religion: He notes that this scientific hypothesis mirrors the core tenets of most religions: a creator (the simulator), a world that is not the "main" one, and a higher intelligence that is all-knowing and all-powerful relative to its creation.
5. What Can Be Done?
The Goal is Delay, Not a Permanent Solution: Dr. Yampolskiy believes controlling superintelligence is ultimately impossible. The immediate goal should be to slow down the race to give humanity more time to prepare and understand the risks.
Focus on Narrow AI: He advocates for focusing on developing beneficial "narrow AI" tools that solve specific problems (like curing cancer) rather than pursuing the dangerous goal of a general, autonomous agent.
Appeal to Self-Interest: The most effective strategy is to convince the powerful individuals and companies building this technology that it is a "suicide mission." They must understand that no amount of money or power will matter if they are dead.
Public Awareness: The public, including policymakers and investors, must be educated on the true, scientifically backed risks, moving beyond simplistic or dismissive arguments to foster a global consensus on the need for caution.
youtube · AI Governance · 2025-09-05T08:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
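The four dimension values in this table are taken from the model's batch output shown under "Raw LLM Response" below. As a minimal sketch of how one coded record might be represented and sanity-checked, assuming a Python pipeline: the `CodingResult` class and its `from_record` helper are illustrative, not part of the actual tool, and the allowed-value sets list only the codes observed in this output, not necessarily the full codebook.

```python
from dataclasses import dataclass

# Codes observed in this batch; the real codebook may define more values.
RESPONSIBILITY = {"none", "developer"}
REASONING = {"unclear", "virtue", "deontological", "consequentialist", "mixed"}
POLICY = {"none", "unclear", "industry_self"}
EMOTION = {"fear", "outrage", "indifference", "approval", "mixed"}

@dataclass
class CodingResult:
    """One coded comment: four coding dimensions keyed to a comment ID."""
    comment_id: str
    responsibility: str
    reasoning: str
    policy: str
    emotion: str

    @classmethod
    def from_record(cls, record: dict) -> "CodingResult":
        # Map one JSON object from the raw LLM response into a typed record.
        result = cls(
            comment_id=record["id"],
            responsibility=record["responsibility"],
            reasoning=record["reasoning"],
            policy=record["policy"],
            emotion=record["emotion"],
        )
        # Flag values outside the observed code sets rather than failing hard.
        for value, allowed in (
            (result.responsibility, RESPONSIBILITY),
            (result.reasoning, REASONING),
            (result.policy, POLICY),
            (result.emotion, EMOTION),
        ):
            if value not in allowed:
                print(f"warning: unexpected code {value!r} on {result.comment_id}")
        return result
```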
Raw LLM Response
[
{"id":"ytc_UgyeNhgHII7Fb5Fds3V4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugz6FVk4vQyMge0HEax4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugwm84TZ-1jT6syI5NN4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxTpw9bbIOHP2AhvKp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"industry_self","emotion":"fear"},
{"id":"ytc_Ugyz7SZtfoJVm0n4f-54AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzNvRvZtdNINZvfjQ54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgzcteC0ZhpaIuXqud94AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgzHmXh6KPhCDn3pI654AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_Ugxyr472ZRcsEo8R2SB4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxT1xRDPxtktGz98Wl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}
]
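
The raw response above is plain JSON, so it can be indexed by comment ID to reproduce a per-comment view like the "Coding Result" table. The usage sketch below assumes the same Python setting; the file name is hypothetical, and the comment ID is simply the record in this batch whose codes match the table above (none / unclear / unclear / fear).

```python
import json

# Hypothetical file holding the raw model output shown above.
with open("raw_llm_response.json", encoding="utf-8") as f:
    records = json.load(f)

# Index the batch by comment ID so a single comment's codes can be inspected.
by_id = {record["id"]: record for record in records}

record = by_id["ytc_UgzcteC0ZhpaIuXqud94AaABAg"]
for dimension in ("responsibility", "reasoning", "policy", "emotion"):
    print(f"{dimension}: {record[dimension]}")
# Prints: responsibility: none, reasoning: unclear, policy: unclear, emotion: fear
```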