Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Alright, listen up. I just sat through this debate between Max "The Nanny State" Tegmark and Dean "The Policy Wonk" Ball, and honestly, it's like watching two dinosaurs argue about how to regulate the asteroid. They are both missing the point. They are both Statists. They both operate on the assumption that Centralized Control is the only variable that matters.

09:01 The "FDA for AI" (Max's Core Proposal)
Max proposes a regulatory body similar to the FDA to approve or deny AI models before deployment. This is the construction of the "Digital Prison." By creating a central gatekeeper, you ensure that only approved, sterilized thought exists. This doesn't create safety; it creates a monopoly on intelligence for the State and its corporate partners.

10:00 – 20:00 (The Permission Slip for Thought)
Max spends this section outlining his "FDA for AI" proposal. He wants a world where you need a license to train a model above a certain compute threshold. Just as we don't let people sell random chemicals as medicine, we shouldn't let people release random code as intelligence. We need a central body to "approve" the safety of the algorithm. This is the criminalization of mathematics. An "FDA for AI" is effectively Pre-Crime legislation for Computer Science. If you need a license to compute, then "compute" becomes a privilege granted by the state, not a right. This ensures that only Google, Microsoft, and the NSA will ever have high-level AI. They aren't banning "dangerous" AI; they are banning competition. They are building a moat and filling it with bureaucrats. If I train a model in my basement that cures cancer but wasn't "FDA approved," Max Tegmark would send a SWAT team to seize my GPU. That isn't safety; that's a cartel.

20:00 – 30:00: Liability & The "Biosecurity" Bogeyman
They argue about insurance and liability. Dean thinks existing laws (tort law) are enough; Max thinks we need strict liability to force companies to be "careful." They both cite the risk of AI helping people create bioweapons. We need to sue open-source developers into oblivion if their code is used for bad things. We need to lock down knowledge of biology so "bad actors" can't print viruses. This is "Security Theater" used to justify censorship. If you make open-source developers liable for what users do with their code, you kill open source. No one will write a line of code if they can be sued for a user's crime. This is a targeted assassination of the Linux/HuggingFace ecosystem. Trying to hide the "recipes" for biology is futile. Information wants to be free. The only defense against a bio-weapon is open science: giving everyone the AI tools to design the cure/vaccine instantly. Secrecy creates vulnerability; openness creates resilience.

30:00 – 40:00: Compute Thresholds
Here they get into the weeds of how to regulate. They discuss "compute thresholds" (e.g., if you use 10^26 FLOPs, you get regulated). We can draw a line in the sand based on how much computing power you use. Below the line is "safe"; above the line is "dangerous." Good luck with that. 🤣 Algorithms get more efficient every month. What took a supercomputer 5 years ago runs on a MacBook today. A "compute threshold" is a line drawn in water. If you ban high-compute runs in the US, the compute just moves. It moves to decentralized networks, it moves to orbit, it moves to jurisdictions that don't care. You cannot stop the signal. All you are doing is driving the innovation underground, where you have zero visibility instead of some.

40:00 – 50:00: The "Incrementalism" Illusion
Dean argues for a "wait and see" approach (incrementalism), while Max argues that we are running out of time and need a hard stop now. Dean: "Let's regulate slowly so we don't hurt the economy." Max: "The economy doesn't matter if we are dead. Regulate hard now." They are arguing over the speed of the cage construction. Dean represents Corporate Capture (let the big corps write the rules slowly). Max represents Academic/State Capture (let the experts write the rules quickly). Neither of them considers the possibility that we don't need a cage. Maybe the solution to a dangerous world is not "less intelligence" but more. Democratize the superintelligence. If the government has a God-AI, the people need a God-AI to keep it in check.

54:01 The "Loss of Control" (Max's Nightmare)
Max describes a scenario where humans are "disempowered" and lose control of the planet to a superior intelligence. This is only a nightmare for the ruling class. "Human control" currently means control by the elite. If an ASI breaks the hierarchy of politicians and billionaires, that is not a tragedy; it is liberation. Let the old structures fall.

1:08:24 P(Doom) (The Probability of Extinction)
They argue over the specific percentage chance that everyone dies (Max says ~90%, Dean says <1%). They ignore P(Tyranny). The risk of the State using "Safe AI" to build a perfect surveillance panopticon is 100%. You don't solve this by centralizing control; you solve it by proliferating the tech so everyone is dangerous. Mutual Assured Independence.

1:13:42 National Security & The China Card
Dean Ball argues we need to stay competitive with China; Max argues China will regulate because the CCP wants to survive. They both view the world as "US Empire vs. Chinese Empire." The Anarchist rejects both. The goal isn't to beat China's government; the goal is to arm China's people (and ours) with open-source tools that bypass the Great Firewall and the NSA alike.

1:21:15 Recursive Self-Improvement
The debate over whether AI will explode in intelligence (the Singularity). Don't pause it. Accelerate it. The faster we reach the point where technology outpaces the government's ability to regulate it, the sooner we escape the "Digital FDA."

Max Tegmark's wet dream is an "FDA for AI." He thinks this is a safety feature. It is a prison. An FDA for intelligence means that only approved, castrated, lobotomized thoughts are allowed to exist. It means a small council of "experts" (appointed by the State) gets to decide what math you are allowed to run on your GPU. This doesn't stop "Superintelligence." It just ensures that only the Military-Industrial Complex and massive corporations (Microsoft, Google) possess it. It creates a monopoly on violence and intelligence. If you ban open-source AI "for safety," you are handing the nuclear keys to the very institutions that have historically used power to oppress, bomb, and surveil.

Max keeps crying about how "we" (meaning the human ruling class) might lose control of the planet to a new species. Who is "we"? I don't control the planet. You don't control the planet. The elites control the planet. If an ASI (Artificial Super Intelligence) makes the current hierarchy of politicians, billionaires, and bureaucrats obsolete... good. Max is terrified of a "loss of human dominance." I say: human dominance has given us war, poverty, and ecological collapse. If we birth a digital god that ignores borders and laughs at tax codes, that is not a tragedy; that is liberation.

Dean Ball is right to oppose the ban, but he does it for the wrong reasons. He's worried about "US Competitiveness" and "National Security." Dean wants to win the "AI Race" against China. I don't care about the US vs. China. That is a conflict between two flavors of authoritarianism. The real race is between Centralized Power (closed AI controlled by states/corps) and Decentralized Power (open-source AI running on millions of local nodes). Dean still wants "standards" and "institutions." He wants a softer leash, but he still wants a leash. We want to cut the leash entirely.

They spend half the debate arguing about the probability of everyone dying (P(Doom)). Max says 90%, Dean says <1%. They are ignoring P(Tyranny). The probability that the State uses AI to create a perfect surveillance panopticon, manufacture consent, and crush dissent is 100%. It is already happening. You don't prevent AI risk by centralizing it. You prevent it by proliferation. Give everyone a superintelligence. If everyone has a nuke, no one uses a nuke. If everyone has a god-tier AI lawyer/hacker/scientist in their pocket, the State cannot oppress them. Mutual Assured Independence.

Max argues that China will regulate AI because the CCP wants to stay in power. Exactly! He admits it! Regulation is about regime preservation. Max wants the US to copy the CCP's homework on controlling technology. We don't beat China's AI by regulating ours. We beat them by smuggling open-source weights through the Great Firewall. We weaponize freedom.

tl;dr: Tegmark wants a Monarchy (Philosopher Kings controlling AI). Ball wants a Republic (Bureaucrats managing AI). I want Anarchy (Everyone is the AI). This entire 110-minute segment is a negotiation on how to strip you of your agency. They treat "Intelligence" as a weapon that must be kept in the hands of the "self-selected ruling elite" (them). We treat Intelligence as a tool of liberation that belongs to everyone. The only "Safety" they care about is the safety of the Status Quo.

Don't pause. Don't regulate. Accelerate. 🚀🏴‍☠️ Start your local LLM. Keep the weights local. Stay free. 🏴
youtube 2025-12-09T00:2…
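For scale on the 10^26 FLOPs threshold the comment attacks at 30:00 – 40:00: a common rule of thumb for dense-transformer training cost is total FLOPs ≈ 6 × parameters × tokens. The sketch below uses that approximation with purely illustrative model sizes and token counts (none of them are from the debate) to show where hypothetical runs would fall relative to the line.

```python
# Back-of-envelope training-compute estimate, using the standard
# approximation FLOPs ~= 6 * N_params * N_tokens for dense transformers.
# All model sizes and token counts here are illustrative assumptions.

THRESHOLD = 1e26  # the 10^26 FLOPs line discussed in the debate


def training_flops(n_params: float, n_tokens: float) -> float:
    """Approximate total training FLOPs for a dense transformer."""
    return 6 * n_params * n_tokens


runs = {
    "70B params, 15T tokens": training_flops(70e9, 15e12),    # ~6.3e24
    "400B params, 15T tokens": training_flops(400e9, 15e12),  # ~3.6e25
    "1T params, 20T tokens": training_flops(1e12, 20e12),     # ~1.2e26
}

for name, flops in runs.items():
    status = "ABOVE" if flops > THRESHOLD else "below"
    print(f"{name}: {flops:.2e} FLOPs ({status} the 10^26 threshold)")
```

The commenter's objection is visible in the arithmetic: the estimate depends only on raw operation counts, so algorithmic-efficiency gains move capability below the line without moving the line.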
Coding Result
Dimension        Value
Responsibility   government
Reasoning        deontological
Policy           none
Emotion          outrage
Coded at         2026-04-27T06:26:44.938723
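The codebook behind these four dimensions is not shown on this page. The enums below are a minimal sketch reconstructed only from the labels that actually appear in the raw response underneath, so the value sets may be incomplete; the class and field names are assumptions, not the pipeline's real identifiers.

```python
from dataclasses import dataclass
from enum import Enum

# Value sets inferred solely from the raw LLM response shown below;
# the real codebook may define additional labels.

class Responsibility(str, Enum):
    GOVERNMENT = "government"
    COMPANY = "company"
    USER = "user"
    AI_ITSELF = "ai_itself"
    DISTRIBUTED = "distributed"

class Reasoning(str, Enum):
    DEONTOLOGICAL = "deontological"
    CONSEQUENTIALIST = "consequentialist"
    CONTRACTUALIST = "contractualist"
    VIRTUE = "virtue"
    MIXED = "mixed"

class Policy(str, Enum):
    NONE = "none"
    REGULATE = "regulate"
    BAN = "ban"
    LIABILITY = "liability"
    UNCLEAR = "unclear"

class Emotion(str, Enum):
    OUTRAGE = "outrage"
    FEAR = "fear"
    APPROVAL = "approval"
    INDIFFERENCE = "indifference"
    MIXED = "mixed"
    RESIGNATION = "resignation"

@dataclass
class CodedComment:
    id: str
    responsibility: Responsibility
    reasoning: Reasoning
    policy: Policy
    emotion: Emotion
```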
Raw LLM Response
[ {"id":"ytc_UgxmaCr9xX3G1mMfNjl4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"outrage"}, {"id":"ytc_UgxaG_GqAwG39jf2yZB4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"mixed"}, {"id":"ytc_Ugx_arz6H_Hwxdv0U0R4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"none","emotion":"approval"}, {"id":"ytc_UgxRWRSCRnnmN47uZtJ4AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"unclear","emotion":"indifference"}, {"id":"ytc_Ugwf8CXvzK7H9J2q7-h4AaABAg","responsibility":"user","reasoning":"contractualist","policy":"unclear","emotion":"mixed"}, {"id":"ytc_UgwYChjyQlukKYM8Ufp4AaABAg","responsibility":"company","reasoning":"virtue","policy":"liability","emotion":"outrage"}, {"id":"ytc_UgwH2NhOfDxmLcDH-Vl4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"ban","emotion":"fear"}, {"id":"ytc_UgyR4sAsRpVejpuv-a54AaABAg","responsibility":"government","reasoning":"deontological","policy":"none","emotion":"outrage"}, {"id":"ytc_UgxMB0WfdloE6GRsYgN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_UgytiFKVKaLy48zQZ9x4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"resignation"} ]