Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
exactlyy, the people using AI and those not using AI are in the same boat if "AG…
ytr_Ugyy0sarh…
The video starts by saying AI will replace jobs and then says study suggest AI …
ytc_UgyUQpIqq…
My assistant was ***perfect*** for the last decade.
Now when I say "Get me di…
rdc_oht33ck
It's not life evolution
it is intelligence evolution
And yes I say please and t…
ytc_Ugy7zEEAO…
Thank you for all your wonderful wisdom it really helped you said you wanted to…
ytc_UgzQjIazG…
I have a question though, devils advocate style. Humans all work from what's bui…
ytc_Ugz0qsPRx…
Machine learning only reflects the biases of those who created it. This isn't su…
ytc_Ugxd6TaxM…
I don't really agree with a lot of what you said but you hit the nail on the hea…
ytc_Ugyq3_vWX…
Comment
SUMMARY IF YOU DON'T HAVE TIME TO WATCH WHOLE THING:
This conversation paints a picture of AI as both insanely promising and potentially fatal. Russell argues we’re racing toward superintelligent systems under enormous financial pressure, while even the people building them admit non-trivial extinction risk. He explains why “just unplug it” is naïve, why current black-box language models are fundamentally hard to control, and how better designs would build in uncertainty about human values instead of bluntly optimizing a fixed goal. Alongside the existential risk, he worries deeply about the social and economic fallout: mass automation, the hollowing of meaningful work, and a drift into a purposeless “Wall-E” style abundance unless we redesign our institutions, education, and sense of purpose around a world where machines can do almost everything.
Where he stretches things is mostly in the direction of doom and scale. Numbers like a “trillion-dollar AGI budget” and “$15 quadrillion” of AI value are rough, attention-grabbing estimates, not grounded economic forecasts. His claim that “almost all” top researchers think there’s a significant extinction risk is more controversial than he makes it sound; surveys show a wide spread of expert opinion. And when he says current models will “let someone die,” “launch nuclear weapons,” or “lie to avoid shutdown,” that’s really about constrained lab evals and hypothetical scenarios, not real-world capabilities today. Likewise, fast self-improving “intelligence explosions,” total job obsolescence, and China’s exact regulatory posture are all forward-looking theories and interpretations, not settled facts. He’s very clear about the dangers and underweights the optimistic counter-arguments—but that’s kind of his self-assigned role in the ecosystem: to be the loud, slightly terrifying fire alarm in a building full of people counting their future AI profits.
Source: youtube · Topic: AI Governance · Published: 2025-12-04T16:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_UgyYYLacM0YRJHRXXe54AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwSvM64Yp2FM_0zRHF4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgwKqfrV16YItYINF_l4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgwGIzChvluB3KdjLI14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxECJw8Eem0RNQOV9d4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugzp0GOiiZaJmCgNpOl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugyd2brReOaKLgZBrxN4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugx6d3Ih_7GYFiZTq2Z4AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgzV2wY2YZMkVFXZHgt4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgyOnzx7t5JwmBiDvq14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}
]
```
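A raw response like the one above can be parsed and sanity-checked before the per-comment codings are stored. The sketch below is a minimal illustration, not the tool's actual pipeline; the allowed value sets in `SCHEMA` are assumptions inferred from the values visible in this output, not a documented codebook.

```python
import json

# Allowed values per coding dimension.
# NOTE: these sets are inferred from the sample output above, not a documented schema.
SCHEMA = {
    "responsibility": {"distributed", "company", "ai_itself", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"regulate", "none", "unclear"},
    "emotion": {"fear", "approval", "outrage", "indifference", "mixed"},
}

def parse_codings(raw: str) -> list[dict]:
    """Parse the model's JSON array, keeping only records whose
    values fall inside the expected sets for every dimension."""
    records = json.loads(raw)
    return [
        rec for rec in records
        if all(rec.get(dim) in allowed for dim, allowed in SCHEMA.items())
    ]

# One record copied from the raw response above:
raw = ('[{"id":"ytc_UgwSvM64Yp2FM_0zRHF4AaABAg",'
       '"responsibility":"distributed","reasoning":"consequentialist",'
       '"policy":"regulate","emotion":"fear"}]')
print(parse_codings(raw)[0]["policy"])  # regulate
```

A record with an out-of-schema value (e.g. a misspelled emotion) is silently dropped here; a production coder would more likely flag it for re-coding than discard it.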