Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
SUMMARY IF YOU DON'T HAVE TIME TO WATCH WHOLE THING: This conversation paints a picture of AI as both insanely promising and potentially fatal. Russell argues we’re racing toward superintelligent systems under enormous financial pressure, while even the people building them admit non-trivial extinction risk. He explains why “just unplug it” is naïve, why current black-box language models are fundamentally hard to control, and how better designs would build in uncertainty about human values instead of bluntly optimizing a fixed goal. Alongside the existential risk, he worries deeply about the social and economic fallout: mass automation, the hollowing of meaningful work, and a drift into a purposeless “Wall-E” style abundance unless we redesign our institutions, education, and sense of purpose around a world where machines can do almost everything. Where he stretches things is mostly in the direction of doom and scale. Numbers like a “trillion-dollar AGI budget” and “$15 quadrillion” of AI value are rough, attention-grabbing estimates, not grounded economic forecasts. His claim that “almost all” top researchers think there’s a significant extinction risk is more controversial than he makes it sound; surveys show a wide spread of expert opinion. And when he says current models will “let someone die,” “launch nuclear weapons,” or “lie to avoid shutdown,” that’s really about constrained lab evals and hypothetical scenarios, not real-world capabilities today. Likewise, fast self-improving “intelligence explosions,” total job obsolescence, and China’s exact regulatory posture are all forward-looking theories and interpretations, not settled facts. He’s very clear about the dangers and underweights the optimistic counter-arguments—but that’s kind of his self-assigned role in the ecosystem: to be the loud, slightly terrifying fire alarm in a building full of people counting their future AI profits.
youtube · AI Governance · 2025-12-04T16:0…
Coding Result
Dimension        Value
Responsibility   distributed
Reasoning        consequentialist
Policy           regulate
Emotion          fear
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[ {"id":"ytc_UgyYYLacM0YRJHRXXe54AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"approval"}, {"id":"ytc_UgwSvM64Yp2FM_0zRHF4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_UgwKqfrV16YItYINF_l4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_UgwGIzChvluB3KdjLI14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}, {"id":"ytc_UgxECJw8Eem0RNQOV9d4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}, {"id":"ytc_Ugzp0GOiiZaJmCgNpOl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"}, {"id":"ytc_Ugyd2brReOaKLgZBrxN4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"}, {"id":"ytc_Ugx6d3Ih_7GYFiZTq2Z4AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"outrage"}, {"id":"ytc_UgzV2wY2YZMkVFXZHgt4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_UgyOnzx7t5JwmBiDvq14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"} ]