Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
This video features an in-depth conversation between Steven Bartlett and Dr. Roman Yampolskiy, a leading computer scientist and founder of the term "AI safety." The discussion focuses on the existential risks posed by the rapid development of superintelligent AI and the lack of effective control mechanisms.

### Key Takeaways & Predictions

• The AI Threat: Dr. Yampolskiy warns that we are currently creating "alien intelligence" without knowing how to ensure it remains safe or aligned with human goals (2:42 - 3:43). He estimates that there is a high probability of catastrophic outcomes if these systems are not properly contained (4:36 - 4:56).
• Job Displacement: By 2027-2030, Dr. Yampolskiy predicts a massive shift in the labor market, with up to 99% of jobs potentially being affected by AI automation (0:30 - 0:45, 11:32). He argues that we are moving toward a future where human contribution to traditional work is increasingly obsolete (12:04 - 12:51, 1:23:38).
• The Race for Superintelligence: The guest is highly critical of major AI labs and leaders like Sam Altman, suggesting they prioritize competitive dominance and profit over safety, often violating the very guardrails once proposed for responsible AI development (1:04 - 1:24, 42:32 - 44:32).
• Simulation Theory: Dr. Yampolskiy expresses near-certainty that we are living in a simulation, drawing parallels between the structure of religious beliefs and the concept of an engineer or programmer managing a created world (56:10 - 1:01:45).
• The Path Forward: He emphasizes that AI safety is the most important issue facing humanity today—a "meta-solution" that could either resolve existing global crises or trigger extinction (28:51 - 29:54). He advocates for greater transparency, moral responsibility, and global efforts to ensure we stay in control (46:56 - 48:09, 1:21:36 - 1:22:00).
Platform: youtube · Topic: AI Governance · Posted: 2026-04-01T06:5…
Coding Result
Dimension        Value
---------------  --------------------------
Responsibility   unclear
Reasoning        unclear
Policy           unclear
Emotion          fear
Coded at         2026-04-26T23:09:12.988011
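For downstream analysis, each coding result can be modeled as a small validated record. Below is a minimal Python sketch; the label sets are inferred solely from values visible on this page (the project's actual codebook may define more categories), and the names `CodingResult` and `*_LABELS` are hypothetical, not part of this pipeline.

```python
from dataclasses import dataclass

# Allowed labels per dimension, inferred from the values visible in this
# section; the full codebook may include additional categories (assumption).
RESPONSIBILITY_LABELS = {"ai_itself", "company", "user", "none", "unclear"}
REASONING_LABELS = {"consequentialist", "deontological", "virtue", "contractualist", "unclear"}
POLICY_LABELS = {"regulate", "ban", "liability", "none", "unclear"}
EMOTION_LABELS = {"fear", "outrage", "approval", "indifference", "mixed"}


@dataclass(frozen=True)
class CodingResult:
    comment_id: str
    responsibility: str
    reasoning: str
    policy: str
    emotion: str

    def __post_init__(self) -> None:
        # Reject any label outside the inferred codebook rather than
        # letting an off-vocabulary value leak into the analysis.
        if self.responsibility not in RESPONSIBILITY_LABELS:
            raise ValueError(f"bad responsibility: {self.responsibility!r}")
        if self.reasoning not in REASONING_LABELS:
            raise ValueError(f"bad reasoning: {self.reasoning!r}")
        if self.policy not in POLICY_LABELS:
            raise ValueError(f"bad policy: {self.policy!r}")
        if self.emotion not in EMOTION_LABELS:
            raise ValueError(f"bad emotion: {self.emotion!r}")
```

The result shown above, for instance, would round-trip as `CodingResult("ytc_Ugw0xK6x7upgOPX3qs94AaABAg", "unclear", "unclear", "unclear", "fear")`.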
Raw LLM Response
[ {"id":"ytc_UgwWtvqc2azvkfOnH6R4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}, {"id":"ytc_Ugxs84aNJbmJrjrnJw14AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"}, {"id":"ytc_Ugypx6OhtVd_Od-8oRt4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"}, {"id":"ytc_UgzLmUP7UFlNumbuWAp4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"ban","emotion":"outrage"}, {"id":"ytc_UgwdqkmUQXaABz0I3x94AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"}, {"id":"ytc_UgzrGK6jFacOnX82VCd4AaABAg","responsibility":"company","reasoning":"contractualist","policy":"liability","emotion":"mixed"}, {"id":"ytc_UgzlAxh6ftq7q_Sg9Fp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}, {"id":"ytc_Ugw0xK6x7upgOPX3qs94AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"fear"}, {"id":"ytc_UgzIzGS52XEXeVA2XS14AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"indifference"}, {"id":"ytc_UgzJBqnOpuf2FtfYhdB4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"} ]