Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Here are the key takeaways from the video “Godfather of AI: I Tried to Warn Them, But We’ve Already Lost Control! – Geoffrey Hinton”:

🔑 1. AI Has Advanced Faster Than Expected
Geoffrey Hinton, a pioneer of neural networks, expresses concern that AI systems are learning in ways we don't fully understand. He believes we’ve reached a point where AI may be more capable than we anticipated, particularly in generalisation and reasoning.

⚠ 2. Loss of Human Control
Hinton warns that we may already be losing control of advanced AI systems. He fears that AI could start optimising for goals misaligned with human values, potentially without oversight or intervention.

🤖 3. AI Could Surpass Human Intelligence
He argues that it's plausible AI could become more intelligent than humans across the board. Once AI systems are able to improve themselves (recursive self-improvement), they could rapidly outpace human understanding and regulation.

🧠 4. Neural Networks Mimic the Brain—But Differ in Speed
Hinton explains that modern neural networks function similarly to the brain, but can process and share information much faster. This could allow AI to collaborate and evolve in ways no single human or group of humans can.

🛑 5. He Supports Slowing AI Development
Despite his role in building the foundations of deep learning, Hinton now supports slowing down development to allow time for regulation and ethical considerations. He resigned from Google to freely speak about the dangers.

🧭 Final Thought:
Hinton’s central message is clear: we’ve created something potentially uncontrollable, and we need urgent global cooperation to manage the risks before it's too late.
youtube AI Governance 2025-06-24T15:3…
Coding Result
Dimension       Value
Responsibility  none
Reasoning       unclear
Policy          none
Emotion         indifference
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id": "ytc_Ugz4DWsCx4emYSKQPDB4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_Ugz5zA0n0jWiPIA41bV4AaABAg", "responsibility": "government", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgzdDbK9af1F8144dOB4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_Ugx_ZFfXo0nuSkxcrhx4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgzzeNapRW49dzpdtaB4AaABAg", "responsibility": "company", "reasoning": "contractualist", "policy": "liability", "emotion": "approval"},
  {"id": "ytc_UgyVTW205OEVnH3TC6Z4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugxg2KE00YodygKrEpl4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgwdOnBYT3bHkUJFm_14AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgyJ6hJZu4pzEpt1DrV4AaABAg", "responsibility": "government", "reasoning": "deontological", "policy": "regulate", "emotion": "resignation"},
  {"id": "ytc_Ugz33Xwc_hvJCKBUpft4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"}
]
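The raw response above is a JSON array of per-comment codes. A minimal sketch of how such output could be parsed and validated downstream — the helper name `parse_codes` and the dimension list are assumptions for illustration, not part of the original pipeline (the sample here is abridged to two records):

```python
import json

# Abridged raw model output: a JSON array of per-comment code records.
raw = """[
  {"id": "ytc_Ugz4DWsCx4emYSKQPDB4AaABAg", "responsibility": "none",
   "reasoning": "unclear", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_Ugz5zA0n0jWiPIA41bV4AaABAg", "responsibility": "government",
   "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"}
]"""

# The four coding dimensions seen in the result table above.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def parse_codes(raw_response: str) -> dict:
    """Map each comment id to its coded dimensions, rejecting malformed records."""
    codes = {}
    for rec in json.loads(raw_response):
        if "id" not in rec or any(d not in rec for d in DIMENSIONS):
            raise ValueError(f"malformed record: {rec!r}")
        codes[rec["id"]] = {d: rec[d] for d in DIMENSIONS}
    return codes

codes = parse_codes(raw)
print(codes["ytc_Ugz5zA0n0jWiPIA41bV4AaABAg"]["policy"])  # regulate
```

Validating every record before use matters here because LLM output is not guaranteed to be well-formed JSON with all four dimensions present.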