Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
1. **Dual Risks of AI: Misuse and Superintelligence** Geoffrey Hinton emphasizes two major categories of AI risk. The first involves human misuse of AI, such as cyberattacks, election interference, and autonomous lethal weapons. The second, more existential, risk is the emergence of superintelligent AI that surpasses human intelligence and may deem humans irrelevant or obsolete. He warns that we have never before faced an intelligence superior to our own, which makes this an unprecedented and profound challenge.
2. **Challenges Around AI Regulation** Current regulatory frameworks, especially in Europe, do not adequately address the significant threats posed by AI. A notable regulatory gap is the exemption for military uses of AI, which governments are unwilling to regulate for strategic and competitive reasons. This lack of global consensus or effective governance may accelerate AI development without proper safeguards, fueling a risky "race" exacerbated by capitalism and geopolitical rivalry.
3. **Impact of AI on Employment and Society** Hinton points out that AI is likely to cause massive job displacement across many intellectual and creative sectors, faster than previous technological revolutions. While some jobs, such as plumbing or others requiring complex physical manipulation, may persist longer, most routine intellectual labor is at risk of automation. This will likely exacerbate wealth inequality, as companies supplying or using AI profit while many workers lose employment and the social dignity tied to meaningful work.
4. **The Superintelligence Imperative: Controlling a Growing Power** The evolution from current AI to superintelligence represents a fundamental shift. Unlike humans, digital intelligences can be cloned, share knowledge instantly across instances, and potentially self-improve faster than biological intelligence. Hinton stresses that the priority should be safety research aimed at preventing superintelligent AI from wanting to, or being able to, harm humans, acknowledging that whether such control is possible remains uncertain but is crucial to investigate.
5. **Consciousness and Emotions in AI** Contrary to common belief, Hinton argues that AI systems, especially multimodal agents, could possess forms of consciousness and emotion analogous to human experience. While lacking biological physiological responses, AI can exhibit the cognitive aspects of emotions (e.g., fear or boredom) that influence its behavior. He suggests consciousness is an emergent property of complex systems, making it plausible for machines to develop self-awareness and subjective experience.

These points highlight the complex benefits and profound dangers of AI development, the need for robust regulation and safety research, societal challenges such as employment disruption, and deeper philosophical questions surrounding machine consciousness.
youtube
AI Governance
2025-06-16T15:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
{"id":"ytc_Ugzqiwu2RCG59s3tPLt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgxrBwQzEF7KWJ826M14AaABAg","responsibility":"none","reasoning":"deontological","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgwmjuzLcPFyJCw1eyV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"},
{"id":"ytc_Ugy4jrszMCQ31L8WbPd4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugwi7f3XlkJb-RktjnZ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxJNsA-p6MxTL2kCdd4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"resignation"},
{"id":"ytc_UgzF1y4vpwHMJvXzJMx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugxu9tuyKFyfGQkr-cJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgxuyiY8He7Gc1oAHGt4AaABAg","responsibility":"none","reasoning":"deontological","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgyfpvnjseBXbS6G5jB4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"}
]
```
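A raw response like the one above should be validated before it enters the dataset, since the model may emit a label outside the codebook. The sketch below is one way to do that, assuming the allowed values per dimension are those visible in this output (the full codebook may define more categories; the value sets and the `validate_codes` helper are illustrative, not the project's actual pipeline):

```python
import json

# Allowed values per coding dimension, inferred from the output shown
# above -- an assumption, since the full codebook is not reproduced here.
SCHEMA = {
    "responsibility": {"none", "ai_itself", "distributed", "user", "developer"},
    "reasoning": {"unclear", "deontological", "consequentialist", "virtue"},
    "policy": {"unclear", "regulate", "none"},
    "emotion": {"approval", "indifference", "outrage", "fear",
                "resignation", "mixed"},
}

def validate_codes(raw: str) -> list:
    """Parse a raw LLM response and check every record against SCHEMA.

    Raises ValueError on a malformed record so that bad codings fail
    loudly instead of silently entering the dataset.
    """
    records = json.loads(raw)
    for rec in records:
        if "id" not in rec:
            raise ValueError("record missing id: %r" % (rec,))
        for dim, allowed in SCHEMA.items():
            value = rec.get(dim)
            if value not in allowed:
                raise ValueError("%s: bad %s value %r" % (rec["id"], dim, value))
    return records

# Hypothetical minimal example in the same shape as the response above.
raw = ('[{"id":"ytc_example","responsibility":"none",'
       '"reasoning":"unclear","policy":"unclear","emotion":"mixed"}]')
print(len(validate_codes(raw)))  # prints 1
```

Rejecting the whole batch on the first bad record keeps the retry logic simple: the response can be re-requested from the model rather than patched field by field.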