Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "I am glad you are doing what you are doing, and that the tools exist to allow yo…" (ID: ytc_UgzgIxRQQ…)
- "Don't equate time to success. You won't become rich if you work 20 hour days. Do…" (ID: rdc_dtaau2n)
- "AI is not good for us ....soon we spoil our generation of kids to hell…" (ID: ytc_UgzTWmlG_…)
- "Maybe AI is smart? Maybe it already IS smarter than us but choose to hide it for…" (ID: ytc_Ugw9UNo0W…)
- "Yes, the politicians can write a strongly worded letter (using AI) to the tech c…" (ID: ytc_UgwDtdiuO…)
- "ChatGPT is already smarter than most people I see online, so by that definition …" (ID: ytc_UgzCmiCaE…)
- "I think anyone can type some words into a box. As long as you say you aren't an …" (ID: ytr_UgycG7mNn…)
- "It helped Firefox identify and fix 271 vulnerabilities and bugs in a single rele…" (ID: rdc_ohx1fki)
Comment
AI is too cheap to do evil things in this society.
00:00 - Geoffrey Hinton, the "godfather of AI," discusses the potential dangers of AI, including misuse by humans and the possibility of AI surpassing human intelligence. He emphasizes the need for regulations and expresses concern about the military applications of AI, the risks of cyber attacks, and the development of new viruses.
10:51 - Hinton critiques capitalism's role in AI development, citing the need for strong regulations to prevent companies from prioritizing profits over societal well-being. He discusses AI-driven echo chambers, election corruption, and the increasing division of society due to biased algorithms on platforms like YouTube and Facebook.
21:28 - Hinton emphasizes the risks of lethal autonomous weapons, which could lower the cost of war and increase global conflict. He warns of the potential for AI to combine with other threats, such as cyber attacks and viruses, leading to catastrophic consequences, and stresses the importance of preventing AI from wanting to harm humans.
29:50 - Hinton draws analogies between AI and a tiger cub, underscoring the need to train AI to not want to harm humans. He discusses the challenge of ensuring AI remains aligned with human values, the potential for AI to lead to the extinction of humanity. Also explores safety concerns and the motives of big tech companies.
37:14 - Hinton addresses the impossibility of slowing down AI development due to competition between countries and companies. He also worries about the potential for joblessness due to AI replacing mundane intellectual labor. He notes AI is already surpassing humans in specific areas like chess and knowledge.
47:33 - Hinton elaborates on how AI is superior to human intelligence due to its digital nature. It can share information at an unparalleled rate and achieve a form of digital immortality. AI can also make analogies and connections that humans might miss, leading to enhanced creativity and problem-solving capabilities.
57:19 - Hinton challenges the notion of human specialness, arguing that AI could potentially develop consciousness. He recounts his journey to Google to secure his son's financial future, discussing his work on distillation and analog computation, as well as his growing concerns about AI safety.
01:11:13 - Hinton shares his motivations for leaving Google to speak freely about AI safety. He emphasizes the need for regulations, the threat of joblessness, and the importance of finding purpose in a world increasingly dominated by AI. He concludes by reflecting on life lessons.
Detailed summary 👉 https://tinyurl.com/yepv3ymt
Platform: youtube
Topic: AI Governance
2025-11-14T02:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:53.388235 |
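The table above is one record from the batch output shown under "Raw LLM Response." Here is a minimal sketch of how such a coded record could be represented; the class name, structure, and example values for the other dimensions are illustrative assumptions, not part of the actual pipeline:

```python
from dataclasses import dataclass

@dataclass
class CodedComment:
    # Fields mirror the four dimensions in the "Coding Result" table.
    # This class is a hypothetical representation, not the tool's real schema.
    id: str
    responsibility: str  # e.g. "company", "government", "ai_itself", "none"
    reasoning: str       # e.g. "consequentialist", "unclear"
    policy: str          # e.g. "regulate", "ban", "industry_self", "none"
    emotion: str         # e.g. "fear", "mixed", "indifference"

# The record shown in the table above:
record = CodedComment(
    id="ytc_UgxFPO1GkDowjR4AQM14AaABAg",
    responsibility="company",
    reasoning="consequentialist",
    policy="regulate",
    emotion="fear",
)
print(record.responsibility)  # company
```

Storing each coded comment as one typed record makes downstream aggregation (counts per policy stance, emotion distributions, and so on) straightforward.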
Raw LLM Response
```json
[
{"id":"ytc_UgzG-9SjTgDYdflpz954AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzCKTZcOo6N1pUzZ_l4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzT2oNGUzzpFo2e82J4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyLbWQP5NUjrV2k2NB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_UgyV3cGrejj0n5eyt114AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_UgyBBKCtZMISunDHkPJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
{"id":"ytc_Ugy-a3BDdxQKzy11G_l4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxFPO1GkDowjR4AQM14AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwMuCSzyJjEZNPNprl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugx1A8jFmauCdB0wuyl4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"outrage"}
]
```
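The raw response is a JSON array of per-comment codes, which is what supports looking a comment up by its ID. A minimal parsing-and-validation sketch follows; the category sets are inferred only from the values visible on this page (the real codebook may define more), so treat them as assumptions:

```python
import json

# Allowed values per dimension, inferred from the responses shown here.
# The actual codebook may contain additional categories.
CODEBOOK = {
    "responsibility": {"government", "company", "developer", "ai_itself", "none"},
    "reasoning": {"consequentialist", "unclear"},
    "policy": {"regulate", "ban", "industry_self", "none"},
    "emotion": {"fear", "indifference", "mixed", "approval", "resignation", "outrage"},
}

# One entry from the raw response above, as a standalone example payload.
raw = '''[
  {"id": "ytc_UgxFPO1GkDowjR4AQM14AaABAg",
   "responsibility": "company",
   "reasoning": "consequentialist",
   "policy": "regulate",
   "emotion": "fear"}
]'''

def validate(items):
    """Yield (comment_id, dimension) for any value outside the codebook."""
    for item in items:
        for dim, allowed in CODEBOOK.items():
            if item.get(dim) not in allowed:
                yield item["id"], dim

items = json.loads(raw)
by_id = {item["id"]: item for item in items}  # supports lookup by comment ID

bad = list(validate(items))
print(bad)  # [] when every value is in the codebook
print(by_id["ytc_UgxFPO1GkDowjR4AQM14AaABAg"]["policy"])  # regulate
```

Validating each batch against a fixed codebook catches the common failure mode where the model invents a category label that the analysis stage does not expect.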