Raw LLM Responses

Inspect the exact model output behind each coded comment.

Comment
Dear Dr. Hinton, I agree with you about the degree of the danger. I disagree about the mode, that "they won't need us any more". I agree with your colleague about humans being in control. The issue is _what humans_! I think that use of the AI results, partly requested by humans with some level of control of the AI, can then be used by humans to endanger us all. For example the Star Trek episode of AI doomsday machines. They don't care about needing or not needing us, they just follow their methods to our detriment if the scenario was constructed. I do think that this level, use by those with destructive ends (whether intentional or unintentional), most dangerous either short or long term. (7:21 "People using AI" distinction as you aptly point out.)

Computers from the start have been "more intelligent" than us at some specific tasks, first for example at doing multiplication. That range of capabilities of systems, some labeled AI, has of course been expanding and the range of task categories has vastly expanded with modern AI systems. What I think AI researchers do is what is called _anthropomorphize_, to give too much of a consideration of human characteristics.

You perhaps need 3 elements to get AI taking control:

1) Goal seeking capability that actually directs the behavior thusly. This area of research is very different from just the problem solving capability of current AI.
2) Access to hardware and energy sources to operate, even under opposition of humans.
3) Physical access to destructive means.

I think that even #1 is far in the future before an AI could make such decisions in a way that affects a large population. (Sure, "HAL" in 2001, but that was limited scope.) Having all three elements is very much a human decision.

Now take my example above, "doomsday machine", yes it does fit all 3, but having been specifically constructed with goal access and energy but not necessarily intelligence that does the real #1 operation of working harmoniously at first then later deciding that humans are not needed. A light switch does logic and "does not need us" but not what you mean, the "any more" part of the sentence. And yes, I just watched one of the early "War Games" movies, but find those unlikely scenarios, and if possible then not really matching your claim but rather the "doomsday" scenario I just described which is a different unfortunate human choice but not due to unlimited expansion of AI capability. (And nullifying those movies' plot theme of how to get out of trouble with the doomsday machine which is not really that intelligent, another example of anthropomorphizing.)

I'm about 5 years younger than you, and was at university doing symbolic AI research at the same time and place as I. J. Good, but unfortunately did not to my recollection meet him. (First to propose "singularity" for those not familiar, a term used for the condition of concern here.) Would have been very interesting, his name came up regularly as someone of interest in our activities.

We have indeed "lost control" in the sense that humans will continue the development unconstrained by regulations or normal social pressure, as it will occur somewhere. (I am certain if I had been in the scope of the Manhattan project, I would have opted to work on that, for example, regardless all the similar concerns. You are not going to get humans to stop working on AI because this is so interesting and possibly rewarding in individuals' cases.) Luckily I do my own plumbing.
Source: youtube · AI Governance · 2025-06-17T02:5…
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  distributed
Reasoning       mixed
Policy          regulate
Emotion         fear
Coded at        2026-04-27T06:24:59.937377
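Each coded comment reduces to one record: four categorical dimensions plus a coding timestamp. A minimal sketch of that record type, assuming only the label sets visible in this batch (the class and field names are illustrative, and the actual codebook may define more values):

```python
from dataclasses import dataclass

# Label sets observed in this batch; the real codebook may allow more values.
RESPONSIBILITY = {"none", "company", "government", "ai_itself", "distributed"}
REASONING = {"consequentialist", "deontological", "virtue", "mixed"}
POLICY = {"none", "regulate", "ban", "unclear"}
EMOTION = {"approval", "fear", "outrage", "indifference", "mixed"}

@dataclass
class CodingResult:
    comment_id: str      # e.g. "ytc_Ugz7-GNzPeRcUflb5o54AaABAg"
    responsibility: str  # who the comment holds responsible for AI risk
    reasoning: str       # style of moral reasoning in the comment
    policy: str          # policy stance expressed
    emotion: str         # dominant emotion
    coded_at: str        # ISO 8601 timestamp of the coding run
```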
Raw LLM Response
[ {"id":"ytc_UgwxesgSMgI905QnLM14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"approval"}, {"id":"ytc_UgwbpknyK4SIznDyDVB4AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"mixed"}, {"id":"ytc_Ugw17LyX8IWCkYbziaZ4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"indifference"}, {"id":"ytc_UgxblqLS10epPVTBt3B4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}, {"id":"ytc_Ugy4Jy5b5bfuC6iVjGt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}, {"id":"ytc_Ugxo-xoUCL9BnB8TKAt4AaABAg","responsibility":"government","reasoning":"deontological","policy":"unclear","emotion":"outrage"}, {"id":"ytc_UgxXJ3Bn1SyIPZsmZZ14AaABAg","responsibility":"government","reasoning":"deontological","policy":"unclear","emotion":"outrage"}, {"id":"ytc_Ugz7-GNzPeRcUflb5o54AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"regulate","emotion":"fear"}, {"id":"ytc_UgyfXPfRxTlpt5ASqJN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}, {"id":"ytc_UgyIjkuD0-3boodyuO54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"} ]