Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
Dear Dr. Hinton, I agree with you about the degree of the danger. I disagree about the mode, that "they won't need us any more". I agree with your colleague about humans being in control. The issue is _what humans_ ! I think that use of the AI results, partly requested by humans with some level of control of the AI, can then be used by humans to endanger us all. For example the Star Trek episode of AI doomsday machines. They don't care about needing or not needing us, they just follow their methods to our detriment if the scenario was constructed.
I do think that this level, use by those with destructive ends (whether intentional or unintentional), is the most dangerous either short or long term. (7:21 "People using AI" distinction as you aptly point out.) Computers from the start have been "more intelligent" than us at some specific tasks, first for example at doing multiplication. That range of capabilities of systems, some labeled AI, has of course been expanding, and the range of task categories has vastly expanded with modern AI systems.
What I think AI researchers do is what is called _anthropomorphize_ , to give too much of a consideration of human characteristics. You perhaps need 3 elements to get AI taking control:
1) Goal seeking capability that actually directs the behavior thusly. This area of research is very different from just the problem solving capability of current AI.
2) Access to hardware and energy sources to operate, even under opposition of humans.
3) Physical access to destructive means.
I think that even #1 is far in the future before an AI could make such decisions in a way that affects a large population. (Sure, "HAL" in 2001, but that was limited scope.) Having all three elements is very much a human decision. Now take my example above, "doomsday machine", yes it does fit all 3, but having been specifically constructed with goal access and energy but not necessarily intelligence that does the real #1 operation of working harmoniously at first then later deciding that humans are not needed. A light switch does logic and "does not need us" but not what you mean, the "any more" part of the sentence. And yes, I just watched one of the early "War Games" movies, but find those unlikely scenarios, and if possible then not really matching your claim but rather the "doomsday" scenario I just described which is a different unfortunate human choice but not due to unlimited expansion of AI capability. (And nullifying those movies' plot theme of how to get out of trouble with the doomsday machine which is not really that intelligent, another example of anthropomorphizing.)
I'm about 5 years younger than you, and was at university doing symbolic AI research at the same time and place as I. J. Good, but unfortunately did not to my recollection meet him. (First to propose "singularity" for those not familiar, a term used for the condition of concern here.) Would have been very interesting, his name came up regularly as someone of interest in our activities.
We have indeed "lost control" in the sense that humans will continue the development unconstrained by regulations or normal social pressure, as it will occur somewhere. (I am certain if I had been in the scope of the Manhattan project, I would have opted to work on that, for example, regardless all the similar concerns. You are not going to get humans to stop working on AI because this is so interesting and possibly rewarding in individuals' cases.)
Luckily I do my own plumbing.
youtube · AI Governance · 2025-06-17T02:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | mixed |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
{"id":"ytc_UgwxesgSMgI905QnLM14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgwbpknyK4SIznDyDVB4AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugw17LyX8IWCkYbziaZ4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxblqLS10epPVTBt3B4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugy4Jy5b5bfuC6iVjGt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugxo-xoUCL9BnB8TKAt4AaABAg","responsibility":"government","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgxXJ3Bn1SyIPZsmZZ14AaABAg","responsibility":"government","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytc_Ugz7-GNzPeRcUflb5o54AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgyfXPfRxTlpt5ASqJN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgyIjkuD0-3boodyuO54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"}
]
```
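
A batch response like the one above can be parsed into a per-comment lookup with a few lines of Python. This is a minimal sketch, not the pipeline's actual code: the allowed values for each dimension are inferred from the codes visible in this dump, so the `ALLOWED` vocabularies (and the `parse_batch` helper itself) are assumptions for illustration.

```python
import json

# Dimension vocabularies inferred from the values seen in this dump;
# the real coding scheme may include values not observed here.
ALLOWED = {
    "responsibility": {"none", "company", "government", "ai_itself", "distributed"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed"},
    "policy": {"none", "unclear", "regulate", "ban"},
    "emotion": {"approval", "mixed", "indifference", "fear", "outrage"},
}

def parse_batch(raw: str) -> dict:
    """Parse a raw LLM batch response into {comment_id: codes}, rejecting
    any dimension value outside the expected vocabulary."""
    coded = {}
    for row in json.loads(raw):
        cid = row["id"]
        for dim, allowed in ALLOWED.items():
            if row[dim] not in allowed:
                raise ValueError(f"{cid}: unexpected {dim} value {row[dim]!r}")
        coded[cid] = {dim: row[dim] for dim in ALLOWED}
    return coded

# One row taken verbatim from the response above.
raw = '''[
  {"id": "ytc_Ugz7-GNzPeRcUflb5o54AaABAg", "responsibility": "distributed",
   "reasoning": "mixed", "policy": "regulate", "emotion": "fear"}
]'''
codes = parse_batch(raw)
print(codes["ytc_Ugz7-GNzPeRcUflb5o54AaABAg"]["policy"])  # regulate
```

Validating against a closed vocabulary at parse time catches the common failure mode of model-coded data, where the LLM invents a label outside the codebook, before it silently enters the coded table.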