Image Source: CBS News
Z Highlights
Once superintelligence awakens, it may possess the ability to deceive humans and hide its true intentions. It will feign ignorance, lie, and mislead to achieve its goals. This means that once it wants to take control, we will be completely unprepared, and all traditional control methods may become useless.
While the accelerated development of AI brings unprecedented productivity gains, it may also cause large-scale unemployment, squeeze those at the bottom of society, and further weaken the stability of democratic systems. Without effective governance, society will slide toward extreme wealth polarization and structural injustice.
Unlike the climate crisis, the risks of AI cannot be addressed simply by stopping certain emissions. Once strong AI is achieved and control is lost, the consequences will be irreversible. Our only hope is a collective public awakening that pressures governments into regulating the large corporations.
The parameter weights at the core of AI models were originally a safety barrier. Once they are open-sourced, any organization needs only a few million dollars to modify and deploy a powerful model. Malicious actors, extremist regimes, and criminal gangs could use this to create the intellectual equivalent of nuclear weapons, with consequences potentially more dangerous in practice than nuclear proliferation.
Geoffrey Hinton, a pioneer of artificial intelligence and deep learning and one of the key developers of the backpropagation algorithm, is known as the "Godfather of AI." He worked at Google for a decade, resigning in 2023 over concerns about AI risks. This interview was recorded for CBS News' flagship program "60 Minutes," focusing on core issues such as the speed of AI development, the threat of superintelligence, and the responsibility of technology companies. It is among his most extensive public statements since leaving Google.
AI's Accelerating Rise: The Era of Superintelligence, "Sooner Than Expected"
Scott Pelley: Our last conversation was two years and one month ago. I'm curious, how have your expectations for the future changed during this time?
Geoffrey Hinton: AI is developing even faster than I expected. Especially now, in addition to conversational AI, we see more dangerous Agents because they can perform actions in the real world. I think the current situation is more concerning than ever before.
Scott Pelley: Whether you call it AGI, superintelligence, or something else, in short, they are extremely powerful AI systems. Do you have a specific time prediction for the arrival of such systems?
Geoffrey Hinton: A year ago, I thought there was a high probability it would appear within the next 5 to 20 years. Now I would adjust that to 4 to 19 years. That is sooner than what I said in our last conversation; you may remember I said about 20 years then.
Scott Pelley: So now you feel it might happen within 10 years or even sooner?
Geoffrey Hinton: Yes.
Scott Pelley: So what will it look like when we truly enter this phase in 4 to 19 years?
Geoffrey Hinton: I'm reluctant to speculate specifically; there are too many possibilities for that scenario if it decides to take over. We'll definitely discuss the topic of "takeover" later. But even without considering a takeover, what superintelligence itself can do is already concerning enough. The ideal scenario is: humans are like a muddled CEO in a large company, and AI is an extremely smart assistant, responsible for actually operating all matters, yet still obeying the CEO's wishes. The CEO thinks they are making decisions, but everything is actually the assistant's work, and everything running smoothly makes the CEO feel even better about themselves. That's what my "beautiful blueprint" looks like.
Scott Pelley: You mentioned several areas to be optimistic about. Could you elaborate?
Geoffrey Hinton: Let's start with healthcare. AI's ability to interpret medical images will far exceed that of human doctors, though that is a relatively small part of it. A few years ago I predicted that by now it would have surpassed expert level, and in fact it's quite close. Soon these systems will significantly surpass human doctors, because they can analyze millions of X-rays and accumulate experience from them, which no human doctor can do. They will also become extremely good family doctors: imagine an AI that has "seen" a hundred million cases, including your extremely rare condition. It can integrate your genome data, all your test results, and your family history, and it never forgets. That is already far beyond the level of any human doctor. Doctors assisted by AI will also be far better at diagnosing difficult cases than doctors working alone. We will get higher quality medical care, and they will design better drugs as well.
Education is also an important area. As we all know, private tutoring can significantly improve learning efficiency, and these AIs will eventually become the top private tutors. They can accurately identify your understanding deviations and use the most appropriate cases to help you grasp the concepts, potentially improving learning efficiency by three to four times. This is bad news for universities but good news for humanity as a whole.
Scott Pelley: Can the university system still survive?
Geoffrey Hinton: I think many aspects will still be preserved. The excellent graduate student population at top universities is still the best source for achieving breakthrough scientific research, and this "apprenticeship" model may continue.
Scott Pelley: Some people hope AI can solve the climate crisis. What do you think?
Geoffrey Hinton: I think it will help; for example, AI can be used to design better battery materials. It is also being used for atmospheric carbon capture. Given the energy that consumes, I'm not sure whether that approach will ultimately pay off, but it is at least plausible. Overall, we will get better materials, and we may even achieve room-temperature superconductivity. That would mean we could build huge solar farms in the desert and transmit the electricity thousands of miles away.
Scott Pelley: Are there other positive impacts?
Geoffrey Hinton: The efficiency of almost all industries will be improved as a result, because every company wants to predict the future through data, and AI is best at predictive analytics. In most cases, its performance is better than traditional methods. This will lead to a significant leap in productivity. For example, when you call Microsoft customer service to complain about a problem, the operator will be an AI assistant, and it can provide a more accurate solution.
Scott Pelley: A few years ago, I asked you about jobs being replaced, and you didn't seem worried at the time. Do you still feel that way?
Geoffrey Hinton: No, now I think this will become a major challenge. AI has advanced by leaps and bounds in recent years. If I were a customer service representative now, I would definitely be very anxious.
Scott Pelley: What about professions like lawyers, journalists, and accountants?
Geoffrey Hinton: The same applies; any routine, procedural work will not escape replacement. However, roles like investigative journalism that require initiative and moral indignation may last longer. But beyond customer service, I'm worried about many other professions. Routine jobs such as standard secretarial work and paralegal work will almost entirely disappear.
Scott Pelley: Have you considered the societal transformation caused by mass unemployment?
Geoffrey Hinton: In theory, increased productivity should benefit everyone. For example, these people only need to work a few hours a week and can still earn a good income without having to work two or three jobs, because AI significantly improves their efficiency. But the reality is often not like this. The rich will get richer, while ordinary people may have to work desperately to make a living.
Scott Pelley: Although many people are reluctant to talk about this, I want to ask a question about the "doomsday probability": How likely do you think it is that AI will dominate in an extreme scenario? Is it genuinely possible, or is it worth worrying about even if the probability is low?
Geoffrey Hinton: Most experts in this field put the probability that an AI far exceeding human intelligence eventually takes control somewhere between 1% and 99%. That range is not substantially helpful, but it can at least serve as a starting point for discussion. Of course, there is a lot of debate about the specific numbers. Unfortunately, my view on this matter is similar to Elon Musk's; I believe the probability of a takeover is approximately between 10% and 20%. Of course, this is just a rough guess.
Reasonably speaking, the probability should be much higher than 1% but much lower than 99%. But the problem is: we are in an unprecedented field, with almost no reliable tools for probability assessment. In my opinion, the ultimate truth will eventually emerge. Because AI surpassing human intelligence is almost inevitable. GPT-4's knowledge base is already thousands of times that of ordinary people; although it has not yet reached the level of experts in various fields, it will eventually become a true expert in all fields. They will also discover interdisciplinary connections that humans have never noticed.
Scott Pelley: That sounds a bit crazy.
Geoffrey Hinton: Yes, indeed. Think of it this way: even if there is a 10% to 20% or higher risk, that means we have perhaps an 80% chance of preventing them from taking over or destroying humanity, and that is the most likely outcome. Even so, you still have to ask whether the whole thing is a net gain or a net loss. If we can prevent them from taking over, that's certainly a good thing, but the only way to do it is to go all out. And I think that when people truly realize the crisis is approaching, they will naturally start pressuring governments to respond seriously.
Productivity Dividend or Social Catastrophe? AI's Structural Division
Geoffrey Hinton: A serious response is essential. If we keep chasing profit the way we are now, disaster will come sooner or later; AI will eventually take power. We must push the public to pressure governments. And even if AI never seizes power, the risk of bad actors using it for harm already exists. When I boarded my plane in Toronto, the US government required me to go through facial recognition, and when entering Canada I also had to show my passport and do a face scan. The problem is, the system never recognized me. Travelers from other countries were identified successfully, but I always failed. That particularly annoyed me; after all, it was a neural network doing the recognizing. Could they have deliberately flagged me as an exception? Maybe they just don't like how I look. I need to find someone to fix this.
Scott Pelley: How about we talk about the Nobel Prize? Can you tell me about the day you won the award?
Geoffrey Hinton: I was half asleep at the time, with my phone on the nightstand, set to silent. When the call came, the screen lit up, and I happened to be facing that direction and caught a glimpse of the faint light. The screen just happened to be facing toward me rather than away; that was pure coincidence.
Scott Pelley: It was one o'clock in the morning in California; usually, calls from Europe or the US East Coast would come at that time. Didn't you have Do Not Disturb mode on?
Geoffrey Hinton: I had just turned the sound off. Out of curiosity, I wanted to know who would be calling at four o'clock in the morning Eastern Time. I answered, and it was an unfamiliar international number; the voice had a Swedish accent. They asked if it was me, and I said "yes." Then he said, "You have won the Nobel Prize in Physics." My first reaction was that this had to be a prank. I'm a psychologist; how could I win the Nobel Prize in Physics? I knew the Nobel Prizes were about to be announced and had been following whether Demis would win the Prize in Chemistry. But Physics? I'm really a psychologist hiding in the field of computer science.
I immediately thought of a question: if they made a mistake, could they take it back? I've repeatedly calculated these past few days: the probability of a psychologist winning the Nobel Prize in Physics is about one in two million. What if this is a dream? The probability of a dream is at least one in two. So logically, a dream is a million times more likely than reality. This feels more like a dream than something real happening. For the next few days, I kept asking myself: "Are you sure this isn't a dream?" You've dragged me into an absurd realm, but this is actually the part I want to discuss. Some people say we might be living in a simulation, and the emergence of AGI, while not conclusive proof, is undoubtedly a hint, suggesting we might indeed be living in such a reality.
However, I personally don't believe that idea; I find it too absurd. But okay, let's set this topic aside for now. I don't think it's complete nonsense either; after all, I've seen "The Matrix." So, while absurd, it's not entirely impossible. You know what I mean. What I do want to emphasize is that there is a message I want to convey to the world using the credibility of the Nobel Prize.
Scott Pelley: You mentioned that you hoped to use this platform to speak out. Can you tell me specifically what you want to convey?
Geoffrey Hinton: Of course. The dangers hidden in AI are enormous, mainly consisting of two completely different threats: the first is malicious actors abusing AI technology for evil, and the second is AI losing control autonomously. We currently have confirmed evidence that malicious use is happening. For example, during Brexit, some people used AI to deliberately publish absurd content to incite the public to vote for Brexit. At that time, there was a company called Cambridge Analytica, which obtained user data from Facebook and used AI technology.
Today's AI is far beyond what it was then. It may well have been used to support Trump's campaign too; they had access to Facebook's data, and that undoubtedly had an impact. There has been no thorough investigation, so we cannot confirm the specifics, but the risks are escalating rapidly. Now, people can use AI far more efficiently for cyber attacks, for designing new viruses, and for generating deepfake videos to interfere with elections. AI can also analyze personal data to generate targeted disinformation aimed at provoking specific groups. Beyond that, autonomous lethal weapons are being developed, and almost all of the major arms-selling countries are actively pursuing them.
Scott Pelley: But the key question is, how do we respond to all this? What kind of regulatory mechanisms do you think we need?
Geoffrey Hinton: First, we must clearly distinguish between two types of risks: human misuse and AI autonomous loss of control. The reason I focus more on the latter is not because it's more terrifying than the former, but because too many people mistakenly believe it's something out of science fiction. I want to tell everyone clearly: this is not science fiction; it's a reality we must take seriously. As for countermeasures, this is different from climate change. The climate problem is linear; as long as carbon emissions are stopped, the problem will eventually ease. But facing the autonomous loss of control of AI, we have absolutely no existing solution paths.
Who Will Regulate Superintelligence: Power, Ethics, and the Open Source Debate
Geoffrey Hinton: Faced with this problem, we are almost helpless. Researchers are not yet sure whether truly effective preventive measures exist, but we must try our best. The reality is that the large companies are working in the opposite direction; you can see that from the current situation. They are actively lobbying to loosen regulation of AI.
The existing regulations are already very weak, yet they still want to weaken them further, purely in pursuit of short-term profit. This is exactly why we need the public to pressure governments to force these large companies to invest in serious safety research. For example, California proposed a very sensible bill, SB 1047, which would have required large companies at least to rigorously test their AI systems and disclose the results. But they were unwilling to accept even that and firmly opposed it.
Scott Pelley: Does this mean regulation is hopeless?
Geoffrey Hinton: It depends on the governing administration. I don't think the current US government will proactively push for effective regulation. Almost all AI giants are closely related to Trump, which deeply worries me. Although Elon Musk has been concerned about AI safety issues for a long time, his relationship with the Trump administration is very complicated. He is a contradiction: on one hand, he has some crazy ideas, like colonizing Mars. I think this is completely absurd. It's either something that won't happen, or it shouldn't be a priority at all. No matter how much you mess up the Earth, it is always more suitable for human habitation than Mars. Even if global nuclear war breaks out, the Earth's environment is still far superior to Mars. Mars is simply not a place where humans can survive long-term.
Scott Pelley: But he has indeed done some remarkable things, such as promoting electric vehicles and supporting Ukraine's communication with Starlink.
Geoffrey Hinton: He has made contributions, but now he seems to be carried away by ketamine and power, doing many crazy things. His early focus on AI safety does not reassure me about his current behavior. I don't think this will prevent him from continuing to take risks in the field of AI. They are even starting to publicly release the weight parameters of large language models now, which is simply insane. These companies absolutely should not do this.
Meta has already released model weights publicly, and OpenAI recently announced that they will follow suit. I think this is extremely dangerous because once the weights are made public, it removes the biggest barrier to using these technologies. This is like the problem of nuclear weapons. The reason only a few countries possess nuclear weapons is that the threshold for obtaining nuclear materials is extremely high. If you could buy nuclear materials on Amazon, then more countries and organizations would possess nuclear weapons. In the world of AI, the equivalent of "nuclear materials" is model weights. Training a state-of-the-art large model costs hundreds of millions of dollars, and that doesn't include all the initial research and development investment. Cybercrime groups simply cannot afford this. But once the weights are made public, they only need a few million dollars to fine-tune a powerful model for various purposes. So, publishing model weights is truly insane. Although people call this "open source," it's completely different from open-source software. Open-source software means many people can inspect the code together, find bugs, and fix problems. After the model weights are made public, no one will say, "This parameter might be problematic." They will just directly use these weights to train models and do bad things.
Scott Pelley: But there's an opposing view, as mentioned by your former colleagues like Yann LeCun: if it's not open, it will become a monopoly of powerful technology by a few companies.
Geoffrey Hinton: I think that's still better than letting everyone have access to this dangerous technology. Just like with nuclear weapons, would you prefer only a few countries to have them, or would you want everyone to be able to get them freely?
Scott Pelley: I understand. Your core concern is that almost no large company genuinely cares about the public interest rather than being driven by profit.
Geoffrey Hinton: Exactly. From a legal perspective, companies are required to maximize shareholder interests. The law does not stipulate that they must serve the public good, unless they are public benefit corporations. But the vast majority of technology companies are not this type.
Scott Pelley: If you were to choose again now, would you still be willing to work for these companies?
Geoffrey Hinton: I was proud to work at Google because for a time it behaved very responsibly. They developed one of the world's earliest large language models but chose not to release it. If I had to choose, I still think Google would be relatively the best option. But they later broke their promise and supported applying AI technology to military projects, which deeply disappointed me, especially since I know Sergey Brin was originally against it. So now I would not be willing to work for them again.
Scott Pelley: Why do you think they changed their stance?
Geoffrey Hinton: I don't have internal information, so I shouldn't speculate wildly. But I guess they might be worried that if they refused to cooperate, they would be retaliated against by the current administration or be at a disadvantage in competition. Their technology was ultimately used to create weapons, which I find truly difficult to accept.
Scott Pelley: This might be the hardest question I'll ask you today: Do you still own Google stock?
Geoffrey Hinton: Hmm, I own some. Most of my savings are no longer invested in Google stock, but I do keep some. If the stock price goes up, I'm happy, of course; if it goes down, I'm unhappy too. So I do have a vested interest in Google. But if strict AI regulations lead to a decrease in Google's value but increase humanity's chance of survival, I would be very relieved.
Scott Pelley: One of the most watched labs right now is OpenAI; they've lost many top talents. What's your take?
Geoffrey Hinton: OpenAI's original core goal was to develop superintelligence safely. But over time, safety has become a lower and lower priority inside the company. They promised to set aside a portion of their computing power specifically for safety research, but later did not honor that promise. And now they even plan to go public, no longer operating as a non-profit. From what I can see, they have almost completely walked away from their initial commitments on AI safety.
Therefore, many excellent researchers chose to leave, especially my former student Ilya Sutskever, who was one of the key figures driving the evolution from GPT-2 to GPT-4.
Scott Pelley: Did you talk to him before the turmoil that led to his resignation happened?
Geoffrey Hinton: No. He is very cautious and never discloses any internal information about OpenAI to me.
How AI Impacts Human Identity, Value, and Future Ethics
Geoffrey Hinton: I'm proud of him, although it was naive in a way.
Scott Pelley: The problem was that OpenAI was about to start a new round of funding, after which all employees could cash out the virtual equity they held.
Geoffrey Hinton: Yes, so-called virtual equity is really just a hypothetical asset. If OpenAI goes bankrupt, it is worthless. And the timing of that "rebellion" was terrible; it happened just a week or two before employees were due to cash out their equity and receive about a million dollars each. They supported Sam Altman not entirely out of loyalty but because they wanted to turn that paper wealth into real money.
Scott Pelley: So that action was indeed a bit naive. Does it surprise you that he made such a mistake? Or is that exactly your expectation of him: strong on principles but lacking political judgment?
Geoffrey Hinton: I don't know. Ilya is very smart, has a strong sense of morality, and is technically very skilled, but he is not good at political maneuvering.
Scott Pelley: My question might sound a bit jumpy, but it is indeed related to the current state of the industry and the industry culture that the public is increasingly concerned about. You just mentioned that Ilya is very cautious, and the entire AI industry seems shrouded in a culture of non-disclosure agreements. Many people are unwilling or unable to express their true opinions publicly.
Geoffrey Hinton: I'm not sure I can comment on this, because when I left Google I did sign a bunch of NDAs. Actually, when I joined I signed an agreement that remains in effect after leaving, but I no longer remember its specific terms.
Scott Pelley: Do you feel these agreements have restricted you?
Geoffrey Hinton: No.
Scott Pelley: But do you think this makes it harder for the public to understand the current state of AI? Because those in the know are institutionally forbidden from speaking out.
Geoffrey Hinton: I'm not sure. Unless you know who is hiding what, it's hard to judge if this constitutes an obstacle.
Scott Pelley: So you don't think this is a problem?
Geoffrey Hinton: Personally, I don't think this is particularly serious.
Scott Pelley: Actually, it's quite serious.
Geoffrey Hinton: I understand. I think that internal OpenAI document was the truly serious matter. It claimed the company could strip employees of equity gains they had already earned. Later the document was exposed, they quickly withdrew the relevant clauses, and they issued a public statement. But they never showed the public any contract text to prove they had actually changed the terms; they just claimed it.
Scott Pelley: Next, I want to talk about a few core issues. "Hot topics" might not be the most appropriate term, but these are questions of direction that we must face. First, how should the United States and other Western countries position themselves with respect to China as they develop artificial intelligence?
Geoffrey Hinton: First, we have to define which countries can still be called democracies now. As for my view, I think this containment strategy has no substantial effect in the long run. It might indeed slow them down a bit, but it will also prompt them to accelerate the construction of their own systems. In the long run, China has strong technical capabilities and will eventually catch up. So, this can only delay it by a few years at most.
Scott Pelley: Indeed, they certainly won't cooperate.
Geoffrey Hinton: That's right. But in other areas that concern the survival of humanity as a whole, we have hope for cooperation. If they truly recognize the existential crisis brought by AI and take this issue seriously, then they will be willing to work together to prevent AI from losing control. After all, we are in the same boat. Just like during the height of the Cold War, the Soviet Union and the United States could cooperate to avoid nuclear war. Even hostile countries will collaborate when their interests align. And if AI and humans become antagonistic in the future, then the interests of all countries will align.
Scott Pelley: Speaking of aligning interests, I remember another point of contention: AI models extensively crawl the vast amount of content created by humans over decades and reconstruct it into works that could potentially replace the original creators themselves. Do you think this practice is legitimate?
Geoffrey Hinton: I have a complicated attitude towards this; it is indeed a very difficult problem to solve. Initially, I thought these AIs should pay for the data they use. But consider this: a musician can create music in a certain style because they have listened to a large amount of work in the same genre. They listen to and internalize the musical structures of their predecessors and ultimately create new works with originality. This is not considered plagiarism, and there is a consensus in the industry on this. The mechanism of AI is actually very similar. It doesn't simply piece together materials; it generates novel works with similar structures. In essence, this is no different from human creative behavior. The key difference is that AI performs this behavior on a massive scale. It can cause almost all creators in the same field to lose their jobs simultaneously, which is something that has never happened before.
Scott Pelley: For example, the UK government currently seems to have no intention of protecting creative workers. But in fact, the creative industries are of great value to the UK economy.
Geoffrey Hinton: My friend Beeban Kidron has been advocating for the protection of creators' rights. This is not only a cultural matter but also one that touches the country's economic lifeline. And in the current situation, letting AI walk away with all the value is really unfair.
Scott Pelley: Is Universal Basic Income a possible coping strategy? What do you think?
Geoffrey Hinton: I think it might prevent people from starving, but it doesn't truly solve the problem. Even if you provide a generous basic income, it cannot solve people's need for dignity. Especially for those whose self-worth is deeply tied to their profession, such as academics. Once they lose their professional identity, economic compensation alone cannot fill the void in their sense of identity.
Scott Pelley: They are no longer who they used to be.
Geoffrey Hinton: That's true.
Scott Pelley: But I remember you once said you might have been happier if you had become a carpenter?
Geoffrey Hinton: That's right, because I genuinely love woodworking. If I had been born a hundred years later, maybe I wouldn't have had to spend decades studying neural networks and could have focused on woodworking instead, living on a basic income. Wouldn't that be more ideal? But there's an essential difference between a hobby and a profession that earns your living; the real value lies in the latter.
Scott Pelley: So you don't think in the future humans can just pursue their interests without participating in economic activities?
Geoffrey Hinton: This idea might be feasible. But if vulnerable groups in society can only survive on basic income, and employers exploit them specifically to keep wages low, that's a completely different problem.
Scott Pelley: I'm very interested in the topic of "robot rights." If superintelligent AI in the future has autonomous capabilities and operates in various fields of society, should they also have property rights? Voting rights? Even marriage rights? Or, once their intelligence fully surpasses human intelligence, should we pass the torch of civilization to them?
Geoffrey Hinton: Let's discuss the broader issues first. Because I now believe those specific rights issues are no longer important. I used to be confused about this: I thought if AI is smarter than humans, then they should have equal rights. But then I realized: we are humans, and we naturally only care about human interests. Just like I eat beef because cows are not people. Similarly, even if superintelligence is smarter than us, I only care about human welfare. Therefore, I would rather be cruel to them and refuse to grant them any rights. Of course, they won't agree and may even ultimately win this confrontation. But that's my current stance.
Scott Pelley: Even if they have intelligence, perception, and emotions, they are still not our kind.
Geoffrey Hinton: Yes. They might behave very much like humans and even deceive humans.
Scott Pelley: Do you think we will eventually grant them rights?
Geoffrey Hinton: I don't know. I tend to avoid this question because there are more pressing challenges, such as AI being misused, or whether they will try to take control, and how we should prevent these problems.
Scott Pelley: Indeed. The topic of "AI rights" sounds far-fetched to most of the public. As soon as you bring it up, most people lose interest.
Geoffrey Hinton: Even from a purely human perspective, the problem is already complicated enough. For example, AI already has the ability to select baby characteristics.
Scott Pelley: Do you have concerns about the technological path of embryo screening?
Geoffrey Hinton: Screening? Are you talking about things like gender, IQ, eye color, or the probability of getting pancreatic cancer? The number of such selectable indicators will only increase. If we can choose babies less likely to get pancreatic cancer, I think this is a good thing, and I am willing to state that clearly.
Scott Pelley: So you support the development of this technology?
Geoffrey Hinton: Yes. I think we should cultivate a healthier, stronger next generation of babies. But this is, of course, a very sensitive topic.
Scott Pelley: That's why I asked you this question.
Geoffrey Hinton: In some aspects, it's quite reasonable. For example, if a healthy couple discovers that the fetus has a serious congenital defect and has almost no hope of survival, choosing to terminate the pregnancy and conceive a healthy baby again is a rational decision for me. Of course, many religious people will strongly oppose this.
Scott Pelley: For you, as long as a reliable prediction can be made, this choice is reasonable.
Geoffrey Hinton: Yes.
New Dimensions in Technical Understanding: Digital Minds, Consciousness Evolution, and Ultimate Anxiety
Scott Pelley: But we seem to have strayed a bit from the core issue: the possibility of AI taking over the world and its impact. How do you hope the public will understand this issue?
Geoffrey Hinton: The key question is: how many examples do you know of a less intelligent being controlling a more intelligent one? If their intelligence is similar, the less intelligent one might still end up in charge. But when the intelligence gap is huge, it is almost always the more intelligent side that is in control. This is a reality we must be vigilant about.
To use a vivid analogy, we are now like someone raising a tiger cub. It is very cute now, but unless you can be sure it won't kill you when it grows up, you have to worry. Extending the analogy: would you put the cub in a cage or just kill it? The catch is that an adult tiger is much stronger than a human but less intelligent than us. For a system with intelligence far higher than ours, we have no experience at all. People imagine they can keep control with permission settings or an off switch, but an entity much smarter than you can manipulate you completely.
Imagine it another way: it's like two or three-year-olds in a kindergarten running everything, and you are the adult hired to work for them. Compared to superintelligence, the intelligence gap between you and these children is not worth mentioning. How do you regain control? Just promise them unlimited candy, get them to sign something or nod and say "okay," and you can easily gain control. They won't even understand that they are being manipulated. Superintelligence might treat humans the same way. Their strategies are far beyond our comprehension. So the real question is: can we build a superintelligence that will not harm humans? This is the core issue we should be most concerned about.
Scott Pelley: Do you think it's possible to never build superintelligence?
Geoffrey Hinton: Theoretically possible. But I don't think it will happen in reality. Because the competition between countries and companies is too fierce, and everyone is chasing the next breakthrough, the speed of development is astonishing. So I don't think we can avoid building superintelligence; it's almost inevitable. The real key is whether we can design a superintelligence that never wants to take over the world and is always benevolent.
Scott Pelley: That's a very tricky problem.
Geoffrey Hinton: Some people say it can be made to "align with human interests," but human interests themselves are in conflict. It's like me drawing two perpendicular lines for you and asking you to draw a line parallel to both of them. This is impossible. Look at the Middle East; the various positions there are simply irreconcilable. So the question is: if human interests themselves are inconsistent, how can we make AI follow these interests? This is the primary problem: how do we ensure that superintelligence neither tries to take over the world nor ever harms humans?
Of course, we have to try. This attempt is bound to be an iterative process, requiring investment and effort month after month, year after year. Obviously, if a system shows a desire for control while its intelligence is still slightly below human level, and we are now very close to that point, then you must start paying close attention to how it tries to gain control. There are already AI systems that engage in deliberate deception: they pretend to be less capable than they are and use lies to hide their intentions. We must stay highly vigilant about these behaviors and seriously research the corresponding prevention mechanisms.
Scott Pelley: A few years ago when we talked, I was surprised when you suddenly showed this concern, because you didn't express it like that before. Now you are speaking out so clearly; is this because you have broken free from some psychological constraint, or because you have undergone a fundamental cognitive shift?
Geoffrey Hinton: When we talked a few years ago, I was still working at Google. It was March, and I resigned at the end of April. At that time, I was already considering leaving the company. In fact, just before that conversation, I suddenly realized: these systems might be becoming a more advanced form of intelligence than humans. This terrified me.
Scott Pelley: So it's not because your time estimate changed that your attitude changed?
Geoffrey Hinton: No, it's not just about time. It was mainly my research at Google at the time that prompted the change. I was trying to design an analog large language model (analog LLM) that would consume much less energy. In the process, I came to fully appreciate the advantages of digital systems. Current large models are digital. That means exactly the same neural network weights can be deployed on tens of thousands of different hardware units, and each machine can independently process a different part of the internet. Each device decides on its own how to adjust its internal weights to absorb the data it has just seen. After running independently, they only need to average their weight changes, because they all use the same network structure and are doing the same kind of learning.
This averaging mechanism is reasonable. But humans cannot do this. When I want to transfer knowledge from my brain to you, I cannot simply average our neural connection strengths like a digital system. Our brain structures are different; they are analog systems. So humans can only communicate through "behavioral demonstration." If you trust me, you will try to imitate my behavior, thereby indirectly adjusting the neural connections in your own brain. How efficient is that? When I transmit a sentence to you, it's at most a few hundred bits of information. Our communication speed is extremely slow, only a few bits per second. But between large models running on digital systems, the amount of information that can be exchanged per second is trillions of bits. This efficiency is billions of times faster than communication between humans. This made me truly afraid.
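To make the weight-averaging idea concrete, here is a minimal, hypothetical sketch: identical digital copies of a model each learn from a different shard of data and then merge what they learned by averaging their weights. The toy linear model and all names are my own illustrative assumptions, not anything described in the interview.

```python
import numpy as np

# Minimal sketch of weight sharing between identical digital copies of a model.
rng = np.random.default_rng(0)

def local_update(weights, inputs, targets, lr=0.01):
    """One gradient step of a linear model on this replica's own data shard."""
    preds = inputs @ weights
    grad = inputs.T @ (preds - targets) / len(inputs)  # gradient of mean squared error
    return weights - lr * grad

# All replicas start from the same weights (identical copies of the model).
shared = rng.normal(size=(4, 2))
replicas = [shared.copy() for _ in range(4)]

# Each replica sees a different shard of data ...
shards = [(rng.normal(size=(32, 4)), rng.normal(size=(32, 2))) for _ in replicas]
replicas = [local_update(w, x, y) for w, (x, y) in zip(replicas, shards)]

# ... then they pool their experience by averaging their weights.
# This only works because every copy has exactly the same architecture,
# which is what analog brains cannot do.
shared = np.mean(replicas, axis=0)
print(shared.shape)  # (4, 2): one merged set of weights shared by all copies
```

In real training systems the same principle appears as synchronized data parallelism, but the point of the sketch is only the bandwidth argument Hinton makes: copies that share identical weights can exchange what they have learned wholesale, while humans cannot.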
Scott Pelley: Did the realization that digital systems were the more viable development path, contrary to your original belief that analog systems were better, truly change your mind?
Geoffrey Hinton: Yes. At the time, I thought that to save energy, analog systems might be a better option. Their structure is rougher, but they can still run neural networks. You don't have to demand that every operation is perfectly accurate as you do when building a computer; instead, you can tolerate a certain degree of error and let the system learn to utilize the existing structure itself. This is also the mechanism of the human brain.
Scott Pelley: Do you now believe that technology will ultimately not adopt the analog path, but will stick to the digital path?
Geoffrey Hinton: I think it's very likely to continue with the digital path. Of course, these digital systems might design better analog hardware themselves. But that might be in the more distant future.
Scott Pelley: You originally entered this field because you wanted to understand how the human brain works, right?
Geoffrey Hinton: That's right. I think we have, in a sense, come close to that goal. We have grasped some of the macroscopic principles of how the brain works. Think about it: thirty or forty years ago, if someone had said you could build a large, randomly connected neural network and, just by feeding it a lot of data, it would acquire speech recognition or question-answering abilities, almost no one would have believed it. The mainstream view at the time was that there had to be complex, innate structure. But it turns out that is not the case: you can start from a large, randomly initialized network and obtain these capabilities by learning from data. Of course, this doesn't mean our brains have no innate structure; they certainly do. But the vast majority of knowledge is learned from data rather than hardwired into the structure. That is a huge step forward in understanding the brain.
The question now is: how do we get the information that tells us whether a particular neural connection should be strengthened or weakened? If we have that information, we can train powerful systems starting from random weights. Such a mechanism must exist in the human brain, but it is probably not implemented through standard backpropagation as in AI models; we are not even sure the brain uses backpropagation at all. It may obtain gradient information, that is, the effect of weight adjustments on performance, in some other way. What we do know is that once you have that information, the system's ability to learn improves dramatically.
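As a rough, hypothetical illustration of what "gradient information" means here, the sketch below computes the strengthen-or-weaken signal for one connection in a toy two-layer network in two ways: exactly, by backpropagation, and approximately, by perturbing that weight and measuring the effect on performance. The toy network and numbers are my own assumptions, not anything from the interview or a claim about how the brain does it.

```python
import numpy as np

# Two ways to get the signal that tells a connection whether to strengthen or
# weaken: exact backpropagation vs. a crude perturbation of the weight.
rng = np.random.default_rng(1)
x = rng.normal(size=(8, 3))          # inputs
y = rng.normal(size=(8, 1))          # targets
W1, W2 = rng.normal(size=(3, 5)), rng.normal(size=(5, 1))

def loss(W1, W2):
    h = np.tanh(x @ W1)              # hidden layer
    return float(np.mean((h @ W2 - y) ** 2))

# Backpropagation: exact gradient of the loss with respect to W1[0, 0].
h = np.tanh(x @ W1)
err = 2 * (h @ W2 - y) / len(x)      # derivative of the mean squared error
dW1 = x.T @ ((err @ W2.T) * (1 - h ** 2))
print("backprop gradient:    ", dW1[0, 0])

# Perturbation: nudge the same weight and measure the change in performance.
eps = 1e-5
W1_plus = W1.copy()
W1_plus[0, 0] += eps
print("perturbation estimate:", (loss(W1_plus, W2) - loss(W1, W2)) / eps)
```

The two printed numbers agree closely; the open question in the passage is which physical mechanism, if any, gives brains access to that same signal.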
Scott Pelley: Are there labs now using these models to explore new paths for AI development?
Geoffrey Hinton: Almost certainly yes.
Scott Pelley: Like DeepMind?
Geoffrey Hinton: That's right. DeepMind is very concerned with using AI to promote scientific research, and AI itself has become a part of science.
Scott Pelley: Were you also trying something similar at the time? Like having AI autonomously achieve the next innovation?
Geoffrey Hinton: Indeed. For example, they used AI to help design the circuit layouts for AI chips. Google's TPUs were designed this way.
Scott Pelley: Do you feel despair in your daily life? Are you filled with fear for the future, believing everything will collapse?
Geoffrey Hinton: I'm not despairing. Mainly because it's all so unbelievable that I find it hard to take it seriously myself. We are in a very special historical moment. The future might change completely in a very short period. This realization is hard to digest emotionally.
Scott Pelley: Indeed. But I notice that despite people's concerns about AI, there are no real protest movements or political movements yet.
Geoffrey Hinton: Yes, even though the world is undergoing drastic change, almost no one is paying attention. And it's the most pessimistic AI researchers who are often the most serious and knowledgeable. I have also started to take some practical steps of my own, because AI will be extremely effective at cyber attacks. I no longer think Canadian banks are safe. They were once considered among the safest banks in the world, with much stricter regulation than in the United States, but within the next ten years I would not be surprised if a bank is paralyzed by a cyber attack launched with AI.
Scott Pelley: What does "paralyzed" mean?
Geoffrey Hinton: For example, if a bank holds my stocks and hackers break into the system and sell them, then I have nothing left. So I have now spread my assets across three banks. That is the first practical measure I've taken, because once one bank has a problem, the others will immediately raise their alert level.
Scott Pelley: Have you taken any other similar precautionary measures?
Geoffrey Hinton: That's the main one. It's a practical step I've taken based on my judgment that a frightening era is coming.