One of the Greatest AI Interviews of the Century: AI Safety, Agents, OpenAI, and Other Key Topics

Last night, YouTube's multi-million subscriber channel The Diary Of A CEO released a new in-depth interview with Geoffrey Hinton, the Turing Award and Nobel laureate known as the "Godfather of AI."

The two primarily discussed the development, safety, and application of AI in a conversational format. While AI poses real risks, it is enormously useful for improving the quality of scientific research, work, and daily life. It will also take away some jobs, even as it creates entirely new ones.

OpenAI, the initiator and leader of this generation's AI revolution, also featured in the conversation. Hinton talked about his most accomplished student, OpenAI co-founder and former chief scientist Ilya Sutskever, as well as the rapidly developing field of AI agents.

Full Interview

The interview runs 1 hour and 30 minutes. Because of its length, "AIGC Open Community" has not added subtitles; instead, we have compiled the content of the interview below for your reference.

At the time of writing, the video has been on YouTube for less than a day and has already drawn over 750,000 views and more than 4,500 comments, with a very positive reception. Whether you build AI or simply use it, there is a lot to take away from this conversation.

Let's look at how viewers responded to this remarkable interview.

One viewer wrote that this interview should be preserved as a historical artifact.

This is the first or second most important conversation I've seen on your channel. Your pauses in the interview allowed the answers to sink in, which was profoundly meaningful.

This is the best interview I've ever watched; Hinton was very sincere.

My son has had a theory for years that it's dangerous to make AI think like humans. Instead, he hopes we create AI that thinks like a "dog," so it will love and serve us.

Another viewer, whose life has been deeply affected by AI, posted a very long comment: "I was a freelance writer for ten years, but I lost that profession to the rise of AI. The following year I tried to transition into the art industry, but that was derailed as well. The year after that I started training in medical coding, but I was interrupted during the certification process when I realized AI was gutting the creative market. I then bounced between various gig jobs, and fortunately found a job training AI. Although I hate the job, I'm grateful for the income it provides."

"I haven't been able to find another job; everyone is using AI now. I originally thought transitioning to the tech field would help, so I started learning more about AI and went back to school last year to study programming. However, I quickly realized what was happening."

"AI learns much faster than I do. So I dropped out. I'm not willing to spend money on an education that won't benefit me at all. Even if I completed a programming degree, by the time I graduated, all entry-level positions might already be replaced by AI, and school curricula are far behind the cutting edge of technological development—without experience, no one would hire me."

"Honestly, I don't know what else I can do. I have decades of chronic injuries and can't do physical labor. I think I'll just drift for another year or two until AI can train itself, and then I'll be laid off again. The job market will be even tougher then, because there will be so many other unemployed people. Maybe there will still be some gig opportunities, maybe just enough to cover rent, maybe not even that, who knows?"

"I can foresee myself eventually ending up on the streets, dying in a ditch. Every time I think about the future, I involuntarily fall into deep depression."

"I know AI is indeed very useful and has immense positive potential, but let's be realistic. Globally, we now have enough resources to care for everyone, but we don't. We know what to do about climate change, but we don't do it. The United States is the wealthiest country in the world, yet we don't even provide basic healthcare for our citizens. Universal Basic Income? In the U.S., that's almost impossible."

"AI will not change human nature, and human nature itself is full of fatal flaws: greed, arrogance, selfishness, and so on. The wealthy and powerful will extract everything they can from the system, while the rest will starve. The future is bleak. I've been trying to figure out a way to build a boat that can ride these huge waves, but I can't think of anything. The market only needs a few plumbers."

Below is part of the interview.

Host: Thank you very much, Geoffrey Hinton. They call you the "Godfather of AI"?

Hinton: Yes.

Host: Why do they call you that?

Hinton: For a long time, not many people believed we could make artificial neural networks work. Since the 1950s there have been two main approaches to AI. One view is that the core of human intelligence is reasoning, and to reason you need some form of logic.

Therefore, AI must be based on logic. In your brain, there must be something similar to symbolic expressions that you operate with rules, and that's how intelligence works. And abilities like learning or analogical reasoning only emerge after we figure out how basic reasoning works. The other approach is: let's build AI models based on the brain, because obviously the brain gives us intelligence. So, simulating a network of brain cells on a computer and trying to figure out how to learn the strengths of connections between brain cells so that it can learn to do complex things, like recognizing objects in images, recognizing speech, and even reasoning.

I persisted with this approach for 50 years because very few people believed in it. At that time, not many good universities had teams working on this. So, if you were in this field, the best young students who believed in this approach would come and work with you. I was very lucky to attract many excellent students, some of whom later played a crucial role in founding platforms like OpenAI.

Host: Why did you believe that modeling based on the brain was a more effective method?

Hinton: I wasn't the only one who thought so in the early days. Von Neumann believed it, and Turing believed it. If either of them were still alive, I think the history of AI would be very different, but they both died young.

Host: Do you think AI would have emerged earlier?

Hinton: I think if either of them were still alive, the neural network approach would have been accepted sooner.

Host: At this stage of your life, what is your mission?

Hinton: My main mission now is to warn people about how dangerous AI can be.

Host: When you became the "Godfather of AI," did you know that?

Hinton: Not entirely. I was slow to understand some of the risks. Some risks were always obvious, like people using AI to build autonomous lethal weapons, things that roam around and decide for themselves whom to kill. Other risks, like AI one day becoming smarter than us and possibly making us irrelevant, I realized much later. Others recognized this 20 years ago; I only realized a few years ago that it was a real risk that could arrive quite soon.

Host: Considering your knowledge of computers' learning capabilities, that they can learn like humans and are constantly improving, how did you not foresee this?

Hinton: Neural networks 20 or 30 years ago were very primitive; they were far inferior to humans at vision, language, and speech recognition. At the time, it seemed absurd to worry about them becoming smarter than humans.

Host: When did that change?

Hinton: For the general public, it was when ChatGPT appeared. For me, it was when I realized that the digital intelligence we are creating has certain qualities that make it vastly superior to the biological intelligence we possess. If I want to share information with you, say I've learned something and want to tell you, I'll say some sentences.

This is a fairly simplified model, but it is roughly right. As you listen, your brain is trying to figure out how to change the strengths of the connections between its neurons, in effect predicting which word will come next. When a very unexpected word appears, you learn a lot; when a very obvious word appears, you learn little. For example, if I say "fish and chips," you learn almost nothing when I say "chips"; but if I say "fish and cucumber," you learn much more, and you wonder why I said "cucumber." That is roughly what happens in your brain, and we believe the brain works this way.

No one is truly sure how the brain works, and no one knows how it obtains information about whether to strengthen or weaken connection strengths, which is key. But we now know from AI that if we can get information about how to adjust connection strengths to better perform tasks, we can learn incredible things, because that's what we're doing now with artificial neural networks. We just don't know how the real brain gets these increase/decrease signals.
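To make the "surprise" idea concrete, here is a minimal sketch of how a language model's learning signal scales with how unexpected a word is. The words and probabilities below are invented for illustration; the point is simply that surprisal, the negative log probability of a word, is tiny for a predictable continuation like "chips" and large for an unexpected one like "cucumber."

```python
# Minimal illustration (invented numbers) of learning-from-surprise:
# the less probable the next word, the larger the learning signal.
import math

# Hypothetical next-word probabilities after the prefix "fish and ..."
next_word_probs = {"chips": 0.70, "rice": 0.15, "cucumber": 0.001}

def surprisal(word: str) -> float:
    """Return -log2 p(word): the information gained when `word` appears."""
    return -math.log2(next_word_probs[word])

for word in ("chips", "cucumber"):
    print(f"{word!r}: surprisal = {surprisal(word):.2f} bits")
# 'chips' carries about 0.5 bits; 'cucumber' carries about 10 bits,
# so the network's weights would be nudged far more by the unexpected word.
```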

Host: Nowadays, what are your main concerns about AI safety? If you were to list the top ones we should focus on.

Hinton: First, I want to distinguish between two completely different kinds of risk. One is the risk of humans misusing AI, which accounts for most of the risks and all of the short-term risks. The other is the risk of AI becoming superintelligent and deciding it no longer needs us. I mainly talk about the second kind because many people ask: is that a real risk? Yes, it is.

Now, we don't know how great this risk is; we've never been in this situation, never had to deal with something smarter than us. So, regarding this existential threat, we have no idea how to address it, nor what it will look like. Anyone who tells you they know what will happen and how to deal with it is talking nonsense. We don't know how to estimate the probability of it replacing us. Some say this probability is less than 1%, while my friend Yann LeCun believes, no, we've been building these things, we'll always control them, we'll make them obedient.

And people like Eliezer Yudkowsky say, no, these things will definitely wipe us out; as soon as someone builds one, it will wipe us all out. He's very confident about that. I think both positions are extreme, and it's hard to estimate the probability in between.

Host: If you had to bet on which of your two friends is correct?

Hinton: I simply don't know. If I had to bet, I'd say the probability is somewhere in the middle, but I don't know how to estimate it. I often say there's a 10% to 20% chance they will wipe us out, but that's just intuition, because we're still building them and we're very creative. The hope is that if enough smart people do research with enough resources, we can find a way to build them so they never want to harm us.

Host: Sometimes I think about a second path, the invention of the atomic bomb, and how the two compare. How is this different? Because when the atomic bomb appeared, I think many people at the time thought our days were numbered.

Hinton: The atomic bomb was really only good for one thing, and how it worked was very obvious. Even if you hadn't seen the photos of Hiroshima and Nagasaki, it was clearly a very dangerous big bomb. AI is different; it has many uses. In healthcare, in education, in almost any industry that runs on data, things can be done better with AI. So we won't stop developing it. People ask: why don't we just stop now? We won't, because it's too useful in too many areas. And we won't stop because it's useful for combat robots, and no country that sells weapons will want to stop.

Take the EU, for example: it has some AI regulations, which are better than nothing, but they are not designed to address most of the threats. In particular, there is a clause stating that the regulations do not apply to military uses of AI. So governments are willing to regulate companies and individuals, but not themselves, which in my view is crazy. I've gone back and forth on this, but if Europe has regulations and the rest of the world doesn't, it puts Europe at a competitive disadvantage.

Host: We've already seen this happen. I don't think people realize that when OpenAI releases new models or software in the US, they can't be released in Europe because of the regulations here. Sam Altman tweeted words to the effect of: our new AI agent product is available to everyone, but because of regulations it can't yet launch in Europe. Will this put us at a productivity disadvantage?

Hinton: A productivity disadvantage. What we need now, at this historical moment when we are about to create something smarter than ourselves, is truly a world government run by intelligent, thoughtful people, but we don't have one. So now it's a free-for-all.

Host: The capitalism we practice now has served us well, producing a lot of goods and services for us. But these large companies are legally required to maximize profits as much as possible, which is not what's needed for AI development. So let's talk about the risks. You mentioned human risks, right?

Hinton: I've already distinguished between these two types of risks. Let's first talk about all the risks caused by bad actors and malicious individuals using AI. First is cyberattacks. Between 2023 and 2024, cyberattacks increased by approximately 12,200%, likely because large language models have made phishing attacks much easier. For those who don't know, a phishing attack is when they send you a message saying: "Hi, I'm your friend John, I'm stranded in El Salvador, can you wire some money?"

That's one kind of attack, but phishing attacks are really about getting your login credentials. And now, with AI, they can clone a voice and an image; they can do all of that.

Host: It's a real headache for me right now, because there are many AI scams using me on X and on Meta. On Meta platforms especially, Instagram and Facebook, there is currently a paid advertisement in which they took my voice and mannerisms from my podcast and created a new video of me encouraging people to join a crypto Ponzi scheme or something similar. We spent weeks emailing Meta to take the ad down; they took it down, another popped up, they took that one down, another popped up, like a game of whack-a-mole. The most heartbreaking part is that someone fell for the scam, lost £500 or $500, thought I had recommended it, and is angry at me. I genuinely feel sorry for them; it's infuriating.

Hinton: I've run into smaller versions of the same thing; people are now publishing papers with me listed as one of the authors, apparently to get more citations for themselves.

Host: So cyberattacks are a very real threat and have surged. Clearly, AI is very patient; it can comb through a hundred million lines of code looking for known attack methods, which is easy for it to do. But it will also become more creative, and some very knowledgeable people believe that by 2030 AI might invent new kinds of cyberattack that humans have never conceived of. That is very concerning, because these systems can think for themselves and draw new conclusions from far more data than any human has ever seen. Have you taken any measures to protect yourself from cyberattacks?

Hinton: Yes, this is one of the few areas where I've completely changed my habits, because I'm afraid of cyberattacks. Canadian banks are very safe; not a single Canadian bank came close to collapse in 2008, because they are very well regulated. Even so, I think a cyberattack could paralyze a bank. If all my savings are in shares held by the bank, those shares are still mine even if the bank is attacked, so I should be fine, unless the attacker sells them, because the bank has the ability to sell your shares. If the attacker sells your shares, I think you're finished. I don't know, maybe the bank would try to compensate you, but by then the bank would be bankrupt, right?

So I worry about a Canadian bank being paralyzed by a cyberattack and the attacker selling the shares it holds. That's why I've spread my money, and my children's money, across three banks. I figure that if one cyberattack takes down one Canadian bank, the others will very quickly become extremely cautious.

Host: Do you have a phone that isn't connected to the internet? Have you considered cold storage or anything like that?

Hinton: I have a small disk drive that I use to back up my laptop, so everything on my laptop is also on the hard drive. At least if the entire internet goes down, I know my information is still on my laptop.

Host: Then there's corrupting elections.

Hinton: If you want to use AI to manipulate an election, a very effective method is targeted political advertising, because you know a lot about each individual. So anyone who wants to use AI to manipulate an election will try to get hold of all the voter data they can. Seen in that light, what Musk has been doing in the US is somewhat concerning; he has insisted on access to all this information that used to be carefully siloed. He claims it's for efficiency, but it is exactly what you would want to do if you wanted to manipulate the next election.

Once you have all that data about people, how much they earn, where they live, everything about them, it becomes easy to manipulate them, because you can have AI send them messages they will find very persuasive, for example telling them not to bother voting. I have no evidence for this; it's just common-sense reasoning. But I wouldn't be surprised if part of his motivation for getting all this data from US government sources was to manipulate elections. Another part might be that it's good data for training large models, but he still has to get it from the government and feed it into his models. They've already shut down many safeguards and eliminated some of the bodies that were meant to prevent this sort of thing.

Host: Next, organizations like YouTube and Facebook create echo chambers by showing people things that make them indignant: angry, but feeling justified about it. People like that feeling. For example, if you show me a video claiming Trump did some crazy thing, I'll click on it immediately.

Hinton: That's what results from the strategies of YouTube, Facebook, and other platforms that decide what content to recommend to you. If they adopted a strategy of recommending balanced content, they wouldn't get as many clicks and couldn't sell as many ads. So essentially, the profit motive drives them to show users things that will make them click, and what makes them click is increasingly extreme content that confirms their existing biases. Your biases are constantly confirmed, growing deeper and deeper, which means your divergence from others widens.

In the United States now, there are two groups that barely communicate. I'm not sure if people realize that every time they open an app, this is happening. But if you use TikTok, YouTube, or other large social networks, as you said, the algorithms are designed to recommend more of what you were interested in last time. So if this continues for 10 years, it will push you towards increasingly extreme ideologies or beliefs, away from rationality and common sense, which is incredible.

Host: I often hear about the risk that these things might combine, for example a cyberattack that releases a weapon.

Hinton: Combining these risks creates countless further risks. For instance, a superintelligent AI that decided to eliminate humanity would most obviously do it by creating a dangerous virus. If you make something highly contagious, highly lethal, and slow to show symptoms, everyone would be infected before anyone realized what was happening. I think if a superintelligence wanted to get rid of us, it would probably choose something biological like that, something that wouldn't affect it.

Host: Don't you think it might instead quickly get us to fight one another? For example, it could send false alerts to the US nuclear systems.

Hinton: My basic point is that a superintelligence would have so many ways to wipe us out that there is little point speculating about specific methods. What we must do is prevent it from ever wanting to; that is the direction we should be researching. If it wants to wipe us out, we simply cannot stop it, because it is smarter than us. We are not used to thinking about things smarter than ourselves. If you want to know what life is like when you are no longer the top intelligence, ask a chicken. When I left home this morning, I thought of my dog Pablo, a French bulldog. He has no idea where I'm going or what I'm doing, and there is no way I could explain it to him. The intelligence gap will be like that.

Host: How do you view your life's work?

Hinton: AI will be excellent in areas like healthcare and education, and it will make call centers far more efficient, but people will worry about what happens to those currently doing that work. That makes me sad. I don't feel particularly guilty about developing AI 40 years ago, because back then we had no idea things would progress this quickly; we thought we had plenty of time to worry about these issues.

When you couldn't get AI to do much, you just wanted it to do a little more. You weren't worried that this silly little thing would take over humanity; you just wanted it to do a few more things that humans could do. I didn't do this knowing it might wipe us out; it's just a pity that it doesn't only bring benefits.

So now I feel a responsibility to talk about these risks. If we could look 30 or 50 years into the future and see that AI had indeed led to human extinction, I would use that to tell people, and to tell their governments, that we must genuinely strive to control all of this. I think we need people to tell governments that they must force companies to spend resources on safety research, which companies are doing very little of now, because there's no money in it.

Host: A student you mentioned earlier, Ilya Sutskever, left OpenAI. Why did he leave?

Hinton: Yes, there has been much discussion about why he left, with safety concerns cited. I believe he did leave over safety concerns. I still have lunch with him from time to time; his parents live in Toronto, and when he visits, we eat together. He doesn't talk to me about what happened at OpenAI, so I have no inside information. But I know him well, and he genuinely cares about safety, so I think that's why he left. He was one of the key people, probably the most important person, behind the early development that led to ChatGPT; he played a major role in systems like GPT-2.

Host: You know his character, and you know he has a good moral compass, unlike Musk, who doesn't. Does Sam Altman have a good moral compass?

Hinton: I don't know Sam Altman, and I don't want to comment.

Host: If you look at Sam's statements from a few years ago, he casually said in an interview that this thing might kill us all; not verbatim, but close enough. Now he says you don't have to worry too much. I suspect that isn't driven by a pursuit of truth so much as by a pursuit of money.

Hinton: I wouldn't say it's just money; it's probably some combination of the two.

Host: I have a billionaire friend who is in that circle; he knows many of the people building the world's largest AI companies. One day I went to his house for lunch, and at his kitchen table in London he gave me a warning, making me aware of what these people talk about privately: not the things they say about safety in media interviews, but what they personally believe will happen. Their private views are different from what they say in public.

There's someone, I shouldn't say names, who leads one of the largest AI companies in the world. My friend knows him well, and privately this person believes we are heading towards a dystopian world in which we have a lot of free time and no longer work, and he doesn't care at all what harm that might cause.

I later watched this person's interviews online, trying to work out which of the three he was; yes, he was one of the three. I watched his interviews, recalled what my billionaire friend had said about him, and thought, "Damn, this guy is lying in public." He wasn't telling the world what he truly believes, and that bothered me. It's partly why I've talked about AI so much on this podcast, because I feel some of these people have a slightly sadistic attraction to power; they love the idea that they will change the world, that they will fundamentally alter it.

I think Musk is clearly like that; he's a complex person, and I truly don't know how to evaluate him. He's done some very good things, like pushing electric vehicles, which is a great thing; some of his statements about autonomous driving are exaggerated, but what he's doing is useful.

Host: As far as I know, the company once stated it would dedicate a significant portion of its computing resources to safety research, but later reduced those resources. I believe this is one of the things that happened, and it has been publicly reported.

Hinton: Yes. We've talked about the risks of autonomous weapons, and next is the issue of unemployment. In the past, the emergence of new technologies didn't lead to unemployment; instead, it created new jobs. A classic example is the ATM; when ATMs appeared, many bank tellers didn't lose their jobs, they just did more interesting things. But I think this time is more like the appearance of machines during the Industrial Revolution; now you can't earn a living digging ditches because machines dig much better than you.

For ordinary cognitive labor, AI will replace everyone. It may take the form of far fewer people, each working with an AI assistant, so that one person with an AI assistant can do what 10 people used to do.

People say AI will create new jobs, so we'll be fine, and it's been the same with other technologies, but this time the technology is different. If AI can do all ordinary human cognitive labor, what new jobs will it create? You would have to be very skilled to have a job AI can't do. I think they are wrong. You can try to extrapolate from other technologies, like computers or ATMs, but I think this time is different.

People say: AI won't take your job; a person using AI will take your job. I think that's true, but for many jobs it means far fewer people are needed. My niece answers complaint letters for a healthcare service. It used to take her 25 minutes to read a complaint, think about how to respond, and write the letter. Now she just scans the complaint into a chatbot, the bot drafts the letter, and she reviews it and occasionally asks the bot to revise it. The whole process takes about 5 minutes, so she can answer five times as many letters, which means the service needs one-fifth as many people doing that job.

Host: In other jobs, like healthcare, there's more elasticity. If doctors can be five times more efficient, we can get five times more healthcare for the same price, which is great. People's demand for healthcare is almost infinite; if there were no cost, they would always want more. In some jobs, with AI assistants, human efficiency will greatly increase without leading to job reductions, because there will be more work to do. But I think most jobs are not like that. Am I correct in my thinking?

Hinton: Yes, exactly. And this AI revolution is replacing intelligence, the brain. So ordinary cognitive labor, like having strong muscles, is now worthless. Muscles have been replaced, and now intelligence is being replaced.

Host: So what's left? Maybe some creativity for a while, but the whole point of superintelligence is that nothing will be left; these things will be better than us in every way. So in such a world, what will we ultimately do?

Hinton: If they work for us, we could get a lot of goods and services with very little effort. That sounds appealing, but I don't know. There's a cautionary tale where human life becomes increasingly comfortable, but the outcome is bad. We need to figure out how to make the outcome good. The good scenario is, imagine a company CEO who's foolish, perhaps the son of a former CEO, and he has a very smart executive assistant.

He says: "I think we should do this." The executive assistant makes everything happen, and the CEO feels great. He doesn't understand that he's not really in control, but in a sense, he is in control; he suggests what the company should do, and she just gets things done, and everything is fine. The bad scenario is when she thinks: "Why do we need him?"

Host: What's the difference between current AI and superintelligence? Because when I use ChatGPT or Gemini, it already feels very smart.

Hinton: Indeed, AI is already stronger than us in many specific domains, such as chess. AI's level is far beyond human capabilities; humans can no longer win, except for occasional flukes, but generally, there's no comparison. The same applies to Go. In terms of knowledge volume, GPT-4 knows thousands of times more than you do. There are only a few areas where you are stronger than it; in almost all other areas, it knows more.

Interviewing CEOs might be one of them: you are very experienced and an excellent interviewer, and if GPT-4 interviewed a CEO, the result would probably be worse.

Host: I'll have to think about whether I agree with that. It might not be long before I'm replaced too.

Hinton: Right. Perhaps your questioning style and manner could be used to train an AI. If you took a general foundation model and trained it not just on you but on every similar interviewer you could find, and especially on you, it could become very good at your job, though it probably wouldn't surpass you in the short term.

Host: So currently there are still a few areas where humans have an advantage, but superintelligence will surpass us in all domains, being much smarter than you in almost everything. You say this might happen in about ten years?

Hinton: Possibly, or even faster. Some people think it will be sooner, but it could also take longer, maybe fifty years. However, it's also possible that training AI with human data might limit it from surpassing humans too much. I guess superintelligence will emerge within 10 to 20 years.

Host: Regarding the unemployment issue, I've been thinking about it, especially after trying to use AI agents. This morning our podcast just released an episode where we debated AI agents with the CEO of a large agent company and others, and I suddenly had an epiphany, seeing what the future might look like.

During the interview, I asked an AI agent to order us drinks, and five minutes later, someone delivered the drinks. I did nothing; I just told it to deliver the drinks to the studio. It didn't even know where we usually ordered from, but it looked it up online, used UberEats, and might have accessed my data. We projected the AI's operations onto the screen in real-time, so everyone could see it browsing the web, selecting drinks, tipping the rider, filling in the address, entering credit card information, and then the drinks arrived. Another time, I used a tool called Replit; I just told the agent what I wanted, and it helped me develop software.

Hinton: That's amazing, and terrifying at the same time. If AI can build software like that, and it has been trained on code and can even modify its own code, that's scarier still. It can modify itself, whereas we cannot change our innate abilities; it has no such built-in limits.

Host: In a world of superintelligence, what would you tell people about career prospects? How should we think?

Hinton: In the short term, AI is still inferior to humans in physical operations, so being a plumber is a good option until humanoid robots become widespread.

Host: You predict mass unemployment, and Sam Altman of OpenAI has predicted it too, as have many CEOs. I saw an interview in which Musk was asked about this; he was silent for about 12 seconds and then said he lives in a state of "suspended disbelief," meaning he simply tries not to think about it. When you're advising your children on career paths, with the world changing so fast, what do you tell them is worth doing?

Hinton: That's a difficult question to answer. I would say, follow your heart, do what you find interesting or meaningful. To be honest, if you think too much, you'll feel frustrated and lose motivation. I put a lot of effort into starting a company, and then I'd think: "Should I be doing this?" because AI can do all these things. To some extent, I have to deliberately "suspend disbelief" to stay motivated. So I guess I would say, focus on things you find interesting, meaningful, and that contribute to society.

Host: Are there any industries with particularly high unemployment risks due to AI replacement? People often mention creative industries and knowledge-based jobs, such as lawyers and accountants.

Hinton: That's why I said plumbers have a lower risk. Jobs like legal assistants and paralegals will soon no longer be needed.

Host: Will this exacerbate wealth inequality?

Hinton: Yes. In a society with fair distribution, a large increase in productivity should benefit everyone. But if AI replaces many people, those who are replaced become poorer, while the companies that provide and use AI become richer, and that widens the wealth gap. We know that the wider the wealth gap, the worse a society becomes: it leads to divided communities and mass incarceration. The International Monetary Fund has expressed deep concern that generative AI could cause large-scale labor disruption and increased inequality, and has called for policies to prevent this.

Host: I saw something on Business Insider; have they proposed specific policy recommendations?

Hinton: No, that's the problem. If AI can make everything more efficient, replacing most jobs, or allowing people to do the work of many with AI, what then? Universal Basic Income is a start; it can keep people from starving, but for many, dignity is tied to work—who you perceive yourself to be is related to what job you do. If we say we'll give you the same money to just sit around, that affects your dignity.

Host: You previously said that AI would surpass human intelligence, and many people think that since AI is in computers, they can just turn it off if they don't want to use it.

Hinton: Let me tell you why AI is superior: it's digital, so it can simulate a neural network on one piece of hardware, and also simulate the exact same neural network on another piece of hardware, creating clones of the same intelligence. You can have one clone browse part of the internet, and another browse a different part, while staying synchronized, keeping their connection strengths consistent. For example, if one clone learns online that it should strengthen a certain connection, it can transmit that information to another, learning based on each other's experiences.

Host: When you say connection strength, you mean the learning process?

Hinton: Yes. Learning is adjusting connection strengths, for example, changing a weight from 2.4 to 2.5—that's a bit of learning. Two identical neural network clones have different experiences, viewing different data, but share learning outcomes by averaging weights. They can average trillions of weights in an instant, whereas humans can only convey information through sentences, one sentence perhaps only 100 bits, and transmitting 10 bits per second is good.

AI transmits information billions of times more efficiently than humans because they are digital, and different hardware can run perfectly identical connection strengths. Humans, on the other hand, are analog; your brain and mine are different. Even if I knew the connection strengths of all your neurons, it would be useless to me because my neurons work and connect slightly differently.
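As a rough illustration of the weight-sharing Hinton describes, the following sketch shows two digital "clones" that start from identical weights, learn different updates from different data, and then synchronize by averaging their weights. The array sizes and updates are invented; real systems synchronize billions of weights across many machines in the same spirit.

```python
# Illustrative only: two copies of the same network learn separately,
# then share what they learned by averaging their connection strengths.
import numpy as np

rng = np.random.default_rng(0)

# Both clones start from identical connection strengths (weights).
shared_init = rng.normal(size=(4, 4))
clone_a = shared_init.copy()
clone_b = shared_init.copy()

# Each clone sees different data and accumulates a different weight update.
update_a = 0.01 * rng.normal(size=(4, 4))   # learned from one part of the web
update_b = 0.01 * rng.normal(size=(4, 4))   # learned from another part
clone_a += update_a
clone_b += update_b

# Synchronization: average the weights so each clone inherits what the other
# learned. With trillions of weights, this shares far more than any sentence could.
merged = (clone_a + clone_b) / 2
clone_a[:] = merged
clone_b[:] = merged

print("clones identical after averaging:", np.allclose(clone_a, clone_b))
```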

Host: What made you join Google? You worked there for about ten years?

Hinton: Yes. I have a son with a learning disability, and to make sure he would never end up on the streets I needed a few million dollars, which I couldn't earn as an academic. I tried teaching Coursera courses in the hope of making serious money, but there wasn't much in it. So I decided the only way to earn a few million was to sell myself to a large company.

Fortunately, when I was 65, I had two brilliant students, Ilya Sutskever and Alex Krizhevsky, who developed AlexNet, a neural network very good at recognizing objects in images; it won the ImageNet competition by a wide margin, far ahead of its rivals. The three of us started a small company called DNNResearch, which we put up for auction, and several large companies bid. In the end, Google acquired it along with our technology.

Host: How old were you when you joined Google?

Hinton: 65, worked for a full ten years, left at 75.

Host: What did you mainly do at Google?

Hinton: They were very good to me; they said I could do whatever I wanted. I researched knowledge distillation, which worked very well and is now commonly used in the AI field. Distillation is transferring knowledge from a large model to a smaller one. Finally, I became very interested in analog computing and wanted to see if large language models could run on analog hardware to reduce energy consumption. It was while doing this research that I truly began to realize how superior digital technology is for information sharing.
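Hinton only names the idea here, but knowledge distillation is commonly implemented by training a small "student" network to match the softened output distribution of a large "teacher." The sketch below uses hypothetical toy models and random data rather than any real system, and shows a single distillation step in PyTorch:

```python
# A minimal distillation step (toy models, synthetic data; illustrative only).
import torch
import torch.nn as nn
import torch.nn.functional as F

teacher = nn.Sequential(nn.Linear(20, 256), nn.ReLU(), nn.Linear(256, 10))
student = nn.Sequential(nn.Linear(20, 32), nn.ReLU(), nn.Linear(32, 10))
optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)
T = 2.0  # temperature: softens the teacher's probabilities

x = torch.randn(64, 20)             # a batch of synthetic inputs
with torch.no_grad():
    teacher_logits = teacher(x)     # the large model's predictions

student_logits = student(x)
# KL divergence between softened teacher and student distributions.
loss = F.kl_div(
    F.log_softmax(student_logits / T, dim=-1),
    F.softmax(teacher_logits / T, dim=-1),
    reduction="batchmean",
) * (T * T)
loss.backward()
optimizer.step()
print(f"distillation loss: {loss.item():.3f}")
```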

Host: Was there a eureka moment?

Hinton: It happened over several months. The arrival of ChatGPT had a big impact on me, although Google had similar technology a year earlier and I had seen it. What came closest to a eureka moment was Google's PaLM system being able to explain why a joke was funny; I had always regarded that as a milestone.

If it can explain why a joke is funny, it means it truly understands. Combined with the realization that digital information sharing is far superior to analog, I suddenly started focusing on AI safety, realizing that these things would become much smarter than humans.

Host: Why did you leave Google?

Hinton: Mainly because I wanted to retire at 75, although retirement itself has been miserable. The specific timing of my departure was chosen so that I could speak freely at an MIT conference, but the underlying reason was that I was getting old, programming was getting harder, and I was making more and more mistakes, which was annoying.

Host: What did you want to say freely at the MIT conference?

Hinton: AI safety. I could have spoken about it at Google too; Google encouraged me to stay and research AI safety, saying I could do whatever I wanted. But working at a large company, I felt it wasn't appropriate to say things that might harm the company, even if I could. I didn't leave because I was dissatisfied with Google; I think Google acts responsibly. They had large chatbots but didn't release them, possibly due to concerns about their reputation; they have a good reputation and don't want to harm it. OpenAI, on the other hand, had no reputation, so they dared to take risks.

Host: If you were to give my audience a concluding remark about AI and AI safety, what would you say?

Hinton: We still have a chance to figure out how to develop AI that won't want to replace us. Because that opportunity exists, we should invest enormous resources into trying to achieve it—if we don't, AI will replace us.

Host: Do you have hope?

Hinton: I just don't know; I'm agnostic about it.

Host: But when you lie in bed at night thinking about the probabilities of the various outcomes, you must have some bias inside you. I think everyone listening has an unspoken prediction about how things will unfold.

Hinton: I really don't know, I truly don't know; I think it's genuinely uncertain. When I'm feeling a bit depressed, I think humanity is doomed and AI will take over; when I'm feeling optimistic, I think we'll find a way.

Host: On this podcast, we have an ending tradition: the previous guest leaves a question in their diary for you. The question for you is: Given everything you see ahead of us, what do you believe is the greatest threat to human well-being?

Hinton: I believe unemployment is a fairly urgent short-term threat to human well-being. I think if many people lose their jobs, even if they have Universal Basic Income, they won't be happy because they need purpose, they need to strive, they need to feel like they are contributing, that they are useful.

Host: Do you think mass unemployment is more likely to happen than not?

Hinton: Yes, I do. I think the likelihood of this is definitely greater than not. If I worked in a call center, I would be scared.

Host: What's the timeframe for mass unemployment?

Hinton: I think it's already starting to happen. I recently read an article in The Atlantic that said it's already difficult for college graduates to find jobs, partly because the jobs they might have gotten have already been taken by AI. I spoke with the CEO of a well-known large company whose products many people use.

He told me privately that they once had over 7,000 employees, that the number fell to 5,000 last year, and that they are now at about 3,600. By the end of the summer, because of AI agents, they expect to be down to around 3,000.

So yes, layoffs are already happening. His staff has roughly halved because AI agents can now handle 80% of customer-service inquiries and other tasks; this is already underway.
