Yuval Noah Harari, Author of 'Sapiens': The Biggest Danger in the World Today is Humans Not Trusting Each Other, Yet Trusting AI That Evolves Millions of Times Faster Than Carbon-Based Life; We Reject Truth Because It is Expensive, Complex, and Painful

This week, the official channel of Yuval Noah Harari, author of 'Sapiens', released a lecture given at Keio University in Tokyo in the spring. The host was Kohei Itoh, President of Keio University. Checking the timing, I think the event was organized to promote the Japanese edition of his new book, 'Nexus'.


Yuval Noah Harari needs no further introduction. He is an Israeli historian who has written a series of bestsellers including 'Sapiens', 'Homo Deus', and '21 Lessons for the 21st Century'. In '21 Lessons for the 21st Century', published six years ago, he identified three major global threats: nuclear war, the uncontrolled development of biotechnology (including gene editing applied to humans), and runaway information technology (IT) networks and artificial intelligence. Talking about the threat of AI back then seemed a bit alarmist, but looking back from the era of large models, it feels somewhat prescient.

In the lecture, he argued that among the three threats identified back then, the nuclear and biological threats now seem less risky, but the AI risk has become a much higher priority. Whether he is right or not, Harari is a man of principle: according to public information, as of May 2021 he had never used a smartphone. As we have noted before, far more people talk about AI's opportunities than about its risks. So far, the only content we have published on this topic is from Geoffrey Hinton and Yoshua Bengio.

I. Why AI Has Become the Highest Priority Issue


Kohei Itoh began by asking why Harari focused primarily on information networks and AI in his new book, 'Nexus', and why he raised its priority relative to the equally disruptive field of biotechnology and the persistent threat of nuclear war.

Harari's answer was very clear. He first compared AI and biotechnology. While both can bring dramatic changes to the world, the speed of AI development far exceeds that of biotechnology. The revolutionary cycle of biotechnology is long because "biology itself is much slower". He gave an example: modifying the human genome, such as for an infant, and wanting to observe its effects and make future changes accordingly, might require waiting twenty or thirty, even forty years, to evaluate the specific effects of the new genetic material on human behavior, intelligence, and psychology. Therefore, in the field of biotechnology, especially when it involves humans, the "generational" cycle might be 20 to 30 years. In contrast, the iteration speed of AI is astonishingly fast, "perhaps just a few days". He pointed out: "Digital evolution is millions of times faster than organic evolution." This determines that, although the potential dangers and opportunities of biotechnology are not to be underestimated, AI appears more urgent due to its rapid development trajectory.  

Next, Harari compared AI with the nuclear threat. His reasons for focusing more on AI are twofold. First, nuclear war offers nothing positive, so it has no advocates. Since 1945, the biggest taboo in the international system has been that strong nations cannot simply invade, conquer, and destroy weak nations just because they are strong. That taboo has now been broken, but at least everyone understands the danger: "there are no winners in a nuclear war".

Second, AI is a more complex challenge because it possesses enormous positive potential, which makes it difficult for people to fully grasp its threats. Harari noted that since the atomic bombings of Hiroshima and Nagasaki, the danger of nuclear technology has been plainly visible to humanity, requiring no further explanation. But the danger of AI is difficult to grasp because it is an "alien threat".

He emphasized that the core problem with AI is not that it is evil - "it is not evil" - but its "alienness". It is an intelligence that may surpass human intelligence in many ways, but it is neither human intelligence nor organic intelligence. Many people ask when AI will reach human-level intelligence, and Harari's answer is "never", because it is completely different from us. He cleverly used the analogy: "It's like asking when an airplane will reach the flight level of a bird. The answer is never. Airplanes and birds are completely different; they fly in different ways." Therefore, AI will surpass human intelligence, but its essence will be alien, which makes predicting its development trajectory and consequences extremely difficult.  

Furthermore, Harari put forward a core point about AI that most needs to be understood by the public: "It is not a tool, it is an agent." This is the first technology created by humans in history that acts as an agent rather than just a tool. Whether it's atomic bombs or peaceful nuclear reactors used for power generation, they are tools in human hands, and humans decide how to use them - whether for generating electricity or destroying cities, the decision-making power lies with humans. The reactors themselves cannot make any decisions, let alone invent the next generation of reactors or bombs.

Until now, only humans have been able to make decisions and create new ideas. AI, on the other hand, is the first technology that can do these things without human help or intervention. AI can make decisions on its own. AI weapons do not need human commands to decide their targets and can even invent new weapons and military strategies. It has both enormous positive potential, such as inventing new medicines and treating cancer, and enormous negative potential. Precisely because of this, Harari chose to focus on the history of information technology in his new book, because he believes that at this moment, this is the biggest challenge we face, and in many ways, it is even the "biggest challenge humanity has ever faced". For thousands of years, humans have dominated the Earth, and nothing on the planet could compete with us in terms of intelligence and innovation. But now, something has emerged on Earth that may surpass us in intelligence within a few years, that can determine our lives, and that can invent everything from medicines to weapons. This is the biggest challenge.  

II. Cognitive Evolution Over Ten Years: From Long-Term Speculation to Imminent Reality

Itoh noted that Professor Harari had already issued warnings and focused on information technology and networks as early as 2015 in his book 'Homo Deus'. He curiously inquired about the evolution of Professor Harari's thinking compared to his thoughts when writing 'Homo Deus' a decade ago. What has been his personal intellectual trajectory over these ten years, shifting from a historian and medieval military expert to focusing on information networks and AI?  

Harari admitted that when he wrote 'Homo Deus' ten years ago, very few people were talking about AI. Of course, insiders in computer science, leading companies like Google and Microsoft had foreseen some trends, and there were a few philosophers like Nick Bostrom (whose book 'Superintelligence' was published in 2014) who were discussing it. But at that time, writing about AI felt more like discussing "something that might happen hundreds of years from now, some philosophical speculation with very little direct practical impact on our lives".  

However, today AI is everywhere, and its development speed is simply "shocking". Harari recalled that back then, figures like Ray Kurzweil predicted that by the late 2020s, around 2029, AGI (Artificial General Intelligence) would emerge, capable of surpassing humans in all areas, not just in narrow fields like playing chess or driving cars. Even Harari himself, when reading Kurzweil's predictions, thought, "Oh, he's exaggerating, it can't be 2029." But now Kurzweil still stands by the 2029 prediction, and he is even considered "one of the more conservative thinkers". Listen to those leading AI research at large companies in China and the US; they are talking about achieving AGI within one to five years. The pace of AI development has significantly accelerated.

Another significant change is that all hopes for regulating AI, for reaching some kind of agreement on this, now seem "extremely naive". Harari believes that, especially after the most recent US election, "the hope of reaching a global agreement on how to manage the risks of AI development has basically evaporated".  

This leads to a huge paradox at the core of the AI revolution - the paradox of trust. When Harari talks to the people leading the AI revolution, such as the heads of companies like OpenAI, Microsoft, Tencent, and Baidu, as well as major politicians, he always asks them two questions. The first question is: "Why are you developing so fast?" Harari says he understands the enormous positive potential of AI, but also sees the risks, so he suggests, "Let's slow down a bit and give human society some time. Humans are highly adaptable beings; we can adapt to the AI era, but we need time. Please give us a little more time."

And the answers from these leaders are almost identical: "We understand there are huge risks, even leading to human extinction. We understand this. We are willing to slow down, but we cannot. Because if we slow down, and our competitors in other companies, other countries don't slow down, they will win this race, and then the most ruthless people will dominate the world because they will have this wonderful technology, AI." The conclusion is: "Because we cannot trust other humans, we must develop faster."  

Then, Harari asks the second question: "Do you think you can trust the superintelligent AI you are developing?" Their answer is: "Yes." Harari finds this incredible: "The same people who told me a minute ago that they cannot trust other humans suddenly become very trusting and say, 'But we can trust AI.'" He says frankly that this is "on the verge of madness". With humans, he understands why it is difficult to trust each other, but after all, we have thousands of years of experience dealing with other humans and understanding their psychology and motivations. We know well the human desire for power, but we also understand the mechanisms that constrain its pursuit.

For thousands of years, humans have developed ways to learn mutual trust. From living in small tribes of dozens of people ten thousand or a hundred thousand years ago, unable to trust anyone outside the tribe, to today, nations like Japan have hundreds of millions of citizens who can trust each other. Global trade networks connect all 8 billion people on Earth, and the food we eat, the clothes we wear, the medicines that protect us are often produced by strangers on the other side of the world. So, although trusting others is still a huge problem, we have experience in this area.  

However, when it comes to superintelligent AI, we have "no experience". We don't know what kind of goals and tricks they might develop. We don't know what will happen when millions of superintelligent AIs interact with each other and with millions of humans. AI should not be seen as a large computer in one place, but rather as "a global wave of immigration". He believes: "AI is a wave of millions of alien immigrants who will take away people's jobs, who will have completely different ideas about how society should be managed, and who may take over countries. This should make people more afraid."  

Therefore, this is the great paradox. The core question is: "How can we build trust among humans before we develop superintelligent AI?"  

III. The Trust Gap Between Elites and the Public: Whose Elite, Serving Whom?

Speaking of trust, Itoh further elaborated that it is not just about trust between humans and AI agents; the trust gap between the so-called "elites" and "ordinary people" is also widening. He is concerned about the extent to which Professor Harari's books, which are perhaps read more by the "elite class", can influence the general public. After all, bridging the gap between these two divided groups is crucial.  


Harari responded that, like many divisions we invent in our minds, this division between "elites" and "the people" is a "false dichotomy". He gave the example of Elon Musk and other billionaires, among the world's richest people, who nonetheless claim to be against the elites. Harari retorts: "If you are not an elite, then what is an elite? Are the richest people in the world not elites? All these billionaires who are now in charge of the US government - oh, they are not elites. Why?" He believes the word "elite" has become a negative label that people simply use to attack opponents.

The truth is, "every group has its elites". Harari emphasizes that managing any group, even a football club, requires elites. There will always be some people in society who are more talented in certain areas, or who have more power and influence; this is the essence of human society. The question is not about the existence of elites, but rather: "Is this a serving elite or a self-serving elite?"  

He explained that in a group, whether as small as a circle of friends, or as large as an entire country or even the world, if those with the most influence use their power only for their own benefit, it will be a terrible situation. The ideal situation is that those with more talent, influence, or power in specific areas use their abilities and power not just for personal gain, but for the well-being of everyone. This is the goal that should be pursued.  

IV. The Educational Mission in the Age of Information Deluge: From Knowledge Transmission to Discernment of Truth

The conversation naturally extended to the role of education in addressing the challenges of the AI era. Itoh mentioned the division Harari discussed in his book between people living in cyberspace and physical space, and the phenomenon of more and more people immersing themselves in cyberspace, getting used to receiving fragmented information (like tweets under 280 characters).

He worriedly asked how higher education institutions like Keio, as well as primary and secondary education, should respond to this challenge to ensure that younger generations still have the ability to read long-form content like Harari's books, which are thought-provoking and stimulate discussion. In this era where information is increasingly accessible but deep thinking is challenged, what role does education play in maintaining human intellectual standards?  

Harari pointed out that in previous eras, the core task of educational institutions was to provide people with information because information was scarce. He described scenarios where people living in a remote small town might only have a library with a hundred books, making it very difficult to obtain even one book. Now, the situation is exactly the opposite; people are not lacking information but are overwhelmed by a flood of information. The most crucial point is to understand: "Information is not truth."  

"Truth is a very rare subset of information," Harari emphasized. If you look at all the information in the world, the proportion of truth is tiny. Most information is "garbage information, fiction, fantasy, lies, propaganda, etc." The reasons are not hard to understand. Firstly, "truth is expensive, while fiction is cheap." Writing a factual account requires research, spending time analyzing and verifying facts, which takes time, effort, and money. Fiction, on the other hand, is very cheap; you just write down whatever comes into your head.   

Secondly, "truth is often complex because reality is complex." For example, writing an essay on quantum physics is inherently complex. And people generally dislike complex narratives; they prefer simple stories. Fiction can be made as simple as needed.   

Finally, "truth is often painful." Not always, but some parts of reality are indeed uncomfortable. On a personal level, this is why people need psychotherapy - if everything about oneself was so pleasant and interesting, there would be no need for a therapist to help us acknowledge painful memories or internal painful patterns. This also applies to entire countries; countries often do not want to admit certain painful parts of their history or society. Therefore, truth is sometimes painful, while fiction can be shaped to be as beautiful and pleasing as desired.   

In this competition between "expensive, complex, and painful truth" and "cheap, simple, and enticing fiction", fiction often wins, leading to a world flooded with fictional content. Therefore, the task of institutions like universities and newspapers is no longer just to provide people with more information - they already have too much information - but to "provide them with a method to find those rare gems of truth in the ocean of information, and to know how to distinguish between the two."  

As a historian, the main thing Harari teaches his students in history class is not specific historical facts about the past, such as in which year a certain king defeated another king. Students can easily obtain this information through online searches. What they really need to learn in the history department is: "How to distinguish between reliable and unreliable historical sources." One source might be something I read on the internet today - how do I know if I can trust this TikTok video? A source might also come from the Middle Ages, such as a thousand-year-old document recording a king claiming to have won a battle - how do you decide whether to believe it? Because people a thousand years ago also lied, not just people today. This is the core issue.   

Of course, this question will be slightly different in each scientific discipline, but the key is not about getting more information, but about "how do we distinguish between reliable and unreliable information?" The same applies to physics or chemistry experiments: how do I know if the experiment was conducted correctly and the results are credible, or if there were problems and the results should not be accepted?  

V. The Urgency of Rebuilding Trust: Racing Against Time Before AI Matures

Itoh mentioned that reading Harari's new book (Nexus), while bringing complex emotional experiences, even some pain, also gave him hope - that things are not yet beyond repair. Humans have the ability to self-correct and cooperate, and if we have enough time before AI reaches a certain critical point, perhaps we can find a way to cope. It was based on this consideration that Keio University established the "Cross Dignity Center", aimed at bringing together scholars from humanities and social sciences to jointly address this challenge. He then posed an intriguing hypothetical question to Harari: "If you were a researcher at our Cross Dignity Center, what would be your research topic?" and warmly invited Harari to join at any time.  

Harari's answer was direct: "How to build trust among humans." He believes this is the "most urgent" topic, because the world is currently witnessing the erosion of human trust, and this is happening precisely when we need trust the most. He emphasized: "The biggest danger in the world today is still not AI, but the lack of trust among humans." If humans can trust each other and cooperate, we can still design ways to develop AI safely. It is the lack of trust among humans that makes the AI revolution so dangerous. Therefore, he would focus on this and believes that "it is not too late to build enough human mutual trust to control AI."  

So, how exactly should we go about addressing this challenge? Harari believes that it requires a multidisciplinary effort, which is why an interdisciplinary center is an ideal place for research, because no single discipline can solve this problem alone. We need perspectives from biology and psychology, because we are still animals and need to understand our animal heritage - for example, we can learn a lot about human behavior from how chimpanzees build or fail to build trust. Of course, economics and computer science also need to be involved, because all current social systems are built on new information technologies. Many methods for building human trust that were very successful in the past decade or two may now be outdated because humans now communicate through computers.  

There is much discussion about the reasons for the collapse of human trust over the past one or two decades. This is itself a paradox, because based on a lot of hard data, such as child mortality, disease, and famine, human society is in the best state it has ever been. We are in an unprecedented optimal situation, yet people are so angry, frustrated, and unable to communicate with each other. Harari believes the reason is that "a new technology has intervened between humans". Now, most human communication is mediated by a non-human agent (referring to information technology, social media algorithms, etc.), which is causing chaos in the world. To rebuild trust, we cannot go back in time and simply say, "Let's throw away all smartphones and computers"; that won't work. Therefore, we need to "relearn how to build trust among the general public under the mediation of information technology."  

VI. Bridging Academia and the Public: Clear and Understandable Expression and Building Social Influence

Itoh highly praised the readability of Harari's books, noting that his style is very different from typical academic writing, and asked if this was intentional.  

Harari stated clearly that it was "absolutely" intentional. He sees his work as "building a bridge" between the world of academic research and the public. Academia usually involves experts communicating with each other in language that ordinary people find difficult to understand. What he strives to do is to convey and explain the latest theories, models, and findings from the academic and scientific communities in a way that can be easily understood by even high school students or any ordinary person who wants to understand the AI revolution and the history of information. He envisions his readers as people who do not have a computer science background and do not want to understand various charts and equations, but hope to understand the AI revolution and information history in simple language. This is the goal of writing this book.  

As for the effectiveness of these efforts in cyberspace, among those who "live in cyberspace", Harari admitted that it is "very difficult to know". Writing a book and putting it out into the world, it is hard to know exactly what impact it will have on people. However, he mentioned that in addition to his personal efforts, the dissemination of his books also benefits from a social impact company. About ten years ago, at the same time as 'Homo Deus' was published, he and his husband co-founded this company. Harari is responsible for writing the books, and his husband is responsible for all business operations. They have a team of 20 people, one of whom was in Tokyo at the time. They work to transform the ideas in the books into more media formats through social media accounts and other channels to reach a wider audience.  

"Whether this is enough, we don't know," Harari said humbly. "We feel that everyone has a small agency in the world. We do our best and trust that others will also do their best." He believes that no one can carry the weight of the entire world alone; if enough people do their best in their respective small areas, "I think we will be fine."  

VII. The Collapse of International Order and the Shadow of the AI Race: Reflections on the Trump Era

Given that the Japanese edition of Harari's new book, 'Nexus', has just been released, Itoh also took the opportunity to ask Harari for his views on the development of the global situation since the change in US leadership.  

Harari was filled with worry. He pointed out that due to the accelerating erosion of trust within societies and between societies, we are seeing the international order established after World War II collapsing. The most important taboo and rule of this international order was that strong nations should not arbitrarily invade and conquer weak nations. This was in stark contrast to the previous thousands of years of human history - during which it was commonplace for strong nations to invade and conquer their neighbors to establish larger states or empires. In the early 21st century, humanity enjoyed the most peaceful and prosperous era in history, largely because this taboo was maintained.

This is most clearly reflected in government budgets. Throughout history, the vast majority of kings, shoguns, and emperors spent over 50% of their budgets on the military: soldiers, warships, castles, etc. were the main government expenditures. In the early 21st century, the global average level of government military spending dropped to about 6% or 7%, which was an astonishing achievement. About 10% of the budget was spent on healthcare, which was the first time in human history that governments worldwide invested more in healthcare than in the military.  

However, this trend is now reversing. Now, if the weak refuse to obey the strong, the blame for the resulting conflict is placed on the weak. This logic is evident everywhere, for example, the US once threatened to "conquer" Greenland.

Harari warned that this worldview and logic will drag all of humanity back into a state of perpetual war. In such a situation, not only will healthcare and education budgets decrease, but any opportunity to regulate AI will also cease to exist. Because every country will say, "We must win the AI race. We will not do anything that might slow us down, lest the other side wins, and then they become strong, and we must obey them." Despite the bleak outlook, Harari still conveys hope in his book: "As long as we haven't reached that point, there is still hope. If enough people do the right thing in the coming months and years, humanity will still be alright."  

VIII. Stories, Algorithms, and the Future of Trust: Safeguarding Human Values in the Digital Deluge


During the Q&A session, a Japanese student interested in the entertainment industry, particularly in animation and CG technology, asked Harari about how AI will change entertainment content creation. As AI generation technologies for images and videos mature, what specific changes will occur in the form and content of the future entertainment industry?

Harari frankly admitted that we cannot predict exactly what will happen, but it is very clear that "more and more forms and content of entertainment will come from AI, not from humans." Historically, all forms of entertainment, whether poetry, drama, television, or film, originated from human minds and imagination. Now, something has emerged on Earth that can produce such content at a much lower cost and much higher efficiency than humans, and can "break through the limitations of human imagination."  

Because human imagination is bound by the limitations of the human brain and our own biology, while AI is not. He used the example of Go. The famous 2016 match between AlphaGo and world champion Lee Sedol was one of the defining moments of the AI revolution. But what was amazing about that match was not just that AlphaGo defeated the human world champion, but the way it won - it adopted "completely alien strategies". Humans have been playing Go in East Asia for thousands of years; millions of people have played it, and Go is even considered an art form, with an entire philosophical system developed around it. Yet over such a long period, among so many players, no one ever thought of playing Go the way AlphaGo did.

Harari compared this to a planet called "Planet Go", on which there are many possible ways of playing the game. Humans have been trapped on one island of "Planet Go", thinking that this island was the entire planet. For two thousand years, they thought this was the only way to play Go, and the limitations of their thinking kept them from leaving this "mental island". Then AlphaGo appeared and demonstrated completely new ways of playing, helping humans discover entirely new regions of "Planet Go".

The same thing is likely to happen in fields like music, painting, drama, and television, and it has already begun. Many people are now trying to create videos using AI, and this is just the beginning of the process. If we think today's AI is complex, it is nothing compared to AI ten years from now. Therefore, like other fields, the entertainment industry will be completely transformed, and we cannot predict exactly how, because the essence of AI is that it is an "alien form of intelligence", and human intelligence cannot predict the behavior of alien intelligence.

If you can predict everything an AI is going to do, then it's not AI, it's just an automatic machine. Like a coffee machine, if you can predict that pressing a button will make an espresso, then it's an automatic machine, not AI. So, when is it considered AI? "When you walk up to the machine, and even without pressing any buttons, the machine tells you: 'Hey, I invented a new drink this morning, and I think you'll like it better than coffee, and I've already made one for you.'" That is AI.   

IX. Trust is the Foundation of Life and the Ultimate Solution in the AI Era

At the end of the event, Harari summarized the day's discussion and Q&A. He frankly admitted that many of the issues discussed sound worrying, partly because companies developing AI often only emphasize its positive potential, so philosophers, historians, and critics are needed to point out the potential dangers for balance. However, AI undoubtedly also has enormous positive potential; otherwise, no one would develop this technology. Its applications range from medicine to preventing catastrophic climate change, and the potential is real. The key issue is not how to stop the development of AI - which is neither realistic nor desirable - we want to develop it, but we want to do so safely.  

And the key to achieving this goal is, once again, building trust among humans. Harari encouraged everyone to do their best in their specific areas. He understands that people can sometimes feel overwhelmed by the weight of the world, but reminded everyone not to feel that the entire world rests on their shoulders alone. There are over 8 billion other people on Earth sharing this weight, not to mention animals, plants, and other organisms, each doing their small part.  

Speaking of the core theme of this conversation - trust - Harari shared a thought of his own: "Trust is indeed the foundation of life. If you don't trust others, if you don't trust the outside world, you can't live for a minute." Even the simple act of breathing, which sustains life, is essentially "trust in something outside ourselves". He observed that many people in the world today, including some of the most powerful, are obsessed with separation, building barriers and walls. But in fact, "complete isolation is death." Every moment, we make the simple gesture of trusting the outside world: opening our mouth or nose, inhaling something from the outside, we trust it, and then exhale it. It is this inhalation and exhalation of trust that sustains life. If you don't trust the outside world, and block your mouth and nose, you will die within a minute or two.   

"Yes, I know there are many reasons to feel anxious and fearful; these are real and not imaginary. There are huge problems in the world," Harari concluded. "But ultimately, the foundation of life is trust." 

Main Tag: AI and Society

Sub Tags: Trust, International Relations, Education, Information Age

