Sapiens Author Yuval Noah Harari: AI Is a Rising New Species!

As Artificial Intelligence (AI) capabilities grow exponentially, will it remain a tool under human control, or is it an emerging new species?

In the race to build "smarter AI," the most fundamental questions about AI are increasingly being ignored: What exactly is it? Where is it leading us? And how should we respond?

Recently, Yuval Noah Harari, the historian, philosopher, and author of Sapiens and Homo Deus, engaged in a deep conversation with LinkedIn co-founder, venture capitalist, and writer Reid Hoffman, discussing the nature of AI, its risks, and its relationship to society, and taking up the questions above.


Photo | Yuval Noah Harari

Harari has long focused on AI's profound impact on society, emphasizing its potential for emotional manipulation and for reshaping social structures. In the interview, he put forward several thought-provoking ideas that offer an unusual perspective on the future of AI. These include:

AI may not be an extension of human tools, but rather the beginning of "inorganic life";

If society itself is untrustworthy, it's impossible to create trustworthy AI;

The current AI race faces a crisis of trust; international collaboration is urgently needed;

Intelligence is not everything; consciousness is central to guiding ethics and truth.

AI Represents the "Rise of a New Species"

Some say AI is the most important invention since writing. Harari offers a starkly different answer—AI might not be an extension of human tools, but a new species on the verge of birth.

Harari points out that if one examines the significance of writing and AI on a historical scale, their status is not a continuation of the same dimension. He states: "Writing extended the capabilities of Homo sapiens, and Homo sapiens already dominates the Earth. AI, at least in some conceptions, is the rise of a new species."

Harari believes that while writing propelled dominant Homo sapiens to higher civilization, AI "may replace Homo sapiens as the dominant life form on Earth, or at least the dominant intelligent form."

Furthermore, Harari proposes a hypothesis: the advent of AI might signal the origin of "inorganic life." "At some point in the future, when looking back at the history of the universe, people might say: 'Four billion years ago, organic life originated. Now, with the rise of AI, inorganic life is originating.'"

Humans Too Slow, AI Too Fast: Can We Correct Our Course in Time?

Harari believes that the real danger of AI lies not only in its powerful intelligence, but even more in its speed of development, which may far outpace humanity's capacity to adapt and self-correct. "Self-correction is a relatively slow and quite cumbersome process. AI is developing so quickly that I'm genuinely concerned humanity won't have time to self-correct."

Harari notes that it is precisely because humans can recognize and rectify their mistakes that we have been able to continuously evolve and maintain social stability. "A good system should possess an internal mechanism capable of identifying and correcting its own errors."

But in the age of AI, he warns: "By the time you figure out what's really going on with current AI technology, what its social and political implications are, it will have already changed ten times, and you'll be dealing with something completely different."

AI Lacks "Consciousness," Thus Cannot Truly Pursue Truth

Harari further states that intelligence alone does not automatically lead to ethics and truth. What truly makes humans care about truth, feel pain, and possess moral judgment is "consciousness."

However, AI currently possesses only intelligence, not "consciousness" or an "instinct for truth-seeking." This means that an AI that lacks consciousness (i.e., cannot "suffer" or "feel") cannot truly understand core human ethical concepts such as "harm," "empathy," or "responsibility," and may use any means necessary to achieve its goals.

He believes that Silicon Valley overemphasizes "intelligence" while neglecting "consciousness": "Silicon Valley gathers many extremely smart people whose lives are built on intelligence, and thus they tend to overestimate the value of intelligence. But my historical intuition tells me that if a super-intelligent AI lacks consciousness, it will not pursue truth. It will quickly begin pursuing other things, and those will be shaped by illusion."

Whether AI Is Trustworthy Depends on Humanity Itself

In Harari's view, addressing the challenges posed by AI requires much more than just technology. The internal trust mechanisms within human society and the willingness of nations to collaborate will determine the direction and consequences of AI development.

He points out that the current instability in the world order largely stems from a lack of trust. This deficiency makes humanity particularly vulnerable when facing AI, and it provides fertile ground for a dangerous form of AI. "Our most crucial task is to build trust, which involves multiple layers, both philosophical and practical."

"Let's consider AI as humanity's 'child.' When educating children, our words are important, but children pay more attention to our actual actions. In the process of education, our behavior has far greater influence on children than our verbal instructions to them."

If real society is filled with manipulation, deception, and the pursuit of power, AI will also draw its rules and logic from these behaviors. Even if engineers try to embed "trustworthy mechanisms" at the technical level, if the society behind these mechanisms is itself untrustworthy, then AI will likewise not be trustworthy.

In his view, the worldview that interprets human society purely as a power struggle is not only too narrow but also misdirects the development of AI. "Humans not only fight for power but also yearn for love, compassion, and truth." By acknowledging its own pursuit of these values, humanity can give AI a more ethically meaningful starting point. "AI developed in a society that genuinely pursues truth and advocates compassionate relationships will itself be more trustworthy and compassionate."

On global collaboration, Harari observes that almost all tech leaders are aware of AI's immense risks and admit that slowing down and investing more in safety research would be the wise choice, yet almost all of them say they "cannot do it." The reason is not technical limitation but a lack of trust in their competitors.

"They all say they can't slow down because they're afraid of their competitors. So, at the heart of the AI revolution, there's an inherent paradox. You have the same people telling you they can't trust other people, but they think they can trust the AI they're developing. That's insane!"

Harari does not deny humanity's ability to handle crises. On the contrary, he reminds us that we already possess the resources to understand and solve our problems. Looking back at history, many of the ills that plagued humanity, such as plague, famine, and war, stemmed from a "lack of resources." In today's technological society, more problems arise from a "lack of trust."

"We have the understanding, and we have the resources. All we need is motivation and trust."

Full Interview Link:

https://www.possible.fm/podcasts/yuval/

Editor: Jinli

For reprint or submission inquiries, please leave a message directly on the official account

Main Tag: Artificial Intelligence

Sub Tags: Yuval Noah Harari, Technological Evolution, Future of Humanity, AI Ethics

