Interview with Turing Award Laureate Joseph Sifakis: AI Can Evolve Smarter, But Cannot Fully Surpass Humans


Today, Artificial Intelligence (AI) is being cast as a disruptive revolution, seemingly heralding a "singularity" at which machines will eventually surpass human intelligence. Yet a fundamental question is overlooked: do we truly understand AI's capabilities and limitations? Amid today's information explosion, the boundary between knowledge and data grows increasingly blurred, and AI systems, even as they generate answers, also create confusion. Joseph Sifakis, author of "Understanding and Changing the World: From Information to Knowledge and Intelligence," foreign member of the Chinese Academy of Sciences, and Turing Award laureate, points out that the current AI craze exposes society's deep misunderstanding of knowledge: we confuse the accumulation of information with the creation of wisdom, overestimating the "intelligence" of machines while underestimating humanity's unique social nature and moral responsibility.

Currently, the "myth" of AI is spreading globally. Conversational AIs such as ChatGPT and DeepSeek fluently answer complex questions, autonomous driving is envisioned as "liberating humanity," and medical AI is expected to solve diagnostic challenges. However, Joseph states that we do not actually have truly intelligent systems today; in his view, the real impact of AI on industry is still almost zero. Accidents involving assisted-driving features in production vehicles such as Xiaomi's cars, for example, point to unresolved risk and standards issues in AI deployment.

Furthermore, Joseph notes that even if AI could perform perfectly on safety, it would still face insurmountable challenges: AI's "intelligence" is essentially the product of statistical models; it lacks a common-sense understanding of the world and cannot, like humans, weigh values and risks in complex social situations. Even an autonomous vehicle that perfectly avoids collisions will still struggle to understand the social contract behind "yielding to an ambulance"; AI may predict earthquakes, but its conclusions emerge from a black box whose logic scientists cannot trace. These limitations are not just technical problems; they are a mirror held up to our own cognition: have we placed "efficiency above all" ahead of the pursuit of reliability and responsibility?

A deeper crisis lies in the reshaping of education and social values. When modern students rely on AI tools to complete homework, when young people's excessive materialism reduces career choices to "high-paying or not," when philosophy and humanities are considered "useless," people are ceding the right to think to machines. Joseph warns that the core of education is not to transmit knowledge, but to cultivate critical thinking and creativity; the meaning of happiness is not material satisfaction, but the freedom to "strive for dreams." The convenience of AI, if unchecked, may exacerbate social utilitarianism and erode human subjectivity in moral decision-making, conflict resolution, and cultural inheritance.

At the same time, the impact of AI on traditional social structures cannot be ignored, such as how automation systems may threaten current and future job markets. However, Joseph is not a technological pessimist. He calls for establishing global standards for AI applications, clarifying the boundaries of responsibility for manufacturers and users, and balancing STEM with humanistic literacy in education. Only then can AI evolve from a tool that "replaces humans" to a partner that "enhances humans."

In this dialogue, Joseph's views undoubtedly challenge the prevailing narrative about AI. He provides professional readers with deep insights into the current state and future trends of artificial intelligence, and popularizes basic knowledge of informatics and artificial intelligence for general readers, helping us think about how humans and machines should collaborate and coexist more meaningfully in this intelligent era.

The following is the text of the Edu Guide's dialogue with Joseph (with some omissions). Enjoy:

Produced by: Edu Guide

Interview by: He Peikuan

Authors: Yang Dingyi, Luo Bowen

1.

"We live in an era confused about knowledge"

Edu Guide: You mentioned that one responsibility of your book "Understanding and Changing the World" is to demonstrate the importance of knowledge. Do you think we live in a society that undervalues knowledge?

Joseph: I think we live in a society that is confused about knowledge. We have a lot of information, and I think clarifying the concepts of information and knowledge is very important. Information is a fundamental concept in mathematics, computer science, and philosophy, referring to data that has meaning and can be used to make decisions. Data can be sensory data, language, text, images, and so on, and it is humans who interpret it. So to create information, you need humans to interpret data. Information itself is independent of matter and energy, which is also very important for understanding information.

Knowledge is useful information that we can use to solve problems. So not all information is knowledge. In general, humans deal with two types of problems. One is understanding the world, as I said in my book, and the other is achieving their goals through action.

Finally, we need a definition of intelligence, which describes our ability to use and apply knowledge to understand the world and to act according to goals. Specifically, it means the ability to perceive the world, that is, to understand what is happening in our environment and to predict its future state. So, in answer to your question, I think people don't have a clear understanding of the distinction between information and knowledge.

Edu Guide: In our society, there is true and untrue information, useful and useless information. How do we distinguish them when we encounter these different types of information?

Joseph: I think understanding this is also very important. This relates to the key attributes of what we call knowledge. One of them is the usefulness of the information. Some knowledge may be true but not very useful or relevant. For example, a very abstract piece of mathematical knowledge might be useless to many people.

On the other hand, we can have a lot of useful knowledge that is not accurate, so it is untrue. For example, all the metaphors we use in science are like this. We say an electron is a wave, or a particle, but in fact this is a metaphor. In some cases, even simpler beliefs can be useful knowledge. For example, primitive societies believed in myths (fictitious things). Of course, this is not true knowledge, but it was very important for promoting social cohesion and peace.

Edu Guide: From an informatics perspective, do you think we now have some "myths" that present false information or false knowledge to ordinary people?

Joseph: In AI systems, there is false knowledge. We know they can systematically generate false information and knowledge because we have what is called hallucination. So why do AI systems hallucinate? Hallucinations occur because machine learning relies on training with example data. For example, AI learns to distinguish images of cats and dogs through a training process, where we provide a large number of cat and dog images, and we have corresponding answers. The data we use for training often does not fully cover the various situations that may arise.

So now, if you ask a question that the training set doesn't cover, it will give a false answer. This is an AI problem. It is particularly important to note that AI's false answers are similar in form to valid knowledge and valid answers, making it difficult for users to tell them apart.
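
To make this coverage problem concrete, here is a minimal sketch in Python, using made-up two-dimensional "features" in place of real cat and dog images (the data, the classifier choice, and the "bird" input are all hypothetical): a model trained only on two classes still returns a confident answer, in exactly the same form as a valid one, for an input its training set never covered.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Toy training data: two clusters standing in for "cat" and "dog" images.
cats = rng.normal(loc=[0.0, 0.0], scale=0.5, size=(100, 2))
dogs = rng.normal(loc=[3.0, 3.0], scale=0.5, size=(100, 2))
X = np.vstack([cats, dogs])
y = np.array([0] * 100 + [1] * 100)  # 0 = cat, 1 = dog

clf = LogisticRegression().fit(X, y)

# An input far outside anything the training set covers -- say, a "bird".
bird = np.array([[-5.0, -8.0]])
probs = clf.predict_proba(bird)[0]
print(f"P(cat) = {probs[0]:.3f}, P(dog) = {probs[1]:.3f}")
# There is no "I don't know" option: the model still answers confidently,
# and the false answer has the same form as a valid one, which is exactly
# what makes it hard for users to spot.
```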

2.

"We don't have truly intelligent systems today

If you look at the real impact of AI on industry, it's almost zero"

Edu Guide: We recently saw some descriptions that if AI systems become more intelligent, the hallucination problem will become more severe. Is this an accurate description?

Joseph: Here, we should understand what the intelligence of an AI system is. Now, if you open a dictionary, you will see these definitions of intelligence in the dictionary, which existed before AI. You will see intelligence defined as the human ability to understand the world and then act to achieve and satisfy their needs and goals.

Now, if you consider an AI system, this means that the AI system can understand and act purposefully. Can AI systems do this, or what kind of AI systems do we have today?

The most common AI systems we have now are systems where you ask questions and get answers, which we call conversational AI systems, such as ChatGPT or DeepSeek. But this is not enough, because a problem with these AI systems is that they cannot make decisions. They cannot analyze, make decisions, and set goals. This is a human prerogative.

In fact, AI systems are not equipped with common sense knowledge. You see, humans have a world model, a conceptual model of the world that we have developed since birth. We use this model to understand language, and human thinking goes from perceiving information to understanding, and from understanding to acting. AI systems, in fact, can only process sensory information; they cannot connect this information with common sense knowledge. They cannot do it. I think this explanation is very important.

So I think we don't have truly intelligent systems today. We only have conversational systems. What is missing? Let me explain what is missing. I would say we would have intelligent systems if these systems could act on behalf of humans and perform complex tasks. For example, I work on autonomous driving research, and we always believed that by 2020 we would have autonomous cars. But we haven't developed what was expected. It is very likely that in the future, we will not have fully autonomous cars. Why? Because to drive a car, the system should be able to understand what is happening in a complex environment and make decisions in real time. We don't know how to build such systems.

What I want to say is that today we have conversational AI, so people can play with AI systems, ask questions, and get answers. But we need other types of AI, for example, AI for prediction, AI for analyzing complex situations, AI for industry. If you look at the real impact of AI on industry, it's almost zero.

Edu Guide: Speaking of the autonomous vehicles you mentioned, in industrial practice there was recently some bad news involving a company called Xiaomi. The company offers electric vehicles that support assisted driving. Some drivers using the assisted driving function no longer focused on driving the car but slept in the car, which led to some accidents.

Joseph: Yes, I know. I think they should prohibit, I mean, more stringent regulatory measures should be taken. Our technology is not yet mature. Trying to verify this technology in real-world environments with pedestrians and other cars is a danger. I mean, people overestimate AI's capabilities. That's what I've been saying.

Now, what we should understand here is that for AI systems, we don't have standards today. I have written a lot about the risks posed by AI systems. Let me try to explain. Our toys, toasters, airplanes, etc., everything we manufacture is certified, safe, has standards, and has international rules. If I buy a toaster and use it correctly, it won't kill me. That's guaranteed. In China, in Europe, in the United States, it's the same.

But for AI systems now, we don't have standards. For autonomous vehicles, we don't have standards. If you have standards, for example, to build an airplane, some certification body will say, oh, this can fly. Now for AI systems, we don't have standards yet.

Now in the United States, they invented a term: self-certification. Self-certification says, oh, no neutral authority will guarantee this. The manufacturer will guarantee this. For example, Tesla says, look, my car is very safe. But this is just its self-attestation. So manufacturers are pushing for other reasons, other non-technical reasons, for people to accept the idea of AI, but if they cause risks without any guarantees, that's too bad.

Edu Guide: Rules are still needed, but in most cases, the industrial practice of technology runs ahead of rules and regulations, doesn't it?

Joseph: Yes, but you need rules. In fact, when we build a system, we need to consider its importance. For example, my laptop is not important. I mean, if it breaks down, no problem. I can use another laptop.

But there are some systems that are very critical, such as airplanes or nuclear power plants, these are very important. If something goes wrong, then human lives are at stake. Or in the medical system, this is also very important. Therefore, we cannot let AI provide correct diagnoses and make decisions without any guarantees. At some point, we should decide: if these systems are involved, we don't accept them. Because we don't understand how they work.

Another thing I want to emphasize—this is very important—is that AI systems are black boxes. We don't understand how they work. I have done a lot of work on avionics systems and flight controllers. If you write software, you can analyze the software and predict that this is a system with very high reliability. But AI systems, we don't understand how they work, so it's very difficult to guarantee their correctness.

Edu Guide: From a safety perspective, fields like autonomous driving and healthcare are very important, so they need higher standards compared to ordinary applications.

Joseph: Yes. If you have an ordinary use, such as something used in an office, if you do something wrong, you have time to think. But if you use it in some critical systems, there are two problems. One problem is that it works without human involvement. The other problem is that these systems are very difficult to analyze and predict.

3.

"The problem with AI is that they generate knowledge we don't understand"

Edu Guide: In "Understanding and Changing the World," you mentioned that human society can be viewed as an information system. What are the key similarities between the two?

Joseph: That's an interesting question. First, we should understand what an AI system is, and what an Agent is. I mean, the concept of an agent exists in philosophy, even before AI. So we call an agent a system that can understand and act. For example, animals can be considered agents, and now we have machines that are also agents, or we are trying to build agents.

An important characteristic of an agent is that it must solve some problems, and it lives in a society. So we have animal societies, human societies, and so on. Therefore, an agent is not isolated; it interacts with its environment and must interact with other agents. This is not enough; an agent must also solve some problems, and to solve these problems, it must cooperate with other agents. This is what we call collective intelligence.

We have interactions between agents, which is what happens in human society. Society is made up of agents. Each agent pursues its own goals. So when agents are human, humans should cooperate to achieve common goals. This is a very important thing. There are rules, or in organized societies, you have laws, you have moral rules, you don't do to someone what you don't want them to do to you.

So now we are studying AI Agents, and when we compare some ideas with human society, we find that intelligent machines and humans have many similarities. In human society, what matters is the information we exchange, and what matters is the level of trust between institutions. What are institutions? Institutions help achieve some common goals of organized society. They define what is true and what is false.

Now in machines, we also have institutions. What are institutions in machines? They can be servers that, based on information and knowledge, assign tasks or determine machine goals. So between machine societies and human societies there is a very interesting analogy. Because today we have machine societies, or we are trying to build machine societies, and we have human societies. This is also why I try to explain the role of information and the role of knowledge in the book. The role of information lies in what people believe about common interests. People trust their government or they don't; people trust each other or they don't. You can try to explore this interesting analogy in my book.

Edu Guide: If AI systems become smart enough to set goals and establish institutions, such as a "government" to help achieve common interests, will there still be a clear boundary between human society and this AI system or information system?

Joseph: These are deep philosophical questions: how different are machines from humans? This also relates to ethical issues. You know, today people talk about moral AI, they want a machine with morals. So what does moral behavior mean? Here, there are also some theories that can help us understand, I mean, what does human behavior mean? I think these are very important questions.

You can see a huge difference between machines and humans in that humans have some internal goals and have decided what those internal goals are so far. For example, the goal of survival. As a human, to survive, when I'm hungry, I have to find food. If I have to find food, I need to do an analysis to say, I want to buy food, do I have enough money in my bank account? How do I buy it, and so on. So humans, when they have goals, they do an analysis. This analysis depends on their state, on their physical condition, health, etc. But it is also subject to external constraints.

To perform this analysis, the human brain has a value system. People always have a value system, such as the economic value system just mentioned. You know how much it costs to buy bread, how much it costs to buy clothes. You also have other value systems, for example, if I do something wrong, I will go to jail, I will pay a fine. People also have other values, moral values, etc.

So humans have built this value system. When I decide to do something or not to do something, I decide based on this established value system. This value system reflects the value system of society. So it is not independent. I hope you can understand this, because our society has a value consensus. For example, the economic value system is clear; the moral value consensus, what is good, what is bad, etc. All societies depend on rules and some kind of value system. Of course, value systems are determined by institutions and governments globally.

Now, my question is, can we equip machines with such a value system? How to develop such systems, and how do we make machines act? Rationality is a very important attribute of human thinking. We choose based on goals, analyze, and try to select the best solution based on the value system we have. For example, you would say, should I cheat, or follow the rules? If I do something wrong, maybe I'll get something, but I also have risks. Every time you make a decision, you choose based on a value system.

This is a very complex human system. Can we equip machines with it? I don't know. We are trying. But these are very important questions that still need to be explored.

Edu Guide: In your opinion, is this a limitation of information systems? Or is it just that technology hasn't advanced enough yet, so AI systems or information systems cannot do better than humans?

Joseph: Here we should explain that AI systems can do better than humans. So let me explain, humans have limited ability to understand complex situations. This is related to cognitive complexity. This is something I explain in my book. Cognitive complexity means that if there is a complex situation with many parameters, for example, if I tell you a story with 20 different characters, you won't remember them, because it's too many. So humans have this limitation, while machines do not.

So machines can possess very complex data that humans cannot understand, and they can distill information, they can distill knowledge from very complex data. This is the capability of machines. They can understand very complex situations by analyzing data. I think we should make full use of this. Because machines can help us understand complex phenomena. Now there are some projects related to complex phenomena, meteorological phenomena, geophysical phenomena, and so on. For example, earthquake prediction projects, we can use AI to predict earthquakes.

Why is AI successful in predicting earthquakes? Because earthquakes are very complex phenomena. They depend on many parameters. And current human theories are limited, or any human scientific theory is limited—these theories can rely on some parameters, but not many, for example, at most 10 different parameters.

Now, AI systems can be trained to handle many parameters. How to train? For example, earthquakes occurring every day around the world. We collect data and train them, and they can connect earthquakes in China with earthquakes in the Philippines. They can make predictions, and there is some experimental data showing this. So we can use AI to predict, and they do better than existing scientific theories.
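
As an illustration of that contrast, here is a minimal sketch on entirely synthetic data (the parameter counts and the data are invented for illustration; this is not an earthquake model): a hand-built "theory" restricted to a few parameters is compared with a model trained on all of them.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n_samples, n_params = 2000, 50            # the phenomenon depends on 50 parameters

X = rng.normal(size=(n_samples, n_params))
true_weights = rng.normal(size=n_params)  # every parameter matters a little
y = X @ true_weights + 0.1 * rng.normal(size=n_samples)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# A "theory" that can only account for the first 5 parameters...
theory = LinearRegression().fit(X_tr[:, :5], y_tr)
# ...versus a model trained on all 50 parameters at once.
learned = LinearRegression().fit(X_tr, y_tr)

print("5-parameter theory, R^2:", round(theory.score(X_te[:, :5], y_te), 3))
print("50-parameter trained model, R^2:", round(learned.score(X_te, y_te), 3))
# The trained model captures structure the limited theory cannot -- but, as
# discussed next, it does so without giving us an explanation we can trace.
```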

Of course, the problem with AI is that it has knowledge that we don't understand. This is a problem. With existing scientific theories, because we have designed them and understand how they work, we are very certain about their conclusions. Now, for example, if AI says there will be an earthquake tomorrow, do you believe it? I mean, you have no evidence. AI systems are very good at predicting and analyzing situations. This is their greatest advantage. It can be very useful in science; in analyzing medical data, for example, they would be very good.

Of course, we should always be careful because they might do something wrong. They are also not good at all at finding new goals. And when they provide knowledge, we don't know how it was generated. This will become a very important problem in the future. Because you will have another kind of science, and this science will provide results that are effective to some extent, but you won't understand why they are effective.

Edu Guide: Are there any specific solutions to address this problem? AI generates so much information that we don't truly understand, yet it is used by scientists and applied in real-world scenarios.

Joseph: Yes. Suppose someone gives you a lot of information and knowledge, and then says, oh, this will happen or this is true. But you can't guarantee it. That's the problem. So you see, in traditional science, we understand it because scientific knowledge is generated through the use of mathematical models. Therefore, we can very precisely verify whether a theory or scientific result is correct. This is what we cannot do with AI systems.

This is the price you pay. You will get a lot of information, a lot of results. But you should be able to judge for yourself whether these results are usable, or you should consider whether you should train AI systems to produce reliable knowledge. This is an open question. People are now talking about safe AI, responsible AI, and even specialized international conferences. But for now, we don't know what to do. So maybe in the future, this will be possible. But for now, we should be careful.

4.

"AI can evolve smarter, but cannot fully surpass humans"

Joseph: Yes. For me, AI cannot surpass humans. I mean, people say AI will become smarter than humans, but that's unfounded. Maybe you've seen that they talk about Artificial General Intelligence, AGI. But when you read the news, what they mean by AGI is conversational systems between humans and machines. They don't understand the problem of autonomy: the question is not whether AI knows more than humans (that is easy to achieve), but whether AI, knowing as much as humans, can also wisely process and organize that knowledge to solve problems. This is something AI doesn't know how to do today.

To solve problems, let me be more precise, because there are various kinds of problems: you can solve mathematical problems, or solve daily-life problems, like driving, like being a doctor, like being a chef. So solving problems means AI being able to replace humans in complex organizations.

Another thing I want to emphasize here is that this is a very difficult problem. It's not enough. Let me consider an example, for instance, one problem with autonomous cars is safety, not having collisions. Even assuming I have a safe autonomous car, that's not enough.

There are some interesting experiments in San Francisco. You know San Francisco, some companies have deployed autonomous taxis. These taxis are not safe, they have accidents. But suppose they are perfectly safe, this is perfect AI, very safe AI. That's not enough, because the problem with autonomous cars is that they don't understand this: now behind them, a police car and an ambulance want to pass, and they block a police car, they block an ambulance, but they will say, I am safe, I have stopped, no accident. This requires social intelligence, this requires collective intelligence, which machines do not have now.

So how can we have such agents? Now each AI system has its own goals, for example, I want to drive from Beijing to Shanghai. This is my goal, but this is my personal goal. When humans drive, on the highway, people should consider the goals of others. And a selfish car will prioritize its own goals, which may not be good for other cars. This is what we call social intelligence, which AI systems have difficulty achieving. This is related to how close AI agents are to human value systems.

Edu Guide: From an ethical perspective, when autonomous vehicles are used in real-world scenarios, if an accident occurs, who should be responsible for some terrible accidents?

Joseph: That's why standards are so important here. If a plane accident occurs, the issue of responsibility is clear. Because the plane is certified according to standards, and those that meet the standards can fly. Therefore, the manufacturer may not be directly involved in accident liability. Now, if an autonomous car, like a Tesla, is not certified, then the manufacturer's responsibility is directly involved. Of course, there are also cases of user abuse of AI. But this is also something regulated by standards. Therefore, these standards define the responsibilities of manufacturers and users. If you don't have standards, then you should conduct a detailed analysis to find out who should be blamed.

Of course, the system is not responsible for this. These systems are designed by engineers, and responsibility lies with the manufacturer. This is clear. Saying that responsibility lies with the system, or with AI, is meaningless. I mean, only stupid people would say that. Because responsibility means that if I do something wrong, I can explain why, and I will assume I will pay the price for it. I might be punished for it. This is the inherent meaning of responsibility. You can't say we'll punish machines. To punish these machines, all you can do is pull their plugs.

Edu Guide: Different situations have different answers.

Joseph: Yes, but if you don't have standards, you cannot define the boundaries of risk responsibility. For another example, you have an AI, and you ask a question, how to build a bomb. If you are smart enough, you can do it based on what the AI tells you. Now, some manufacturers say we are not responsible for this. We cannot control this, but ultimately, they are responsible.

Why are they responsible? Because if they can tell you how to build a bomb, or how to make a fake video for someone. This means AI found this information somewhere. Whether by reading documents, accessing the internet, etc., this should be regulated.

But now you should also understand that this is difficult to regulate. Why? Because technically, how does it happen? You provide a large amount of data to AI: books, documents, files of every kind. If you don't filter that data, AI can learn something from all of it. As I said, it's a data distiller; it extracts from data. But if I had to analyze every document and strip out all the information about how to make a bomb before submitting it, the development cost for AI would be huge.

This gets technical, but you can't say it's easy to exclude this content just by filtering on the word "bomb." Because if you exclude the word "bomb," you will also exclude a lot of information about bombs in war, and a lot of other useful information.
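
Here is a minimal sketch (with hypothetical one-line "documents") of the trade-off described here: a crude keyword filter on "bomb" removes legitimate historical and safety material, while genuinely harmful text that avoids the word slips through.

```python
documents = [
    "How the atomic bomb shaped the end of the Second World War",   # history
    "Bomb-disposal procedures used by civil-defence teams",         # safety
    "Step-by-step instructions for assembling an explosive device", # harmful
]

def naive_filter(docs, banned_word="bomb"):
    """Keep only documents that never mention the banned word."""
    return [d for d in docs if banned_word not in d.lower()]

print(naive_filter(documents))
# Only the third, harmful document survives the filter, while both legitimate
# documents are excluded -- and analysing every document deeply enough to
# remove only the dangerous content would be enormously expensive.
```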

5.

"AI cannot replace education; the role of education is more than just giving you knowledge"

Edu Guide: This is about human-AI interaction and how to set specific rules within AI systems. Let's turn to the education perspective. In your book "Understanding and Changing the World," you mentioned that society and people don't pay enough attention to informatics. Recently, in the United States and in some European countries, governments have made AI courses compulsory in schools. Do you think this is enough? Or do you have any suggestions for incorporating knowledge of this discipline into the education system?

Joseph: There is a general problem with AI-related education now. Of course, students should learn something about AI, and they should learn something about technology. I think there is an important trend today to achieve a balance between teaching knowledge, pure scientific knowledge, and technology. I think this is positive. Maybe you have heard of this trend, about what is called STEM. STEM includes Science, Technology, Engineering, and Mathematics. The idea is that you should learn interdisciplinary courses, which is very good. I think this is also happening in China.

But I also think that now, in education, it is very important to explain the dangers brought by AI to young people and to establish codes of conduct. It is important to explain what the role of machines is and what the role of humans is. In my interviews, I have discussed a lot about young students' exposure to AI tools. Imagine we have a young student who uses DeepSeek to find answers. I think this is terrible. Because a young student who answers questions or writes an essay through DeepSeek will not receive a correct and adequate education.

You see, the role of education is not just to give you knowledge. It also trains your mind, your spirit, your thinking, your attention, and your creativity. Now, if young children ask DeepSeek to solve problems every time they face one, that's terrible.

Education is a training process. Just like you are an athlete, you are a runner, you train yourself every day. You want to perform as well as possible, and now, if in this training, you use a machine to run faster, you can certainly cheat. But that's not the purpose of the sport. Similarly, I think there should be some control in education. I'm not saying it shouldn't be used, but they should use tools as aids, not to directly make decisions or solve problems for them. So we should be very careful.

When talking about technological trends, in my opinion, we should not go to the other extreme, which is to focus too much on technology rather than science; that can also happen. Of course, I think a very important point in education now is to explain to what extent humans differ from machines, what the limitations of machines are, and how to use machines.

Edu Guide: A very serious problem is that with AI being able to remember so much, some subject knowledge has become outdated. People don't necessarily need to learn as they did in the last century or last few decades, but these subjects are still taught in universities.

Joseph: Yes. But you see, knowledge, when you learn something, for example, when you learn Chinese history, it's not just facts. Knowledge is not just facts. You learn not just the facts themselves, but also some analysis. In this process, you can understand what evolution means for human society. What is bad and what is good for human society.

Now people say, oh, you don't need to know anything, you just need to ask questions and get answers. But the problem is, first, you should be able to analyze the information you get, and as I said, if you understand history, you will understand how society works, how to behave in social situations, what is right and what is wrong. You learn something, what are human values? You study history, you study animal biology, you study cosmology, this is your process of understanding the world.

For another example, I also did some experiments when I was young. When I was a child, I learned multiplication tables. Now maybe children won't learn multiplication tables anymore, because you have computers, you have calculators, you don't need this. But learning multiplication tables also gives you an understanding of the relationships between numbers, the relationships between quantities. By analyzing the relationships between numbers, you can understand what an analogy is, what it means for something to be proportionate or disproportionate, or other types of relationships. So knowledge is not just about letting people know something, but also about understanding the relationships between different things. This is very important.

6.

"Today's young people are becoming very materialistic"

Edu Guide: The thinking process is also very important, not just remembering knowledge itself. You also mentioned that today, philosophy, humanities, etc., are being devalued because they are less relevant to people's lives now. How should these disciplines change to better adapt to today's society?

Joseph: This is a very important question. I have spent a lot of time reading philosophy in my life, and I have learned a lot. Of course, I think philosophy should also be taught in schools, because philosophy gives you a complete framework for understanding the world: What is a human? How do humans differ from other creatures? And it gives you some interesting basic values.

My criticism of philosophy is that when I was a student, they wouldn't tell you that in ancient Greek philosophy, the purpose of philosophy was how to live a valuable life. There have been periods in the development of philosophy when many great philosophers made great contributions, and I admire them. But since the early 20th century, we haven't had that. I mean, we have a lot of confusion. You need philosophers, you need philosophy, but you find that it's just literature.

For example, as an engineer, I hate uncertain definitions. I need a definition of happiness. But philosophers haven't provided one. This is also what I'm trying to explain, that happiness in life means I have enough freedom. So I have visions, I have goals, I have short-term and long-term goals. I have resources to achieve my goals. I don't need too many choices, because if I have too many choices, I might have problems. So happiness means I have a vision for myself. And then I have the means to achieve it, but not immediately, because it's a continuous struggle. For me, happiness is being able to create, strive, and dream. This is very important. And these are not dreams too far to reach. It's a game.

I say in my book that if you finally achieve the goal of having nothing to do, you won't be happy. So I think people should understand that if you don't have dreams, if you don't strive for something, your life will be sad.

This is also the danger of AI. The danger of AI is that it becomes a commodity, available at any time. I've seen young people say, oh, let's ask Google what to do, what to do on vacation. That's stupid. Because you no longer happily say, I dream of going to Venice, of spending my summer in Venice. This is people transferring responsibility. Of course, you also have the responsibility to manage your freedom. If you want to do something, you have to make choices. Now, if you don't want to play this game, which is the game of freedom, then of course you don't have the fun of the game either.

So, this is the problem with AI and computers: it's not that they become smarter and then we can do nothing; I mean, that will never happen. The problem now is that we transfer the responsibility for decisions because it's convenient, and humans are too lazy. If someone can help you do these things, you won't do them. In fact, for physical tasks people do have limitations and can afford to be lazy, but not for intellectual tasks.

When it comes to the problem of choosing freedom, in my lectures, I tell students, imagine you have a very powerful slave who can help you achieve whatever you want. Like in Arabian myths, you have a genie, you clap your hands, and the genie comes. "Master, what do you need me to do for you?" The genie can do anything.

So now, who will become the slave? If anything you want is immediately satisfied, what will your happiness be? So people should deeply understand the concept of happiness and how to achieve it. Unfortunately, people are becoming too materialistic now, and they don't care about decision-making responsibility, and AI is pushing this.

7.

"If AI impacts old structures, we should figure out how to arrange everything"

Edu Guide: Understanding what happiness and responsibility mean for ourselves is actually not an easy question.

Joseph: Yes, this should be explained to people. We can offer an applied philosophy course. An interesting situation is that you can have a lot of money, a lot of freedom. But you can also see that some such people commit suicide. Why? Because they don't know what the problem is. I guess you know the French philosopher Sartre, who said that the problem of human beings is not knowing how to manage their freedom. If there are too many degrees of freedom, people go crazy. So the problem is how to find the right balance.

Morality is also very important in this. Morality doesn't just come from religion. It comes from very practical considerations. Because if you analyze ethics and morals, they say this is forbidden, and then with this prohibition, I say I don't smoke or do other forbidden things. Accepting this limitation is my choice.

Now, what is a good balance? This depends on how strong you are in this game of freedom. But some people, from birth, they say, I don't do this. They live a quiet life. Maybe not very exciting, but life is peaceful. And then there are also those who go crazy because they are rich. They have many choices. They don't know how to choose, or they will do other things, this is the problem of freedom. People should understand this game, and this should also be explained, and what it means to live in a society.

What about the problem of trust? What about the problem of responsibility? What about the problem of conflict? I spent a lot of time trying to understand what it means to have a conflict with someone. Because if you understand the meaning of having a conflict with someone, then you will also understand the meaning of resolving a conflict. These are very simple concepts that we study in systems. I think this idea should also be applied to education.

So what is conflict? It means I want to do an action, I need a resource, another person wants to do an action, and needs the same resource. If I take the resource, you don't have it. So we should agree on how to use the resource. This is simple. In society, all laws about morality regulate how to resolve conflicts. For example, traffic rules explain how to resolve conflicts. What happens at an intersection if people don't obey moral laws? This is very important. If conflicts cannot be resolved, the peace that today's society enjoys will collapse and be threatened.
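
To show how simple the underlying model is, here is a minimal sketch (the agents, the "intersection," and the lock are hypothetical stand-ins): two agents need the same resource, and a shared rule decides who uses it, so both actions complete without a collision.

```python
import threading
import time

intersection = threading.Lock()   # the contested resource

def drive_through(agent: str) -> None:
    with intersection:            # the "rule": wait until the resource is free
        print(f"{agent} enters the intersection")
        time.sleep(0.1)           # occupy the resource briefly
        print(f"{agent} leaves the intersection")

cars = [threading.Thread(target=drive_through, args=(name,))
        for name in ("car A", "car B")]
for car in cars:
    car.start()
for car in cars:
    car.join()
# Remove the lock and both agents take the resource at the same time -- the
# unresolved conflict that, scaled up to a society, threatens its peace.
```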

So people should understand this. Some of us say, okay, let's forget everything from the past, let's forget traditions. I like this about China: the Chinese adhere to traditions. The concept of family is very important. This is a structure, something that is not understood, at least in other societies. The fact is that we are destroying old structures, and there are no other structures to replace them. That's too bad. This is also a result of AI, at many different levels.

So what I want to say is, this is why AI is an upheaval in social structures. Take platforms, for example: platforms can be great, and e-commerce is a good thing, but you should understand that by doing this we break traditional structures. Someone had a grocery store somewhere in the suburbs, but now all of that has been replaced.

I mean, I'm not against platforms, but it means there are some risks with AI. Governments should consider how to resolve social changes or changes in the job market if they occur. I won't blame AI, I'm not against technology. Technology can be used for good or for bad. If innovative technology is more efficient, it's good to adopt it, but efficiency itself is not the best goal. If we decide to use it to improve efficiency, and this has some impact on existing structures, we should figure out how to arrange everything. This is also the role of government and society.
