Tech giants say artificial intelligence will soon match the powers of humans’ brains. Are they underestimating us?
By Cade Metz
May 16, 2025
Sam Altman, the chief executive of OpenAI, recently told President Trump on a private call that the technology would arrive before the end of Mr. Trump’s term. Dario Amodei, the chief executive of Anthropic, OpenAI’s main competitor, has said on podcasts multiple times that it could arrive even sooner. Elon Musk, the tech billionaire, has also said the technology could be here by the end of this year.
Like many other voices in Silicon Valley and beyond, these executives are predicting the imminent arrival of artificial general intelligence, or AGI.
The term “AGI” appeared on the cover of a book published by a group of fringe researchers in the early 2000s to describe the autonomous computer systems they hoped to one day build. Since then, it has been shorthand for a future technology that will reach the level of human intelligence. There is no fixed definition of AGI, only a tantalizing idea: an artificial intelligence capable of matching the many abilities of the human mind.
Mr. Altman, Mr. Amodei and Mr. Musk have long chased the goal, as have executives and researchers at companies like Google and Microsoft. Thanks in part to their enthusiastic pursuit of this ambitious idea, they have developed technologies that are changing how hundreds of millions of people do research, create art and write computer programs. Today, these technologies promise to remake entire industries.
But since the arrival of chatbots like OpenAI’s ChatGPT and the continuous improvement of these strange and powerful systems over the past two years, many tech experts have grown bolder with their predictions about when AGI will arrive. Some have even said that once they achieve AGI, a more powerful creation called “superintelligence” will quickly follow.
These ever-confident voices are predicting that AGI is near, and their speculation is outpacing reality. While their companies are driving the technology forward at a dazzling pace, a chorus of far more measured voices is quick to dismiss any talk that machines will soon match human intelligence.
“The technology we are building now is not enough to get there,” said Nick Frosst, a founder of the A.I. start-up Cohere, who was a researcher at Google and studied under some of the most respected A.I. researchers of the past 50 years. “What we are building now is something that can take words and predict the next most plausible word, or take pixels and predict the next most plausible pixel. That is very different from what you or I do.”
More than three-quarters of respondents to a recent poll by the Association for the Advancement of Artificial Intelligence, a 40-year-old academic organization whose members include some of the most respected researchers in the field, said the methods now used to build A.I. were unlikely to lead to AGI.
Part of the disagreement stems from the fact that scientists cannot even agree on how to define human intelligence, arguing over the merits and demerits of I.Q. tests and other benchmarks. Comparing the human brain to a machine is even more subjective. That means how you define AGI is largely in the eye of the beholder. (Last year, in a high-profile lawsuit against OpenAI, one of his chief rivals, Mr. Musk’s lawyers argued that AGI already existed. The claim carried weight because OpenAI had signed a contract with its primary financier pledging that it would not sell products based on AGI technology.)
And scientists have no definitive proof that today’s technologies can do some of the simpler things the brain can, like recognize sarcasm or feel empathy. Claims that artificial general intelligence is just around the corner are based on statistical inference — and wishful thinking.
According to various benchmarks, today’s technology is continuously improving in several important areas, like math and computer programming. But these tests describe only a fraction of human abilities.
Humans know how to navigate a messy and constantly changing world. Machines struggle with the unexpected — the large and small things that are unlike anything that came before. Humans can devise ideas that have never been seen before. Machines typically repeat or augment things they have already seen.
For these reasons, Mr. Frosst and other doubters believe that machines will not reach human-level intelligence without at least one big idea that the world’s tech experts have not yet conceived. There is no telling how long that will take.
“The fact that a system is better than people at some task doesn’t mean it’s better at everything else,” said Steven Pinker, a cognitive scientist at Harvard University. “There is no omniscient machine that automatically solves every problem, including problems we haven’t even thought of yet. It’s easy to fall into magical thinking. But these systems are not miracles. They are very impressive gadgets.”
“The A.I. Can Meet the Goal”
Chatbots like ChatGPT are driven by what scientists call neural networks, mathematical systems that can identify patterns in data like text, images and sounds. By finding patterns in, say, massive collections of Wikipedia articles, news reports and internet chats, these systems can learn to generate human-like text on their own, such as poems and computer programs.
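The mechanics behind that prediction can be illustrated without a neural network at all. The sketch below is a deliberately tiny stand-in, assuming only the Python standard library: a bigram model that counts which word follows which in a toy corpus and then predicts the most frequent continuation. Real chatbots learn far richer patterns with neural networks trained on vast datasets, but the underlying objective, predicting the next most plausible word, is the same in spirit.

```python
"""A toy illustration of next-word prediction (not how production
chatbots are actually built): tally which word follows which in a
corpus, then predict the most plausible continuation."""
from collections import Counter, defaultdict

corpus = "the cat sat on the mat . the cat ate the fish .".split()

# Count every observed word pair: follows[w] maps each word that
# appeared after w to the number of times it did so.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict(word: str) -> str:
    # The "most plausible" next word is simply the most frequent one.
    return follows[word].most_common(1)[0][0]

print(predict("the"))  # prints 'cat', the most common continuation
```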
This means the systems can improve at a much faster pace than older computer technologies. For decades, software engineers built applications by writing code line by line, a step-by-step process that could never create anything as powerful as ChatGPT. Because neural networks can learn from data, they can reach new heights — and reach them quickly.
After witnessing the improvements in these systems over the last 10 years, some tech experts believe that progress will continue at the same pace — to AGI and even beyond.
“All these trends point to the limits dissolving,” said Jared Kaplan, the chief scientist at Anthropic. “A.I. is very different from human intelligence. It is much easier for humans to learn new tasks. They don’t need lots of practice like the A.I. does. But eventually, with more practice, the A.I. can meet the goal.”
Among A.I. researchers, Dr. Kaplan is known for publishing a seminal academic paper that described what are now called scaling laws. The essence of these laws: The more data an A.I. system analyzes, the better its performance. Just as a student learns more by reading more books, an A.I. system can discover more patterns in text and learn to mimic the ways humans stitch together words more accurately.
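Stated as math, the paper’s core finding is compact: as a model’s parameter count, training data and compute grow, its test loss (a measure of how badly it predicts the next word) falls along smooth power laws. A rough sketch of those laws follows; the exponents are the approximate values fit in the 2020 paper, and the constants are fit to the data rather than derived from theory.

```latex
% Test loss L as a power law in model size N (parameters),
% dataset size D (tokens) and training compute C,
% with fitted constants N_c, D_c, C_c:
L(N) \approx \left(\frac{N_c}{N}\right)^{\alpha_N}, \qquad
L(D) \approx \left(\frac{D_c}{D}\right)^{\alpha_D}, \qquad
L(C) \approx \left(\frac{C_c}{C}\right)^{\alpha_C}
% with approximately \alpha_N \approx 0.076,
% \alpha_D \approx 0.095, \alpha_C \approx 0.050.
```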
In recent months, companies like OpenAI and Anthropic have nearly exhausted all the English-language text on the internet, meaning they need a new way to improve their chatbots. So they have leaned more heavily on a technique scientists call reinforcement learning. Through this process, a system can learn behaviors by trial and error, a process that can last weeks or even months. By, say, solving thousands of math problems, it can learn which tricks lead to correct answers and which do not.
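Here is a minimal sketch of that trial-and-error loop, with a multi-armed bandit standing in for a real language model (the arithmetic "tricks" and the loop below are purely illustrative, not any company’s actual training pipeline). The learner is never told which trick is right, yet the reward signal alone steers it toward correct arithmetic.

```python
"""A toy trial-and-error learner: from reward alone, it discovers
which 'trick' answers addition problems correctly."""
import random

tricks = {
    "add":      lambda a, b: a + b,
    "subtract": lambda a, b: a - b,
    "multiply": lambda a, b: a * b,
}
value = {name: 0.0 for name in tricks}  # estimated payoff of each trick
counts = {name: 0 for name in tricks}

for step in range(2000):
    a, b = random.randint(0, 9), random.randint(0, 9)
    # Epsilon-greedy: usually exploit the best-looking trick,
    # occasionally explore the others.
    if random.random() < 0.1:
        name = random.choice(list(tricks))
    else:
        name = max(value, key=value.get)
    answer = tricks[name](a, b)
    reward = 1.0 if answer == a + b else 0.0  # correctness is unambiguous
    counts[name] += 1
    value[name] += (reward - value[name]) / counts[name]  # running average

print(max(value, key=value.get))  # after training: 'add'
```

The toy also hints at the technique’s limits, discussed below: arithmetic has unambiguous right answers, so the reward signal is clean. Where no such signal exists, as in creative writing, the loop has little to learn from.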
Thanks to this technique, researchers like Dr. Kaplan believe that scaling laws (or something like them) will continue to hold. As the technology tries and learns across a vast array of domains, researchers said, it will follow a trajectory similar to that of AlphaGo, which was built by a team of Google researchers in 2016.
Through reinforcement learning, AlphaGo mastered the game of Go, a complex Chinese strategy board game, by playing millions of games against itself. That spring, it defeated one of the world’s best human players, stunning the A.I. world and beyond. Most researchers had believed that such an achievement was still a decade away.
AlphaGo played the game in ways humans had never conceived, teaching top players new strategies for attacking the ancient game. Some believe that systems like ChatGPT can make the same leap, reaching AGI and then becoming superintelligent.
But games like Go operate within a narrow and finite set of rules. The real world is bound only by the laws of physics. Modeling the entire real world is far beyond the abilities of today’s machines, so who can be sure that artificial general intelligence — let alone superintelligence — is just around the corner?
The Gap Between People and Machines
Undeniably, today’s machines already exceed human brains in some ways, but this has been true for ages. Calculators perform basic math faster than humans. Chatbots like ChatGPT write faster, and as they write, they can instantly draw on more text than any human brain can read or memorize. On some tests involving advanced math and coding, these systems perform even better than humans.
But humans cannot be reduced to these benchmarks. “There are many types of intelligence in nature,” said Josh Tenenbaum, a professor of computational cognitive science at M.I.T.
One obvious difference is that human intelligence is tied to the physical world. It extends beyond words, numbers, sounds and images to tables and chairs, stoves and frying pans, buildings and automobiles, and everything else we touch in our daily lives. Part of intelligence is knowing when to flip a pancake that sits in a frying pan.
Some companies have started training humanoid robots in much the same way others train chatbots. But this is much harder and more time-consuming than building ChatGPT, requiring vast amounts of training in physical labs, warehouses and homes. Robotics research lags chatbot research by many years.
The gap between people and machines is wider still: in both the physical and digital worlds, machines struggle to match the harder-to-define parts of human intelligence.
The newest method for building chatbots, reinforcement learning, works well in areas like math and computer programming, where companies can clearly define what counts as good and bad behavior. Math problems have irrefutable answers. Computer programs must compile and run. But the technique does not work well with creative writing, philosophy or ethics.
Mr. Altman wrote recently on X that OpenAI had trained a new system that was “good at creative writing.” He added that it was the first time he had been “genuinely emotionally moved by something an A.I. wrote.” Writing is one of the things these systems are best at. But “creative writing” is hard to measure. It takes different shapes in different situations and presents traits that are hard to explain and even harder to quantify: sincerity, humor, honesty.
When these systems are deployed into the world, humans tell them what to do and guide them through novel, changing and uncertain moments.
“A.I. needs us: these creative beings that keep producing, that keep feeding the machines,” said Matteo Pasquinelli, a professor of philosophy of science at Ca’ Foscari University of Venice. “It needs the originality of our minds and our lives.”
Thrills and Fantasies
The prospect of artificial general intelligence is exciting, inside the tech industry and out. The human dream of creating an artificial being stretches back to the myth of the golem, which emerged in the 12th century. Works like Mary Shelley’s “Frankenstein” and Stanley Kubrick’s “2001: A Space Odyssey” grew out of this fantasy.
Many of us now use computer systems that can write like us and even speak, so it is natural to assume that intelligent machines are just around the corner. It is what we have been expecting for centuries.
A group of academics founded the field of artificial intelligence in the late 1950s, convinced that they could soon build computers that simulated the human brain. Some believed that within a decade, machines would defeat the world chess champion and discover mathematical theorems of their own. None of that happened within that time frame. Some of it has yet to happen.
Many of the developers in the tech world today believe they are achieving some kind of technological destiny, pushing toward an inevitable scientific moment, like the invention of fire or the birth of the atomic bomb. But they cannot offer a scientific reason that the moment is just around the corner.
For this reason, many other scientists believe artificial general intelligence cannot be achieved without a new idea — something that goes beyond powerful neural networks that simply find patterns in data. This new idea could arrive tomorrow. But even if it does, it will take years for the industry to develop it.
Yann LeCun, the chief A.I. scientist at Meta, has dreamed of building what we now call AGI since he was 9 years old and watched the 70-millimeter widescreen Panavision film “2001: A Space Odyssey” in a Paris movie theater. He is also one of three pioneers who shared the Turing Award, considered the Nobel Prize of computing, in 2018 for their early work on neural networks. But he does not see the arrival of AGI as a sure thing.
At Meta, his research lab is exploring outside the boundaries of the neural networks that have captivated the tech industry. Mr. LeCun and his colleagues are searching for the missing idea. “Whether the next generation of architectures can achieve human-level A.I. in the next 10 years is critical,” he said. “Maybe not. It is not guaranteed yet.”
Cade Metz is a reporter for The New York Times covering artificial intelligence, autonomous vehicles and machines.