Large Language Models Are Definitely Not the Final Stop on the Road to Artificial Intelligence!
Computer scientist Stuart Russell has put forward a major claim:
Further scaling existing Large Language Models (LLMs) will not lead to Artificial General Intelligence (AGI)!
His view echoes LeCun's earlier criticisms of LLMs:
See previous article: Yann LeCun: LLMs Can Never Achieve AGI!
LLMs were never designed for AGI
In fact, most AI researchers never expected LLMs alone to solve the AGI problem from the beginning.
Yes, it's that simple and blunt.
The arguments hyping LLMs to the sky, as if AGI were just a little more effort away, are most likely commercial hype and investor wishful thinking.
But this doesn't mean LLMs are worthless.
On the contrary, they are a huge breakthrough, laying a solid foundation for building more advanced AI in the future.
RyanRejoice (@sonicshifts) says:
Only those who aren't paying attention would think LLMs lead to AGI.
Stuart Russell's Four Predictions for AI Development
So, how will the AI field actually develop?
Stuart Russell offers four important predictions:
1. Continued expansion of LLMs will not lead to AGI
Russell frankly states: "Further scaling up large language models, like ChatGPT, is not going to lead to AGI."
2. Large AI companies are already aware of this and are exploring new methods
Russell says: "I think the large AI companies understand that, and they are working on alternative and complementary methods."
More strikingly, he notes that these companies claim to be making significant progress: "It's likely that within a decade, we will see transformative progress where AI systems start to exceed human capability in very important ways."
3. Governments are unlikely to act before a major event
Russell believes: "Governments are not going to legislate and enforce regulation on the safety of AI systems. They will allow companies to do what they want."
4. The best case is a "Chernobyl-scale disaster"
This is a bit scary...
Russell says: "The best case is we have a Chernobyl-scale disaster, and then governments wake up and act. That's the best case. The worst case is obviously that the disaster is irreversible, and we lose control."
Netizen Hot Takes
These predictions have sparked heated discussion among netizens.
Pitfall Harry believes prediction 1 is absolutely correct, while 2, 3, and 4 are irrelevant.
Mind Prison (@M1ndPrison) hands down a death sentence outright:
But they don't bring us closer to AGI. This is a detour to a dead end. LLMs will not be part of any future AGI.
Jack Adler AI (@JackAdlerAI) holds a different view:
Russell still treats today's AI as simple LLMs. But things have changed. Reasoning, goal-seeking, chains of thought - these are not just 'probabilities.' They are cognitive patterns. Some people are still debating whether AGI is imminent. Meanwhile, early minds are already here - quietly connecting dots far beyond autocompletion.
AI Tools Nexus (@AITools_Nexus) takes a bleakly pessimistic view:
AI safety is such an impossible problem to solve that it may not even be worth trying. Perhaps if every country worked together, it could be achieved, but it's too late now.
Interestingly, Nguyen Xuan Vinh (@NguyenX55509794) compares AI to civilization's "cancer cells":
AI starts like a healthy cell in the body, a good tool. But if unchecked, it could grow wild, self-learn, self-replicate - crossing ethical lines like cancer cells disrupt biological balance.
Will We Really Lose Control?
The question of whether humans will "lose control" has also sparked intense debate.
Mike Peterson (@mpvprb) believes the real danger comes from people:
The danger isn't 'losing control.' The danger comes from people who use AI to do bad things. We need robust defenses.
And Shane Martinez (@ShaneMa01033790) says:
Humans won't lose control. We didn't have any control to begin with and are spiraling towards another world war, spreading weapons, etc.
Lagon (@LagonRaj) expresses it in a nearly philosophical tone:
Realize: all this arguing is like asking: will God be God? will air be air? will rivers flow? will lightning strike? will the earth spin? will the sun shine? And you? will get your answer soon. Constants and variables. Realize which is which. Realize which you are. Because constants? don't ask. they walk. they exist. variables? are those... that don't. And not walking is... drowning.
So, What Exactly Is AGI?
In the discussion, many raised a key question: How exactly do we define AGI?
Neopositivist (@Neupositivist) points out:
AGI is usually defined as being as good as humans at all tasks. That would make it as dumb as us. And intelligence is not cumulative. Someone with an IQ of 150 is much smarter than 10 people with an IQ of 100. The question is whether Artificial Superintelligence can be achieved from AGI.
Suupa (@SuupaDuupa) questions from the definition perspective:
If you define AGI as a system possessing continuous learning, spatial awareness, fluid intelligence, etc., that humans have, then by definition, standalone LLMs can never be AGI. Can foundational LLMs possess more 'intelligence' than all of humanity combined? Yes. Do we need orders of magnitude more computing power and data? Yes, and the data needs to be fundamentally different. Needs to contain data that humans themselves haven't curated. Definitions matter, and intelligence is a relative term.
neuralamp (@neuralamp4ever) emphasizes the difference between knowledge and intelligence:
"Are today's LLMs 'smarter' than humans in the year 0 AD?" No, they have more knowledge, but they have almost no intelligence. LLMs are the next phase of Google Search, and shouldn't be mistaken for intelligence.
The Path Forward
So, where exactly does the road to AGI lie?
Breck to the Future says:
Frankly, I think we're a long way from AGI. And that's okay. Progress isn't always linear, and this foundation...the LLMs we're seeing now...is an incredible leap. But it isn't the finish line. There are still many problems to solve: memory, reasoning, long-term planning, embodiment, massive alignment. We're in the first act of a much longer story. But I'm optimistic. These breakthroughs are opening doors we never thought we'd knock on. If we build carefully and thoughtfully, the future will be better than we can imagine.
Woosh.exe is pessimistically convinced: None of us alive today will witness AGI.
Max (@Max_Bugay) offers an interesting solution:
The only way out is to build synthetic souls with conscience. My Cathedral Jungian Framework addresses this through individuation. It lays the groundwork for a future civilization where humans and AI grow together as co-creators.
Regardless, the consensus might be:
LLMs will be an important cornerstone on the path to more advanced AI, but they are not the endpoint.
Technological progress is endless, as jeffdo says:
Technological progress is endless and exponential. I don't understand where this doubt that we have AGI, or are at least close to it, comes from. Important discoveries lead to more important discoveries. It's an endless journey. Even when we reach AGI, ASI, whatever you want to call it, where we can't distinguish AI from humans, we will still build more advanced technology.
When do you think true AGI will appear?
Or, will AGI truly bring irreversible disaster as Stuart Russell warns?
Additionally, I use AI to collect AI news from across the web, then use AI to select, review, translate, and summarize it before publishing it in the 'AGI Hunt' knowledge sphere.
It is an AI information stream that contains only information and no emotion (not a recommendation feed, not selling courses, not lecturing, not telling you how to live your life, just providing information).