Altman Reveals Stunning Prediction: GPT-8 to Cure Cancer by 2035! Humanity Might Wage WWIII Over Compute Power

Unsurprisingly, the release of GPT-5 caused a huge stir across the internet.

The cry of "Give me back GPT-4o" swept across the web like a tsunami!

Even Karpathy agreed that GPT-5 was indeed somewhat disappointing.

Finally, amidst overwhelming criticism, Altman quickly admitted his mistake, stating he would immediately bring GPT-4o back!

Fortunately, anyone missing GPT-4o can simply go to Settings and enable "Show legacy models" to bring the previous models back to the dropdown menu.

Although the launch of GPT-5 was particularly rocky, OpenAI employees were all beaming with joy.

That's because Altman had just decided to issue huge bonuses, ranging from hundreds of thousands to millions of dollars, to about 1,000 employees (roughly a third of the company) on the technical research and engineering teams.

The specific scale of the bonuses will depend on employee performance, position, and seniority, and can be received in cash or stock.

Right after GPT-5's release yesterday, Altman sat down for an interview with well-known YouTube host Cleo Abram.

A few years ago, GPT-5 felt as distant as science fiction, but today, we are in the most dangerous global race in history. Hundreds of billions of dollars invested, incredible human effort… this moment is profoundly significant.

In this interview, Altman directly addressed many of the most controversial current issues.

This time, he once again made a series of astonishing claims:

By 2027, AI will make a major scientific breakthrough.

By 2035, GPT-8 will very likely cure cancer.

In the industrial-revolution-scale explosion of 2050, humanity might spark World War III in the scramble for AGI compute.

However, in his view, the probability of AI destroying humanity is 1%.

He also believes that today's fresh graduates are the luckiest generation in history, because there has never been an era in which a company founded by a single person could be valued at over a billion dollars.

GPT-8 Era: AI Could Cure Cancer

Altman expressed great confidence in AI healthcare by 2035.

Many people have already experienced this: they had a life-threatening illness that no doctor could diagnose, but after they entered their symptoms and blood-test data into ChatGPT, it identified the rare disease.

Following its guidance and taking the medication, they recovered, to everyone's surprise.

Now, GPT-5 is even more capable on health questions. Its answers are more accurate, with fewer hallucinations, and it is more likely to identify the real cause of an illness and what to do about it.

Altman stated he believes that by 2035, AI will cure or at least treat many of the diseases that currently plague humanity.

For example, in the GPT-8 era, we could ask it to "cure a certain cancer."

It would first read all existing research and data and propose some treatment ideas. Then it would tell us: "I need you to have a lab technician run these 9 experiments and report the results back to me."

After two months of cell culture, the lab technician sends the results back to GPT-8, and it might say: "Hmm, there was an unexpected finding here, I need one more experiment."

Then it would tell you: "Go synthesize this molecule and test it on mice." If effective, proceed to human trials.

Finally, it would say: "Okay, here's the process for FDA submission."

For anyone who has lost a loved one to cancer, this is what they dream of.

2050: 10x Industrial Revolution Explosion, Who Will Be Hurt?

The host asked: If by 2050, we experience a societal transformation like an industrial revolution, but ten times larger in scale and ten times faster, who will be hurt in this process?

Altman responded that we are in uncharted waters: the transition itself might happen very fast, but society won't change as rapidly as the technology does.

During this process, many jobs will disappear, many jobs will significantly change, and entirely new jobs will emerge.

At the same time, we genuinely do not know how extreme the speed and breadth of this change will be.

Therefore, we need unusual humility and an openness to new solutions that would not normally enter public discussion.

That is to say, we might need entirely new social contracts. For example, we need to rethink how the most important resources of the future should be shared.

From Altman's perspective, the best approach is to make AI compute so abundant and cheap that it is effectively limitless. If that cannot be achieved, humanity might truly go to war over compute in the future.

So the key question is—how to distribute the right to access AGI compute.

Perhaps one day, OpenAI's systems will "output more words" each day than all of humanity combined. Already, people send billions of messages to ChatGPT daily and make decisions based on the replies.

And a small change by a researcher to a model's "way of speaking" or personality could ripple through a vast number of conversations. That is an immense amount of power.

Everything is happening too fast; we must seriously consider what it means to alter a model's personality on this scale.

GPT-5 and GPT-4, What's the Real Difference?

The host posed a sharp question: What can GPT-5 do that GPT-4 cannot?

The reason for asking is that not long ago, Altman had said, "GPT-4 is the last time in our lives we'll use such a 'dumb' model." Yet GPT-4 already outperformed 90% of humans on the SAT, LSAT, and GRE.

To this, Altman replied: AI systems can do these amazing things, but they cannot replicate human abilities in many areas, and tests like the SAT are actually very limited.

What Excites Altman Most About GPT-5

Altman stated that what excites him most about GPT-5 is, "I can ask it almost any difficult scientific or technical question and get a pretty good answer."

For example, when he was in eighth grade, he spent a long time creating a Snake game on his TI-83 graphing calculator, which quickly became popular throughout the school.

However, when he tried an early version of GPT-5, it created the game in 7 seconds, and then, whether adding new features or changing the interface, it finished each request in seconds.

Altman once worried that this would let children skip the "Stone Age" struggles of programming.

But now he is simply excited for them: this on-demand, near-instant creation of software is one of the defining features of the GPT-5 era, something the GPT-4 era did not have.

The host also asked: You've been using GPT-5 for a while now, what's the most impressive task it has done for you?

Altman said it was programming tasks. "GPT-5's programming ability makes me feel—it can do almost anything."

Currently, it just can't physically interact with the real world directly, but being able to make computers perform extremely complex tasks turns software itself into a powerful "control lever."

2027: AI to Usher in Major Scientific Breakthroughs

The next question came from Stripe CEO Patrick Collison: What's after GPT-5? Will LLMs make a major scientific discovery within a few years?

Altman stated he would bet that by the end of 2027, AI will have made a scientific discovery that most people recognize as important.

The only thing still missing, he said, is sufficient cognitive ability in the models.

For example, recently, OpenAI's model won a gold medal at the IMO. The IMO has 6 problems to be completed in 9 hours.

However, proving a significant new mathematical theorem might take a top mathematician 1,000 hours. On current growth curves, models are very likely to reach that level by 2027, and the key is continuing to scale them up.

1899: Could AI Have Discovered General Relativity?

Next, the host posed a rather interesting question.

Suppose we went back to 1899, gave AI all the physics knowledge of that time, and let it extrapolate from that. When would it have proposed general relativity?

If no more physics data were provided, could superintelligence solve high-energy physics problems just by relying on existing data? Or would it have to build new particle accelerators?

Altman replied that, in his guess, pure reasoning over existing data would not be enough; new equipment would have to be built and new experiments run. Major breakthroughs depend on these, because real-world processes are slow and complex.

In short, measured by task length, current AI already surpasses humans on one-minute tasks, but it still has a long way to go on thousand-hour tasks.

Jensen Huang's Question: What is Truth?

The next question came from NVIDIA CEO Jensen Huang:

A fact is what is; truth is what it means. The former is objective; the latter is subjective, depending on perspective, culture, values, beliefs, and background.

An AI can learn and understand facts, but how can it know what truth means for every person and every background in the world?

Altman stated that he is often surprised by AI's fluidity in adapting to different cultural contexts and understanding individual differences.

For example, ChatGPT's previously launched "enhanced memory" feature allows it to understand our life experiences and backgrounds, knowing what brought us to today.

He has a friend, a heavy ChatGPT user, who had input a lot of content related to his life. Once, he asked ChatGPT to pretend to be him and take a personality test.

The result was unexpected: it was almost identical to his own!

Altman himself also feels that ChatGPT has learned a lot about his cultural background, values, and life experiences. At this point, experiencing a "clean" version of ChatGPT without any historical record feels completely different.

In the future, perhaps people from different regions will use AI with different "cultural calibrations"—the underlying model will be the same, but with different contextual inputs, it will behave in ways expected by users, communities, or even countries.

Today's Graduates Are the Luckiest Generation in History

The host mentioned that many AI CEOs are now saying that within 5 years, half of entry-level white-collar jobs will be replaced by AI.

She then asked: if I were a fresh graduate in 2035, what would I want the world to be like?

Altman said that AI may indeed cause many jobs to disappear, but it is also very likely to lead to many completely unexpected new professions.

For example, some people might directly perform missions to explore the solar system, traveling in spaceships to do unprecedented work—both high-paying and interesting.

People then might pity us today, because the jobs we do are so boring and backward.

A 10-year time span is already hard to imagine. At most, we can imagine up to 2030.

Altman stated that he actually doesn't worry about young people; they are often the most adaptable to changes like the complete disappearance of entry-level jobs.

Instead, he worries more about those aged 62, who are about to retire and don't want to learn new skills.

In short: "If I were a 22-year-old fresh graduate right now, I would feel I belong to the luckiest generation in human history."

Because there has never been an era where you could create entirely new things, start a business, and have a huge impact at such a low cost.

In this era, stories of "a company founded by one person eventually worth over a billion dollars" will frequently unfold.

Subtly Criticizing Musk: We Won't Use Sexy Robot Avatars

Altman stated that the current bottlenecks facing AI mainly fall into four categories—compute, data, algorithms, and product.

First, it is foreseeable that compute demand will spike again now that GPT-5 has launched, beyond what OpenAI can meet.

The next area requiring the most effort is scaling up compute infrastructure. From millions of GPUs, expanding to tens of millions, hundreds of millions, and eventually billions.

The biggest challenge among these is energy. Running a gigawatt-scale data center is hard because that much electricity is difficult to secure on short notice.

At the same time, OpenAI is also limited by the supply of GPUs and memory chips, and how to package and assemble them into racks—these are all challenging issues.

The ideal state is to be able to build robots to participate in the automated construction of the entire data center.

The second bottleneck is data.

We are entering a new phase: models need to learn things that don't exist in any current dataset—they have to discover new things.

So how do you "teach" a model to discover? Humans do it by proposing hypotheses, running experiments, and updating their beliefs based on the results; models will likely proceed along the same lines.
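That hypothesize-experiment-update loop can be sketched as a toy simulation. This is purely illustrative and not anything OpenAI has described: the "experiment" here is a made-up dose-threshold test, and every function name is hypothetical.

```python
# Illustrative toy only: a hypothesize-experiment-update loop.
# run_experiment stands in for a slow real-world experiment that only
# reports whether a proposed dose clears an unknown effective threshold.

def run_experiment(dose, true_threshold=3.7):
    # The "lab" knows the hidden truth; the loop below does not.
    return dose >= true_threshold

def discovery_loop(lo=0.0, hi=10.0, rounds=20):
    # Propose a hypothesis (the midpoint), test it, update beliefs, repeat.
    for _ in range(rounds):
        hypothesis = (lo + hi) / 2
        if run_experiment(hypothesis):
            hi = hypothesis  # effective: threshold is at or below this dose
        else:
            lo = hypothesis  # ineffective: threshold is above this dose
    return (lo + hi) / 2

print(round(discovery_loop(), 2))  # converges on the hidden threshold, 3.7
```

Real discovery is of course not a bisection over one parameter, but the structure (propose, test against the world, narrow the belief) is exactly the loop the passage describes.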

The third is algorithm design.

What OpenAI does best is establish a culture of consistently achieving significant algorithmic research gains.

First they worked out the GPT pretraining paradigm, and then they developed the reasoning paradigm.

Altman stated he is very excited about the possibility of even more orders of magnitude of algorithmic gains to come.

For example, because they found algorithmic gains in inference, GPT-oss can run on a laptop; this kind of "algorithmic dividend" is probably the most gratifying part of OpenAI's work.

And the fourth is product.

Productization is critical: scientific breakthroughs alone are not enough; they must be put into people's hands to evolve together with society and form feedback loops.

This also relates to the trade-off between "doing the right thing for the world" and "winning the race."

Here, Altman subtly criticized Musk: we could also pursue many short-term growth-accelerating approaches, but they would conflict with our goal of staying aligned with users.

For example, we haven't put "sexy robot avatars" in ChatGPT. While that might increase engagement time, it's inconsistent with the direction we are committed to.

Many people say ChatGPT is their most liked and trusted technology; this relationship is precious, and we won't easily sacrifice it.

Will AI Destroy Humanity?

Now, regarding the advent of AI, people have split into two camps: the "arrivalists" and the "doomers."

Some believe that AI will be very useful for our future, while others who are also building AI tools say it will destroy all of humanity.

Altman stated that he finds it difficult to empathize with the latter; perhaps they have some psychological issues.

If it were truly going to kill off humanity, he wouldn't be busy building it.

His belief is: "AI has a 99% chance of being extremely good, but a 1% chance of being catastrophic; I want to push that 99% towards 99.5%."

Altman stated that he has been an AI enthusiast since childhood, went to college to study AI, and joined an AI lab. But when he was in college, this prospect was still far off.

It wasn't until 2012, when Ilya Sutskever co-authored the AlexNet paper, that he first felt this path was viable.

At the time, he was puzzled: why hadn't the whole world noticed this yet? As long as the scale increased, it would work.

Now, reality has indeed confirmed his conjecture.

And in the future, even a year from now, it will be hard to imagine what might happen.

References:

https://youtu.be/hmtuvNfytjM
