Sam Altman only publishes 2-3 blog posts a year.
The "Gentle Singularity" post, released yesterday, like Ilya's speech the day before, uses plain language and no grand words, unlike certain big-name influencers who like to show off "advanced" "cognition."
However, if you, like me, read it five, six, seven, eight times, you will find it incredibly valuable as practical guidance.
title: the gentle singularity
author: sam altman
date: 2025-06-10
We are past the event horizon; the takeoff has started. Humanity is close to building digital superintelligence, and at least so far it’s much less weird than it seems like it should be.
Event horizon: a physics term. In general relativity, a black hole's "event horizon" is an invisible boundary that divides spacetime into two regions: once any particle or piece of information crosses this line, it can never return; nothing, no matter how powerful or fast, not even light, can escape.
Robots are not yet walking the streets, nor are most of us talking to AI all day. People still die of disease, we still can’t easily go to space, and there is a lot about the universe we don’t understand.
And yet, we have recently built systems that are smarter than people in many ways, and are able to significantly amplify the output of people using them. The least-likely part of the work is behind us; the scientific insights that got us to systems like GPT-4 and o3 were hard-won, but will take us very far.
AI will contribute to the world in many ways, but the gains to quality of life from AI driving faster scientific progress and increased productivity will be enormous; the future can be vastly better than the present. Scientific progress is the biggest driver of overall progress; it’s hugely exciting to think about how much more we could have.
In some big sense, ChatGPT is already more powerful than any human who has ever lived. Hundreds of millions of people rely on it every day and for increasingly important tasks; a small new capability can create a hugely positive impact; a small misalignment multiplied by hundreds of millions of people can cause a great deal of negative impact.
Misalignment: a technical term. For example, algorithmic short-video feeds are, in essence, misaligned AI: the objective the AI optimizes is not aligned with your value as a user.
2025 has seen the arrival of agents that can do real cognitive work; writing computer code will never be the same. 2026 will likely see the arrival of systems that can figure out novel insights. 2027 may see the arrival of robots that can do tasks in the real world.
Simply put: 2025, L3 AGI (agents); 2026, L4 AGI (innovators).
A lot more people will be able to create software, and art. But the world wants a lot more of both, and experts will probably still be much better than novices, as long as they embrace the new tools. Generally speaking, the ability for one person to get much more done in 2030 than they could in 2020 will be a striking change, and one many people will figure out how to benefit from.
In the most important ways, the 2030s may not be wildly different. People will still love their families, express their creativity, play games, and swim in lakes.
But in still-very-important-ways, the 2030s are likely going to be wildly different from any time that has come before. We do not know how far beyond human-level intelligence we can go, but we are about to find out.
In the 2030s, intelligence and energy—ideas, and the ability to make ideas happen—are going to become wildly abundant. These two have been the fundamental limiters on human progress for a long time; with abundant intelligence and energy (and good governance), we can theoretically have anything else.
Already we live with incredible digital intelligence, and after some initial shock, most of us are pretty used to it. Very quickly we go from being amazed that AI can generate a beautifully-written paragraph to wondering when it can generate a beautifully-written novel; or from being amazed that it can make life-saving medical diagnoses to wondering when it can develop the cures; or from being amazed it can create a small computer program to wondering when it can create an entire new company. This is how the singularity goes: wonders become routine, and then table stakes.
Table stakes: a poker term for the minimum bet each player must place on the table before the game begins. By extension: the basic qualification, the minimum threshold for entry.
We already hear from scientists that they are two or three times more productive than they were before AI. Advanced AI is interesting for many reasons, but perhaps nothing is quite as significant as the fact that we can use it to do faster AI research. We may be able to discover new computing substrates, better algorithms, and who knows what else. If we can do a decade’s worth of research in a year, or a month, then the rate of progress will obviously be quite different.
Advanced AI: similar to Ilya's "the best AIs" from yesterday. AIs, like humans, are intelligent agents and should not be lumped together as one undifferentiated thing; they should be distinguished by intelligence level.
From here on, the tools we have already built will help us find further scientific insights and aid us in creating better AI systems. Of course this isn’t the same thing as an AI system completely autonomously updating its own code, but nevertheless this is a larval version of recursive self-improvement.
There are other self-reinforcing loops at play. The economic value creation has started a flywheel of compounding infrastructure buildout to run these increasingly-powerful AI systems. And robots that can build other robots (and in some sense, datacenters that can build other datacenters) aren’t that far off.
If we have to make the first million humanoid robots the old-fashioned way, but then they can operate the entire supply chain—digging and refining minerals, driving trucks, running factories, etc.—to build more robots, which can build more chip fabrication facilities, data centers, etc, then the rate of progress will obviously be quite different.
As datacenter production gets automated, the cost of intelligence should eventually converge to near the cost of electricity. (People are often curious about how much energy a ChatGPT query uses; the average query uses about 0.34 watt-hours, about what an oven would use in a little over one second, or a high-efficiency lightbulb would use in a couple of minutes. It also uses about 0.000085 gallons of water; roughly one fifteenth of a teaspoon.)
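The per-query figures above are easy to sanity-check. A back-of-envelope sketch in Python; the oven and bulb wattages (1,200 W and 9 W) are my own assumptions for "typical" appliances, not numbers from the post:

```python
# Back-of-envelope check of the per-query energy and water figures quoted above.
QUERY_WH = 0.34           # average energy per ChatGPT query, per the post
QUERY_GALLONS = 0.000085  # average water per query, per the post

OVEN_W = 1200             # typical electric oven element (my assumption)
BULB_W = 9                # high-efficiency LED bulb (my assumption)
TSP_PER_GALLON = 768      # 1 US gallon = 768 US teaspoons

oven_seconds = QUERY_WH / OVEN_W * 3600   # Wh / W = hours, then to seconds
bulb_minutes = QUERY_WH / BULB_W * 60     # Wh / W = hours, then to minutes
teaspoons = QUERY_GALLONS * TSP_PER_GALLON

print(f"oven: {oven_seconds:.2f} s")    # oven: 1.02 s ("a little over one second")
print(f"bulb: {bulb_minutes:.2f} min")  # bulb: 2.27 min ("a couple of minutes")
print(f"water: {teaspoons:.3f} tsp")    # water: 0.065 tsp (roughly 1/15 teaspoon)
```

Under these assumed wattages, the numbers line up with the comparisons in the post.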
The rate of technological progress will keep accelerating, and it will continue to be the case that people are capable of adapting to almost anything. There will be very hard parts like whole classes of jobs going away, but on the other hand the world will be getting so much richer so quickly that we’ll be able to seriously entertain new policy ideas we never could before. We probably won’t adopt a new social contract all at once, but when we look back in a few decades, the gradual changes will have amounted to something big.
As I understand it, the only way out is UBI (Universal Basic Income): distribute money to everyone. It seems unrealistic to expect everyone to learn quickly, keep learning for life, and adapt to and master AI. Those unwilling to rely on UBI, or not expecting it to arrive, will have to use AI daily and keep learning AI knowledge and tools.
If history is any guide, we will figure out new things to do and new things to want, and assimilate new tools quickly (job change after the industrial revolution is a good recent example). Expectations will go up, but capabilities will go up equally quickly, and we’ll all get better stuff. We will build ever-more-wonderful things for each other. People have a long-term important and curious advantage over AI: we are hard-wired to care about other people and what they think and do, and we don’t care very much about machines.
A subsistence farmer from a thousand years ago would look at what many of us do and say we have fake jobs, and think that we are just playing games to entertain ourselves since we have plenty of food and unimaginable luxuries. I hope we will look at the jobs a thousand years in the future and think they are very fake jobs, and I have no doubt they will feel incredibly important and satisfying to the people doing them.
The rate of new wonders being achieved will be immense. It’s hard to even imagine today what we will have discovered by 2035; maybe we will go from solving high-energy physics one year to beginning space colonization the next year; or from a major materials science breakthrough one year to true high-bandwidth brain-computer interfaces the next year. Many people will choose to live their lives in much the same way, but at least some people will probably decide to “plug in”.
Looking forward, this sounds hard to wrap our heads around. But probably living through it will feel impressive but manageable. From a relativistic perspective, the singularity happens bit by bit, and the merge happens slowly. We are climbing the long arc of exponential technological progress; it always looks vertical looking forward and flat going backwards, but it’s one smooth curve. (Think back to 2020, and what it would have sounded like to have something close to AGI by 2025, versus what the last 5 years have actually been like.)
There are serious challenges to confront along with the huge upsides. We do need to solve the safety issues, technically and societally, but then it’s critically important to widely distribute access to superintelligence given the economic implications. The best path forward might be something like:
1. Solve the alignment problem, meaning that we can robustly guarantee that we get AI systems to learn and act towards what we collectively really want over the long-term (social media feeds are an example of misaligned AI; the algorithms that power those are incredible at getting you to keep scrolling and clearly understand your short-term preferences, but they do so by exploiting something in your brain that overrides your long-term preference).
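The social-media example in point 1 can be made concrete with a toy sketch. All item names and numbers below are invented for illustration; the only point is that ranking by short-term engagement and ranking by long-term value produce different feeds:

```python
# Toy illustration of a "misaligned feed": an algorithm that greedily maximizes
# short-term engagement drifts away from what the user wants long-term.
items = [
    # (name, short-term click probability, long-term value to the user)
    ("deep essay",     0.20,  1.0),
    ("news analysis",  0.40,  0.6),
    ("outrage clip",   0.90, -0.5),
    ("endless shorts", 0.95, -0.8),
]

# Engagement-maximizing feed: rank purely by click probability (the proxy).
engagement_feed = [name for name, clicks, value in sorted(items, key=lambda x: -x[1])]
# Aligned feed: rank by long-term value to the user (the actual goal).
aligned_feed = [name for name, clicks, value in sorted(items, key=lambda x: -x[2])]

print(engagement_feed[:2])  # ['endless shorts', 'outrage clip']
print(aligned_feed[:2])     # ['deep essay', 'news analysis']
```

Real feed-ranking systems are vastly more complex, but the structural problem is the same: optimizing a proxy (clicks) rather than the goal (long-term value), which is exactly the misalignment the post describes.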
2. Then focus on making superintelligence cheap, widely available, and not too concentrated with any person, company, or country. Society is resilient, creative, and adapts quickly. If we can harness the collective will and wisdom of people, then although we’ll make plenty of mistakes and some things will go really wrong, we will learn and adapt quickly and be able to use this technology to get maximum upside and minimal downside. Giving users a lot of freedom, within broad bounds society has to decide on, seems very important. The sooner the world can start a conversation about what these broad bounds are and how we define collective alignment, the better.
We (the whole industry, not just OpenAI) are building a brain for the world. It will be extremely personalized and easy for everyone to use; we will be limited by good ideas. For a long time, technical people in the startup industry have made fun of “the idea guys”; people who had an idea and were looking for a team to build it. It now looks to me like they are about to have their day in the sun.
OpenAI is a lot of things now, but before anything else, we are a superintelligence research company. We have a lot of work in front of us, but most of the path in front of us is now lit, and the dark areas are receding fast. We feel extraordinarily grateful to get to do what we do.
Intelligence too cheap to meter is well within grasp. This may sound crazy to say, but if we told you back in 2020 we were going to be where we are today, it probably sounded more crazy than our current predictions about 2030.
As this article was published, OpenAI lowered the price of o3 to be even cheaper than GPT-4o. Top-tier AI at rock-bottom prices.
May we scale smoothly, exponentially and uneventfully through superintelligence.
In other words: may the world stay peaceful. The singularity is rapidly approaching, and hopefully it will be a "gentle singularity."
Miscellaneous Thoughts
A few thoughts related to the article.
First, an "open secret": for translating English, the best tool is OpenAI Deep Research. It works in one shot: the rendering is accurate, the language is appropriate, and there is almost nothing left to revise (for this article, I only added a few annotations). Crucially, it does not omit, silently alter, or garble anything.
Many years ago, I made a living from translation (I translated two-thirds of the Bloomberg Businessweek special issue published when Steve Jobs died). The fees were very high; I could earn 50,000 RMB over a single National Day holiday. Now, looking at the Deep Research translation workflow, I can only breathe a sigh of relief that I changed careers.
This experience is valuable: essentially, human translation and low-quality AI translation no longer need to exist. You can simply treat Deep Research as a translation agent whose output can be used directly.
Second, let me use a specific example to prove a terrifying point.
The point: in the AI era, intelligence is no longer "unique" (advanced intelligence used to be exclusive to Homo sapiens); it has become a resource, just like electricity: nothing special, and extremely cheap.
How cheap? The cost of intelligence will eventually approach the price of electricity, so cheap it barely needs to be metered. Sam Altman provided the current average energy consumption for one ChatGPT conversation: 0.34 Wh electricity + 0.000085 gal water.
The example: take Sam Altman's latest blog post. Fifteen years ago, translating such an article would have cost around 1,000 RMB (at 400 RMB per thousand characters). Now a Deep Research run takes 10 minutes. Faster, better, and virtually free. (So why do universities still have English majors?)
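The economics behind this example are easy to check. A rough sketch: the per-query energy figure comes from Altman's post, while the electricity price ($0.15/kWh) and the assumption that translating one long essay takes on the order of 200 model queries are mine:

```python
# Marginal electricity cost of machine translation vs. a human translation fee.
WH_PER_QUERY = 0.34        # average energy per query, per Altman's post
USD_PER_KWH = 0.15         # assumed retail electricity price
QUERIES_PER_ARTICLE = 200  # assumed queries to translate one long essay

electricity_cost = WH_PER_QUERY / 1000 * USD_PER_KWH * QUERIES_PER_ARTICLE
print(f"${electricity_cost:.4f}")  # $0.0102, about one cent, vs ~1,000 RMB
```

Even with generous assumptions, the marginal energy cost is on the order of a cent; what you actually pay today is the API price, which is itself falling rapidly, as the o3 price cut mentioned above shows.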
Many types of intellectual labor are not fundamentally different from translation labor.
Proof complete.