Edited by Yifeng, Yunzhao
I wonder if you've noticed: Sam Altman has been unusually quiet this year.
Unlike in 2024, he hasn't been making frequent appearances on tech podcasts, at conferences, or in interviews. Even through several new OpenAI releases, he's been absent. The most active "CEO influencer" in the AI world seems to have suddenly entered "parental leave" mode, receding into the background.
Yet, at this critical juncture when AI products, agents, and large models are rapidly iterating, we particularly want to know what he's been thinking recently.
The good news is: he has finally made a public appearance!
At the recently concluded 2025 Snowflake Summit, Sam Altman appeared as a key guest, joining Snowflake CEO Sridhar Ramaswamy and Conviction founder Sarah Guo for a highly information-dense fireside chat.
The twenty-minute conversation was brief but packed with valuable insights!
For instance, his advice to AI entrepreneurs has changed; this year's theme is: "Act now."
For the first time, Altman expressed decisive and clear support for enterprise adoption of large models.
To all business owners and team leaders who are still waiting for GPT-5 model updates and taking a wait-and-see approach, Altman says: Instead of waiting for new models, start now.
"You'll find that businesses that are quick to commit and learn rapidly are already significantly ahead of their peers who are still observing," Altman stated.
Sridhar nodded in agreement: "There won't be a perfect moment when everything is ready."
In addition, Altman offered a second judgment, this one about agents. He believes: "The basic unit of future work is the AI Agent."
Altman describes the current stage as more like hiring an AI intern: you give it a task, such as "find the missing SEO optimizations on our official website," and it reads your site's code, checks search trends, scans your GitHub and Slack history, then hands you a draft optimization plan. You just click "approve" or "revise."
Altman says this "intern" will soon become an engineer capable of independently managing projects. In other words, future work might not be "you completing 10 tasks," but "you directing 10 agents" and then fine-tuning their results.
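To make "directing 10 agents" a bit more concrete, here is an editorial sketch of what such a review loop could look like in Python. Everything in it (the InternAgent class, the review step, the task list) is a hypothetical stand-in for illustration, not OpenAI's actual agent API:

```python
# Hypothetical sketch of "you directing 10 agents": each agent drafts work
# in the background, and the human only approves or requests a revision.
from dataclasses import dataclass

@dataclass
class Draft:
    task: str
    plan: str  # the agent's proposed output, e.g. a list of SEO fixes

class InternAgent:
    """Illustrative stand-in for a background research/coding agent."""
    def run(self, task: str) -> Draft:
        # A real agent would read the site code, check search trends,
        # and scan GitHub and Slack before drafting a plan.
        return Draft(task=task, plan=f"Proposed changes for: {task}")

def review(draft: Draft) -> bool:
    """The human's whole job: approve the draft or send it back."""
    print(f"[review] {draft.task} -> {draft.plan}")
    return True  # imagine clicking "approve" here

tasks = [f"Audit site section {i} for missing SEO tags" for i in range(10)]
agents = [InternAgent() for _ in tasks]

for agent, task in zip(agents, tasks):
    draft = agent.run(task)
    if not review(draft):
        agent.run(task + " (revised per feedback)")
```

The human's role shrinks to exactly what Altman describes: assigning tasks, then clicking "approve" or "revise."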
Finally, he also revealed his vision for the "perfect model":
"Very small, but with superhuman reasoning capabilities, extremely fast, a context window of one trillion tokens, and access to every tool you can imagine."
👇 Below is a photo from the event (from left: Sam Altman, Sarah Guo, Sridhar Ramaswamy).
Here is the transcribed interview. Enjoy:
Model Usability Has Taken a "Qualitative Leap," AI Entrepreneurs Must Act Now
Host Sarah: Let's get straight to it: Sam, what advice would you give to business leaders who are navigating the AI revolution?
Sam Altman: My advice is: act now. There's a lot of hesitation out there. Models are changing so fast that everyone is thinking, "wait for the next version," or "let's see which of these two models is better," or "where will this trend ultimately go?"
But in technology, there's a general principle: when technology iterates quickly, the winners are often the companies that can rapidly experiment, reduce the cost of failure, and accelerate their learning speed.
What we've observed so far confirms this: businesses that commit early and experiment quickly perform significantly better than their peers who are observing and waiting.
Sridhar: I completely agree with Sam. I'd also add that curiosity is truly critical. Many of the old processes we rely on no longer hold, but most people haven't realized it. These days, many platforms, Snowflake included, let you experiment at very low cost, running many small tests from which you can extract value and continuously optimize.
I want to re-emphasize Sam's point: the faster you can iterate, the more you will benefit from AI. Teams that iterate quickly know what works and what doesn't, and they can adapt to rapidly changing conditions.
In the next few years, there won't be a "perfect moment" for everything to settle. You just have to move fast in the chaos.
Host Sarah: So, how does your advice differ from last year?
Sridhar: Actually, I would have said the same thing last year, especially "stay curious" and "allow for experimentation." Those two points have always been important.
The key is to experiment in scenarios where the cost of failure is low, and there are actually many such scenarios.
However, the technology is indeed maturing faster. Today's ChatGPT, for example, can already combine with web search to provide fresh information; it is no longer a tool "disconnected from real-time data."
Whether it's structured or unstructured data, current chatbot technology is ready for mainstream use. Of course, we can still explore the boundaries of "agent" capabilities further, but even in applications far from the cutting edge, this technology is highly usable.
Sam Altman: Interestingly, my view last year really was different from now. To startups, I would have said get started early; but to large enterprises, I might have said: "You can run small-scale experiments, but in most cases it's not yet ready for production environments."
But this view has changed now—our large enterprise clients are growing incredibly fast in this area. They are truly using our technology at scale. I often ask them: "What has changed?" And they say: "Part of it is that we've figured out how to use it, but the bigger change is: this thing is just so much better now!"
It can do many things that were previously unimaginable. At some point, over the past year, the "usability" of models took a qualitative leap.
An even more interesting question is: what new insights will we share this time next year?
I predict that by then, we will enter a phase where you can not only use AI to automate business processes or develop new products but also truly say, "I have an extremely important business problem, and I am willing to throw a lot of compute at it to solve it."
And models will be able to accomplish tasks that previously couldn't be done even with teamwork.
Companies that have already started accumulating practical AI experience will have an advantage in future competition. By then, they can say, "Come on, AI system, completely refactor this critical project for me."
This is the prelude to the next qualitative leap: massive compute + AI reasoning capability + difficult problems. Whoever is ready will be able to take the next big step.
Codex Gave Me an AGI Feeling! Agents Will Solve Tricky Business Problems Next Year
Host Sarah: Since you mentioned reasoning capabilities, compute investment, and agents joining workflows, the issue of "memory and retrieval" cannot be avoided—what role do you think they will play in this round of AI transformation?
Sridhar: Retrieval technology has always been crucial for grounding generative AI, especially when real-world references are needed. For example, in the GPT-3 era, we built large-scale systems supporting web search, which could pull external information as references when you asked about current events.
Similarly, memory systems are very important. A model that can "remember" how you solved problems before, along with your interaction history with the system, will deliver a much better experience and far greater efficiency.
I believe that as models are used for increasingly complex tasks, the role of memory and retrieval will become even more critical. Whether it's improving interaction quality or enabling stronger agent behavior, the richer the context, the better the AI performs.
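As an editorial aside, here is a bare-bones sketch of what "memory as context" means in practice: store past exchanges, retrieve the most relevant ones, and prepend them to the next prompt. The word-overlap scoring below is a toy stand-in; production systems use embeddings or a vector index.

```python
# Minimal memory sketch: keep past exchanges, retrieve the most relevant
# ones for a new query, and prepend them as context.

memory: list[str] = []  # past exchanges, newest last

def remember(exchange: str) -> None:
    memory.append(exchange)

def recall(query: str, k: int = 2) -> list[str]:
    # Toy relevance score: count of shared words.
    q = set(query.lower().split())
    ranked = sorted(memory,
                    key=lambda m: len(q & set(m.lower().split())),
                    reverse=True)
    return ranked[:k]

def build_prompt(query: str) -> str:
    context = "\n".join(recall(query))
    return f"Relevant history:\n{context}\n\nUser: {query}"

remember("User fixed the billing bug by resetting the invoice cache.")
remember("User prefers answers as short bullet lists.")
print(build_prompt("The billing bug is back. What did we do last time?"))
```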
Host Sarah: Sam, can you give every leader here a framework to think about: what can agents do now? And what might they be able to do next year?
Sam Altman: Our recently released coding agent, Codex, gave me my first personal "AGI moment." You watch how it works: you hand it a batch of tasks, and it just quietly executes them in the background. It's genuinely very smart and can complete long-cycle, multi-stage tasks.
You just sit there and say, "This one passes," "That one doesn't," "Try again." It can even connect to your GitHub, and in the future, it might be able to watch your meetings, view your Slack chats, and read all your internal documents. What it's doing is already very impressive.
Perhaps right now it's just an "intern" that can work a few hours a day, but soon, it will be like a "senior engineer" who can work for several days straight. And these kinds of changes won't just happen in programming; we will see agents play similar roles in many types of work.
Many companies are already using agents to automate customer support, drive sales processes, and even more business areas. Some people are already describing their "job" as assigning tasks to a group of agents, evaluating output quality, analyzing how they collaborate, and providing feedback.
It sounds like managing a relatively young team. And this isn't just imagination—it is actually happening, it's just not yet fully widespread.
Next year, in some limited scenarios, even if only to a small extent, we will start to see agents truly helping humans discover new knowledge or solve very complex business problems.
Current agents mainly handle repetitive, short-cycle, lower-level cognitive tasks. But as the tasks they take on become longer-term and more complex, at some point we will reach the moment when "AI scientists" emerge: a new type of agent that can autonomously make scientific discoveries.
That will be a world-changing moment.
Host Sarah: You said that the experience with Codex and programming agents was your first "AGI feeling" moment. So I have to ask: how do you define AGI (Artificial General Intelligence) now? How far are we from it? And what does it mean for us?
Sam Altman: I think if you could go back in time, even just five years ago…
Host Sarah: That was almost the "dark ages" of AI.
Sam Altman: Actually, that period was also very interesting. If we go back exactly five years, I may not remember precisely, but it would have been around the eve of our launching GPT-3. At that time, the world hadn't yet seen truly powerful language models.
If you could go back to that point in time and show people today's ChatGPT, not even mentioning Codex or other products, just ChatGPT alone, I think most people would say: "Isn't this AGI?"
Humans are very good at "adjusting our expectations," which is actually a very beautiful aspect of human nature.
So, I think the question of "what exactly is AGI" is not important in itself. Everyone has a different definition for it, and the same person will give different definitions at different times.
What's truly important is: the speed of AI's leapfrog progress that we've seen over the past five years—it's very likely to continue for another five years, or even longer.
Whether you put AGI's "victory point" at 2024, 2026, or 2028 isn't that critical; nor does it matter much whether you put the superintelligence milestone at 2028, 2030, or 2032.
The point is: this is a long and beautiful, astonishingly smooth exponential curve.
For me, a system that can autonomously discover new science, or a tool system that multiplies the speed of scientific discovery worldwide, already meets all my criteria for AGI.
Of course, some people insist that AGI must be self-improving; others feel that a version like ChatGPT with memory functions is already very much like AGI.
Host Sarah: Indeed, by some early tests, such as the Turing Test, ChatGPT has already passed.
Now, let's turn to Sridhar, do you remember when you first used an OpenAI model for search?
Sridhar: We were actually using the GPT-3 Playground at the time, running small experiments. We later connected to the API as well, though back then we weren't allowed to use the full GPT-3 model.
We then worked backward: how to achieve similar results using a 7-billion or 10-billion parameter model.
For me, the first "aha!" moment was seeing GPT truly solve a difficult problem: abstractive summarization.
That is, compressing an entire blog post into a three-sentence description. The task is difficult even for humans, but these models suddenly became able to do it.
At that moment, I realized that if it could do this kind of thing across an entire web corpus—combined with search engine capabilities that can determine which pages are worth looking at—that would be a new era for search engines.
I remember thinking to myself: Wow, this thing has real power. And its performance only got better afterward.
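For readers who want to try Sridhar's "aha!" task themselves today, here is a minimal sketch using the current OpenAI Python SDK (in the era he describes, this would have gone through the GPT-3 Playground or the Completions API). It assumes an OPENAI_API_KEY in the environment, and the model name is only an example:

```python
# Sketch of abstractive summarization: ask a chat model to restate a post
# in its own words rather than extract sentences verbatim.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def summarize(post: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # any capable chat model works here
        messages=[
            {"role": "system",
             "content": "Summarize the post in exactly three sentences, "
                        "in your own words (abstractive, not extractive)."},
            {"role": "user", "content": post},
        ],
    )
    return resp.choices[0].message.content

print(summarize("<paste a blog post here>"))
```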
Host Sarah: In your journey as an entrepreneur and CEO, was there a moment when you suddenly realized, "Wow, now everything is about search, or rather, 'search+'"? I myself hired former Neeva employees, and their philosophy at the time was the same: everything in this era is about search. When did that thought hit you?
Sridhar: This question is really about "setting context." When you start using these models, or thinking through a problem, you realize you need a mechanism to narrow the field of view, to make the model focus on the content you want it to process.
This is a very powerful and generalizable technique. You see many fine-tuning and post-training techniques now, and the underlying logic is similar: take a very powerful model, provide it with context, tell it which information is relevant and which is irrelevant, and then use this method to improve the model's output quality.
I think this is more like a general way of thinking, rather than just a specific tool. If you want to achieve a certain result, the key is to set the "context" well.
Context is effectively infinite, and humans solve this problem with an "attention" mechanism: we focus on one point at a time. I see search as a tool for setting the model's attention.
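An editorial sketch of that idea, search as an attention-setting tool: rank a document corpus against the query and keep only the most relevant passages within a fixed context budget. The lexical-overlap scoring is an illustrative stand-in for BM25 or embedding search, and the corpus is made up:

```python
# "Search as attention": select only the passages the model should focus on,
# subject to a context budget, before building the prompt.

corpus = {
    "pricing.md": "enterprise tier pricing and billing terms for contracts",
    "oncall.md":  "escalation policy for production incidents and paging",
    "roadmap.md": "planned features for the next two quarters of work",
}

def score(query: str, text: str) -> int:
    # Toy lexical overlap; real systems use BM25 or embeddings.
    return len(set(query.lower().split()) & set(text.lower().split()))

def focused_context(query: str, budget_chars: int = 200) -> str:
    ranked = sorted(corpus.items(),
                    key=lambda kv: score(query, kv[1]),
                    reverse=True)
    picked, used = [], 0
    for name, text in ranked:
        if used + len(text) > budget_chars:
            break
        picked.append(f"## {name}\n{text}")
        used += len(text)
    return "\n\n".join(picked)

print(focused_context("what are the billing terms for enterprise pricing"))
```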
Host Sarah: Do you agree with Sam's view that we are on an exponential curve of capability growth? Or do you have your own definition of AGI, a standard that matters more to you or your clients?
Sridhar: I think this will become a very philosophical debate. For example, there's an analogy: "Does a submarine really swim?" In a sense, it sounds a bit absurd, but from another perspective, it certainly does "swim."
So I also see these models as possessing incredibly amazing capabilities. Anyone who follows future trends and sees the performance of these models might say: "This is already AGI."
But as Sam mentioned, what sounds impressive now may look trivial a year from now.
What truly amazes me is the speed of progress. I sincerely believe that this process will bring many great achievements.
It's a bit like asking how we should view a "decent computer" being able to defeat all the world's chess masters. Is that really important?
Not really. Many of us still play chess, and the best players are still very good at it.
So I don't think the debate over definitions is that crucial. Go, too, is more popular now than before. We will learn a great deal along this path, but "that specific moment" isn't the point.
Perfect Model: Lightweight, Strong Reasoning, Capable of Calling All Tools
Host Sarah: I have an intuition: when people ask about AGI, many are really asking about "consciousness," even if they haven't articulated the question clearly, or only some of them ask it explicitly. You said earlier that this is more of a philosophical matter, so let me ask you: you're already training the next generation of models internally and seeing capabilities others haven't seen yet. From a product and company-operations perspective, what new "emergent capabilities" are changing the way you think?
Sam Altman: Yes, the models released in the next one or two years will be amazing. We still have a lot of room for improvement ahead of us.
Just like the leap from GPT-3 to GPT-4, many businesses will be able to do things that were previously impossible. For example, as we just discussed, if you're a chip company, you could say: "Help me design a better chip than our existing solution"; or if you're a biotech company, you could say: "I can't solve this disease, you solve it."
These are no longer out of reach.
These models can absorb all the context you give them, connect to all your tools and systems, think deeply, reason remarkably well, and come back with convincing solutions.
Their robustness is also improving; we can increasingly confidently let them autonomously execute complex tasks.
Frankly, I never thought they would come this fast. But now it really feels… very close.
Host Sarah: Can you give everyone some intuition: what "knowledge" will AI be able to master in the future, and what still sits at the boundary? Here's my mental model of core intellect: I consider myself fairly smart, but I don't have a perfect physics simulator in my brain. So how do we judge how much further AI can evolve?
Sam Altman: A thinking framework I personally like is this: this isn't something we're about to release, but conceptually, what we're pursuing is a model that is very small, but has superhuman reasoning capabilities, runs extremely fast, has a context window of one trillion tokens, and can access every tool you can imagine.
So whether it "knows a specific piece of knowledge" becomes less important.
Using these models as databases is absurd—they are slow, expensive, and inaccurate databases. But what's amazing is: they can reason.
You can "throw in" all the contextual information of a business or a personal life, plug in the necessary physics simulators or other tools, and what you can do becomes truly remarkable.
And we are now moving in this direction.
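As an editorial illustration of that "small reasoning core plus tools" shape, here is a hypothetical sketch in which the knowledge lives in the tools rather than in the model's weights. model_decide stands in for the model's reasoning step; none of this is OpenAI's actual API:

```python
# Hedged sketch of a reasoning core with tool access: the model decides
# which tool to call, and the tools hold the knowledge.
from typing import Callable

TOOLS: dict[str, Callable[[str], str]] = {
    "database":   lambda q: f"rows matching {q!r}",
    "simulator":  lambda q: f"simulation result for {q!r}",
    "web_search": lambda q: f"top results for {q!r}",
}

def model_decide(context: str, question: str) -> tuple[str, str]:
    """Stand-in for the model's reasoning: pick a tool and a query."""
    tool = "database" if "customer" in question else "web_search"
    return tool, question

def answer(context: str, question: str) -> str:
    tool, query = model_decide(context, question)
    observation = TOOLS[tool](query)          # the tool holds the knowledge
    return f"Based on {tool}: {observation}"  # the model reasons over it

print(answer("company context ...", "how many customers churned last month"))
```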
Host Sarah: That's fascinating. I want to ask a more hypothetical question:
If you had 1,000 times the compute you have now (I originally wanted to ask about "infinite compute," but that's too much of a stretch), what would you do with it?
Sam Altman: I think the most "meta" answer (though I'll give a more practical one in a moment) is this: I would ask everyone to commit all their efforts to advancing AI research, develop an even better model, and then ask that stronger model how we should best use the compute.
Host Sarah: Directly ask it to solve your hardest problem.
Sam Altman: I think that's actually the most rational approach.
Host Sarah: This shows you truly believe it can provide answers.
Sam Altman: I think a more practical answer is this: we've already seen many cases, both within ChatGPT and among enterprise users, showing that spending more compute at test time brings real benefits.
For example, if you let the model "think a bit longer," or have it try a complex problem several times, you can get significantly better answers.
Of course, you wouldn't normally do that, and you don't have 1,000 times the compute. But the fact that this capability is becoming feasible suggests one thing we can try:
view the value of compute through a power-law lens: for the hardest, most valuable problems, be willing to pour in far more compute, because that is where breakthroughs may come.
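What Altman describes is commonly called best-of-n sampling: spend extra test-time compute by generating several candidate solutions and keeping the best one under some scoring function. A toy sketch, with generate and score as stand-ins for a model call and a verifier:

```python
# Best-of-n sampling: trade compute for quality by generating n candidates
# and keeping the highest-scoring one.
import random

def generate(problem: str, seed: int) -> str:
    random.seed(seed)  # stand-in for sampling a model with temperature > 0
    return f"candidate {random.randint(0, 999)} for {problem!r}"

def score(solution: str) -> float:
    # Stand-in for a verifier: unit tests, a reward model, or a human check.
    return (hash(solution) % 1000) / 1000.0

def best_of_n(problem: str, n: int = 8) -> str:
    candidates = [generate(problem, seed) for seed in range(n)]
    return max(candidates, key=score)

print(best_of_n("refactor this critical project", n=8))
```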
Host Sarah: Sridhar, would you do the same at Snowflake? You're an expert in data infrastructure, search optimization, and enterprise systems, and now you lead Snowflake. If you were handed a super difficult problem, would you also just throw compute at it?
Sridhar: I think that's certainly a very cool application scenario. But let me answer from a different perspective, stepping outside the tech circle we live in every day:
Are you aware of a research project called Project Arnold? It's somewhat like the DNA sequencing project we undertook more than 20 years ago, but this time the subject is RNA expression. It turns out that RNA controls how proteins work in our bodies.
If we could fully understand how RNA regulates gene expression, we could very likely overcome a large number of diseases, which would be a huge leap for human society as a whole.
So applying language-model-style approaches to these RNA research problems, much as supercomputing was used to crack the human genome back then, would be a very cool direction, if you could really mobilize that much compute.
Host Sarah: That's truly inspiring, and it's indeed one of the biggest problems humanity faces.
Thank you both for the conversation.