On October 27th and 28th, the Symposium for AI Accelerated Science (AIAS 2025) was held in San Francisco, USA. The conference brought together nearly 30 leading scholars and industry figures from around the world, along with hundreds of researchers and students, to discuss how AI can drive scientific discovery.
During the symposium, Chen Tianqiao, founder of Shanda Group and of the Tianqiao and Chrissy Chen Institute (TCCI), delivered a keynote in which he laid out for the first time the concept of "Discoverative Intelligence," arguing that it represents true Artificial General Intelligence (AGI) and proposing a path to achieve it.
▷ Chen Tianqiao delivers keynote speech at AIAS
The following is the full text of Chen Tianqiao's speech, "True Intelligence is Discoverative Intelligence":
Human Evolution Has Never Stopped; It Has Only Changed Form
Since the emergence of Homo sapiens, our bodies have barely changed. Some studies even suggest that the human brain has shrunk in volume since the Paleolithic era. But this does not mean that human evolution has stopped.
Instead, we use our wisdom to turn scientific discoveries and technological inventions into new, external evolutionary organs. We invented weapons to gain claws and fangs, clothes to gain new skin, cars to outrun cheetahs, and airplanes to fly higher than birds. Our average lifespan has stretched from our twenties to nearly eighty, a gap that biologically exists only between different species.
It can be said that humanity has not stopped evolving; on the contrary, by continuously discovering the unknown, we have externalized our functions, expanding our reach in time and space. Scientific discovery and technological invention have become the main engines of human evolution.
“Discoverative Intelligence” is True Artificial General Intelligence
Therefore, AI for Science should not be seen as just one application direction for AI. It defines the relationship between AI and humanity: the value of AI lies not in replacing existing human work by being faster, cheaper, or more efficient. From the perspective of our species' evolution, AI for Science is AI for Human Evolution. Helping humanity discover the unknown is AI's ultimate value to humankind.
Many current models claim to have "discovered" new structures, new molecules, or even new theories. However, most of these "discoveries" remain at the outcome level. They find new samples within known energy functions, statistical patterns, or corpus distributions. This is not scientific discovery but extrapolation within a search space.
True "discovery" means being able to ask questions, not just answer them; to understand principles, not just predict results.
This intelligence, which can actively construct testable theoretical models (testable world models), propose falsifiable hypotheses, and continuously revise its cognitive framework through interaction with the world and self-reflection, is true Artificial General Intelligence. We call it "Discoverative Intelligence."
It differs from other definitions of intelligence:
It transcends imitation, because creation and discovery are the essence of wisdom;
It is falsifiable, because discovery is an observable event, not a vague philosophical notion like "consciousness";
It redefines the meaning of AGI – not "replacing humanity," but "evolving humanity."
Scale Path vs. Structural Path: Two Roads to "Discoverative Intelligence"
Using "Discoverative Intelligence" as the new standard, we re-examine the two main schools of AI development today:
The first is the "Scale Path." It emphasizes that parameters are knowledge, and intelligence is a product of scale. As long as models are large enough, data is abundant enough, and computing power is strong enough, intelligence will naturally emerge. This path has achieved astonishing application results, enabling AI to predict proteins, generate compounds, and even assist in scientific research. This is undoubtedly the most successful engineering path in AI history.
Meanwhile, another path is quietly forming: the "Structural Path." Here, "structure" does not refer to model architecture but to the "cognitive anatomy" of intelligence. The brain is a system that, through neurodynamics and based on memory, causality, and motivation, forms a knowledge system and continuously evolves over time. These mechanisms endow intelligence with continuity, interpretability, and direction. The essence of scientific discovery is to infer the future, and this view holds that only intelligence with temporal structure can remain effective outside the distribution.
The Mirror of the Brain: Temporal Structure Analysis
So, what exactly does the "temporal structure of the brain" refer to?
It does not refer to a specific physical region of the brain, but rather the brain's fundamental "operating paradigm" for processing information.
The current AI "spatial structure" paradigm (scale path) is essentially "instantaneous" and "static," using a large number of spatial parameters to fit "snapshots" of the world. In contrast, the brain's "temporal structure" paradigm is essentially "continuous" and "dynamic," and its purpose is to manage and predict information in the flow of time.
To manage information in the flow of time, a system must possess five core capabilities, which together form a complete closed loop of "temporal structure":
(1) Neurodynamics
To "exist" in time, rather than performing "instantaneous computation," there must be a continuous energy basis. The brain is a continuously operating dynamic energy system; even without input, it can self-organize, self-activate, and self-correct, just as our brain is still working when we are daydreaming. This energy flow makes intelligence truly "alive." A Transformer, however, is a discrete, static computational graph; thinking completely stops after each inference, and the next one starts from scratch, lacking temporal continuity. Today's intelligence is merely computation, not existence. Wisdom must be "alive" because the world is constantly changing, and only systems that continuously update over time possess the ability for scientific discovery.
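The contrast between a continuously evolving system and a stateless forward pass can be made concrete. Below is a minimal sketch of my own (not from the speech; all names and values are illustrative assumptions): a leaky-integrator recurrent network whose internal state keeps changing even with zero external input, unlike a static computational graph that is re-run from scratch each time.

```python
import numpy as np

class ContinuousNet:
    """Illustrative leaky-integrator network: state persists and evolves in time."""

    def __init__(self, n=8, tau=10.0, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.normal(0, 1.0 / np.sqrt(n), (n, n))  # recurrent weights
        self.x = rng.normal(0, 0.1, n)                    # persistent internal state
        self.tau = tau                                    # time constant

    def step(self, inp=None, dt=1.0):
        # Euler integration of dx/dt = (-x + tanh(W x + input)) / tau.
        drive = self.W @ self.x + (0.0 if inp is None else inp)
        self.x = self.x + dt * (-self.x + np.tanh(drive)) / self.tau
        return self.x.copy()

net = ContinuousNet()
before = net.step()  # state evolves with no input at all:
after = net.step()   # the system is "daydreaming," never static
print(np.allclose(before, after))  # False
```

The point of the sketch is the persistent `self.x`: the network's next state always depends on its own history, whereas a stateless forward pass has no history to depend on.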
(2) Long-term Memory System
To "accumulate" past experiences, rather than starting from scratch every time, there must be a plastic storage mechanism. The memory of current large models is "short-term working memory"; once the context is cleared, intelligence is reset. Without long-term memory, there is no true learning. Long-term memory not only allows intelligence to accumulate experience but, more importantly, to learn to selectively forget, enabling efficient learning within limited parameters and the formation of hypotheses and theories.
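Selective forgetting within a fixed budget can be sketched in a few lines. This is my own illustration under assumed names and scores, not a design from the speech: a store that accumulates facts across "sessions" and, when full, forgets the least important one, unlike a context window that is simply cleared.

```python
class LongTermMemory:
    """Illustrative long-term store with selective forgetting under a fixed capacity."""

    def __init__(self, capacity=3):
        self.capacity = capacity
        self.items = {}  # fact -> importance score

    def store(self, fact, importance):
        self.items[fact] = self.items.get(fact, 0.0) + importance
        if len(self.items) > self.capacity:
            # Selective forgetting: drop the least important fact so the
            # store stays efficient within its limited budget.
            weakest = min(self.items, key=self.items.get)
            del self.items[weakest]

    def recall(self, fact):
        return fact in self.items

m = LongTermMemory(capacity=3)
m.store("water boils at 100C", 0.9)
m.store("yesterday's wifi password", 0.2)
m.store("F = ma", 0.8)
m.store("E = mc^2", 0.7)                     # over capacity: weakest is forgotten
print(m.recall("yesterday's wifi password"))  # False: selectively forgotten
print(m.recall("F = ma"))                     # True: retained across sessions
```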
(3) Causal Reasoning Mechanism
To understand the sequence of events in time (i.e., what caused what), a system must deduce principles. Today's large models understand and reproduce known information, including causal relationships, only through linguistic statistics over familiar material, not through mechanistic deduction. They perform well within the training distribution but collapse when the environment changes, because they rely on co-occurrence patterns rather than the structure of the world. The significance of causal reasoning for scientific discovery is precisely that it rebuilds an understanding of the world under unknown conditions; it is the first step out of the distribution and the starting point for world models.
(4) World Model
To predict future trajectories, it must be possible to simulate the world internally. While current AI possesses multimodal perception, it still lacks a unified model and cannot form a coherent "reality projection" internally. The human brain, however, has a unified world representation system that integrates perception, memory, prediction, and self-reflection. It allows us to simulate the world in our minds, pre-enact the future, and continuously run hypothesis testing and causal predictions at a neural level. This is the essence of scientific thinking: running experiments about the future in the brain.
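"Running experiments about the future in the brain" can be sketched as planning by internal simulation. The world, dynamics rule, and action names below are my own illustrative assumptions: the agent pre-enacts each action inside its internal model and commits only to the best imagined outcome, instead of acting by trial and error.

```python
def transition(pos, action):
    # The agent's internal model of a 1-D world: its belief about dynamics.
    return pos + {"left": -1, "right": +1, "stay": 0}[action]

def plan(pos, goal, horizon=5):
    """Pre-enact the future: roll each action forward internally and score it."""
    best = None
    for action in ("left", "right", "stay"):
        p = pos
        for _ in range(horizon):
            p = transition(p, action)   # simulated, not executed
        cost = abs(goal - p)
        if best is None or cost < best[1]:
            best = (action, cost)
    return best[0]

print(plan(pos=0, goal=3))   # "right": chosen by simulation alone
```

The design point is that `transition` is never the real world; it is the agent's coherent "reality projection," and the quality of its plans is bounded by the quality of that internal model.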
(5) Metacognition and Intrinsic Motivation System
Finally, a system is needed to manage the complex cross-temporal processes above. The human brain possesses metacognition: it can be aware of its own uncertainties, adjust reasoning paths, allocate attention, and select strategies. This "thinking about thinking" is the starting point of science and creativity. Today's AI relies mainly on external instructions and lacks self-drive; even the reward functions in reinforcement learning are set from outside. When long-term memory and causal reasoning converge in a world model, how can machine metacognition emerge, so that the urge to explore and curiosity arise spontaneously? This is the crucial step from passive executor to active explorer, and the biggest challenge on the road to living intelligence.
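The contrast with an externally set reward can be made concrete with a tiny sketch of my own (illustrative only; the options and visit-count rule are assumptions, not a proposal from the speech): a learner whose drive is internally generated, probing whichever option it is most uncertain about rather than pursuing any goal given from outside.

```python
counts = {"A": 0, "B": 0, "C": 0}   # how often each option has been examined

def uncertainty(option):
    # Metacognition in miniature: the agent estimates its own ignorance
    # (fewer visits -> higher uncertainty) and directs attention accordingly.
    return 1.0 / (1 + counts[option])

for _ in range(9):
    choice = max(counts, key=uncertainty)  # curiosity: probe the least-known option
    counts[choice] += 1

print(counts)  # {'A': 3, 'B': 3, 'C': 3}: self-driven exploration spreads evenly
```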
These five capabilities are not five parallel directions, but a continuous, active closed loop of intelligence – a system that can self-evolve over time. We call it the "Temporal Structure of the Brain."
Temporal Structure: An Entry Point for Young People
Precisely because the scale path has achieved tremendous success in recent years, we are now for the first time clearly seeing its ceiling: simply accumulating data and computing power cannot break through the barriers to true understanding and discovery. This is the best time for structuralist thinking to return. We stand at this historical turning point. What we need is not more graphics cards, but new theories, new algorithms, and new imagination. This requires interdisciplinary thinking: a fusion of neuroscience, information theory, physics, and cognitive psychology. This is precisely the advantage of young people.
We have computing power. Regardless of which path is chosen, computing power is indispensable. We will invest over a billion dollars to build dedicated computing clusters, providing young scientists with resources for immediate experimentation. This computing power is not for competing in scale, but for exploring structure, verifying memory mechanisms, new causal architectures, or new neurodynamic hypotheses.
We have offices. We have established R&D centers globally, inviting young researchers from various disciplines to brainstorm on whiteboards. Currently, over 200 PhDs from world-renowned universities work in our offices.
We are establishing benchmarks. We plan to launch new benchmarks that comprehensively measure neurodynamics, long-term memory, causal reasoning, world models, and metacognition, making whether an AI can "discover" the standard for measuring AGI, so that scientists everywhere can collaborate and compete around SOTA goals.
We have mechanisms designed specifically for young people. We are establishing PI incubators that open independent research channels to young scientists worldwide. PhD students and postdocs need not wait until graduation: they can receive independent budgets, name their own labs on our platform, and lead colleagues in independently exploring the future structure of temporal intelligence.
We believe: scale is the path of giants, and temporal structure is the opportunity for young people. Giants push boundaries with computing power, while young people redefine intelligence with structure:
That is an intelligence that does not merely repeat existing knowledge, but can propose its own hypotheses, test the world, and revise its own understanding – this is "Discoverative Intelligence."
About Tianqiao and Chrissy Chen Institute
Tianqiao and Chrissy Chen Institute (TCCI) is one of the world's largest private brain science research organizations, founded by Chen Tianqiao and Chrissy Luo with a donation of $1 billion. It focuses on globalization, interdisciplinary collaboration, and young scientists to support brain science research for the benefit of humanity.
The Chen Institute has established the Frontier Laboratory for Applied Neurotechnology and the Frontier Laboratory for AI and Mental Health with Huashan Hospital and Shanghai Mental Health Center. It has also partnered with Caltech to establish the Tianqiao and Chrissy Chen Institute for Neuroscience at Caltech.
The Chen Institute has built an ecosystem supporting research in brain science and AI, with projects spanning Europe, America, Asia, and Oceania, including academic conferences and exchanges, summer school training, the AI for Science Award, research-oriented clinician reward programs, special case communities, Chinese media inquiries, and popular science videos (like Da Yuan Jing).