What Is Consciousness? Life, Intelligence, and Everything from the Perspective of Wolfram's Computational Universe

Stephen Wolfram is a scientific polymath of our time who, a few years ago, launched his 'Wolfram Physics Project', which aims to unify all of physics through computation. Within this project he has also published profound insights into the nature of intelligence and, especially, consciousness: in the universe's hypergraph network, from a computational perspective, any sufficiently complex system possesses intelligence, while consciousness is essentially a kind of step down from intelligence—a 'coherent thread of experience' formed, under computational constraints, by first-person sequential causal integration. So-called physical laws are 'pockets of reducibility' that arise from averaging, because the observer's computational power is limited: statistical mechanics is an average over a collection of particles, relativity an average over space, and quantum mechanics an average over the universe's branching. In this sense, physical laws are themselves constructions of consciousness.

Keywords: Life, Consciousness, Intelligence, Complex Systems, Computation

Stephen Wolfram | Author

Shisanwei | Translator



Article Title: What Is Consciousness? Some New Perspectives from Our Physics Project
Article Link: https://writings.stephenwolfram.com/2021/03/what-is-consciousness-some-new-perspectives-from-our-physics-project/

I. How Do We Talk About Consciousness?

For years, whenever I talked about my discoveries in the computational universe, about computational irreducibility, and about my Principle of Computational Equivalence, people would ask, "So what does that mean for consciousness?" And I would usually say, "That's a tricky subject." And then I'd start talking about the relationship between life, intelligence, and consciousness.

I would mention, "What is the abstract definition of life?" We know the manifestations of life on Earth—RNA, proteins, and other biochemical details. But how do we generalize that? What constitutes "life" in a universal sense? I would argue that it is essentially a manifestation of "computational sophistication" (a more active and precise notion than mere complexity), and according to the Principle of Computational Equivalence, this computational sophistication is ubiquitous. Then I would move on to "intelligence," and argue that it is much the same: we are familiar with "human intelligence," but once we abstract it, it is nothing more than a manifestation of computational sophistication—and that, too, is everywhere. So it is quite reasonable to say that "the weather has its own 'thoughts'"; it is just that the details and "purposes" of those "thoughts" do not overlap with our human experience.

Before this, I had always defaulted to the idea that understanding "consciousness" could follow the same pattern as "life" or "intelligence": if we think in a sufficiently abstract way, then "consciousness" is just a characteristic of computational sophistication, and therefore ubiquitous in the universe. However, based on our research in the Physics Project—especially the thinking prompted by the foundations of quantum mechanics—I began to realize that the core of consciousness is actually quite different. Yes, its implementation certainly requires computational sophistication. But its essence is not about "what can happen"; it is about how we integrate and unify everything that is happening into a certain coherence, thereby allowing us to have "definite ideas" or "clear thoughts."

In other words, rather than consciousness being a "transcendence" of generalized intelligence or general computational sophistication, it is rather a kind of "step down"—that is, a simplified description of the universe based on using only finite (bounded) computational resources, thereby achieving the "overall consistent" state that we perceive. Initially, it was not obvious whether this definition of consciousness based on finite computation could exist self-consistently in our universe. And in fact, its existence seems to be closely related to some deep characteristics in the formal system that underpins physics.

Ultimately, we will find that many things in the universe are in some sense "beyond consciousness." But the core concept of "consciousness" is crucial to how we view and describe the universe—on a very fundamental level, it makes the universe we are in appear to us to have those laws and behaviors.

Discussions about consciousness have never ceased for hundreds of years. What surprises me is that with our exploration of the computational universe, especially with the new results from the Physics Project recently, there seems to be an opportunity to bring completely new perspectives, and these new perspectives are expected to connect the questions about consciousness with more concrete and formal scientific ideas.

Undeniably, there will be very complex conceptual issues surrounding consciousness and its connection to our new physics foundation. Here I can only try to outline some preliminary ideas. Of course, much of what I say may resonate with existing philosophical or other disciplinary views, but at present, I have not had time to delve into these historical threads, so this is merely an exposition of the ideas themselves.

(Figure: the causal hypergraph network of the universe—life, intelligence, and consciousness from the perspective of Wolfram's computational universe)

  • The Principle of Computational Equivalence: almost any system whose behavior is not obviously simple performs computation of equivalent, maximal sophistication—fluids, social systems, ant colonies, and so on are, in this sense, just as computationally complex as the brain.
  • Computational irreducibility: a property of a computational process whereby its outcome cannot be obtained by any substantially simpler computation; to predict it, one must in effect run the process itself (see the sketch after this list).
  • Causal invariance: the order of update events in a system can differ, yet all possible orderings produce equivalent histories, so the overall structure of the causal network remains unchanged.
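
To make "computational irreducibility" concrete, here is a minimal sketch (my own illustration in Python, not code from the Physics Project) using the rule 30 cellular automaton, Wolfram's classic example: as far as is known, there is no shortcut to the pattern's future other than running every step.

```python
# Minimal sketch of computational irreducibility: the rule 30 cellular
# automaton (new cell = left XOR (center OR right)). As far as is known,
# predicting the center column requires actually running the evolution.

def rule30_step(cells):
    n = len(cells)
    return [cells[(i - 1) % n] ^ (cells[i] | cells[(i + 1) % n])
            for i in range(n)]

cells = [0] * 31
cells[15] = 1                      # a single black cell in the middle
for _ in range(15):
    print("".join("#" if c else "." for c in cells))
    cells = rule30_step(cells)
```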

II. The Observer and the Physics Observed

In our model, the universe at the very bottom is full of intricate computation. The lowest level is just a collection of "atoms of space," whose interconnections are constantly updated according to some computational rule. This process is inevitably characterized by "computational irreducibility," meaning that, to "predict what will happen next," one generally has to compute step by step just like the system itself, and cannot find an easy formula or shortcut.

But if that's the case, why doesn't the universe we perceive appear to be filled everywhere with complex, unpredictable chaos? Why do we still see order and regularity in it? Indeed, there is a great deal of computational irreducibility, but we can still find and exploit certain localized pockets of reducibility, allowing us to describe the world more simply—and to apply that description successfully and coherently. A fundamental discovery of our Physics Project is that the two pillars of 20th-century physics—general relativity and quantum mechanics—correspond precisely to two such "pockets of reducibility."

There is an analogy for this—and it turns out that this analogy is an example of the same problem we are discussing about basic computational phenomena. Think about a gas, like air. Ultimately, a gas is countless molecules colliding and flowing, exhibiting various complex and irreducible computational processes. But there is a key fact in statistical mechanics: if we only look at it from a macroscopic perspective, we can use properties like temperature and pressure to describe the gas in a useful way. This suggests a "pocket of reducibility": we can grasp the overall properties of the gas without having to concern ourselves with the irreducible complex motion of the microscopic molecules.

How to understand this phenomenon? One idea can be generalized: imagine our observation of the gas as—we are "merging" the microscopic states of many molecules and only focusing on the overall aggregate properties. The technical term in statistical mechanics is "coarse graining." But in our computational framework, this can be clearly characterized computationally: at the molecular level, irreducible computation is taking place; and when an observer wants to "understand what is happening to the gas," the observer themselves is also performing computation. The key point is: if the observer's computational ability is limited, specific consequences will appear in the observation results. In the case of gas, this consequence directly leads to the manifestation of the second law of thermodynamics.

In the past, people were often puzzled by the origin and validity of the second law of thermodynamics. From the new perspective, we find that the second law is the joint result of underlying computational irreducibility and the observer's limited computational ability. If an observer could precisely track the irreducible motion of every single molecule, they would not observe the second law. The second law relies on the observer obtaining a simplified overall view through some pocket of reducibility; it is with respect to that view that entropy always tends to increase—in other words, that a unidirectional macroscopic evolution is observed.
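
The gas argument can be put in runnable form. The following toy (my construction; the grid size, particle count, and speeds are arbitrary illustrative choices) tracks particles that start bunched in one corner and move reversibly, while a computationally bounded "observer" sees only a coarse 4×4 occupancy grid; the entropy of that coarse view rises toward its maximum even though nothing irreversible happens microscopically.

```python
# Coarse graining in miniature: reversible ballistic particles in a box,
# observed only through a 4x4 occupancy grid. The coarse-grained entropy
# climbs from ~0 toward its maximum of log2(16) = 4 bits.

import math, random

random.seed(0)
N, STEPS, BINS, dt = 400, 200, 4, 0.01
pos = [[random.random() * 0.25, random.random() * 0.25] for _ in range(N)]
vel = [[random.uniform(-1, 1), random.uniform(-1, 1)] for _ in range(N)]

def coarse_entropy(points):
    counts = [0] * (BINS * BINS)
    for x, y in points:
        i = min(int(x * BINS), BINS - 1)
        j = min(int(y * BINS), BINS - 1)
        counts[i * BINS + j] += 1
    probs = [c / len(points) for c in counts if c]
    return -sum(p * math.log2(p) for p in probs)

for t in range(STEPS + 1):
    if t % 50 == 0:
        print(f"t={t:3d}  coarse-grained entropy = {coarse_entropy(pos):.3f} bits")
    for p, v in zip(pos, vel):
        for k in (0, 1):
            p[k] += v[k] * dt
            if not 0.0 <= p[k] <= 1.0:   # reflect off the box walls
                v[k] = -v[k]
                p[k] = min(max(p[k], 0.0), 1.0)
```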

Now let's look at physical space. Traditional views often regard space as a mathematical object that can be "described as a whole." But in our physics model, space is actually composed of a huge number of discrete units; the way these units are connected evolves according to some complex, irreducible computational rule. But just like gas molecules, if an observer wants to obtain a coherent view of the world, and that observer can only use limited computational resources, then a series of constraints will be placed on the properties of space obtained from the observation. And from our research results, these constraints will naturally lead to the set of conclusions of "relativity."

That is to say, for the most fundamental "atoms of space," general relativity is the inevitable product of the "game" between "computational irreducibility" and "the observer's desire to form a coherent perspective on the universe."

We can add a little technical detail. In our underlying theory, every basic element of space evolves according to certain computational rules, and these rules generate irreducible dynamic behavior. But if we only see these irreducible parts, the universe seems fragmented, and every part behaves unpredictably.
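
As a concrete toy (a drastic simplification: real Wolfram-model rules act on hyperedges of a hypergraph, while here only ordinary binary edges and one fixed scan order are used), here is what a rule repeatedly rewriting the connections might look like:

```python
# A toy hypergraph-rewriting step (my sketch, not Wolfram's implementation):
# the rule {{x, y}} -> {{x, y}, {y, z}} with z a fresh "atom of space".
# Repeated application grows the network of relations.

def rewrite_once(edges, next_atom):
    # apply the rule to the first matching edge (one possible update order)
    x, y = edges[0]
    new_edges = edges[1:] + [(x, y), (y, next_atom)]
    return new_edges, next_atom + 1

edges, fresh = [(0, 1)], 2
for _ in range(6):
    edges, fresh = rewrite_once(edges, fresh)
print(edges)   # the evolving "spatial graph" of relations between atoms
```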

However, imagine an observer who can grasp some overall coherence from this chaos and regard it as a definite overall "space." What are the characteristics of this observer? First, since this theory attempts to describe the entire universe, the observer is naturally also included in this whole; the observer is also an "embedded" part composed of these space atoms, evolving according to the same rules.

This immediately leads to a conclusion: from "inside" the system, an observer can only observe certain aspects of the system. For example, suppose that in the entire universe an "update" happens at only one location at any given moment, like a Turing machine, and this single "update point" runs back and forth across the universe, sometimes updating the observer's brain, sometimes updating other parts of the universe. Tracking this scenario carefully, we find that an observer "inside the system" can perceive only the causal relationships between events. They have no way to determine exactly when a certain event happened; they can only know which event preceded which—the causal order of events. This observation is the unavoidable seed of relativity in our model.

But to truly arrive at conclusions equivalent to general relativity, two more ingredients are needed. First, if an observer wants to combine the enormous number of space atoms into "a coherent picture of space," they cannot track each atom separately; they must assign a set of coordinates to the atoms—in the language of relativity, define a "reference frame"—that merges many points of space into a single description. Second, if the observer's computational resources are limited, this constrains the structure of the reference frame: it cannot be detailed enough to capture the irreducible behavior of every space atom.

So suppose an observer successfully defines some reference frame, how can they ensure that that reference frame remains consistent in their description as the universe evolves? The key is that we believe (or require) that the physical world satisfies a property at the underlying level that we call "causal invariance." The rules describe how the relationships between space atoms are updated step by step, and "causal invariance" requires that: no matter what order we choose to apply these update operations, the resulting causal relationship structure is the same.

It is this point that allows different observers to use different reference frames, but still perceive the same consistent dynamics of the universe. Ultimately, we arrive at a very clear conclusion: if there is computational irreducibility at the underlying level, and the rules have causal invariance, then any observer who expects to form a coherent description with limited computational ability will inevitably perceive a universe that satisfies general relativity.
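
Causal invariance is easy to demonstrate in miniature. The string rule "BA" → "AB" (a standard toy example of a causal-invariant rule, not drawn from this article) can be applied at many sites in any order, yet every maximal sequence of updates reaches the same final state:

```python
# Causal invariance in miniature: apply "BA" -> "AB" at randomly chosen
# sites until no site remains. Every update order ends in the same state.

import random

def run(s, seed):
    rng = random.Random(seed)
    while True:
        sites = [i for i in range(len(s) - 1) if s[i:i+2] == "BA"]
        if not sites:
            return s
        i = rng.choice(sites)             # pick one applicable update
        s = s[:i] + "AB" + s[i+2:]

initial = "BABBAABAB"
finals = {run(initial, seed) for seed in range(20)}
print(finals)   # a single final state regardless of order: {'AAAABBBBB'}
```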

However, similar to the second law of thermodynamics mentioned earlier, this conclusion relies on the fact that we indeed have an observer who "forms a coherent perception." If the observer could truly track the irreducible activity of each space atom one by one, they would not see "general relativity"; general relativity only naturally emerges when the observer attempts to simplify and grasp the universe as a whole.

III. The Quantum Observer

So how does quantum mechanics relate to the observer? Surprisingly, quantum mechanics has striking similarities to the "second law of thermodynamics" and "general relativity": it also arises from the observer's desire to form some coherent view of the universe, while the universe's underlying level is full of irreducible complexity.

In classical physics, we usually assume that everything happening in the universe unfolds along a single thread of history; but the key point of quantum mechanics is that: in fact, there is more than one thread of history in the universe—we must consider multiple histories unfolding in parallel simultaneously. And in our model, this is almost inevitable.

In the underlying rules, what is described is how to apply rules locally on the hypergraph of space atoms, thereby updating the connections between them. But usually there are many places in the hypergraph where these rules can be applied at once. If we consider all possible ways of applying them, we get a multiway graph containing all the different possible histories, which sometimes branch and sometimes merge.
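
A minimal multiway system makes this concrete. The sketch below (my illustration; the string rules A → AB and B → A are a standard toy example) enumerates every possible rule application from every state, producing a graph of histories that both branches and merges—note that "AAB" and "ABA" are each reached along more than one path.

```python
# A toy multiway system: from each state, apply the rules A -> AB and
# B -> A at every possible site. Histories branch, and also merge when
# different paths reach the same string.

RULES = [("A", "AB"), ("B", "A")]

def successors(s):
    out = set()
    for lhs, rhs in RULES:
        i = s.find(lhs)
        while i != -1:
            out.add(s[:i] + rhs + s[i + len(lhs):])
            i = s.find(lhs, i + 1)
    return out

frontier, edges, seen = {"A"}, [], {"A"}
for step in range(4):                      # four multiway steps
    nxt = set()
    for s in frontier:
        for t in successors(s):
            edges.append((s, t))
            nxt.add(t)
    frontier = nxt - seen
    seen |= nxt
print(len(seen), "states,", len(edges), "multiway edges")
for e in edges[:8]:
    print(e)
```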

So how would an observer perceive these multiple parallel evolutions? The key is: the observer themselves is also part of this multiway system! That is to say, if the universe is branching, then the observer is also branching. Essentially, the question becomes: "How does a brain that is itself constantly branching perceive a universe that is constantly branching?"

Just as an observer far larger than individual molecules can still aggregate and abstract macroscopic properties of a gas, something similar happens in quantum mechanics—except this time what is merged is not molecules in physical space, but different historical branches in "branchial space."

More specifically, the states of all possible histories in the multiway graph, if sliced at a certain moment, can yield a set of nodes corresponding to all possible states of the system at that moment. And the structure of the multiway graph itself defines the relationships between these states (for example, through whether they have common ancestors, etc.). If we expand these states on a large scale, we can consider them distributed in some branchial space.
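
Following that description literally, here is a sketch of one branchial slice (my own construction, reusing the same toy rules as in the multiway sketch above): states on the slice are joined whenever they descend from a common immediate ancestor.

```python
# Branchial-graph sketch: join two states reached at multiway step t
# whenever they share an immediate common ancestor at step t-1.

from itertools import combinations

RULES = [("A", "AB"), ("B", "A")]

def successors(s):
    out = set()
    for lhs, rhs in RULES:
        i = s.find(lhs)
        while i != -1:
            out.add(s[:i] + rhs + s[i + len(lhs):])
            i = s.find(lhs, i + 1)
    return out

layers = [{"A"}]
for _ in range(3):
    layers.append({t for s in layers[-1] for t in successors(s)})

branchial = set()
for parent in layers[-2]:
    kids = sorted(successors(parent) & layers[-1])
    branchial.update(combinations(kids, 2))
print("slice states:   ", sorted(layers[-1]))
print("branchial edges:", sorted(branchial))
```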

In the language of quantum mechanics, the geometric structure of branchial space actually describes the distribution of entanglement between different quantum states, and the coordinates in branchial space are similar to the phase of quantum amplitudes. If we consider the evolution of a quantum system: starting from a set of initial quantum states, we look at their trajectories in branchial space along the various history lines of the multiway graph.

But how would a "quantum observer" understand this process? Even if the observer was not initially branched, with the passage of time, they will inevitably spread out in branchial space, sampling a large area of "historical branches" simultaneously. If they try to track each branch independently one by one, they will face irreducible complex computation; there will be no coherent view at all. At this point, a new "observation method" needs to be introduced: merging those branches that are computationally nearby, thereby forming an overall, coherent description for themselves. Similar to the reference frame in general relativity, here we can call it a "quantum frame," which is a coherent expression of branchial space.

So, what would this coherent expression look like? The answer is precisely the set of quantum mechanics descriptions that physicists have developed over the past century. In other words, just as general relativity is our large-scale reduced description of physical space, quantum mechanics is our large-scale reduced description of branchial space—and these reductions all come from the need for an "embedded, computationally limited observer" to obtain a coherent picture.

So, did the observer "create" quantum mechanics? In a sense, yes: the multiway graph is full of irreducible complex processes. But if an observer wants to give a coherent description of the universe, they must merge or "coarse-grain" those branches. As soon as they do this, their description will strictly follow quantum mechanics. Indeed, other things are still happening in the universe—it's just that these kinds of things are not included in the scope of this coherent description.

Thus, if an observer selects a "quantum frame" and merges certain historical branches, they can obtain a coherent picture of the world. What would another observer see if they chose a different quantum frame? It has always been difficult to explain within the traditional formalism of quantum mechanics why different observers—despite using different observation methods—still agree on the fundamental conclusions about the universe's operation.

In our model, the answer is immediate: just as in the case of spacetime, as long as the underlying rules have causal invariance, then no matter what quantum frame one chooses, the observations will necessarily be consistent at a fundamental level. In other words, causal invariance ensures that quantum mechanics maintains overall consistency even when different observers make different measurements on the system.

There are many technical details here. The traditional formalism of quantum mechanics has two major parts: the evolution of quantum states over time (the Schrödinger equation or its equivalents), and the "measurement" process. In our model there is a very elegant correspondence: motion in space is structurally similar to the evolution of quantum amplitudes, and both can be understood as geodesics being deflected by the presence of momentum and energy. In the classical case this deflection (i.e., gravity) occurs in physical space; in the quantum case the corresponding deflection (i.e., the phase change in the Feynman path integral) occurs in branchial space. In other words, the Feynman path integral is essentially the equivalent of the Einstein field equations in branchial space.
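
For orientation, the two textbook objects being placed in correspondence here are the Einstein field equations and the Feynman path integral (standard formulas, quoted for reference rather than derived in this article):

```latex
% Einstein field equations: curvature (geodesic deflection) in physical
% space is sourced by energy-momentum.
R_{\mu\nu} - \tfrac{1}{2} R\, g_{\mu\nu} = 8\pi G\, T_{\mu\nu}

% Feynman path integral: the amplitude sums a phase over all histories,
% the analogue of deflection, but in branchial space.
\langle x_f \mid x_i \rangle = \int \mathcal{D}x(t)\; e^{\, i S[x]/\hbar}
```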

What about quantum measurement? Performing a measurement means "reducing" the quantum superposition spanning many branches to a single branch, forming a coherent observation result. A quantum frame provides precisely such a merging rule, specifying which historical branches are to be merged. It is not a "physical entity" in itself, but a way of describing the world.

We can understand it in another form: imagine that in order to study whether an observer can obtain a coherent description of the world, we select a quantum frame and merge branches according to its requirements. If we compare this multiway graph to a proposition inference process in a logical or formal system, then this "merging" operation is like performing some kind of "completion" of the inference branches, and each merger is equivalent to performing a measurement step. By doing all the necessary "completions," we can obtain the "Completion Interpretation" of quantum mechanics proposed by Jonathan Gorard.

As long as the underlying rules indeed satisfy causal invariance, we actually do not need to "really" perform these mergers at the physical level, because different historical branches will ultimately yield the same observable results for an observer inside the system. But if we want to take a "snapshot" of what the system is doing at this moment, we can artificially choose a quantum frame and perform the corresponding mergers. When observing from "outside" the entire system, these mergers do not seem to change the system; however, from the "subjective perspective of the observer," these mergers are quite "real," because they are the necessary way for the observer to understand the system.

In other words, in order for a brain that is "itself evolving multiway" to obtain a coherent picture of the world, it must make measurements and mergers under a certain quantum frame, cutting out a reducible slice from the irreducible underlying dynamics. And the result is—as we know—that it must obey quantum mechanics.

Thus, it is clear that for an observer with limited computational ability to form a coherent description of the universe, although the universe contains a lot of irreducibility, what they can ultimately perceive must satisfy the two core physical laws: general relativity and quantum mechanics.

However, it must also be pointed out that it is not obvious a priori that an observer who can obtain coherent perception of the universe exists at all. But now we know that if such an observer does exist, they will inevitably "extract" these two fundamental physical laws. If no observer capable of forming coherent perception existed, the universe would not present the familiar, summarizable laws we know, and there would be no "physics"—nor science in general.

IV. So, What Is "Consciousness"?

What is so special about how we humans experience the world? In a sense, the mere fact that "we experience anything at all" is already unique. The external world evolves according to its own rules, containing a large amount of irreducible computation. But with our limited brainpower (or mental resources), we can piece together a coherent model of the world, and even generate "definite ideas about the universe" in our brains. Similarly, we can not only form coherent thoughts about the external world, but also form coherent thoughts about the computation happening within our own brains or minds.

However, what does "forming coherent thoughts" actually mean? We have long known that computation is universal at all levels, which is the insight given by the Principle of Computational Equivalence. But "forming coherent thoughts" seems to mean that—large-scale parallel computational activities are somehow "compressed" or "integrated" into an identifiable, linear "stream of thought."

Initially, this is not obvious from a biological perspective: we have billions of neurons working simultaneously, so why does the phenomenon of "I am thinking a clear thought" occur? But in fact, our brain has quite a special neural structure—a result of biological evolution—which strives to integrate various sensory information and internal processing, and finally form a dominant "thread of attention." Medically, the typical criterion for "loss of consciousness" is: although neurons are still active, the function of information integration and sequential processing is lost, and the person no longer exhibits normal "conscious" activity.

These are certainly biological details, but they point to a fundamental characteristic of consciousness: consciousness is not all the complex computations that the brain can perform; rather, it refers to the brain's unique mechanism that allows people to form a unified, linearized experience.

And now, our research on the Physics Project has made us realize that this practice of "obtaining linearized experience" has far-reaching implications beyond the scope of the brain and biology. Specifically, it defines "physics"—or rather, it defines the laws of "physics as we know it."

We often say that consciousness, like intelligence, is something we only have a clear understanding of in the specific case of "humans." But just as we can generalize "intelligence" to universal "computational sophistication," it now appears that we may also be able to generalize "consciousness" to a broader concept of "integrating and presenting computation in some way linearly."

On an operational level, there is now a potentially very intuitive perspective, provided we first understand the updated concept of "time." In traditional fundamental physics, time is often viewed as a dimension similar to space; but in our model, time and space have vastly different statuses: space is a hypergraph composed of what we call "space atoms" and their connections; while time corresponds to the "irreducible computational process that repeatedly updates these connections."

Yes, we find that these update events have clear causal relationships with each other (ultimately depicted by the multiway causal graph), but many events can be considered to be "happening in parallel," either in different regions of space or in different historical branches. And such parallelism is conceptually in conflict with "linearizing everything experienced."

However, as discussed earlier, the physical formalism (whether it's the reference frame in relativity or quantum mechanics) precisely merges all these parallel elements, making them appear linear in a time sequence.

In other words, we arrange all events in a single-threaded update order, like an ordinary Turing machine, rather than having every cell update in parallel at once as in a cellular automaton, or unfolding in parallel like a multiway (non-deterministic) Turing machine. The universe itself may well be evolving in parallel, but our "analysis" and "experience" linearize it. As we have seen, it is not guaranteed that doing so remains consistent; but given reference frames, quantum frames, and causal invariance working together, no contradictory results arise, and we recover overall behavior that satisfies general relativity and quantum mechanics at the macroscopic level.
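
In computational terms, "linearizing parallel events" is choosing a total order consistent with a causal partial order—a topological sort. A small sketch (my own toy; the four-event causal graph is invented for illustration): each valid order is one self-consistent "single-threaded experience," and all of them respect the same causal relations.

```python
# Linearizing a causal DAG: every topological order is one consistent
# single-threaded "experience"; different orders are different frames.

import itertools

# event -> list of events it causally depends on (illustrative)
events = {"a": [], "b": ["a"], "c": ["a"], "d": ["b", "c"]}

def topo_orders(deps):
    for perm in itertools.permutations(deps):
        pos = {e: i for i, e in enumerate(perm)}
        if all(pos[p] < pos[e] for e in deps for p in deps[e]):
            yield perm

for order in topo_orders(events):
    print(order)      # ('a','b','c','d') and ('a','c','b','d')
```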

Of course, we don't really "linearize" everything. Look at the simulation of brain work by artificial neural networks: sensory processing and other functions clearly have a large amount of parallel operations; but the closer we get to what we would call "thinking," the closer the processing is to sequential execution. Even our richest form of communication—language—is clearly linear and sequential.

When talking about consciousness, the so-called "self-consciousness" or "reflection on one's own thought processes" is often mentioned. Without a computational perspective, this seems mysterious; but from the perspective of "universal computation," it is almost inevitable. One of the greatest characteristics of a universal computer is that it can simulate any computational system—including itself. This is just like how we can write an interpreter for Wolfram Language in Wolfram Language itself.

The Principle of Computational Equivalence tells us that universal computation is ubiquitous, and the brain and mind, and even the entire universe, have universal computational capabilities. Yes, doing self-emulation often takes more time than the original system, but the important thing is that it can indeed do it.
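
A minimal self-emulation toy (mine, not a serious interpreter): Python code that re-executes its own source a few levels deep, each level simulating the next. It illustrates the point that a universal system can run a copy of itself, at the cost of some overhead per level.

```python
# A toy of self-simulation: the string `src` is a program that, when run,
# runs `src` again one level deeper. Each level is a (slower) simulation
# of the same system.

src = '''
print("simulating at depth", depth)
if depth < 3:
    exec(src, {"src": src, "depth": depth + 1})
'''
exec(src, {"src": src, "depth": 1})   # prints depths 1, 2, 3
```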

Imagine this: when the "mind" is thinking about the world, what it is doing is building a model of the world (and tending to model it sequentially). And when the mind thinks about itself, it will build another model. Our daily experience often begins with modeling the "external world," but then we continue to model on top of these models, layer by layer, and in the end, it may not be clear whether the information is "from within" or "from without."

Connecting "linearization" and "consciousness" can also help us understand how different individuals can have different "experiences." Essentially, this is like using different reference frames for spacetime, or different quantum frames for branchial space—as long as the system has causal invariance, different observers will ultimately form a consistent "objective reality." Without constant interaction in the universe, experiences would not align. But through these interactions, they will gradually move towards consistency, and it is this process that allows us to distill certain "physical laws," namely general relativity and quantum mechanics.

V. Other Consciousnesses

The discussion about consciousness presented earlier is based on a "time-first" perspective: packaging the various parallel dynamics scattered across space—and branchial space—into a sequential experience. And clearly, our human body and sensory apparatus are particularly suited to this: on the scales set by the physical constants, we occupy an advantageous intermediate position for a "sequential experience"—neither so small that quantum effects scatter us, nor so large that general-relativistic effects such as gravitational collapse set in.

For example: Why can we "ignore" the influence of space and talk about events happening at different locations as if they are happening simultaneously, at the same moment? The fundamental reason is that the speed of light is very large relative to our scale. The visual range we usually focus on might be tens of meters away, but light takes only about a hundred nanoseconds to travel there, while our neurons take milliseconds to process visual information. In other words, for the brain's experience, we can treat things happening around us as essentially "simultaneous" and merge them into a single "moment." This is why the world picture in our brain presents itself as a linear "present-future" sequence.
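
The arithmetic behind this claim is worth writing out (the 30-meter visual range and 1-millisecond neural timescale are rough illustrative numbers, not measurements from the article):

```python
# Why nearby events can be treated as "simultaneous": light crosses our
# visual surroundings ~10,000x faster than neurons process the signal.

C = 3.0e8            # speed of light, m/s
distance = 30.0      # rough visual range, meters (illustrative)
light_time = distance / C          # ~1e-7 s, i.e. ~100 ns
neuron_time = 1e-3                 # rough neural processing time, seconds
print(f"light: {light_time * 1e9:.0f} ns,",
      f"neurons: {neuron_time * 1e3:.0f} ms,",
      f"ratio ~ {neuron_time / light_time:.0f}x")
```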

But what if we were as big as a planet? If the brain still processed information on a millisecond level, signals distributed across various locations would take longer to gather, making it difficult to piece together a single "what happened—what will happen next" sequence for our experience.

Even as humans, similar situations exist: for example, using smell to perceive the world (dogs rely more on smell). The propagation speed of odor molecules is much slower than the speed of light, making it difficult to form that kind of "instantaneous integration" of space. Relying solely on smell would produce a completely different model of the world, and perhaps even require defining very complex "gauge fields" or similar concepts for those odor flow paths to describe their various meanders and twists.

If we further imagine a brain larger than a planet, the delay issue becomes even more severe. It would be almost impossible for it to complete the integration of the entire brain state within milliseconds. Perhaps "from the outside," it would be impossible to form a consistent experience. But "from the inside," perhaps such a super-brain would simply assume it has a unified experience, thereby defining space and time completely differently from us. For such a system to operate self-consistently, it would likely require significant modifications to our existing physical concepts.

If we go to even more extremes, some areas within the brain might even be isolated by "event horizons" (like inside a black hole), making it even harder to maintain a single experience. Perhaps one can only maintain some semblance of experience by "freezing" certain experiences to avoid fragmentation at the horizon.

What if we were smaller? For example, if the brain only contained a few hundred "space atoms." Perhaps irreducibility would dominate everything, and we would never be able to obtain overall laws or predictability of the universe, let alone develop a "linearized consciousness."

What about our scale in branchial space? Our perception of the real world as "definite" means that, in the region of branchial space we inhabit, we can merge numerous historical branches into a specific state at a given instant. But how far does this extend across the rest of the universe? In fact, just as there is a "speed of light," our model contains a quantity like a "maximum entanglement speed." It is large enough that, on the scale of branchial space relevant to human daily life, we can confidently assign everything to the same time slice, forming that single thread of history.

Thus, we see that "human scale and characteristics" are precisely well-suited for forming our understanding of consciousness. Are there other possibilities for consciousness states?

This is a rather tricky question. Having the ability to form a coherent, linear experience like ours from within is certainly possible, but it depends on specific physiological and physical conditions. In other words, if we imagine a consciousness completely different from ours, it's not just about different sensations and ways of thinking, but the fundamental description of the physical world might be completely different.

What comes to mind more readily is the consciousness of other animals and organisms. But confirming how they think and experience the world is not simple. Perhaps in the future we can explore this through advanced means of interacting with animals (such as some kind of VR game); for now, we still know very little. We can expect their consciousness to differ from ours, starting with different sensory inputs: some perceive the world through smell, electrical signals, temperature, or pressure; some are "group minds" like ant or bee colonies, with very slow information integration; some, like plants, are rooted in the soil—if they truly have some integrated experience, what would it even be like?—and then there are viruses, for which any "sequential experience" could only be discussed at the level of an epidemic wave...

Stepping back, even within humans themselves, the body has more than just the brain as a "sensory integration system." For example, the immune system also constantly performs some kind of "feedback" to the external world and itself, although the inputs and outputs are completely different from the brain. We certainly find it strange to attribute consciousness to the immune system, but imagine if we truly had a way to enter the "perspective" of the immune system, it might reveal another kind of "internal physics"!

Expanding further, we can also talk about all life on Earth, or the geological history of the Earth itself, or the weather, etc. You could say that the fluid motion of the weather also contains rich computational complexity; but it lacks the mechanism of "integrating and linearizing," so it doesn't appear to be evolving as a single stream of thought.

Returning to software and AI systems, we might instinctively think that for them to "have consciousness," we must make breakthroughs at a higher level, introducing some "mysterious spark like humans." But I think perhaps the opposite is true: if we want a system to exploit the richness of the "computational universe" as much as possible, it's best to let it operate with a lot of parallelism, and even multiway, like the underlying physics. However, if we want to obtain something similar to "our consciousness," we need to step back and specifically force the system to converge into an "integrated and linearized" form. And a large part of why Wolfram Language can produce readable results in the computational universe is precisely because it is designed according to human thought patterns.

Similarly, if we ask, "What would the 'internal physics' of such a system be?" Because Wolfram Language was originally modeled after human thought, this "internal physics" is largely similar to the physics we are familiar with.

Since our physics project turns everything into a pure formal system, this leads us to think: can we also talk about consciousness within the framework of mathematics? For example, imagine a formal mathematical system based solely on axioms, which constantly generates a network of theorems, which is equivalent to unfolding in "metamathematical space." There can actually be many interesting analogies between physics and metamathematics: in metamathematics, time is still time, except here it refers to the "process of continuously proving new theorems." And corresponding to our spatial hypergraph is the "graph formed by all theorems proven up to a certain moment." In addition, a multiway graph can be constructed where different inference paths merge into theorems, corresponding to the multi-history merging of quantum mechanics.

What does the reference frame correspond to here? Just as in physics, the reference frame corresponds to the observer, except this observer is facing metamathematical space, not physical space. Different observers can explore theorems in different orders, and causal invariance guarantees that they will all see the same "mathematical truth." Here too, there is a "mathematical influence propagation speed" similar to the speed of light, and a "relativistic" conclusion: mathematics itself is consistent, it's just that we can explore it in different orders.

So, what is "mathematical consciousness"? According to our previous line of thought, if we want a similar "reference frame," we need to "linearize" metamathematical space as well, which is equivalent to having an "embedded mathematician's brain" that can only process a part of the theorems at a time, thereby forming a "sequentialized" mathematical development process. For example, in mathematics currently accessible to humans, any so-called "human-scale" mathematical theorem might require around 10^5 basic inference units; and in the entire history of mathematics up to now, probably only around 3×10^6 theorems have been proven. Therefore, "mathematical consciousness" can be regarded as the process of "integrating a linearized inference chain within this limited metamathematical space." In this way, we can also perform analyses similar to reference frames at this abstract level.

Going up another level, there is an even larger perspective: we can not only choose to apply the same rule at different locations in the hypergraph, but also choose from "all possible rules." This yields the "rulial multiway graph," where paths represent evolving the universe with different rules. And in this higher-level multiway graph, causal invariance always holds, meaning that there is a fundamental equivalence between different rules (i.e., different physical theories).

This is another formulation of the "Principle of Computational Equivalence": no matter which universal rule you use to "construct the universe," they can simulate each other, it's just a different "rule reference frame."

What role does consciousness play? Different "rule reference frames" may correspond to completely different physical descriptions or empirical worlds. One observer might perceive the world as our set of "hypergraph + space," while another observer might feel they are in a world of a single-headed Turing machine. Their physical pictures are completely incompatible.

But can they both find their "linearized" paths? Theoretically, perhaps there are some reducible areas within each rule reference frame. But whether this "alien intelligence" is precisely sampling that area is unknown. In other words, to have "useful physical laws," some reducibility is needed to bridge the gap—it doesn't necessarily have to be obtained through a "sequential experience" like ours. But we currently rely on it to obtain laws and perform scientific induction.

Understanding this "higher-order alienness" is very difficult: they might use a completely different rule reference frame from us, and even a completely different "reducibility extraction method" from ours. For us, identifying where they have found reducibility would be a great challenge, because we do not possess the same "sequential consciousness." In other words, we don't even have appropriate means to treat them as an observable object. If there truly exists alien intelligence so different from us, we would be almost unable to understand each other.

VI. Current State

Discussions about consciousness have been difficult and prolonged for hundreds of years. But with the new insights brought by our physics project, perhaps we can re-examine it in a way that is closer to formal science. Although I have not done rigorous mathematical modeling here, I believe that we can fully translate the ideas mentioned here into more formal models and then explore the various connections of the consciousness problem in the field of physics, especially in quantum mechanics.

How complex the physical details ultimately need to be for these models is unclear; perhaps in a simple multiway Turing machine, one can study "how a multiway brain perceives a multiway world"; perhaps a combinator system can provide some insights into how "different versions of physics" are formed.

The key is: we may now truly be able to transform the problem of consciousness into more specific mathematical, computational, or logical problems, and study them in a rigorous and actionable way.

Ultimately, for these discussions to be truly grounded, they need to be linked to actionable application scenarios, otherwise, they may devolve into arguments about concepts or terms. For example, in the field of distributed computing, we have always wanted to find a more intuitive way to describe the problem of "multi-point parallelism." And from the perspective of consciousness, we seem to be inherently awkward with distributed computing, precisely because our brains are "sequentially processing." Perhaps borrowing methods from physics, introducing a "reference frame" style abstraction to distributed computing can help us better design and understand distributed systems.

Similarly, the reason why multiway or non-deterministic computation is difficult to grasp intuitively is probably also because our "consciousness structure" is biased towards linearity. Therefore, perhaps we also need to draw on the ideas explored in quantum mechanics and establish a measurement-like mechanism for multiway computation to make it manageable for us.

A few years ago, at an AI ethics conference, I asked, "Under what circumstances would we grant AI 'rights' and 'responsibilities'?" A philosopher immediately replied, "When they have consciousness!" But what counts as AI having consciousness? If we adopt the view discussed above, the answer lies in: not only having complex computation, but also being able to integrate and form a coherent "stream of experience." It is conceivable that "killing" a system with a single stream of experience feels more like destroying a unique existence than destroying a system that is "parallel everywhere with no fixed main thread." Because a single-threaded stream means it has only one "self-evolution path," which often makes us more inclined to grant it the status of a "moral subject."

Similarly, when we talk about "explainable AI," we often want not just a list of all the computational steps the AI performed—which might be too complex or irreducible—we also want a narrative like a "story" that allows us to understand it in a linearizable way. This is essentially "translating" the AI's computation process into a representation compatible with our consciousness structure.

The Principle of Computational Equivalence is often used to argue that in the universe, we humans do not occupy a "special, transcendent" position; life and intelligence are merely instances of large-scale computational complexity. But at the level of consciousness, the same principle holds true. Consciousness, in general, may not be special either; theoretically, there are countless ways to "retrieve" reducibility. What we cherish is only the specific kind of consciousness of our own species, which is the specific realization of "serializing computation."
