Predictive Processing and the Epistemological Hypothesis: Solving the Hard Problem of Consciousness by Simulating a Brain Facing It

Abstract

When we say that a theory explains our subjective experience, we simply mean that if the theory is correct—if, for example, our brain is as the theory stipulates—then our subjective experience is indeed as we experience it. Scientists usually express this idea using the concepts of “observation” and “prediction”: a theory explains our subjective experience if and only if it predicts first-person observations.

Some thought experiments suggest that current theories are unable to make such predictions. For example, if we had never seen colors in our life and therefore did not know what it is like to see blue, these theories would be unable to let us deduce (i.e., predict) what it is like to see blue. This well-known issue is often referred to as the “Hard Problem of Consciousness” (HPC). Here, we examine this problem through the lens of the epistemological hypothesis.

Under the epistemological hypothesis, the HPC no longer reflects our theories’ inability to predict first-person observations; it reflects our inability to deduce their implications for first-person observations from these theories. The HPC thus becomes an epistemological problem that can be formulated as follows: if we cannot deduce what a given theory implies for first-person observations, how can we know that this theory explains first-person observations?

In this paper, we outline an experimental approach to test this epistemological hypothesis and solve the hard problem of consciousness. Notably, this approach allows us to experimentally test any identity hypothesis and to solve the meta-problem of consciousness.

Subsequently, we highlight the striking agreement between this approach and the predictive processing theoretical framework. We show how a predictive processing-based theory of consciousness entails the epistemological hypothesis—the theory predicts our inability to deduce its own implications for first-person observations.

Finally, this work suggests that the predictive processing theoretical framework may already possess the resources required to simulate a brain facing the HPC.

Keywords: hard problem of consciousness, epistemological hypothesis, type-B materialism, meta-problem of consciousness, identity hypothesis, predictive processing, precision, quality space

1 Introduction

There is no consensus on the problem of consciousness (i.e., the problem of phenomenal consciousness, first-person observations, phenomenology, mental life, the inner world, lived experience, manifest reality, conscious experience, what-it-is-like-ness [1], qualia [2], or subjective experience). When we say that a theory explains our subjective experience, we simply mean that if the theory is correct—if, for example, our brain is as the theory describes it—then our subjective experience is indeed as we “see” it “from within” (i.e., as we actually experience it).

In other words, we mean that the theory necessarily implies that our subjective experience is such as it is. Scientists usually express this idea using the concepts of “observation” and “prediction”: a theory explains our subjective experience if and only if it predicts first-person observations.

Some thought experiments suggest that current theories are unable to make such predictions. For example, if we had never seen colors in our life and therefore did not know what it is like to see blue, these theories would be unable to let us deduce (i.e., predict) what it is like to see blue. This well-known issue is often referred to as the Hard Problem of Consciousness (HPC) [3–5] (see also the concept of the “explanatory gap” in [6]).

When faced with the HPC, a common reaction is to conclude that current scientific theories are unable to explain our subjective experience. There is, however, another way to see the HPC. Many neuroscientists and philosophers would likely agree with the following two statements:

1. What is happening in our brain at a given moment necessarily implies that our subjective experience is such as we “see” it at that moment (e.g., the subjective experience of seeing blue).

2. If we had never seen colors in our life, even if we knew everything about what is happening in our brain when we see blue, we would be unable to deduce (i.e., predict) what it is like to see blue from it.

These two statements taken together necessarily imply that we are unable to deduce what a given physical or functional state of the brain implies for first-person observations. In other words, if we accept these two statements, then we have to conclude that the HPC does not reflect our theories’ inability to predict first-person observations; it reflects our inability to deduce their implications for first-person observations from these theories.

This is what we call the epistemological hypothesis (and what philosophers would rather call type-B materialism [7], or at least certain versions of type-B materialism). The basic idea behind this hypothesis is that the HPC is an epistemological problem, not an explanatory one. If the problem of consciousness is “hard”, it is not because it is hard to develop a satisfactory theory of consciousness, but because it is hard for us to show that this theory is satisfactory. Indeed, if we cannot deduce what a given theory implies for first-person observations, how can we know that this theory explains first-person observations?

This understanding of the HPC thus makes plausible the hypothesis that we may already possess a relatively satisfactory theory of consciousness, but that we simply cannot show it. As long as we lack either a theory that explains consciousness or sufficient reasons to believe that this theory explains consciousness, no consensus on the problem of consciousness can be reached.

This paper aims to answer two questions. First, how can we experimentally test the epistemological hypothesis? Second, how can we solve the HPC under the framework of the epistemological hypothesis?

Our reasoning is structured into four main steps:

1. Using two well-known thought experiments, we highlight current theories’ apparent inability to explain first-person observations. Importantly, these thought experiments are formulated in a specific way that will play a key role in our reasoning;

2. We point out that these thought experiments can, in fact, be interpreted in two ways:

(1) Current theories are unable to explain first-person observations;

(2) We are unable to deduce their implications for first-person observations from these theories; keep in mind that this second interpretation raises an epistemological problem;

3. We present an approach that enables us both to experimentally test the epistemological hypothesis and to solve the HPC as an epistemological problem. Notably, this approach allows us to experimentally validate any identity hypothesis;

4. Finally, we highlight the striking agreement between this approach and the predictive processing theoretical framework [8].

2 The Hard Problem of Consciousness

2.1 The neuroscience of consciousness

Neuroscientists test their theories using neurophysiological and behavioral measurements. These measurements are usually referred to as third-person observations (i.e., seen “from without”). If our ultimate goal was merely to provide an exhaustive explanation of these third-person observations, then we would not have to be concerned with the issue discussed in this paper. This kind of goal falls under the category of the “easy problem of consciousness”¹ [3–5].

The problem that interests us here—i.e., the Hard Problem of Consciousness (HPC)—only arises when we deal with what are referred to as first-person observations (i.e., experiences seen “from within”).

Most neuroscientists think that if a person has a certain subjective experience, and if this experience is such as it is, this is only because their brain has certain properties and/or functions in a certain way [9]. This hypothesis has led some neuroscientists to explicitly make the explanation of our subjective experience a research objective, thus giving rise to a new field of research: the neuroscience of consciousness [10].

In practice, the theories that emerge from this new field of research are still tested using neurophysiological and behavioral measurements. However, what is really critical in this case is not the behavioral measurements themselves, but what we can deduce from these behavioral measurements².

For example, when someone tells us “I see blue”, we are faced with a simple verbal behavior—a third-person observation. What is critical is that we can reasonably deduce from this verbal behavior that this person is currently having the subjective experience of seeing blue.

Therefore, if we want to evaluate whether a certain theory—say, “Theory A”—is able to explain first-person observations, we simply have to compare this deduced result with the theory’s predictions:

1. According to Theory A, this person should be having the subjective experience of seeing blue;

2. I deduce from this person’s behavior that they are indeed currently having the subjective experience of seeing blue;

3. Thus, Theory A’s prediction is correct.

Evidently, the purpose of this kind of approach is to evaluate what we could for now call the fit between first-person observations and the theory’s predictions (i.e., to what extent one matches the other). The core question we are trying to answer is: does the theory imply that first-person observations are indeed such as they are?

It is when we ask this question that the “Hard Problem of Consciousness” (HPC) really comes to the fore³.

Under the materialist hypothesis, a theory of the brain should be sufficient to explain consciousness. Let us give an example to illustrate how a theory of the brain alone can allow us to anticipate—i.e., predict—first-person observations.

Imagine an exceptional neuroscientist who claims to us that “Theory A”—a theory that describes how our brain functions—is able to explain our subjective experience. Subsequently, he invites us to participate in an experiment. All we have to do is enter a room called the “experimental room”.

Before the experiment begins, this neuroscientist gives us two pieces of information:

1. According to Theory A, our brain will be induced into state X when we are in the experimental room;

2. When our brain is in state X, we see blue.

Based on these two pieces of information, we form the following prediction: “I will see blue when I enter the experimental room.”

We perform the experiment and discover that the prediction is correct: we do see blue when we are in the experimental room.

This example provides a preliminary understanding of how a simple theory of the brain can allow us to anticipate first-person observations (at least in principle). As we have seen, this predictive power stems from the following two statements:

1. Theory A allows us to predict the future state of our brain;

2. A certain state of the brain is always associated with the same subjective experience (e.g., state X is always associated with the subjective experience of seeing blue)⁴.
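The way these two statements combine can be made explicit with a toy sketch. All names here are hypothetical stand-ins, not part of any actual theory: a function playing the role of Theory A’s physical prediction (statement 1) is composed with a fixed state-to-experience mapping (statement 2) to yield a first-person prediction.

```python
# Statement 2 (hypothetical mapping): each brain state is always
# associated with the same subjective experience.
state_to_experience = {"X": "seeing blue", "Y": "seeing red"}

def predict_brain_state(condition):
    """Stand-in for Theory A's physical prediction (statement 1)."""
    return "X" if condition == "experimental room" else "Y"

def predict_first_person(condition):
    # Compose the two statements: physical prediction -> experience.
    return state_to_experience[predict_brain_state(condition)]

print(predict_first_person("experimental room"))  # -> seeing blue
```

The sketch makes the dependency visible: the first-person prediction is only as good as the mapping in `state_to_experience`, which is exactly the ingredient the thought experiments below put under pressure.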

Note that in practice, the predictions made in the neuroscience of consciousness and the way they are inferred are significantly different from those in our example⁵.

In any case, Theory A enables us to successfully predict the first-person observation that we experience in the experimental room, which leads us to believe that Theory A does explain these observations. However, after discussing with a philosopher friend, we decide to challenge the “real” nature of this predictive power using two thought experiments.

2.2 The Mary’s room thought experiment

Our first thought experiment derives from what is known as the “Mary’s room” thought experiment [19, 20]. Here, we ask the following question: if we had never seen colors in our life—if, for example, we had always worn a device that converted incoming light into grayscale images—would Theory A allow us to anticipate the first-person observation that we experience in the experimental room?

Remember that we deduced the implications of Theory A for these observations from the following two pieces of information (see Section 2.1):

1. According to Theory A, our brain will be induced into state X when we are in the experimental room;

2. When our brain is in state X, we see blue.

These two pieces of information led us to the following prediction: “I will see blue when I enter the experimental room.”

However, if we had never seen blue in our life, we would not know what blue looks like. Simply knowing that we will see a color labeled “blue” would not tell us what this color actually looks like. Consequently, we would be unable to translate the sentence “I will see blue” into a real concrete anticipation of the first-person observation.

In short, if we had never seen colors, Theory A would not help us anticipate the first-person observation that we experience in the experimental room. The prediction, which was initially thought to be internal to Theory A, actually depended in part on our prior knowledge of what blue looks like.

The conclusion of this thought experiment appears to be: Theory A is unable to explain what blue looks like (i.e., the subjective experience of seeing blue). This conclusion is based on the idea that when a theory explains an observation, one can use this theory to know in advance what the observation will be.

In other words, if Theory A really explained what blue looks like, then we would have been able to know what it is like to see blue by deducing the implications of Theory A, even if we had never seen blue in the past.

Note that the ideas introduced in this and the following subsection will be further elaborated and clarified later in the paper.

2.3 The inverted spectrum thought experiment

Our second thought experiment derives from what is known as the “inverted spectrum” thought experiment [21, 22].

The idea that a theory explains an observation assumes that if the observation had been different, then the theory would have been faced with a prediction error. Let us give a concrete example to illustrate this idea:

Suppose a certain theory implies that what blue looks like is exactly as we “see” it. This presupposes that if what blue looks like had been different, then the theory would have presented a prediction error—i.e., a gap between what blue actually looks like and what the theory predicts it looks like.

From an epistemological point of view, it is when a theory faces such a prediction error that we are inclined to think that it is, in some sense at least, false and needs to be updated or abandoned. Philosophers would express this idea by stating that when a theory explains an observation O (i.e., predicts O given that condition C is met), a world in which this theory is true and O is not observed when C is met is not conceivable⁶.

This brings us to our second thought experiment:

Imagine a world in which what blue looks like is actually what green looks like (and vice versa). This raises the following question: in such an “inverted” world, would we notice the difference between the perceived color (green) and Theory A’s prediction (blue)?

To answer this question, let us return to the context of the “experimental room”.

As in the real world, we would make the following prediction: “According to Theory A, I will see blue when I enter the experimental room.”

However, contrary to the real world, in this experimental room, we do not see blue but green (i.e., we do not experience the “feeling of blue”, but the “feeling of green”).

The question is: does this imply that we notice a gap between the perceived color (green) and Theory A’s prediction (blue)?

The answer is no. (This conclusion can be deduced and verified theoretically; see Section 6.3.)

Indeed, since the beginning of our life, what blue looks like has been what green looks like. In this “inverted” world, what we call “blue” actually looks like what we call “green” in the real world. Consequently, the prediction “I will see blue” would be understood by us as the anticipation of a color that has the appearance of green (i.e., the anticipation of the “feeling of green”).

In short, we would not perceive any difference between the perceived color and Theory A’s prediction. Even if the appearance of blue and green had been inverted, we would not feel any contradiction between these observations and Theory A’s predictions.

Philosophers would express this idea by stating that a world in which Theory A is true and the appearance of blue and green is inverted is conceivable.

Evidently, the conclusion of this second thought experiment is once again that: Theory A is unable to explain what blue looks like. As in the first experiment, it reveals that the predictive power of Theory A actually depends in part on our prior knowledge about what blue looks like—we were unable to anticipate the first-person observation that we experienced in the experimental room solely from Theory A.

Before proceeding, we need to clarify one point.

Some empirical studies suggest that the degree of difference between two subjective experiences—e.g., color experiences—depends on the degree of “difference” between the brain states underlying them, and more specifically, on the distance between these brain states in a subspace of the neural state space [15, 16, 27–29] (see footnote 4).

Therefore, we could make the following kind of prediction based on the relative difference between state X and other brain states: “The subjective experience of blue should be closer to purple than to red.”

The problem is that when the appearance of blue and green is inverted, this prediction no longer holds: in the inverted world, what blue looks like (i.e., the feeling of green) is no longer closer to purple than to red.

This implies that we could notice a difference between the appearance of blue and Theory A’s prediction simply by comparing the appearance of blue to other colors.

To avoid this objection, we can consider a more radical version of the inverted spectrum thought experiment: in this version, the appearance of all colors is inverted in such a way as to preserve as much as possible their relative differences with each other.

For example, although the appearance of all colors is different from what it is in the real world, the new appearance of blue is still closer to the new appearance of purple than to that of red.

The critical point is that in such an inverted world, it would be almost impossible for us to notice any difference between the appearance of blue and Theory A’s prediction.
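This radical inversion can be sketched geometrically. In the sketch below the coordinates are purely illustrative (not empirical data), and the quality space is simply modeled as a Euclidean space: any distance-preserving transformation (an isometry, here an orthogonal matrix) changes every coordinate—every individual “appearance”—while leaving all pairwise distances intact, so every prediction that concerns only relative differences survives the inversion.

```python
import numpy as np

# Hypothetical quality-space coordinates (illustrative values only).
colors = {
    "blue":   np.array([0.0, 0.0, 1.0]),
    "purple": np.array([0.4, 0.0, 0.9]),
    "red":    np.array([1.0, 0.0, 0.0]),
}

def d(space, a, b):
    """Euclidean distance between two colors in a quality space."""
    return float(np.linalg.norm(space[a] - space[b]))

# Relative-difference prediction: blue is closer to purple than to red.
assert d(colors, "blue", "purple") < d(colors, "blue", "red")

# A "radical inversion": an orthogonal transformation moves every point
# (every appearance changes) ...
Q = np.array([[0, -1, 0],
              [1,  0, 0],
              [0,  0, -1]], dtype=float)  # orthogonal: Q.T @ Q = I
inverted = {name: Q @ v for name, v in colors.items()}

# ... yet preserves all pairwise distances, so every relative-difference
# prediction still holds in the inverted world.
for a in colors:
    for b in colors:
        assert abs(d(colors, a, b) - d(inverted, a, b)) < 1e-9
```

The design choice matters: only predictions about distances are tested here, and those are exactly the predictions an isometric inversion cannot falsify.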

In any case, this helps us clarify the following point:

When we say that “Theory A is unable to predict what blue looks like” or that “Theory A cannot be falsified by what blue looks like”, we are referring to the unpredictability of the appearance of blue itself, not its relative difference with the appearance of other colors.

As we will see in Section 6.5, this nuance is of paramount importance.

2.4 Conclusion

In Sections 2.2 and 2.3, we reviewed two thought experiments—which we call the HPC thought experiments. These thought experiments suggest that Theory A is unable to explain our subjective experience of blue.

Crucially, although we focused on the subjective experience of blue, this issue applies to all first-person observations. For example, using another well-known thought experiment—the philosophical zombie thought experiment—we could further illustrate the apparent inability of Theory A to explain the “existence” of our subjective experience [4, 41].

Note that “Theory A” does not refer to a specific existing theory of the brain. Furthermore, in practice, materialist theories of consciousness are not just theories of the brain (see footnote 5). Almost all of them also include a hypothesis about what consciousness is in this context, or at least about what aspect of the brain—such as a mechanism, a process, or a function—is associated with consciousness.

For example, certain versions of the higher-order theory of consciousness include the following hypothesis: consciousness is a higher-order representation of perceptual content [43].

In any case, to our knowledge, no existing theory of the brain and/or consciousness is currently able to allow us to directly overcome the issues described in the previous subsections. In what follows, we will use the term “Theory A” as a general term for any materialist theory of the brain and/or consciousness, unless stated otherwise.

Up to this point, we have not provided a precise definition of the HPC. Crucially, the HPC thought experiments give us the intuition that it is hard, if not impossible, to explain our subjective experience. It is this apparent “hardness” that constitutes the origin of the name “hard problem”.

The HPC is usually formulated as the problem of explaining our subjective experience: “How does the physical brain give rise to subjective experience?” [3]

The problem is that when we define the HPC in this way, we actually implicitly presuppose a certain interpretation of the HPC thought experiments.

In the next section (Section 3), we will clarify this point by discussing the different possible interpretations.

At this stage, we need a more high-level definition: the HPC is the problem that we face when we perform the HPC thought experiments.

3 The epistemological hypothesis

As presented in the previous section, the conclusion of the HPC thought experiments seems to be: Theory A is unable to explain first-person observations.

The purpose of this section is to show that these thought experiments can, in fact, be interpreted in another way. In practice, we are simply reformulating ideas that are already well-known in the philosophy of mind.

First, note that when we say “Theory A is unable to explain first-person observations”, we are actually just stating a hypothesis. To avoid starting off on the wrong foot, we need to go back to the initial observation that led us to this hypothesis.

This initial observation is very simple: the HPC thought experiments simply make us realize that Theory A does not help us predict first-person observations.

The hypothesis “Theory A is unable to explain first-person observations” is thus a way to explain why Theory A does not predict these observations.

Philosophers might use the concepts of “epistemic gap” and “explanatory gap” to express this idea. In the context of Theory A, these two concepts can be defined as follows:

1. The epistemic gap refers to the inability of Theory A to enable us to predict first-person observations;

2. The explanatory gap refers to the inability of Theory A to explain first-person observations (i.e., the fact that Theory A does not imply that first-person observations are such as they are).

In other words, the hypothesis (or deduction) that there is an explanatory gap is a way to explain why there is an epistemic gap (see [7, 44] for similar reasoning⁸).

However, let us reiterate: the HPC thought experiments simply make us realize that there is an epistemic gap.

Crucially, the hypothesis “There is an explanatory gap” is not the only way out.

Indeed, there are two hypotheses that are consistent with the initial observation from the HPC thought experiments:

Explanatory Hypothesis: Theory A is unable to explain first-person observations (i.e., there is an explanatory gap);

Epistemological Hypothesis: We are unable to deduce the implications of Theory A for first-person observations.

These two hypotheses both imply that Theory A does not allow us to predict first-person observations. The epistemological hypothesis gives rise to what philosophers call type-B materialism or a posteriori physicalism [7, 44, 45]. The purpose of type-B materialism is to make materialism compatible with the HPC thought experiments.

As we have just seen, under the explanatory hypothesis, if there is an epistemic gap between Theory A and first-person observations, this is because there is an explanatory gap between them. This way of thinking has led some philosophers to adopt a more radical position: materialism is false [3, 4, 6, 19, 20]. Indeed, this position logically holds if we think that all materialist theories—not just current ones—face an epistemic gap.

A type-B materialist would agree that there is indeed an epistemic gap between all materialist theories and first-person observations: even a flawless theory of the brain would be affected by this epistemic gap. However, when asked why this epistemic gap exists, type-B materialists reject the explanatory hypothesis and favor the epistemological hypothesis instead. For a type-B materialist, therefore, the fact that all materialist theories face an epistemic gap does not imply that materialism is false (this account of type-B materialism derives from [7], although there may be certain differences).

Overall, as illustrated in Figure 1, we can classify all existing interpretations of the HPC thought experiments using three questions:

1. “Is there an epistemic gap?”

2. “Why is there an epistemic gap?”

3. “Why is there an explanatory gap?”
