A computer can process numbers, but a human brain is different – we have a subjective experience. That, at least, is the conventional wisdom. But what exactly is conscious experience, and how is it related to the somewhat computer-like processing of information that occurs in the brain? Two main camps dominate modern research on consciousness. One could be called the magicalist camp, and the other, the mechanist camp. The magicalists, who make up by far the largest cohort of consciousness scholars, would probably be offended by my term. Many of them would not even consider themselves to belong to that camp, but would instead mistakenly think they are mechanists. In this article, I will outline what I believe these two camps to be, at least in their essence. I’ll also explain why I believe the magicalists are outside of science and can, by definition, never answer the questions of consciousness, whereas the mechanists have already found the core answer and are now engaged in uncovering the details.
Here is one way to describe the position I have termed magicalist. The human brain processes information, but for some relatively small proportion of that information, an additional property emerges: subjective experience. We don’t just process the raw numbers of touch, or colour, or emotion. We feel. We experience. As the philosopher Thomas Nagel put it, there is “something it is like” to see red, or hear a sound, or think a thought. That extra piece, “experienceness” for lack of a better word, layered on top of the information processing, is fundamentally a non-physical item. It may emerge from physical processes, but the experience, the feeling, is not itself a physical object, or a chunk of information. It is what it feels like for a person to process at least some information, some of the time.
In that magicalist perspective, our task, as consciousness researchers, is to accept that experienceness exists, and to try to answer how the brain produces it or how it can be present anywhere in the universe. The philosopher David Chalmers called that task the hard problem – meaning, the impossible problem. That approach has dominated the field of consciousness studies for decades. It is intuitively compelling, because we can all introspect and find subjective experience inside us. However, by assuming the existence of that non-physical magic and then setting the challenge of explaining how the magic comes about, the approach defines itself out of business from the start.
It would be like saying, “Let’s assume that the sun moves through the sky and loops over a flat earth, because we all know from personal observation that these things are true. Now, let’s explain the science of a moving sun and a flat earth.” That approach is, of course, precisely how people thought about that problem for millennia. But if you formulate the question in that manner, you won’t succeed at anything beyond pseudoscience, because the fundamental assumption is wrong and the question is ill posed.
The main alternative is the mechanist approach. I will put it in my own words, but essentially similar concepts can be found in the work of many other scientists and philosophers, such as Daniel Dennett, Patricia Churchland, and Keith Frankish.
Everything you claim to be true about yourself, no matter how much you insist it isn’t just a claim, no matter how certain you are that it is actually real and actually inside you, derives from information in your brain – or else you wouldn’t be able to formulate the thought or make the claim. There is no wiggle room around this logic. Moreover, there is no guarantee the information is accurate. It probably is not. The brain constructs simplified, imperfect models, or bundles of information, to describe itself and the world around it. The brain’s internal models are approximate, schematic, useful, but never literally or precisely accurate. The pertinent questions, therefore, are: why does the brain construct a self-model that tells it that it has a non-physical, magic experienceness? Is there any specific survival advantage to that kind of self-model? What networks in the brain compute that specific information set? And even: can we build an artificial machine that does the same?
The key concept here is the relationship between the information constructed in your brain, the properties that you think you have, and the properties that you actually have. For example, the brain contains a complex, shifting pattern of blood flow, but unless you had looked it up in a book, you wouldn’t know it, and you certainly don’t have cognitive access to your own specific, moment-by-moment pattern of blood flow. Lacking any information about it, the system can’t think it has it or claim to have it.
Or, on the opposite side, some people have an arm amputated and then develop a phantom limb. There is no real arm anymore, but the brain has constructed information about an arm, and cognition has access to that information. Hence, that person can introspect about it and talk about it. If you think you have property X, there is one and only one reason: your brain contains information that tells the larger system, “I have X”. The reason why you think you have the property of experienceness, layered on top of processing colour and touch and emotion, is strictly because a specific set of information in your brain tells the larger system that it has that property. Without that specific information, you would not know what subjective experience is, would not think you have it, and would not claim to have it. Again, there is really no wiggle room around this logic.
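To make that logic concrete, here is a deliberately crude sketch in Python. It is my own cartoon, with invented names, not a claim about brain architecture: the point is only that the system’s reports are generated from its internal model, never from direct access to its actual physical state.

```python
# A toy illustration (not a model of the brain): a system whose
# self-reports can only come from its internal self-model, never
# directly from its actual physical properties.

class System:
    def __init__(self, actual_properties, self_model):
        self.actual_properties = actual_properties  # what is physically true
        self.self_model = self_model                # what the system's information says

    def claims_to_have(self, prop):
        # Reports are generated from the self-model alone; the actual
        # properties are never consulted, because cognition has no
        # direct access to them.
        return prop in self.self_model


# Phantom limb: the arm is gone, but the model still contains one.
patient = System(
    actual_properties={"blood_flow_pattern"},
    self_model={"arm"},
)
print(patient.claims_to_have("arm"))                 # True  - the model says so
print(patient.claims_to_have("blood_flow_pattern"))  # False - real, but unmodeled
```

The phantom limb and the blood-flow pattern are mirror-image cases: a modeled property with no real referent, and a real property with no model.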
The mechanist perspective is almost impossible for people to wrap their minds around on first encounter – or second or third, for that matter. It is fundamentally non-intuitive, because it contradicts the information naturally constructed in the brain, information that we access cognitively every day. To explain the perspective better, I will pose some of the most common questions and objections that I hear.
One common question goes like this: “But how can a model in the brain – a bundle of information – give me a subjective feeling? Where does the subjective feeling come from?” This question arises from the sheer non-intuitive bizarreness of the mechanist explanation. It is intrinsically hard for human beings to make the intellectual connection. The answer is, no model in the brain gives you a subjective feeling. Nothing does. Your brain constructs information. It informs itself that it has subjective experience, it thinks (in the sense of cognitive processing) that it has subjective experience, and it claims that it has subjective experience. The system cannot escape its own internally constructed information.
A second common objection goes like this: “But I know I have a conscious experience, because I’m experiencing it right now.” Almost every argument on the magicalist side reduces, sooner or later, to that proposition. But it is logically circular. It uses the presence of experience as evidence for the presence of experience. It is literally, “X is true because X is true”. No matter how compelling that argument may be to you, it is not logically sound. It is a machine stuck in a logic loop. The machine accesses internal information, is captive to that information, and reports what the information tells it.
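The loop is easy to caricature in code. In this toy sketch, again entirely my own illustration with made-up names, the system’s only way to check its claim is to consult the same internal information that generated the claim:

```python
# A caricature of the logic loop: the claim and its "verification"
# both read from the same internal information.

self_model = {"subjective_experience": True}  # the brain's constructed information

def claim_experience():
    # The claim is generated from the internal model.
    return self_model["subjective_experience"]

def check_claim():
    # The only available evidence is that same internal model,
    # so the check can never be independent of the claim.
    return self_model["subjective_experience"]

print(claim_experience())  # True
print(check_claim())       # True, but this amounts to "X is true because X is true"
```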
A third common question is: “Why would the brain trick itself? What could possibly be the benefit of constructing such a bizarre and specific model of itself, telling itself that it has subjective experience?” Here is where my own work on consciousness comes in. I do not think the brain is tricking itself: I think it is providing itself with useful information.
My colleagues and I have proposed what we consider an obvious suggestion. The brain constructs models to represent and monitor real items in the world. The brain’s model of the arm – the information that lingers on in the case of a phantom limb – normally represents a real arm. Just so, when we claim to have subjective experience, logically that claim is based on information constructed in the brain, and surely that information models some specific process or item in the real world. We asked ourselves: what is that real item?
To get at what I mean a little better, imagine having a subjective experience of the colour red; or the sound of thunder; or a touch on the back of your hand; or thinking that 2 + 2 = 4; or a moment of happiness. Now consider what is the same across all those instances of subjective experience. None of the details are the same, none of the content. The commonality is the sheen of experience, the essence of awareness, the seeming inner eye. The brain must construct information that tells us we have an experienceness, and that information must be present in each of the specific circumstances that I just noted. What real physical item is represented by that experienceness information? Or, is there any real physical item, present in each of those circumstances, which the brain could usefully model with a chunk of information?
For the past fifty years, cognitive psychologists have noted a peculiarity. If you’re attending to thing X – meaning, your brain is focusing at least some processing resources on it – then you will probably also claim to have a subjective experience of X. Attention and subjective experience almost always go hand in hand. At first, that pairing may seem obvious, but the more you think about it, the more interesting it becomes. Attention, at least as psychologists mean it, is a purely mechanistic, physical process. A few select signals in the brain are boosted in strength and processed in greater depth. Attention can be measured objectively with reaction times, or electrodes, or brain scans. It can even be built relatively easily into electronic circuits. Why should the claim of subjective experience so closely track attention? And, even more strangely, why do the two sometimes come apart? For example, in a laboratory, if you’re subjected to many carefully presented distractors, your brain can end up focusing attention on thing X while, at the same time, you are not subjectively aware of X. Why does subjective awareness track attention a good 95% of the time, but not quite 100%?
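That mechanistic core of attention really is simple enough to caricature in a few lines of code. This is a toy, with invented signal names and an arbitrary boost factor, not a circuit diagram of the brain: signals compete, and the winner is boosted for deeper processing.

```python
# A cartoon of attention as signal competition: the strongest signal
# wins and is boosted; the rest stay at baseline.

def attend(signals, boost=2.0):
    """Boost the strongest signal; leave all others unchanged."""
    winner = max(signals, key=signals.get)
    return {name: (value * boost if name == winner else value)
            for name, value in signals.items()}

incoming = {"red_patch": 0.9, "thunder": 0.4, "hand_touch": 0.3}
print(attend(incoming))
# {'red_patch': 1.8, 'thunder': 0.4, 'hand_touch': 0.3}
```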
The answer to this decades-old question, we suggested, is that awareness relates to attention the way the brain’s arm schema relates to the arm. The arm schema is a bundle of information that tracks the arm, pretty well, most of the time, unless something goes wrong. Its job is to monitor and represent the arm, so the brain knows what your arm is and can better control it. Just so, maybe we claim to have a subjective experience because of a bundle of information in the brain whose job is to monitor and represent attention.
Attention and subjective experience are peculiarly similar in spirit. If you were to describe attention, but wanted to leave out the technical, physical details like the neurons, competition between signals, and brain areas, and instead give a kind of shell description of it, you’d be left with something that sounds a lot like subjective experience: a vivid mental possession of something that enables us to react. Maybe we think we have subjective experience because we are drawing on a specific information set inside us, and that information set is the brain’s simplified, schematic account of what attention is.
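Putting the toy pieces together gives a cartoon of the theory – again, my own illustration, with invented names and probabilities, not a simulation of a brain. The agent has a real attention process and a simplified schema that monitors it; its reports of awareness are read out of the schema, so attention and reported experience usually agree, but occasionally come apart:

```python
# A toy agent with real attention plus a simplified "attention schema"
# that monitors it. Self-reports come from the schema, not from
# attention itself, so the two can decouple.

import random

class Agent:
    def __init__(self):
        self.attention_target = None  # the real, mechanistic process
        self.schema_target = None     # the simplified model of that process

    def focus(self, target, schema_reliability=0.95):
        self.attention_target = target
        # The schema usually tracks attention, but it is a lossy model,
        # so occasionally it fails to update (cf. masked-distractor
        # experiments: attention captured without reported awareness).
        if random.random() < schema_reliability:
            self.schema_target = target

    def report_experience(self):
        # The agent's claim about awareness is read out of the schema.
        return f"I am aware of {self.schema_target}"

agent = Agent()
agent.focus("red_patch")
print(agent.report_experience())  # usually, but not always, 'red_patch'
```

In this cartoon, the schema plays the role that the arm schema plays for the arm: it tracks the real process well most of the time, and when it fails to update, the result is attention without reported awareness.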
This theory, the attention schema theory, gives a simple, logical, working explanation of consciousness. Or, more precisely, it explains why we think we have a sheen of subjective experience layered on top of the information that is deeply processed in our heads. One of the advantages of the theory is that it does not try to explain everything. It does not explain the whole stream of consciousness or sense of self. It provides one working component, an especially important one, that can be incorporated into a great range of other specific, mechanistic theories: theories about attention, about cognition, about emotion, about the vast and complicated ocean of processes and material within consciousness.