Interview by Neil Manson
Manson: You have argued that our concept of consciousness is, as you put it, a mongrel concept. What is a mongrel concept? Is it just a concept like religion that applies to a number of different practices, creeds, systems of belief and worship, that may have very little in common? Is it something more than just the claim that there are different kinds of consciousness – self-consciousness, consciousness as wakefulness and so on?
Block: You appear to want to define ‘mongrel concept’ in terms of heterogeneity in the referent; you define it in terms of what the concept is a concept of. I introduced the term with a very different idea in mind involving a kind of heterogeneity in the concept itself. By your definition, but not mine, dirt is a mongrel concept because there are many different kinds of dirt. Compare dirt with speed, a concept used by Aristotle to cover both instantaneous and average velocity. Speed, like dirt, has heterogeneous referents, but unlike dirt, speed also has heterogeneous conceptual elements. One doesn’t need to make any observations to realise that speed as used by Aristotle covers quite different things. A crucial feature of speed is that the conceptual elements (average and instantaneous velocity) are magnitudes of very different sorts. They are so different that from a scientific point of view, there would be no utility to a concept of speed that covered both. This is typical of mongrels – once you distinguish the conceptual elements, you see no need for a single concept that lumps them together – at least for scientific purposes.
This latter point is part of what distinguishes mongrels from cluster concepts. A typical cluster concept like religion lumps quite different conceptual elements together. But the family of socio-cultural institutions that have referents picked out by those elements is a sociological natural kind. Mongrels like speed as used by Aristotle or degree of heat as used by the Florentine Experimenters have elements that do not go together so well, and that typically makes them more ripe for conflation with one another. The Florentine Experimenters’ word translated as “degree of heat” was indeterminate or ambiguous as between an intensive and an extensive quantity. They counted x as having a higher degree of heat than y if x melted paraffin whereas y didn’t. (This is the intensive quantity, temperature, at work.) But they also counted x as having a higher degree of heat than y if x melted more ice in a specified time. (This is the extensive quantity, heat, at work.) Of course, the difference between cluster concepts and mongrels is not sharp and it is context relative, at least if the likelihood of conflation is important for the application of the term. Any cluster concept can give rise to conflation in some circumstances. One might say “He is religious” on one occasion, meaning that he has a firm belief in a supreme being. On another occasion, one might say “He is not religious”, meaning that he does not follow religious rituals. Thus one may end up apparently contradicting oneself, as Aristotle did and as the Florentine Experimenters did. If a concept promotes this sort of conflation and has no scientific unity, it is a mongrel concept. (The concept mongrel concept is a cluster concept whose two elements are: [1] no scientific unity and [2] promotes conflation.) The concept of consciousness lumps together phenomenal consciousness, access-consciousness, reflective consciousness and self-consciousness. I argue that these are scientifically disparate things.
Manson: You lay particular stress upon the fact that there is a crucial distinction to be made between the concepts of phenomenal consciousness and access consciousness. Phenomenal consciousness is the much discussed “subjective character” or “qualitative character” of experience. But what about access consciousness? What’s that?
Block: A representation is access-conscious if and only if it is poised for global control, including control of reporting, reasoning and action. One who accepts Freudian ideas can allow the possibility that memories of a traumatic event (e.g. being abused as a child) might be repressed. The victim might have a repressed image of the abuse, leading to slips, dreams, anxiety in circumstances that remind him of the circumstances in which the abuse took place, and the like. The victim might nonetheless deny that any abuse took place and might be unable to answer simple informational questions about the abuse. Images of the abuse might be vivid but not be very accessible to the machinery of reasoning, reporting and control of action. In this sense, then, the visual images of abuse might be unconscious. Such possibilities are one reason for recognising an access sense of ‘conscious’, a sense in which such images are unconscious despite being phenomenal, and therefore phenomenally conscious. (You don’t raise any doubts about the idea that phenomenal consciousness is a type of consciousness, so I won’t respond to such doubts.) Access-consciousness in its ordinary and Freudian uses is a vague notion which can be sharpened in a variety of ways. I have sharpened it in a way that allows the most direct comparison with phenomenal consciousness. My aim is to facilitate thinking about the question of whether phenomenal consciousness is a state that consists in a certain sort of information processing.
Manson: Are access and phenomenal consciousness unrelated? Are there any examples of access consciousness without phenomenal consciousness or vice versa?
Block: I just gave a hypothetical example of phenomenal consciousness without access-consciousness – the repressed image. Here are some other hypothetical cases:
- Perhaps centres of reasoning and control can be damaged without destroying phenomenal consciousness, e.g. in frontal lobe damage.
- Some subsystems might have phenomenally conscious representations without the machinery sufficient for access-consciousness, e.g. visual area V1, the first way-station in the brain’s visual processing.
- Some flickers of phenomenal consciousness may be too brief to set up the machinery of access.
- Setting up of access may be blocked by visual masking. (See my BBS article for an explanation of this.)
- When the refrigerator goes off, one sometimes has the sense that one has been hearing the sound for some time without noticing it. Of course, the experience may be triggered by the change in the sound even if one had not been hearing it before the change. But the sense that one had been hearing the sound may also be veridical, and if so, we may have an example of phenomenal without access consciousness in the period just before the refrigerator goes off.
What about the converse, access-consciousness without phenomenal consciousness? There is one potential case in the literature (Brain and Cognition, 1991) involving a patient who has lost most of visual area V1. He says he is blind and is at chance at saying whether the lights are on or off. But if words or pictures are projected to the remaining island of V1, the patient can read the words and recognise the faces. Yet he says he is not really seeing them. He has no visual sensations. He says “It clicks”. There is nothing in the situation that suggests any motivation for lying. It may be that his inferior parietal damage prevents the attention required for visual phenomenal consciousness.
Manson: It has been objected that access consciousness isn’t really a notion of consciousness at all: it’s just some information-processing link between different modules or different parts of the brain. In defence of the claim that access consciousness is, properly, a concept of consciousness, you claim that you have drawn the concept from our “everyday thinking about the mind”. What aspects of our everyday conception of the mind most support the idea that there is a concept of consciousness that is logically independent of our conscious, phenomenal experience?
Block: I have mentioned a few cases already that suggest that access-consciousness has roots in ordinary thinking. It is natural to say that before the refrigerator went off, the experience of the sound of the refrigerator was unconscious. This is plausibly the access sense of the term. The Freudian repressed image is plausibly unconscious in the access sense. Though many Freudian ideas are far from common sense, the concept of repression has become part of our culture – and may have long predated Freud.
(The sound of the refrigerator and the Freudian repressed image are also unconscious in the reflective sense – they are not accompanied by a thought to the effect that one has them. But we typically don’t call states ‘unconscious’ just because they are not conscious in the reflective or self-conscious senses. As we pass through life, we have many transitory sensory impressions that are phenomenally and access-conscious yet probably are not the topic of any thought. We do not regard these transitory sense impressions as unconscious.)
Suppose I am right that access-consciousness has its roots in common sense. It does not follow that it is “properly” a concept of consciousness. There is a problem with regarding access-consciousness as a stand-alone type of consciousness that was first raised by Tyler Burge. The problem arises from the familiar science fiction zombie, a creature whose information processing is exactly like ours but who (or which!) has no phenomenal consciousness. Let us assume that this is a conceptual possibility. Is the zombie conscious in any sense of the word? It is natural to answer no. If this is right, what it shows is that the ordinary term ‘conscious’ only applies to access-consciousness against a background of the presence of phenomenal consciousness. Access-consciousness in the ordinary sense is not a purely information-processing notion. If so, my use of “access-consciousness” to denote a type of information processing strains ordinary language a bit. But given the welter of confusions surrounding “consciousness”, I think we do well to idealise a bit, distilling the pure forms of the main ideas. Purity is more important than the nuances of ordinary language if we are to be clear about the issues.
Manson: You have pointed out that many accounts of consciousness in cognitive science and neuroscience fail to keep apart phenomenal consciousness and access consciousness. They fail to recognise the mongrel nature of the concept. Why do people conflate the two concepts? What do you take to be the consequences of this equivocation over phenomenal and access consciousness?
Block: When we ask for more beer, we don’t think about the question of whether we want more weight, more mass or more volume. In normal circumstances, there would be no point in making such distinctions. In an environment of zero gravity, however, the distinction between weight on the one hand and mass and volume on the other could loom large. Similarly, there is normally no point in making any distinction between access and phenomenal consciousness. They go together. The need for making distinctions enters when we think about unusual cases. For example, Penfield noted that victims of petit mal seizures could walk or drive home over a familiar route during their seizures despite being “totally unconscious”. Searle accepted Penfield’s description, but said in another context that an unconscious driver would crash. When he allowed that an unconscious driver could drive home, he was thinking of phenomenal consciousness, but when he said an unconscious driver would crash, he was thinking of access-consciousness. I have argued that this sort of conflation leads to bad reasoning about the function of consciousness.
Manson: Going back to the notion of phenomenal consciousness for a moment: you argue that the qualitative, phenomenal aspects of our conscious experience cannot be accounted for purely in causal, or functional, terms. You also argue against “representationalist” theories of perception. On the view you attack, if, for example, I see a heap of cherries in a blue bowl, the claim would be that there is nothing more to my experience than what can be accounted for in representational terms. For such thinkers, what matters is the existence of causal, or informational, links to objects and properties in the perceived world. Whilst you agree with the representationalist that our experiences are about things, you argue that there is what you refer to as mental ink or mental paint, in virtue of which our conscious experience represents the properties of the perceived world. Thus, you claim that there is something more to experience than can be captured in purely causal or informational terms. Is it just a contingent fact about our psychological make-up that we happen to represent the world in this way – or is there some essential link between our experiences having sensory qualities and their being about things?
Block: Children’s colouring books sometimes represent the colour to be filled in by number. The digit ‘1’ in a space represents red, ‘2’ represents green, etc. These representation relations are entirely contingent. According to me, there are also phenomenal representation relations that are contingent. The phenomenal character that represents red in me might be the same as the one that represents green in you. But I also think that there are some non-contingent aspects of phenomenal representation. Compare mental images of circles with mental images of squares. If the images are sufficiently robust and detailed, there is a difference between them: the circle-representing images cannot avoid “spaces” between items, but the square-representing images can. This derives from a geometrical fact: squares are “packable” but circles are not. Also, there is a difference in symmetry corresponding to the geometrical fact that circles have an indefinite number of axes of symmetry whereas squares have just four.
Manson: In addition to claiming that conscious perceptual experience involves both representational and qualitative properties (mental ink), you also argue that sensations such as pains and orgasms involve a third kind of mental property. In line with the mental ink metaphor you call these properties mental fonts. Unlike mental ink, such properties do not play any representational role at all. But is it not the case that pains and orgasmic feelings are represented as being located at or around parts of the body – does this not count against the idea that they are non-representational?
Block: Pains and orgasms are located. I think it is best to see this as pain experiences and orgasm experiences representing location, in order to avoid questions about mysterious entities like pains located literally in the foot. Pain experiences and orgasm experiences also represent temporal properties, for example increases and decreases in intensity. But I doubt that it is these representational properties that are so phenomenally impressive about orgasm and pain experiences. Even though the phenomenal character of orgasm does represent, one should not conclude that this phenomenal character is exhausted by that representational content. The claim that what we like about orgasm is merely the representation of place, time, intensity, etc. is a bold and completely implausible conjecture, at least initially. Representationists (those who think the initial implausibility can be shown to be misleading) should admit that their view starts out with two strikes against it.
Manson: What about moods and feelings? Is an undirected feeling of anxiety a “mental font” in your view?
Block: The existence of something that is both phenomenal and non-representational is not as obvious in anxiety as it is in orgasm.
Manson: Let me now turn to the metaphysical aspects of your conception of consciousness. You argue that phenomenal consciousness cannot be captured in “functional” or “causal” terms. Does this imply that phenomenal consciousness is, in any sense, non-physical?
Block: No, because there remains a physicalist option, namely the old-fashioned mind-body identity theory first introduced by Smart and Place. The claim that phenomenal properties are neurological properties is incompatible with the claim that phenomenal properties are functional properties – assuming multiple realisability.
Manson: Much recent writing on consciousness stresses, in one way or another, the mysteriousness of consciousness. In your terms the mystery, if there is any, attaches to phenomenal consciousness and not to access consciousness. Given that you take phenomenal properties to be physical does this mean that you think that consciousness is unmysterious?
Block: No – the mystery is how phenomenal properties could be physical. This riddle is what Nagel and Levine were responding to in their papers on the “Explanatory Gap”.
Manson: Finally, will consciousness ever be scientifically explained? If so, would you be willing to lay a bet on how long that might take?
Block: I’m optimistic about an eventual unravelling of the mystery, though I think the demystification will have to be a joint effort of philosophers and neuroscientists. In my view, consciousness is not as mysterious as it seems to be – the explanatory gap is not as unclosable as it seems to be. Identities, generally, are primitive and unexplainable. You can’t explain why Mark Twain is Samuel Clemens or why water is H2O. The best you can do is to explain why the mode of presentation associated with ‘Mark Twain’ presents the same thing as the mode of presentation associated with ‘Samuel Clemens’. But explaining why two modes of presentation present the same thing is not the same as explaining the identity. What is peculiar about consciousness is that the mode of presentation is the same as what is presented (as Kripke famously noted). Thus it makes sense to ask why consciousness = brain state B. These physicalistic consciousness identities are the only identities that it makes sense to query, and that makes them different from all other reductive identities. In my view, this fact is the source of the illusion of unexplainability of consciousness.