Searle's "Consciousness" (2000)
Feb 13, 2003, 07:06 AM
by John R. Searle
pdf at http://neuro.annualreviews.org/cgi/reprint...nt/23/1/557.pdf
Until recently, most neuroscientists did not regard consciousness as a suitable topic for scientific investigation. This reluctance was based on certain philosophical mistakes, primarily the mistake of supposing that the subjectivity of consciousness made it beyond the reach of an objective science. Once we see that consciousness is a biological phenomenon like any other, then it can be investigated neurobiologically. Consciousness is entirely caused by neurobiological processes and is realized in brain structures. The essential trait of consciousness that we need to explain is unified qualitative subjectivity. Consciousness thus differs from other biological phenomena in that it has a subjective or first-person ontology, but this subjective ontology does not prevent us from having an epistemically objective science of consciousness. We need to overcome the philosophical tradition that treats the mental and the physical as two distinct metaphysical realms. Two common approaches to consciousness are those that adopt the building block model, according to which any conscious field is made of its various parts, and the unified field model, according to which we should try to explain the unified character of subjective states of consciousness. These two approaches are discussed and reasons are given for preferring the unified field theory to the building block model. Some relevant research on consciousness involves the subjects of blindsight, the split-brain experiments, binocular rivalry, and gestalt switching.
Resistance to the Problem
As recently as two decades ago there was little interest among neuroscientists, philosophers, psychologists, and cognitive scientists generally in the problem of consciousness. Reasons for the resistance to the problem varied from discipline to discipline. Philosophers had turned to the analysis of language, psychologists had become convinced that a scientific psychology must be a science of behavior, and cognitive scientists took their research program to be the discovery of the computer programs in the brain that, they thought, would explain cognition. It seemed especially puzzling that neuroscientists should be reluctant to deal with the problem of consciousness, because one of the chief functions of the brain is to cause and sustain conscious states. Studying the brain without studying consciousness would be like studying the stomach without studying digestion, or studying genetics without studying the inheritance of traits. When I first got interested in this problem seriously and tried to discuss it with brain scientists, I found that most of them were not interested in the question.
The reasons for this resistance were various, but they mostly boiled down to two. First, many neuroscientists felt, and some still do, that consciousness is not a suitable subject for neuroscientific investigation. A legitimate brain science can study the microanatomy of the Purkinje cell, or attempt to discover new neurotransmitters, but consciousness seems too airy-fairy and touchy-feely to be a real scientific subject. Others did not exclude consciousness from scientific investigation, but they had a second reason: "We are not ready" to tackle the problem of consciousness. They may be right about that, but my guess is that a lot of people in the early 1950s thought we were not ready to tackle the problem of the molecular basis of life and heredity. They were wrong, and I suggest that for the current question too, the best way to get ready to deal with a research problem may be to try to solve it.
There were, of course, famous earlier twentieth-century exceptions to the general reluctance to deal with consciousness, and their work has been valuable. I am thinking in particular of the work of Sir Charles Sherrington, Roger Sperry, and Sir John Eccles.
Whatever was the case 20 years ago, today many serious researchers are attempting to tackle the problem. Among neuroscientists who have written recent books about consciousness are Cotterill (1998), Crick (1994), Damasio (1999), Edelman (1989, 1992), Freeman (1995), Gazzaniga (1988), Greenfield (1995), Hobson (1999), Libet (1993), and Weiskrantz (1997). As far as I can tell, the race to solve the problem of consciousness is already on. My aim here is not to try to survey this literature but to characterize some of the neurobiological problems of consciousness from a philosophical point of view.
Consciousness as a Biological Problem
What exactly is the neurobiological problem of consciousness? The problem, in its crudest terms, is this: How exactly do brain processes cause conscious states and how exactly are those states realized in brain structures? So stated, this problem naturally breaks down into a number of smaller but still large problems: What exactly are the neurobiological correlates of conscious states (NCC), and which of those correlates are actually causally responsible for the production of consciousness? What are the principles according to which biological phenomena such as neuron firings can bring about subjective states of sentience or awareness? How do those principles relate to the already well understood principles of biology? Can we explain consciousness with the existing theoretical apparatus or do we need some revolutionary new theoretical concepts to explain it? Is consciousness localized in certain regions of the brain or is it a global phenomenon? If it is confined to certain regions, which ones? Is it correlated with specific anatomical features, such as specific types of neurons, or is it to be explained functionally with a variety of anatomical correlates? What is the right level for explaining consciousness? Is it the level of neurons and synapses, as most researchers seem to think, or do we have to go to higher functional levels such as neuronal maps (Edelman 1989, 1992), or whole clouds of neurons (Freeman 1995), or are all of these levels much too high and we have to go below the level of neurons and synapses to the level of the microtubules (Penrose 1994; Hameroff 1998a, b)? Or do we have to think much more globally in terms of Fourier transforms and holography (Pribram 1976, 1991, 1999)?
As stated, this cluster of problems sounds similar to any other such set of problems in biology or in the sciences in general. It sounds like the problem concerning microorganisms: How, exactly, do they cause disease symptoms and how are those symptoms manifested in patients? Or the problem in genetics: By what mechanisms exactly does the genetic structure of the zygote produce the phenotypical traits of the mature organism? In the end I think that is the right way to think of the problem of consciousness?it is a biological problem like any other, because consciousness is a biological phenomenon in exactly the same sense as digestion, growth, or photosynthesis. But unlike other problems in biology, there is a persistent series of philosophical problems that surround the problem of consciousness and before addressing some current research I would like to address some of these problems.
Identifying the Target: The Definition of Consciousness
One often hears it said that "consciousness" is frightfully hard to define. But if we are talking about a definition in common-sense terms, sufficient to identify the target of the investigation, as opposed to a precise scientific definition of the sort that typically comes at the end of a scientific investigation, then the word does not seem to me hard to define. Here is the definition: Consciousness consists of inner, qualitative, subjective states and processes of sentience or awareness. Consciousness, so defined, begins when we wake in the morning from a dreamless sleep and continues until we fall asleep again, die, go into a coma, or otherwise become "unconscious." It includes all of the enormous variety of the awareness that we think of as characteristic of our waking life. It includes everything from feeling a pain, to perceiving objects visually, to states of anxiety and depression, to working out crossword puzzles, playing chess, trying to remember your aunt's phone number, arguing about politics, or to just wishing you were somewhere else. Dreams on this definition are a form of consciousness, though of course they are in many respects quite different from waking consciousness.
This definition is not universally accepted and the word consciousness is used in a variety of other ways. Some authors use the word to refer only to states of self-consciousness, i.e. the consciousness that humans and some primates have of themselves as agents. Some use it to refer to the second-order mental states about other mental states; so according to this definition, a pain would not be a conscious state, but worrying about a pain would be a conscious state. Some use "consciousness" behavioristically to refer to any form of complex intelligent behavior. It is, of course, open to anyone to use any word any way he likes, and we can always redefine consciousness as a technical term. Nonetheless, there is a genuine phenomenon of consciousness in the ordinary sense, however we choose to name it; and it is that phenomenon that I am trying to identify now, because I believe it is the proper target of the investigation.
Consciousness has distinctive features that we need to explain. Because I believe that some, not all, of the problems of consciousness are going to have a neurobiological solution, what follows is a shopping list of what a neurobiological account of consciousness should explain.
The Essential Feature of Consciousness: The Combination of Qualitativeness, Subjectivity, and Unity
Consciousness has three aspects that make it different from other biological phenomena, and indeed different from other phenomena in the natural world. These three aspects are qualitativeness, subjectivity, and unity. I used to think that for investigative purposes we could treat them as three distinct features, but because they are logically interrelated, I now think it best to treat them together, as different aspects of the same feature. They are not separate because the first implies the second, and the second implies the third. I discuss them in order.
Every conscious state has a certain qualitative feel to it, and you can see this clearly if you consider examples. The experience of tasting beer is very different from hearing Beethoven's Ninth Symphony, and both of those have a different qualitative character from smelling a rose or seeing a sunset. These examples illustrate the different qualitative features of conscious experiences. One way to put this point is to say that for every conscious experience there is something that it feels like, or something that it is like, to have that conscious experience. Nagel (1974) made this point over two decades ago when he pointed out that if bats are conscious, then there is something that "it is like" to be a bat. This distinguishes consciousness from other features of the world, because in this sense, for a nonconscious entity such as a car or a brick there is nothing that "it is like" to be that entity. Some philosophers describe this feature of consciousness with the word qualia, and they say there is a special problem of qualia. I am reluctant to adopt this usage because it seems to imply that there are two separate problems, the problem of consciousness and the problem of qualia. But as I understand these terms, "qualia" is just a plural name for conscious states. Because "consciousness" and "qualia" are coextensive, there seems no point in introducing a special term. Some people think that qualia are characteristic of only perceptual experiences, such as seeing colors and having sensations such as pains, but that there is no qualitative character to thinking. As I understand these terms, that is wrong. Even conscious thinking has a qualitative feel to it. There is something it is like to think that two plus two equals four. There is no way to describe it except by saying that it is the character of thinking consciously "two plus two equals four."
But if you believe there is no qualitative character to thinking that, then try to think the same thought in a language you do not know well. If I think in French, "deux et deux font quatre," I find that it feels quite different. Or try thinking, more painfully, "two plus two equals one hundred eighty-seven." Once again, I think you will agree that these conscious thoughts have different characters. However, the point must be trivial; that is, whether or not conscious thoughts are qualia must follow from our definition of qualia. As I am using the term, thoughts definitely are qualia.
Conscious states exist only when they are experienced by some human or animal subject. In that sense, they are essentially subjective. I used to treat subjectivity and qualitativeness as distinct features, but it now seems to me that properly understood, qualitativeness implies subjectivity, because in order for there to be a qualitative feel to some event, there must be some subject that experiences the event. No subjectivity, no experience. Even if more than one subject experiences a similar phenomenon, say two people listening to the same concert, all the same, the qualitative experience can exist only as experienced by some subject or subjects. And even if the different token experiences are qualitatively identical, that is they all exemplify the same type, nonetheless each token experience can exist only if the subject of that experience has it. Because conscious states are subjective in this sense, they have what I call a first-person ontology, as opposed to the third-person ontology of mountains and molecules, which can exist even if no living creatures exist. Subjective conscious states have a first-person ontology ("ontology" here means mode of existence) because they exist only when they are experienced by some human or animal agent. They are experienced by some "I" that has the experience, and it is in that sense that they have a first-person ontology.
All conscious experiences at any given point in an agent's life come as part of one unified conscious field. If I am sitting at my desk looking out the window, I do not just see the sky above and the brook below shrouded by the trees, and at the same time feel the pressure of my body against the chair, the shirt against my back, and the aftertaste of coffee in my mouth. Rather I experience all of these as part of a single unified conscious field. This unity of any state of qualitative subjectivity has important consequences for a scientific study of consciousness. I say more about them later on. At present I just want to call attention to the fact that the unity is already implicit in subjectivity and qualitativeness for the following reason: If you try to imagine that my conscious state is broken into 17 parts, what you imagine is not a single conscious subject with 17 different conscious states but rather 17 different centers of consciousness. A conscious state, in short, is by definition unified, and the unity will follow from the subjectivity and the qualitativeness because there is no way you could have subjectivity and qualitativeness except with that particular form of unity.
There are two areas of current research where the aspect of unity is especially important. These are, first, the study of split-brain patients by Gazzaniga (1998) and others (Gazzaniga et al 1962, 1963) and, second, the study of the binding problem by a number of contemporary researchers. The interest of the split-brain patients is that both the anatomical and the behavioral evidence suggest that in these patients there are two centers of consciousness that after commissurotomy are communicating with each other only imperfectly. They seem to have, so to speak, two conscious minds inside one skull.
The interest of the binding problem is that it looks like this problem might give us in microcosm a way of studying the nature of consciousness because just as the visual system binds all of the different stimulus inputs into a single unified visual percept, so the entire brain somehow unites all of the variety of our different stimulus inputs into a single unified conscious experience. Several researchers have explored the role of synchronized neuron firings in the range of 40 Hz to account for the capacity of different perceptual systems to bind the diverse stimuli of anatomically distinct neurons into a single perceptual experience (Llinas 1990; Llinas & Pare 1991; Llinas & Ribary 1992, 1993; Singer 1993, 1995; Singer & Gray 1995). For example, in the case of vision, anatomically separate neurons specialized for such things as line, angle, and color all contribute to a single, unified, conscious visual experience of an object. Crick (1994) extended the proposal for the binding problem to a general hypothesis about the NCC. He put forward a tentative hypothesis that perhaps the NCC consists of synchronized neuron firings in the general range of 40 Hz in various networks in the thalamocortical system, specifically in connections between the thalamus and layers four and six of the cortex.
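The 40-Hz synchrony hypothesis lends itself to a simple toy illustration. The sketch below, in Python, is my own illustrative construction, not anything from Searle or Crick: the neuron labels, firing rates, and coincidence window are made-up assumptions. It treats "binding" as phase-locked firing: two feature detectors oscillating in step at 40 Hz count as bound into one percept, while a neuron firing at a different rate and phase does not.

```python
import numpy as np

rng = np.random.default_rng(0)

def spike_train(freq_hz, phase, duration_s=1.0, jitter=0.0005):
    """Spike times (seconds) for a neuron firing once per cycle at
    freq_hz, offset by `phase` seconds, with small Gaussian jitter."""
    cycle = 1.0 / freq_hz
    times = np.arange(phase, duration_s, cycle)
    return times + rng.normal(0.0, jitter, size=times.size)

def synchrony(train_a, train_b, window=0.005):
    """Fraction of spikes in train_a with a spike in train_b within
    +/- window seconds: a crude coincidence (binding) measure."""
    hits = sum(np.any(np.abs(train_b - t) < window) for t in train_a)
    return hits / len(train_a)

# Two hypothetical feature detectors ("line" and "color") locked to
# the same 40 Hz rhythm, one lagging the other by about 1 ms...
line = spike_train(40.0, phase=0.000)
color = spike_train(40.0, phase=0.001)

# ...versus an unrelated neuron at a different rate and phase.
other = spike_train(37.0, phase=0.012)

print(synchrony(line, color))  # near 1.0: phase-locked, "bound"
print(synchrony(line, other))  # much lower: not bound
```

With these parameters the bound pair scores near 1.0 and the unrelated pair scores well below it. The point is only to make the synchrony idea concrete; real thalamocortical dynamics are of course vastly more complicated.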
This kind of instantaneous unity has to be distinguished from the organized unification of conscious sequences that we get from short-term or iconic memory. For nonpathological forms of consciousness at least some memory is essential in order that the conscious sequence across time can come in an organized fashion. For example, when I speak a sentence, I have to be able to remember the beginning of the sentence at the time I get to the end if I am to produce coherent speech. Whereas instantaneous unity is essential to, and is part of, the definition of consciousness, organized unity across time is essential to the healthy functioning of the conscious organism, but it is not necessary for the very existence of conscious subjectivity.
This combined feature of qualitative, unified subjectivity is the essence of consciousness and it, more than anything else, is what makes consciousness different from other phenomena studied by the natural sciences. The problem is to explain how brain processes, which are objective third-person biological, chemical, and electrical processes, produce subjective states of feeling and thinking. How does the brain get us over the hump, so to speak, from events in the synaptic cleft and the ion channels to conscious thoughts and feelings? If you take seriously this combined feature as the target of explanation, I believe you get a different sort of research project from what is currently the most influential. Most neurobiologists take what I call the building block approach: Find the NCC for specific elements in the conscious field, such as the experience of color, and then construct the whole field out of such building blocks. Another approach, which I call the unified field approach, takes the research problem to be one of explaining how the brain produces a unified field of subjectivity to start with. On the unified field approach, there are no building blocks; rather there are just modifications of the already existing field of qualitative subjectivity. I say more about this later.
Some philosophers and neuroscientists think we can never have an explanation of subjectivity: We can never explain why warm things feel warm and red things look red. To these skeptics there is a simple answer: We know it happens. We know that brain processes cause all of our inner qualitative, subjective thoughts and feelings. Because we know that it happens, we ought to try to figure out how it happens. Perhaps in the end we will fail but we cannot assume the impossibility of success before we try.
Many philosophers and scientists also think that the subjectivity of conscious states makes it impossible to have a strict science of consciousness. For, they argue, if science is by definition objective, and consciousness is by definition subjective, it follows that there cannot be a science of consciousness. This argument is fallacious. It commits the fallacy of ambiguity over the terms objective and subjective. Here is the ambiguity: We need to distinguish two different senses of the objective-subjective distinction. In one sense, the epistemic sense ("epistemic" here means having to do with knowledge), science is indeed objective. Scientists seek truths that are equally accessible to any competent observer and that are independent of the feelings and attitudes of the experimenters in question. An example of an epistemically objective claim would be "Bill Clinton weighs 210 pounds." An example of an epistemically subjective claim would be "Bill Clinton is a good president." The first is objective because its truth or falsity is settleable in a way that is independent of the feelings and attitudes of the investigators. The second is subjective because it is not so settleable. But there is another sense of the objective-subjective distinction, and that is the ontological sense ("ontological" here means having to do with existence). Some entities, such as pains, tickles, and itches, have a subjective mode of existence, in the sense that they exist only as experienced by a conscious subject. Others, such as mountains, molecules, and tectonic plates, have an objective mode of existence, in the sense that their existence does not depend on any consciousness. The point of making this distinction is to call attention to the fact that the scientific requirement of epistemic objectivity does not preclude ontological subjectivity as a domain of investigation. 
There is no reason whatever why we cannot have an objective science of pain, even though pains only exist when they are felt by conscious agents. The ontological subjectivity of the feeling of pain does not preclude an epistemically objective science of pain. Though many philosophers and neuroscientists are reluctant to think of subjectivity as a proper domain of scientific investigation, in actual practice we work on it all the time. Any neurology textbook will contain extensive discussions of the etiology and treatment of such ontologically subjective states as pains and anxieties.
Some Other Features
To keep this list short, I mention some other features of consciousness only briefly.
Feature 2: Intentionality
Most important, conscious states typically have "intentionality," that property of mental states by which they are directed at or about objects and states of affairs in the world. Philosophers use the word intentionality not just for "intending" in the ordinary sense but for any mental phenomena at all that have referential content. According to this usage, beliefs, hopes, intentions, fears, desires, and perceptions all are intentional. So if I have a belief, I must have a belief about something. If I have a normal visual experience, it must seem to me that I am actually seeing something, etc. Not all conscious states are intentional and not all intentionality is conscious; for example, undirected anxiety lacks intentionality, and the beliefs a man has even when he is asleep lack consciousness then and there. But I think it is obvious that many of the important evolutionary functions of consciousness are intentional: For example, an animal has conscious feelings of hunger and thirst, engages in conscious perceptual discriminations, embarks on conscious intentional actions, and consciously recognizes both friend and foe. All of these are conscious intentional phenomena and all are essential for biological survival. A general neurobiological account of consciousness will explain the intentionality of conscious states. For example, an account of color vision will naturally explain the capacity of agents to make color discriminations.
Feature 3: The Distinction Between the Center and Periphery of Attention
It is a remarkable fact that within my conscious field at any given time I can shift my attention at will from one aspect to another. So, for example, right now I am not paying any attention to the pressure of the shoes on my feet or the feeling of the shirt on my neck. But I can shift my attention to them any time I want. There is already a fair amount of useful work done on attention.
Feature 4: All Human Conscious Experiences Are in Some Mood or Other
There is always a certain flavor to one's conscious states, always an answer to the question "How are you feeling?" The moods do not necessarily have names. Right now I am not especially elated or annoyed, not ecstatic or depressed, not even just blah. But all the same I will become acutely aware of my mood if there is a dramatic change, if I receive some extremely good or bad news, for example. Moods are not the same as emotions, though the mood we are in will predispose us to having certain emotions.
We are, by the way, closer to having pharmacological control of moods with such drugs as Prozac than we are to having control of other internal features of consciousness.
Feature 5: All Conscious States Come to Us in the Pleasure/Unpleasure Dimension
For any total conscious experience there is always an answer to the question of whether it was pleasant, painful, unpleasant, neutral, etc. The pleasure/unpleasure feature is not the same as mood, though of course some moods are more pleasant than others.
Feature 6: Gestalt Structure
The brain has a remarkable capacity to organize very degenerate perceptual stimuli into coherent conscious perceptual forms. I can, for example, recognize a face, or a car, on the basis of very limited stimuli. The best known examples of Gestalt structures come from the researches of the Gestalt psychologists.
Feature 7: Familiarity
There is in varying degrees a sense of familiarity that pervades our conscious experiences. Even if I see a house I have never seen before, I still recognize it as a house; it is of a form and structure that is familiar to me. Surrealist painters try to break this sense of the familiarity and ordinariness of our experiences, but even in surrealist paintings the drooping watch still looks like a watch, and the three-headed dog still looks like a dog.
One could continue this list, and I have done so in other writings (Searle 1992). The point now is to get a minimal shopping list of the features that we want a neurobiology of consciousness to explain. In order to look for a causal explanation, we need to know what the effects are that need explanation. Before examining some current research projects, we need to clear more of the ground.
The Traditional Mind-Body Problem and How to Avoid It
The confusion about objectivity and subjectivity I mentioned earlier is just the tip of the iceberg of the traditional mind-body problem. Though ideally I think scientists would be better off if they ignored this problem, the fact is that they are as much victims of the philosophical traditions as anyone else, and many scientists, like many philosophers, are still in the grip of the traditional categories of mind and body, mental and physical, dualism and materialism, etc. This is not the place for a detailed discussion of the mind-body problem, but I need to say a few words about it so that, in the discussion that follows, we can avoid the confusions it has engendered.
The simplest form of the mind-body problem is this: What exactly is the relation of consciousness to the brain? There are two parts to this problem, a philosophical part and a scientific part. I have already been assuming a simple solution to the philosophical part. The solution, I believe, is consistent with everything we know about biology and about how the world works. It is this: Consciousness and other sorts of mental phenomena are caused by neurobiological processes in the brain, and they are realized in the structure of the brain. In a word, the conscious mind is caused by brain processes and is itself a higher level feature of the brain.
The philosophical part is relatively easy but the scientific part is much harder. How, exactly, do brain processes cause consciousness and how, exactly, is consciousness realized in the brain? I want to be very clear about the philosophical part because it is not possible to approach the scientific question intelligently if the philosophical issues are unclear. Notice two features of the philosophical solution. First, the relationship of brain mechanisms to consciousness is one of causation. Processes in the brain cause our conscious experiences. Second, this does not force us to any kind of dualism because the form of causation is bottom-up, and the resulting effect is simply a higher-level feature of the brain itself, not a separate substance. Consciousness is not like some fluid squirted out by the brain. A conscious state is rather a state that the brain is in. Just as water can be in a liquid or solid state without liquidity and solidity being separate substances, so consciousness is a state that the brain is in without consciousness being a separate substance.
Notice that I stated the philosophical solution without using any of the traditional categories of "dualism," "monism," "materialism," and all the rest of it. Frankly, I think those categories are obsolete. But if we accept those categories at face value, then we get the following picture: You have a choice between dualism and materialism. According to dualism, consciousness and other mental phenomena exist in a different ontological realm altogether from the ordinary physical world of physics, chemistry, and biology. According to materialism, consciousness as I have described it does not exist. Neither dualism nor materialism, as traditionally construed, allows us to get an answer to our question. Dualism says that there are two kinds of phenomena in the world, the mental and the physical; materialism says that there is only one, the material. Dualism ends up with an impossible bifurcation of reality into two separate categories and thus makes it impossible to explain the relation between the mental and the physical. But materialism ends up denying the existence of any irreducible subjective qualitative states of sentience or awareness. In short, dualism makes the problem insoluble; materialism denies the existence of any phenomenon to study, and hence of any problem.
On the view that I am proposing, we should reject those categories altogether. We know enough about how the world works to know that consciousness is a biological phenomenon caused by brain processes and realized in the structure of the brain. It is irreducible not because it is ineffable or mysterious, but because it has a first-person ontology and therefore cannot be reduced to phenomena with a third-person ontology. The traditional mistake that people have made in both science and philosophy has been to suppose that if we reject dualism, as I believe we must, then we have to embrace materialism. But on the view that I am putting forward, materialism is just as confused as dualism because it denies the existence of ontologically subjective consciousness in the first place. Just to give it a name, the resulting view that denies both dualism and materialism I call biological naturalism.
May 04, 2003, 02:50 PM
Funny, I was going to say "Kemosabe," the response Tonto gave the Lone Ranger, but decided not to because I didn't think I could spell it correctly. The "white man" portion came out of an old joke about the pair that I substituted in its stead. So much for things of the mind and their translation to the written word.
No, I read between the lines of your message, the feel of it as good natured as I attempted to be in return.
The answer to your other question is the stuff of what I am trying to explain to Shawn on the 'Who am I' thread.
Descartes took us down a very contorted path over the last three and a half centuries or so when he hit upon the idea that his very act of thinking described who he was: "I think, therefore I am." This began the discussion of where consciousness resides, a discussion we have not settled to this day. The Royal Court of the time then asked the question: where is the ghost in the "machine"?
And the battle was on, a battle confused by the dominance of Christianity in that age, which could get you whacked for saying the wrong thing and which insisted that any discussion end with God effectively in the picture. My contention is that they are two vehicles: the brain and its companion, the mind.
The brain is hovered over and attended to by very detail-oriented scientists who say their vehicle is the logical one of choice. The mind is attended to by a very funny collection of peoples of different persuasions, whom I shall not name at this time. The playing field is wide open, and I see the dilemma being solved by sweeping away the old thought processes and breaking some new ground. That is my playground.