Out of Our Heads

Why You Are Not Your Brain, and Other Lessons from the Biology of Consciousness
Alva Noë

The author is well informed about understandings of the mind offered by neuroscience and psychology, and is frequently delightfully plain-spoken. Of course he cites evidence in a way that advances his Story, and this is the nature of narrative. He does no great violence to the science, but consistently exaggerates the difference between his own views and those common in these areas of science. Unfortunately, he fails to make a connection between his detailed arguments and his overall claim about mind, and indeed fails to communicate with any clarity at all what his view of the mind is.

In this book I use the term “consciousness” to mean, roughly, experience. And I think of experience, broadly, as encompassing thinking, feeling, and the fact that a world “shows up” for us in perception.

… I argue that computers can't think largely for the same reason that brains can't. Meaningful thought only arises for the whole animal dynamically engaged with its environment. … The taste of licorice is not something that happens in our brains.

It seems that he also uses “consciousness” to mean “you”, “your self”, “your being”, etc. As he notes, this is broad. In The Feeling of What Happens, Antonio Damasio attempts to pin down some of the many meanings of consciousness, some of which apply to other animals, some perhaps only to humans. I tend to equate the uniquely human aspects of consciousness with storytelling (see Level Map), but I try to avoid “consciousness” because of its ambiguity and its somewhat mystical philosophical connotations.

Conscious Life

Our commitment to the consciousness of others is, rather, a presupposition of the kind of life we lead together. In this respect, the young child, in her relationship to the caretaker, is really the paradigm. … The child has no theoretical distance … The child does not wonder whether Mommy is animate

This is certainly the best answer to the philosophical program of radical doubt begun by Descartes. It is our nature to accept that there is a real external world. A crucial feature of this world is that it has other people in it who have minds of their own. Does Noë think that neuroscientists or psychologists engage in or recommend radical doubt?

Descartes and Kant did advance understanding of the human condition by pointing out how our perception of reality is indirect and fallible. Neuro and behavioral science have done a great deal to expand our understanding of this indirectness and fallibility, but as scientists we have an ideological commitment to the idea of a knowable external reality. From a Pragmatic perspective, we can regard this either as an act of faith or a productive hypothesis. In neither case does the possibility that we might perceive something unreal (an illusion) cause us to adopt radical doubt.

The Dynamics of Consciousness

The author discusses an experiment in which newborn ferrets underwent surgery rerouting their optic nerves to parts of the brain usually used for auditory processing; the ferrets nonetheless developed vision.

The fact that it is possible in this way to vary consciousness in relation to its neural underpinnings teaches that there isn't anything special about the cells in the so-called visual cortex that makes them visual. … There is no necessary connection between the character of experience and the behavior of certain cells.

Compare this to another presentation of the same research:

After a ferret or human is born, cells in the brain's primary visual area become highly specialized for analyzing the orientation of lines found in images or shapes. Some cells fire only in response to vertical lines. If presented with a horizontal or slanted line, they don't do anything.

Other cells fire exclusively when a horizontal line falls on them and yet others fire in response to lines slanted at various angles. These specialized cells are draped across the primary visual area in a somewhat splotchy fashion that resembles a bunch of pinwheels.

The hearing region of the brain is organized very differently, Dr. Sur said. Each cell is connected to the next in a kind of single line. There are no pinwheel shapes.

After the rewired ferrets matured, researchers looked at the auditory region of their brains and found that cells were organized pinwheel fashion. They found horizontal connections between cells responding to similar orientations. The rewired map was less orderly than the maps found in normal visual cortex, Dr. Sur said, but looked as if it might be functional.

So at some developmental stages it is possible for neurons to develop into either an auditory-processing or a visual-processing arrangement. This is not the same as showing a disconnect between the behavior of cells and the character of experience. We could equally well say that these ferrets developed their visual cortex in an unusual place.
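
To make the orientation selectivity described above concrete, here is a minimal sketch of how a "vertical-preferring" cell is commonly idealized: a Gabor filter whose response is strong for a line at its preferred orientation and falls off as the line rotates away. This is a standard textbook toy model, not anything from Sur's actual analysis, and all parameter values are illustrative.

```python
import numpy as np

def gabor_cell(size, theta, freq=0.2, sigma=3.0):
    """Toy orientation-tuned cell: a Gabor filter at orientation theta."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)       # rotated coordinate
    envelope = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    return envelope * np.cos(2 * np.pi * freq * xr)

def line_image(size, theta):
    """A one-pixel-wide line through the center at orientation theta."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)       # coordinate across the line
    return (np.abs(xr) < 1.0).astype(float)

cell = gabor_cell(15, theta=0.0)                     # "fires" for vertical lines
for deg in (0, 45, 90):
    response = abs((cell * line_image(15, np.radians(deg))).sum())
    print(f"line at {deg:2d} deg -> response {response:5.2f}")
# The vertical line drives the cell strongly; the horizontal one barely at all.
```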

More broadly, it is a valid point that statements about the destined function of a neuron are based on an assumption of normal development in a normal environment. I'm not sure who first made this observation, but there isn't enough information in our genes to encode the detailed structure of the brain. As with every other aspect of our growth, genes encode a developmental program rather than a blueprint. If you intervene, you can make interestingly different things happen, and learn about the bounds of developmental plasticity.
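
The program-versus-blueprint point can be illustrated with a toy rewrite system. The sketch below models nothing about real genetics; it only shows the compression argument: a rule a few characters long, applied repeatedly, generates a structure far too large to have been spelled out part by part.

```python
# Toy illustration of "developmental program vs. blueprint".
# Nothing here models real genetics; it only shows that a short
# generative rule can produce far more structure than it could
# explicitly list.

def grow(axiom, rules, generations):
    """Repeatedly apply rewrite rules (a simple L-system)."""
    s = axiom
    for _ in range(generations):
        s = "".join(rules.get(ch, ch) for ch in s)
    return s

# A "genome" of one rule: each segment F sprouts two side branches.
rules = {"F": "F[+F]F[-F]"}

for n in range(6):
    structure = grow("F", rules, n)
    print(f"generation {n}: {structure.count('F'):5d} segments")
# Segment count quadruples each generation; the structure is encoded
# by the growth process, not stored as an explicit parts list.
```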

Habits

Traditional approaches to the mind in cognitive sciences have failed to appreciate the importance of habit, for they start from the assumption that the really interesting thing about us human beings is that we are very smart. We are deliberators, we are propositional, we use reason. We perceive, we evaluate, we plan, we act. We are the rational animal. … According to this intellectualist conception, we are habit-free.

It is good to hear that philosophers no longer believe that conscious reasoning is the be-all and end-all of existence. Perhaps early cognitive scientists and AI researchers believed this too, from around 1950 up into the 70's. I would say it wasn't so much that they believed nothing else was going on in the mind; rather, they thought that reasoning–the chess playing, theorem proving, and so on–was the hard part, and that perception, natural language, intuition, and other prevalent aspects of human experience were trivial.

Largely as a consequence of the efforts in Artificial Intelligence and robotics to duplicate these capacities, but also informed by studies of learning (such as Tacit Knowledge), we now understand that most of the action of mind is going on unconsciously.

“Habit” is certainly a humble commonsense word, far less intimidating than Dual Process Theory, but this use of different terminology exaggerates the difference between Noë's own view and that of cognitive science today. For example, Daniel Willingham, a prominent cognitive psychologist of education, says: “The brain is not made for thinking.” He says that people are bad at thinking, and avoid it if at all possible.

Neuroscientist and anthropologist Terrence Deacon has made an important observation that bears on the approach to language taken by linguists working in the Chomskyian (Cartesian, intellectualist) tradition. According to that tradition, language–even more so than chess–presents us with a daunting computational problem. The young child must figure out how to use language in the course of just a few years, and on the basis of what is widely asserted to be poor linguistic data, owing to the performance errors of everyday speech. Somehow, on this basis, the child figures out the rules that demarcate the domain of the grammatical and the ungrammatical. This is an achievement one would attribute to great scientific minds, yet it is accomplished by every normal child in the world.

Deacon offers a very different suggestion. Language, Deacon points out, is like a graphical user interface … There's no mystery why we find it so easy to use the GUI systems. They were designed by us to be used by us. … Language may be an immensely complicated symbolic system, but … it doesn't just happen to be that we can figure out (miraculously!) how to use it. We ourselves built language–collectively, over thousands of years–precisely to be a way of collaborating and communicating that is easy for us.

I do greatly favor The User Interface Analogy as a way to understand why complex automated mental processes seem straightforward. Yes, our brains and language coevolved. The author presents this as a disagreement, but it isn't entirely clear where he feels the disagreement lies. Perhaps he disagrees with Chomsky that we have special mental apparatus for processing language? I don't think he believes that nothing related to language understanding takes place in the brain at all, though some of his later more extreme claims could be consistent with that.

… There is a neural structure, or rather, a place, in the fusiform gyrus (now known as the fusiform face area, or FFA) that is strongly activated not only when subjects see faces but when they think about them or imagine faces. The FFA would seem to be the place in the brain where faces are represented in consciousness. Further evidence for this is the fact that damage to the FFA would appear to produce face-specific visual deficits. This unusual disorder, Prosopagnosia, is characterized not so much by an inability to pick out faces as by an inability to recognize individual faces.

Nancy Kanwisher, who has championed this proposal, argues that work on the face-recognition system lends support to the idea that the human brain is like a Swiss Army knife, “composed of special-purpose components, each tailored to solve a single specific task.” The face-detection module, Kanwisher holds, is a domain-specific system for “representing the contents of conscious awareness.”

Now, this innate face-detection module hypothesis–let's call it the Swiss Army knife theory–has been hugely influential in cognitive neuroscience; it stands as a kind of paradigm of what the new neuroscience of perception and consciousness can achieve. Is it plausible? I think not.

While there are researchers looking for “neural correlates of consciousness”, we don't yet know which (if any) parts of the brain are specialized for consciousness, so such theories are speculative as well as semantically slippery. Let's consider just the module theory and its compatibility with functional imaging research. Even before modern imaging, it was clear from neurology that certain brain areas were very important for certain cognitive functions (spoken language, etc.), yet damage in those areas would spare many other functions. Often there were distinctive qualities in the arrangement of neurons in those areas too, as in the visual cortex above. Yes, the functional anatomy varies somewhat between individuals. Yes, people can often recover some of the function lost to brain damage by recruiting other areas, often nearby.

I feel we are justified in saying that these brain areas are “specialized for” that function, and that we can adopt the evolutionary design stance to say that the brain area was “designed for” that purpose.

There is just this sort of evidence that the FFA is particularly important in face recognition. Does this make it a “module”? That's less clear. The FFA can't recognize faces without the rest of the brain, without the eyes, and without the rest of the body's support systems. Further, as the author argues, the FFA may be important for things besides face recognition. But humans are highly social animals for whom face recognition is clearly very important, so evolutionary psychology has certainly embraced suggestions that we have specific adaptations for face recognition.

A key characteristic of the engineering conception of a module is that it has a deliberately pared-down interface which reduces undesired interactions with the rest of the system. Simply showing that a function is localized in an area isn't sufficient to prove that it is a module in this sense. Evolution may have taken an area that was initially developmentally undifferentiated, lacking the kinds of clear boundaries that exist between organs or between the cerebellum and the cerebrum, and localized a function there simply to reduce the wiring cost of implementing it. Evolutionary Psychology does tend to assume that mental functions are genetically modular, so that a new function can be added with little disturbance to existing functions, but any genetic modularity that does exist need not correspond directly to anatomic modularity in the brain.
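
For concreteness, here is a minimal sketch of that engineering sense of “module”: internals hidden behind one deliberately narrow interface. All names here are invented for illustration; no claim is made that the FFA works anything like this.

```python
class FaceRecognizer:
    """Hypothetical module: callers see only enroll() and identify();
    the representation behind them is private and can change freely."""

    def __init__(self):
        self._known = {}                  # hidden internal state

    def enroll(self, name, features):
        self._known[name] = features

    def identify(self, features):
        """The pared-down interface: feature vector in, best-match name out."""
        if not self._known:
            return None
        def sq_dist(item):
            return sum((a - b) ** 2 for a, b in zip(features, item[1]))
        return min(self._known.items(), key=sq_dist)[0]

recognizer = FaceRecognizer()
recognizer.enroll("alice", [0.1, 0.9, 0.3])
recognizer.enroll("bob", [0.8, 0.2, 0.5])
print(recognizer.identify([0.15, 0.85, 0.35]))    # -> alice
```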

I think there is a valid point here: we should be careful about how the results of functional brain mapping are interpreted in popular accounts, because these results are based on small changes in the activity of brain regions between different tasks, and detecting these changes requires a considerable amount of inference using sophisticated statistical techniques. The brain is never “doing nothing” (see Default Mode Network), so we have to be careful about what control conditions we use and about how much we read into these small changes in activity. If activity increases in a region, it is reasonable to suppose that the region does something relevant to the task, but it is another thing entirely to say that it is “made for” that task.
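
As a toy illustration of that inference problem (synthetic numbers, nothing like a real fMRI pipeline): the baseline signal is never zero, the task-related change is small, and “activation” is a statistical difference between conditions.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
baseline = 100.0                  # arbitrary units; the brain is never "off"
task_effect = 1.0                 # roughly a 1% signal change
noise_sd = 2.0

control = rng.normal(baseline, noise_sd, size=30)
task = rng.normal(baseline + task_effect, noise_sd, size=30)

t, p = stats.ttest_ind(task, control)
print(f"control mean {control.mean():.1f}, task mean {task.mean():.1f}")
print(f"t = {t:.2f}, p = {p:.4f}")
# Even a significant difference licenses only "this region does something
# relevant to the task", not "this region is made for this task".
```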

In fact, the claim that our sensitivity to faces is unaffected by experience is also not one that we can really take seriously.

Agreed. But then, who claims that? While it's not mentioned in this book, there's evidence that newborns recognize face-like patterns and prefer to look at them. So likely there is an innate preparedness to be attuned to faces, but this isn't the same as saying that we are “unaffected by experience.”

What we can reject outright, though–we should know better–is that we can make intelligible the idea that a particular bit of brain tissue (such as the FFA) can really be the source of faces in our consciousness.

That does remind us of what the author is trying to show–that consciousness lies outside the brain–but it isn't clear how he makes the connection.

It would be a mistake to think that these findings in the mathematics of computability–or that achievements in the domain of computer engineering–prove that our brains are, in effect, computers. For this claim is founded on a mistake. No computer actually performs a calculation, not even a simple one. … Crucially, understanding a problem or a computation does not consist in merely following a rule blindly. … Computers may generate an answer, but insofar as they do so by following rules blindly, they do so with no understanding.

But more important, computers don't even follow rules blindly. They don't follow recipes. Just as a wristwatch doesn't know what time it is even though we use it to keep track of the time, so the computer doesn't understand the operations that we perform with it.

The brain is not a computer, but it more closely resembles a computer than it resembles anything else (besides another brain). Any computer user can appreciate that computers are quite capable of behaving in surprising and unpredictable ways, despite their “mechanical” workings. It's productive to compare the brain to a computer because we understand how the computer works, and yet can also see that it shows mind-like capacities. See The User Interface Analogy and Reprogramming the Mind.

When the author says that “No computer actually performs a calculation”, he must be speaking in a special language of the philosophy room. Clearly he feels that meaning and understanding are crucial, and I would agree that meaning comes from some larger context where there is another agent, an assigner-of-meaning who is evaluating the behavior of the thing in question. The alternative is to say that the meaning is in the thing itself, which is inconsistent with the understanding that meaning must be relative. To the owner, the computer may be a great way to keep up with his friends, to watch movies, and so on, while to the cockroach it is a nice source of warmth and concealment.

The Chinese Room

The issue of understanding brings us to the Chinese room. Around 1985, when I was working at the periphery of AI research (on the Lisp programming language), John Searle came to CMU and gave a talk presenting his critique of AI and his Chinese Room thought experiment. (Listen to Daniel Dennett on The Chinese Room.) At the time, my conclusion was that Searle “just didn't get it” that a computer with a program was qualitatively different from a person following instructions in a book (though there is a formal equivalence). Also, I “didn't get it” why he thought that the internal workings were so important. If, without opening the door, the room convinces us that we are understood, then I didn't have a problem with saying the room understood. This thought experiment also wouldn't actually work (see Chinese Room), which puts the validity of the whole argument into doubt.
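
As a deliberately trivial sketch of the setup (my own toy, not Searle's text): a rulebook maps input symbols to output symbols, and the rule-follower applies it “blindly”. A real conversational room could never be a finite lookup table like this, which is part of why the thought experiment wouldn't actually work.

```python
# A toy "Chinese Room": canned symbol-to-symbol rules, followed blindly.
# (English glosses in the comments are for the reader; the rule-follower
# needs none of them.)
RULEBOOK = {
    "你好": "你好！",                   # "hello" -> "hello!"
    "你叫什么名字？": "我没有名字。",    # "what is your name?" -> "I have no name."
}

def room(message):
    """Look up the reply; no step here involves understanding anything."""
    return RULEBOOK.get(message, "请再说一遍。")   # fallback: "say that again"

print(room("你好"))
```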

Searle clearly has strong intuitions about what constitutes “real understanding”, “real meaning”, and so on. This was peripheral to the concerns of the AI community, because our interest was in creating systems that behaved in intelligent ways. We'd be satisfied to create the Chinese room. However, many of us were pretty darned certain that what Searle had really proved is that there is no “real understanding” in the human mind, at least as he defined it.

In this book, the author admits to extending Searle's argument to its next logical step–brains can't think either. With advances in neuroscience and non-symbolic AI, I'd guess it was becoming less tenable to argue that the brain had some special structure or properties that made it uniquely capable of “real” understanding, experiencing, etc.

What Consciousness Isn't

Unfortunately, the author is much clearer about the great many claims and views that he feels are false or unsupported than he is about what he himself actually thinks. What does he really mean when he says that consciousness is a property of a person in the world rather than of an isolated brain? Does he believe, as Indian mystics do, that the universe is consciousness itself, and we are just a small spark of that? My guess is no; he has given no sign of tending toward non-physical explanations.

Does he mean, since he has defined consciousness as genuine experience of the world, that consciousness can only exist in the setting of a person genuinely experiencing the world? Possibly, but that does seem trivially circular.

My suspicion is that the author has had vivid subjective experiences of the sort that Saul Bellow talks about in this quotation at the beginning of chapter 8:

Once in a while, I get shocked into upper wakefulness, I turn a corner, see the ocean, and my heart tips over with happiness–it feels so free! Then I have the idea that, as well as beholding, I can also be beheld from yonder and am not a discrete object but incorporated with the rest, with universal sapphire, purplish blue. For what is this sea, this atmosphere, doing within the eight-inch diameter of your skull? (I say nothing of the sun and the galaxy which are also there.) At the center of the beholder there must be space for the whole, and this nothing-space is not an empty nothing but a nothing reserved for everything.

Perhaps Noë finds he cannot believe that this sort of experience is in any nontrivial way a consequence of synapses firing. In pursuing this insight, but still trying to use the rational methods of science, he has gone way out on a limb.

If this is so, then he is wise in not trying to pin himself down with words. Though quite likely the author would be offended by this claim, his methods are the methods of the guru, trying to infuse his understanding into the minds of his students by negation, analogy, and any other indirect technique, because mystical knowledge cannot withstand a frontal verbal assault. It would melt away into nothingness. Classifying such experiences as mystical is not the same as saying they are unreal (or illusions).

Should we say that the author has fallen for the illusion of consciousness? I agree with him in objecting to the term “illusion” as a description of the constructive, synthetic way that the mind understands experience. He seems to feel that he is understanding the world just fine, and if anything is bothering him, it is his frustration that almost everyone else who follows a physical approach is “out of their head” [or mind] in not seeing things as he does. If he wants to understand this disagreement, then I think he should spend more time pondering Level Confusion and Emergence.

Conclusion

Overall, I'd have to say I was disappointed. On first reading I had noticed mostly where I agreed with his position, which was often, but on going back and rereading the passages I had tagged, I found a pattern in his claims related to science: he would start from a position that it wasn't clear anyone actually held, and then end up at a position entirely consistent both with science and with mind being a process that takes place within the brain. In other words, his arguments had the form of straw man arguments. I also noticed considerable rhetorical spin that hadn't registered on my first reading.

It seems that this book wasn't written for me, but rather for people who share the author's intuition that there is “something more” than what the mechanistic model of neuroscience offers. If you want an articulate confirmation of that sort of view, read this book.

You have to understand that I believe the normal function of Story is communicating our intuitions, so there is nothing improper about starting from an intuition and building a story around it. But in this case, I didn't get the author's intuition, and I saw very few places where he made a clear attempt to communicate what he did believe. When he addresses the central issue, what he says either seems pretty obvious or could only be true given some specialized use of language.
