Representational Opacity

Representational opacity is the idea that the workings of the unconscious are necessarily invisible to conscious awareness because the unconscious operates in a fundamentally different way. The conscious mind is largely verbal (see The Interpreter Theory), so its workings are necessarily discrete, symbolic (or qualitative) and somewhat logical, whereas the unconscious is quantitative, approximate, profusely connected and semantically ambiguous.

Artificial Intelligence

Artificial Intelligence (AI) research has succeeded in creating usefully human-like cognitive abilities in areas such as speech recognition. More interesting for our purposes is what these efforts have revealed about the mind. In our view, the most important result has been a correction of misunderstandings about the mind which had resulted from the limitations of introspection.

Broadly, one of the puzzles raised by AI research is that many things that seem easy to humans (such as recognizing an object in a picture) turn out to be quite difficult, and other things that seem hard (like logic and chess) turn out to be relatively straightforward. From the beginnings of AI in the 1950s, researchers repeatedly predicted rapid progress in specific areas such as natural language understanding, but failed to deliver. In contrast, progress was rapid in solving well-defined problems with a logical structure, especially games such as chess, where computers soon rivaled all but the very best humans.
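The game-playing side of this contrast is easy to make concrete. The sketch below is a minimal, hypothetical illustration of minimax look-ahead: once a game's states, moves and scoring rule are written down exactly, finding the best line of play is mechanical. It illustrates the style, and is not a reconstruction of any early chess program.

  # Minimax over a tiny hand-built game tree. Inner lists are choice points
  # (the players alternate), numbers are final outcomes scored from the first
  # player's point of view. A toy illustration, not a chess engine.
  def minimax(node, maximizing=True):
      if isinstance(node, (int, float)):   # leaf: the position's value
          return node
      values = [minimax(child, not maximizing) for child in node]
      return max(values) if maximizing else min(values)

  tree = [[3, 5], [2, [9, -1]], [0, 7]]    # hypothetical two-ply game
  print(minimax(tree))                     # -> 3, the best guaranteed outcome

Real chess programs add enormous amounts of engineering on top of this, but the core recipe of a well-defined problem plus brute search was within reach of early symbolic AI in a way that perception was not.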

Significant progress in fuzzier areas such as Natural Language Understanding and object recognition began only when researchers (in the 1990s) started to abandon logical symbolic AI in favor of sub-symbolic approaches that emphasized quantitative (continuous or relative) aspects of situations, often using statistical formalisms or models that roughly approximated what was known about brain structure.
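A crude way to see the difference in style: a symbolic system asks whether a discrete rule matches, while a sub-symbolic system sums graded, weighted evidence and learns the weights from examples. The snippet below is a minimal, hypothetical perceptron-style classifier, standing in for the far larger statistical and neural-network models actually used for language and vision.

  # Perceptron-style classifier: graded evidence is weighted, summed and
  # thresholded, and the weights are nudged toward whatever reduces error
  # on the examples. No explicit rules or symbols are stored anywhere.
  def predict(weights, bias, features):
      activation = bias + sum(w * x for w, x in zip(weights, features))
      return 1 if activation > 0 else 0

  def train(examples, epochs=20, rate=0.1):
      n = len(examples[0][0])
      weights, bias = [0.0] * n, 0.0
      for _ in range(epochs):
          for features, label in examples:
              error = label - predict(weights, bias, features)
              weights = [w + rate * error * x for w, x in zip(weights, features)]
              bias += rate * error
      return weights, bias

  # Toy task: judge whether an animal is dangerous from two graded cues.
  examples = [([0.9, 0.8], 1), ([0.8, 0.9], 1), ([0.2, 0.1], 0), ([0.3, 0.2], 0)]
  weights, bias = train(examples)
  print(predict(weights, bias, [0.7, 0.6]))   # -> 1 (large-ish and toothy)

Nothing in the trained weights corresponds to a statable rule; the "knowledge" is smeared across continuous quantities, which is exactly what makes this kind of processing hard to introspect or verbalize.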

In fairness to early AI researchers, these methods were only made practical by the approximately million-fold increase in computer power from 1955 to 1995, but researchers had also been blinded by the user illusion presented by the human mind. We fall for the compelling intuition that the important action in the mind is conscious, failing to appreciate that what is unconscious we uh, well … don't know about. They can't be faulted for this misunderstanding, because at the time the only available model of the unconscious was the dark, primitive Freudian version.
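For scale, that million-fold figure is just what steady exponential growth delivers; it corresponds to computing power doubling roughly every two years across those four decades (an arithmetic restatement of the figure above, not an independent estimate of hardware history):

  2^((1995 - 1955) / 2) = 2^20 ≈ 1.05 × 10^6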

What Does This Tell Us?

The difficulties of AI didn't arise primarily from failures in implementing the methods the researchers came up with, but from a failure to understand how natural intelligence actually works. For crucial capabilities such as Visual Perception, evolution had already found a solution that was approximate, ad-hoc, complex, brute-force, Massively Parallel, analog, and rather error-prone. The evolved solution was inelegant, but it was reasonably thrifty in its brain requirements, and (most of all) it was fast. This was important for keeping our rodent-like ancestors from being stepped on by dinosaurs.
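A caricature of that evolved style in code: many cheap, local, somewhat unreliable detectors all operate at once, and the answer falls out of their pooled votes rather than from any explicit reasoning step. This is a hypothetical toy, not a model of any actual visual circuitry.

  import random

  def noisy_edge_detector(patch, threshold=0.3, noise=0.1):
      # Fire if two neighboring pixels differ enough; added noise makes it error-prone.
      contrast = abs(patch[0] - patch[1]) + random.uniform(-noise, noise)
      return contrast > threshold

  def looks_like_an_edge(image_row):
      # Conceptually every detector runs at once; here we just map over positions.
      votes = [noisy_edge_detector(image_row[i:i + 2]) for i in range(len(image_row) - 1)]
      return sum(votes) >= 1                  # crude pooling of unreliable votes

  print(looks_like_an_edge([0.1, 0.1, 0.9, 0.9]))   # True: a sharp brightness step
  print(looks_like_an_edge([0.5, 0.5, 0.5, 0.5]))   # False: uniform, nothing fires

It is wasteful and inexact, but each unit is trivially simple and none of them waits on any other, which is the trade evolution made in exchange for speed.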

Our mammal ancestors had to solve many of the same problems that we do, including locating the appropriate environment, avoiding hazards, getting food, finding suitable mates, and offering some degree of parental care. This they did without any need to offer a Story about what they were doing. No need for a Story, so no need for consciousness, symbols or logic. Doing just fine without them, thankyouverymuch.

These animals weren't unconscious in the sense of being knocked out, but they were unconscious in the sense of not continuously observing what they were doing and coming up with explanations for it in terms of their (socially acceptable) intentions.

When the engine of Genetic-Cultural Coevolution got started and it was time for a verbal capacity capable of generating and evaluating explanations, evolution didn't rebuild the mind from scratch around a new architecture of logico-symbolic pattern detection and decision making. Instead, the design philosophy was, well… evolutionary. In other words, a hack. A new facility was designed with the needed behavior, but all the old stuff was left pretty much as it was. Some new connections were added so that our existing intuitive understandings could guide the stories we tell to explain those intuitions, and others so that behavior could be inhibited when our inner interpreter couldn't find an acceptable justification.

Language is valuable both as a means of social coordination (“You go this way and I'll go that way…”) and as a vehicle for cultural information transmission (“Ya see, you push down here, and Pow!”). It's hard to know what the relative adaptive values of these forms of communication were to our ancestors in different eras, but a number of puzzling quirks of our consciousness and associated verbal abilities suggest that effective persuasion (social coordination) has been weighted rather heavily, so that unbiased reasoning and impartial communication are much harder than telling a Story from your own perspective (see The Argumentative Theory).

This after-the-fact architecture of consciousness creates a highly useful disconnect between the stories we use to explain what we've done and the actual processes of motivation and judgment that generated those behaviors. Although most of the time our stories are a good-faith effort to distill some communicable essence out of what we know and understand, the disconnect means that our communications can become biased and even downright deceptive without our ever forming a conscious intention to deceive. This is one of the areas where the tension of individual/group conflict is subtly expressed.
