The word is EPTIFY. Don’t look in the dictionary. It’s too new for the dictionary. But you’d better learn what it implies.
EPTIFY. We do it to you.
The idea of mind as computer leads to an obvious question: why can't we reprogram the mind? The reprogramming metaphor comes in two major flavors: reprogramming someone else (mind control) and reprogramming yourself (presumably for greater happiness or success.) Outside of science fiction this proves challenging, due to several fundamental difficulties.
Direct approach: Reprogramming the mind by talking to someone (possibly yourself) is like trying to reprogram a mall information kiosk through the touch screen (see The User Interface Analogy.) Perhaps we can also exploit other seemingly intended interfaces such as our appreciation of music, our spiritual sensibilities and yearning for social connection, and the semi-articulate symbolic languages of art, poetry and myth.
Interface hacks: Another possibility is that we can hack the system by exploiting a seemingly accidental interface: a bug or misfeature that is ordinarily mostly harmless. Meditation, and perhaps Hypnosis, fall into this category; meditation somewhat resembles a Denial-of-service attack. This category fades into the next with body work such as massage, yoga or Tai Chi, and with tricks such as sweat lodges and fasting.
Hardware hacks: The third major thread in thought on reprogramming is that we should get out our screwdriver and open up the back of the mall kiosk. That is, we may need a hardware intervention such as drugs or genetic engineering. Though mainstream psychopharmacology carefully distances itself from such radical views, the goals and the means are basically the same.
Given the limitations and side-effects of all these approaches, a comprehensive effort at reprogramming the mind should make use of all three. This is the trinity advocated in The Happiness Hypothesis: cognitive therapy, meditation and antidepressants.
Most discussion of mental reprogramming concentrates on the “how to” above, perhaps operating under an assumption that with the right techniques, based on understanding of how the mind works, it will all become straightforward — that we will be able to feel and act more or less however we please, and (more ominously) our controllers could “do it to us.” In reality, there are several serious obstacles faced by would-be mind programmers.
What if the only problem with programming the mind is that we don't know how? In the computer analogy, the mind's interface definitely is undocumented; this is a serious, but ultimately surmountable, obstacle. The difficulty is more serious than the computer analogy implies because (unlike any software) the structure of the mind is evolved, so it need not make any sort of sense in human terms. As Marvin Minsky noted, “The brain is a hack.” Yet there are several other, intractable challenges in mind programming that arise from the mind's design, some accidental and some intentional.
The mind was not designed to run multiple programs with a computer's easy context switching, so it has neither suitable high-bandwidth interfaces nor the internal signal paths needed to directly modify the details that would have to be changed. Some behavior is hardwired (though the extent of this is a subject of heated debate; see Nature Versus Nurture), and some important things also seem to be difficult to alter (write once), such as fear learning in the amygdala and the neural remodeling that happens during developmental critical periods.
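The "write once" idea can be caricatured in code. The sketch below is a toy analogy only, not a model of any neural mechanism, and all names in it are invented: a store whose first association sticks and whose later writes are silently ignored, the way amygdala fear learning resists overwriting.

```python
class WriteOnceStore:
    """Toy analogy (not a neural model): associations behave like
    write-once memory, as fear learning in the amygdala seems to."""

    def __init__(self):
        self._memory = {}

    def learn(self, stimulus, response):
        # Only the first association sticks; later attempts to
        # "reprogram" the entry are silently ignored.
        if stimulus not in self._memory:
            self._memory[stimulus] = response

    def recall(self, stimulus):
        return self._memory.get(stimulus)


store = WriteOnceStore()
store.learn("snake", "fear")
store.learn("snake", "calm")          # attempted rewrite has no effect
assert store.recall("snake") == "fear"
```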
Metaphor notwithstanding, the brain lacks a precise analog of software. There is a clear distinction between the hardware of the brain and the nonphysical stuff that it represents, but the precise engineering analogy here is between signal processor and signal (see Mind.) Even a worm can adapt its behavior in response to the environment, but learning is not the same as programming. Minds are designed to learn, but they are not designed to be (easily) reprogrammable.
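The difference between learning and programming can be made concrete with a toy perceptron (an illustrative sketch, not a claim about how brains implement learning). Its behavior changes only through small, repeated adjustments driven by examples; there is no interface for writing the desired rule in directly.

```python
def train_perceptron(examples, epochs=20, lr=0.1):
    """Behavior changes only by small, repeated adjustments driven by
    experience; there is no interface for installing a rule directly."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for (x1, x2), target in examples:
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - pred                  # learn from mistakes only
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

# The AND function emerges gradually from exposure to examples.
examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(examples)
for (x1, x2), target in examples:
    assert (1 if w[0] * x1 + w[1] * x2 + b > 0 else 0) == target
```

Note that the rule the network ends up following is nowhere written down as a rule; it is smeared across the weights, which is one way to picture why learned behavior is hard to inspect or edit directly.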
Consider the mall kiosk again. The designer could easily have put a USB port below the screen and added an “advanced settings” menu with a “boot from external media” option. This would surely be convenient for software upgrades, so why didn't they do that? Instead they impose security by passwords and physical locks. The abuse potential of user-reprogrammed mall kiosks is obvious, but humans are in basically the same situation. We must strike some sort of balance between easy access and security. There is a lot of conceptual malware, so we need to be careful, but we also gain vastly from our communication and collaboration with others.
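The kiosk designer's tradeoff can be sketched in code. The hypothetical update routine below (all names invented for illustration) refuses to boot from arbitrary external media and instead applies only updates bearing a valid signature, roughly the role that symbols of authority and peer compliance play in gating human influence.

```python
import hmac
import hashlib

# Hypothetical kiosk firmware gate, invented for illustration: rather than
# booting from any USB stick, it applies only updates that carry a valid
# signature, trading easy access for security.
SHARED_KEY = b"kiosk-secret"

def sign(update: bytes) -> str:
    return hmac.new(SHARED_KEY, update, hashlib.sha256).hexdigest()

def apply_update(update: bytes, signature: str) -> bool:
    # Unauthenticated "reprogramming" is rejected, much as human influence
    # is gated by cues such as symbols of authority or peer compliance.
    if not hmac.compare_digest(sign(update), signature):
        return False                 # conceptual malware blocked
    # ... install the update here ...
    return True

assert apply_update(b"patch-v2", sign(b"patch-v2"))
assert not apply_update(b"malware", "0" * 64)
```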
In fact, mind control, or at least behavior control, is not only possible, it is everywhere in our lives. Social psychology experiments such as the Milgram experiment and the Stanford prison experiment show the surprising ease with which most people can be induced to do seemingly unreasonable things merely by asking (in the appropriate social context.) These results are often interpreted as showing our (regrettable) lack of proper moral judgment, which should be remedied by ethics classes, sensitivity training, or Bible study. What we actually see are the workings of social coordination: humans normally function in a social context, and our continued existence has often depended on our willingness to comply with requests that we do not understand, asking us to do things that we would not normally do.
In the current context of mind reprogramming, the point is that our ability to be influenced is subject to security measures: we must be presented with appropriate cues such as symbols of authority or the appearance of compliance by others. People have paid much attention to the possibility of mind control by political authorities, but all organized groups depend on our potential for being influenced into coordinated action. Religious groups such as churches and cults make particularly broad and heavy use of all known reprogramming techniques, though it is rarely articulated what they are doing and why.
So we can see why we wouldn't want others to be able to easily reprogram us, but why shouldn't we be able to reprogram ourselves? Why can't we simply consciously choose to do whatever we will, and to feel however we would wish? And why does feeling matter at all?
A simple (and largely correct) answer is that, as noted above, we just don't happen to have been built that way: however useful such mental flexibility might be, the architecture of our brain is inherited from much simpler animals that entirely lacked conscious thought, and for whom reprogrammability made no sense. While evolution can produce remarkably well-optimized designs, it cannot entirely transcend the historical accidents of our origin.
Representational Opacity is the related idea that we cannot have any direct control over (or clear insight into) unconscious processes such as our feelings, intuitions and motivations. We lack this access because the unconscious is not structured as a logico-symbolic computer with discrete belief states; it is instead an ad-hoc kludge of semi-regular neural networks and semi-adaptive hardwired subsystems. This is why we can effortlessly make gut judgments about complex situations, yet struggle to explain our conclusions.
In some cases our inability to control unconscious processes may also be adaptive: it may help us to succeed in life. We argued above that we are not designed to be programmable; what we now consider is that we are designed not to be programmable. This is most obviously true of life-critical processes such as visceral regulation (see Body Model.)
Also fairly obvious from the perspective of evolutionary psychology is that our unconscious perceptions, our emotions and our judgments all function the way that they do because they motivate us to act in ways that are adaptive. That is, they cause us to be successful, having children who may in turn be even more successful. The goal of evolution is not to make us happy — quite the opposite: we are evolved to be neutral or miserable much of the time. If by some drug or other means of reprogramming we were deprived of all that carefully designed misery, then we might well choose to abandon the arduous tasks of reproduction and social engagement, and might instead withdraw from the world, selecting ourselves out of the gene pool, like Tennyson's Lotos-Eaters, or the Shakers. We do not claim that such a choice would be wrong, only that such views will always hold a minority mind share, simply because moderately discontented people are so much more motivated to propagate their ideas and their genes.
Less obviously, and more controversially, there are many life decisions that are far too important to be left to conscious whim. Evolutionary Psychology predicts that mate selection is one such area, and there have been some suggestive results. This is a large area of Intentional Opacity in the human psyche.
Note that the idea of self-reprogramming makes no sense without the understanding that there are some constraints on conscious free will. We definitely cannot feel as we choose, and frequently have difficulty acting as we choose. The target of self-reprogramming is clearly the Unconscious layers of mental processing, which do almost all of the work in the mind (see Level Map.)
All of these reprogramming techniques have been in use for thousands of years, though there have been some recent advances in hardware interventions (synthetic drugs, direct brain stimulation.) So reprogramming is largely a groovy re-branding of an old bag of tricks (see NLP.) This long history makes it seem rather unlikely that there will be any sudden dramatic advances in non-hardware interventions. The troublesome nature of emotions and of the unconscious has also been a major concern of the world-wide wisdom literature, and of early philosophy. See, for example, the chariot allegory in Plato's Phaedrus.
There is really no clear division between the three categories of reprogramming techniques presented above, but such distinctions are still useful because they reflect the position of a particular intervention on a scale of design intent. When we say that a human characteristic is intentional, we mean that it is useful to treat it as a feature designed for some purpose, and that this is coherent even in the absence of an actual intelligent creator, because the action of evolutionary adaptation creates this purposefulness (see Intentional Design.)
In a high-level direct approach like talk therapy, using a clearly intended interface, we expect that the going is likely to be slow and the effects modest. With a low-level hardware hack such as a drug, effects may be sudden and dramatic, but there can also be serious side-effects. Interface hacks occupy an intermediate position: it is unlikely that they will create fast, easy change (or they would have been selected against), but it is possible that by diligent practice some individuals may make dramatic changes.
The distinction between the direct approach and interface hacks can't be appreciated without Evolutionary Psychology, because without applying evolutionary theory one cannot say whether a behavior is adaptive (appears intentional) or not. While this distinction does have some validity, remember that (just as in programming) today's hack, if proven useful enough, may be selected for, and become tomorrow's recognized interface.
If the distinction between direct interventions and hacks goes unrecognized, then the difference between mind and body approaches is often exaggerated. This probably comes in part from the Level Confusion in an assumed Mind/Body Dualism. Once we concede that mind arises from the body, then it is clear that any intervention that has any effect whatsoever must necessarily change the physical state of the body. This is equally true of talk therapy and drugs.