Daniel Dennett has developed a framework that helps to explain the philosophical basis of our approach. He observes three different viewpoints or stances from which we can try to understand the behavior and organization of a thing or person:

- The physical stance: predict behavior from the thing's physical constitution and the laws of physics.
- The design stance: predict behavior by assuming the thing was designed for a purpose and will function as designed.
- The intentional stance: predict behavior by treating the thing as a rational agent acting on its beliefs and desires to further its intentions.
We'll put our spin on these ideas, and then fuse the intentional and design stances to create intentional design.
Dennett presents the stances as theories for predicting behavior. He observes that adopting the design stance allows a considerable reduction in complexity because we can simply assume that our toaster will make toast without having to re-derive its behavior from physics each morning, and he considers the main risk in adopting the design stance to be our disappointment when the promised behavior is not forthcoming. “Oh, no! The toaster is broken!”
But for our (and his) purposes, the primary interest of the design stance is that it justifies reverse engineering: we can assume that each component is a partial solution to a design problem, determine the purpose of that part, and intelligently speculate about why the part takes this form rather than some other. “When the electricity runs through this wire, it will get hot, and the heat causes the desired browning reaction in the bread. The wire is made out of a nickel-chrome alloy because it combines high resistance, a high melting point, and resistance to oxidation. A tin wire would melt before it got hot enough to make toast.” We will see that when we use the design stance to apply reverse engineering, we create risks that are much more philosophically troublesome than burnt toast.
We feel it is productive to divide the design stance in two: the user stance and the engineer stance. The user takes the designer's stated intent at face value, and if the performance disappoints, then either he's using it wrong or the product is defective; in neither case does the user have much concern with how the designer applied the physical stance in his design. In contrast, the engineer is primarily concerned with how the components and organization achieve the intent, and with how the components function within physical and practical design constraints. This is true regardless of whether the engineer originally designed the toaster or not.
Interestingly, evolution actually creates good design purely from the user stance. Evolution has no understanding of why designs work or not; it just randomly tweaks the toaster design until it finds one that works better, then makes a lot of them. Though not at all what users are hoping for, and contrary to what engineering professors profess, evolutionary design is common in the real world of engineering — not just in the sense of taking a good design and knowledgeably improving it, but also in the sense of favoring some solutions over others because they work better for unknown reasons.
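As a toy illustration (entirely our own, with a made-up fitness function), the whole process can be written as a loop that mutates a design blindly and keeps whatever scores better; nothing in it models why a design works:

```python
import random

def toastiness(design):
    """Made-up fitness: peaks at an arbitrary 'ideal' toaster (a pure stand-in)."""
    ideal = {"wire_gauge": 0.4, "coil_spacing": 2.0, "toast_time": 150.0}
    return -sum((design[k] - ideal[k]) ** 2 for k in ideal)

def evolve(design, fitness, generations=10_000, scale=0.05):
    """Blind hill climbing: tweak at random, keep whatever scores better.

    The loop never asks *why* a variant works; it only compares observed
    performance -- design purely from the user stance.
    """
    best, best_score = dict(design), fitness(design)
    for _ in range(generations):
        candidate = dict(best)
        key = random.choice(list(candidate))          # tweak one parameter...
        candidate[key] *= 1 + random.gauss(0, scale)  # ...with no model of why
        score = fitness(candidate)
        if score > best_score:                        # "it works better"
            best, best_score = candidate, score       # ...so make a lot of these
    return best

toaster = {"wire_gauge": 1.0, "coil_spacing": 1.0, "toast_time": 100.0}
print(evolve(toaster, toastiness))  # drifts toward the (arbitrary) ideal
```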
But even if the toaster designer didn't know why some decision was right, you can still profitably apply reverse engineering by assuming that he did: you may figure out a reason that the designer himself never knew. The design stance is still productive because we know the designer's intent was to produce a functional toaster. We have exactly the same situation when we apply the design stance to the evolved designs of biological species.
Although it is most clearly explained using human-designed objects, the design stance is intended to be applicable to the results of evolution, and this is our primary interest. Why is it productive to apply the design stance to objects that don't have an intelligent designer? Our argument is that the applicability of the design stance doesn't require an intelligent designer; it only requires that there be a definite goal and a reasonably effective design process. Evolution does have a definite goal (survival and reproduction), and it has clearly been reasonably effective in achieving this goal.
In reverse engineering we try to infer the designer's subsidiary intentions from his overall intention, assuming that he pursued that intention effectively. If we use the design stance to justify reverse-engineering the products of evolution, we may be led into error. Evolution is reasonably effective, but it does not generate optimal design, so it may be that there is no functional reason why something is the way it is—it need only work well enough.
There has been considerable controversy over the pervasiveness and harmfulness of this error in evolutionary theorizing, and there is a clear political subtext to this dispute, especially with respect to cultural evolution (see Just-So Stories). However, this problem is intrinsic to reverse engineering in general, and not unique to the products of evolution. Just as the true explanation for an evolved feature may be historic or developmental, features in a human design may also be there for historic or manufacturing reasons. You might suppose that those two holes in the heater plate have something to do with managing convective heat flow, but they're really there so that the assembler can put his screwdriver through them. In an important way, reverse-engineering evolved objects is easier than reverse-engineering man-made objects, since in the case of evolution we are sure of the overall goal, whereas an antique kitchen gadget may be a complete mystery.
The intentional stance is designed to be applicable to objects that it would be philosophically controversial to attribute “real intentions” to. In particular, it can be applied to chess-playing computers, fish or even plants. It is so productive to regard these things as having intentions that even a philosopher realizes that to function in the world he must act as though these intentions were just as real as a rock. Dennett says that these intentions are just as real as the physical center of gravity. Though there is no unique material property at this particular location, the center of gravity summarizes important physical properties of the object as a whole, providing a simpler way to make correct predictions about the object's behavior.
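The analogy is exact enough to write down: for a body made of point masses $m_i$ at positions $\mathbf{r}_i$, the center of gravity (in uniform gravity, the center of mass) is the standard mass-weighted average

$$\mathbf{r}_{\mathrm{cg}} = \frac{\sum_i m_i \mathbf{r}_i}{\sum_i m_i},$$

and no individual mass need sit at that point, yet this single abstraction correctly predicts how the object balances, tips, and falls.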
It would be silly to apply the design stance to an object that was not constructed to have a particular function, or to apply the intentional stance to an object that doesn't show any purposeful behavior. We don't need to infer that a rock has an intention to sit still to predict its behavior, and unless we know that a rock has been cunningly carved and artificially weathered to look its best in a rock garden, we can't apply the design stance to explain its structure with respect to a design intent. So we can tentatively divide objects into three types according to the highest stance that can be usefully applied to them: an object is intentional, designed, or merely physical. Furthermore, since we consider evolved objects designed, and since we will likely never run into a Boltzmann brain, there is a subset relation: all intentional objects are designed, and all designed objects are physical. So intentional objects can be examined on three levels, while designed objects can be examined on two.
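As a toy illustration (the class names are ours, not Dennett's), type inheritance captures the subset relation directly:

```python
class Physical: pass                # everything obeys physics
class Designed(Physical): pass      # every designed object is also physical
class Intentional(Designed): pass   # every intentional object is also designed

# A designed object can be examined on two levels, an intentional one on three.
assert issubclass(Intentional, Physical) and issubclass(Designed, Physical)
```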
Clearly there is some gray area between intentional and designed objects, which is the core of the philosophical controversy over the reality of machine intelligence. We can regard a designed object that appears to exhibit intentional behavior as merely exhibiting a crystallized form of the designer's intention, and not as having “real intention” of its own. Many would regard thermostats and trees in this way.
In both the design and intentional stances we may be wrong about the designer's or agent's intentions. When we adopt evolutionary analysis we are assuming that successful reproduction is the ultimate goal. This is particularly controversial when (as in Evolutionary Psychology) we say that evolutionary success is the intention underlying almost all human activity.
In the intentional stance (as in the design stance) there is an issue with effectiveness. If the agent prefers ineffective means for achieving its intentions, then we will make false predictions about its behavior. This is not as bad as it seems, because we can appeal to the design stance to argue that the agent has been designed to be effective in pursuing its intentions.
There is some potential for confusing meta-circularity here, because all intentional agents are designed and all designed objects have an intentional designer (or, with evolution, a process that acts as though it has an intention, so we can productively apply the intentional stance and speak of evolution's intention).
Something particularly interesting happens when we apply the evolutionary design stance to humans because humans can report their subjective impression of their intentions. If we ask what they intend to do and why, we are almost always told that their intentions originate from subjective motivations such as emotions, morality, beauty or self-improvement, and not from the presumed design intention of reproductive success. Does this mean the evolutionary design stance doesn't apply to humans?
Suppose you are the designer of an intentional agent. What intentions would you design the agent to have? The most obvious approach is transparent intentional design: give your intention to your agents. However, if you have a good idea about the best way for the agents to achieve your intention, it would make sense to give the agents a multi-part strategy, where each part of the strategy has a specific intention attached to it and hardwired capacities for evaluating whether that intention is being successfully pursued. That is, instead of relying on the agent's brilliance to figure out a satisfactory solution, we can sketch out the shape a solution will take.
For example, the intention of a chess-playing program is to put the opposing king in checkmate while protecting its own king from the same fate. But we could give our agent a head start by telling it to “avoid material loss” and “gain positional advantage”, and give it instincts that a knight is worth three pawns and that the center of the board is a good place to be. We would likely find that this chess program with a hardwired strategy would beat the pants off a program that started out with a blank slate and had to figure everything out from first principles. We needn't explicitly give our intention to the agent at all. We could just say that it's a really bad position when your king can't move and a really good position when their king can't move. The intentional process just chugs along avoiding material loss and gaining positional advantage, then at some point a hardwired test detects checkmate and declares the game over. This is opaque intentional design. See Intentional Opacity.
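To make the opacity concrete, here is a minimal sketch of such a hardwired evaluation function in Python. The toy board representation, the weights, and the stubbed checkmate test are our own illustrative choices, not a real engine; the point is that the scoring knows only material and position, and the designer's actual goal surfaces nowhere except in the terminal test.

```python
# Toy board: a dict mapping (row, col) squares to (side, piece) tuples.
PIECE_VALUES = {"P": 1, "N": 3, "B": 3, "R": 5, "Q": 9, "K": 0}
CENTER = {(3, 3), (3, 4), (4, 3), (4, 4)}   # "the center is a good place to be"

def is_checkmate(board, side):
    """Stub for the hardwired terminal test; a real engine would compute this."""
    return False

def evaluate(board, side):
    """Score a position from `side`'s point of view.

    The evaluation knows only "avoid material loss" and "gain positional
    advantage"; checkmate appears nowhere except in the terminal test.
    """
    if is_checkmate(board, side):
        return float("-inf")                # our king can't move: really bad
    other = "black" if side == "white" else "white"
    if is_checkmate(board, other):
        return float("inf")                 # their king can't move: really good
    score = 0.0
    for square, (owner, piece) in board.items():
        value = PIECE_VALUES[piece]         # a knight is worth three pawns
        if square in CENTER:
            value += 0.5                    # small bonus for central control
        score += value if owner == side else -value
    return score

# White is up a knight that also sits on a central square: 3 + 0.5 - 1 = 2.5
board = {(4, 3): ("white", "N"), (6, 6): ("black", "P")}
print(evaluate(board, "white"))
```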