There's a sense in which what is good or evil is remarkably simple. When we consider a List of Virtues or the Seven Deadly Sins from an evolutionary perspective, our moral senses can be understood as adaptations for negotiating the realm of individual/group conflict. What is good is that which promotes productive cooperation, both within our family and our larger social groups. What is bad is that which is destructive of existing resources, divisive in our relationships, and not oriented toward overall productivity.
A good starting point for understanding morality is to look at how people normally behave in situations where moral issues are at stake. While virtues are often thought of as free-standing characteristics of behavior, when we look at concrete behavior at a fine level of detail we find that morality is highly context-dependent, and that the context is social.
There exists a long experimental tradition in social psychology—often cited, for reasons that will become obvious, under the title of “situationism”—that unsettles the globalist notions of character central in much philosophical theorizing. For example:
- Isen and Levin (1972: 387) discovered that subjects who had just found a dime were 22 times more likely to help a woman who had dropped some papers than subjects who did not find a dime (88% v. 4%).
- Darley and Batson (1973: 105) report that passersby not in a hurry were 6 times more likely to help an unfortunate who appeared to be in significant distress than were passersby in a hurry (63% v. 10%).
- Mathews and Canon (1975: 574–5) found subjects were 5 times more likely to help an apparently injured man who had dropped some books when ambient noise was at normal levels than when a power lawnmower was running nearby (80% v. 15%).
- Haney et al. (1973) describe how college students role-playing as “guards” in a simulated prison subjected student “prisoners” to intense verbal and emotional abuse.
- Milgram (1974) found that subjects would repeatedly “punish” a screaming “victim” with realistic (but simulated) electric shocks at the polite request of an experimenter.
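As a sanity check, the multipliers reported in the first three studies follow directly from the helping rates quoted above (the numbers below are simply copied from the list):

```python
# Helping rates (favorable condition, unfavorable condition)
# from the three studies cited above.
studies = {
    "Isen and Levin (dime)":     (0.88, 0.04),
    "Darley and Batson (hurry)": (0.63, 0.10),
    "Mathews and Canon (noise)": (0.80, 0.15),
}

for name, (favorable, unfavorable) in studies.items():
    ratio = favorable / unfavorable
    print(f"{name}: {ratio:.0f}x more likely to help")
# Reproduces the stated 22x, 6x, and 5x figures.
```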
These experiments are not aberrational, but representative: social psychologists have repeatedly found that the difference between good conduct and bad appears to reside in the situation more than in the person; both disappointing omissions and appalling actions are readily induced through seemingly minor situational features. What makes these findings so striking is just how insubstantial the situational influences effecting troubling moral failures seem to be; it is not that people lack standards for good conduct, but that they can be induced to abandon them with such ease. (Think about it: a dime [50 cents, inflation adjusted] may make the difference between compassionate and callous behavior.) At the same time, research predicated on the attribution of character and personality traits has enjoyed limited success in predicting behavior; standard measures of personality have very often been found to be only tenuously related to behavior in the particular situations where the expression of a given trait would be expected. See Moral Psychology: Empirical Approaches.
Another durable and puzzling result from social psychology is the Attitude Behavior Gap: we often do one thing and say another. Although this can be seen as an instance of a more general mismatch between story and behavior, it has been studied most in the context of moral behavior. See Attitude Behavior Gap.
Animals complex enough to have behavior at all tend to behave in adaptive ways: ways that benefit their own survival and reproduction, even when this places them in direct conflict with other members of their species. In evolutionary theory, such behavior is freely referred to as “selfish” without any negative moral connotation. Social species do exhibit cooperative behavior as well as competition, though where altruism is present it usually favors close relatives.
Social science research shows that humans are no exception. A large part of the complex situation-dependence of moral behavior seems to involve judging what degree of self-serving behavior is acceptable in each particular situation, and yet (because of the Attitude-Behavior Gap) we are not aware that we are doing this. See also Story, Intentional Opacity and Representational Opacity.
Humans are unique in the degree to which we cooperate with others who are substantially unrelated to us, yet we are also often hostile to people from other groups. For humans, it makes sense to cooperate with people who share our goals and values, to do favors for neighbors who may someday return them, and to seek alliances with others who seem “like us”. Yet social psychologists have found that we still tend to favor whatever group we find ourselves part of, even when the experiment has been constructed so that group assignment is entirely random, we will never meet the other group members even once, and the group has no purpose and no future. See The Cultural Animal (book).
Many philosophers have abandoned Normative Ethics, that is, the attempt to determine how people ought to behave.
Some philosophers have been showing an encouraging interest in this sort of actual moral behavior. See Moral Psychology: Empirical Approaches for a readable overview.
It's interesting to consider how single cells control their behavior in ways that (to a human) have moral aspects, even though cells clearly have no awareness of these implications. One example is the way that bacteria colonize a surface by forming a Biofilm. This film often contains a host of unrelated types of bacteria that form a cooperative community, where different bacteria eat different nutrients in the environment, and their excretions are often further digested by other bacteria. The community lives in a slimy extracellular matrix secreted by some of its members, which helps it stick to the surface and can also protect it from some environmental hazards. Bacteria use a form of chemical communication known as Quorum Sensing to activate their biofilm-building behavior.
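The quorum-sensing switch can be sketched as a density-dependent threshold. This is a toy illustration, not a biochemical model; the secretion rate and threshold below are invented numbers:

```python
def quorum_reached(population, secretion_rate=1.0, threshold=100.0):
    """Toy quorum-sensing switch: autoinducer concentration is taken
    as proportional to population density, and biofilm-building
    behavior activates only once it crosses the threshold.
    (Illustrative only; the rate and threshold are made up.)"""
    concentration = population * secretion_rate
    return concentration >= threshold

# A lone bacterium stays solitary; a dense colony builds biofilm.
print(quorum_reached(5))    # False
print(quorum_reached(500))  # True
```

The point of the mechanism is that costly cooperative behavior (secreting matrix) is only worth switching on when enough neighbors are present to share the work.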
Another example from multicellular organisms (including humans) is Apoptosis, or programmed cell death. This is when a cell that is diseased or no longer necessary disassembles itself for easy cleanup, killing itself in the process. It is easy to see this as a sort of altruism, traditionally considered a highly moral quality of behavior.
The Bible tells us that humans learned the difference between good and evil in the Garden of Eden when they ate the forbidden fruit from the Tree of Knowledge of Good and Evil. This displeased God, so he ejected Adam and Eve from Eden, and with this loss human suffering began. Evolutionary thinking about the origins of human morality carries a somewhat similar mixed message: human moral intuition is an imperfect adaptation to the imperfect human condition. If we see morality as productive cooperation, then moral failures result in Social Conflict, at both the individual and group levels.
We almost always cooperate with others in our close social groups, but we also have to be wary of letting these same people take advantage of us. Even as we cooperate, we also compete to get ahead (individual/individual conflict). Individuals and societies also need to consider the potential for oppression and exploitation, where we may be forced to make an unreasonably great sacrifice of our own interests for a benefit that goes mainly to other group members, as well as the opposing problem of individuals choosing to favor their own interests at the cost of group productivity (individual/group conflict). Passions Within Reason examines how cooperative moral feelings and motivations could have evolved in the inevitable presence of competition. In particular, its author Robert Frank argues that the human condition drives the evolution not only of “good” moral emotions such as loyalty, but also of “bad” emotions such as our disproportionate anger toward those who have harmed our interests.
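This individual/group conflict is often formalized as a public goods game, which makes the free-rider problem concrete. A minimal sketch; the endowment, multiplier, and group size here are illustrative assumptions, not from any source discussed above:

```python
def public_goods_payoffs(contributions, endowment=10, multiplier=1.6):
    """Public goods game: each player contributes part of an endowment
    to a common pot; the pot is multiplied and shared equally.
    A free rider keeps the endowment AND collects a share of the pot."""
    pot = sum(contributions) * multiplier
    share = pot / len(contributions)
    return [endowment - c + share for c in contributions]

mixed  = public_goods_payoffs([10, 10, 10, 0])    # one free rider
all_in = public_goods_payoffs([10, 10, 10, 10])   # full cooperation

# The free rider out-earns each cooperator (22 vs. 12), yet the group
# as a whole does better under full cooperation (total 64 vs. 58).
```

Defection pays individually even though it lowers group productivity, which is exactly the tension that cooperative moral emotions are argued to counteract.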
We also need to consider the costs and benefits of the ways in which our group chooses to cooperate and compete with other groups. This group-level interaction can be more or less productive, just as is the case with individual cooperation and competition. Trade and cooperation toward shared goals can be win-win, where everyone benefits. At the other extreme, war is highly destructive, yet behavior during war undeniably has moral aspects such as voluntary self-sacrifice. In The Righteous Mind, Jonathan Haidt argues that we have a mechanism he calls “the hive switch” where, when our group interests are threatened, we rally to support the group, and temporarily reduce our striving to get ahead through individual competition.
Humans aren't disembodied brains (see Embodied Cognition), but the complexity of the human condition comes largely from our having complex brains that are adapted to life in a complex socially constructed world. This social world is populated by many other such beings, each with their own complex life of the mind. Human behaviors, including the moral ones, must emerge from the structure and behavior of the brain, which we can understand at the Levels of biology, chemistry and physics.
If evolutionary reasoning speculates about how we came to have our moral capacities, analysis at these lower levels seeks to explain the underlying moral mechanisms. One research area is the use of functional imaging to locate the neural basis of moral reasoning (sometimes called Neuroethics). The most interesting conclusion so far is that people arrive at predictably different answers to moral dilemmas according to which brain areas they use (that is, which type of reasoning). The chemical basis of neural operation is also being investigated, as in work on the social effects of Oxytocin; see Paul Zak on Trust, Morality and Oxytocin and Oxytocin changes political preferences.
A central issue in evolutionary ethics is the Is vs. Ought problem. Evolutionary Psychology offers plausible explanations of how our moral emotions could have come to be, as part of a story of Human Nature, but this doesn't tell us what we ought to do, only what we're inclined to do. Evolution provides a theoretical framework for explaining and predicting many human behaviors: “good”, “bad” and morally neutral.
The Righteous Mind offers a convincing psychological view of the foundations of our moral intuition, but this has also been the foundation of human social behavior throughout history, in all its wonder and tragedy. Can't we do any better than this?
Our intuitive morality tends to accept:
All societies also confront issues of fairness and justice. Do the rewards and punishments given seem proportionate to the contributions or offenses? Problems with fairness and justice come not so much from holes in our moral senses as from social tradeoffs. Opposing views on these tradeoffs are backed by conflicting moral intuitions, leading to political conflict. If there is a regrettable moral blindness here, it is our tendency (via in-group/out-group dynamics) to see perfect solutions where none exist. Yet to say there is a tradeoff is not to say that all answers are equally good. The quality of a particular answer depends on the specifics of the challenges the society is facing. Unfortunately, Prediction is Intractable, so progress comes mostly from trying things more-or-less at random and seeing if they “work”.
There is evidence we have been doing better over time (see The Better Angels of Our Nature), so there is reason for hope.