Laws against murder are so common across diverse cultures that it’s hard not to think of them as a rule embedded within our genes. Even in times of war, we’re more attuned to bluff and posture than to killing the humans labelled as our enemies. In the Vietnam War, it was calculated that there was only one hit for every 52,000 bullets fired. Yet is this aversion to killing the same as a scientific law? And if so, can morality be justified by such rules of nature?
In recent years there has been a trend to view morality less as a product of our cultural heritage and more as a behaviour that benefits our survival as a species. Some moral concepts are easy to tie in with evolution – a species comfortable with the wanton, indiscriminate killing of its own members might not be as fit as one that isn’t. The ‘yuck’ factor incest provokes in so many societies could imply a fundamental aversion to the problems associated with inbreeding. Yet one need only look at cultural relativity and the variations in morality to see there is far more to the question than genetics is capable of explaining.
At first glance it seems to be the old nature vs. nurture dilemma. Yet moving past that dichotomy, one is left pondering the extent to which variations in genetics might determine a community’s moral values, and struggling to explain how morality can vary so significantly over just a few generations.
Numerous philosophers have proposed universal systems of morality throughout history. Plato maintained that it was possible to consider something as virtuous based on its metaphysical form, or ideal concept. The German philosopher Immanuel Kant objectified morality by describing a concept called the ‘categorical imperative’, whereby, similar to the golden rule, a person should act in the same manner they would expect of anybody in the same situation. Other systems are inseparable from religious belief, deferring to a supernatural entity for a system of moral laws.
Using science not just to explain the role of moral behaviour, but to quantify and evaluate it, barely dates back a century or two. Charles Darwin considered altruism in animals an evolved social trait, but struggled to describe how it might benefit individuals within a group. Fellow biologist Thomas Huxley expressed doubt on the issue in his 1893 work ‘Evolution and Ethics’, claiming that the prevalence of both moral and immoral behaviour makes it difficult to objectively ascribe one with greater benefits via reason alone. He writes, “But as the immoral sentiments have no less been evolved, there is so far as much natural sanction for the one as the other. The thief and the murderer follow nature just as much as the philanthropist.” Similarly, the Scottish philosopher David Hume warned against confusing what ‘is’ – as we observe it – with what ‘ought’ to be as judged by reason.
There are those who believe it is possible to scientifically derive how we ‘ought’ to behave. The writer Sam Harris proposes that the intrinsic goal of morality is to promote the ‘flourishing of conscious minds’. In other words, our morals should ideally sustain our sense of personal and communal wellbeing, thus allowing our community to persist and possibly grow. He argues that science can indeed be applied to analysing morality.
At its core, morality is about values, which are themselves a hierarchy of desires as perceived by an individual within particular contexts. For example, I can value money over food, unless I’m starving. I might value the idea of a family as a core unit of society, but only if it’s defined on the basis of a heterosexual couple. On the other hand, science deals in facts, which don’t vary with subjective context and must be worded to reflect this.
Harris argues that values can also be worded in a factual manner, in that it can be factually demonstrated that some people desire happiness or good health. This, in turn, provides a means of quantifying moral behaviour.
I’d venture that few people would argue that this is wrong, at least within some contexts. I might believe in treating other people well because I can then expect them to treat me well in return. The so-called golden rule can improve interpersonal relations. Any person’s behaviour can be evaluated against the probability of attaining their fact-based desire. If science can identify a probable contradiction between the behaviour and the desire, the moral behaviour can be viewed as flawed.
But not all moral behaviours have such explicit links. If I argue it’s wrong to abort a foetus, do I do it solely because I think the foetus might feel pain? Is it because I fear a slippery slope? Is it because I have a blanket rule about living systems? Do I derive it from a fear of God’s punishment? Most important of all, do I engage in this moral behaviour out of a deliberate attempt to achieve a clear goal, or is it simply something I’ve inherited from my community? We might invent an outcome, but there’s no guarantee that the moral behaviour has any clear intention.
Neurologically speaking, the evolution of our behaviour as social animals could well explain a tendency to develop a culture of morals, arising from the same tribal tendencies that shape many of our thinking behaviours in order to create a cohesive social structure. In one sense, murder within one’s tribe can be considered antagonistic to a biological law of nature, drawing a line between a scientifically determined rule and a moral code on how we ‘ought’ to behave.
By the same token, we could claim a moral obligation to have sex, not pollute our environment and encourage diversity within our gene pool, so long as we observe the context of community wellbeing. We could categorically describe certain behaviours as universally ‘good’ and others as fundamentally ‘bad’ within an absolute context of the health and wellbeing of the collective.
But what of communities that demonstrably contravene such laws? What of tribes that engage in ritualistic sacrifice, kill enemies or commit infanticide? Within such contexts, it seems that our psychology not only allows these actions; they don’t appear to be necessarily detrimental to the continued existence of the group. Rape, genital mutilation, oppression of certain classes or castes…all could be argued, on the back of an evolutionary appeal, to be morally sound, given that such communities persist and certain groups within them consider their overall wellbeing to have been improved. The Mayans, known for killing their own in dedication to their deities, did not die off as a direct result of this particular cultural practice and viewed their lives as improved by it. There is little evidence that infanticide in cultures such as ancient Greece and Rome had a negative impact on communal wellbeing; if anything, in many communities, especially those of prehistory, the practice might be considered beneficial in negotiating times of hardship.
Harris tends to gloss over the subjectivity of wellbeing as it varies between individualist and collectivist communities, arguing that there is a spectrum of ‘brain states’ which all people could agree constitute good and bad. Without the means to test this, we can only agree to disagree. Yet it wouldn’t be difficult to find anomalies among concepts we would readily assume fell along this spectrum. Of course, it might be said that such anomalous individuals are necessarily mad, deranged or foolish…yet this invites circular reasoning, not to mention ignoring the cultural contexts that put subtle variations on how each of us interprets a sensation as ultimately good or bad (hair shirt, anyone?).
In addition, he presumes that through careful consideration, morals can be judged as absolutely right or wrong according to whether they are conducive to the flourishing of conscious minds. He feels that, with time, science will overtake religion as a means of determining which behaviours are moral. Paradoxically, what of evidence suggesting that a community of religious faith carries personal and communal benefits? By Harris’s own argument, declaring belief in a god to be scientifically immoral could itself be detrimental, even if the belief behind the behaviour is unsupported.
Academically, tuning the concept of morality to concern behaviours that benefit evolutionary success, or even the wellbeing of an individual or a community, is feasible, even if it is itself a moral value by definition. Practically, however, if we take the position that morality evolved, it has never had academic foresight, operating instead under selective conditions that see corrupting morals die off as the community crumbles. For some people, forcing these progressive mechanics of human morality to submit to an audit of reason could be like using a moped to pull a semi-trailer; the engine simply isn’t built for the task. Morals evolve under social forces, not isolated personal reflection.
Historically, nature has blindly maximised the odds: greater diversity makes for more rolls of the dice and a greater chance of another generation of life persisting. Using science to estimate which behaviours are most compatible with evolution is more akin to the frequency matching preferred by our brain’s left hemisphere. Given enough information, it might be possible to bias the odds and determine which morals are truly ‘good’ and which are ‘bad’ for the survival of the community or the happiness of a single person. But is this truly the same as morals being objectively right or wrong?
Even if we can evaluate our morals accordingly, and presuming this evaluation is itself desired, we’re still left with brains that resist abandoning old moral codes at the whim of reason. What might be academically negotiable isn’t necessarily pragmatically so.
Science can supply information that could influence our choice of moral behaviours, of course. A person might support the death penalty, but only if they’re confident in the guilt of the offender. They could be persuaded to circumcise their child if they could be convinced that the benefits outweighed the risks. Drug use might be tolerated, but only if it’s concluded that the chance of physiological harm is minimal.
Should we allow people to choose the moment at which they will die? Is it right to allow a woman to abort her own foetus? Do the rights of the many truly outweigh the rights of the few? Each can be associated with wellbeing in some context, but only if morality is shoe-horned into a tight definition.
Of course, communities are far from distinct tribes defined by exclusive sets of values; there is significant diversity of values within any community. A scientist can be religious, holding rational values that inform some decisions and spiritual values that inform others. Conversely, a priest can oppose abortion because it contravenes what they believe to be a religious law, while using science to support their belief in recycling waste. Drawing a neat line around any one group of individuals and defining them by an exclusive set of values is nigh impossible, given we all belong to multiple tribes.
As such, we commonly confuse how we ‘should’ behave with how we do behave, conflating morals with an appreciation of science in an effort to justify our beliefs. When that happens, we can be quick to mislabel a belief as scientific in an effort to resolve conflicts within our spectrum of personal and communal values.
My arguments aren’t exactly novel. Sean Carroll does a better job of outlining them here, and Harris offers a strong (but, in my opinion, still insufficient) rebuttal here. And the discussion is one well worth having. Yet at a time when atheists are eager to challenge how society views religion, risking the promotion of poorly reasoned conclusions in an effort to convince the public that the faithful don’t have a monopoly on morality isn’t an advisable strategy.