Breaking the imaginary fence

Broken brain

At first thought, Amy Winehouse and Anders Behring Breivik have very little in common, other than having their names dominate front-page headlines for a period in late July 2011. Yet for both of these individuals, the questions of mental illness and culpability are significant in the public discussion of their recent actions.

When Amy Winehouse died on July 23, no cause of death had been established. Nonetheless, her recreational use of drugs became the focal point of public discussion – either directly presumed to be the agent of her passing, or debated circumstantially as a contributing factor. A conflation of blame and pity whispered through the global community, ranging from outrage that more wasn’t done by her nearest and dearest to sadness that it was unlikely salvation of any sort was possible.

When Breivik carried out his attacks in Oslo and on Utøya on July 22, no motive had been established. Yet bit by bit, news of his political leanings leaked into the public sphere, and soon a profile was being constructed like a psychological jigsaw puzzle that sought to explain how an individual could commit such a vile act.

In each case, the term ‘mental illness’ was thrown about. Sometimes flippantly, sometimes with all of the expertise an armchair psychologist can muster. If not these words, then synonyms were applied. Insane, mad, fucked-up, cracked, lunatic…it goes on. In each case, the actions of these two individuals were deemed to be the work of broken brains.

Others, however, have found the flipside of this assessment uncomfortable. If the pair had a mental illness, then their culpability was reduced. If Amy Winehouse’s addiction was a disease, it would suggest she wasn’t fully responsible for her drug abuse. Anders Breivik’s insanity could mean he wasn’t fully responsible for his brutal homicide. After all, aren’t diseases pitiful things?

The dichotomy of broken and normal neurology is deeply embedded in our social thinking, paradoxically serving both as a form of reduced culpability and as a means of discrimination. You’re simultaneously pitied and expelled from the community. Meanwhile, a distinct line is drawn, creating a border between those with acceptable cognitive functions and those whose behaviour is deemed functionally anomalous.

The problem here lies with the concept of blame, which is wrapped up in consequences of retribution and punishment. If Amy Winehouse didn’t have a disease, her decisions led to an outcome that was ‘deserved’. If Breivik isn’t insane, he should suffer severe discomfort for his actions. In neither case is a pragmatic solution being offered. Rather than trying to understand the minutiae of influences and causes that lead to drug abuse or extremist ideologies, mental illness becomes a simplistic term used to isolate oneself from undesirable behaviour.

In the end, mental health is the real loser. Variations in brain function that make social interactions difficult, dangerous or uncomfortable for individuals are turned into caricatures that serve not to provide potential solutions, but to make others feel better about their own positions.

Not being a psychologist, nor having access to any of Amy Winehouse’s or Anders Behring Breivik’s medical history, I cannot diagnose either one of them. I cannot attribute terms or attach labels to categorise their behaviour. But I do know that as humans, they experienced variations in neurology that produced actions that are – for lack of a better term – regrettable. In the end, it doesn’t matter much which side of the imaginary mental health divide they fall on.

Published on July 26, 2011

The Others

 

Apothecary: medicines stupid people use (n.b., I’m not one of them).

“How can skeptics have a dialogue with homeopaths?” Michelle asks that modern well of insight and wisdom, ‘Yahoo’. “[W]ithout pointing out the stupidity of their arguments? I’m thinking about the paranoid ramblings about big pharma as well as the ignorance of simple science.”

Ignoring for a moment the framing of Michelle’s query, I was interested to scan through the responses for a solution two centuries of debate on the topic might have overlooked.

“Crucially, homeopaths lack the educational level to understand how their potions can only be water,” says Dave, a top contributor. Another top contributor says, “They only start with the fallacies to avoid providing evidence – so no matter what they crap on about, keep dragging them back to evidence.”

“Never argue with an idiot, they’ll drag you down to their level and beat you with experience,” says Flizbap 2.0.

And on it goes. There are some who advocate avoiding engagement altogether, without resorting to well-poisoning or flippant antagonism, but for the most part the advice involves engaging in a behaviour anthropologists and other social scientists refer to as ‘othering’.

Regardless of the intentions, the practice involves judgments of inferiority or impeded progress based on observations of contrasting social beliefs, behaviours and values. It is born of ethnocentrism, where observations are made with the assumption that one’s own experiences define what is objectively desirable. The result is a sense that a group of people, ‘them’, is inferior to one’s own company, or ‘us’, on account of differences in beliefs and values.

By the dawn of the 21st century, however, ethnology had influenced the developed world enough that it is now difficult to ‘other’ non-local cultures without seeming naïve or xenophobic. Most people have come to see that subsistence farming or hunter-gathering is not a mark of inferiority or low intelligence, and that limited technological dependence is a socioeconomic issue rather than a cultural or cognitive failing. Openly claiming a village in the Papua New Guinea highlands is ignorant, stupid or indulgent in logical fallacies would probably raise eyebrows, so such discussions of cultural practices tend to be couched in less derisive terms. While the debate over racial intelligence might continue, it’s harder to find people who justify their beliefs by pointing to contrasting traditions, lifestyles or cultural practices.

Within national borders, however, ethnocentrism returns with all of the ignorance of our colonial ancestors. If there’s one habit we can’t seem to shake, it’s that our nationalistic heritage has embedded in us a strong correlation between culture and country, as if by being white and sharing an accent our cultural values must be homogeneous. As a result, othering occurs far more easily with those who appear (at least superficially) to share an ethnic background.

What’s missed is that within our own community there are shades of culture and subculture that pool, ebb and overlap. Healthcare is just one example, yet one with more significant consequences than other cultural behaviours such as art or language. Framing medicine as a scientific product leads many to interpret healthcare as a ‘culture without a culture’. Science and medicine are typically presented as timeless, truthful and, above all, objectively correct. It’s strictly biophysical, with its sociocultural component reduced to a vestigial nub.

As such, it’s far easier to other those who demonstrate contrasting medical behaviours. Lack of intellect or education can easily be held up, without evidence, as the reason for their alternative beliefs, since it’s assumed that all else must be equal. Archaic and empty solutions such as ‘better education’ or legal enforcement are then suggested as ways of making people see sense.

In truth, there is a multitude of reasons why people use alternative medicines, few of which (if any) have much of a direct link with level of education or cognitive deficiencies. Rather, views on what constitutes good evidence, familial traditions, cultural identities and distrust of contrasting sociocultural groups play far greater roles in determining health behaviour than university degrees or brain function. In other words, the very same factors medical anthropologists deal with abroad when studying any other health culture are responsible for the alternative beliefs in our own community.

The question of how best to address culture change is also just as relevant here as it is elsewhere. It’s all well and good for African or Indigenous communities to retain their cultural heritage, but what does one do when it conflicts with treatments for HIV, alcohol abuse or diabetes? This is a question a good friend of mine is currently researching through the Australian National University; as you might expect, the resulting discussion demands more than a simplistic ‘they need better education’ or ‘they’re just stupid’. Yet it’s not a novel dilemma; whether it’s vaccination, water sanitation, nutrition or infant care, the question of how to effectively assist diverse communities in improving and maintaining their health and wellbeing has occupied anthropologists for years, producing rich debate and diverse results.

Ironically, those who propose answers for Michelle seem to identify as individuals who would normally value science as a way of presenting useful solutions to a problem. Why, then, do so few seem to be informed by research? Why do the answers lack citations or references, resting on little more than personal hunches or folk wisdom?

Based on my own experience, few would be inclined to look further, as they already assume themselves to be correct. Science works for some things…unless you already think you know, at which point it’s all rhetoric and pedantry. Social science is a soft science, therefore gut feelings and intuition are just as useful (if not more so).

Michelle’s question and many of the answers reveal the roadblock we face here in our efforts to address alternative healthcare. Rather than treating it as a legitimate sociological question, where science might provide some insight, the issue is reduced to a dichotomy of smart vs. stupid, of educated vs. naïve. When those are the terms we use, we certainly can’t hope for any productive answers.

Saint Nicked

Ho, ho, ho – Merry Christmas, evil child!

I’m far from the first parent in history to weigh up the pros and cons of indulging their child in the wonder that is the Santa mythos.

On the one hand, as somebody who values rational thinking, I find it hard to reconcile my desire to encourage an appreciation for the wonders of the universe with the necessity to lie, or at least play word games, in order to weave such fantasy. But I also cannot downplay the power of social beliefs, nor simplify the processes by which we learn how to think critically into a strictly didactic exercise.

In raising this dilemma among other parents, a common response is an aghast, ‘Oh, but Santa adds such magic to a child’s life,’ followed by the insinuation that it’s important to facilitate imagination and wonder with folktales. Perhaps, but the Santa mythos hardly communicates the social values I want my son to embrace.

Firstly, I’ve never been comfortable with using material goods – whether it’s a present, food or money – as a means of discipline. Sure, teaching a kid to work towards a reward is great, but telling a child that a magical being will bring them a toy if they behave isn’t in line with my form of behavioural management (either as a teacher or parent). Thankfully, in modern Australia I can avoid the need to deal with the traditional European contrasting figures of Black Peter or the devil Krampus, who physically punish or kidnap those who are naughty.

Secondly, the ‘magic’ of Christmas becomes tangled with the joy of a stranger bringing them something. Rather than being about family and the joy of exchanging gifts, cooking together or visiting friends and relatives you don’t get a chance to see often during the year, the excitement centres on the supernatural transportation of loot into the living room on Christmas Eve.

If it sounds like I’m staunchly anti-Sinterklaas, you’d be half right. There is a side of me that feels there are potentially useful lessons that can be communicated via the Nick narrative. For example, unlike most religious beliefs, it is one that is traditionally accepted as having an end point. Nobody expects adults to continue believing that a fat, bearded elf will bring them white goods and iPods care of gravity-defying hoofed mammals if they refrain from breaking the law. This ‘exit clause’ (*ahem*) provides social pressure for older children to critically consider the role and persistence of myths.

Just as I believe religious schools combined with critical thinking in the curriculum create more atheists, it’s possible that the Santa mythos – in spite of its conflicting values – might paradoxically teach the beauty of science and reason in the face of impossible tales. Could the disenchantment of a relatively ‘harmless’ belief system act as a practice run for religious stories? Is there merit in the thought of Santa’s demise paving the way for other iconic deaths?

Maybe. But that still doesn’t make it an easy sell for me. I’m not sure how I’ll comfortably nod at my son at Christmas when he asks if Santa is coming. I don’t think I’ll find it any easier to spin answers to ‘Is he real?’, claiming ‘He is if you believe enough’. Yes, I can turn it back on him when he asks, reflecting the query and nudging his critical evaluation in the right direction, as I would with any other curious inquisition.

But as others speak enthusiastically of this traditional jolly fellow’s nocturnal galumphings, how loud will my silence on the matter sound? One thing I’m sure of: on Christmas morning in years to come, when he is old enough to appreciate it, there will be at least one gift under that tree addressed to my son that isn’t labelled ‘from Santa’.

Published on December 16, 2010

ACARA’s bent spoon

Charles Darwin - the perfect anti-creationist picture

Let me be upfront and honest about something – I’m no great fan of anti-accolades at the best of times. You know the ones: an ‘award’ for the worst dressed/stupidest/most laughable actor/book/production/product and so on. I simply don’t see the point, beyond the smug satisfaction the awarder feels in their superiority over the awardee. But, given human nature, I rarely say much, as it’s hardly worthy of comment.

I didn’t attend TAMOz this year for numerous reasons. But I did hear on the grapevine that the annual ‘Bent Spoon Award’ was presented in absentia to the Australian Curriculum, Assessment and Reporting Authority (ACARA) for its forthcoming National Curriculum science framework.

And, frankly, I was pretty gobsmacked.

First, some background. The Bent Spoon Award is an annual raspberry blown by the Australian Skeptics at ‘the perpetrator of the most preposterous piece of paranormal or pseudo-scientific piffle.’ As such, it must have been decided that of all the paranormal and pseudoscientific acts, products and claims made in 2010, the National Curriculum was the worst offender. Given I’d spent a good part of the year reading through it, I naturally presumed there was a sizable chunk of witchcraft, alchemy, geocentrism, voodoo or spiritualism I must have missed.

Fortunately that’s not the case. What was it that was so offensive in this draft framework? According to the nomination, it was for ‘devaluing the teaching of evolution in schools, allowing creationism to be taught, and for teaching alternative theories such as traditional Chinese medicines and Aboriginal beliefs as part of the Science Curriculum.’

Not for removing evolution altogether, and going down the dark path of Texan education. Not for putting creationism or intelligent design or Raelianism into the year 7 classroom. Nothing quite so definitive. It was a vague ‘devaluing’ of education that Australian Skeptics wanted to advertise to the world as the most deserving of scorn over all other media items, pseudoscientific products or audacious claims. One would hope that they had some pretty strong evidence to support the connection between ACARA’s choice of content and a loss of educational ‘value’.

Education is, of course, valuable. Anything that reduces the effectiveness of the system in preparing children and adolescents for their future should be addressed. A quick flick through my blog is enough to gauge my views as far as the topic goes. Indeed, it’s so important, I take claims that it is ‘devalued’ quite seriously.

Before we look at the criticism and ask whether the curriculum really warrants its prize, it might pay to quickly establish some context.

Australian education is a responsibility of the state level of government. As such, all states have an education act that prescribes what and how people will be taught important knowledge and skills. An example can be found at Queensland’s Department of Education and Training site. All states have similar documentation, which in part dictates the creation of units or subjects in schools that reflect a curriculum created by a state body. How this is assessed varies between the states, but typically includes the collection of student work samples along with a syllabus that demonstrates significant effort has been made to follow the curriculum’s framework.

It has long been a concern that, while there is strong similarity between all state curricula, the order and timing of the skills and content taught has the potential to create difficulties for families who move between states. A student whose family moves from Perth to Sydney might be disadvantaged by having missed some topics while repeating others. This led to growing support for a national curriculum.

In April 2008 a national curriculum board was put together with the purpose of meeting this challenge. In early 2010, ACARA released Phase 1 of its kindergarten to year 10 National Curriculum – covering mathematics, English and science – for feedback. It later opened Phase 2, covering geography, languages and the arts, for public review.

It appears that science is the field that will be revised most heavily, as the dominant criticism in the feedback was that a sense of perspective was lost by focusing on certain details. Going into detail on my personal views of the document’s strengths and shortfalls would fall outside the scope of this post. Overall, in spite of some small reservations, I felt that as far as science went it was a robust framework that balanced the diverse needs of the community it was serving.

Having been responsible for working on unit plans and analysing curriculum frameworks, I can sympathise with their creators, especially when faced with a wall of teachers and community members who feel their particular pet field is more important than the others. We’re all familiar with the ‘overcrowded curriculum’, and deciding what is vital for the future citizen to know is no easy task. Keeping everybody happy while delivering a working structure is a nightmare.

Which brings us to the nomination for the Bent Spoon. It states that the teaching of evolution has:

‘become virtually sidelined, appearing in one section of Year 10 only.’

Evolution does indeed appear explicitly as the first point in Science Understanding in Year 10. I would ask the author where else he feels it should appear. In my experience as an educator, covering it explicitly as a topic in its own right is difficult prior to a student’s ability to grasp abstract concepts, which more or less rules out going into much depth before years 6 or 7. I’ve never covered it as a concept before year 10. Of course, content based on biological categorisation – which is important for grasping evolution later – can be covered, and is, in year 4. Physiological adaptations are usually covered in year 8 or 9, while fossils (and discussions of ancient animals) are covered as early as about year 3. Genetics typically works alongside evolution in year 10 (as is the case here). So while the word appears only once, concepts that are fundamental to understanding evolution litter the curriculum.

“The evolution of man is not part of the syllabus, and all the examples of evolution given as ‘Elaborations’ in the syllabus deal with non-controversial or small scale applications of natural selection (e.g. ‘the impact of cane toads on the evolution of Australian Predators such as snakes.’)”

The evolution of man has never been part of the K-10 curriculum of any state to my knowledge (happy to be corrected), as it is covered in detail in senior subjects. One can argue for it being moved forward or made compulsory, and I can think of arguments for and against doing so. However, I can’t help but feel that this isn’t being argued with a pragmatic necessity in mind, but rather as a defensive posture against potential religious indoctrination.

But more on that later.

The elaborations in the document aren’t official requirements, but suggested guidelines on how a topic might be approached. They are typically framed with relevance to prior knowledge in mind, making it easier for the teacher to determine a useful way to introduce the topic.

What of teaching that dreadful ‘Aboriginal’ science? What does the framework have to say about that?

“Specific knowledge and understanding of Aboriginal and Torres Strait Islander peoples is incorporated where it relates to science and relevant phenomena, particularly knowledge and understanding of nature and of sustainable practices. For example, systematic observations by Aboriginal and Torres Strait Islander cultures over many generations of the sequence of various natural events contribute to our scientific understanding of seasons in Australia.”

And,

“Students should learn that all sorts of people, including people like themselves, use and contribute to science. Historical studies of science, mathematics and technology in the early Egyptian, Greek, Chinese, Arabic and Aboriginal and Torres Strait Islander cultures extending to modern times will help students understand the contributions of people from around the world.”

Given my upcoming book goes into some depth on this topic, I’d like to think I have something of an informed opinion. I feel the demarcation problem makes it difficult to describe precisely what science is and isn’t. Indigenous Australians have developed systems of describing nature, which might be viewed as scientific. They definitely created technology. Personally, I’m inclined to define science by the value it places on describing natural events in an impersonal fashion, so I’d see this as an interesting contrast, where I’d present to students the question ‘Is science the same as technology?’. Nonetheless, I think it’s a valuable contrast in the classroom, and one (when taught in accordance with the skills implicit in the curriculum) that would benefit students’ comprehension of how science isn’t simply defined.

“Thus the syllabus leaves open the option of teaching Creationism, while teaching just the basic theory of Natural Selection to Year 10 students only, omitting any reference to the evolution of man, and not mentioning Darwin once. This must be of great concern to sceptics as this document will form the basis of Science teaching for the next generation.”

Here’s the core of the matter. ACARA was found guilty of peddling pseudoscience because it didn’t seal up the cracks, preventing the possibility of creationists slipping their venom into the ears of kiddies. That’s it – it didn’t account for the reds under the beds.

What troubles me most is that in spite of a greater focus on good scientific thinking, in spite of a move towards evaluative tools and promoting a critical epistemology more than any prior document, ACARA got its wrist slapped because it didn’t put in enough Darwin. The assumption is that this is what impedes creationism in the classroom – evolution put in bold ink and underlined in a state-enforceable document.

There is a valid concern about pseudoscience slipping into the classroom. I’ve encountered it all before – teachers who believe that a planet’s spin causes gravity, that conspiracy theories hold water, or that dolphins are a type of fish. But greater detail in the curriculum would not have made a lick of difference, given the existing documents failed to dissuade such errors or misinformation. Putting another evolution topic in primary school and adding Darwin to the list of great scientists will not safeguard schools against creationist teaching, and for that to be the focus of attack demonstrates a complete ignorance of pedagogy.

For skeptics, nothing should be more important than the arming of students with the fundamental skills that allow them to hear nonsense and identify it. This is not a question of content, but skills.

What does make a difference, then? A number of things. Better trained teachers. A school culture that reinforces cross-curricular skills. Improved career prospects for teaching and non-teaching staff. Good resources. Community involvement in the classroom. It’s not a simple solution, let alone one that can be addressed through a liberal dose of public mockery. Rather, differences are made by proactively contributing to the discussion with good information on pedagogy, cognitive psychology, best classroom practices etc.

The draft science curriculum is certainly not without its faults, and can definitely stand improvement. Nobody would argue otherwise. Informed and constructive feedback is vital, and a group like the Australian Skeptics could well have pulled together a team of sceptical educators and produced well-supported feedback grounded in research, which could have been promoted on its website to demonstrate a measured approach to education. Discussing how to go about this would be worthwhile.

Yet what is the likelihood of the group being taken seriously by any curriculum body when its response is instead to ridicule ACARA, effectively calling its members pseudoscientists because the framework doesn’t contain enough evolution for its liking? Not great, I’m afraid.

Published on November 27, 2010

The moral objective

Laws against murder are so common across diverse cultures that it’s hard not to think of them as a rule embedded within our genes. Even in times of war, we’re more attuned to bluff and posture than to killing the humans labelled as our enemies. In the Vietnam War, it was calculated that there was only one hit for every 52 000 bullets fired. Yet is this aversion to killing the same as a scientific law? And if so, can morality be justified by such rules of nature?

There has been a trend in recent years to view morality less as a product of our cultural heritage and more as a behaviour that benefits our survival as a species. Some moral concepts are easy to tie in with evolution – a species that is comfortable with wanton, indiscriminate killing of its own members might not be as fit as one that isn’t. Incest provokes a ‘yuck’ factor in so many societies that it could imply a fundamental aversion to the problems associated with inbreeding. Yet one need only look at cultural relativity and the deviations in morality to see there is far more to the question than genetics is capable of explaining.

At first glance it seems to be the old nature vs. nurture dilemma. Moving past that dichotomy, however, one is left pondering the extent to which variations in genetics might determine a community’s moral values, and how morality can vary so significantly over just a few generations.

Numerous philosophers have proposed universal systems of morality throughout history. Plato maintained that it was possible to consider something virtuous based on its metaphysical form, or ideal concept. The German philosopher Immanuel Kant objectified morality through a concept called the ‘categorical imperative’, whereby, in a manner similar to the golden rule, a person should act only in ways they would expect of anybody else in the same situation. Other systems are inseparable from religious beliefs, deferring to a supernatural entity for a system of moral laws.

Using science not just to explain the role of moral behaviour, but to quantify and evaluate it, barely dates back a century or two. Charles Darwin considered altruism in animals to be an evolved social trait, but struggled to describe how it might benefit individuals within a group. Yet his fellow biologist Thomas Huxley expressed doubt on the issue in his 1893 book, ‘Evolution and Ethics’, claiming that the prevalence of both moral and immoral behaviour makes it difficult to objectively ascribe greater benefit to one over the other via reason alone. He writes, “But as the immoral sentiments have no less been evolved, there is so far as much natural sanction for the one as the other. The thief and the murderer follow nature just as much as the philanthropist.” Earlier still, the Scottish philosopher David Hume had warned against confusing what ‘is’ – as we observe it – with what ‘ought’ to be as judged by reason.

There are those who believe it is possible to scientifically derive how we ‘ought’ to behave. The writer Sam Harris proposes that the intrinsic goal of morality is to promote the flourishing of conscious minds. In other words, ideally our morals should sustain our sense of personal and communal wellbeing, allowing our community to persist and possibly grow. He argues that science can indeed be applied to analysing morality.

At its core, morality is about values, which are themselves a hierarchy of desires as perceived by an individual within particular contexts. For example, I can value money over food, unless I’m starving. I might value the idea of the family as a core unit of society, but only if it’s defined on the basis of a heterosexual couple. Science, on the other hand, deals in facts, which don’t vary with subjective contexts and must be worded to reflect this.

Harris argues that values can also be worded in a factual manner, in that it can be factually demonstrated that some people desire happiness or good health. This, in turn, provides a means of quantifying moral behaviour.

I’d venture that few people would argue that this was wrong, at least within some contexts. I might believe in treating other people well because I can then expect them to treat me well in return. The so-called golden rule can improve interpersonal relations. Any person’s behaviour can be evaluated against the probability of attaining their fact-based desire. If science can be used to determine a probable contradiction, it’s possible that the moral behaviour can be viewed as flawed.

But not all moral behaviours have such explicit links. If I argue it’s wrong to abort a foetus, do I do it solely because I think the foetus might feel pain? Is it because I fear for a slippery slope? Is it because I have a blanket rule about living systems? Do I derive it from a fear of God’s punishment? Most important of all, do I engage in this moral behaviour out of a deliberate attempt to achieve a clear goal, or is it simply something I’ve inherited from my community? We might invent an outcome, but there’s no guarantee that the moral behaviour has any clear intention.

Neurologically speaking, the evolution of our behaviour as social animals could definitely explain a tendency to develop a culture of morals, arising from the same tribal tendencies as many of our thinking behaviours in order to create a cohesive social structure. In one sense, murder within one’s tribe can be considered antagonistic to a biological law of nature, drawing a line between a scientifically determined rule and a moral code on how we ‘ought’ to behave.

By the same token we can state we have a moral obligation to have sex, not pollute our environment and encourage diversity within our gene pool, so long as we observe the context of community wellbeing. We can categorically describe certain behaviours as universally ‘good’ and others as fundamentally ‘bad’ in an absolute context of the health and wellbeing of the collective.

But what of communities that demonstrably contravene such laws? What of tribes that engage in ritualistic sacrifice, kill enemies or commit infanticide? Within such contexts, it seems that our psychology not only allows it, the action doesn’t appear to be necessarily detrimental to the continued existence of the group. Rape, genital mutilation, oppression of certain classes or castes…all could be argued, on the back of an evolutionary appeal, to be morally sound, given such communities persist and certain groups within them consider their overall wellbeing to have been improved. The Maya, known for sacrificing their own to their deities, did not die off as a direct result of this particular cultural practice, and evidently viewed their lives as improved by it. There is little evidence that infanticide in cultures such as ancient Greece and Rome had a negative impact on the wellbeing of the community; if anything, the practice in many communities, especially those of prehistory, might be considered beneficial in negotiating times of hardship.

Harris tends to gloss over the subjectivity of wellbeing as it varies between individualist and collectivist communities, arguing that there is a spectrum of ‘brain states’ which all people could agree constitute good and bad. Without the means to test this, we can only disagree. Yet it wouldn’t be difficult to find anomalies for concepts we would readily assume fell onto this ‘spectrum’. Of course, it might be said that such anomalies are necessarily the mad, the deranged or the foolish…yet this invites some circular reasoning. Not to mention an ignorance of cultural contexts that can put subtle variations on how we each interpret something as ultimately a good sensation or a bad sensation (hair shirt, anyone?).

In addition, he presumes that through careful consideration, morals can be judged absolutely right or wrong according to whether they are conducive to the flourishing of conscious minds. He feels that with time, science will overtake religion as a means of determining which behaviours are moral. Paradoxically, what of the evidence suggesting that a community of religious faith carries personal and communal benefits? If we rely on Harris’s argument, declaring that belief in a god is not scientifically moral could itself prove detrimental, even if the belief behind the behaviour is unsupported.

Academically, tuning the concept of morality to concern behaviours that benefit evolutionary success, or even the wellbeing of an individual or a community, is feasible, even if that is itself a moral value by definition. Practically, however, if we’re to take the position of an evolved morality, it has never had academic foresight; it operates under selective conditions that see corrupting morals die off as the community crumbles. For some people, forcing these progressive mechanics of human morality to submit to an audit of reason could be like using your moped to pull a semi-trailer. The engine simply isn’t up to the task. Morals evolve under social forces, not isolated personal reflection.

Historically, nature has blindly maximised the odds. Greater diversity makes for more rolls of the dice and a greater chance of another generation of life persisting. Using science to estimate which behaviours are most compatible with evolution is more akin to the frequency matching preferred by our brain’s left hemisphere. Given enough information, it might be possible to bias the odds and determine which morals are truly ‘good’ and which are ‘bad’ for the survival of the community or the happiness of a single person. But is this truly the same as morals being objectively right or wrong?

Even if we can evaluate our morals this way – and presuming such evaluation is in its own right desired – we’re still left with brains that resist abandoning old moral codes at the whim of reason. What might be negotiable academically isn’t necessarily negotiable pragmatically.

Science can supply information that could influence our choice of moral behaviours, of course. A person might support the death penalty, but only if they’re confident in the guilt of the offender. They could be persuaded to circumcise their child if they could be convinced that the benefits outweighed the risks. Drug use might be tolerated, but only if it’s concluded that the chance of physiological harm is minimal.

Should we allow people to choose the moment at which they will die? Is it right to allow a woman to abort her own foetus? Do the rights of the many truly outweigh the rights of the few? Each can be associated with wellbeing in some context, but only if morality is shoehorned into a tight definition.

Of course, far from being distinct tribes defined by an exclusive set of values, any community contains a significant diversity of values. A scientist can be religious, holding rational values that inform some decisions and spiritual values that inform others. Conversely, a priest can oppose abortion because it contravenes what they believe to be a religious law, while using science to support their belief in recycling waste. Drawing a neat line around any one group of individuals and defining them by an exclusive set of values is nigh impossible, given we all belong to multiple tribes.

As such, we commonly confuse how we ‘should’ behave with how we do, conflating morals and an appreciation of science in an effort to justify our beliefs. When that happens, we can be quick to mislabel a belief as scientific in an effort to satisfy any conflict between our spectrum of personal and communal values.

My arguments aren’t exactly novel. Sean Carroll does a better job of outlining them here, and Harris offers a strong (but, in my opinion, still insufficient) rebuttal here. And the discussion is one well worth having. Yet at a time when atheists are eager to challenge how society views religion, risking the promotion of poorly reasoned conclusions in an effort to convince the public that the faithful don’t have a monopoly on morality isn’t an advisable strategy.

Published on October 15, 2010