The Others

 

Apothecary

Medicines stupid people use (n.b., I'm not one of them).

“How can skeptics have a dialogue with homeopaths?” Michelle asks that modern well of insight and wisdom, ‘Yahoo’. “[W]ithout pointing out the stupidity of their arguments? I’m thinking about the paranoid ramblings about big pharma as well as the ignorance of simple science.”

Ignoring for a moment the framing of Michelle’s query, I was interested to scan through the responses for a solution that two centuries of debate on the topic might have overlooked.

“Crucially, homeopaths lack the educational level to understand how their potions can only be water,” says Dave, a top contributor. Another top contributor says, “They only start with the fallacies to avoid providing evidence – so no matter what they crap on about, keep dragging them back to evidence.”

“Never argue with an idiot, they’ll drag you down to their level and beat you with experience,” says Flizbap 2.0.

And on it goes. There are some who advocate avoiding engagement without resorting to well-poisoning or flippant antagonism, but for the most part the advice involves engaging in a behaviour anthropologists and other social scientists refer to as ‘othering’.

Regardless of the intentions, the practice involves judgments of inferiority or impeded progress based on observations of contrasting social beliefs, behaviours and values. It is born of ethnocentrism, where observations are made with the assumption that one’s own experiences define what is objectively desirable. The result is a sense that a group of people, ‘them’, is inferior to one’s own company, or ‘us’, on account of differences in beliefs and values.

By the dawn of the 21st century, however, ethnology had influenced the developed world enough that it has become difficult to ‘other’ non-local cultures without seeming naïve or xenophobic. Most people have come to see that subsistence farming or hunter-gathering is not a mark of inferiority or low intelligence, and that limited technological dependence is a socioeconomic issue rather than a cultural or cognitive failing. Openly claiming that a village in the Papua New Guinea highlands is ignorant, stupid or given to logical fallacies would probably raise eyebrows, leading such discussions on cultural practices to be couched in less derisive terms. While the debate over racial intelligence might continue, it’s harder to find people who justify their beliefs by pointing out contrasting traditions, lifestyles or cultural practices.

However, within national borders, ethnocentrism returns with all of the ignorance of our colonial ancestors. If there’s one habit we can’t seem to shake, it’s that our nationalistic heritage has embedded in us a strong correlation between culture and country, as if by being white and sharing an accent our cultural values must be homogeneous. As a result, othering occurs far more easily with those who appear (at least superficially) to share an ethnic background.

What’s missed is that within our own community there are shades of culture and subculture that pool, ebb and overlap. Healthcare is just one example, yet one that has significant consequences beyond other examples of cultural behaviour such as art or language. Medicine in the context of a scientific product leads many to interpret healthcare as a ‘culture without a culture’. Science and medicine are typically presented as timeless, truthful and, above all, objectively correct. It’s strictly biophysical, with its sociocultural component reduced to a vestigial nub.

As such, it’s far easier to other those who demonstrate contrasting medical behaviours. Lack of intellect or education can easily be held up, without evidence, as the reason for their alternative beliefs, as it’s assumed that all else must be equal. Archaic and empty solutions such as ‘better education’ or legal enforcement are then suggested as ways of making people see sense.

In truth, there is a multitude of reasons why people use alternative medicines, few of which (if any) have much of a direct link with level of education or cognitive deficiencies. Rather, views on what constitutes good evidence, familial traditions, cultural identities and distrust of contrasting sociocultural groups play far greater roles in determining health behaviour than university degrees or brain function. In other words, the very same factors medical anthropologists deal with abroad when studying any other health culture are responsible for the same alternative beliefs in our own community.

The question of how best to address culture change is also just as relevant here as it is elsewhere. It’s all well and good that African or Indigenous communities retain their cultural heritage, but what does one do when it conflicts with treatments for HIV, alcohol abuse or diabetes? This is a question a good friend of mine is currently researching through the Australian National University; as you might expect, the resulting discussion demands more than a simplistic ‘they need better education’ or ‘they’re just stupid’. Yet it’s not a novel dilemma; whether it’s vaccination, water sanitation, nutrition or infant care, the question of how to effectively assist diverse communities in improving and maintaining their health and wellbeing has occupied anthropologists for years, producing rich debate and diverse results.

Ironically, those who propose answers for Michelle seem to identify as individuals who would normally value science as a way of presenting useful solutions to a problem. Why then do few seem to be informed by research? Why are the answers without citations or references, seeming to be based on little more than personal hunches or folk wisdom?

Based on my own experience, few would be inclined to look further, as they already assume they’re correct. Science works for some things…unless you already think you know, at which point it’s all rhetoric and pedantry. Social science is a soft science, therefore gut feelings and intuition are just as useful (if not more so).

Michelle’s question and many of the answers reveal the roadblock we face here in our efforts to address alternative healthcare. Rather than treating it as a legitimate sociological question, where science might provide some insight, the issue is reduced to a dichotomy of smart vs. stupid, of educated vs. naive. When those are the questions we ask, we certainly can’t hope for any productive answers.


Saint Nicked

Ho, ho, ho - Merry Christmas, evil child!

I’m far from the first parent in history to weigh up the pros and cons of indulging their child in the wonder that is the Santa mythos.

On the one hand, as somebody who values rational thinking, I find it hard to reconcile my desire to encourage an appreciation for the wonders of the universe with the necessity to lie, or at least play word games, in order to weave such fantasy. But I also cannot downplay the power of social beliefs, nor simplify the processes by which we learn how to think critically into a strictly didactic exercise.

In raising this dilemma among other parents, a common response is an aghast ‘Oh, but Santa adds such magic to a child’s life,’ followed by the insinuation that it’s important to facilitate imagination and wonder with folktales. Perhaps, but the Santa mythos hardly communicates the social values I want my son to embrace.

For example, I’ve never been comfortable with using material goods – whether it’s a present, food or money – as a means of discipline. Sure, teaching a kid to work towards a reward is great, but telling a child that a magical being will bring them a toy if they behave isn’t in line with my form of behavioural management (either as a teacher or parent). Thankfully in modern Australia, I can avoid the need to deal with the traditional European contrasting figures of Black Peter or the devil Krampus, who physically punish or kidnap those who are naughty.

Secondly, the ‘magic’ of Christmas is tangled up with the joy of a stranger bringing them something. Rather than being about family and the joy of exchanging gifts, cooking together or visiting friends and relatives you don’t get a chance to see often during the year, the excitement is more about the supernatural transportation of loot into the living room on Christmas Eve.

If it sounds like I’m staunchly anti-Sinterklaas, you’d be half right. There is a side of me that feels there are potentially useful lessons that can be communicated via the Nick narrative. For example, unlike most religious beliefs, it is one that is traditionally accepted as having an end point. Nobody expects adults to continue to believe that a fat, bearded elf will bring them white goods and iPods care of gravity-defying hoofed mammals if they refrain from breaking the law. This ‘exit clause’ (*ahem*) provides social pressure for older children to critically consider the role and persistence of myths.

Just as I believe religious schools combined with critical thinking in the curriculum create more atheists, it’s possible that the Santa mythos – in spite of its conflicting values – might paradoxically teach the beauty of science and reason in the face of impossible tales. Could the disenchantment of a relatively ‘harmless’ belief system act as a practice run for religious stories? Is there merit in the thought of Santa’s demise paving the way for other iconic deaths?

Maybe. But that still doesn’t make it an easy sell for me. I’m not sure how I’ll comfortably nod at my son at Christmas when he asks if Santa is coming. I don’t think I’ll find it any easier to spin answers to ‘Is he real?’, claiming ‘He is if you believe enough’. Yes, I can turn it back on him when he asks, reflecting the query and nudging his critical evaluation in the right direction, as I would with any other curious inquiry.

But as others speak enthusiastically of this traditional jolly fellow’s nocturnal galumphings, how loud will my silence on the matter sound? One thing I’m sure of: on Christmas morning in years to come, when he is old enough to appreciate it, there will be at least one gift under that tree addressed to my son that isn’t labelled ‘from Santa’.


Know when to hold ’em…know when to fold ’em…

Pillory

Nobody look at this man!

The Streisand Effect: a phenomenon whereby action undertaken to suppress or deny the dissemination of information directly causes said information to spread further than if no action had been taken. Named after singer and actress Barbra Streisand, whose attempts in 2003 to legally coerce a photographer and website into withdrawing a picture of her mansion from public viewing backfired, resulting in it quickly being shared across the web.

While the internet has provided better tools for this unintended consequence, it is not necessarily a feature of technology. Much as the old idiom ‘don’t think of a black cat’ will probably inspire a greyscale image of a kitty to pop up briefly in your mind’s eye, any effort that is made to prevent information from being passed from one source to another risks attracting attention. Time and again people protest movies or literature out of a wish to censor them from public access, only serving to fan the flames of an audience’s curiosity. The lure of the forbidden fruit, and all that…

Amazon is currently being chided over a certain book on pedophilia. I won’t bother linking to it, or the news stories, in the name of avoiding hypocrisy, but it’s not difficult to find. Ironically, I would have no idea this book existed if not for the waves of protests and abundance of news items on it. I wouldn’t be at all surprised if Amazon pulled the book under the barrage of complaints, but in the meantime a crappy, self-published piece of nonsense has reached the ears of even more potential buyers. Had it sat quiet and ignored on Amazon’s shelves, it would have gathered dust as most other low-profile vanity press publications do.

Choosing our battles and knowing when to hold our tongues is a difficult thing to do. It’s hard not to succumb to certain socially infused reactions to that which disgusts us, pointing to it while telling people not to look. Similarly, when faced with lunacy or nonsense, we find it hard not to argue with it, contributing further to its dissemination while doing little to suppress its influence. Yet some fights are best won by walking away and attacking from a more strategic angle.


The moral objective

Laws against murder are so common across diverse cultures that it’s hard not to think of them as a rule embedded within our genes. Even in times of war, we’re more attuned to bluff and posture than to murdering humans labelled as our enemies. In the Vietnam War, it was calculated that there was only one hit for every 52 000 bullets fired. Yet is this aversion to killing the same as a scientific law? And if so, can morality be justified by such rules of nature?

A trend has emerged in recent years to view morality less as a product of our cultural heritage and more as a behaviour that benefits our survival as a species. Some moral concepts are easy to tie in with evolution – a species that is comfortable with wanton, indiscriminate killing of its own individuals might not be as fit as one that isn’t. The ‘yuck’ factor incest provokes in so many societies could imply a fundamental aversion to the problems associated with inbreeding. Yet one need only look at cultural relativity and the deviations in morality to see there is far more to the question than genetics is capable of explaining.

At first glance it seems to be the old nature vs. nurture dilemma. Moving past that dichotomy, however, one is left pondering the extent to which variations in genetics might determine a community’s moral values, and how morality can vary so significantly over just a few generations.

Numerous philosophers have proposed universal systems of morality throughout history. Plato maintained that it was possible to consider something as virtuous based on its metaphysical form, or ideal concept. The German philosopher Immanuel Kant objectified morality by describing a concept called the ‘categorical imperative’, where, similar to the golden rule, a person should act in the same manner they would expect of anybody else in the same situation. Other systems are inseparable from religious opinions, deferring to a supernatural entity for a system of moral laws.

Using science to explain not just the role of moral behaviour, but to quantify and evaluate it, barely dates back a century or two. Charles Darwin considered altruism in animals as an evolved social trait, but struggled to describe how it might benefit individuals within a group. Yet fellow biologist Thomas Huxley expressed doubt on the issue in his 1893 book, ‘Evolution and Ethics’, claiming that the prevalence of both moral and immoral behaviour makes it difficult to objectively ascribe one with greater benefits via reason alone. He writes, “But as the immoral sentiments have no less been evolved, there is so far as much natural sanction for the one as the other. The thief and the murderer follow nature just as much as the philanthropist.” Similarly, the Scottish philosopher David Hume warned against confusing what ‘is’ – as we observe it – with what ‘ought’ to be as judged by reason.

There are those who believe it is possible to scientifically derive how we ‘ought’ to behave. The writer Sam Harris proposes that the intrinsic goal of morality is to promote the ‘flourishing of conscious minds’. In other words, ideally our morals should lead to the sustaining of our sense of personal and communal wellbeing, thus allowing our community to persist and possibly grow. He argues that science can indeed be applied to analysing morality.

At its core, morality is about values, which in themselves are a hierarchy of desires as perceived by an individual within particular contexts. For example, I can value money over food, unless I’m starving. I might value the idea of a family as a core unit of society, but only if it’s defined on the basis of a heterosexual couple. On the other hand, science deals with the absolutes of facts, which don’t vary with subjective contexts and are required to be worded to reflect this.

Harris argues that values can also be worded in a factual manner, in that it can be factually demonstrated that some people desire happiness or good health. This, in turn, provides a means of quantifying the moral behaviour.

I’d venture that few people would argue that this was wrong, at least within some contexts. I might believe in treating other people well because I can then expect them to treat me well in return. The so-called golden rule can improve interpersonal relations. Any person’s behaviour can be evaluated against the probability of attaining their fact-based desire. If science can be used to determine a probable contradiction, it’s possible that the moral behaviour can be viewed as flawed.

But not all moral behaviours have such explicit links. If I argue it’s wrong to abort a foetus, do I do it solely because I think the foetus might feel pain? Is it because I fear for a slippery slope? Is it because I have a blanket rule about living systems? Do I derive it from a fear of God’s punishment? Most important of all, do I engage in this moral behaviour out of a deliberate attempt to achieve a clear goal, or is it simply something I’ve inherited from my community? We might invent an outcome, but there’s no guarantee that the moral behaviour has any clear intention.

Neurologically speaking, the evolution of our behaviour as social animals could definitely explain a tendency to develop a culture of morals, arising from the same tribal tendencies as many of our thinking behaviours in order to create a cohesive social structure. In one sense, murder within one’s tribe can be considered antagonistic to a biological law of nature, drawing a line between a scientifically determined rule and a moral code on how we ‘ought’ to behave.

By the same token we can state we have a moral obligation to have sex, not pollute our environment and encourage diversity within our gene pool, so long as we observe the context of community wellbeing. We can categorically describe certain behaviours as universally ‘good’ and others as fundamentally ‘bad’ in an absolute context of the health and wellbeing of the collective.

But what of communities which demonstrably contravene such laws? What of tribes that engage in ritualistic sacrifice, kill enemies or commit infanticide? Within such contexts, it seems that our psychology not only allows these acts, but the acts don’t appear to be necessarily detrimental to the continued existence of the group. Rape, genital mutilation, oppression of certain classes or castes…all could be argued, on the back of an evolutionary appeal, to be morally sound, given such communities persist and certain groups within them consider their overall wellbeing to have been improved. The Mayans, known for killing their own in dedication to their deities, did not die off as a direct result of this particular cultural practice and viewed their lives as improved by it. There is little evidence of infanticide committed in cultures such as ancient Greece and Rome having had a negative impact on the wellbeing of the community; if anything, the practice in many communities, especially those of prehistory, might be considered beneficial in negotiating times of hardship.

Harris tends to gloss over the subjectivity of wellbeing as it varies between individualist and collectivist communities, arguing that there is a spectrum of ‘brain states’ which all people could agree constituted good and bad. Without the means to test this, we can only disagree. Yet it wouldn’t be difficult to find anomalies for concepts we would readily assume fell into this ‘spectrum’. Of course, it might be said that such anomalous people were necessarily mad, deranged or foolish…yet this invites some circular reasoning, not to mention an ignorance of the cultural contexts that can put subtle variations on how we each interpret something as ultimately a good or a bad sensation (hair shirt, anyone?).

In addition, he presumes that, through careful consideration, morals can be viewed as absolutely right or wrong in the context of whether they are conducive to the flourishing of a conscious mind. He feels that, with time, science will overtake religion as a means of determining which behaviours are moral. Paradoxically, what of evidence suggesting that a community of religious faith carries personal and communal benefits? By Harris’s own argument, claiming that belief in a god is not scientifically moral could itself be detrimental, even if the intended action behind the behaviour is unsupported.

Academically, attuning the concept of morality to concern behaviours benefiting evolutionary success or even the wellbeing of an individual or a community is feasible, even if it in itself is a moral value by definition. Practically, however, if we’re to take a position of evolved morality, it has never had academic foresight, operating under selective conditions that see corrupting morals die off as the community crumbles. For some people, forcing these progressive mechanics of human morality to subscribe to an audit of reason could be like using your moped to pull a semi-trailer. The engine simply isn’t compatible with this task. Morals evolve under social forces, not isolated personal reflection.

Historically, nature has blindly maximised the odds. Greater diversity makes for more rolls of the dice and a greater chance of another generation of life persisting. Estimating which behaviours are most compatible with evolution by using science is more akin to the frequency matching preferred by our brain’s left hemisphere. Given enough information, it might be possible to bias the odds and determine which morals are truly ‘good’ and which are ‘bad’ for the survival of the community or the happiness of a single person. But is this truly the same as morals that are objectively right or wrong?

Even if we can evaluate our morals accordingly, and presuming this value in its own right is indeed desired, we’re still left with brains that resist the urge to abandon old moral codes at the whim of reason. What might be an academic negotiation isn’t necessarily a pragmatic one.

Science can supply information that could influence our choice of moral behaviours, of course. A person might support the death penalty, but only if they’re confident in the guilt of the offender. They could be persuaded to circumcise their child if they could be convinced that the benefits outweighed the risks. Drug use might be tolerated, but only if it’s concluded that the chance of physiological harm is minimal.

Should we allow people to choose the moment at which they will die? Is it right to allow a woman to abort her own foetus? Do the rights of the many truly outweigh the rights of the few? Each can be associated with wellbeing in some context, but only if morality is shoe-horned into a tight definition.

Of course, far from being distinct tribes defined by an exclusive set of values, there is significant diversity of values within any community. A scientist can be religious, holding rational values that inform some decisions and spiritual values that inform others. Conversely, a priest can be against abortion because it contravenes what they believe to be a religious law, while using science to support their belief in recycling waste. Drawing a neat line around any one group of individuals and defining them by exclusive sets of values is nigh impossible, given we all belong to multiple tribes.

As such, we commonly confuse how we ‘should’ behave with how we do, conflating morals and an appreciation of science in an effort to justify our beliefs. When that happens, we can be quick to mislabel a belief as scientific in an effort to satisfy any conflict between our spectrum of personal and communal values.

My arguments aren’t exactly novel. Sean Carroll does a better job of outlining them here, and Harris offers a strong (but, in my opinion, still insufficient) rebuttal here. And the discussion is one well worth having. Yet at a time when atheists are eager to challenge how society views religion, risking the promotion of poorly reasoned conclusions in an effort to convince the public that the faithful don’t have a monopoly on morality isn’t an advisable strategy.


There’s no ‘I’ in ‘hive’

Back in 2004 I spent my Christmas break at a seaside town in Cornwall. The damp British winter had kept the beaches rather quiet, which was fine by me. The scenery was stunning, even if I wasn’t tempted to go for a dip in the surf. One memory that sticks out is sitting by the cliffs with my girlfriend, watching a flock of small birds ride the air currents against the sunset.

I’m fascinated by swarms of anything. Researchers from Hungary have recently applied models that describe the behaviour of particles to understand how flocks of birds move as a single unit. The collective power of honey bees aggressively defending their nest against a Japanese hornet using nothing more than their body heat still leaves me amazed. Ants marching to their eventual demise in a spiral of death, commanded by a simple rule of ‘follow the leader’, serve to show what happens when nature suffers a glitch.

Yet when I stop to think about it, my fascination with collective behaviour rests on a rather arbitrary delineation of individual units. I don’t think of my genes as individuals, nor my cells. Yet I’m more or less a colony of membrane-enclosed entities that can be thought of as cooperating components. My brain draws a line around the outside of my skin, so I think of all similar lines as boundaries.

A single bacterium is an individual, while a colony isn’t. Yet once cells take on specific roles – once each differentiates with respect to the others – the collective suddenly becomes an individual. However, a bee with physiological and behavioural differentiation within a hive doesn’t make us view the hive as an individual. It’s still a single bee.

It might seem rather pedantic to give this much thought, but when it comes to understanding behaviour – especially that of humans – I feel the innate biases we hold about the concept of the individual can lead us astray. We’re so hung up on concepts of free will and theory of mind that the idea of being influenced by the collective to any significant extent feels somehow heretical. It’s easy to see hive-mind behaviour in others, of course…but never ourselves.

That’s not to say we’re at the complete mercy of our social group. But how we behave as individuals can’t be easily removed from the context of a collective either.

The true beauty of swarms, hives and flocks is that relatively simple rules can create the illusion of something complex, like a Mandelbrot set giving rise to a piece of fractal artwork. Of course, as a line of ants following a circular pheromone trail to starvation discovers, all rules have their limits. Finding ours is a glorious challenge that is necessary not just for us as individuals, but for us as a species.
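To see just how little it takes for flock-like behaviour to emerge, here is a minimal boids-style sketch in Python. It isn’t the Hungarian researchers’ model; the rules, thresholds and weights are arbitrary choices purely for illustration, but three local rules are enough to produce coordinated motion with no leader in sight.

# A minimal boids-style flocking sketch (illustrative only). Each 'bird'
# follows three local rules (separation, alignment, cohesion) and flock-like
# motion emerges without any central controller.
import numpy as np

rng = np.random.default_rng(0)
N = 50                                 # number of birds
pos = rng.uniform(0, 100, (N, 2))      # positions in a 100 x 100 world
vel = rng.uniform(-1, 1, (N, 2))       # initial velocities

def step(pos, vel, radius=10.0, sep_dist=2.0, max_speed=2.0):
    new_vel = vel.copy()
    for i in range(N):
        offsets = pos - pos[i]                       # vectors to every other bird
        dist = np.linalg.norm(offsets, axis=1)
        neighbours = (dist > 0) & (dist < radius)
        if not neighbours.any():
            continue
        # Cohesion: drift towards the average position of nearby birds
        cohesion = offsets[neighbours].mean(axis=0) * 0.01
        # Alignment: nudge velocity towards the neighbours' average heading
        alignment = (vel[neighbours].mean(axis=0) - vel[i]) * 0.05
        # Separation: push away from birds that are uncomfortably close
        crowded = neighbours & (dist < sep_dist)
        separation = -offsets[crowded].sum(axis=0) * 0.05 if crowded.any() else 0.0
        new_vel[i] = new_vel[i] + cohesion + alignment + separation
        # Cap the speed so no bird outruns the flock
        speed = np.linalg.norm(new_vel[i])
        if speed > max_speed:
            new_vel[i] *= max_speed / speed
    return (pos + new_vel) % 100, new_vel            # wrap at the world's edges

for _ in range(200):                                 # run a couple of hundred steps
    pos, vel = step(pos, vel)

Watch the positions over a few hundred steps and clusters of birds begin turning together, even though no single rule ever mentions a flock.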

http://www.youtube.com/watch?v=mA37cb10WM