Breaking the imaginary fence

At first glance, Amy Winehouse and Anders Behring Breivik have very little in common, other than having their names dominate front-page headlines for a period in late July 2011. Yet for both of these individuals, the question of mental illness and culpability is significant in the public discussion of their recent actions.

When Amy Winehouse died on July 23, no cause of death had been established. Nonetheless, her recreational use of drugs became the focal point of public discussion – either directly presumed to be the agent of her passing, or debated circumstantially as a contributing factor. A conflation of blame and pity whispered through the global community, ranging from outrage that more wasn’t done by her nearest and dearest to sadness that it was unlikely salvation of any sort was possible.

When Breivik carried out a mass murder in Oslo on July 22, no motive had been established. Yet bit by bit, news of his political leanings leaked into the public sphere, and soon a profile was being constructed like a psychological jigsaw puzzle that sought to explain how an individual could commit such a vile act.

In each case, the term ‘mental illness’ was thrown about. Sometimes flippantly, sometimes with all of the expertise an armchair psychologist can muster. If not these words, then synonyms were applied. Insane, mad, fucked-up, cracked, lunatic…it goes on. In each case, the actions of these two individuals were deemed to be the work of broken brains.

Others, meanwhile, have found the flipside of this assessment rather uncomfortable. If they had a mental illness, then their culpability was reduced. If Amy Winehouse’s addiction was a disease, it would suggest she wasn’t fully responsible for her drug abuse. Anders Breivik’s insanity could mean he wasn’t fully responsible for his brutal homicide. After all, aren’t diseases pitiful things?

The dichotomy of broken and normal neurology is deeply embedded in our society, serving paradoxically both as a means of reducing culpability and as a way of discriminating. You’re simultaneously pitied and expelled from the community. Meanwhile, a distinct line is drawn, creating a border between those with acceptable cognitive functions and those whose behaviour is deemed functionally anomalous.

The problem here lies with the concept of blame, which is wrapped up in consequences of retribution and punishment. If Amy Winehouse didn’t have a disease, her decisions led to an outcome that was ‘deserved’. If Breivik isn’t insane, he should suffer severe discomfort for his actions. In neither case is a pragmatic solution being offered. Rather than trying to understand the minutiae of influences and causes that lead to drug abuse or extremist ideologies, mental illness becomes a simplistic term used to isolate oneself from undesirable behaviour.

In the end, mental health is the real loser. Variations in brain function that make social interactions difficult, dangerous or uncomfortable for individuals are turned into caricatures that serve not to provide potential solutions, but to make others feel better about their own positions.

Not being a psychologist, and having no access to Amy Winehouse’s or Anders Behring Breivik’s medical history, I cannot diagnose either of them. I cannot attach terms or labels to categorise their behaviour. But I do know that, as humans, they experienced variations in neurology that produced actions that are – for lack of a better term – regrettable. In the end, it doesn’t matter much which side of the imaginary mental health divide they fall on.

Published on July 26, 2011 at 4:52 pm

The Others

 

Medicines stupid people use (nb., I'm not one of them).

“How can skeptics have a dialogue with homeopaths?” Michelle asks that modern well of insight and wisdom, ‘Yahoo’. “[W]ithout pointing out the stupidity of their arguments? I’m thinking about the paranoid ramblings about big pharma as well as the ignorance of simple science.”

Ignoring for a moment the framing of Michelle’s query, I was interested to scan through the responses for a solution two centuries of debate on the topic might have overlooked.

“Crucially, homeopaths lack the educational level to understand how their potions can only be water,” says Dave, a top contributor. Another top contributor says, “They only start with the fallacies to avoid providing evidence – so no matter what they crap on about, keep dragging them back to evidence.”

“Never argue with an idiot, they’ll drag you down to their level and beat you with experience,” says Flizbap 2.0.

And on it goes. There are some who advocate avoiding engagement without resorting to well-poisoning or flippant antagonism, but for the most part the advice involves engaging in a behaviour anthropologists and other social scientists refer to as ‘othering’.

Regardless of the intentions, the practice involves judgments of inferiority or impeded progress based on observations of contrasting social beliefs, behaviours and values. It is born of ethnocentrism, where observations are made with the assumption that one’s own experiences define what is objectively desirable. The result is a sense that a group of people, ‘them’, is inferior to one’s own company, or ‘us’, on account of differences in beliefs and values.

By the dawn of the 21st century, however, ethnology has had enough of an influence on the developed world that it has become difficult to ‘other’ non-local cultures without seeming naïve or xenophobic. Most people have come to see that subsistence farming or hunter-gathering is not a mark of inferiority or low intelligence, and that limited technological dependence is a socioeconomic issue rather than a cultural or cognitive failing. Openly claiming that a village in the Papua New Guinea highlands is ignorant, stupid or indulgent in logical fallacies would probably raise eyebrows, leading such discussions of cultural practices to be couched in less derisive terms. While the debate over racial intelligence might continue, it’s harder to find people who justify their beliefs by pointing to contrasting traditions, lifestyles or cultural practices.

Within national borders, however, ethnocentrism returns with all the ignorance of our colonial ancestors. If there’s one habit we can’t seem to shake, it’s that our nationalistic heritage has embedded in us a strong correlation between culture and country, as if by being white and sharing an accent our cultural values must be homogeneous. As a result, othering occurs far more easily with those who appear (at least superficially) to share an ethnic background.

What’s missed is that within our own community there are shades of culture and subculture that pool, ebb and overlap. Healthcare is just one example, yet one with far more significant consequences than other expressions of cultural behaviour such as art or language. Framing medicine as a scientific product leads many to interpret healthcare as a ‘culture without a culture’. Science and medicine are typically presented as timeless, truthful and, above all, objectively correct – strictly biophysical, with their sociocultural component reduced to a vestigial nub.

As such, it’s far easier to other those who demonstrate contrasting medical behaviours. Lack of intellect or education can be held up, without evidence, as the reason for their alternative beliefs, since it’s assumed that all else must be equal. Archaic and empty solutions such as ‘better education’ or legal enforcement are then suggested as ways of making people see sense.

In truth, there is a multitude of reasons why people use alternative medicines, few of which (if any) have much of a direct link with level of education or cognitive deficiencies. Rather, views on what constitutes good evidence, familial traditions, cultural identities and distrust of contrasting sociocultural groups play far greater roles in determining health behaviour than university degrees or brain function. In other words, the very same factors medical anthropologists deal with abroad when studying any other health culture are responsible for the same alternative beliefs in our own community.

The question on how best to address culture change is also just as relevant here as it is elsewhere. It’s all well and good that African or Indigenous communities retain their cultural heritage, but what does one do when it conflicts with treatments for HIV, alcohol abuse or diabetes? This is a question a good friend of mine is currently researching through the Australian National University; as you might expect, the resulting discussion demands more than a simplistic ‘they need better education’ or ‘they’re just stupid’. Yet it’s not a novel dilemma; whether it’s vaccination, water sanitation, nutrition or infant care, the question of how to effectively assist diverse communities in improving and maintaining their health and wellbeing has occupied anthropologists for years, producing rich debate and diverse results.

Ironically, those who propose answers for Michelle seem to identify as individuals who would normally value science as a way of presenting useful solutions to a problem. Why, then, do so few seem to be informed by research? Why do the answers come without citations or references, apparently based on little more than personal hunches or folk wisdom?

Based on my own experience, few would be inclined to look further, as they already assume themselves to be correct. Science works for some things…unless you already think you know, at which point it’s all rhetoric and pedantry. Social science is a soft science, therefore gut feelings and intuition are just as useful (if not more so).

Michelle’s question and many of the answers reveal the roadblock we face here in our efforts to address alternative healthcare. Rather than treating it as a legitimate sociological question, where science might provide some insight, the issue is reduced to a dichotomy of smart vs. stupid, of educated vs. naive. When those are the questions we ask, we certainly can’t hope for any productive answers.

If you please – just don’t lie

Believe it or not...it still might make you feel better

When it comes to alternative medicine, there is arguably no more misunderstood phenomenon than the placebo effect. It’s not uncommon to hear it offered in defence of the efficacy of treatments that otherwise have no evidence of performing as claimed. It is the modicum of benefit proffered once a touch, tincture or totem has been scientifically shown to be otherwise impotent.

Much of the present common understanding of the effect stems from a 1955 paper titled ‘The Powerful Placebo’ by Henry Beecher – an American anaesthesiologist who stressed the need for double blinding in clinical testing and was the first to attempt to quantify the placebo’s action. While the placebo effect is argued to be a significant confounding factor in separating useful from useless treatments, Beecher presented it as a genuine clinical effect, citing data that demonstrated a percentage of patients were ‘satisfactorily relieved’ by sham treatments.

Since then, other studies have suggested the placebo effect is physiological – that in spite of having zero bioactive components, the act of treatment alone can still help improve a patient’s biophysical functioning. For example, in a 1977 study stomach ulcers were found to have decreased in size following a placebo treatment.

On such evidence, it seems that the mind truly holds sway over the body’s matter. By merely perceiving a treatment works, an individual’s biology will make a greater effort to fix itself.

Yet there is an increasing amount of evidence showing that the placebo effect – or at least this interpretation of it – is a myth. Any influence a sham treatment has over measured outcomes can be reduced to psychological factors, with earlier studies having fallen foul of poor methods.

However, at the very least, it still leaves room for inactive treatments to make patients believe they’re feeling better. And, given that western medicine primarily concerns itself with the individual patient’s sense of wellbeing, it might be argued that placebos could be used to placate a patient where no other treatment is available.

Ethically, such actions are highly questionable. Few physicians would feel comfortable duping a patient into thinking they’re receiving a treatment when in fact they’re getting a fake pill or potion. Not that this has stopped some from administering fake treatments without patient consent.

A new study now suggests we might have the wrong end of the stick altogether – that any psychological bias towards placebos comes as a result of their superficial resemblance to treatment. Giving rationality too much credence, it was assumed that a patient needed to believe their treatment was legitimate. Instead, it seems the same effects might arise even when the patient knows the treatment has no bioactive components, giving the lie to the claim that ‘it’ll work if you believe it will’.

Ed Yong has a great write-up on the study at his blog. While the study’s method left room for plenty of questions, if its results are legitimate it demonstrates that simply engaging in medical ritual might be enough to bias a patient’s disposition towards their wellbeing. The patient needn’t be led to accept there is a biophysical foundation to the ritual – just that there is a semblance of medical intervention.

The precise limits of the placebo effect and its relationship with culture remain fuzzy. Whether it will ever be an ethical approach to medication is hard to ascertain. However, knowing that a personal rationalisation might not be necessary for a patient to benefit from medical ritual is a positive step towards better understanding how the way a patient is treated can be as important as the treatment itself.

Published on December 23, 2010 at 8:15 pm

Microbial melodramas make for good soap

There might be no ‘I’ in team, but there is definitely one in ‘bacteria’. While microbes aren’t traditionally recognised for their altruism, James Collins and a team of researchers from the Howard Hughes Medical Institute have found individual bacteria can pay a price that ultimately benefits others in the colony.

The textbook microbe competes in a bug-eat-bug world, where a subtle physiological variation in a select few can make the difference between a population’s survival and its annihilation. As the mutants come to represent the majority, resistance to old threats increases. Whether it’s antibiotics or plain old disinfectants, chemical warfare loses its punch when a few bacterial mutants come to represent the species.

However, these new findings (Nature, vol. 467, p. 82) demonstrate that the bacterial struggle for mutant dominance might not be so selfish. In an effort to observe how genetically identical bacteria developed the initial variations, Collins and his team subjected a population of cloned Escherichia coli to a steady gradient of the antibiotic norfloxacin. Samples of bacteria were routinely tested to record the minimum concentration of antibiotic that would halt their growth.

On comparing the samples with their home population, it was revealed that, taken on their own, most bacteria were less resistant to norfloxacin than the colony was as a whole. Stranger still, the individuals gifted with the ability to deal with the chemical attack were in the extreme minority, making up less than one percent of the colony. Their talent lay in their ability to produce tryptophanase, an enzyme that breaks the amino acid tryptophan down into the chemical indole. Fortunately for their less capable siblings, an increased concentration of indole in the environment helps switch on useful metabolic pathways that combat the antibiotic’s effect, allowing the rest of the population to benefit.

Of course, given there is no such thing as a free lunch, producing this enzyme requires the devotion of precious energy. ‘Kin selection’ is one explanation for this behaviour. First proposed in the 1960s by the evolutionary biologist William Hamilton, it suggests that individuals act altruistically to increase the chances of survival and reproduction of those with whom they share a close genetic relationship.
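
For context (this is textbook evolutionary biology rather than anything from the Nature paper itself), Hamilton’s idea is usually summarised by the inequality known as Hamilton’s rule: altruistic behaviour is favoured when rb > c, where r is the genetic relatedness between altruist and beneficiary, b is the reproductive benefit to the beneficiary and c is the cost to the altruist. For clonal bacteria, r is effectively one, which makes even an energetically expensive enzyme like tryptophanase a comparatively easy evolutionary sell.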

What, then, of their nefarious counterpart, the bacterial bum? Led by Steven Diggle at the University of Nottingham’s Centre for Biomolecular Sciences, microbiologists have found that individuals within a population of Staphylococcus aureus can opt to coast along for the ride when it comes to contributing to the costs of an infection.

After deliberately infecting waxworms with the bacteria, the researchers eavesdropped on the developing colony by observing a chemical coordination process called ‘quorum sensing’. They discovered those which lacked the means to engage in this microbial forum could also refrain from making toxins, saving them energy which could be devoted instead to reproduction. In effect, these bacteria were relying on their siblings to provide them with their nutrients. Seeding an infection with these freeloaders could present physicians with a novel form of treatment.

While this could be great news for the medical world, it does present a rather perplexing contrast to their more charitable cousins. Understanding how microbes interact with their host’s environment is vital if we are to find additional ways of controlling infection. Just last year, the World Health Organisation released a warning concerning the potential threat posed by bacteria carrying the NDM-1 resistance gene, including strains of E. coli – potential superbugs on the rise across the globe. They could very well join the likes of more familiar foes such as vancomycin-resistant Enterococcus and the rather formidable methicillin-resistant Staphylococcus aureus.

Paying attention to the melodramas unfolding in these microbial ‘Days of our Lives’ is certainly better than any daytime soap. What’s more, it might be the key to turning the tide on the twilight of the antibiotic.

Published on November 2, 2010 at 9:46 am

A public immunity to freeloaders

Imagine a dinner party where a guest openly admits they don’t work. On inquiry, they smile warmly and cite statistics concerning death or injury in the workplace. ‘I just don’t want to risk it,’ they say.

The next obvious question comes up: ‘So, how do you get by?’

‘Oh,’ they say, pausing to take a sip of wine. ‘Easy. You see, enough people pay taxes to provide me with welfare. I really don’t need to work.’

Everybody smiles and nods politely, believing it’s their friend’s choice to refrain from working if they don’t want to, and they move on to other topics to do with politics and religion.

Sound familiar? No? I must admit, in spite of the numerous dinner parties I’ve attended in my life, I’ve never encountered that scenario. There’s probably a simple reason for that – most people would be embarrassed to openly confess such a thing. Who would want to say to others that they’re not willing to roll the dice for themselves, but are happy to enjoy the benefits provided by the risk-taking of others? Very few would.

Yet several years ago, during a lunch outing with work colleagues, an acquaintance openly stated they chose not to vaccinate their children. Why not? Simple – they cited the risks of vaccination and said they didn’t want to take a chance on their child’s health. Unfortunately, the mood of the conversation gently condoned their choice and even congratulated them on making such a decision.

Now, I have an infant son who is of an age where he is getting his vaccinations, and I must confess I hate seeing him in pain or discomfort. The thought of him dying is the most terrifying thought I have ever had to confront – and that isn’t hyperbole. There’s only one thing worse: his suffering being the direct result of a decision I made.

I chose to have my son vaccinated knowing he could face uncomfortable side effects. I knew this not only from my undergraduate degree and occupational experience as a medical scientist; my wife and I also had easy access to literature and a physician who discussed the situation frankly and honestly. I understand not everybody is so fortunate, and that there are undoubtedly those in the medical profession who would avoid discussing the potential for harmful consequences. However, I find it hard to reconcile my experience as a parent with the concerns of groups like the Australian Vaccination Network, who feel a responsibility to present an emotion-laden ‘balance’ of information to the public.

This past colleague based their decision not to vaccinate their children on two notions: the first was that they’d read accounts of children suffering seizures and even dying following vaccination; the second was that they’d never heard of children dying or suffering from the conditions being vaccinated against. At least, not recently. When another colleague pointed out that vaccination could well be the reason behind such an absence of modern mortality, they responded by referring vaguely to statistics indicating that death rates from communicable diseases were dropping long before public vaccination programs.

This is, in a way, quite correct. Better sanitation and improved healthcare practices during the first half of the 20th century saw a gentle decline in deaths from diseases such as measles and whooping cough in most post-industrial countries like Australia. It’s difficult to tease out, from a simple glance at a graph, the precise impact of any public vaccination program from that of improved medical intervention, given we don’t have a control population with modern healthcare practices but no immunisation. The best we can do is watch the statistics of health complications arising from the epidemics that occur when public vaccination falters.

Unfortunately, there is no shortage of such natural experiments. From 1988 to 1990, for instance, California experienced a measles epidemic of 16,400 cases that resulted in 75 deaths. That wasn’t in an impoverished, third-world country without access to sanitation or medicine – it was in a modern, post-industrial nation where immunisation rates had fallen.

Nations with socialised healthcare systems like Australia haven’t been free of such outbreaks, either – in 1993, the Western Public Health Unit received 889 measles notifications for Sydney’s western suburbs. Ten percent resulted in hospital admissions, including one case of encephalitis. Fortunately, no deaths occurred.

How does this compare with the risks of the measles vaccine? During the Measles Control Program in 1998, there were 89 reported reactions out of 1.78 million vaccinations – the same number as the hospital admissions that arose from only 889 infections in the Sydney outbreak. These reactions included eight rashes, four cases of parotid inflammation and one febrile seizure. No children died.
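
To put those figures side by side, here is a rough back-of-the-envelope comparison in Python. It uses only the numbers quoted above and is purely illustrative: a sketch, not a formal risk analysis.

```python
# Rough comparison of the figures quoted above (illustrative only).

vaccinations = 1_780_000   # doses given during the 1998 Measles Control Program
reactions = 89             # adverse reactions reported during that program

outbreak_cases = 889       # measles notifications in Sydney's western suburbs, 1993
admissions = 89            # hospital admissions (roughly ten percent of notified cases)

reaction_rate = reactions / vaccinations      # about 0.00005, or 1 in 20,000 doses
admission_rate = admissions / outbreak_cases  # about 0.1, or 1 in 10 cases

print(f"Reported reaction rate: about 1 in {round(vaccinations / reactions):,} doses")
print(f"Hospital admission rate in the outbreak: about 1 in {round(outbreak_cases / admissions)} cases")
print(f"Ratio: roughly {round(admission_rate / reaction_rate):,} to 1")
```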

There are, of course, plenty of anecdotes to suggest horrific experiences of vaccinated infants. Seizures are certainly possible, and for me to experience such an event with my child would be unimaginable. But in a community where nobody took that risk, the dice I’d be rolling for my son would be heavily loaded. Even if each anecdote was verified, it’s hard to imagine the risks would come close to the chance of complications from contracting a disease like measles. I might not like the one in a million chance of my son having a seizure, or the slightly increased chance of death such a side effect could present, but the odds he’d face in a world with no vaccination simply wouldn’t compare.

Yet the community in which this colleague can happily raise their child is not unvaccinated. Enough people roll those dice that their children can enjoy good health in a community where pathogens have nowhere to proliferate. So long as a high enough percentage of their fellow citizens take that risk for them, they won’t have to take that tiny but real chance of suffering vaccine side effects.
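
That ‘high enough percentage’ can be given a rough figure. A standard textbook approximation, not something taken from this post, puts the herd immunity threshold at 1 - 1/R0, where R0 is the average number of people each case infects in a fully susceptible population; estimates of R0 for measles are commonly quoted at around 12 to 18.

```python
# Herd immunity threshold: the fraction of a population that needs to be immune
# so that each case infects, on average, fewer than one new person.
# The R0 values below are commonly quoted estimates for measles, not data from this post.

def herd_immunity_threshold(r0: float) -> float:
    return 1 - 1 / r0

for r0 in (12, 15, 18):
    print(f"R0 = {r0}: roughly {herd_immunity_threshold(r0):.0%} of the population needs to be immune")
```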

I’m happy to shoulder that burden on behalf of any individual whose constitution puts them at a significantly greater risk of illness should they be vaccinated, just as I’m happy for the taxes I pay to help those who are impeded from working. Yet for those who simply feel the demonstrably minuscule odds are too much for their child to risk, I feel no such obligation.

When it comes to most things in the community, I’m a process-driven rationalist. I support people making their own decisions regarding their own finances, health and wellbeing, and choose to engage in outreach that assists them in making decisions with the best possible chance of matching the outcomes they hope for. I oppose regulations that take those decisions out of others’ hands, and I stand against the use of authority to deprive any citizen of the right to decide a course of action for themselves. I believe in education over legislation…for most things.

Vaccination, however, is a communal decision. While education remains vital in this regard, I also feel that choosing to rely on the risk I take to see to one’s child’s safety borders on the criminal, and at the very least can be considered highly immoral. I struggle with the idea of legally enforcing vaccination, but I find it even more difficult to welcome the choice of others to take for granted the protection my risk-taking has provided for them.

Needless to say, I don’t tend to go to many dinner parties any more. I stayed quiet during that lunch. As a relatively new father, I’m not sure I could manage such silence again.

Published on July 25, 2010 at 11:14 pm