The dangers of “slippery slope” arguments against pseudoscience.


It is easy to get scientists riled up by staking out an “us” vs. “them” position in which “we” are the rational, methodical people who value empirical evidence over intuition whereas “they” are the ideological, unreasonable traditionalists willing to go with their “gut,” whatever the facts may be. As I write, we’re in the midst of a monumentally irrational government shutdown, which is having some horrifying acute effects on scientific research in NIH’s intramural facilities. And that comes in the midst of a lengthy sequester, which came after a years-long hiatus in growth of the federal budget for research and a reorientation away from basic curiosity-driven research which I’ve discussed here at length.

So I understand why a lot of my colleagues seemed to like this polemic against pseudoscience in the Times. In it, Massimo Pigliucci and Maarten Boudry argue that indulging any form of pseudoscience can be a “dangerous gateway to superstition and irrationality.”

This is an oversimplification, however, and one with its own perils. The piece starts with three arguments for why the “Demarcation Problem” is vital. First, there is the epistemological issue: what kinds of things should count as knowledge? Then, there is what they call a “civic” issue: what sorts of things should we spend money to research in our labs? Finally, there is what they call the ethical issue: how should we make decisions about things that impact our own health and that of the public at large?

The rest of the article does not say anything directly about the first two issues, but repeatedly hits the panic button on this last one. The authors warn, for example, that “if you take folk herbal ‘remedies,’ … while your body is fighting a serious infection, you may suffer severe, even fatal, consequences.” Also discussed are a few genuine public health catastrophes, like the mass failure to vaccinate due to fears about the links between vaccines and neurological disorders and failure to treat HIV because of bad information about its relationship to AIDS.

These are graphic examples, and they make the potential danger of pseudoscience very concrete: Personal and public health decisions made on the basis of bad data on the safety and efficacy of particular treatments kill people. These are huge problems, and I don’t wish to trivialize them. Further, I agree with what the authors imply: that where there is available, reliable and convincing empirical evidence about the safety and efficacy of a specific treatment, it should be preferred over things that don’t have this kind of evidence backing them up.

Pigliucci and Boudry are critiquing an article in which Stephen Asma recounts some stories of his encounters with traditional Chinese medicine, and suggests an ecumenical approach to wellness. It is pretty clear that the “pragmatic” approach Asma advocates is unlikely to lead to the kind of bad health decisions Pigliucci and Boudry are concerned about. Indeed, the pull quote from his piece laments that “we are all living in the vast gray area between leeching and antibiotics,” implying that his pragmatic approach includes taking antibiotics, because there’s good evidence that they can save your life under those circumstances. But in cases of chronic pain that western medicine can’t do anything about, or, say, a bout of the common cold, it is not crazy to seek out folk remedies that have at least indirect evidence for safety and efficacy, especially if you are aware of the placebo effect and its often overlooked cousin, the Hawthorne effect.

Pigliucci and Boudry want us to be “scientific” about how we make health decisions, but they give impractical guidance about how to do this. The only positive example they provide of a scientifically justified remedy is aspirin. But almost none of the health advice we get is as well supported as the advice to take aspirin for acute pain. You can’t solve the demarcation problem by lumping something you don’t like together with a bunch of practices likely to strike readers of the New York Times as obviously crazy and dangerous. You can’t say “don’t drink turtle blood for a cold because that’s obviously crazy, but take aspirin for acute pain because it has been proven to be safe and effective, and we have a very well-understood biochemical model for how it works.”

Say I read an article suggesting that omega-3 fatty acids are potent anti-inflammatory agents and seem to have some positive health effects, and then as a result decide to include more sardines in my diet. I do not think Pigliucci and Boudry would put me in the same category as someone who decides to avoid foods with “hot” Qi properties to reduce fatigue on the advice of an herbalist.

But as a scientist I should be aware that many interventions that look promising in the lab have little to no impact on human health in real life. Indeed, if the meta-analyses are correct, many drugs and therapies that are in wide use have weak empirical support. And if we consider the very small proportion of drugs that pass from clinical trials into common use, it becomes clear how error-prone the process of extrapolating from “bench to bedside” can be.
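The point about extrapolation can be made with a back-of-the-envelope Bayes calculation. The numbers below are illustrative assumptions, not figures from the article: if only a small fraction of bench-stage candidates genuinely work, then even a well-powered trial that comes back “significant” is, on its own, a coin flip.

```python
# Back-of-the-envelope sketch: how trustworthy is a single "significant"
# trial result when most candidate interventions don't actually work?
# All numbers below are illustrative assumptions, not data from the article.

def positive_predictive_value(prior, power, alpha):
    """P(intervention truly works | trial comes back positive)."""
    true_positives = prior * power          # real effects correctly detected
    false_positives = (1 - prior) * alpha   # null effects flagged by chance
    return true_positives / (true_positives + false_positives)

# Suppose 1 in 20 bench-stage candidates genuinely helps, trials have
# 80% power, and the conventional 5% false-positive rate.
ppv = positive_predictive_value(prior=0.05, power=0.8, alpha=0.05)
print(f"{ppv:.2f}")  # ~0.46: roughly half of positive results are false leads
```

Under these assumed base rates, a lone positive result is right less than half the time, which is one way to see why “bench to bedside” is so error-prone.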

So it is not entirely rational for me to believe that I am doing something good for my health by eating sardines, given the many gaps that would need to be filled between the available studies and sound, rational, scientific advice at the standard of taking aspirin for acute pain. On the other hand, I find sardines delicious, especially on a bed of arugula dressed generously with garlic, balsamic and olive oil, and topped with a fried egg. And there’s some reason to believe they might be good for me. So I will keep eating them. Does this put me on a slippery slope toward witch burning or ritual animal sacrifice? I hope not.

What about megadoses of antioxidants? Here we may be more clearly in the realm of the pseudo, but if you think that being a scientist immunizes you from this kind of behavior, you should read the story of poor Linus Pauling, the double Nobel laureate, who advocated megadoses of Vitamin C as a way to prolong human life, a conviction he held right up until his death from prostate cancer.

Even here, the pseudoscience moniker doesn’t fit into place with the same satisfying “click” that it has for the turtle blood. There is, after all, a mountain of data demonstrating the role of oxidative stress in senescence and death at the cellular level. So it was not crazy, or even pseudoscientific, to reason that large doses of antioxidants might have some protective effect at the level of the organism. But it was wrong. The way these compounds function in the organism at large turns out to be more complicated.

So what are the differences between these examples and the examples of pseudoscientific health practices cited by Pigliucci and Boudry? Importantly for the rhetorical force of their argument, none of these things have the same “ick” factor as drinking turtle blood mixed with baijiu (or, really, anything mixed with baijiu) and they are unlikely to strike most New York Times readers as superstitious and unscientific on their face. But the more important difference is that the science behind how different nutrients function in cells, and in the body, is on a pretty firm footing empirically. It is, however, incomplete, and contradictory in places, and the data likely to be most relevant to making health decisions — data about health outcomes for people who consume different amounts of particular nutrients over long periods of time — will never have the same inferential power as double-blind, placebo-controlled clinical trials. This makes it very hard to extrapolate from laboratory results to sound advice about how people ought to behave in the real world.

So we all make decisions about things that impact our health all the time: what and how much to eat, what kind of exercise to do and how much, whether to get off the computer a few hours before bed, etc. To the extent that we make these decisions based on consideration of our health, they are necessarily based on limited information, and it’s not always clear what to count as a “rational” decision. Under those conditions, we are all pragmatists to some extent, and this, I believe, was Asma’s point.

There is something else important here: the discussion so far is concerned entirely with people’s health decisions. But science is not a machine for generating sound health advice. Beyond providing the raw materials for practical innovations, key parts of the scientific enterprise include finding interesting things about nature to explain, and coming up with ways of testing how these things happen. Dismissing other knowledge traditions out of hand is not very helpful for either of these functions. There’s nothing inherently dangerous about exploring theories, or even inchoate ideas, based on knowledge traditions that don’t share the same standards of evidence as science. In fact it can be good for science to operate in this way.

Even Pigliucci and Boudry allow that some aspects of “pseudosciences” have been incorporated into science, although they’re vague about how this process might have worked. It must have included, at some point, a willingness to take those other knowledge traditions seriously, and try to formulate hypotheses based on them, and to then test those hypotheses in repeatable and convincing ways. But if early scientists had taken the position that any dalliance with, say, astrology or alchemy were likely to lead them down a slippery slope toward whatever counted as irrational superstition in their time, astronomy and chemistry might have taken much longer to emerge.

Consider how the deeply flawed and essentially racist pseudoscience of phrenology is intimately connected with modern cognitive neuroscience. Our arguments about cortical localization today have little in common with Gall and Spurzheim’s manual relating cranial features to personality traits, but it lurks there in our intellectual family tree like an embarrassing great aunt, as an early form of faculty psychology, and an early motivator for some of the foundational experiments in the field.

The problem for Pigliucci and Boudry’s argument is that they want to have it both ways. They want to say that the scientific method allows for the “integration of willow bark and turtle blood, provided they hold up to scrutiny,” but they also want us to join them in dismissing the concept of Qi as vacuous based on our own presumed prejudices and a very sloppy argument in which they: 1) state that the concept is unfalsifiable and then 2) cite a single null result as “seriously undermining talk of meridians and Qi lines.” Seriously? Which is it? Is Qi an unfalsifiable construct, or is it false? It can’t be both, unless there’s some subtlety to the definition of “unfalsifiable” that’s escaped me.

If there is something of scientific value in the canon of traditional Chinese medicine that has accumulated over centuries of systematic study, and the body of thought connected to it, how would we ever find it unless we engaged seriously with this “pseudoscience”?

This gets, finally, to the first and second questions at the start of P&B’s polemic. As to the epistemological issue, I will grant that TCM is not “scientific,” but it is going too far to say it is not “knowledge.” To suggest otherwise is to engage in the kind of argument by appeal to authority that makes our colleagues in the humanities stamp their feet and occasionally write profanity-strewn blog posts.

As to the “civic” question, of whether we ought to invest in studying things in the lab that have their roots in pseudoscience, I would offer that we can’t make these judgments based on bright-line distinctions, any more easily than we can make personal health decisions based on the scientific literature. We don’t have a failsafe system for figuring out where new scientific knowledge is going to come from, and distributing funds accordingly. The system we do have would not necessarily be improved by inclusion of official guidelines specifically defining “pseudoscience” and excluding it from consideration. The demarcation problem is a vital problem, but it’s also a naggingly difficult one.

I don’t know why this particular article got me worked up enough to break a months-long (and unintentional) blogging hiatus. I don’t have a particular commitment to or interest in traditional Chinese medicine, although I do like a good qi gong tui na session, but who doesn’t like a massage every once in a while? Maybe it’s because I am going to Tibet in a few weeks, and on the advice of literally every Chinese person I have spoken to about going to the Himalayas, I’ll be taking hong jing tian pills as a prophylactic against altitude sickness. I will do this without checking PubMed for an efficacy study. If Pigliucci and Boudry are correct, this is a particularly dangerous move, because it will soften me up to all kinds of superstitious mumbo-jumbo just as I am about to visit a place with a deep and rich spiritual tradition. I would be more concerned about this if Tibet weren’t also the home of mindfulness practices whose mental and physical health benefits are just beginning to be understood by scientific research.

Image credit: Monty Python’s Holy Grail, obvs.
