This sounds like the setup to a joke, but I really was just minding my own business on a bench in Washington Square Park when I was approached by a perky young woman with an iPad and some pointed questions about New York’s proposed ban on sodas larger than 16 ounces.
“I don’t think anybody needs that much soda at one time,” I said smugly. Science, after all, was firmly on my side, and I wasn’t going to give the time of day to whoever was paying this person to do push-polling in the park.
Sugar is bad for you. Lots of sugar is really bad for you. And 16 oz. of soda is really an irresponsible amount of sugar. I don’t like Bloomberg telling me what to do, but I just can’t bring myself to come to the defense of an industry that runs on corn subsidies, relentless marketing and humans’ primordial proclivity for flavors associated with high caloric density, and produces nothing of any real value. (Although, see Alva Noë on the value of pleasure, which is interesting, because whatever havoc soda and non-foods like Hot Cheetos and Takis might be wreaking on our bodies, you can’t deny that they bring immeasurable pleasure.)
As people who have read other pieces here will have noted, claims to authority that rest on an argument’s being derived from “science,” while failing to acknowledge that the state of the science is fluid, provisional and often a bit of a mess, are a pet peeve of mine when it comes to cognitive neuroscience. So when I discovered this piece covering an obesity conference, it sort of changed my mind about NYC’s giant soda ban, or rather made me realize that the situation was way more complicated than just “Science vs. the Sugary Beverage PR machine.” (BTW, what is up with this bit of propaganda, showing a distressingly fit silhouette holding aloft what appears to be a relatively modest 8–12 oz. cup of soda? And how did the beverage industry, which is so good at messaging, fail to come up with something as effective as the glass of human fat?)
The story about the obesity conference manages to convey the complexity of the issues and the mix of intellectual, altruistic, political, and careerist motivations that drive scientific debates. It also points the way to a really interesting article about the unintended consequences of altruistic motivation in reporting scientific results. I grumble a lot about how more sinister interests interfere with scientific ethics. Of course, it’s more complicated than that, and it’s interesting to consider how people can make a hash of science because they think they’re doing the right thing.
A mild form of this would be, say, not correcting people when they take a random, unseasonably warm day as evidence for human contributions to global warming. Rather than say “why yes, there is remarkable consensus among scientists who are not being paid off by the energy lobby that humans are causing global warming, but of course any one warm day could just be a coincidence”…it’s easier to just agree. We know from Kahneman and Tversky’s work that people are more likely to be unduly influenced by small but salient bits of data like random warm days than by things like scientific consensus anyway, so being a stickler about what counts as good evidence in this case is counterproductive, and anyway makes you seem like a science SNOOT.
From the perspective of doing good science, though, running a hundred correlations on a data set and reporting the one significant relationship you find because it supports some public health goal that you think is worthy — say, a relationship between sugar consumption and body mass index — is not actually better than doing the same thing to show that some pharmaceutical you are hoping to sell can be used off-label for some highly prevalent condition.
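The problem with running a hundred correlations and reporting the one that comes out significant is easy to demonstrate: at a significance threshold of p &lt; .05, pure noise will clear the bar about five times in a hundred. Here is a toy simulation (my own illustration, not from any study mentioned above; all names and numbers are hypothetical) that correlates pairs of random variables and counts the “significant” hits:

```python
import math
import random

def pearson_r(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / math.sqrt(sxx * syy)

def count_spurious_hits(n_tests=100, n_samples=100, seed=42):
    """Correlate pure noise n_tests times; count how many pass p < .05.

    Under the null hypothesis, r is approximately normal with standard
    deviation 1/sqrt(n_samples), so |r| > 1.96/sqrt(n_samples) is an
    approximate two-sided p < .05 cutoff.
    """
    rng = random.Random(seed)
    critical = 1.96 / math.sqrt(n_samples)
    hits = 0
    for _ in range(n_tests):
        xs = [rng.gauss(0, 1) for _ in range(n_samples)]
        ys = [rng.gauss(0, 1) for _ in range(n_samples)]
        if abs(pearson_r(xs, ys)) > critical:
            hits += 1
    return hits

# With 100 tests on pure noise, we expect roughly 5 "significant" results.
print(count_spurious_hits())
```

Report only the hits, suppress the other ninety-five runs, and you have a publishable-looking relationship between sugar and BMI, or between anything and anything else.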
Anyway, back to the park bench, what I should have been doing, for the purposes of keeping this story going, is reading Against Method, Paul Feyerabend’s wild and woolly deconstruction of the scientific method, which I had found on a stoop near my apartment a few weeks earlier. But honestly, although I don’t remember what I was doing there, I do know that I did not crack open the Feyerabend until a few weeks later, so I couldn’t have been reading that.
This piece calls Feyerabend out as “the greatest enemy of science.” The logic behind this claim is that, as a self-proclaimed “anarchist epistemologist,” he helped create an academic climate that was actively hostile toward science, insisting that it be treated as one of many ways of understanding the world, with no intrinsic superiority over mythology, folk wisdom, fairy tales, religion, or fantasy. Feyerabend’s procedure is to lay out some common reasons that are often given for insisting that science deserves an exalted place among epistemic traditions — its dependence on facts, its rationality, its insulation from the foibles of human fallibility — and then show that none of these really describes how science is done. He further argues that eructations of illogic, willful ignorance of facts, and messy human interactions are not just little imperfections that creep in at the fringes of science, but are properly part and parcel of how science works.
I am sympathetic with other scientists’ impatience with this kind of theorizing. When I was about the age of the push-poller with the iPad, approaches that, like Feyerabend’s, assert that the hegemony of science in epistemological matters is arbitrary and anti-human had established themselves as a kind of orthodoxy in the humanities. The result was something that is actually discussed at length in Against Method:
John Stuart Mill has given a fascinating account of the gradual transformation of revolutionary ideas into obstacles to thought. When a new idea is first proposed it faces a hostile audience and excellent reasons are needed to gain for it an even moderately fair hearing. The reasons are produced, but they are often disregarded or laughed out of court, and unhappiness is the fate of the bold inventors. But new generations, being interested in new things, become curious; they consider the reasons, pursue them further and groups of researchers initiate detailed studies…There comes then a moment when the theory is no longer an esoteric discussion topic for advanced seminars and conferences, but enters the public domain. There are introductory texts, popularizations; examination questions start dealing with problems to be solved in its terms…
Unfortunately, this increase in importance is not accompanied by better understanding: the very opposite is the case. Problematic aspects which were originally introduced with the help of carefully constructed arguments now become basic principles; doubtful points turn into slogans; debates with opponents become standardized and also quite unrealistic, for the opponents, having to express themselves in terms which pre-suppose what they contest, seem to raise quibbles, or to misuse words…Thus do we have success — but it is the success of a manoeuvre carried out in a void, overcoming difficulties that were set up in advance for easy solution…A wonderful invention has turned into a fossil.
When your revolutionary idea is that “anything goes,” and you train students to treat everything as a text to be deconstructed, there are bound to be some unfortunate side effects, like a journal that publishes a paper that contains no actual content, devised by a physicist as a hoax. (Note, this has also recently worked on a math journal, but one that seems to exist only to collect publication fees, not one that actually has standards of peer review, or is read by anybody.)
But I think we can read Feyerabend — against his will, perhaps — as a kind of loyal opposition rather than an out-and-out enemy of science. A lot of the tactics Feyerabend identifies — counter-induction, ignoring the consistency principle, studying discredited theories and mining them for parts — are very productive, and it’s nice to have names for things that scientists do all the time. I find myself, reading Against Method, having to review my own arguments for why I prefer, say, Darwinian evolution to Genesis. It’s because science is largely animated by an ethic that values good argument over appeals to authority. How this ethic plays out in academic science is a ripe topic for sociologists and philosophers, because it’s manifestly not true that appeals to authority never trump good argument in practice, and of course there are a lot of positions about what makes a good argument.
Further, if we want to follow Feyerabend and friends down the rabbit hole, we can ask whether there is any way to actually prove that an ethic that highly values good argument necessarily produces better knowledge than one that prioritizes appeals to authority — especially since we can’t even provide a coherent definition of what counts as a good argument. But this is like arguing about holes in swiss cheese, or, if you want to see a worst case scenario of what happens when the standards for what makes a good argument are allowed to float freely, trolling the internet about it.
The important question to ask is whether you’d prefer to live and work in a culture that values argument over authority or the other way around. This is why any erosion of scientific ethics is so alarming. And why I’m less certain about my support for the giant soda ban.