If America’s Boyfriend were a Cognitive Neuroscientist…


Last week, America’s Boyfriend Nate Silver made meta-analysis cool beyond belief. The image of a lone geek, sitting at the end of a vast pipeline of data, and turning it into something everyone wants to hear about, is a certain breed of scientist’s deepest fantasy. And that’s sort of what happened. Fivethirtyeight’s predictive model is how meta-analysis is supposed to work. You’ve got a bunch of data from sources that have variable reliability (each new poll has some likelihood of being flawed in one way or another), but when you have a good way of aggregating the data from a ton of polls, you can get very accurate predictions, in much the same way that taking the average guess about the number of jelly beans in a jar reveals the wisdom of crowds.
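To make the jelly-bean analogy concrete, here is a minimal Python sketch (emphatically not Fivethirtyeight’s actual model) of the core idea: polls with smaller samples are noisier, so you weight each one by the inverse of its sampling variance, and the aggregate lands much closer to the true margin than any single poll does. All the numbers below are made up for illustration.

```python
# A toy poll-aggregation sketch: inverse-variance weighting of noisy polls.
# This is an illustration of the general idea, not any real forecasting model.
import numpy as np

rng = np.random.default_rng(538)
true_margin = 0.03                      # hypothetical "true" margin in the electorate

n_polls = 40
sample_sizes = rng.integers(400, 2000, size=n_polls)
std_errors = 1.0 / np.sqrt(sample_sizes)             # rough sampling error per poll
polls = true_margin + rng.normal(0.0, std_errors)    # each poll = truth + noise

weights = 1.0 / std_errors**2                         # precision (inverse-variance) weights
estimate = np.sum(weights * polls) / np.sum(weights)

print(f"average error of a single poll: {np.mean(np.abs(polls - true_margin)):.3f}")
print(f"error of the aggregate:         {abs(estimate - true_margin):.3f}")
```

Running this a few times makes the point: individual polls bounce around by a couple of points, while the weighted aggregate hugs the true value.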

It’s true that the sheer volume of statistical information available in an election cycle supports a lot of pointless precedent statements that serve as color commentary for the horse-race coverage of the elections. And it’s always possible, as the Romney campaign claimed, that the polls are systematically biased. And there will always be people willing to make up their own weightings to “unskew” the polls so they show their desired outcome. But the avalanche of data (no, not that avalanche) can also be put to use asking interesting questions about broad shifts in national or regional demographics, opinions, and attitudes. Cognitive neuroscientists have recently started to get more sophisticated about meta-analysis, too, but we face a long road before we can compete with Nate Silver for the amorous attentions of the nation.

First of all, the questions we ask are very different. (Actually, first of all? Americans are not that into science, although we did get rid of a bunch of these guys last week.) In an election, we know what the functional units are: Ohio has clear state boundaries, and whoever wins Ohio gets all of its electoral votes. In cognitive neuroscience, we might want to know, for example, what spatial patterns are more strongly associated with identifying faces than with identifying objects. Answering this from meta-analysis is a bit like trying to find the borders of Ohio based on its polling data. To make it worse, we would be stuck doing this by plotting historical results as circles with a 25-mile radius, since even our best meta-analytic tools assume that activations are spherical, for convenience. No one believes that functional regions in the brain take the form of spheres, but because data are reported as points in a standardized three-dimensional space, and because it is understood that the activity measured in fMRI is spatially correlated, and because we need some consistent way of making guesses about functional regions or else we can’t code the software to do it, both brainmap and neurosynth use the points-and-spheres approach as a simplifying assumption.
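To see what the points-and-spheres simplification amounts to, here is a minimal Python sketch of a coordinate-based meta-analysis in that spirit. It is not the actual algorithm behind brainmap or neurosynth (each has its own kernels and statistics); it just shows the shared assumption: every study contributes only peak coordinates, every peak is spread over a fixed-radius sphere, and overlap is counted across studies. The coordinates, radius, and grid below are invented.

```python
# Sketch of the points-and-spheres assumption in coordinate-based meta-analysis.
# Not brainmap's or neurosynth's actual method; the peaks and radius are made up.
import numpy as np

VOXEL_MM = 2.0                 # grid resolution in mm
RADIUS_MM = 10.0               # the sphere every reported peak is assumed to fill
GRID_SHAPE = (91, 109, 91)     # roughly MNI space at 2 mm

def sphere_mask(center_vox, radius_mm, shape, voxel_mm):
    """Boolean mask of voxels within radius_mm of a peak given in voxel coordinates."""
    zz, yy, xx = np.indices(shape)
    dist_mm = np.sqrt((zz - center_vox[0])**2 +
                      (yy - center_vox[1])**2 +
                      (xx - center_vox[2])**2) * voxel_mm
    return dist_mm <= radius_mm

# Hypothetical reported peaks (voxel coordinates) from three studies.
studies = [
    [(45, 60, 40), (30, 70, 35)],
    [(44, 62, 41)],
    [(46, 59, 38), (60, 30, 50)],
]

density = np.zeros(GRID_SHAPE)
for peaks in studies:
    study_map = np.zeros(GRID_SHAPE, dtype=bool)
    for peak in peaks:
        study_map |= sphere_mask(peak, RADIUS_MM, GRID_SHAPE, VOXEL_MM)
    density += study_map       # each study counts at most once per voxel

print("max number of studies overlapping at any voxel:", int(density.max()))
```

The shape of whatever comes out is dictated by the sphere you put in, which is exactly the worry about trying to recover the borders of a functional region this way.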

This is a technical problem, but there are lots of smart people working on it, and in fact, there’s a lot of archived raw data, e.g., here and here, waiting for someone to develop better techniques for mining it. Another flavor of technical problem that distinguishes questions in cognitive neuroscience from polling in a two-man race is the fact that our data points are about a lot of different kinds of things, so we need a way of knowing that we’ve found the relevant reports to meta-analyze. One way to do this is to pick them by hand. This is OK if you’re working on a topic where there are only six or seven studies, and you want to do the meta-analysis to guide some exploratory or confirmatory analyses of a new data set (at least I hope that’s OK, since that’s what we did in this paper). But if you want to use meta-analysis to say something more solid, you probably want a better way to pick studies. And in fact, people have been developing ontologies or taxonomies that attempt to organize the available information about the different experiments all these data come from: what tasks were used, what the stimuli were like, etc.
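For illustration, here is a toy Python sketch of what that selection step looks like: hand-picking is just listing study IDs you already know, while an ontology- or label-driven query selects every study whose annotations match a term of interest, and so scales to thousands of studies (and inherits whatever quirks those labels have). The study records and field names below are invented, not drawn from any real database.

```python
# Toy illustration of study selection for meta-analysis.
# All study IDs, tasks, and labels are hypothetical.
studies = [
    {"id": "smith2009",  "task": "face recognition", "stimuli": "photographs"},
    {"id": "jones2011",  "task": "object naming",    "stimuli": "line drawings"},
    {"id": "lee2012",    "task": "face recognition", "stimuli": "morphed faces"},
    {"id": "morris1984", "task": "spatial learning", "stimuli": "water maze"},
]

# Hand-picked: fine when there are only six or seven studies you know well.
hand_picked = [s for s in studies if s["id"] in {"smith2009", "lee2012"}]

# Label-driven: select by task annotation, so the same query scales to
# thousands of studies, for better or worse.
face_studies = [s for s in studies if "face" in s["task"]]

print([s["id"] for s in hand_picked])
print([s["id"] for s in face_studies])
```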

These ontologies involve a number of judgment calls (when created by hand) or create counterintuitive “nodes” (when created algorithmically). For example, the online tool for neurosynth lets you see maps from an automatically generated database of terms and activation locations. My favorite example of computer language processing run amok is the map of “Morris,” which, because a guy named Morris developed a way of testing spatial learning in rodents, gives you a nice, juicy hippocampal activation.

Improving this process is an active area of research, and one can do some awesome stuff, even given the current state of the art. For example, Laird et al. took data from about 2000 experiments, and pulled out a set of coherent patterns of activity, which they were then able to relate to cognitive tasks. Their results were comforting evidence that cognitive neuroscience is actually adding up to something. The identified networks, and their associated functions, are like an encyclopedia of the last twenty years of research — a bit over-generalized and lacking in detail, but overall a fair estimate of what we know in compact form.
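The general move in that paper, stripped of all the real machinery, looks something like the sketch below: stack per-experiment activation maps into an experiments-by-voxels matrix and decompose it into a small set of spatial components, which can then be related back to the task labels of the experiments that load on each component. The sketch uses random placeholder data and ICA from scikit-learn; it is not the authors’ actual pipeline.

```python
# Sketch of decomposing many experiments' activation maps into shared networks.
# Placeholder random data; not the pipeline used by Laird et al. (2011).
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
n_experiments, n_voxels, n_components = 200, 5000, 10

# Stand-in for modeled activation maps, one row per experiment.
activation = rng.normal(size=(n_experiments, n_voxels))

ica = FastICA(n_components=n_components, max_iter=500, random_state=0)
loadings = ica.fit_transform(activation)   # experiments x components
spatial_maps = ica.components_              # components x voxels

print(loadings.shape, spatial_maps.shape)
```

With real data, the rows of `spatial_maps` would be the candidate networks, and the `loadings` column for each network tells you which experiments (and hence which task labels) drive it.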

A closer look at these data, however, demonstrates a problem with meta-analysis that is generally inescapable: ontologies of cognitive tasks are anthropological facts about cognitive neuroscientists, and the conditions under which we observe the brain. For example, they found separate clusters for “language” and “speech,” with speech showing up as more similar to “music.” What’s going on there is that, although in the real world, when we are perceiving speech, it often comes in the form of words — sometimes even strung together into sentences — fMRI experiments about speech perception tend to involve people listening to streams of meaningless syllables, which are then the basis for some kind of meta-linguistic decision (“was that a /da/ or a /ga/?”), or else compared to a baseline of even more meaningless non-speech sounds. These tasks, in turn, are very similar to the tasks that are used to study music.

Some quick poking around with the neurosynth website shows that the brain activity for “speech perception” is indeed quite a bit more similar to music than to words or sentences. So the differences in task ontology have found their way onto the brain itself. This is because meta-analysis can’t help but reflect the systematic biases in the way experiments are designed. Consider: if every time we presented faces in the MR scanner, we also tickled the participants’ toes with a feather, we would have no way of knowing that the superior postcentral gyrus (i.e., “the toe area”) is not typically involved in face processing.
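That kind of comparison amounts to nothing fancier than a spatial correlation: flatten two meta-analytic maps over a common set of voxels and compute the Pearson correlation between them. The arrays below are random placeholders standing in for maps you might download from the neurosynth site.

```python
# Sketch of comparing meta-analytic maps by spatial correlation.
# The maps here are random placeholders, not real neurosynth downloads.
import numpy as np

rng = np.random.default_rng(1)
mask_voxels = 50_000                      # voxels inside a shared brain mask

speech_map = rng.normal(size=mask_voxels)
music_map = 0.6 * speech_map + rng.normal(size=mask_voxels)   # built to correlate
words_map = rng.normal(size=mask_voxels)                      # built to be unrelated

def spatial_corr(a, b):
    """Pearson correlation between two flattened brain maps."""
    return np.corrcoef(a, b)[0, 1]

print("speech vs music:", round(spatial_corr(speech_map, music_map), 2))
print("speech vs words:", round(spatial_corr(speech_map, words_map), 2))
```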

This means that even once the field has pushed the technical boundaries to their limit, and we have meta-analysis tools that can do for cognitive neuroscience what Fivethirtyeight has done for electoral politics, we will not be able to abandon experimentation and pursue cognitive neuroscience entirely by sitting in coffee shops with headphones on, coming up with new hypotheses and testing them against the mountain of existing data. It will be especially important to come up with experiments that challenge the assumptions that underlie the existing corpus of research, if we are going to figure out what we’ve been missing.

Image credit: Population cartogram from Mark Newman, pseudocolored for proportion of Democratic (blue) and Republican (red) presidential votes last week.

Laird, A. R., Fox, P. M., Eickhoff, S. B., Turner, J. A., Ray, K. L., McKay, D. R., Glahn, D. C., Beckmann, C. F., Smith, S. M., & Fox, P. T. (2011). Behavioral interpretations of intrinsic connectivity networks. Journal of Cognitive Neuroscience, 23(12), 4022-4037. PMID: 21671731

Yarkoni, T., Poldrack, R., Nichols, T., Van Essen, D., & Wager, T. (2011). Large-scale automated synthesis of human functional neuroimaging data. Nature Methods, 8(8), 665-670. DOI: 10.1038/nmeth.1635
