The stereotax and the tachistoscope (Reductionism, part II)

There is something pleasantly upside-down about the fact that I have spent the first part of this summer flying around the world by invitation to various scientific conferences undermining the methodological assumptions I was so keen to defend before my career had even begun. The main point of my “road talk” this summer is that our approach to understanding cognitive functions of the brain is overly constrained by two historical influences on our practice as scientists.

The first is the tradition in cognitive neuroscience of trying to characterize the function of brain regions in terms of their “best stimulus.” This evolved from foundational studies in visual neuroscience that form the basis of much of what we know about perceptual maps — probably still our best guess about how our brains represent the world around them.

Traditionally, such studies involve an anesthetized animal, a set of penetrating electrodes lodged in some cells somewhere in its visual cortex, and a screen on which precisely calibrated visual images can be displayed. In Hubel and Wiesel’s classic work, these were most often little line segments, presented at a particular orientation, in particular parts of the cat’s field of view.

If you do this patiently, and for a long time, moving from cell to cell, you will eventually identify a cell or population of cells that has a strong preference for particular combinations of orientation and location. That is, the electrical activity in the cell is strongest, under these conditions, when particular visual features are present in the stimulus array.
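Purely as an illustration (this is not Hubel and Wiesel’s procedure, and every number below is invented), here is a toy sketch in Python of what “finding the best stimulus” amounts to: sweep a parameter of the stimulus, record the response at each setting, and keep the setting that drives the cell hardest.

```python
import math

# Invented parameters for a simulated orientation-tuned cell
preferred, width, peak_rate = 45.0, 20.0, 60.0  # degrees, degrees, spikes/s

def firing_rate(orientation_deg):
    """Gaussian tuning curve around the cell's (hidden) preferred orientation."""
    d = orientation_deg - preferred
    return peak_rate * math.exp(-(d * d) / (2 * width * width))

# Present a bar at each candidate orientation and keep the one that
# evokes the strongest response -- this cell's "best stimulus."
orientations = range(0, 180, 15)
rates = {theta: firing_rate(theta) for theta in orientations}
best = max(rates, key=rates.get)
print(f"best orientation: {best} deg, rate: {rates[best]:.1f} spikes/s")
```

In effect, the experiment is an argmax over a small, hand-picked stimulus set, which is exactly where the trouble discussed below begins.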

There is a lot to unpack here, from the patience required to identify such cells (it turns out that they are actually a minority in primary visual cortex) to the notion of what ought, properly, to constitute a visual “feature,” or whether that concept does us any good outside of the context of these experiments and maybe robot vision. Suffice it to say that, like most elegant, beautiful, and deeply important insights about biology, it is probably a mistake to try to generalize too much from it.

Nonetheless, impressive progress has been made in learning about the human brain by adapting this approach to non-invasive brain imaging using technologies like fMRI. Surely there is something significant about the fact that we can replicably identify brain regions that appear selective to things like faces, written words, and visual scenes that establish our location in space, even if vital and important arguments remain about the normal functions of these regions — what they actually do.

We have to remember that before brain imaging was available, we could equally well have imagined that no such findings would be possible, that in fact our ability to identify and distinguish among even these broad classes of visual inputs is so broadly distributed throughout the brain and so idiosyncratic across individuals as to make any attempt to identify a “face area” a fool’s errand. Cognitive neuroscience exists in its current form because brain activity is selective enough, under particular circumstances, that the possible function of a “face area” (or, in the case of the work that I have focused on, a “visual word form area”) is something we can argue intelligently about. In this way it is clear that reductionism is at least productive.

One major problem, however, is that this stimulus selectivity is context-bound, and depends to a great extent on the goals of the organism as well as the complexity of the stimulus array. We shouldn’t be surprised, perhaps, that an anesthetized cat fixed to a stereotax is in some ways a poor model for an undergraduate volunteer in an fMRI scanner. In fact the problem is more general than even this glib remark suggests.

As it happens, the kinds of cells that have the response properties classically identified by Hubel and Wiesel are a minority of the cells in V1. Further, these response properties change pretty dramatically under more naturalistic viewing conditions. The response properties of a cell whose activity can be characterized as “simple” when a single bar is presented in a single location against a blank background turn out to be a poor predictor of how that same cell will respond when presented with natural scenes.

This brings me to the second historical influence on cognitive neuroscience: The tachistoscope. This was a device that allowed millisecond-accurate control over the presentation of stimuli (using a system of rotating discs and mechanical windows) and responses (which were collected with a telegraph key). You will sometimes still see tachistoscopic used as an adjective to describe experiments in which a computer screen and keyboard (and surprisingly complicated software) are used to achieve the same effect.

Generations of psychologists and cognitive scientists have relied on these methods in order to channel the infinite variety of mental activity into some sort of observable response to a discrete, quantifiable — or at least definable — stimulus. Most often, this has meant asking people to push one of two buttons at the end of a carefully constructed artificial “trial” in which some stimulus is presented. Alternatively, we often ask people to make a verbal response, and measure how quickly and accurately they can, for example, name the color of the “ink” when presented with a stimulus like this:

BLUE (imagine the word rendered in, say, red ink)
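To make the shape of such a “trial” concrete, here is a toy sketch in Python of a single Stroop-style trial reduced to a console prompt. The stimulus, the single-letter key mapping, and the timing code are all invented for illustration; real experiment software controls stimulus presentation and response collection with millisecond precision, which this does not.

```python
import time

# One incongruent Stroop-style trial: the word and its "ink" color conflict.
# The stimulus and key mapping are invented for illustration.
word, ink = "BLUE", "red"

print(f'Name the ink color of the word "{word}" (shown here in {ink} ink).')
print("Type r, g, or b and press Enter: ", end="", flush=True)

t0 = time.monotonic()
response = input().strip().lower()
rt_ms = (time.monotonic() - t0) * 1000  # reaction time in milliseconds

correct = response[:1] == ink[0]        # "r" counts as naming the red ink
print(f"response={response!r}  correct={correct}  RT={rt_ms:.0f} ms")
```

Everything such an experiment can say about the mind is funneled through the two numbers collected on those last lines: whether the response was correct, and how long it took.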

This has created a scientific tradition in which behavior, in order to be measured, has to be broken into discrete episodes, following a rhythmic volley between stimulus and response and providing data only at pre-determined points during the experiment. As with the tradition of identifying regions by “best stimulus,” there is really a lot to be said in favor of this approach. For starters, practically all scientific work on human cognition is based on it.

On the other hand, it has worked to narrow the focus of much research to encompass only those questions that can be answered in terms of these simple, unidimensional outputs.

There was concern about the impact this would have on cognitive neuroscience early on:

I wonder whether PET research so far has taken the methods of experimental psychology too seriously. In standard psychology we need to have the subject do some task with an externalizable yes-or-no answer so that we have some reaction times and error rates to analyze – those are our only data … I suspect that when you have people do some artificial task and look at their brains, the strongest activity you’ll see is in the parts of the brain that are responsible for doing artificial tasks.

– Steven Pinker, interview in the Journal of Cognitive Neuroscience, 1994

Now, there do turn out to be a set of areas that are commonly associated with “doing artificial laboratory tasks,” as well as a “default network” of regions that are reliably identified as less active when people are engaged in laboratory tasks than when they have no specific instructions and are simply lying in the scanner not doing anything in particular. At the same time, carefully designed studies can reveal much more than these regions at work, and any neuroimager with a little bit of experience could distinguish, just by looking at images of the brain at work, between patterns of activity related to, say, making decisions about written words and navigating a virtual maze.

Nonetheless, it is becoming increasingly clear that the kinds of tasks we ask people to do while we measure their brain activity can interact in extraordinary ways with the stimuli we ask them to process. These interactions have puzzling consequences for a description of the functions of brain regions in terms of their “best stimulus.” For example, in our own work, we have found that the “visual word form area” — so called because it can be identified as responding more strongly to words than to a variety of control stimuli under passive viewing conditions — evinces a wide variety of stimulus selectivity patterns under different task demands.

In one striking example, it showed just the opposite selectivity from what we would have predicted. Because I have a predilection for titles that are simple declarative sentences, we called the paper “Left fusiform BOLD responses are inversely related to word-likeness in a one-back task.” There’s some jargon here that’s worth unpacking: BOLD responses are just the signature of metabolic activity we observe using fMRI — they are obliquely related to blood flow, which is obliquely related to neural activity, making it sort of a miracle that fMRI works in the first place. The fusiform is an anatomical structure on the underside of the brain, about a third of the way forward, where the occipital lobe meets the temporal lobe, a small segment of which is famous for responding more strongly to words than to other kinds of control stimuli. So, the fact that we found responses that were strongest for the least word-like stimuli in our experiment, and weakest for actual words, was a bit of a surprise. We attributed this to the task we used, a “one-back” task in which participants had to monitor for stimuli that repeated from the previous trial.

That is, the less word-like a stimulus is, the more participants have to depend on purely visual information to hold it in memory, instead of other properties that are associated with words and word-like stimuli, such as their pronunciation or meaning. And indeed, we were able to show negative correlations between the “visual word form area” and regions associated with processing these other stimulus dimensions. Surely that can’t be what’s going on when people are actually reading? In order to isolate the function of a region, we had contrived a situation in which its function came untethered from its usual context. But of course it is just this context — what happens when people are actually reading — that we wish to understand. Just as simple cells can be characterized by presenting tiny line segments to anesthetized cats, it is clear that responding more strongly to words than to other stimuli is one characteristic of the left fusiform. But that can’t be a complete characterization of how the region functions. For that we need ways to study brain activity under more naturalistic conditions.

Luckily, techniques for doing just this are beginning to come online…
