Happy New Year! I let The Magnet quench over the holidays, and am firing it back up with something a little bit different. Over the next few weeks, I’ll be doing a longish series inspired by the “story behind the paper” posts on Tree of Life. The particular paper I’ll be writing about is this one, but it’s going to take me a while to get there…
I had two aha experiences in my first term of grad school that cemented my resolve to be a cognitive neuroscientist. When I started graduate school, I had decided, somewhat arbitrarily, to join the neuroscience program rather than the psychology program. My background was in cognitive psychology, and I’d already spent two years working in a lab on my senior thesis, and generally helping out with studies in which we made very finely tuned manipulations of word lists and measured minute differences in how rapidly people could read the words aloud. It sounds dry, but I was captivated by the questions you could ask by doing this kind of work, and I was eager to join the ranks of people who did it for a living. Nonetheless, when deciding which program to join, I went with neuroscience, in part because I felt the coursework would be more of a challenge — I had gone to a school where the undergraduate biology classes were mostly designed to weed out poor candidates for medical school, and had taken very few of them — and in part because I felt I would get more in the long term from exposure to the study of the brain than from the survey courses I would be required to take in psychology.
So, there I was in one of the informal talks first-year neuroscience students were required to attend, listening to a presentation about the role of a particular protein in synaptic plasticity. I could follow this. I knew that synapses are the points of communication between neurons, where information is passed via electrical and chemical signals, and that changes in the strength of these connections are a form of dynamic change throughout the nervous system, and, most interesting to me, that these changes seem to underlie learning and memory. So even though I was a bit murky on the details, I was keeping up reasonably well until the speaker said something along the lines of “we were interested in the role of this particular protein, so we made a mouse that didn’t express it.”
I didn’t understand this at first. What do you mean “made a mouse?” Like, a mutant? Whatever surprise this evoked was rapidly overwhelmed by a deeper, more intense astonishment that no one else in the room found this particularly interesting or novel. The technology to produce transgenic mice — especially “knockout” mice in which a small bit of genetic information is deleted in order to study how the organism gets along without the protein it codes for — was about ten years old and already more or less routine in experimental biology.
This was my first inkling that I had set foot into a very different culture from what I was used to. The subjects of my own research, at the time, were mostly undergraduates. These were people who had been born and (mostly) grown up and were studying psychology, and would come to our “lab” of their own volition. Our lab consisted of a little room that had been set up with a computer running special software that allowed us to present single words at the center of the screen and record button presses or spoken responses. At the end of our experiments, subjects would go back to their daily lives, perhaps a little bored, but generally none the worse for wear. That day’s speaker was describing research with subjects that were genetically engineered, raised, tested, killed, and then had their brains sliced into thin strips that were kept alive in dishes so that he could observe the effect of a particular bit of genetic material on the electrical responses in their neurons. I was developing a serious case of physics envy.
At the same time, I understood that we were a long way from applying these techniques or anything like them to questions I was really interested in — questions about how the brain gives rise to behavior, and how behavior and feedback from the environment in turn shape the brain. Because the mind is only one of the operations of the brain, just as entertainment is only one of the operations of a circus. Someone has to feed, water, and clean up after the animals, set up and break down the tent, handle the budget, make travel arrangements, make sure the clowns don’t run out of liquor, etc. Of course I had been dimly aware that in its way the brain is just another organ of the body, but it was yet another culture shock for me to discover other communities of hyperspecialists, for whom the mysteries of human experience (or minute portions of it, such as how much longer it takes to read the word PINT than it does to read the word MINT) were as remote from their daily work as if they were studying the kidney or the heart.
My second “aha” experience came in a systems neuroscience course, when the professor described an experiment that involved grafting an extra eye onto a tadpole. An interesting feature of mammalian primary visual cortex is that there is a pattern of “ocular dominance,” that is, greater input from one eye than the other, that, for example, forms a distinctive striped pattern when dye is injected into sections of the thalamus that represent a single eye. The functional significance of these ocular dominance columns is still not entirely clear, but they are very easy to observe with techniques that were available half a century ago, and were demonstrated very early on to be highly plastic under certain conditions during a critical period early in life.
One early idea about how these columns are formed has to do with competition among the incoming fibers that carry information from the two eyes. The idea is based on the fact that the biochemical processes that create and maintain connections between neurons are activity-dependent and time-sensitive. This suggests that a collection of inputs that are all active at the same time — such as the inputs representing a particular part of the retina in one eye — will form very stable, coherent connections. Because the input to the eyes is slightly different, there is more coincident activity among inputs from the same eye than among inputs from different eyes. This was advanced as an explanation for why, during the critical period, one could observe wholesale rearrangement of the visual cortex in response to the patching of one of the eyes. When you eliminate its competition, the eye that continues to get input can form connections willy-nilly, taking over space that had previously been allocated to its (now less active) competitor. But could it explain the formation of ocular dominance columns in the first place?
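The competitive logic here can be captured in a toy simulation. To be clear, this is a hypothetical sketch with made-up numbers, not the actual model from the developmental literature: give each target cell weighted connections from both eyes, strengthen connections Hebbian-style when input and output are active together, and enforce competition by holding each cell’s total synaptic weight fixed. Because activity is more correlated within an eye than between eyes, each cell drifts toward whichever eye happened to start out slightly stronger:

```python
import numpy as np

rng = np.random.default_rng(0)

n_cells = 20
eta = 0.02
# each target cell starts with roughly equal weights from the two eyes
w = rng.uniform(0.45, 0.55, size=(n_cells, 2))
w /= w.sum(axis=1, keepdims=True)

for _ in range(5000):
    # on each step one eye fires strongly and the other weakly,
    # so activity is more coincident within an eye than between eyes
    if rng.integers(2) == 0:
        x = np.array([1.0, 0.2])
    else:
        x = np.array([0.2, 1.0])
    y = w @ x                                     # postsynaptic response
    w += eta * np.outer(y, x)                     # Hebbian strengthening
    w -= (w.sum(axis=1, keepdims=True) - 1) / 2   # fixed total weight = competition
    w = np.clip(w, 0.0, None)                     # weights cannot go negative
    w /= w.sum(axis=1, keepdims=True)

dominance = np.abs(w[:, 0] - w[:, 1])             # 1.0 = fully monocular cell
```

After training, nearly every cell ends up dominated by one eye, even though both eyes delivered the same total activity. The normalization step matters: because the total weight per cell is held fixed by subtraction, the balanced 50/50 state is unstable and tiny initial biases get amplified, which is the essence of the competition story sketched above.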
One way to test this is to do some elective surgery on frogs, which normally have no such alternating representation in their early visual system. In fact, frogs’ eyes are wired up so that the equivalent of primary visual cortex only ever gets input from one eye. So, in order to create competitive dynamics and see if this led to ocular dominance structure, scientists essentially stuck an extra eye onto a tadpole and demonstrated that it innervated the optic tectum along with the normal eye. What they observed was an alternating pattern of connectivity very similar to what is seen in mammalian primary visual cortex. They had demonstrated that competition between the extra eye and its native counterpart was sufficient to induce a pattern of alternating ocular dominance.
This was actually quite an old study even when I first learned of it, but the level of intervention was beyond anything I had contemplated as a psychology student. The invasiveness and control afforded by animal models means that the kinds of questions you can ask in neuroscience are just at a completely different level of description than what is available in much of psychology. Although we can’t go around knocking out genes to see what effects they have in people, or raising children in caves to find out at what age they irreversibly lose the ability to learn language (or whether they would spontaneously speak an ancient tongue as hypothesized by Frederick II or Psamtik I), cognitive neuroscience tries to bridge that gap with non-invasive imaging techniques that show us patterns of brain activity that are related to particular behaviors or states. That seemed pretty awesome, and I wanted in…
Image credit: Lovely infographic of the 3-eyed frog study from a review by Katz and Crowley. The original paper is:
Constantine-Paton, M., & Law, M. (1978). Eye-specific termination bands in tecta of three-eyed frogs. Science, 202(4368), 639-641. DOI: 10.1126/science.309179