A Huge New Data Set Pushes the Limits of Neuroscience

So neuroscientists use an approach called “dimensionality reduction” to make such visualization possible—they take data from thousands of neurons and, by applying clever techniques from linear algebra, describe their activities using just a few variables. This is just what psychologists did in the 1990s to define their five major domains of human personality: openness, agreeableness, conscientiousness, extroversion, and neuroticism. Just by knowing how an individual scored on those five traits, they found, they could effectively predict how that person would answer hundreds of questions on a personality test.
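
To give a feel for what this involves, here is a minimal sketch in Python using principal component analysis, one of the most common dimensionality-reduction techniques. The simulated recording, the array sizes, and the choice of five components are illustrative assumptions, not details of the Allen Institute's pipeline or any particular study.

```python
# A minimal sketch of dimensionality reduction on simulated neural data.
# All shapes and names here are illustrative assumptions.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)

# Pretend recording: 2,000 neurons observed at 500 time points,
# secretly driven by just 5 shared activity patterns ("motifs").
n_neurons, n_timepoints, n_motifs = 2000, 500, 5
motifs = rng.normal(size=(n_timepoints, n_motifs))      # latent variables over time
mixing = rng.normal(size=(n_motifs, n_neurons))         # how each neuron reflects each motif
activity = motifs @ mixing + 0.5 * rng.normal(size=(n_timepoints, n_neurons))

# Describe thousands of neurons with a handful of variables.
pca = PCA(n_components=5)
low_dim = pca.fit_transform(activity)                   # shape: (500 time points, 5 components)

print(low_dim.shape)
print("variance explained:", pca.explained_variance_ratio_.round(3))
```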

But the variables extracted from neural data can’t be expressed in a single word like “openness.” They are more like motifs, patterns of activity that span whole neural populations. A few of these motifs can define the axes of a plot, wherein every point represents a different combination of those motifs—its own unique activity profile.

There are downsides to reducing data from thousands of neurons to just a few variables. Just as taking a 2D image of a 3D cityscape renders some buildings totally invisible, cramming a complex set of neuronal data into only a few dimensions eliminates a great deal of detail. But working in a few dimensions is much more manageable than examining thousands of individual neurons at once. Scientists can plot evolving activity patterns on the axes defined by the motifs to watch how the neurons' behavior changes over time. This approach has proven especially fruitful in the motor cortex, a region where confusing, unpredictable single-neuron responses had long flummoxed researchers. Viewed collectively, however, the neurons trace regular, often circular trajectories, and features of those trajectories correlate with particular aspects of movement; their location, for example, is related to speed.
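
One way to picture those trajectories is sketched below: simulated population activity is projected onto its first two components and the points are traced over time. The rotating latent signal is an assumption made purely to mimic the kind of looping structure described above, not data from motor cortex.

```python
# Sketch: watch low-dimensional population activity evolve over time.
# The rotating latent signal is simulated; it stands in for real recordings.
import numpy as np
import matplotlib.pyplot as plt
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
t = np.linspace(0, 2 * np.pi, 300)

# 1,000 simulated neurons whose shared activity rotates through one cycle.
latents = np.column_stack([np.sin(t), np.cos(t)])        # (300 time points, 2 motifs)
mixing = rng.normal(size=(2, 1000))
activity = latents @ mixing + 0.3 * rng.normal(size=(300, 1000))

# Project onto the first two components and trace the trajectory.
trajectory = PCA(n_components=2).fit_transform(activity)  # (300, 2)

plt.plot(trajectory[:, 0], trajectory[:, 1])
plt.xlabel("component 1")
plt.ylabel("component 2")
plt.title("Population activity traced over time")
plt.show()
```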

Olsen says he expects that scientists will use dimensionality reduction to extract interpretable patterns from the complex data. “We can’t go neuron by neuron,” he says. “We need statistical tools, machine learning tools, that can help us find structure in big data.”

But this vein of research is still in its early days, and scientists struggle to agree on what the patterns and trajectories mean. “People fight all the time about whether these things are factual,” says John Krakauer, professor of neurology and neuroscience at Johns Hopkins University. “Are they real? Can they be interpreted as easily [as single-neuron responses]? They don’t feel as grounded and concrete.”

Bringing these trajectories down to earth will require developing new analytical tools, says Churchland—a task that will surely be facilitated by the availability of large-scale data sets like the Allen Institute’s. And the unique capacities of the institute, with its deep pockets and huge research staff, will enable it to produce greater masses of data to test those tools. The institute, Olsen says, functions like an astronomical observatory—no single lab could pay for its technologies, but the entire scientific community benefits from, and contributes to, its experimental capabilities.

Currently, he says, the Allen Institute is piloting a system in which scientists from across the research community can suggest what stimuli animals should be shown, and what tasks they should perform, while thousands of their neurons are being recorded. As recording capacities continue to increase, researchers are working to devise richer and more realistic experimental paradigms, to observe how neurons respond to the sorts of challenging, real-world tasks that push their collective capabilities. “If we really want to understand the brain, we cannot keep just showing oriented bars to the cortex,” Fusi says. “We really need to move on.”
