The eyes are a window to the soul, poets tell us. And for Nikolaus Kriegeskorte, PhD, our sense of vision is a window into how our brains compute.
“When we open our eyes, we have an immediate sense of the scene we’re in, the objects around us and how they might help us accomplish our goals,” says Dr. Kriegeskorte, a neuroscientist at Columbia’s Mortimer B. Zuckerman Mind Brain Behavior Institute. “Vision feels effortless. But, under the hood, billions of neurons burn a lot of energy to give us this instant sense of our surroundings. How they accomplish this is still a computational mystery.”
To solve this computational mystery, Dr. Kriegeskorte is seeking help from new machines inspired by the brain: algorithms called deep neural network models. A form of artificial intelligence (AI), neural network models are composed of many small computing elements: highly simplified, artificial ‘neurons’ that pass information from one layer in a hierarchy to the next. Their deep hierarchy mirrors the brain’s own organization in the visual system, with layers corresponding to areas whose neurons represent and interpret the image at ever higher levels of abstraction.
Although simpler than a living brain, the systems built by Dr. Kriegeskorte and his team can mimic biology with striking accuracy. Like biological neurons in the primary visual cortex, the first cortical stage of visual representation, some nodes in the researchers’ neural network models respond to simple features in an image, such as vertical or horizontal edges. Other nodes higher up the hierarchy combine these responses to detect more complex shapes, just as biological neurons do in higher-level visual areas of the brain.
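This layered scheme can be sketched in a few lines of code. The sketch below is purely illustrative (it is not the lab's actual model): a first "layer" of units applies simple vertical- and horizontal-edge filters to a tiny image, and a second, higher unit pools those responses to decide whether a vertical edge is present anywhere in the scene.

```python
import numpy as np

def convolve2d(image, kernel):
    """Valid-mode 2-D filtering: slide the kernel over the image."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# Layer 1: simple edge detectors, loosely like neurons in primary visual cortex.
vertical_edge = np.array([[1, 0, -1]] * 3, dtype=float)  # responds to vertical contrast
horizontal_edge = vertical_edge.T                        # responds to horizontal contrast

# A toy 5x5 image: bright left half, dark right half (one vertical edge).
image = np.array([[1, 1, 1, 0, 0]] * 5, dtype=float)

v_response = np.maximum(convolve2d(image, vertical_edge), 0)   # rectified responses
h_response = np.maximum(convolve2d(image, horizontal_edge), 0)

# Layer 2: a "higher" unit pools layer-1 responses over space to detect
# the presence of a vertical edge anywhere in the image.
vertical_detected = v_response.sum() > h_response.sum()
print(vertical_detected)  # True: the image contains a vertical edge
```

Real deep networks stack many such stages and learn their filters from data, but the principle is the same: each layer combines the previous layer's responses into more complex features.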
To test these models, the researchers conduct experiments with people in the lab. A person might be asked to perform a task, such as recognizing a specific type of scene or object, while their brain activity is measured with an MRI scanner. If a computer model behaves similarly to the person — making the same correct inferences and the same mistakes — the model passes the first test.
Dr. Kriegeskorte then studies how well the model predicts the brain-activity patterns measured in humans. “Our models must be able to perform the task — to fail and succeed — similarly,” he says, “and predict the brain-activity patterns we measure.” He shows the same images and movies to both the human participants and the neural network models and compares their internal representations of those scenes.
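One widely used way to compare internal representations, which Dr. Kriegeskorte helped develop, is representational similarity analysis: for each system, compute how dissimilar its activity patterns are for every pair of stimuli, then ask whether the model and the brain agree on which stimuli look alike. The sketch below uses random stand-in data (the sizes and patterns are hypothetical) just to show the shape of the computation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: activity patterns evoked by 6 stimuli (images) in
# a model layer (20 units) and in a brain area (50 fMRI voxels).
model_patterns = rng.normal(size=(6, 20))
brain_patterns = rng.normal(size=(6, 50))

def rdm(patterns):
    """Representational dissimilarity matrix: 1 minus the correlation
    between the activity patterns evoked by each pair of stimuli."""
    return 1 - np.corrcoef(patterns)

def compare_rdms(rdm_a, rdm_b):
    """Correlate the upper triangles of two RDMs: how similar are the
    two systems' representational geometries?"""
    iu = np.triu_indices_from(rdm_a, k=1)
    return np.corrcoef(rdm_a[iu], rdm_b[iu])[0, 1]

score = compare_rdms(rdm(model_patterns), rdm(brain_patterns))
print(f"model-brain representational similarity: {score:.2f}")
```

Because the comparison happens at the level of representational geometry rather than individual neurons, it works even though a model unit and a brain voxel measure very different things.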
As computers and algorithms have become more advanced in recent years, so too have the scientists’ models. At first, they could reproduce only the primitive behavior of the brain’s primary visual cortex. But now the models can simulate the activity of higher areas of the visual system, like the inferior temporal cortex, which enables us to recognize what we see. AI models, including deep neural networks, are also conquering other feats of intelligence, such as reasoning, language processing and motor control. “This opens up a whole new field, what we are calling cognitive computational neuroscience,” says Dr. Kriegeskorte, who is also professor of neuroscience and the director of cognitive imaging at Columbia University.
In addition to understanding biological vision, Dr. Kriegeskorte wants to further improve AI models using the biological insights gained by his Zuckerman Institute colleagues. Neurons in the brain pass information not only up the hierarchy, but also laterally within each cortical area, as well as down the hierarchy. Incorporating the way information flows in the brain could endow neural networks with more sophisticated powers, like the ability to base inferences not just on the current sensory information but also on past inferences stored in memory.
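The difference between a purely feedforward network and one with lateral and top-down connections can be sketched as a small recurrent loop. Everything below is a toy illustration with made-up sizes and random weights: each area's activity at a given time step depends on feedforward input, lateral signals within the area, and feedback from the area above.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical sizes: an input "image" and two processing stages.
n_in, n1, n2 = 8, 6, 4
W_ff1 = rng.normal(scale=0.3, size=(n1, n_in))  # feedforward: input -> area 1
W_ff2 = rng.normal(scale=0.3, size=(n2, n1))    # feedforward: area 1 -> area 2
W_lat = rng.normal(scale=0.1, size=(n1, n1))    # lateral: within area 1
W_fb  = rng.normal(scale=0.1, size=(n1, n2))    # feedback: area 2 -> area 1

relu = lambda x: np.maximum(x, 0)

x = rng.normal(size=n_in)   # the current sensory input
a1 = np.zeros(n1)           # activity in area 1
a2 = np.zeros(n2)           # activity in area 2

# Unroll the recurrent dynamics over a few time steps: the network's
# state carries information forward, so later inferences can build on
# earlier ones rather than starting from the raw image each time.
for t in range(5):
    a1 = relu(W_ff1 @ x + W_lat @ a1 + W_fb @ a2)
    a2 = relu(W_ff2 @ a1)
```

Setting `W_lat` and `W_fb` to zero recovers the standard one-pass feedforward network; making them nonzero is one simple way to let past inferences shape current ones, in the spirit of the brain-inspired extensions described above.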
“Vision is not a one-way street from image to insight. We bring complex prior knowledge about the world into our interpretation of the signals received by the retina,” says Dr. Kriegeskorte. “For example, we can tell what an object looks like even if most of it is obscured or hidden. We can infer what’s about to happen and what implications it has for the future.”
Columbia’s Zuckerman Institute helps to build the bridges that further this work, says Dr. Kriegeskorte, who joined the institute in 2017. “All the levels of neuroscience are present here,” he says. On the one hand, there are researchers focused on the scale of individual neurons or groups of neurons. On the other, there are people looking at a much larger scale: the intelligent behavior of animals and humans. There are experimentalists, observing and measuring brain and behavior, and there are theorists, modeling what the brain does with math. “We have one foot in experiment and the other in theory,” he says of his lab, “and we’re trying to better link the two.”
Linking experiment to theory across spatial scales also motivated Dr. Kriegeskorte to cofound a new annual conference on Cognitive Computational Neuroscience. The conference brings together cognitive scientists (who study intelligent behavior and how it can be broken down into simpler components), computational neuroscientists (who study how neurons can implement these elementary components) and AI researchers (who study how intelligent behavior can be engineered).
“I think the pace of brain science is speeding up,” he says. “These separate communities are coming together to build a common vision.”