Elias Issa is using data from neural experiments to develop computational models that he hopes will help reveal how our brains turn visual information into intelligent behavior.
Elias Issa, PhD, recalls a long afternoon in 2014 that set him on his current research path. With markers and a whiteboard, he and a fellow postdoc, Charles Cadieu—both then at MIT—were improvising with math. They were struggling to develop a set of equations, an algorithm, that could represent the patterns of activity of face-detecting brain cells the two had been eavesdropping on in animal studies.
Then came the moment.
“After about four hours, the math on the board suddenly worked out nicely,” Dr. Issa recalls. “I saw the power of computational neuroscience: putting math to noisy data from the brain to better understand what is going on.”
A principal investigator at Columbia’s Zuckerman Institute since 2017, Dr. Issa combines laboratory studies of cells in the brain’s visual system with an engineer’s drive to craft crisp mathematical descriptions that faithfully represent the lab data.
With a PhD in biomedical engineering and an admiration since boyhood of inventors like Thomas Edison and George Washington Carver, Dr. Issa also has an eye on what foundational research can make possible, from improved artificial intelligence to neural prosthetics for those with sensory deficits.
His lab listens in on the activity of brain cells to study how visual signals feed into intelligent judgments (such as whether a face looks familiar) and complex behaviors (such as navigating a furniture-filled room). His team then puts math to the data, in search of more aha! moments like that one back in 2014 with his fellow postdoc. When he arrives at equations that use measurable properties of the brain (for example, the firing rates of vision-system neurons) to explain observed laboratory results (for example, changes in those firing rates in response to familiar or novel objects), Dr. Issa knows he’s having one of those moments.
“If we can understand mathematically how our brains represent everything we see—whether it’s letters or 3D shapes like dogs—then how we think and reason on top of that will become easier to understand,” he says.
Dr. Issa’s research is anchored in machine learning. This type of artificial intelligence enables computers to execute vision-like tasks such as identifying cats, trucks and other specific objects in digital images.
“Machine learning provides a rich framework for asking questions about the nature of vision-related computations in the brain,” Dr. Issa says.
By analyzing machine-learning processes, Dr. Issa develops hypotheses about how the brain processes visual information. Based on those hypotheses, he then devises computational models of vision tasks—for example, how the two-dimensional input reaching our flat retinas is built up into perceptions of three-dimensional scenes. Those models, in turn, translate into quantitative predictions about how real neural circuits in animals like us achieve those vision tasks, predictions he tests with experimental studies. Get a handle on vision this way, Dr. Issa contends, and you take a step toward cracking the neural codes our brains deploy to understand the world and what we can do in it.
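A common way to turn a model into a testable, quantitative prediction—widely used in this corner of computational neuroscience, though not necessarily Dr. Issa's exact method—is to fit a regression from a model's internal features to a recorded neuron's firing rates, then score how well the fit predicts responses to held-out images. The sketch below illustrates the idea with simulated data; every variable name and number here is hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: features from one layer of a vision model, and the
# firing rates of one recorded neuron, for the same 200 images.
n_images, n_features = 200, 50
model_features = rng.normal(size=(n_images, n_features))

# Simulate a neuron whose rate is a noisy linear readout of the model's
# features -- the hypothesis a real experiment would put to the test.
true_weights = rng.normal(size=n_features)
firing_rates = model_features @ true_weights + rng.normal(scale=0.5, size=n_images)

# Hold out the last 50 images for testing.
train, test = slice(0, 150), slice(150, 200)

# Fit a ridge regression (closed form) from features to firing rates.
lam = 1.0
X, y = model_features[train], firing_rates[train]
w = np.linalg.solve(X.T @ X + lam * np.eye(n_features), X.T @ y)

# Neural predictivity: correlation between predicted and measured
# firing rates on the held-out images.
pred = model_features[test] @ w
r = np.corrcoef(pred, firing_rates[test])[0, 1]
print(f"held-out predictivity r = {r:.2f}")
```

A model that captures what the neuron computes should score high on images it was never fit to; a model with the wrong features will not, no matter how well it fits the training set. That train/test split is what makes the comparison a genuine prediction rather than a curve fit.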
Dr. Issa hopes his research will tease out, with mathematical precision, the complexities by which vision elicits intelligent human behavior. He says his work also could further the technological goal of building human-like visual intuition into artificial intelligence systems. “These are some of the ways that a neuroscientist like me might contribute to society,” Dr. Issa says.