This event will be in-person only, open to Columbia University affiliates, and will not offer a Zoom option.
Jennifer Groh, PhD
Professor of Psychology and Neuroscience
Departments of Psychology and Neuroscience; Neurobiology; Computer Science; and Biomedical Engineering
Computing the locations of sound(s) in the visual scene
How the auditory system encodes the locations of sounds involves rich computational problems. I will focus on two computations in particular. (1) How does the brain compute the visual locations of sounds across eye movements? We recently discovered that the eardrums move when the eyes move, suggesting that the brain sends a copy of eye movement commands to the ear, potentially altering sound transduction with each eye movement (Gruters, Murphy et al., PNAS 2018; Lovich et al., bioRxiv 2022; Lovich et al., Phil Trans B 2023). (2) How does the brain encode more than one sound (or visual) location at a time? I will discuss evidence for neural time-division multiplexing, in which neural activity fluctuates across time, allowing a representation to encode more than one simultaneous stimulus (Caruso et al., Nat Commun 2018; Jun et al., eLife 2022). These findings all emerged from experimentally testing computational models of spatial representations and their transformations within and across sensory pathways. They also speak to several general problems confronting modern neuroscience, such as the hierarchical organization of brain pathways, selectivity of processing, and limits on perception and cognition.
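To give a feel for the time-division multiplexing idea in the abstract, here is a minimal sketch in Python. All numbers (firing rates, bin size, trial length) are hypothetical illustrations, not values from the talk or the cited papers: a simulated neuron switches between the firing rate it would give to each of two stimuli alone, so its average rate looks intermediate while the bin-by-bin activity still carries both stimuli.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical firing rates (spikes/s) the neuron would give
# to stimulus A or stimulus B presented alone.
RATE_A, RATE_B = 40.0, 10.0
BIN_S = 0.05    # 50 ms time bins
N_BINS = 400    # bins in the simulated trial

# Time-division multiplexing: in each bin the neuron represents
# one of the two simultaneous stimuli and fires at that rate.
which = rng.integers(0, 2, size=N_BINS)        # 0 -> A, 1 -> B
rates = np.where(which == 0, RATE_A, RATE_B)
counts = rng.poisson(rates * BIN_S)            # Poisson spike counts

# Averaging over the whole trial hides the two stimuli behind one
# intermediate rate; splitting bins by state recovers both.
mean_rate = counts.mean() / BIN_S
rate_a_hat = counts[which == 0].mean() / BIN_S
rate_b_hat = counts[which == 1].mean() / BIN_S
print(f"overall mean rate: {mean_rate:.1f} spikes/s")
print(f"per-state rates:   A~{rate_a_hat:.1f}, B~{rate_b_hat:.1f} spikes/s")
```

In a real experiment the per-bin state is not known, so the fluctuations must be inferred from the spike counts themselves (e.g., from a bimodal count distribution), but the sketch shows why trial-averaged rates alone can miss a multiplexed code.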
Host(s): Sarah Woolley (Faculty) and Danique Jeurissen (Associate Research Scientist).
Please contact [email protected] with any questions.
The Columbia Neuroscience Seminar series is a collaborative effort of Columbia's Zuckerman Institute, the Department of Neuroscience, the Doctoral Program in Neurobiology and Behavior, and the Columbia Translational Neuroscience Initiative, with support from the Kavli Institute for Brain Science.