Technical Program

Paper Detail

Paper: PS-1B.71
Session: Poster Session 1B
Location: H Fläche 1.OG
Session Time: Saturday, September 14, 16:30 - 19:30
Presentation Time: Saturday, September 14, 16:30 - 19:30
Presentation: Poster
Publication: 2019 Conference on Cognitive Computational Neuroscience, 13-16 September 2019, Berlin, Germany
Paper Title: Temporal Context Invariance Reveals Neural Processing Timescales in Human Auditory Cortex
License: This work is licensed under a Creative Commons Attribution 3.0 Unported License.
Authors: Sam Norman-Haignere, Laura Long, Columbia University, United States; Orrin Devinsky, Werner Doyle, NYU Langone Medical Center, United States; Guy McKhann, Catherine Schevon, Columbia University Medical Center, United States; Adeen Flinker, NYU Langone Medical Center, United States; Nima Mesgarani, Columbia University, United States
Abstract: Natural stimuli like speech and music are structured at many timescales. But it remains unclear how these diverse timescales are neurally coded. Do neural processing timescales increase along the cortical hierarchy? Are there distinct timescales for particular stimulus categories? What information is coded at each timescale? Answering these questions has been challenging because there is no general method for estimating sensory integration periods: the temporal window within which stimulus features alter the neural response. Here, we introduce a simple experimental paradigm for inferring the integration period of any time-varying response. We present segments of natural stimuli in a sequence, such that the same segment occurs in two different contexts (different surrounding segments). We then measure how long the segments need to be for the response to become invariant to the context. We apply this paradigm to map temporal integration periods in human auditory cortex using electrocorticography data from epilepsy patients. Our map reveals a clear gradient in which integration periods grow as one moves away from primary auditory cortex, providing support for hierarchical models. We also show that selectivity for sound categories first emerges at timescales of ~200 ms, approximately the duration of speech syllables and musical notes.
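The logic of the paradigm can be illustrated with a toy simulation. The sketch below is an assumption-laden illustration, not the authors' analysis code: it models a neural response as a simple moving average of the stimulus (a hypothetical linear integrator with a fixed window), presents the same segments in two shuffled orders (two contexts), and computes the cross-context correlation of the responses to shared segments. When segments are much longer than the integration window, the response becomes context-invariant and the correlation approaches 1.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_response(stimulus, integration_samples):
    # Hypothetical linear integrator: each response sample is the mean of
    # the stimulus over a window of `integration_samples` samples.
    kernel = np.ones(integration_samples) / integration_samples
    return np.convolve(stimulus, kernel, mode="same")

def context_invariance(segment_len, integration_samples, n_segments=50):
    """Correlation of responses to the same segments heard in two contexts."""
    # Same segments, two shuffled orders -> two different surrounding contexts.
    segments = [rng.standard_normal(segment_len) for _ in range(n_segments)]
    order_a = rng.permutation(n_segments)
    order_b = rng.permutation(n_segments)
    seq_a = np.concatenate([segments[i] for i in order_a])
    seq_b = np.concatenate([segments[i] for i in order_b])
    resp_a = simulate_response(seq_a, integration_samples)
    resp_b = simulate_response(seq_b, integration_samples)
    # pos_a[i] = position of segment i within sequence A (same for B).
    pos_a = np.argsort(order_a)
    pos_b = np.argsort(order_b)
    ra = np.concatenate([resp_a[pos_a[i] * segment_len:(pos_a[i] + 1) * segment_len]
                         for i in range(n_segments)])
    rb = np.concatenate([resp_b[pos_b[i] * segment_len:(pos_b[i] + 1) * segment_len]
                         for i in range(n_segments)])
    # High correlation = response to a segment is invariant to its context.
    return np.corrcoef(ra, rb)[0, 1]

# Invariance rises as segments grow longer than the 20-sample window,
# which is how the paradigm reveals the underlying integration period.
for seg_len in (5, 20, 80, 320):
    print(seg_len, round(context_invariance(seg_len, 20), 2))
```

Sweeping the segment duration and finding where the correlation saturates gives an estimate of the integration period, mirroring the paper's approach of measuring how long segments must be before the measured response becomes context-invariant.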