Speech categorization reveals the role of early-stage temporal-coherence processing in auditory scene analysis

2021
Author(s): Vibha Viswanathan, Barbara G. Shinn-Cunningham, Michael G. Heinz

Abstract
Temporal coherence of sound fluctuations across spectral channels is thought to aid auditory grouping and scene segregation. Although prior studies on the neural bases of temporal-coherence processing focused mostly on cortical contributions, neurophysiological evidence suggests that temporal-coherence-based scene analysis may start as early as the cochlear nucleus (i.e., the first auditory region supporting cross-channel processing over a wide frequency range). Accordingly, we hypothesized that aspects of temporal-coherence processing that could be realized in early auditory areas may shape speech understanding in noise. We then explored whether physiologically plausible computational models could account for results from a behavioral experiment that measured consonant categorization in different masking conditions. We tested whether within-channel masking of target-speech modulations predicted consonant confusions across the different conditions, and whether predicted performance was improved by adding across-channel temporal-coherence processing mirroring the computations known to exist in the cochlear nucleus. Consonant confusions provide a rich characterization of error patterns in speech categorization, and are thus crucial for rigorously testing models of speech perception; however, to the best of our knowledge, they have not been utilized in prior studies of scene analysis. We find that within-channel modulation masking can reasonably account for category confusions, but that it fails when temporal fine structure (TFS) cues are unavailable. However, the addition of across-channel temporal-coherence processing significantly improves confusion predictions across all tested conditions. Our results suggest that temporal-coherence processing strongly shapes speech understanding in noise, and that physiological computations that exist early along the auditory pathway may contribute to this process.
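The across-channel temporal-coherence idea can be illustrated with a minimal sketch: channels whose envelope fluctuations are strongly correlated over time are grouped as belonging to the same source. This is a conceptual toy, not the authors' model; the cochlear filterbank stage is omitted, and the synthetic envelope signals, the 0.7 coherence threshold, and the greedy grouping rule are all illustrative assumptions.

```python
import numpy as np

def coherence_matrix(envelopes):
    """Pairwise correlation of envelope fluctuations across spectral channels.

    `envelopes` is (n_channels, n_samples). Channels driven by a common
    source fluctuate together and so show high pairwise correlation.
    """
    env = envelopes - envelopes.mean(axis=1, keepdims=True)
    norms = np.linalg.norm(env, axis=1, keepdims=True)
    env = env / np.where(norms == 0, 1, norms)
    return env @ env.T

def group_channels(coh, threshold=0.7):
    """Greedy grouping: a channel joins a group if its envelope is
    sufficiently coherent with the group's seed channel.
    (The threshold and greedy rule are illustrative assumptions.)"""
    n = coh.shape[0]
    labels = -np.ones(n, dtype=int)
    group = 0
    for seed in range(n):
        if labels[seed] >= 0:
            continue
        labels[seed] = group
        for other in range(seed + 1, n):
            if labels[other] < 0 and coh[seed, other] >= threshold:
                labels[other] = group
        group += 1
    return labels

# Two synthetic "sources": a 4 Hz amplitude modulation shared by
# channels 0-2, and an independent 7 Hz modulation shared by channels 3-4.
t = np.linspace(0, 1, 1000, endpoint=False)
src_a = 1 + np.sin(2 * np.pi * 4 * t)
src_b = 1 + np.sin(2 * np.pi * 7 * t)
envs = np.vstack([src_a, src_a * 0.8, src_a * 1.2, src_b, src_b * 0.5])

coh = coherence_matrix(envs)
labels = group_channels(coh)
```

Here `labels` assigns channels 0-2 to one group and channels 3-4 to another, because scaled copies of the same modulation correlate perfectly while the 4 Hz and 7 Hz modulations are uncorrelated over the one-second window.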

Related articles:

2016, Vol 10
Author(s): Beáta T. Szabó, Susan L. Denham, István Winkler

eLife, 2013, Vol 2
Author(s): Andrew R Dykstra, Alexander Gutschalk
Using computational models and stimuli that resemble natural acoustic signals, auditory scientists explore how we segregate competing streams of sound.

2011, Vol 34 (3), pp. 114-123
Author(s): Shihab A. Shamma, Mounya Elhilali, Christophe Micheyl
