Temporal Coherence Principle in Scene Analysis

Author(s):  
Shihab Shamma ◽  
Mounya Elhilali
2011 ◽  
Vol 34 (3) ◽  
pp. 114-123 ◽  
Author(s):  
Shihab A. Shamma ◽  
Mounya Elhilali ◽  
Christophe Micheyl

2005 ◽  
Vol 16 (2-3) ◽  
pp. 223-238 ◽  
Author(s):  
Joerg Hipp ◽  
Wolfgang Einhäuser ◽  
Jörg Conradt ◽  
Peter König

2011 ◽  
Vol 181 (16) ◽  
pp. 3284-3307 ◽  
Author(s):  
YaPing Huang ◽  
JiaLi Zhao ◽  
YunHui Liu ◽  
SiWei Luo ◽  
Qi Zou ◽  
...  

2021 ◽  
Author(s):  
Vibha Viswanathan ◽  
Barbara G. Shinn-Cunningham ◽  
Michael G. Heinz

Abstract
Temporal coherence of sound fluctuations across spectral channels is thought to aid auditory grouping and scene segregation. Although prior studies on the neural bases of temporal-coherence processing focused mostly on cortical contributions, neurophysiological evidence suggests that temporal-coherence-based scene analysis may start as early as the cochlear nucleus (i.e., the first auditory region supporting cross-channel processing over a wide frequency range). Accordingly, we hypothesized that aspects of temporal-coherence processing that could be realized in early auditory areas may shape speech understanding in noise. We then explored whether physiologically plausible computational models could account for results from a behavioral experiment that measured consonant categorization in different masking conditions. We tested whether within-channel masking of target-speech modulations predicted consonant confusions across the different conditions, and whether predicted performance was improved by adding across-channel temporal-coherence processing mirroring the computations known to exist in the cochlear nucleus. Consonant confusions provide a rich characterization of error patterns in speech categorization, and are thus crucial for rigorously testing models of speech perception; however, to the best of our knowledge, they have not been utilized in prior studies of scene analysis. We find that within-channel modulation masking can reasonably account for category confusions, but that it fails when temporal fine structure (TFS) cues are unavailable. However, the addition of across-channel temporal-coherence processing significantly improves confusion predictions across all tested conditions. Our results suggest that temporal-coherence processing strongly shapes speech understanding in noise, and that physiological computations that exist early along the auditory pathway may contribute to this process.
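The cross-channel cue the abstract appeals to can be illustrated with a minimal sketch (not the authors' model): temporal coherence between two frequency channels measured as the normalized correlation of their amplitude envelopes. All signals and parameters below are assumed for illustration only.

```python
import math

def envelope_correlation(env_a, env_b):
    """Pearson correlation of two same-length amplitude-envelope sequences."""
    n = len(env_a)
    ma = sum(env_a) / n
    mb = sum(env_b) / n
    cov = sum((a - ma) * (b - mb) for a, b in zip(env_a, env_b))
    va = sum((a - ma) ** 2 for a in env_a)
    vb = sum((b - mb) ** 2 for b in env_b)
    return cov / math.sqrt(va * vb)

# Two channels sharing a 4 Hz envelope fluctuation are temporally coherent
# and would be grouped into one auditory stream; a channel fluctuating at an
# unrelated rate (7 Hz) is not, and would be segregated.
t = [i / 100.0 for i in range(100)]
coherent_a = [1.0 + math.sin(2 * math.pi * 4 * x) for x in t]
coherent_b = [2.0 + 0.5 * math.sin(2 * math.pi * 4 * x) for x in t]
independent = [1.0 + math.sin(2 * math.pi * 7 * x) for x in t]

print(round(envelope_correlation(coherent_a, coherent_b), 2))  # → 1.0
print(envelope_correlation(coherent_a, independent))           # near zero
```

A full model would extract envelopes from a cochlear filterbank (e.g., via rectification and low-pass filtering) before correlating; this sketch only shows the coherence measure itself.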


2005 ◽  
Vol 93 (1) ◽  
pp. 79-90 ◽  
Author(s):  
Wolfgang Einhäuser ◽  
Jörg Hipp ◽  
Julian Eggert ◽  
Edgar Körner ◽  
Peter König

Author(s):  
Max T. Otten ◽  
Wim M.J. Coene

High-resolution imaging with a LaB6 instrument is limited by the spatial and temporal coherence, with little contrast remaining beyond the point resolution. A Field Emission Gun (FEG) reduces the incidence angle by a factor of 5 to 10 and the energy spread by a factor of 2 to 3. Since the incidence angle is the dominant limitation for LaB6, the FEG provides a major improvement in contrast transfer, reducing the information limit to roughly one half of the point resolution. The strong improvement, predicted from high-resolution theory, can be seen readily in diffractograms (Fig. 1) and high-resolution images (Fig. 2). Even if the information in the image is deliberately limited to the point resolution by using an objective aperture, the improved contrast transfer close to the point resolution (Fig. 1) is already worthwhile.
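The link between source coherence and the information limit can be sketched numerically. Assuming the temporal-coherence damping envelope of standard HRTEM theory, the information limit is d_info = sqrt(π·λ·Δ/2), with focus spread Δ ≈ Cc·(ΔE/E). The voltage, Cc, and energy spreads below are illustrative assumptions, not values from the text (and for LaB6 the text notes spatial coherence, not shown here, is the dominant limit).

```python
import math

def info_limit(wavelength_m, focus_spread_m):
    """Information limit set by the temporal-coherence damping envelope,
    d_info = sqrt(pi * lambda * Delta / 2)."""
    return math.sqrt(math.pi * wavelength_m * focus_spread_m / 2)

wavelength = 2.51e-12   # ~2.51 pm at 200 kV (assumed accelerating voltage)
cc = 1.2e-3             # chromatic aberration coefficient, 1.2 mm (assumed)
voltage_ev = 200e3

for source, de_ev in [("LaB6", 1.5), ("FEG", 0.5)]:  # energy spreads (assumed)
    delta = cc * de_ev / voltage_ev                  # focus spread in metres
    print(source, round(info_limit(wavelength, delta) * 1e9, 3), "nm")
```

With these assumed numbers, reducing the energy spread by a factor of 3 tightens the temporal-coherence contribution to the information limit by a factor of sqrt(3), consistent in direction with the improvement the abstract describes.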

