Decoding of Envelope vs. Fundamental Frequency During Complex Auditory Stream Segregation

2020 · Vol 1 (3) · pp. 268-287
Author(s): Keelin M. Greenlaw, Sebastian Puschmann, Emily B. J. Coffey

Hearing-in-noise perception is a challenging task that is critical to human function, but how the brain accomplishes it is not well understood. A candidate mechanism proposes that the neural representation of an attended auditory stream is enhanced relative to background sound via a combination of bottom-up and top-down mechanisms. To date, few studies have compared neural representation and its task-related enhancement across frequency bands that carry different auditory information, such as a sound’s amplitude envelope (i.e., syllabic rate or rhythm; 1–9 Hz) and the fundamental frequency of periodic stimuli (i.e., pitch; >40 Hz). Furthermore, hearing-in-noise in the real world is frequently both messier and richer than the majority of tasks used in its study. In the present study, we use continuous sound excerpts that simultaneously offer predictive, visual, and spatial cues to help listeners separate the target from four acoustically similar, simultaneously presented sound streams. We show that while both lower and higher frequency information about the entire sound stream is represented in the brain’s response, the to-be-attended sound stream is strongly enhanced only in the slower, lower frequency sound representations. These results are consistent with the hypothesis that attended sound representations are strengthened progressively at higher-level, later processing stages, and that the interaction of multiple brain systems can aid in this process. Our findings contribute to our understanding of auditory stream separation in difficult, naturalistic listening conditions and demonstrate that pitch and envelope information can be decoded from single-channel EEG data.
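The final claim, that envelope information can be decoded from single-channel EEG, is typically tested with a linear "backward" (stimulus-reconstruction) model. The sketch below is a minimal illustration on synthetic data, not the authors' pipeline: it assumes a single EEG channel that tracks a slow envelope at an unknown delay, and fits a ridge-regression decoder over time lags; all parameter values (sampling rate, delay, lag range) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data (illustrative only): a slow "envelope" and one EEG
# channel that follows it at a small neural delay, plus noise.
fs = 64                        # assumed sampling rate, Hz
n = fs * 60                    # 60 s of samples
envelope = np.abs(np.convolve(rng.standard_normal(n),
                              np.ones(16) / 16, mode="same"))
lag_true = 5                   # hypothetical neural delay, samples
eeg = np.roll(envelope, lag_true) + 0.1 * rng.standard_normal(n)

# Backward model: reconstruct the envelope from lagged copies of the
# EEG channel using ridge regression over a small window of lags.
max_lag = 16
X = np.column_stack([np.roll(eeg, -k) for k in range(max_lag)])
lam = 1e-2
w = np.linalg.solve(X.T @ X + lam * np.eye(max_lag), X.T @ envelope)
recon = X @ w

# Decoding accuracy: Pearson correlation between the reconstructed
# and actual envelope.
r = np.corrcoef(recon, envelope)[0, 1]
print(round(r, 2))
```

The correlation between the reconstructed and true envelope is the usual summary measure of neural tracking in such studies.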

1976 · Vol 42 (3_suppl) · pp. 1071-1074
Author(s): Betty Tuller, James R. Lackner

Primary auditory stream segregation, the perceptual segregation of acoustically related elements within a continuous auditory sequence into distinct spatial streams, prevents subjects from resolving the relative constituent order of repeated sequences of tones (Bregman & Campbell, 1971) or repeated sequences of consonant and vowel sounds (Lackner & Goldstein, 1974). To determine why primary auditory stream segregation does not interfere with the resolution of natural speech, eight subjects indicated the degree of stream segregation undergone by 24 repeated sequences of English monosyllables that varied in the degree of syntactic and intonational structure present. All sequences underwent primary auditory stream segregation to some extent, but the amount of apparent spatial separation was smaller when syntactic and intonational structure was present.


Author(s): Neha Banerjee, Prashanth Prabhu

Background and Aim: The central auditory nervous system has the ability to perceptually group similar sounds and segregate different sounds, a process called auditory stream segregation, auditory streaming, or auditory scene analysis. Identification of a change in spectral profile when the amplitude of one component of a complex tone is changed is referred to as spectral profile analysis. It serves as an important cue in auditory stream segregation because the spectra of sound sources vary. The aim of the study was to assess auditory stream segregation in individuals with cochlear pathology (CP) and auditory neuropathy spectrum disorder (ANSD).
Methods: The present study included three groups of participants: two experimental groups of 21 ears each, with cochlear hearing loss or ANSD, and a control group of 21 ears with normal hearing. Profile analysis was assessed using the "mlp" toolbox, which implements a maximum likelihood procedure in MATLAB, at four frequencies (250 Hz, 500 Hz, 750 Hz, and 1000 Hz) for all three groups.
Results: The profile analysis threshold (at all four frequencies) was significantly poorer for individuals with CP or ANSD than for the control group, although the cochlear pathology group performed better than the ANSD group.
Conclusion: This could be because of poor spectral and temporal processing due to loss of outer hair cells at the level of the basilar membrane in cochlear pathology patients, and due to demyelination of auditory neurons in individuals with ANSD.
Keywords: Auditory stream segregation; auditory scene analysis; spectral profiling; spectral profile analysis; cochlear pathology; auditory neuropathy spectrum disorders
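The maximum likelihood procedure the abstract mentions adaptively places each trial's stimulus according to the most likely member of a set of candidate psychometric functions. The Python sketch below is a simplified stand-in for the MATLAB "mlp" toolbox, not its actual code: the logistic function, candidate grid, trial count, and listener parameters are all hypothetical, and the target probability is fixed at 0.75 rather than a formally derived sweet point.

```python
import numpy as np

rng = np.random.default_rng(1)

def psychometric(level, midpoint, slope=1.0, guess=0.5):
    # Logistic psychometric function with a guess-rate floor (yes/no task).
    return guess + (1 - guess) / (1 + np.exp(-slope * (level - midpoint)))

# Candidate threshold hypotheses and a flat initial log-likelihood.
hypotheses = np.linspace(-10, 10, 201)
loglik = np.zeros_like(hypotheses)

true_threshold = 2.5           # hypothetical simulated listener
level = 5.0                    # arbitrary starting stimulus level

for trial in range(60):
    # Simulate the listener's yes/no response at the current level.
    response = rng.random() < psychometric(level, true_threshold)
    # Update the likelihood of every candidate psychometric function.
    p_hyp = psychometric(level, hypotheses)
    loglik += np.log(p_hyp if response else 1 - p_hyp)
    # Place the next trial where the most likely function predicts the
    # target probability (p = 0.75 simplifies to the function midpoint).
    best = hypotheses[np.argmax(loglik)]
    level = best

print(round(best, 1))
```

After the trial loop, the midpoint of the most likely candidate function serves as the threshold estimate, which is how such adaptive procedures report a profile-analysis threshold.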

