Evaluation of auditory stream segregation in individuals with cochlear pathology and auditory neuropathy spectrum disorder

Author(s):  
Neha Banerjee ◽  
Prashanth Prabhu

Background and Aim: The central auditory nervous system can perceptually group similar sounds and segregate different sounds, a process called auditory stream segregation, auditory streaming, or auditory scene analysis. Identification of a change in spectral profile when the amplitude of one component of a complex tone is changed is referred to as spectral profile analysis. It serves as an important cue in auditory stream segregation because the spectra of sound sources vary. The aim of the study was to assess auditory stream segregation in individuals with cochlear pathology (CP) and auditory neuropathy spectrum disorder (ANSD). Methods: The present study included three groups of participants. The two experimental groups comprised 21 ears each, with cochlear hearing loss or ANSD, and the control group comprised 21 ears with normal hearing. Profile analysis was assessed using the "mlp" toolbox, which implements a maximum-likelihood procedure in MATLAB, at four frequencies (250 Hz, 500 Hz, 750 Hz, and 1000 Hz) for all three groups. Results: The profile analysis thresholds (at all four frequencies) were significantly poorer for individuals with CP or ANSD than for the control group, although the cochlear pathology group performed better than the ANSD group. Conclusion: This could reflect poor spectral and temporal processing due to the loss of outer hair cells at the level of the basilar membrane in cochlear pathology patients and the demyelination of auditory neurons in individuals with ANSD. Keywords: Auditory stream segregation; auditory scene analysis; spectral profiling; spectral profile analysis; cochlear pathology; auditory neuropathy spectrum disorders

Author(s):  
Naina Johnson ◽  
Annika Mariam Shiju ◽  
Adya Parmar ◽  
Prashanth Prabhu

Abstract Introduction One of the major cues that help in auditory stream segregation is spectral profiling. Musicians are trained to perceive fine structural variations in acoustic stimuli and have enhanced temporal perception and speech perception in noise. Objective To analyze the differences in spectral profile thresholds between musicians and nonmusicians. Methods The spectral profile analysis threshold was compared between 2 groups (musicians and nonmusicians) aged between 15 and 30 years. The standard stimuli had 5 harmonics, all at the same amplitude (f0 = 330 Hz, mi4). The variable tone had the same harmonic structure; however, the amplitude of its third harmonic component was higher, producing a timbre different from that of the standards. The subject had to identify the tone with the odd timbre. The testing was performed at 60 dB HL in a sound-treated room. Results The profile analysis thresholds were significantly better in musicians than in nonmusicians and improved with an increase in the duration of music training. Thus, improved auditory processing in musicians could have resulted in better profile analysis thresholds. Conclusions Auditory stream segregation was found to be better in musicians than in nonmusicians, and performance improved with the number of years of training. However, further studies on a larger group with more variables are essential to validate the results.
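The profile-analysis stimulus described above (a 5-harmonic complex on f0 = 330 Hz, with the third harmonic boosted in the variable tone) can be sketched in a few lines of numpy. This is a minimal illustration, not the authors' actual stimulus code; the sampling rate, duration, and the 6 dB increment are assumed values chosen for the example (in the actual adaptive procedure the increment would be varied to find threshold).

```python
import numpy as np

def complex_tone(f0, n_harmonics, inc_harmonic, increment_db,
                 fs=44100, dur=0.5):
    """Harmonic complex with equal-amplitude components; one component
    (inc_harmonic) may be raised by increment_db relative to the others."""
    t = np.arange(int(fs * dur)) / fs
    tone = np.zeros_like(t)
    for k in range(1, n_harmonics + 1):
        amp = 10 ** (increment_db / 20) if k == inc_harmonic else 1.0
        tone += amp * np.sin(2 * np.pi * k * f0 * t)
    return tone / np.max(np.abs(tone))  # normalize to avoid clipping

# Standard: 5 equal-amplitude harmonics of f0 = 330 Hz (mi4)
standard = complex_tone(330, 5, inc_harmonic=3, increment_db=0.0)
# Variable: same harmonic structure, 3rd harmonic raised (6 dB assumed here),
# which changes the spectral profile and hence the timbre
variable = complex_tone(330, 5, inc_harmonic=3, increment_db=6.0)
```

The listener's task is then a forced choice: pick the interval whose timbre differs, with the increment adapted toward threshold.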


2006 ◽  
Vol 18 (1) ◽  
pp. 1-13 ◽  
Author(s):  
Joel S. Snyder ◽  
Claude Alain ◽  
Terence W. Picton

A general assumption underlying auditory scene analysis is that the initial grouping of acoustic elements is independent of attention. The effects of attention on auditory stream segregation were investigated by recording event-related potentials (ERPs) while participants either attended to sound stimuli and indicated whether they heard one or two streams or watched a muted movie. The stimuli were pure-tone ABA− patterns that repeated for 10.8 sec with a stimulus onset asynchrony between A and B tones of 100 msec, in which the A tone was fixed at 500 Hz, the B tone could be 500, 625, 750, or 1000 Hz, and "−" was a silence. In both listening conditions, an enhancement of the auditory-evoked response (P1-N1-P2 and N1c) to the B tone varied with Δf and correlated with the perception of streaming. The ERP from 150 to 250 msec after the beginning of the repeating ABA− patterns became more positive during the course of the trial and was diminished when participants ignored the tones, consistent with behavioral studies indicating that streaming takes several seconds to build up. The N1c enhancement and the buildup over time were larger at right than at left temporal electrodes, suggesting a right-hemisphere dominance for stream segregation. Sources in Heschl's gyrus accounted for the ERP modulations related to Δf-based segregation and buildup. These findings provide evidence for two cortical mechanisms of streaming: automatic segregation of sounds and an attention-dependent buildup process that integrates successive tones within streams over several seconds.
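The ABA− paradigm above is easy to reconstruct from the stated parameters: each 400-ms cycle holds an A tone, a B tone, an A tone, and a silent slot, at a 100-ms onset asynchrony, repeated for 10.8 s (27 cycles). The sketch below assumes a 50-ms tone duration and a 16 kHz sampling rate, which the abstract does not specify.

```python
import numpy as np

def aba_sequence(f_a=500.0, f_b=750.0, soa=0.100, tone_dur=0.050,
                 n_triplets=27, fs=16000):
    """Repeating ABA- sequence: pure tones A, B, A at a fixed stimulus
    onset asynchrony, with the 4th slot of each cycle left silent."""
    def tone(f):
        t = np.arange(int(fs * tone_dur)) / fs
        return np.sin(2 * np.pi * f * t)
    slot = int(fs * soa)                      # samples per 100-ms slot
    seq = np.zeros(slot * 4 * n_triplets)     # 27 x 0.4 s = 10.8 s
    for i in range(n_triplets):
        base = i * 4 * slot
        for j, f in enumerate((f_a, f_b, f_a)):  # slot 3 ('-') stays silent
            seq[base + j * slot : base + j * slot + int(fs * tone_dur)] = tone(f)
    return seq

seq = aba_sequence()
```

Small Δf (e.g. B = 500 Hz) tends to be heard as one galloping stream; large Δf (B = 1000 Hz) splits into two isochronous streams after a few seconds of buildup.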


2021 ◽  
Vol 15 ◽  
Author(s):  
Lars Hausfeld ◽  
Niels R. Disbergen ◽  
Giancarlo Valente ◽  
Robert J. Zatorre ◽  
Elia Formisano

Numerous neuroimaging studies have demonstrated that the auditory cortex tracks ongoing speech and that, in multi-speaker environments, tracking of the attended speaker is enhanced compared to the other, irrelevant speakers. In contrast to speech, multi-instrument music can be appreciated by attending not only to its individual entities (i.e., segregation) but also to multiple instruments simultaneously (i.e., integration). We investigated the neural correlates of these two modes of music listening using electroencephalography (EEG) and sound envelope tracking. To this end, we presented uniquely composed music pieces played by two instruments, a bassoon and a cello, in combination with a previously validated music auditory scene analysis behavioral paradigm (Disbergen et al., 2018). Similar to results obtained through selective listening tasks for speech, relevant instruments could be reconstructed better than irrelevant ones during the segregation task. A delay-specific analysis showed higher reconstruction for the relevant instrument during a middle-latency window for both the bassoon and cello and during a late window for the bassoon. During the integration task, we did not observe significant attentional modulation when reconstructing the overall music envelope. Subsequent analyses indicated that this null result might be due to the heterogeneous strategies listeners employ during the integration task. Overall, our results suggest that subsequent to a common processing stage, top-down modulations consistently enhance the relevant instrument's representation during an instrument segregation task, whereas such an enhancement is not observed during an instrument integration task. These findings extend previous results from speech tracking to the tracking of multi-instrument music and, furthermore, inform current theories on polyphonic music perception.
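Envelope "reconstruction" in studies like this is typically a backward (decoding) model: the stimulus envelope is ridge-regressed onto time-lagged EEG channels, and the correlation between reconstructed and actual envelopes quantifies tracking. The sketch below is a generic illustration of that technique on simulated data, not the authors' pipeline; the lag range, regularization, and simulated delay are all assumed.

```python
import numpy as np

def reconstruct_envelope(eeg, envelope, lags, alpha=1.0):
    """Backward model: ridge-regress the stimulus envelope onto
    time-lagged EEG; returns the reconstructed envelope.
    eeg: (n_times, n_channels); lags are in samples, >= 0, meaning the
    EEG response follows the stimulus by that many samples."""
    n_t, n_ch = eeg.shape
    X = np.zeros((n_t, n_ch * len(lags)))
    for i, lag in enumerate(lags):
        shifted = np.roll(eeg, -lag, axis=0)  # align EEG at t+lag with env at t
        if lag > 0:
            shifted[-lag:] = 0                # zero-pad instead of wrapping
        X[:, i * n_ch:(i + 1) * n_ch] = shifted
    # Ridge solution: w = (X'X + alpha*I)^-1 X'y
    w = np.linalg.solve(X.T @ X + alpha * np.eye(X.shape[1]), X.T @ envelope)
    return X @ w

# Simulated data: 8 "channels" that each carry the envelope delayed by
# 5 samples plus noise, so the decoder should recover it well.
rng = np.random.default_rng(0)
env = rng.standard_normal(2000)
eeg = np.stack([np.roll(env, 5) + 0.1 * rng.standard_normal(2000)
                for _ in range(8)], axis=1)
recon = reconstruct_envelope(eeg, env, lags=range(16))
r = np.corrcoef(recon, env)[0, 1]  # reconstruction accuracy
```

Attentional modulation is then assessed by comparing r for the attended versus unattended instrument's envelope under the same decoder setup.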


1976 ◽  
Vol 42 (3_suppl) ◽  
pp. 1071-1074 ◽  
Author(s):  
Betty Tuller ◽  
James R. Lackner

Primary auditory stream segregation, the perceptual segregation of acoustically related elements within a continuous auditory sequence into distinct spatial streams, prevents subjects from resolving the relative order of the constituents of repeated sequences of tones (Bregman & Campbell, 1971) or repeated sequences of consonant and vowel sounds (Lackner & Goldstein, 1974). To determine why primary auditory stream segregation does not interfere with the resolution of natural speech, 8 subjects were required to indicate the degree of stream segregation undergone by 24 repeated sequences of English monosyllables which varied in the degree of syntactic and intonational structure present. All sequences underwent primary auditory stream segregation to some extent, but the amount of apparent spatial separation was smaller when syntactic and intonational structure was present.

