Auditory stream segregation
Recently Published Documents

Total documents: 170 (last five years: 20)
H-index: 34 (last five years: 2)

2022 ◽  
Author(s):  
Sarah Anne Sauvé ◽  
Jeremy Marozeau ◽  
Benjamin Zendel

Auditory stream segregation, the ability to separate sounds into their respective sources and track them over time, is a fundamental auditory skill. Previous research has separately explored the impacts of aging and musicianship on the ability to separate and follow auditory streams. The current study evaluated the simultaneous effects of age and musicianship on auditory streaming induced by three physical features: intensity, spectral envelope, and temporal envelope. In the first study, older and younger musicians and non-musicians with normal hearing identified deviants in a four-note melody interleaved with distractors that were more or less similar to the melody in intensity, spectral envelope, and temporal envelope. In the second study, older and younger musicians and non-musicians completed a dissimilarity rating paradigm with pairs of melodies that differed along the same three features. Results suggested that auditory streaming skills are maintained in older adults, but that older adults rely on intensity more than younger adults, while musicianship is associated with increased sensitivity to spectral and temporal envelope, acoustic features that are typically less effective for stream segregation, particularly in older adults.
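A minimal sketch of the kind of interleaved melody-distractor stimulus this paradigm uses, assuming illustrative tone frequencies, durations, and an intensity difference between the two streams (none of these values come from the study itself):

```python
# Illustrative sketch: interleave a 4-note target melody with distractor tones
# whose intensity is attenuated relative to the melody. All parameter values
# (frequencies, durations, attenuation) are assumptions for demonstration only.
import numpy as np

FS = 44100           # sampling rate (Hz)
TONE_DUR = 0.1       # duration of each tone (s)

def tone(freq, dur=TONE_DUR, level_db=0.0, fs=FS):
    """Pure tone with raised-cosine on/off ramps, scaled by level_db re: full scale."""
    t = np.arange(int(dur * fs)) / fs
    y = np.sin(2 * np.pi * freq * t)
    ramp = int(0.01 * fs)
    env = np.ones_like(y)
    env[:ramp] = 0.5 * (1 - np.cos(np.pi * np.arange(ramp) / ramp))
    env[-ramp:] = env[:ramp][::-1]
    return 10 ** (level_db / 20) * y * env

melody = [440.0, 494.0, 523.0, 587.0]        # hypothetical 4-note melody (Hz)
distractors = [466.0, 392.0, 554.0, 349.0]   # hypothetical distractor notes (Hz)

# Interleave melody and distractor notes; the distractors are 10 dB softer
# (an assumed value), which should make the melody easier to segregate.
sequence = []
for m, d in zip(melody, distractors):
    sequence.append(tone(m, level_db=0.0))
    sequence.append(tone(d, level_db=-10.0))
stimulus = np.concatenate(sequence)
```

Varying the distractor attenuation (or, analogously, its spectral or temporal envelope) changes how similar the two streams are and hence how easily the melody can be followed.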


F1000Research ◽  
2021 ◽  
Vol 9 ◽  
pp. 1271
Author(s):  
Arivudainambi Pitchaimuthu ◽  
Eshwari Ananth ◽  
Jayashree S Bhat ◽  
Somashekara Haralakatta Shivananjappa

Background: Children with reading disabilities (RD) exhibit difficulty perceiving speech in background noise due to poor auditory stream segregation. There is a dearth of literature on measures of temporal fine structure (TFS) sensitivity and concurrent vowel perception as indices of auditory stream segregation in children with reading disabilities. Hence, the present study compared TFS sensitivity and concurrent vowel perception abilities between children with and without reading deficits. Method: The study included 30 participants: 15 children with reading disabilities (Group 1) and 15 typically developing (TD) children (Group 2), aged 7-14 years. The groups were matched for age, grade, and classroom curricular instruction. Both groups were evaluated for TFS sensitivity and concurrent vowel perception, and performance was compared using an independent t test and a repeated-measures ANOVA, respectively. Results: Children with RD performed significantly (p < 0.001) more poorly than TD children on both the TFS and the concurrent vowel identification tasks. On the concurrent vowel identification task, there was no significant interaction between reading ability and F0 difference, suggesting that the trend was similar in both groups. Conclusion: Children with RD show poorer temporal fine structure sensitivity and lower concurrent vowel identification scores than age- and grade-matched TD children, consistent with poor auditory stream segregation in children with RD.
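As an illustration of the concurrent vowel task described above, the sketch below mixes two crude synthetic vowels whose fundamental frequencies differ by roughly two semitones; the formant values and the harmonic synthesis method are assumptions made only for demonstration, not the stimuli used in the study:

```python
# Illustrative sketch of a concurrent-vowel stimulus: two crude synthetic vowels
# with different fundamental frequencies (F0) are mixed, as in concurrent vowel
# identification tasks. Formant values and synthesis details are assumptions.
import numpy as np

FS = 16000
DUR = 0.4

def crude_vowel(f0, formants, fs=FS, dur=DUR):
    """Harmonic complex whose harmonic amplitudes are shaped by Gaussian formant peaks."""
    t = np.arange(int(dur * fs)) / fs
    y = np.zeros_like(t)
    for h in range(1, int(0.45 * fs / f0)):       # harmonics kept well below Nyquist
        freq = h * f0
        amp = sum(np.exp(-0.5 * ((freq - f) / 80.0) ** 2) for f in formants)
        y += amp * np.sin(2 * np.pi * freq * t)
    return y / np.max(np.abs(y))

# Hypothetical formant frequencies (Hz) for /a/ and /i/.
vowel_a = crude_vowel(f0=100.0, formants=[730.0, 1090.0])
vowel_i = crude_vowel(f0=112.0, formants=[270.0, 2290.0])   # F0 about two semitones higher

mixture = vowel_a + vowel_i   # concurrent-vowel stimulus with an F0 difference
```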


2021 ◽  
Author(s):  
Christian Brodbeck ◽  
Jonathan Z. Simon

Voice pitch carries linguistic as well as non-linguistic information. Previous studies have described cortical tracking of voice pitch in clean speech, with responses reflecting both pitch strength and pitch value. However, pitch is also a powerful cue for auditory stream segregation, especially when competing streams differ in fundamental frequency, as is the case when multiple speakers talk simultaneously. We therefore investigated how cortical speech pitch tracking is affected by the presence of a second, task-irrelevant speaker. We analyzed human magnetoencephalography (MEG) responses to continuous narrative speech, presented either as a single talker in a quiet background or as a two-talker mixture of a male and a female speaker. In clean speech, voice pitch was associated with a right-dominant response, peaking at a latency of around 100 ms, consistent with previous EEG and ECoG results. The response tracked both the presence of pitch and the relative value of the speaker’s fundamental frequency. In the two-talker mixture, the pitch of the attended speaker was tracked bilaterally, regardless of whether pitch was simultaneously present in the irrelevant speaker’s speech. Pitch tracking for the irrelevant speaker was reduced: only the right hemisphere still significantly tracked the pitch of the unattended speaker, and only during intervals in which no pitch was present in the attended talker’s speech. Taken together, these results suggest that pitch-based segregation of multiple speakers, at least as measured by macroscopic cortical tracking, is not entirely automatic but strongly dependent on selective attention.
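A toy sketch of pitch tracking in the spirit of this analysis, assuming a ridge-regression temporal response function (TRF) rather than the authors' actual estimation pipeline; the regressor, lag range, and regularization value below are placeholders:

```python
# Illustrative sketch (not the authors' pipeline): estimate a temporal response
# function (TRF) mapping a pitch regressor (e.g., frame-by-frame pitch strength
# or relative F0) to one neural response channel via lagged ridge regression.
import numpy as np

def lagged_design(x, n_lags):
    """Design matrix containing x delayed by 0..n_lags-1 samples."""
    n = len(x)
    X = np.zeros((n, n_lags))
    for k in range(n_lags):
        X[k:, k] = x[: n - k]
    return X

def fit_trf(stimulus, response, n_lags=40, alpha=1.0):
    """Ridge-regression TRF: response(t) ~ sum_k w[k] * stimulus(t - k)."""
    X = lagged_design(stimulus, n_lags)
    XtX = X.T @ X + alpha * np.eye(n_lags)
    return np.linalg.solve(XtX, X.T @ response)

# Toy example: a fake pitch-strength regressor and a fake response that lags it.
rng = np.random.default_rng(0)
pitch_strength = rng.random(5000)
response = np.roll(pitch_strength, 10) + 0.5 * rng.standard_normal(5000)
trf = fit_trf(pitch_strength, response)   # peaks near lag 10 (~100 ms at a 100 Hz rate)
```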


2021 ◽  
Vol 15 ◽  
Author(s):  
Lars Hausfeld ◽  
Niels R. Disbergen ◽  
Giancarlo Valente ◽  
Robert J. Zatorre ◽  
Elia Formisano

Numerous neuroimaging studies have demonstrated that the auditory cortex tracks ongoing speech and that, in multi-speaker environments, tracking of the attended speaker is enhanced compared to the other, irrelevant speakers. In contrast to speech, multi-instrument music can be appreciated by attending not only to its individual entities (i.e., segregation) but also to multiple instruments simultaneously (i.e., integration). We investigated the neural correlates of these two modes of music listening using electroencephalography (EEG) and sound envelope tracking. To this end, we presented uniquely composed music pieces played by two instruments, a bassoon and a cello, in combination with a previously validated music auditory scene analysis behavioral paradigm (Disbergen et al., 2018). Similar to results obtained through selective listening tasks for speech, relevant instruments could be reconstructed better than irrelevant ones during the segregation task. A delay-specific analysis showed higher reconstruction accuracy for the relevant instrument during a middle-latency window for both the bassoon and the cello, and during a late window for the bassoon. During the integration task, we did not observe significant attentional modulation when reconstructing the overall music envelope. Subsequent analyses indicated that this null result might be due to the heterogeneous strategies listeners employ during the integration task. Overall, our results suggest that, subsequent to a common processing stage, top-down modulations consistently enhance the relevant instrument’s representation during an instrument segregation task, whereas such an enhancement is not observed during an instrument integration task. These findings extend previous results from speech tracking to the tracking of multi-instrument music and, furthermore, inform current theories on polyphonic music perception.
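A minimal sketch of envelope tracking via backward (stimulus reconstruction) modelling, one common way such attentional modulation is quantified; the array shapes, lag range, and ridge parameter below are assumptions, not the authors' settings:

```python
# Illustrative sketch (not the authors' analysis code): a backward "stimulus
# reconstruction" model that decodes an instrument's sound envelope from
# multichannel EEG using lagged ridge regression.
import numpy as np

def lagged_eeg(eeg, n_lags):
    """Stack every EEG channel at lags 0..n_lags-1 into one design matrix."""
    n_samples, n_channels = eeg.shape
    X = np.zeros((n_samples, n_channels * n_lags))
    for k in range(n_lags):
        X[k:, k * n_channels:(k + 1) * n_channels] = eeg[: n_samples - k]
    return X

def reconstruct_envelope(eeg, envelope, n_lags=25, alpha=10.0):
    """Fit decoder weights, then return the reconstructed envelope and its
    correlation with the true envelope (the usual 'tracking' measure)."""
    X = lagged_eeg(eeg, n_lags)
    w = np.linalg.solve(X.T @ X + alpha * np.eye(X.shape[1]), X.T @ envelope)
    recon = X @ w
    r = np.corrcoef(recon, envelope)[0, 1]
    return recon, r

# Toy data: 64-channel EEG and an instrument envelope at a common sampling rate.
rng = np.random.default_rng(1)
eeg = rng.standard_normal((4000, 64))
envelope = rng.random(4000)
_, r = reconstruct_envelope(eeg, envelope)
# In practice r is evaluated on held-out data (cross-validation), not on the
# training data as in this toy example, and compared between attended and
# unattended instruments.
```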


Author(s):  
Neha Banerjee ◽  
Prashanth Prabhu

Background and Aim: The central auditory nervous system has the ability to perceptually group similar sounds and segregate different sounds, a process called auditory stream segregation, auditory streaming, or auditory scene analysis. Identification of a change in spectral profile when the amplitude of one component of a complex tone is changed is referred to as spectral profile analysis. It serves as an important cue in auditory stream segregation because the spectra of sound sources vary. The aim of the study was to assess auditory stream segregation in individuals with cochlear pathology (CP) and auditory neuropathy spectrum disorder (ANSD). Methods: The present study included three groups of participants. The two experimental groups each comprised 21 ears, with cochlear hearing loss or ANSD respectively, and the control group comprised 21 ears with normal hearing. Profile analysis was assessed using the "mlp" toolbox, which implements a maximum likelihood procedure in MATLAB, at four frequencies (250 Hz, 500 Hz, 750 Hz, and 1000 Hz) for all three groups. Results: The results indicate that profile analysis thresholds (at all four frequencies) were significantly poorer for individuals with CP or ANSD than for the control group. However, the cochlear pathology group performed better than the ANSD group. Conclusion: This could be because of poor spectral and temporal processing due to loss of outer hair cells at the level of the basilar membrane in cochlear pathology patients, and due to demyelination of auditory neurons in individuals with ANSD. Keywords: Auditory stream segregation; auditory scene analysis; spectral profiling; spectral profile analysis; cochlear pathology; auditory neuropathy spectrum disorders
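A sketch of what a single profile-analysis trial can look like, assuming illustrative component frequencies, increment size, and level rove; this is not the "mlp" toolbox procedure itself, only the stimulus idea for which it estimates thresholds:

```python
# Illustrative sketch of a profile-analysis trial stimulus: a multi-component
# complex tone in which one component's amplitude is incremented, which the
# listener must detect. All values below are assumptions for demonstration.
import numpy as np

FS = 44100
DUR = 0.5

def complex_tone(freqs, amps, fs=FS, dur=DUR):
    t = np.arange(int(dur * fs)) / fs
    return sum(a * np.sin(2 * np.pi * f * t) for f, a in zip(freqs, amps))

components = [250.0, 500.0, 750.0, 1000.0]    # assumed component frequencies (Hz)
base_amps = np.ones(len(components))

# "Standard" interval: flat spectral profile.
standard = complex_tone(components, base_amps)

# "Signal" interval: the 500-Hz component is incremented by 4 dB (assumed),
# changing the spectral profile the listener has to detect.
signal_amps = base_amps.copy()
signal_amps[1] *= 10 ** (4.0 / 20)
signal = complex_tone(components, signal_amps)

# Rove the overall level of each interval independently so that only the
# relative (across-frequency) profile, not absolute level, is informative.
rng = np.random.default_rng(2)
standard *= 10 ** (rng.uniform(-10, 10) / 20)
signal *= 10 ** (rng.uniform(-10, 10) / 20)
```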


2021 ◽  
pp. 1-11
Author(s):  
Saransh Jain ◽  
Riya Cherian ◽  
Nuggehalli P. Nataraja ◽  
Vijay Kumar Narne

Purpose: Around 80%–93% of individuals with tinnitus have hearing loss. Researchers have found that tinnitus pitch is related to the frequencies of hearing loss, but the relationship between tinnitus pitch and the audiogram edge frequency remains unclear. The comorbidity of tinnitus and speech-perception-in-noise problems has also been reported, but the relationship between tinnitus pitch and speech perception in noise has seldom been investigated. This study was designed to estimate the relationship between tinnitus pitch, audiogram edge frequency, and speech perception in noise. Speech perception in noise was measured using an auditory stream segregation paradigm. Method: Thirteen individuals with bilateral mild-to-severe tonal tinnitus and minimal-to-mild cochlear hearing loss were selected, along with thirteen individuals with hearing loss but no tinnitus. The audiogram of each participant with tinnitus was matched with that of a participant without tinnitus. The tinnitus pitch of each participant with tinnitus was measured and compared with the audiogram edge frequency. Stream segregation thresholds were measured at the participant's admitted tinnitus pitch and at one octave below the tinnitus pitch, and were estimated at the fission and fusion boundaries using pure-tone stimuli in an ABA paradigm. Results: A high correlation between tinnitus pitch and audiogram edge frequency was noted. Overall, stream segregation thresholds were higher for individuals with tinnitus; higher thresholds indicate poorer stream segregation abilities. Within the tinnitus group, thresholds were significantly lower at the frequency corresponding to the admitted tinnitus pitch than at one octave below it. Conclusions: The information from this study may be helpful in educating patients about the relationship between hearing loss and tinnitus. The findings may also account for the speech-perception-in-noise difficulties often reported by individuals with tinnitus.
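A minimal sketch of the ABA pure-tone paradigm used here to estimate fission and fusion boundaries, with assumed tone frequencies, durations, and triplet count:

```python
# Illustrative sketch of the classic ABA_ tone-sequence paradigm used to measure
# fission/fusion boundaries: 'A' and 'B' pure tones alternate as repeating
# ABA_ triplets, and the A-B frequency separation is varied until the listener
# reports one stream splitting into two (fission) or merging back (fusion).
import numpy as np

FS = 44100
TONE_DUR = 0.08
GAP_DUR = 0.02

def tone(freq, dur=TONE_DUR, fs=FS):
    t = np.arange(int(dur * fs)) / fs
    return np.sin(2 * np.pi * freq * t)

def aba_sequence(f_a, semitone_sep, n_triplets=10, fs=FS):
    """Build an ABA_ sequence with B placed semitone_sep semitones above A."""
    f_b = f_a * 2 ** (semitone_sep / 12)
    gap = np.zeros(int(GAP_DUR * fs))
    silent = np.zeros(int(TONE_DUR * fs))          # the '_' slot of each triplet
    triplet = np.concatenate([tone(f_a), gap, tone(f_b), gap, tone(f_a), gap, silent, gap])
    return np.tile(triplet, n_triplets)

# Small separations tend to be heard as one stream, large separations as two;
# adaptively varying semitone_sep estimates the fission and fusion boundaries.
seq_small_sep = aba_sequence(f_a=1000.0, semitone_sep=2)
seq_large_sep = aba_sequence(f_a=1000.0, semitone_sep=10)
```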


Cortex ◽  
2020 ◽  
Vol 130 ◽  
pp. 387-400 ◽  
Author(s):  
Brigitta Tóth ◽  
Ferenc Honbolygó ◽  
Orsolya Szalárdy ◽  
Gábor Orosz ◽  
Dávid Farkas ◽  
...  
