auditory stream
Recently Published Documents


Total documents: 200 (five years: 20)
H-index: 39 (five years: 2)

F1000Research ◽  
2021 ◽  
Vol 9 ◽  
pp. 1271
Author(s):  
Arivudainambi Pitchaimuthu ◽  
Eshwari Ananth ◽  
Jayashree S Bhat ◽  
Somashekara Haralakatta Shivananjappa

Background: Children with reading disabilities (RD) exhibit difficulty perceiving speech in background noise due to poor auditory stream segregation. There is a dearth of literature on measures of temporal fine structure (TFS) sensitivity and concurrent vowel perception as indices of auditory stream segregation in children with reading disabilities. Hence, the present study compared TFS sensitivity and concurrent vowel perception abilities between children with and without reading disabilities. Method: The study included 30 participants aged 7-14 years: 15 children with reading disabilities (RD, Group 1) and 15 typically developing (TD) children (Group 2). The groups were matched for age, grade, and classroom curricular instruction. Both groups were evaluated for TFS sensitivity and concurrent vowel perception, and performance was compared using an independent t test and a repeated measures ANOVA, respectively. Results: Children with RD performed significantly (p < 0.001) poorer than TD children on both the TFS and concurrent vowel identification tasks. On the concurrent vowel identification task, there was no significant interaction between reading ability and F0 difference, suggesting a similar trend in both groups. Conclusion: Children with RD show poorer temporal fine structure sensitivity and lower concurrent vowel identification scores than age- and grade-matched TD children, reflecting poor auditory stream segregation in this population.
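The group comparisons described above (an independent t test on TFS measures and a repeated measures ANOVA on concurrent vowel scores across F0 differences) follow a standard analysis pattern. The sketch below is a minimal Python illustration of that pattern on hypothetical score arrays and group sizes, not the study's data; note that statsmodels' AnovaRM handles only within-subject factors, so the group x F0 interaction reported in the abstract would require a mixed-design ANOVA instead.

```python
# Minimal sketch of the reported analysis pattern on hypothetical data:
# an independent t test on TFS measures and a repeated measures ANOVA
# on concurrent vowel identification scores across F0 differences.
import numpy as np
import pandas as pd
from scipy import stats
from statsmodels.stats.anova import AnovaRM

rng = np.random.default_rng(0)

# Hypothetical TFS scores for 15 children per group (arbitrary units)
tfs_rd = rng.normal(60, 8, 15)   # children with reading disabilities
tfs_td = rng.normal(45, 8, 15)   # typically developing children
t_val, p_val = stats.ttest_ind(tfs_rd, tfs_td)
print(f"Independent t test: t = {t_val:.2f}, p = {p_val:.4f}")

# Hypothetical concurrent-vowel scores at four F0 differences (semitones)
rows = []
for group, base in (("RD", 40), ("TD", 55)):
    for subj in range(15):
        for f0 in ("0", "1", "2", "4"):
            rows.append({"subject": f"{group}{subj}", "group": group,
                         "f0_diff": f0, "score": rng.normal(base, 5)})
scores = pd.DataFrame(rows)

# Repeated measures ANOVA with F0 difference as the within-subject factor,
# run separately per group (AnovaRM has no between-subject factors)
for group, sub in scores.groupby("group"):
    res = AnovaRM(sub, depvar="score", subject="subject",
                  within=["f0_diff"]).fit()
    print(group, "\n", res.anova_table)
```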


Author(s):  
Neha Banerjee ◽  
Prashanth Prabhu

Background and Aim: The central auditory nervous system has the ability to perceptually group similar sounds and segregate different sounds, a process called auditory stream segregation, auditory streaming, or auditory scene analysis. Identification of a change in the spectral profile when the amplitude of one component of a complex tone is changed is referred to as spectral profile analysis. It serves as an important cue for auditory stream segregation, since the spectra of sound sources vary. The aim of the study was to assess auditory stream segregation in individuals with cochlear pathology (CP) and auditory neuropathy spectrum disorder (ANSD). Methods: Three groups of participants were included: two experimental groups of 21 ears each, with cochlear hearing loss or ANSD, and a control group of 21 ears with normal hearing. Profile analysis was assessed using the "mlp" toolbox, which implements a maximum likelihood procedure in MATLAB, at four frequencies (250 Hz, 500 Hz, 750 Hz, and 1000 Hz) for all three groups. Results: Profile analysis thresholds (at all four frequencies) were significantly poorer for individuals with CP or ANSD than for the control group, although the cochlear pathology group performed better than the ANSD group. Conclusion: This could be due to poor spectral and temporal processing, caused by loss of outer hair cells along the basilar membrane in patients with cochlear pathology and by demyelination of auditory neurons in individuals with ANSD. Keywords: Auditory stream segregation; auditory scene analysis; spectral profiling; spectral profile analysis; cochlear pathology; auditory neuropathy spectrum disorders
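The "mlp" toolbox mentioned above implements an adaptive maximum-likelihood procedure: after each trial the likelihood of a set of candidate psychometric functions is updated, and the next stimulus level is placed at a target point on the currently most likely function. The Python sketch below illustrates that general idea with a simulated 2AFC listener; the logistic psychometric function, candidate range, target proportion correct, and trial count are illustrative assumptions, not the toolbox's defaults or the study's settings.

```python
# Sketch of a maximum-likelihood adaptive tracking procedure (simulated
# 2AFC listener; the logistic psychometric function and all parameters
# are illustrative assumptions, not the "mlp" toolbox defaults).
import numpy as np

rng = np.random.default_rng(1)

def p_correct(level, midpoint, slope=1.0, guess=0.5):
    """Logistic psychometric function for a 2AFC task."""
    return guess + (1.0 - guess) / (1.0 + np.exp(-slope * (level - midpoint)))

true_midpoint = -14.0                      # simulated listener's midpoint (dB)
candidates = np.arange(-30.0, 0.5, 0.5)    # hypothesised midpoints (dB)
log_like = np.zeros_like(candidates)       # log-likelihood of each hypothesis
p_target = 0.80                            # tracked proportion correct
level = 0.0                                # first stimulus level (dB)

for trial in range(30):
    # Simulate the listener's response at the current stimulus level
    correct = rng.random() < p_correct(level, true_midpoint)

    # Update the likelihood of every candidate psychometric function
    p = p_correct(level, candidates)
    log_like += np.log(p) if correct else np.log(1.0 - p)

    # Next level: the point on the most likely function giving p_target correct
    best = candidates[np.argmax(log_like)]
    level = best - np.log((1.0 - 0.5) / (p_target - 0.5) - 1.0)

print(f"Estimated threshold (midpoint): {candidates[np.argmax(log_like)]:.1f} dB")
```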


2021 ◽  
pp. 1-11
Author(s):  
Saransh Jain ◽  
Riya Cherian ◽  
Nuggehalli P. Nataraja ◽  
Vijay Kumar Narne

Purpose: Around 80%–93% of individuals with tinnitus have hearing loss. Researchers have found that tinnitus pitch is related to the frequencies of hearing loss, but the relationship between tinnitus pitch and the audiogram edge frequency remains unclear. The comorbidity of tinnitus and speech-perception-in-noise problems has also been reported, but the relationship between tinnitus pitch and speech perception in noise has seldom been investigated. This study was designed to estimate the relationship between tinnitus pitch, audiogram edge frequency, and speech perception in noise, with speech perception in noise measured using an auditory stream segregation paradigm. Method: Thirteen individuals with bilateral mild-to-severe tonal tinnitus and minimal-to-mild cochlear hearing loss were selected, along with thirteen individuals with hearing loss but no tinnitus. The audiogram of each participant with tinnitus was matched with that of a participant without tinnitus. The tinnitus pitch of each participant with tinnitus was measured and compared with the audiogram edge frequency. Stream segregation thresholds were estimated at the fission and fusion boundaries using pure-tone stimuli in an ABA paradigm, at the frequency corresponding to each participant's admitted tinnitus pitch and at one octave below it. Results: A high correlation between tinnitus pitch and audiogram edge frequency was noted. Overall, stream segregation thresholds were higher, indicating poorer stream segregation, for individuals with tinnitus. Within the tinnitus group, thresholds were significantly lower at the frequency corresponding to the admitted tinnitus pitch than at one octave below it. Conclusions: The information from this study may help in educating patients about the relationship between hearing loss and tinnitus. The findings may also account for the speech-perception-in-noise difficulties often reported by individuals with tinnitus.
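The ABA paradigm referred to here presents repeating ABA- triplets, where A and B are pure tones separated in frequency and "-" is a silent gap; listeners report whether they hear one integrated stream or two segregated streams, and the fission and fusion boundaries are the frequency separations at which that percept changes. A minimal sketch of how such a stimulus could be generated is given below; the base frequency, frequency separation, tone duration, and triplet count are illustrative assumptions, not the study's parameters.

```python
# Illustrative generator for an ABA- triplet sequence of the kind used in
# pure-tone stream-segregation (fission/fusion boundary) experiments.
# All parameter values here are assumptions, not the study's settings.
import numpy as np

def tone(freq_hz, dur_s, fs=44100, ramp_s=0.01):
    """Pure tone with raised-cosine onset/offset ramps."""
    t = np.arange(int(dur_s * fs)) / fs
    y = np.sin(2 * np.pi * freq_hz * t)
    n_ramp = int(ramp_s * fs)
    ramp = 0.5 * (1 - np.cos(np.pi * np.arange(n_ramp) / n_ramp))
    y[:n_ramp] *= ramp
    y[-n_ramp:] *= ramp[::-1]
    return y

def aba_sequence(f_a=1000.0, delta_semitones=4.0, tone_dur=0.1,
                 n_triplets=10, fs=44100):
    """Concatenate ABA- triplets; B lies delta_semitones above A."""
    f_b = f_a * 2 ** (delta_semitones / 12.0)
    silence = np.zeros(int(tone_dur * fs))           # the '-' slot
    triplet = np.concatenate([tone(f_a, tone_dur, fs),
                              tone(f_b, tone_dur, fs),
                              tone(f_a, tone_dur, fs),
                              silence])
    return np.tile(triplet, n_triplets)

seq = aba_sequence(f_a=1000.0, delta_semitones=4.0)
print(f"Sequence duration: {seq.size / 44100:.2f} s")
```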



2020 ◽  
Vol 148 (4) ◽  
pp. 2507-2507
Author(s):  
Sung-Joo Lim ◽  
Barbara Shinn-Cunningham ◽  
Tyler K. Perrachione

Cortex ◽  
2020 ◽  
Vol 130 ◽  
pp. 387-400 ◽  
Author(s):  
Brigitta Tóth ◽  
Ferenc Honbolygó ◽  
Orsolya Szalárdy ◽  
Gábor Orosz ◽  
Dávid Farkas ◽  
...  

2020 ◽  
Vol 1 (3) ◽  
pp. 268-287
Author(s):  
Keelin M. Greenlaw ◽  
Sebastian Puschmann ◽  
Emily B. J. Coffey

Hearing-in-noise perception is a challenging task that is critical to human function, but how the brain accomplishes it is not well understood. A candidate mechanism proposes that the neural representation of an attended auditory stream is enhanced relative to background sound via a combination of bottom-up and top-down mechanisms. To date, few studies have compared neural representation and its task-related enhancement across frequency bands that carry different auditory information, such as a sound’s amplitude envelope (i.e., syllabic rate or rhythm; 1–9 Hz) and the fundamental frequency of periodic stimuli (i.e., pitch; >40 Hz). Furthermore, hearing-in-noise in the real world is frequently both messier and richer than the majority of tasks used in its study. In the present study, we use continuous sound excerpts that simultaneously offer predictive, visual, and spatial cues to help listeners separate the target from four acoustically similar, simultaneously presented sound streams. We show that while both lower- and higher-frequency information about the entire sound stream is represented in the brain’s response, the to-be-attended sound stream is strongly enhanced only in the slower, lower-frequency sound representations. These results are consistent with the hypothesis that attended sound representations are strengthened progressively at higher-level, later processing stages, and that the interaction of multiple brain systems can aid in this process. Our findings contribute to our understanding of auditory stream separation in difficult, naturalistic listening conditions and demonstrate that pitch and envelope information can be decoded from single-channel EEG data.
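As a rough illustration of the kind of single-channel analysis the last sentence alludes to, the sketch below band-passes a synthetic EEG trace into an envelope band (1–9 Hz) and an F0 band (>40 Hz) and quantifies tracking of each stimulus feature with a peak cross-correlation. The sampling rate, filter settings, signals, and correlation measure are placeholders; this is not the study's decoding pipeline.

```python
# Hedged sketch: quantify stimulus tracking in two frequency bands from a
# single synthetic EEG channel. All signals and parameters are placeholders.
import numpy as np
from scipy import signal

fs = 500                                   # assumed EEG sampling rate (Hz)
t = np.arange(0, 60, 1 / fs)               # 60 s of data

# Placeholder stimulus features (in practice: acoustic envelope and F0 waveform)
stim_envelope = np.abs(signal.hilbert(np.random.randn(t.size)))
stim_f0 = np.sin(2 * np.pi * 100 * t)      # e.g. a 100 Hz fundamental

# Placeholder single-channel EEG: weak, delayed copies of both features + noise
delay = int(0.1 * fs)
eeg = (0.1 * np.roll(stim_envelope, delay)
       + 0.05 * np.roll(stim_f0, delay)
       + np.random.randn(t.size))

def bandpass(x, lo, hi, fs, order=4):
    """Zero-phase Butterworth band-pass filter."""
    sos = signal.butter(order, [lo, hi], btype="bandpass", fs=fs, output="sos")
    return signal.sosfiltfilt(sos, x)

eeg_env_band = bandpass(eeg, 1, 9, fs)     # slow, envelope-following band
eeg_f0_band = bandpass(eeg, 40, 120, fs)   # fundamental-frequency band

def max_xcorr(a, b, max_lag):
    """Peak normalised (circular) cross-correlation within +/- max_lag samples."""
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    return max(np.mean(a * np.roll(b, lag)) for lag in range(-max_lag, max_lag + 1))

print("Envelope-band tracking:", max_xcorr(stim_envelope, eeg_env_band, int(0.3 * fs)))
print("F0-band tracking:      ", max_xcorr(stim_f0, eeg_f0_band, int(0.3 * fs)))
```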

