stream segregation
Recently Published Documents

TOTAL DOCUMENTS: 291 (five years: 41)
H-INDEX: 40 (five years: 3)

2022 ◽  
Vol 15 ◽  
Author(s):  
Yonghee Oh ◽  
Jillian C. Zuwala ◽  
Caitlin M. Salvagno ◽  
Grace A. Tilbrook

In multi-talker listening environments, the combination of different voice streams can distort each source's individual message, causing deficits in comprehension. Voice characteristics such as pitch and timbre are major dimensions of auditory perception and play a vital role in grouping and segregating incoming sounds based on their acoustic properties. The current study investigated how pitch and timbre cues (determined by fundamental frequency, notated as F0, and spectral slope, respectively) affect perceptual integration and segregation of complex-tone sequences within an auditory streaming paradigm. Twenty normal-hearing listeners participated in a traditional auditory streaming experiment using two alternating sequences of harmonic tone complexes, A and B, while F0 and spectral slope were manipulated. Grouping ranges, the F0/spectral slope ranges over which auditory grouping occurs, were measured for various F0/spectral slope differences between tones A and B. Results demonstrated that grouping ranges were maximal in the absence of F0/spectral slope differences between tones A and B and decreased by a factor of two as the differences increased to ±1 semitone in F0 and ±1 dB/octave in spectral slope. In other words, increased differences in either F0 or spectral slope allowed listeners to more easily distinguish between the harmonic stimuli and thus group them together less. These findings suggest that pitch/timbre difference cues play an important role in how we perceive harmonic sounds in an auditory stream, reflecting our ability to group or segregate human voices in a multi-talker listening environment.
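
As a concrete illustration of the stimulus manipulation described above, here is a minimal Python sketch that generates an alternating A-B sequence of harmonic tone complexes whose F0s differ by a given number of semitones and whose spectral slopes differ by a given number of dB/octave. The base F0, tone duration, number of harmonics, and -6 dB/octave reference slope are assumptions for illustration, not parameters taken from the study.

```python
import numpy as np

FS = 44100  # sample rate (Hz), an assumption

def harmonic_complex(f0, slope_db_oct, dur=0.1, n_harmonics=10):
    """Harmonic tone complex whose component amplitudes fall off by
    `slope_db_oct` dB per octave above the fundamental."""
    t = np.arange(int(dur * FS)) / FS
    tone = np.zeros_like(t)
    for k in range(1, n_harmonics + 1):
        octaves_above_f0 = np.log2(k)            # harmonic k sits log2(k) octaves up
        gain_db = slope_db_oct * octaves_above_f0
        tone += 10 ** (gain_db / 20) * np.sin(2 * np.pi * k * f0 * t)
    return tone / np.max(np.abs(tone))           # normalize peak amplitude

def streaming_sequence(f0_a=250.0, df_semitones=1.0, dslope_db_oct=1.0,
                       base_slope=-6.0, n_pairs=10, gap=0.02):
    """Alternating A-B-A-B sequence; B differs from A in F0 (semitones)
    and in spectral slope (dB/octave)."""
    f0_b = f0_a * 2 ** (df_semitones / 12)       # semitone shift of the F0
    a = harmonic_complex(f0_a, base_slope)
    b = harmonic_complex(f0_b, base_slope + dslope_db_oct)
    silence = np.zeros(int(gap * FS))
    return np.concatenate([np.concatenate([a, silence, b, silence])
                           for _ in range(n_pairs)])

# e.g., the largest differences tested: ±1 semitone, ±1 dB/octave
seq = streaming_sequence(df_semitones=1.0, dslope_db_oct=1.0)
```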


2022 ◽  
Author(s):  
Sarah Anne Sauvé ◽  
Jeremy Marozeau ◽  
Benjamin Zendel

Auditory stream segregation, separating sounds into their respective sources and tracking them over time, is a fundamental auditory ability. Previous research has separately explored the impacts of aging and musicianship on the ability to separate and follow auditory streams. The current study evaluated the simultaneous effects of age and musicianship on auditory streaming induced by three physical features: intensity, spectral envelope, and temporal envelope. In the first study, older and younger musicians and non-musicians with normal hearing identified deviants in a four-note melody interleaved with distractors that were more or less similar to the melody in intensity, spectral envelope, and temporal envelope. In the second study, older and younger musicians and non-musicians completed a dissimilarity-rating paradigm with pairs of melodies that differed along the same three features. Results suggested that auditory streaming skills are maintained in older adults, but that older adults rely on intensity more than younger adults do, while musicianship is associated with increased sensitivity to spectral and temporal envelope, acoustic features that are typically less effective for stream segregation, particularly in older adults.


2022 ◽  
Vol 26 ◽  
pp. 233121652110661
Author(s):  
Jennifer J. Lentz ◽  
Larry E. Humes ◽  
Gary R. Kidd

This study was designed to examine age effects on various auditory perceptual skills in a large group of listeners (155 adults: 121 aged 60-88 years and 34 aged 18-30 years), while controlling for the factors of hearing loss and working memory (WM). All subjects completed 3 measures of WM, 7 psychoacoustic tasks (24 conditions), and a hearing assessment. Psychophysical measures were selected to tap phenomena thought to be mediated by higher-level auditory function and included modulation detection, modulation detection interference (MDI), informational masking, masking level difference, anisochrony detection, harmonic mistuning, and stream segregation. Principal-components analysis (PCA) was applied to each psychoacoustic test. For 6 of the 7 tasks, a single component represented performance across the multiple stimulus conditions well, whereas the MDI task required two components to do so. The effect of age was analyzed using a general linear model applied to each psychoacoustic component. Once hearing loss and WM were accounted for as covariates in the analyses, estimated marginal mean thresholds were lower for older adults on tasks based on temporal processing. When evaluated separately, hearing loss led to poorer performance on roughly half of the tasks, and declines in WM accounted for poorer performance on 6 of the 8 psychoacoustic components. These results make clear the need to interpret age-group differences in performance on psychoacoustic tasks in light of the cognitive declines commonly associated with aging, and point to hearing loss and cognitive declines as negative influences on auditory perceptual skills.
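
The analysis pipeline described here (per-task PCA followed by a general linear model with hearing loss and WM as covariates) can be sketched in a few lines. The Python example below uses placeholder data; the variable names and data layout are assumptions, not the authors' code.

```python
import numpy as np
import pandas as pd
from sklearn.decomposition import PCA
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)

# thresholds: listeners x stimulus conditions for one psychoacoustic task
thresholds = rng.normal(size=(155, 4))          # placeholder data

pca = PCA(n_components=1)
component_scores = pca.fit_transform(thresholds)[:, 0]
print(pca.explained_variance_ratio_)            # how well one component suffices

df = pd.DataFrame({
    "score": component_scores,
    "age_group": rng.integers(0, 2, 155),       # 0 = younger, 1 = older
    "hearing_loss": rng.normal(size=155),       # e.g., a pure-tone average
    "wm": rng.normal(size=155),                 # working-memory composite
})

# GLM: age-group effect with hearing loss and WM entered as covariates
model = smf.ols("score ~ C(age_group) + hearing_loss + wm", data=df).fit()
print(model.summary())
```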


F1000Research ◽  
2021 ◽  
Vol 9 ◽  
pp. 1271
Author(s):  
Arivudainambi Pitchaimuthu ◽  
Eshwari Ananth ◽  
Jayashree S Bhat ◽  
Somashekara Haralakatta Shivananjappa

Background: Children with reading disabilities (RD) have difficulty perceiving speech in background noise due to poor auditory stream segregation. There is a dearth of literature on measures of temporal fine structure (TFS) sensitivity and concurrent vowel perception as means of assessing auditory stream segregation in children with reading disabilities. Hence, the present study compared TFS sensitivity and concurrent vowel perception abilities between children with and without reading deficits. Method: The study included a total of 30 participants: 15 children with RD and 15 typically developing (TD) children, aged 7-14 years, designated as Group 1 and Group 2, respectively. The groups were matched for age, grade, and classroom curricular instruction. Both groups were evaluated for TFS sensitivity and concurrent vowel perception, and performance was compared using an independent t test and a repeated-measures ANOVA, respectively. Results: Children with RD performed significantly (p < 0.001) more poorly than TD children on both the TFS and the concurrent vowel identification tasks. On the concurrent vowel identification task, there was no significant interaction between reading ability and F0 difference, suggesting that the trend was similar in both groups. Conclusion: Children with RD show poorer temporal fine structure sensitivity and lower concurrent vowel identification scores than age- and grade-matched TD children, owing to poor auditory stream segregation in children with RD.
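
Concurrent vowel tasks typically present two synthetic vowels simultaneously with differing F0s. Below is a rough source-filter sketch of such a stimulus, assuming textbook formant values and an arbitrary 4-semitone F0 separation; the study's actual stimulus parameters are not given in the abstract.

```python
import numpy as np
from scipy.signal import lfilter

FS = 16000  # sample rate (Hz), an assumption

def resonator_coeffs(freq, bw):
    """Second-order IIR resonator (digital formant filter)."""
    r = np.exp(-np.pi * bw / FS)
    theta = 2 * np.pi * freq / FS
    b = [1 - 2 * r * np.cos(theta) + r ** 2]     # gain normalization
    a = [1, -2 * r * np.cos(theta), r ** 2]
    return b, a

def vowel(f0, formants, dur=0.4):
    """Glottal pulse train at f0 passed through cascaded formant resonators."""
    n = int(dur * FS)
    src = np.zeros(n)
    src[::int(FS / f0)] = 1.0                    # impulse train source
    out = src
    for freq, bw in formants:
        b, a = resonator_coeffs(freq, bw)
        out = lfilter(b, a, out)
    return out / np.max(np.abs(out))

# textbook /a/ and /i/ formant frequencies and bandwidths (Hz), not the study's
A_FORMANTS = [(730, 90), (1090, 110), (2440, 170)]
I_FORMANTS = [(270, 60), (2290, 100), (3010, 180)]

# concurrent pair with an assumed F0 separation of 4 semitones
f0_low = 100.0
f0_high = f0_low * 2 ** (4 / 12)
mixture = vowel(f0_low, A_FORMANTS) + vowel(f0_high, I_FORMANTS)
```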


2021 ◽  
Author(s):  
Christian Brodbeck ◽  
Jonathan Z. Simon

Voice pitch carries linguistic as well as non-linguistic information. Previous studies have described cortical tracking of voice pitch in clean speech, with responses reflecting both pitch strength and pitch value. However, pitch is also a powerful cue for auditory stream segregation, especially when competing streams differ in fundamental frequency, as is the case when multiple speakers talk simultaneously. We therefore investigated how cortical pitch tracking of speech is affected by the presence of a second, task-irrelevant speaker. We analyzed human magnetoencephalography (MEG) responses to continuous narrative speech, presented either as a single talker in a quiet background or as a two-talker mixture of a male and a female speaker. In clean speech, voice pitch was associated with a right-dominant response peaking at a latency of around 100 ms, consistent with previous EEG and ECoG results. The response tracked both the presence of pitch and the relative value of the speaker's fundamental frequency. In the two-talker mixture, the pitch of the attended speaker was tracked bilaterally, regardless of whether pitch was simultaneously present in the irrelevant speaker's speech. Pitch tracking for the irrelevant speaker was reduced: only the right hemisphere still significantly tracked the unattended speaker's pitch, and only during intervals in which no pitch was present in the attended talker's speech. Taken together, these results suggest that pitch-based segregation of multiple speakers, at least as measured by macroscopic cortical tracking, is not entirely automatic but strongly dependent on selective attention.
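
Cortical tracking of a stimulus feature such as pitch is commonly quantified with a temporal response function (TRF) estimated by lagged ridge regression. The sketch below shows this generic approach with placeholder data; it is one standard method, not necessarily the authors' exact pipeline.

```python
import numpy as np

def lagged_design(x, max_lag):
    """Design matrix of time-lagged copies of predictor x (lags 0..max_lag-1)."""
    X = np.zeros((len(x), max_lag))
    for lag in range(max_lag):
        X[lag:, lag] = x[:len(x) - lag]
    return X

def ridge_trf(x, y, max_lag, lam=1.0):
    """TRF weights solving (X'X + lam*I) w = X'y."""
    X = lagged_design(x, max_lag)
    XtX = X.T @ X + lam * np.eye(max_lag)
    return np.linalg.solve(XtX, X.T @ y)

fs = 100                                      # Hz, assumed analysis rate
rng = np.random.default_rng(1)
pitch = rng.random(60 * fs)                   # placeholder pitch predictor
# placeholder "response": smoothed predictor plus noise
meg = np.convolve(pitch, np.hanning(10), mode="same") + rng.normal(size=60 * fs)

trf = ridge_trf(pitch, meg, max_lag=30)       # lags 0-290 ms at 100 Hz
peak_latency_ms = np.argmax(np.abs(trf)) * 1000 / fs
```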


2021 ◽  
Vol 15 ◽  
Author(s):  
Lars Hausfeld ◽  
Niels R. Disbergen ◽  
Giancarlo Valente ◽  
Robert J. Zatorre ◽  
Elia Formisano

Numerous neuroimaging studies have demonstrated that the auditory cortex tracks ongoing speech and that, in multi-speaker environments, tracking of the attended speaker is enhanced relative to irrelevant speakers. In contrast to speech, multi-instrument music can be appreciated by attending not only to its individual entities (i.e., segregation) but also to multiple instruments simultaneously (i.e., integration). We investigated the neural correlates of these two modes of music listening using electroencephalography (EEG) and sound-envelope tracking. To this end, we presented uniquely composed music pieces played by two instruments, a bassoon and a cello, in combination with a previously validated music auditory scene analysis behavioral paradigm (Disbergen et al., 2018). Similar to results obtained with selective listening tasks for speech, relevant instruments could be reconstructed better than irrelevant ones during the segregation task. A delay-specific analysis showed higher reconstruction accuracy for the relevant instrument during a middle-latency window for both the bassoon and the cello, and during a late window for the bassoon. During the integration task, we did not observe significant attentional modulation when reconstructing the overall music envelope. Subsequent analyses indicated that this null result might be due to the heterogeneous strategies listeners employ during the integration task. Overall, our results suggest that, subsequent to a common processing stage, top-down modulations consistently enhance the relevant instrument's representation during an instrument segregation task, whereas no such enhancement is observed during an instrument integration task. These findings extend previous results from speech tracking to the tracking of multi-instrument music and, furthermore, inform current theories of polyphonic music perception.
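
Envelope reconstruction of the kind reported here is usually implemented as a backward model: a linear decoder maps multichannel EEG back onto the stimulus envelope, and reconstruction accuracy is the correlation between the reconstructed and actual envelopes. A simplified sketch (time lags omitted for brevity, placeholder data; the authors' exact decoder is not specified in the abstract):

```python
import numpy as np

rng = np.random.default_rng(2)
n_samples, n_channels = 6000, 64

# placeholder data: EEG that linearly mixes the envelope, plus noise
envelope = rng.random(n_samples)                      # instrument envelope
mixing = rng.normal(size=n_channels)
eeg = np.outer(envelope, mixing) + rng.normal(size=(n_samples, n_channels))

# ridge-regularized least squares: decoder w minimizes ||eeg @ w - envelope||^2
lam = 1.0
w = np.linalg.solve(eeg.T @ eeg + lam * np.eye(n_channels), eeg.T @ envelope)
reconstructed = eeg @ w

accuracy = np.corrcoef(reconstructed, envelope)[0, 1]
# compare `accuracy` computed against the relevant vs. irrelevant
# instrument's envelope to quantify attentional modulation
```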


Author(s):  
Neha Banerjee ◽  
Prashanth Prabhu

Background and Aim: The central auditory nervous system can perceptually group similar sounds and segregate different sounds, an ability called auditory stream segregation, auditory streaming, or auditory scene analysis. Identification of a change in spectral profile when the amplitude of one component of a complex tone is changed is referred to as spectral profile analysis. It serves as an important cue in auditory stream segregation because the spectra of sound sources vary. The aim of the study was to assess auditory stream segregation in individuals with cochlear pathology (CP) and auditory neuropathy spectrum disorder (ANSD). Methods: Three groups of participants were included: two experimental groups of 21 ears each, with cochlear hearing loss or ANSD, and a control group of 21 ears with normal hearing. Profile analysis was assessed using the "mlp" toolbox, which implements a maximum-likelihood procedure in MATLAB. It was assessed at four frequencies (250 Hz, 500 Hz, 750 Hz, and 1000 Hz) for all three groups. Results: Profile-analysis thresholds (at all four frequencies) were significantly poorer for individuals with CP or ANSD than for the control group, although the cochlear pathology group performed better than the ANSD group. Conclusion: This could be because of poor spectral and temporal processing due to the loss of outer hair cells at the level of the basilar membrane in cochlear pathology patients, and due to the demyelination of auditory neurons in individuals with ANSD.
Keywords: Auditory stream segregation; auditory scene analysis; spectral profiling; spectral profile analysis; cochlear pathology; auditory neuropathy spectrum disorders
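
A profile-analysis trial contrasts a flat-spectrum complex tone with one in which a single component is incremented. The sketch below generates such a standard/signal pair, assuming an equal-amplitude five-component complex and an arbitrary 3-dB increment; the study's stimulus parameters beyond the four target frequencies are not given in the abstract.

```python
import numpy as np

FS = 44100  # sample rate (Hz), an assumption

def complex_tone(freqs, amps, dur=0.5):
    t = np.arange(int(dur * FS)) / FS
    # random starting phases, as is common in profile-analysis stimuli
    phases = np.random.uniform(0, 2 * np.pi, len(freqs))
    return sum(a * np.sin(2 * np.pi * f * t + p)
               for f, a, p in zip(freqs, amps, phases))

def profile_trial(target_freq=500.0, increment_db=3.0, n_comp=5):
    """Standard: flat spectral profile. Signal: target component boosted."""
    # components spaced logarithmically around the target frequency
    freqs = np.geomspace(target_freq / 2, target_freq * 2, n_comp)
    flat = np.ones(n_comp)
    boosted = flat.copy()
    boosted[n_comp // 2] *= 10 ** (increment_db / 20)   # raise the middle (target)
    return complex_tone(freqs, flat), complex_tone(freqs, boosted)

# one of the four target frequencies tested in the study
standard, signal = profile_trial(target_freq=500.0, increment_db=3.0)
```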


2021 ◽  
pp. 1-11
Author(s):  
Saransh Jain ◽  
Riya Cherian ◽  
Nuggehalli P. Nataraja ◽  
Vijay Kumar Narne

Purpose: Around 80%-93% of individuals with tinnitus have hearing loss. Researchers have found that tinnitus pitch is related to the frequencies of hearing loss, but the relationship between tinnitus pitch and audiogram edge frequency remains unclear. The comorbidity of tinnitus and speech-perception-in-noise problems has also been reported, but the relationship between tinnitus pitch and speech perception in noise has seldom been investigated. This study was designed to estimate the relationship between tinnitus pitch, audiogram edge frequency, and speech perception in noise, with speech perception in noise measured using an auditory stream segregation paradigm. Method: Thirteen individuals with bilateral mild-to-severe tonal tinnitus and minimal-to-mild cochlear hearing loss were selected, along with thirteen individuals with hearing loss but no tinnitus. The audiogram of each participant with tinnitus was matched to that of a participant without tinnitus. Tinnitus pitch was measured and compared with audiogram edge frequency. Stream segregation thresholds were estimated at the fission and fusion boundaries using pure-tone stimuli in an ABA paradigm, at each participant's admitted tinnitus pitch and at one octave below it. Results: A high correlation between tinnitus pitch and audiogram edge frequency was noted. Overall, stream segregation thresholds were higher for individuals with tinnitus, with higher thresholds indicating poorer stream segregation. Within the tinnitus group, thresholds were significantly lower at the frequency corresponding to the admitted tinnitus pitch than at one octave below it. Conclusions: The information from this study may be helpful in educating patients about the relationship between hearing loss and tinnitus. The findings may also account for the speech-perception-in-noise difficulties often reported by individuals with tinnitus.
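
The pure-tone ABA paradigm used here presents repeating ABA_ triplets, with the frequency separation between A and B varied to locate the fission and fusion boundaries. A brief sketch, assuming arbitrary tone durations and an example 4-kHz tinnitus pitch (the A frequency would in practice be set per listener):

```python
import numpy as np

FS = 44100  # sample rate (Hz), an assumption

def tone(freq, dur=0.1):
    t = np.arange(int(dur * FS)) / FS
    return np.sin(2 * np.pi * freq * t)

def aba_sequence(f_a, df_semitones, n_triplets=20):
    """Repeating ABA_ triplets; B sits df_semitones above A."""
    f_b = f_a * 2 ** (df_semitones / 12)
    a, b = tone(f_a), tone(f_b)
    gap = np.zeros(len(a))                      # silent "_" slot of the triplet
    return np.concatenate([np.concatenate([a, b, a, gap])
                           for _ in range(n_triplets)])

# e.g., A at the admitted tinnitus pitch vs. one octave below it
seq_at_pitch = aba_sequence(f_a=4000.0, df_semitones=4)
seq_octave_below = aba_sequence(f_a=2000.0, df_semitones=4)
```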


2021 ◽  
Vol 10 (10) ◽  
pp. 2093
Author(s):  
Agathe Pralus ◽  
Ruben Hermann ◽  
Fanny Cholvy ◽  
Pierre-Emmanuel Aguera ◽  
Annie Moulin ◽  
...  

In the case of hearing loss, cochlear implants (CIs) can restore hearing. Despite the advantages of CIs for speech perception, CI users still complain of poor perception of their auditory environment. To assess non-verbal auditory perception in CI users, we developed five listening tests. These tests measure pitch change detection, pitch direction identification, pitch short-term memory, auditory stream segregation, and emotional prosody recognition, along with perceived intensity ratings. To test the potential benefit of visual cues for pitch processing, the three pitch tests included visual indications for the task on half of the trials. We tested 10 normal-hearing (NH) participants, with the material presented as both original and vocoded sounds, and 10 post-lingually deaf CI users. With the vocoded sounds, the NH participants showed reduced scores for the detection of small pitch differences, as well as reduced emotion recognition and streaming abilities, compared with the original sounds. Similarly, the CI users had deficits for small differences in the pitch change detection task and in emotion recognition, as well as decreased streaming capacity. Overall, this assessment allows for the rapid detection of specific patterns of non-verbal auditory perception deficits. The current findings also open new perspectives on how to enhance pitch perception capacities using visual cues.
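
Vocoded sounds of the kind presented to the NH group are typically produced with a noise vocoder: the signal is split into frequency bands, and each band's envelope is extracted and used to modulate band-limited noise. A rough sketch with assumed band edges (the study's vocoder settings are not given in the abstract):

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

FS = 16000  # sample rate (Hz), an assumption

def bandpass(x, lo, hi):
    sos = butter(4, [lo, hi], btype="bandpass", fs=FS, output="sos")
    return sosfiltfilt(sos, x)

def noise_vocode(x, edges=(100, 300, 700, 1500, 3100, 6000)):
    """Noise vocoder: per-band envelopes modulate band-limited noise carriers."""
    rng = np.random.default_rng(0)
    out = np.zeros_like(x)
    for lo, hi in zip(edges[:-1], edges[1:]):
        band = bandpass(x, lo, hi)
        env = np.abs(hilbert(band))              # band envelope via Hilbert transform
        carrier = bandpass(rng.normal(size=len(x)), lo, hi)
        out += env * carrier                     # envelope-modulated noise band
    return out / np.max(np.abs(out))

# vocoded = noise_vocode(original_waveform)     # `original_waveform` is hypothetical
```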

