Subcortical Synchrony: A Bottleneck When Listening in Noise

2021, Vol 74 (11), pp. 26-27
Author(s): Nina Kraus, Travis White-Schwoch

2002, Vol 45 (5), pp. 1027-1038
Author(s): Rosalie M. Uchanski, Ann E. Geers, Athanassios Protopapas

Exposure to modified speech has been shown to benefit children with language-learning impairments with respect to their language skills (M. M. Merzenich et al., 1998; P. Tallal et al., 1996). In the study by Tallal and colleagues, the speech modification consisted of both slowing down and amplifying fast, transitional elements of speech. In this study, we examined whether the benefits of modified speech could be extended to provide intelligibility improvements for children with severe-to-profound hearing impairment who wear sensory aids. In addition, the separate effects on intelligibility of slowing down and amplifying speech were evaluated. Two groups of listeners were employed: 8 severe-to-profoundly hearing-impaired children and 5 children with normal hearing. Four speech-processing conditions were tested: (1) natural, unprocessed speech; (2) envelope-amplified speech; (3) slowed speech; and (4) both slowed and envelope-amplified speech. For each condition, three types of speech materials were used: words in sentences, isolated words, and syllable contrasts. To degrade the performance of the normal-hearing children, all testing was completed with a noise background. Results from the hearing-impaired children showed that all varieties of modified speech yielded intelligibility that was either equivalent to or poorer than that of unprocessed speech. For words in sentences and isolated words, the slowing-down of speech had no effect on intelligibility scores, whereas envelope amplification, both alone and combined with slowing-down, yielded significantly lower scores. Intelligibility results from normal-hearing children listening in noise were somewhat similar to those from hearing-impaired children. For isolated words, the slowing-down of speech had no effect on intelligibility, whereas envelope amplification degraded intelligibility. For both subject groups, speech processing had no statistically significant effect on syllable discrimination. In summary, without extensive exposure to the speech-processing conditions, children with impaired hearing and children with normal hearing listening in noise received no intelligibility advantage from either slowed speech or envelope-amplified speech.
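As a rough illustration of the two manipulations described above, the sketch below (Python, assuming NumPy and SciPy are available) boosts rapidly changing portions of the amplitude envelope and time-stretches the waveform by simple resampling. The function names, the Hilbert-envelope approach, and the parameter values are illustrative assumptions; the original studies used their own pitch-preserving processing.

```python
import numpy as np
from scipy.signal import hilbert, butter, filtfilt

def envelope_amplify(x, fs, gain_db=10.0, cutoff_hz=30.0):
    """Boost fast envelope changes in a speech waveform (illustrative only).

    The amplitude envelope is taken from the Hilbert transform and smoothed
    with a low-pass filter; samples where the envelope changes quickly get
    extra gain, loosely mimicking 'envelope amplification' of transients.
    """
    env = np.abs(hilbert(x))
    b, a = butter(2, cutoff_hz / (fs / 2), btype="low")
    env_smooth = filtfilt(b, a, env)
    d_env = np.abs(np.gradient(env_smooth))
    d_env /= d_env.max() + 1e-12              # normalize rate of change to [0, 1]
    gain = 10 ** (gain_db * d_env / 20.0)     # up to +gain_db at the fastest transitions
    return x * gain

def slow_down(x, factor=1.5):
    """Crude time-stretch by resampling (also lowers pitch; the original
    studies used pitch-preserving time-scale modification instead)."""
    t_in = np.linspace(0.0, 1.0, num=len(x))
    t_out = np.linspace(0.0, 1.0, num=int(len(x) * factor))
    return np.interp(t_out, t_in, x)
```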


2018, Vol 41 (24), pp. 2918-2926
Author(s): Benoît Jutras, Lyne Lafontaine, Marie-Pier East, Marjorie Noël

2020, Vol 10 (7), pp. 428
Author(s): Aparna Rao, Tess K. Koerner, Brandon Madsen, Yang Zhang

This electrophysiological study investigated the role of the medial olivocochlear (MOC) efferents in listening in noise. Both ears of eleven normal-hearing adult participants were tested. The physiological tests consisted of transient-evoked otoacoustic emission (TEOAE) inhibition and the measurement of cortical event-related potentials (ERPs). The mismatch negativity (MMN) and P300 responses were obtained in passive and active listening tasks, respectively. Behavioral responses for the word recognition in noise test were also analyzed. Consistent with previous findings, the TEOAE data showed significant inhibition in the presence of contralateral acoustic stimulation. However, performance in the word recognition in noise test was comparable for the two conditions (i.e., without contralateral stimulation and with contralateral stimulation). Peak latencies and peak amplitudes of MMN and P300 did not show changes with contralateral stimulation. Behavioral performance was also maintained in the P300 task. Together, the results show that the peripheral auditory efferent effects captured via otoacoustic emission (OAE) inhibition might not necessarily be reflected in measures of central cortical processing and behavioral performance. As the MOC effects may not play a role in all listening situations in adults, the functional significance of the cochlear effects of the medial olivocochlear efferents and the optimal conditions conducive to corresponding effects in behavioral and cortical responses remain to be elucidated.
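For orientation, contralateral TEOAE inhibition of the kind measured here is typically expressed as the drop in emission level when contralateral acoustic stimulation (CAS) is added. The sketch below is a minimal, hypothetical version of that calculation; the function name and the example magnitude in the comment are assumptions, not values from the study.

```python
import numpy as np

def teoae_inhibition_db(teoae_quiet, teoae_cas):
    """dB drop in TEOAE level with contralateral acoustic stimulation (CAS).

    Both arguments are arrays of emission samples (or band levels) recorded
    without and with CAS. A positive return value means CAS reduced the
    emission, i.e. MOC-mediated suppression of cochlear amplification.
    """
    rms = lambda x: np.sqrt(np.mean(np.square(x)))
    return 20.0 * np.log10(rms(teoae_quiet) / rms(teoae_cas))

# e.g. teoae_inhibition_db(quiet, cas) of about 1 dB would indicate a small but
# measurable MOC effect (the magnitude here is illustrative, not the study's).
```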


2018, Vol 71 (5), pp. 44
Author(s): Nina Kraus, Travis White-Schwoch

2015, Vol 24 (4), pp. 477-486
Author(s): Douglas P. Sladen, Todd A. Ricketts

Purpose Several studies have been devoted to understanding the frequency information available to adult users of cochlear implants when listening in quiet. The objective of this study was to construct frequency importance functions for a group of adults with cochlear implants and a group of adults with normal hearing both in quiet and in a +10 dB signal-to-noise ratio. Method Two groups of adults, 1 with cochlear implants and 1 with normal hearing, were asked to identify nonsense syllables in quiet and in the presence of 6-talker babble while “holes” were systematically created in the speech spectrum. Frequency importance functions were constructed. Results Results showed that adults with normal hearing placed greater weight on bands 1, 3, and 4 than on bands 2, 5, and 6, whereas adults with cochlear implants placed equal weight on all bands. The frequency importance functions for each group did not differ between listening in quiet and listening in noise. Conclusions Adults with cochlear implants assign perceptual weight toward different frequency bands, though the weight assignment does not differ between quiet and noisy conditions. Generalizing these results to the broader population of adults with implants is constrained by a small sample size.
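Two pieces of this method lend themselves to a short sketch: mixing speech with six-talker babble at a +10 dB signal-to-noise ratio, and deriving relative band weights from the drop in identification score when a spectral "hole" removes a band. The functions below are hypothetical simplifications under those assumptions; the study's actual weighting procedure may differ.

```python
import numpy as np

def mix_at_snr(speech, babble, snr_db=10.0):
    """Scale the babble masker so the speech-to-babble level difference
    equals snr_db, then return the mixture (equal-length arrays assumed)."""
    rms = lambda x: np.sqrt(np.mean(np.square(x)))
    babble_gain = rms(speech) / (10 ** (snr_db / 20.0)) / rms(babble)
    return speech + babble * babble_gain

def band_importance(score_full, scores_with_hole):
    """Relative band weights estimated from the drop in identification score
    when each band is removed by a spectral 'hole', normalized to sum to 1."""
    drops = np.maximum(score_full - np.asarray(scores_with_hole, dtype=float), 0.0)
    total = drops.sum()
    return drops / total if total > 0 else drops
```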


2012, Vol 23 (2), pp. 81-91
Author(s): Jessica Banh, Gurjit Singh, M. Kathleen Pichora-Fuller

Background: Age-related declines in auditory and cognitive processing may contribute to the difficulties with listening in noise that are often reported by older adults. Such difficulties are reported even by those who have relatively good audiograms that could be considered “normal” for their age (ISO 7029-2000 [ISO, 2000]). The Speech, Spatial, and Qualities of Hearing Scale (SSQ; Gatehouse and Noble, 2004) is a questionnaire developed to measure a listener's self-reported ability to hear in a variety of everyday situations, such as those that are challenging for older adults, and it can provide insights into the possible contributions of auditory and cognitive factors to their listening difficulties. The SSQ has been shown to be a sensitive and reliable questionnaire to detect benefits associated with the use of different hearing technologies and potentially other forms of intervention. Establishing how age-matched listeners with audiograms “normal” for their age rate the items on the SSQ could enable an extension of its use in audiological assessment and in setting rehabilitative goals. Purpose: The main purpose of this study was to investigate how younger and older adults who passed audiometric screening and who had thresholds considered to be “normal” for their age responded on the SSQ. It was also of interest to compare these results to those reported previously for older listeners with hearing loss in an attempt to tease out the relative effects of age and hearing loss. Study Sample: The SSQ was administered to 48 younger (mean age = 19 yr; SD = 1.0) and 48 older (mean age = 70 yr, SD = 4.1) adults with clinically normal audiometric thresholds below 4 kHz. The younger adults were recruited through an introductory psychology course, and the older adults were volunteers from the local community. Data Collection and Analysis: Both age groups completed the SSQ. The differences between the groups were analyzed. Correlations were used to compare the pattern of results across items for the two age groups in the present study and to assess the relationship between SSQ scores and objective measures of hearing. Comparisons were also made to published results for older adults with hearing loss. Results: The pattern of reported difficulty across items was similar for both age groups, but younger adults had significantly higher scores than older adults on 42 of the 46 items. On average, younger adults scored 8.8 (SD = 0.6) out of 10 and older adults scored 7.7 (SD = 1.2) out of 10. By comparison, scores of 5.5 (SD = 1.9) have been reported for older adults (mean age = 71 yr, SD = 8.1) with moderate hearing loss (Gatehouse and Noble, 2004). Conclusions: By establishing the best scores that could reasonably be expected from younger and older adults with “normal” hearing thresholds, these results provide clinicians with information that should assist them in setting realistic targets for interventions for adults of different ages.
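A minimal sketch of the kind of analysis described (per-item profiles, overall 0-10 means, an across-item correlation between groups, and a group comparison) is given below, assuming SSQ ratings are stored as participants x items arrays. The function names and the specific tests shown are illustrative choices, not the authors' exact analysis.

```python
import numpy as np
from scipy import stats

def summarize_ssq(ratings):
    """ratings: participants x items array of 0-10 SSQ responses."""
    item_means = ratings.mean(axis=0)       # difficulty profile across items
    per_listener = ratings.mean(axis=1)     # one overall score per listener
    return item_means, per_listener

def compare_groups(young, old):
    item_young, overall_young = summarize_ssq(young)
    item_old, overall_old = summarize_ssq(old)
    profile_r, _ = stats.pearsonr(item_young, item_old)   # similarity of item profiles
    t, p = stats.ttest_ind(overall_young, overall_old)    # group difference, overall score
    return {
        "young_mean": overall_young.mean(), "young_sd": overall_young.std(ddof=1),
        "old_mean": overall_old.mean(), "old_sd": overall_old.std(ddof=1),
        "profile_r": profile_r, "t": t, "p": p,
    }
```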


2010, Vol 21 (7), pp. 441-451
Author(s): René H. Gifford, Lawrence J. Revit

Background: Although cochlear implant patients are achieving increasingly higher levels of performance, speech perception in noise continues to be problematic. The newest generations of implant speech processors are equipped with preprocessing and/or external accessories that are purported to improve listening in noise. Most speech perception measures in the clinical setting, however, do not provide a close approximation to real-world listening environments. Purpose: To assess speech perception for adult cochlear implant recipients in the presence of a realistic restaurant simulation generated by an eight-loudspeaker (R-SPACE™) array in order to determine whether commercially available preprocessing strategies and/or external accessories yield improved sentence recognition in noise. Research Design: Single-subject, repeated-measures design with two groups of participants: Advanced Bionics and Cochlear Corporation recipients. Study Sample: Thirty-four subjects, ranging in age from 18 to 90 yr (mean 54.5 yr), participated in this prospective study. Fourteen subjects were Advanced Bionics recipients, and 20 subjects were Cochlear Corporation recipients. Intervention: Speech reception thresholds (SRTs) in semidiffuse restaurant noise originating from an eight-loudspeaker array were assessed with the subjects' preferred listening programs as well as with the addition of either Beam™ preprocessing (Cochlear Corporation) or the T-Mic® accessory option (Advanced Bionics). Data Collection and Analysis: In Experiment 1, adaptive SRTs with the Hearing in Noise Test sentences were obtained for all 34 subjects. For Cochlear Corporation recipients, SRTs were obtained with their preferred everyday listening program as well as with the addition of Focus preprocessing. For Advanced Bionics recipients, SRTs were obtained with the integrated behind-the-ear (BTE) mic as well as with the T-Mic. Statistical analysis using a repeated-measures analysis of variance (ANOVA) evaluated the effects of the preprocessing strategy or external accessory in reducing the SRT in noise. In addition, a standard t-test was run to evaluate effectiveness across manufacturer for improving the SRT in noise. In Experiment 2, 16 of the 20 Cochlear Corporation subjects were reassessed obtaining an SRT in noise using the manufacturer-suggested "Everyday," "Noise," and "Focus" preprocessing strategies. A repeated-measures ANOVA was employed to assess the effects of preprocessing. Results: The primary findings were (i) both Noise and Focus preprocessing strategies (Cochlear Corporation) significantly improved the SRT in noise as compared to Everyday preprocessing, (ii) the T-Mic accessory option (Advanced Bionics) significantly improved the SRT as compared to the BTE mic, and (iii) Focus preprocessing and the T-Mic resulted in similar degrees of improvement that were not found to be significantly different from one another. Conclusion: Options available in current cochlear implant sound processors are able to significantly improve speech understanding in a realistic, semidiffuse noise with both Cochlear Corporation and Advanced Bionics systems. For Cochlear Corporation recipients, Focus preprocessing yields the best speech-recognition performance in a complex listening environment; however, it is recommended that Noise preprocessing be used as the new default for everyday listening environments to avoid the need for switching programs throughout the day. For Advanced Bionics recipients, the T-Mic offers significantly improved performance in noise and is recommended for everyday use in all listening environments.
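An adaptive speech reception threshold of the kind obtained here is usually tracked by raising the SNR after an incorrect sentence and lowering it after a correct one, then averaging the later trial levels. The sketch below is a deliberately simplified 1-down/1-up track; the step size, number of sentences, and averaging rule are assumptions and do not reproduce the exact HINT procedure.

```python
def adaptive_srt(present_sentence, snr_start=10.0, step_db=2.0, n_sentences=20):
    """Minimal 1-down/1-up adaptive track converging on the SNR for roughly
    50% sentence recognition.

    `present_sentence(snr_db)` must play one sentence at the given SNR and
    return True if the listener repeats it correctly.
    """
    snr = snr_start
    track = []
    for _ in range(n_sentences):
        correct = present_sentence(snr)
        track.append(snr)
        snr += -step_db if correct else step_db
    # Estimate the SRT from the later, stabilized part of the track.
    tail = track[4:]
    return sum(tail) / len(tail)
```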


2020, Vol 125 (4), pp. 287-303
Author(s): Kacie Dunham, Jacob I. Feldman, Yupeng Liu, Margaret Cassidy, Julie G. Conrad, ...

Children with autism spectrum disorder (ASD) display differences in multisensory function as quantified by several different measures. This study estimated the stability of variables derived from commonly used measures of multisensory function in school-aged children with ASD. Participants completed a simultaneity judgment task for audiovisual speech, tasks designed to elicit the McGurk effect, listening-in-noise tasks, electroencephalographic recordings, and eye-tracking tasks. Results indicate that the stability of indices derived from tasks tapping multisensory processing is variable. These findings have important implications for measurement in future research. Averaging scores across repeated observations will often be required to obtain acceptably stable estimates and, thus, to increase the likelihood of detecting effects of interest related to multisensory processing in children with ASD.
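The measurement point in the final sentence can be made concrete with two small helpers: a test-retest correlation as a stability index, and the Spearman-Brown prophecy formula for the reliability of an average of k repeated observations. Both are standard psychometric tools offered here as an illustrative sketch, not the authors' analysis.

```python
import numpy as np

def test_retest_stability(session1, session2):
    """Stability of a measure as the correlation between two administrations."""
    return np.corrcoef(session1, session2)[0, 1]

def reliability_of_average(r_single, k):
    """Spearman-Brown prophecy: reliability of the mean of k repeated
    observations, given the single-observation reliability r_single."""
    return k * r_single / (1.0 + (k - 1) * r_single)

# A measure with r = 0.5 for a single run reaches about 0.75 when three runs
# are averaged: reliability_of_average(0.5, 3) == 0.75, which is why averaging
# across repeated observations improves the chance of detecting effects.
```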


2000, Vol 21 (2), pp. 103-122
Author(s): David Preves

2016, Vol 59 (1), pp. 1-14
Author(s): Victoria C. P. Knowland, Sam Evans, Caroline Snell, Stuart Rosen

Purpose The purpose of the study was to assess the ability of children with developmental language learning impairments (LLIs) to use visual speech cues from the talking face. Method In this cross-sectional study, 41 typically developing children (mean age: 8 years 0 months, range: 4 years 5 months to 11 years 10 months) and 27 children with diagnosed LLI (mean age: 8 years 10 months, range: 5 years 2 months to 11 years 6 months) completed a silent speechreading task and a speech-in-noise task with and without visual support from the talking face. The speech-in-noise task involved the identification of a target word in a carrier sentence with a single competing speaker as a masker. Results Children in the LLI group showed a deficit in speechreading when compared with their typically developing peers. Beyond the single-word level, this deficit became more apparent in older children. On the speech-in-noise task, a substantial benefit of visual cues was found regardless of age or group membership, although the LLI group showed an overall developmental delay in speech perception. Conclusion Although children with LLI were less accurate than their peers on the speechreading and speech-in-noise tasks, both groups were able to make equivalent use of visual cues to boost performance accuracy when listening in noise.

