Speech Perception in Classroom Acoustics by Children With Hearing Loss and Wearing Hearing Aids

2020
Vol 29 (1)
pp. 6-17
Author(s): Frank Iglehart

Purpose The classroom acoustic standard ANSI/ASA S12.60-2010/Part 1 requires a reverberation time (RT) of 0.3 s for children with hearing impairment, shorter than its requirement of 0.6 s for children with typical hearing. While preliminary data from conference proceedings support this new 0.3-s RT requirement, peer-reviewed data supporting a 0.3-s RT are not available for children wearing hearing aids. To help address this, this article compares speech perception performance by children with hearing aids across RTs, including those specified in the ANSI/ASA-2010 standard. A related clinical issue is whether assessments of speech perception conducted in near-anechoic sound booths, which may overestimate performance in reverberant classrooms, provide a more reliable estimate when the child is in a classroom with a short RT of 0.3 s. To address this, this study compared speech perception by children with hearing aids in a sound booth to listening in 0.3-s RT. Method Participants listened in classroom RTs of 0.3, 0.6, and 0.9 s and in a near-anechoic sound booth. All conditions also included a 21-dB range of speech-to-noise ratios (SNRs) to further represent classroom listening environments. The performance measure, using the Bamford–Kowal–Bench Speech-in-Noise (BKB-SIN) test, was the SNR for 50% correct word recognition across these acoustic conditions, with supplementary analyses of percent correct. Results Each reduction in RT from 0.9 to 0.6 to 0.3 s significantly benefited the children's perception of speech. Scores obtained in a sound booth were significantly better than those measured in 0.3-s RT. Conclusion These results support the acoustic standard of 0.3-s RT for children with hearing impairment in learning spaces ≤ 283 m³, as specified in ANSI/ASA S12.60-2010/Part 1. Additionally, speech perception testing in a sound booth did not accurately predict listening ability in a classroom with 0.3-s RT. Supplemental Material https://doi.org/10.23641/asha.11356487
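
As a rough illustration of how the reverberation times above relate to room properties, the sketch below estimates RT with the classical Sabine formula (RT60 ≈ 0.161·V/A). The room dimensions and absorption coefficients are hypothetical examples, not values from the study or from the ANSI/ASA standard.

```python
# Rough illustration: estimating reverberation time (RT60) with the Sabine formula.
# Room dimensions and absorption coefficients are hypothetical examples.

def sabine_rt60(volume_m3: float, surface_absorptions: list[tuple[float, float]]) -> float:
    """RT60 = 0.161 * V / A, where A is the total absorption in metric sabins
    (sum over surfaces of area times absorption coefficient)."""
    total_absorption = sum(area * alpha for area, alpha in surface_absorptions)
    return 0.161 * volume_m3 / total_absorption

# Example: a small classroom under the 283 m^3 limit cited in the standard.
volume = 7.0 * 9.0 * 3.0  # ~189 m^3
surfaces = [
    (7.0 * 9.0, 0.60),              # acoustic ceiling tile
    (7.0 * 9.0, 0.20),              # carpeted floor
    (2 * (7.0 + 9.0) * 3.0, 0.10),  # painted walls
]

print(f"Estimated RT60: {sabine_rt60(volume, surfaces):.2f} s")
```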

2016
Vol 25 (2)
pp. 100-109
Author(s): Frank Iglehart

Purpose This study measured speech perception ability in children with cochlear implants and children with typical hearing when listening across ranges of reverberation times (RTs) and speech-to-noise ratios. Method Participants listened in classroom RTs of 0.3, 0.6, and 0.9 s combined with a 21-dB range of speech-to-noise ratios. Subsets also listened in a low-reverberant audiological sound booth. The performance measure, using the Bamford-Kowal-Bench Speech-in-Noise Test (Etymotic Research, Inc., 2005), was the speech-to-noise ratio for 50% correct word recognition across these acoustic conditions, with supplementary analyses of percent correct. Results Reduction in RT from 0.9 to 0.6 s benefited both groups of children. A further reduction in RT to 0.3 s provided additional benefit to the children with cochlear implants, with no further benefit or harm to those with typical hearing. For the participants with implants, scores in the sound booth were significantly higher than those in the classroom. Conclusions These results support the acoustic standards of 0.6-s RT for children with typical hearing and 0.3-s RT for children with auditory issues in learning spaces (≤ 283 m³) as specified in standard S12.60-2010/Part 1 of the American National Standards Institute/Acoustical Society of America (2010). In addition, speech perception testing in a low-reverberant booth overestimated classroom listening ability in children with cochlear implants.
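
The outcome in both studies above is the speech-to-noise ratio at which 50% of words are recognized. A generic way to derive such a threshold from percent-correct scores measured at several SNRs is to interpolate between the two points that bracket 50%; the sketch below is illustrative only and is not the published BKB-SIN scoring procedure, and all scores are made up.

```python
# Illustrative only: deriving an SNR-50 by interpolating between measured points.
# This is a generic approach, not the published BKB-SIN scoring rule.
import numpy as np

def snr_50(snrs_db: np.ndarray, percent_correct: np.ndarray) -> float:
    """Interpolate the SNR at which performance crosses 50% correct.
    Assumes scores increase with SNR over the measured range."""
    order = np.argsort(snrs_db)
    return float(np.interp(50.0, percent_correct[order], snrs_db[order]))

# Hypothetical scores across a 21-dB SNR range (values are made up).
snrs = np.array([-6, -3, 0, 3, 6, 9, 12, 15])
scores = np.array([5, 15, 30, 48, 62, 80, 91, 97])
print(f"SNR-50 ~ {snr_50(snrs, scores):.1f} dB")
```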


2002
Vol 11 (1)
pp. 29-41
Author(s): Todd Ricketts, Paula Henry

Hearing aids currently available on the market with both omnidirectional and directional microphone modes often provide reduced low-frequency amplification in directional microphone mode, because the closely phase-matched inputs from the two microphone ports cancel at low frequencies. The effects of this low-frequency gain reduction for individuals with hearing loss in the low frequencies were of primary interest. Changes in sound quality in quiet listening environments following gain compensation in the low frequencies were of secondary interest. Thirty participants were fit with bilateral in-the-ear hearing aids, which were programmed in three ways while in directional microphone mode: no gain compensation, adaptive gain compensation, and full gain compensation. All participants were tested with speech-in-noise tasks. Participants also made sound quality judgments based on monaural recordings made from the hearing aid. Results support a need for gain compensation for individuals with low-frequency hearing loss greater than 40 dB HL.
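
The low-frequency gain loss discussed above follows from first-order directional processing, which subtracts a delayed rear-port signal from the front-port signal; the two inputs are nearly identical at low frequencies, so their difference is small. The sketch below computes the on-axis response of a simple delay-and-subtract stage to show the roll-off; the port spacing and delay are illustrative values, not those of the study's hearing aids.

```python
# Illustration of why delay-and-subtract directional processing loses low-frequency
# gain. Port spacing and internal delay are example values only.
import numpy as np

c = 343.0    # speed of sound, m/s
d = 0.012    # port spacing, m (example)
tau = d / c  # internal delay (example choice placing the null directly behind)

for f in [125.0, 250.0, 500.0, 1000.0, 2000.0, 4000.0]:
    w = 2 * np.pi * f
    # On-axis plane wave: the rear port lags the front port by d/c, and the
    # processing delays the rear port by tau before subtracting, so
    # |H(f)| = |1 - exp(-j*w*(d/c + tau))|.
    mag = abs(1 - np.exp(-1j * w * (d / c + tau)))
    print(f"{f:6.0f} Hz: on-axis response {20 * np.log10(mag):6.1f} dB re omni")
```

The printed values fall by roughly 6 dB per octave toward low frequencies, which is the gain reduction that the study's compensation schemes were designed to offset.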


2013
Vol 24 (09)
pp. 832-844
Author(s): Andrea L. Pittman, Mollie M. Hiipakka

Background: Before advanced noise-management features can be recommended for use in children with hearing loss, evidence regarding their ability to use these features to optimize speech perception is necessary. Purpose: The purpose of this study was to examine the relation between children's preference for, and performance with, four combinations of noise-management features in noisy listening environments. Research Design: Children with hearing loss were asked to repeat short sentences presented in steady-state noise or in multitalker babble while wearing ear-level hearing aids. The aids were programmed with four memories having an orthogonal arrangement of two noise-management features. The children were also asked to indicate the hearing aid memory that they preferred in each of the listening conditions both initially and after a short period of use. Study Sample: Fifteen children between the ages of 8 and 12 yr with moderate hearing losses, bilaterally. Results: The children's preference for noise management aligned well with their performance for at least three of the four listening conditions. The configuration of noise-management features had little effect on speech perception with the exception of reduced performance for speech originating from behind the child while in a directional hearing aid setting. Additionally, the children's preference appeared to be governed by listening comfort, even under conditions for which a benefit was not expected such as the use of digital noise reduction in the multitalker babble conditions. Conclusions: The results serve as evidence in support of the use of noise-management features in grade-school children as young as 8 yr of age.


2020
Vol 31 (01)
pp. 017-029
Author(s): Paul Reinhart, Pavel Zahorik, Pamela Souza

Digital noise reduction (DNR) processing is used in hearing aids to enhance perception in noise by classifying and suppressing the noise acoustics. However, the efficacy of DNR processing is not known under reverberant conditions, where the speech-in-noise acoustics are further degraded by reverberation. The purpose of this study was to investigate acoustic and perceptual effects of DNR processing across a range of reverberant conditions for individuals with hearing impairment. This study used an experimental design to investigate the effects of varying reverberation on speech in noise processed with DNR. Twenty-six listeners with mild-to-moderate sensorineural hearing impairment participated in the study. Speech stimuli were combined with unmodulated broadband noise at several signal-to-noise ratios (SNRs). A range of reverberant conditions with realistic parameters was simulated, as well as an anechoic control condition without reverberation. Reverberant speech-in-noise signals were processed using a spectral subtraction DNR simulation. Signals were acoustically analyzed using a phase inversion technique to quantify the improvement in SNR produced by DNR processing. Sentence intelligibility and subjective ratings of listening effort, speech naturalness, and background noise comfort were examined with and without DNR processing across the conditions. Improvement in SNR was greatest in the anechoic control condition and decreased as the ratio of direct to reverberant energy decreased. There was no significant effect of DNR processing on speech intelligibility in the anechoic control condition, but there was a significant decrease in speech intelligibility with DNR processing in all of the reverberant conditions. Subjectively, listeners reported greater listening effort and lower speech naturalness with DNR processing in some of the reverberant conditions. Listeners reported higher background noise comfort with DNR processing only in the anechoic control condition. Results suggest that reverberation degrades the ability of spectral subtraction DNR processing to reduce noise without distorting the speech acoustics. Overall, DNR processing may be most beneficial in environments with little reverberation, and its use in highly reverberant environments may actually produce adverse perceptual effects. Further research is warranted using commercial hearing aids in realistic reverberant environments.
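
The phase inversion technique mentioned above estimates the speech and noise components at the output of a possibly nonlinear processor by running it twice, once with the noise phase inverted, and then summing and differencing the two outputs. The sketch below shows the idea on arbitrary stand-in signals; the "process" function is a trivial placeholder, not the study's spectral subtraction DNR.

```python
# Sketch of the phase-inversion technique for estimating output SNR of a processor:
# process speech+noise and speech-noise, then recombine the outputs.
# "process" below is a trivial stand-in, not a real DNR algorithm.
import numpy as np

rng = np.random.default_rng(0)
fs = 16000
t = np.arange(fs) / fs
speech = 0.5 * np.sin(2 * np.pi * 440 * t)    # stand-in for a speech signal
noise = 0.25 * rng.standard_normal(t.shape)   # unmodulated broadband noise

def process(x: np.ndarray) -> np.ndarray:
    """Placeholder for hearing aid / DNR processing (here: soft clipping)."""
    return np.tanh(x)

y_plus = process(speech + noise)    # noise in original phase
y_minus = process(speech - noise)   # noise phase-inverted

speech_out = 0.5 * (y_plus + y_minus)  # noise components approximately cancel
noise_out = 0.5 * (y_plus - y_minus)   # speech components approximately cancel

snr_in = 10 * np.log10(np.mean(speech**2) / np.mean(noise**2))
snr_out = 10 * np.log10(np.mean(speech_out**2) / np.mean(noise_out**2))
print(f"Input SNR:  {snr_in:.1f} dB")
print(f"Output SNR: {snr_out:.1f} dB (phase-inversion estimate)")
```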


2019
Vol 62 (2)
pp. 307-317
Author(s): Jianghua Lei, Huina Gong, Liang Chen

Purpose The study was designed primarily to determine whether the use of hearing aids (HAs) by individuals with hearing impairment in China affects their speechreading performance. Method Sixty-seven young adults with hearing impairment who used HAs and 78 young adults with hearing impairment who did not use HAs completed newly developed Chinese speechreading tests targeting 3 linguistic levels (i.e., words, phrases, and sentences). Results The group with HAs was more accurate at speechreading than the group without HAs across the 3 linguistic levels. For both groups, speechreading accuracy was higher for phrases than for words and sentences, and speechreading speed was slower for sentences than for words and phrases. Furthermore, there was a positive correlation between years of HA use and speechreading accuracy; longer HA use was associated with more accurate speechreading. Conclusions Young HA users in China have better speechreading performance than their peers with hearing impairment who do not use HAs. This result argues against the perceptual dependence hypothesis, which holds that greater dependence on visual information leads to improvement in visual speech perception.


1999
Vol 42 (3)
pp. 540-552
Author(s): Mark C. Flynn, Richard C. Dowell

By diminishing the role of communicative context, traditional tests of speech perception may underestimate or misrepresent the actual speech perception abilities of adults with a hearing impairment. This study investigates this contention by devising an assessment that may better simulate some aspects of "real-life" speech perception. A group of 31 participants with a severe-to-profound hearing impairment took part in a series of speech perception tests while wearing their hearing aids. The tests used question/answer or adjacency pairs, where the stimulus sentence was preceded by a question spoken by the participant. Four conditions were included: (a) no initiating sentence, as in a traditional open-set speech perception test; (b) a neutral initiating question (e.g., "Why?"); (c) a disruptive semantic relationship between the question and answer; and (d) a strong contextual relationship between the question and answer. The time delay between the question and answer was also varied. Results demonstrated that speech perception improved in all conditions with a preceding question, and that increasing the cohesion between the question and the reply improved speech perception scores. Additionally, time delay and the relatedness of the reply interacted: the effects of semantic context appeared to diminish over a 10-s period, while other linguistic effects remained more constant. These results indicate the utility of simulating communicative environments within speech perception tests.


2018
Vol 29 (07)
pp. 648-655
Author(s): Gabrielle H. Saunders, Ian Odgear, Anna Cosgrove, Melissa T. Frederick

There have been numerous recent reports on the association between hearing impairment and cognitive function, such that the cognition of adults with hearing loss is poorer relative to the cognition of adults with normal hearing (NH), even when amplification is used. However, it is not clear to what extent this is a testing artifact arising from the individual with hearing loss being unable to hear the test stimuli accurately. The primary purpose of this study was to examine whether use of amplification during cognitive screening with the Montreal Cognitive Assessment (MoCA) improves performance on the MoCA. Secondarily, we investigated the effects of hearing ability on MoCA performance by comparing the performance of individuals with and without hearing impairment. Participants were 42 individuals with hearing impairment and 19 individuals with NH. Of the individuals with hearing impairment, 22 routinely used hearing aids and 20 did not. Following a written informed consent process, all participants completed pure tone audiometry, speech testing in quiet (Maryland consonant-nucleus-consonant [CNC] words) and in noise (Quick Speech in Noise [QuickSIN] test), and the MoCA. The speech testing and MoCA were completed twice. Individuals with hearing impairment completed testing once unaided and once with amplification, whereas individuals with NH completed unaided testing twice. The individuals with hearing impairment performed significantly less well on the MoCA than those without hearing impairment for unaided testing, and the use of amplification did not significantly change performance. This is despite the finding that amplification significantly improved the performance of the hearing aid users on the measures of speech in quiet and speech in noise. Furthermore, there were strong correlations between MoCA score and the four-frequency pure-tone average, Maryland CNC score, and QuickSIN score, which remained moderate to strong when the analyses were adjusted for age. It is concluded that the individuals with hearing loss here performed less well on the MoCA than individuals with NH and that the use of amplification did not compensate for this performance deficit. Nonetheless, this should not be taken to suggest that the use of amplification during testing is unnecessary, because other unmeasured factors, such as the effort required to perform the task or fatigue, might have been reduced with the use of amplification.
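
The age-adjusted correlations reported above correspond to partial correlations; one generic way to compute them is to regress age out of both variables and correlate the residuals. The sketch below uses made-up data, not the study's measurements, and the variable names (pta, moca) are only illustrative.

```python
# Generic partial correlation (controlling for a single covariate, here age)
# computed by residualization. All data below are made up for illustration.
import numpy as np

def partial_corr(x: np.ndarray, y: np.ndarray, covariate: np.ndarray) -> float:
    """Correlate the parts of x and y not explained by the covariate."""
    design = np.column_stack([np.ones_like(covariate), covariate])

    def residualize(v: np.ndarray) -> np.ndarray:
        beta, *_ = np.linalg.lstsq(design, v, rcond=None)
        return v - design @ beta

    return float(np.corrcoef(residualize(x), residualize(y))[0, 1])

rng = np.random.default_rng(1)
age = rng.uniform(55, 85, 60)
pta = 20 + 0.6 * age + rng.normal(0, 8, 60)                    # hypothetical pure-tone average
moca = 32 - 0.1 * age - 0.08 * pta + rng.normal(0, 1.5, 60)    # hypothetical MoCA score

print(f"Partial r(MoCA, PTA | age) = {partial_corr(moca, pta, age):.2f}")
```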


2004
Vol 15 (09)
pp. 649-659
Author(s): Ruth A. Bentler, Jessica L.M. Egge, Jill L. Tubbs, Andrew B. Dittberner, Gregory A. Flamme

The purpose of this study was to assess the relationship between the directivity of a directional microphone hearing aid and listener performance. Hearing aids were fit bilaterally to 19 subjects with sensorineural hearing loss, and five microphone conditions were assessed: omnidirectional, cardioid, hypercardioid, supercardioid, and "monofit," wherein the left hearing aid was set to omnidirectional and the right hearing aid to hypercardioid. Speech perception performance was assessed using the Hearing in Noise Test (HINT) and the Connected Speech Test (CST). Subjects also assessed eight domains of sound quality for three stimuli (speech in quiet, speech in noise, and music). A diffuse soundfield system composed of eight loudspeakers forming the corners of a cube was used to output the background noise for the speech perception tasks and the three stimuli used for sound quality judgments. Results indicated that there were no significant differences in the HINT or CST performance, or sound quality judgments, across the four directional microphone conditions when tested in a diffuse field. Of particular interest was the monofit condition: Performance on speech perception tests was the same whether one or two directional microphones were used.
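
The microphone conditions above correspond to idealized first-order polar patterns of the form R(θ) = a + (1 − a)cos θ, and a standard way to compare them is the directivity index (DI) in a diffuse field, obtained by integrating the squared pattern over the sphere. The sketch below computes the DI from the closed-form integral; these are textbook free-field patterns, not measurements of the study's fitted hearing aids.

```python
# Directivity index of idealized first-order patterns R(theta) = a + (1-a)cos(theta).
# For an axisymmetric pattern with R(0) = 1, the directivity factor is
#   Q = 2 / integral_0^pi R(theta)^2 sin(theta) dtheta = 1 / (a^2 + (1-a)^2 / 3),
# and DI = 10*log10(Q). Textbook free-field values, not hearing aid measurements.
import numpy as np

def directivity_index(a: float) -> float:
    q = 1.0 / (a**2 + (1.0 - a) ** 2 / 3.0)
    return 10.0 * np.log10(q)

patterns = {
    "omnidirectional": 1.0,
    "cardioid": 0.5,
    "supercardioid": 0.366,
    "hypercardioid": 0.25,
}
for name, a in patterns.items():
    print(f"{name:>16}: DI = {directivity_index(a):.1f} dB")
```

Note that these free-field DI differences (roughly 4.8 to 6.0 dB among the directional patterns) are modest, which is consistent with the finding above of no significant performance differences across the directional conditions in a diffuse field.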


2021
Author(s): Enrico Varano, Konstantinos Vougioukas, Pingchuan Ma, Stavros Petridis, Maja Pantic, ...

Understanding speech becomes a demanding task when the environment is noisy. Comprehension of speech in noise can be substantially improved by looking at the speaker's face, and this audiovisual benefit is even more pronounced in people with hearing impairment. Recent advances in AI have made it possible to synthesize photorealistic talking faces from a speech recording and a still image of a person's face in an end-to-end manner. However, it has remained unknown whether such facial animations improve speech-in-noise comprehension. Here we consider facial animations produced by a recently introduced generative adversarial network (GAN), and show that humans cannot distinguish between the synthesized and the natural videos. Importantly, we then show that the end-to-end synthesized videos significantly aid humans in understanding speech in noise, although the natural facial motions yield a still higher audiovisual benefit. We further find that an audiovisual speech recognizer benefits from the synthesized facial animations as well. Our results suggest that synthesizing facial motions from speech can be used to aid speech comprehension in difficult listening environments.


2022
Vol 15
Author(s): Enrico Varano, Konstantinos Vougioukas, Pingchuan Ma, Stavros Petridis, Maja Pantic, ...

Understanding speech becomes a demanding task when the environment is noisy. Comprehension of speech in noise can be substantially improved by looking at the speaker's face, and this audiovisual benefit is even more pronounced in people with hearing impairment. Recent advances in AI have made it possible to synthesize photorealistic talking faces from a speech recording and a still image of a person's face in an end-to-end manner. However, it has remained unknown whether such facial animations improve speech-in-noise comprehension. Here we consider facial animations produced by a recently introduced generative adversarial network (GAN), and show that humans cannot distinguish between the synthesized and the natural videos. Importantly, we then show that the end-to-end synthesized videos significantly aid humans in understanding speech in noise, although the natural facial motions yield a still higher audiovisual benefit. We further find that an audiovisual speech recognizer (AVSR) benefits from the synthesized facial animations as well. Our results suggest that synthesizing facial motions from speech can be used to aid speech comprehension in difficult listening environments.
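
One simple way to quantify the audiovisual benefit described above is the difference in word-recognition accuracy between audiovisual and audio-only presentation at matched SNRs. The sketch below computes such a benefit on made-up responses; it is not the study's scoring pipeline, stimuli, or data.

```python
# Illustrative "audiovisual benefit": word-recognition accuracy with synthesized video
# minus accuracy audio-only, at the same SNR. All responses below are made up.
def word_accuracy(target: str, response: str) -> float:
    target_words = target.lower().split()
    response_words = set(response.lower().split())
    hits = sum(word in response_words for word in target_words)
    return hits / len(target_words)

trials = [
    # (target sentence, audio-only response, audio + synthesized-video response)
    ("the boy ran down the street", "the boy ran on the street", "the boy ran down the street"),
    ("she poured milk into the cup", "she poured silk in a cup", "she poured milk into the cup"),
]

audio_only = sum(word_accuracy(t, a) for t, a, _ in trials) / len(trials)
audiovisual = sum(word_accuracy(t, v) for t, _, v in trials) / len(trials)
print(f"Audio-only accuracy:  {audio_only:.2f}")
print(f"Audiovisual accuracy: {audiovisual:.2f}")
print(f"Audiovisual benefit:  {audiovisual - audio_only:+.2f}")
```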

