Korean Clear Speech Improves Speech Intelligibility for Individuals with Normal Hearing and Individuals with Hearing Loss

Author(s): Su Yeon Shin, Hongyeop Oh, In-Ki Jin

Abstract. Background: Clear speech is an effective communication strategy for improving speech intelligibility. While clear speech in several languages has been shown to significantly benefit intelligibility among listeners with differing hearing sensitivities and across environments with different noise levels, whether these results apply to Korean clear speech is unclear because of the language's distinct acoustic and linguistic characteristics. Purpose: This study aimed to measure the intelligibility benefits of Korean clear speech relative to conversational speech among listeners with normal hearing and listeners with hearing loss. Research Design: We used a mixed-model design that included both within-subject (effects of speaking style and listening condition) and between-subject (hearing status) elements. Data Collection and Analysis: We compared rationalized arcsine unit scores, transformed from the number of keywords recognized and repeated, between clear and conversational speech in groups with different hearing sensitivities across five listening conditions (quiet and 10, 5, 0, and –5 dB signal-to-noise ratio) using a mixed-model analysis. Results: The intelligibility scores for Korean clear speech were significantly higher than those for conversational speech under most listening conditions in all groups; clear speech yielded intelligibility gains of 6 to 32 rationalized arcsine units. Conclusion: The present study provides information on the actual benefits of Korean clear speech for listeners with varying hearing sensitivities. Audiologists and other hearing professionals may use this information to establish communication strategies for Korean-speaking patients with hearing loss.
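
The rationalized arcsine unit (RAU) transform mentioned above converts a proportion-correct score onto a scale with roughly uniform variance, so that differences near the floor and ceiling are comparable to differences in the mid-range. Below is a minimal sketch of one common formulation (a two-term arcsine transform with linear rescaling, in the spirit of Studebaker's RAU); the 18-of-25 keyword score is hypothetical and only illustrates the arithmetic.

```python
import numpy as np

def rau(correct, total):
    """Rationalized arcsine unit (RAU) transform of a keyword score.

    Two-term arcsine transform with a linear rescaling so that values
    roughly track percent correct in the mid-range (overall range is
    approximately -23 to +123).
    """
    x = np.asarray(correct, dtype=float)
    n = np.asarray(total, dtype=float)
    t = np.arcsin(np.sqrt(x / (n + 1))) + np.arcsin(np.sqrt((x + 1) / (n + 1)))
    return (146.0 / np.pi) * t - 23.0

# Hypothetical example: 18 of 25 keywords repeated correctly
print(round(float(rau(18, 25)), 1))  # about 70 RAU, close to the raw 72% score
```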

2012, Vol. 55(3), pp. 779–790
Author(s): Sarah Hargus Ferguson

Purpose: To establish the range of talker variability for vowel intelligibility in clear versus conversational speech for older adults with hearing loss and to determine whether talkers who produced a clear speech benefit for young listeners with normal hearing also did so for older adults with hearing loss. Method: Clear and conversational vowels in /bVd/ context produced by 41 talkers were presented in noise for identification by 40 older (ages 65–87 years) adults with sloping sensorineural hearing loss. Results: Vowel intelligibility within each speaking style and the size of the clear speech benefit varied widely among talkers. The clear speech benefit was equivalent to that enjoyed by young listeners with normal hearing in an earlier study. Most talkers who had produced a clear speech benefit for young listeners with normal hearing also did so for the older listeners with hearing loss in the present study. However, effects of talker gender differed between listeners with normal hearing and listeners with hearing loss. Conclusion: The clear speech vowel intelligibility benefit generated for listeners with hearing loss varied considerably among talkers. Most talkers who produced a clear speech benefit for normal-hearing listeners also produced a benefit for listeners with hearing loss.


2009, Vol. 20(01), pp. 028–039
Author(s): Elizabeth M. Adams, Robert E. Moore

Purpose: To study the effect of noise on speech rate judgment and on the signal-to-noise ratio threshold (SNR50) at different speech rates (slow, preferred, and fast). Research Design: Speech rate judgment and SNR50 tasks were completed in a normal-hearing condition and a simulated hearing-loss condition. Study Sample: Twenty-four female and six male young, normal-hearing participants. Results: Speech rate judgment was not affected by background noise, regardless of hearing condition. Results of the SNR50 task indicated that, as speech rate increased, performance decreased for both hearing conditions. There was a moderate correlation between speech rate judgment and SNR50 across the various speech rates, such that as judged speech rate increased from too slow to too fast, performance deteriorated. Conclusions: These findings support counseling patients and their families about the potential advantages of using average, or slightly slowed, speech rates while conversing in the presence of background noise.
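
SNR50 denotes the signal-to-noise ratio at which a listener repeats 50% of the speech material correctly. In practice it is usually tracked adaptively, but the idea can be illustrated with a simple post-hoc interpolation over measured scores; the SNR values and percent-correct scores below are made up for the example.

```python
import numpy as np

# Hypothetical percent-correct scores measured at several SNRs (dB)
snrs = np.array([-10.0, -5.0, 0.0, 5.0, 10.0])
pct = np.array([10.0, 35.0, 62.0, 85.0, 95.0])

# SNR50: the SNR at which performance crosses 50% correct, found here by
# linear interpolation between the two bracketing measurement points.
snr50 = np.interp(50.0, pct, snrs)
print(f"SNR50 ~= {snr50:.1f} dB")  # about -2.2 dB for these made-up scores
```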


2005, Vol. 16(03), pp. 157–171
Author(s): Rachel Caissie, Melanie McNutt Campbell, Wendy L. Frenette, Lori Scott, Illona Howell, ...

Spouses of persons with hearing loss served as talkers to examine the benefits of clear speech intervention. One talker received clear speech intervention. A second talker was simply instructed to speak clearly. Each talker was recorded reading sentences in three conditions: conversational speech, clear speech one week postintervention, and clear speech one month postintervention. Speech acoustic measures were obtained. The sentences were then presented to subjects with normal hearing and subjects with hearing loss to measure speech recognition. Results showed that simply asking a talker to speak clearly was effective in eliciting clear speech; however, providing intervention yielded changes in more speech parameters, more stable changes, and better speech recognition. When listening to the talker who received intervention, subjects with hearing loss achieved the same performance as subjects with normal hearing. However, they performed worse than subjects with normal hearing when listening to the talker who received clear speech instructions only. Individuals with hearing loss would thus receive speech recognition benefits if their communication partners were provided with clear speech intervention.


2011, Vol. 22(02), pp. 081–092
Author(s): Lauren R. Hulecki, Susan A. Small

Background: Bone-conduction thresholds have been used in audiologic assessments of both infants and adults to differentiate between conductive and sensorineural hearing losses. However, air- and bone-conduction thresholds estimated for infants with normal hearing using physiological measures have identified an “air–bone gap” in the low frequencies that does not result from conductive hearing impairment but, rather, from maturational differences in sensitivity. This maturational air–bone gap appears to be present up to at least 2 yr of age. Because most infants older than 6 mo of age are clinically assessed behaviorally, rather than physiologically, it is necessary to determine whether a similar maturational air–bone gap is present for behavioral air- and bone-conduction thresholds. Purpose: The purpose of this study was to estimate behavioral bone-conduction thresholds for infants using a standard clinical visual reinforcement audiometry (VRA) protocol to determine whether frequency-dependent maturational patterns exist as previously reported for physiological bone-conduction thresholds. Research Design: Behavioral bone-conduction minimum response levels were estimated at 500, 1000, 2000, and 4000 Hz using VRA for each participant. Study Sample: Young (7–15 mo; N = 17) and older (18–30 mo; N = 20) groups of infants were assessed. All infants were screened and considered to be at low risk for hearing loss. Data Collection and Analysis: Preliminary “normal levels” were determined by calculating the 90th percentile for responses present as a cumulative percentage. Mean bone-conduction thresholds were compared and analyzed using a mixed-model analysis of variance across frequency and age group. Linear regression analysis was also performed to assess the effect of age on bone-conduction thresholds. Results: Results of this study indicate that, when measured behaviorally, infants under 30 mo of age show frequency-dependent bone-conduction thresholds whereby their responses at 500 and 1000 Hz are significantly better than those at 2000 and 4000 Hz. However, thresholds obtained from the younger group of infants (mean age of 10.6 mo) were not significantly different from those obtained from the older group of infants (mean age of 23.0 mo) at any frequency. Conclusions: The findings of the present study are similar to the results obtained from previous physiological studies. Compared to previously documented air-conduction thresholds of infants using similar VRA techniques, a maturational air–bone gap is observed in the low frequencies. Therefore, differences between infant and adult bone-conduction thresholds persist until at least 30 mo of age. As a result, different “normal levels” should be used when assessing bone-conduction hearing sensitivity of infants using behavioral methods.
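
The preliminary "normal levels" described in the Data Collection and Analysis section are derived from the distribution of minimum response levels in the low-risk group. One way to express the 90th-percentile criterion is sketched below; the dB HL values are hypothetical and serve only to illustrate the computation.

```python
import numpy as np

# Hypothetical minimum response levels (dB HL) at one frequency for a group
# of low-risk infants (not data from the study).
levels = np.array([10, 15, 15, 20, 20, 20, 25, 25, 30, 35], dtype=float)

# Preliminary "normal level": the level at or below which 90% of the group
# responded, i.e., the 90th percentile of the minimum response levels.
normal_level = np.percentile(levels, 90)
print(f"Preliminary normal level: {normal_level:.0f} dB HL")
```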


2020, Vol. 5(1), pp. 36–39
Author(s): Mariya Yu. Boboshko, Irina P. Berdnikova, Natalya V. Maltzeva

Objectives: To determine normative data for sentence speech intelligibility in a free sound field and to estimate the applicability of the Russian Matrix Sentence test (RuMatrix) for assessing hearing aid fitting benefit. Material and Methods: Ten people with normal hearing and 28 hearing aid users with moderate to severe sensorineural hearing loss were involved in the study. The RuMatrix test was performed in a free sound field, both in quiet and in noise. All patients completed the COSI questionnaire. Results: According to the COSI questionnaire, the hearing-impaired patients were divided into two subgroups: the first with high and the second with low hearing aid benefit. In the first subgroup, the threshold for sentence intelligibility in quiet was 34.9 ± 6.4 dB SPL and in noise –3.3 ± 1.4 dB SNR; in the second subgroup, 41.7 ± 11.5 dB SPL and 0.15 ± 3.45 dB SNR, respectively. A significant difference between the data of both subgroups and the norm was registered (p


2020, Vol. 31(05), pp. 354–362
Author(s): Paula Folkeard, Marlene Bagatto, Susan Scollie

Abstract. Background: Hearing aid prescriptive methods are a commonly recommended component of evidence-based preferred practice guidelines and are often implemented in the hearing aid programming software. Previous studies evaluating hearing aid manufacturers' software-derived fittings against prescriptions have shown significant deviations from targets. However, few such studies have examined the accuracy of software-derived fittings for the Desired Sensation Level (DSL) v5.0 prescription. Purpose: The purpose of this study was to evaluate the accuracy of software-derived fittings to the DSL v5.0 prescription across a range of hearing aid brands, audiograms, and test levels. Research Design: This study is a prospective chart review with simulated cases. Data Collection and Analysis: A set of software-derived fittings was created for a six-month-old test case, across audiograms ranging from mild to profound. The aided output from each fitting was verified in the test box at 55, 65, 75, and 90 dB SPL and compared with the DSL v5.0 child targets. The deviations from target across the frequencies 250–6000 Hz were calculated, together with the root-mean-square error (RMSE) from target. The aided Speech Intelligibility Index (SII) values generated for the speech passages at 55 and 65 dB SPL were compared with published norms. Study Sample: Thirteen behind-the-ear style hearing aids from eight manufacturers were tested. Results: The amount of deviation per frequency depended on the test level and the degree of hearing loss. Most software-derived fittings for mild-to-moderately severe hearing losses fell within ±5 dB of target for most frequencies. RMSE results revealed that more than 84% of the hearing aid fittings for the mild-to-moderate hearing losses were within 5 dB at all test levels. Fittings for severe-to-profound hearing losses showed the greatest deviation from target and the largest RMSE. Aided SII values for the mild-to-moderate audiograms fell within the normative range for DSL pediatric fittings, although in the lower portion of the distribution. For more severe losses, SII values for some hearing aids fell below the normative range. Conclusions: In this study, use of the manufacturers' software-derived fittings based on the DSL v5.0 pediatric targets set most hearing aids within a clinically acceptable range around the prescribed target, particularly for mild-to-moderate hearing losses. However, clinician adjustment based on verification of hearing aid output would likely be required to optimize the fit to target, maximize aided SII, and ensure appropriate audibility across all degrees of hearing loss.
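
The per-fitting deviation and RMSE metrics described above are straightforward to compute once aided output and prescriptive targets are available at the audiometric frequencies. A minimal sketch follows; the target and measured output values are hypothetical, not DSL v5.0 targets for any particular audiogram.

```python
import numpy as np

# Hypothetical prescriptive targets and measured aided output (dB SPL) for
# one hearing aid at one test level; values are illustrative only.
freqs = np.array([250, 500, 1000, 2000, 4000, 6000])
target = np.array([62.0, 68.0, 74.0, 77.0, 79.0, 75.0])
measured = np.array([60.0, 66.0, 75.0, 74.0, 73.0, 70.0])

deviation = measured - target                      # per-frequency deviation from target
rmse = np.sqrt(np.mean(deviation ** 2))            # root-mean-square error across 250-6000 Hz
within_5 = bool(np.all(np.abs(deviation) <= 5.0))  # a +/-5 dB criterion like the one reported

for f, d in zip(freqs, deviation):
    print(f"{f} Hz: {d:+.0f} dB from target")
print(f"RMSE = {rmse:.1f} dB; within 5 dB at every frequency: {within_5}")
```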


2016, Vol. 59(1), pp. 110–121
Author(s): Marc Brennan, Ryan McCreery, Judy Kopun, Dawna Lewis, Joshua Alexander, ...

Purpose: This study compared masking release for adults and children with normal hearing and hearing loss. For the participants with hearing loss, masking release using simulated hearing aid amplification with 2 different compression speeds (slow, fast) was compared. Method: Sentence recognition in unmodulated noise was compared with recognition in modulated noise (masking release). Recognition was measured for participants with hearing loss using individualized amplification via the hearing-aid simulator. Results: Adults with hearing loss showed greater masking release than the children with hearing loss. Average masking release was small (1 dB) and did not depend on hearing status. Masking release was comparable for slow and fast compression. Conclusions: The use of amplification in this study contrasts with previous studies that did not use amplification. The results suggest that when differences in audibility are reduced, participants with hearing loss may be able to take advantage of dips in the noise levels, similar to participants with normal hearing. Although children required a more favorable signal-to-noise ratio than adults for both unmodulated and modulated noise, masking release was not statistically different. However, the ability to detect a difference may have been limited by the small amount of masking release observed.
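
Masking release is conventionally quantified as the improvement, in dB, in the speech reception threshold (SRT) when the masker is modulated rather than steady. A one-line sketch with hypothetical thresholds:

```python
# Masking release: improvement (in dB) in the speech reception threshold (SRT)
# when the masker is modulated rather than unmodulated. Thresholds are hypothetical.
srt_unmodulated = -2.0   # dB SNR for criterion sentence recognition in steady noise
srt_modulated = -3.0     # dB SNR for the same criterion in modulated noise

masking_release = srt_unmodulated - srt_modulated  # positive = benefit from dips in the noise
print(f"Masking release = {masking_release:.1f} dB")
```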


2016, Vol. 59(3), pp. 590–599
Author(s): Mary Rudner, Sushmit Mishra, Stefan Stenfelt, Thomas Lunner, Jerker Rönnberg

Purpose: Seeing the talker's face improves speech understanding in noise, possibly releasing resources for cognitive processing. We investigated whether it improves free recall of spoken two-digit numbers. Method: Twenty younger adults with normal hearing and 24 older adults with hearing loss listened to and subsequently recalled lists of 13 two-digit numbers, with alternating male and female talkers. Lists were presented in quiet as well as in stationary and speech-like noise at a signal-to-noise ratio giving approximately 90% intelligibility. Amplification compensated for loss of audibility. Results: Seeing the talker's face improved free recall performance for the younger but not the older group. Poorer performance in background noise was contingent on individual differences in working memory capacity. The effect of seeing the talker's face did not differ in quiet and noise. Conclusions: We have argued that the absence of an effect of seeing the talker's face for older adults with hearing loss may be due to modulation of audiovisual integration mechanisms caused by an interaction between task demands and participant characteristics. In particular, we suggest that executive task demands and interindividual executive skills may play a key role in determining the benefit of seeing the talker's face during a speech-based cognitive task.


2015, Vol. 26(06), pp. 572–581
Author(s): Stanley Sheft, Min-Yu Cheng, Valeriy Shafiro

Background: Past work has shown that low-rate frequency modulation (FM) may help preserve signal coherence, aid segmentation at word and syllable boundaries, and benefit speech intelligibility in the presence of a masker. Purpose: This study evaluated whether difficulties in speech perception by cochlear implant (CI) users relate to a deficit in the ability to discriminate among stochastic low-rate patterns of FM. Research Design: This is a correlational study assessing the association between the ability to discriminate stochastic patterns of low-rate FM and the intelligibility of speech in noise. Study Sample: Thirteen postlingually deafened adult CI users participated in this study. Data Collection and Analysis: Using modulators derived from 5-Hz lowpass noise applied to a 1-kHz carrier, discrimination thresholds were measured in terms of frequency excursion (both in quiet and with a speech-babble masker present), stimulus duration, and signal-to-noise ratio in the presence of the speech-babble masker. Speech perception ability was assessed in the presence of the same speech-babble masker. Relationships were evaluated with Pearson product–moment correlation analysis, with correction for family-wise error, and with commonality analysis to determine the unique and common contributions of the psychoacoustic variables to the association with speech ability. Results: Significant correlations were obtained between masked speech intelligibility and three metrics of FM discrimination involving either signal-to-noise ratio or stimulus duration, with shared variance among the three measures accounting for much of the effect. Compared with past results from young normal-hearing adults and from older adults with either normal hearing or a mild-to-moderate hearing loss, mean FM discrimination thresholds obtained from CI users were higher in all conditions. Conclusions: The ability to process the pattern of frequency excursions of stochastic FM may, in part, share a common basis with speech perception in noise. Discrimination of differences in the temporally distributed place coding of the stimulus could serve as this common basis for CI users.
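
The correlational analysis pairs each FM-discrimination metric with the masked speech intelligibility score and then controls the family-wise error rate across the set of tests. The sketch below uses a Bonferroni correction, one simple family-wise control (the abstract does not specify the exact correction used), with placeholder data standing in for the 13 CI users.

```python
import numpy as np
from scipy.stats import pearsonr

# Placeholder data: masked speech intelligibility scores and three
# FM-discrimination metrics (random values, illustration only).
rng = np.random.default_rng(1)
speech = rng.normal(size=13)
fm_metrics = {
    "fm_snr_threshold": rng.normal(size=13),
    "fm_short_duration": rng.normal(size=13),
    "fm_long_duration": rng.normal(size=13),
}

alpha = 0.05
n_tests = len(fm_metrics)  # Bonferroni: compare p against alpha / number of tests
for name, values in fm_metrics.items():
    r, p = pearsonr(speech, values)
    print(f"{name}: r = {r:.2f}, p = {p:.3f}, "
          f"significant after correction: {p < alpha / n_tests}")
```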


2021
Author(s): Hoyoung Yi, Ashly Pingsterhaus, Woonyoung Song

The coronavirus pandemic has resulted in the recommended or required use of face masks in public. Wearing a face mask compromises communication, especially in the presence of competing noise. It is therefore important to measure the potential adverse effects of face masks on speech intelligibility in communication contexts with excessive background noise, in order to identify solutions to this communication challenge. Accordingly, the effects of wearing transparent face masks and of using clear speech to support verbal communication were evaluated here. We evaluated listener word identification scores across the following four factors: (1) type of mask (no mask, transparent mask, or disposable paper mask), (2) presentation mode (auditory only or audiovisual), (3) speaking style (conversational speech or clear speech), and (4) type of background noise (speech-shaped noise or four-talker babble, at a –5 dB signal-to-noise ratio). Results showed that in the presence of noise, listeners performed worse when the speaker wore a disposable paper mask or a transparent mask than when the speaker wore no mask. Listeners correctly identified more words in the audiovisual mode when listening to clear speech. The results indicate that the combination of face masks and background noise negatively impacts speech intelligibility for listeners. Transparent masks facilitate understanding of target sentences by providing visual information. Use of clear speech was shown to alleviate challenging communication situations, including a lack of visual cues and a reduced acoustic signal.

