Speaking Clearly for the Hard of Hearing III

1989 ◽  
Vol 32 (3) ◽  
pp. 600-603 ◽  
Author(s):  
M. A. Picheny ◽  
N. I. Durlach ◽  
L. D. Braida

Previous studies (Picheny, Durlach, & Braida, 1985, 1986) have demonstrated that substantial intelligibility differences exist for hearing-impaired listeners between speech spoken clearly and speech spoken conversationally. This paper presents the results of a probe experiment intended to determine the contribution of speaking rate to the intelligibility differences. Clear sentences were processed to have the durational properties of conversational speech, and conversational sentences were processed to have the durational properties of clear speech. Intelligibility testing with hearing-impaired listeners revealed both sets of materials to be degraded after processing. However, the degradation could not be attributed to processing artifacts, because reprocessing the materials to restore their original durations produced intelligibility scores close to those observed for the unprocessed materials. We conclude that the simple processing to alter the relative durations of the speech materials was not adequate to assess the contribution of speaking rate to the intelligibility differences; further studies are proposed to address this question.

1986 ◽  
Vol 29 (4) ◽  
pp. 434-446 ◽  
Author(s):  
M. A. Picheny ◽  
N. I. Durlach ◽  
L. D. Braida

The first paper of this series (Picheny, Durlach, & Braida, 1985) presented evidence that there are substantial intelligibility differences for hearing-impaired listeners between nonsense sentences spoken in a conversational manner and spoken with the effort to produce clear speech. In this paper, we report the results of acoustic analyses performed on the conversational and clear speech. Among these results are the following. First, speaking rate decreases substantially in clear speech. This decrease is achieved both by inserting pauses between words and by lengthening the durations of individual speech sounds. Second, there are differences between the two speaking modes in the numbers and types of phonological phenomena observed. In conversational speech, vowels are modified or reduced, and word-final stop bursts are often not released. In clear speech, vowels are modified to a lesser extent, and stop bursts, as well as essentially all word-final consonants, are released. Third, the RMS intensities for obstruent sounds, particularly stop consonants, are greater in clear speech than in conversational speech. Finally, changes in the long-term spectrum are small. Thus, speaking clearly cannot be regarded as equivalent to the application of high-frequency emphasis.


2014 ◽  
Vol 57 (5) ◽  
pp. 1908-1918 ◽  
Author(s):  
Kristin J. Van Engen ◽  
Jasmine E. B. Phelps ◽  
Rajka Smiljanic ◽  
Bharath Chandrasekaran

Purpose: The authors sought to investigate interactions among intelligibility-enhancing speech cues (i.e., semantic context, clearly produced speech, and visual information) across a range of masking conditions.

Method: Sentence recognition in noise was assessed for 29 normal-hearing listeners. Testing included semantically normal and anomalous sentences, conversational and clear speaking styles, auditory-only (AO) and audiovisual (AV) presentation modalities, and 4 different maskers (2-talker babble, 4-talker babble, 8-talker babble, and speech-shaped noise).

Results: Semantic context, clear speech, and visual input all improved intelligibility but also interacted with one another and with masking condition. Semantic context was beneficial across all maskers in AV conditions but only in speech-shaped noise in AO conditions. Clear speech provided the most benefit for AV speech with semantically anomalous targets. Finally, listeners were better able to take advantage of visual information for meaningful versus anomalous sentences and for clear versus conversational speech.

Conclusion: Because intelligibility-enhancing cues influence each other and depend on masking condition, multiple maskers and enhancement cues should be used to accurately assess individuals' speech-in-noise perception.
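One of the maskers above, speech-shaped noise, is conventionally generated by imposing the long-term magnitude spectrum of a speech recording onto random-phase noise. The sketch below illustrates that general technique only; the function name and FFT-based approach are assumptions, not the authors' actual stimulus-generation procedure.

```python
import numpy as np

def speech_shaped_noise(speech, seed=None):
    """Generate noise with the same long-term magnitude spectrum as `speech`.

    The speech's spectral magnitudes are kept, the phases are randomized,
    and the result is scaled to match the speech's overall RMS level.
    """
    rng = np.random.default_rng(seed)
    magnitudes = np.abs(np.fft.rfft(speech))               # long-term spectrum
    phases = rng.uniform(0, 2 * np.pi, magnitudes.shape)   # random phases
    noise = np.fft.irfft(magnitudes * np.exp(1j * phases), n=len(speech))
    # Match the overall RMS level of the original speech.
    noise *= np.sqrt(np.mean(speech**2)) / np.sqrt(np.mean(noise**2))
    return noise
```

Because only the phases are randomized, the noise has the speech's spectral envelope but none of its temporal structure, which is what makes it a purely energetic masker in contrast to the babble conditions.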


1985 ◽  
Vol 28 (1) ◽  
pp. 96-103 ◽  
Author(s):  
Michael A. Picheny ◽  
Nathaniel I. Durlach ◽  
Louis D. Braida

This paper is concerned with variations in the intelligibility of speech produced for hearing-impaired listeners under two conditions. Estimates were made of the magnitude of the intelligibility differences between attempts to speak clearly and attempts to speak conversationally. Five listeners with sensorineural hearing losses were tested on groups of nonsense sentences spoken clearly and conversationally by three male talkers as a function of level and frequency-gain characteristic. Averaged across talkers, the intelligibility difference between clear and conversational speech was 17 percentage points. To a first approximation, this difference was independent of the listener, level, and frequency-gain characteristic. Analysis of segmental-level errors was possible for only two listeners and indicated that improvements in intelligibility occurred across all phoneme classes.


Author(s):  
Su Yeon Shin ◽  
Hongyeop Oh ◽  
In-Ki Jin

Background: Clear speech is an effective communication strategy to improve speech intelligibility. While clear speech in several languages has been shown to significantly benefit intelligibility among listeners with differential hearing sensitivities and across environments of different noise levels, whether these results apply to Korean clear speech is unclear on account of the language's unique acoustic and linguistic characteristics.

Purpose: This study aimed to measure the intelligibility benefits of Korean clear speech relative to those of conversational speech among listeners with normal hearing and hearing loss.

Research Design: We used a mixed-model design that included both within-subject (effects of speaking style and listening condition) and between-subject (hearing status) elements.

Data Collection and Analysis: We compared the rationalized arcsine unit scores, which were transformed from the number of keywords recognized and repeated, between clear and conversational speech in groups with different hearing sensitivities across five listening conditions (quiet and 10, 5, 0, and –5 dB signal-to-noise ratio) using a mixed-model analysis.

Results: The intelligibility scores of Korean clear speech were significantly higher than those of conversational speech under most listening conditions in all groups; the former yielded increases of 6 to 32 rationalized arcsine units in intelligibility.

Conclusion: The present study provides information on the actual benefits of Korean clear speech for listeners with varying hearing sensitivities. Audiologists or hearing professionals may use this information to establish communication strategies for Korean patients with hearing loss.
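The rationalized arcsine unit (RAU) scores reported above come from Studebaker's (1985) transform, which converts the number of keywords correct out of a total into a score that is closer to interval-scaled than raw percent correct. A minimal sketch, using the constants as they are commonly cited (this is the standard transform, not code from this study):

```python
import math

def rau(correct, total):
    """Rationalized arcsine transform (Studebaker, 1985).

    Maps `correct` out of `total` keywords to a score running roughly
    from -23 (0% correct) to +123 (100% correct); near 50% the RAU
    value is close to the raw percentage.
    """
    theta = (math.asin(math.sqrt(correct / (total + 1)))
             + math.asin(math.sqrt((correct + 1) / (total + 1))))  # radians
    return (146.0 / math.pi) * theta - 23.0
```

For example, 50 of 100 keywords correct yields a score near 50 RAU, while the transform stretches the scale near the floor and ceiling, which is why differences such as the 6 to 32 RAU benefits above are more comparable across conditions than percentage-point differences would be.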


2013 ◽  
Vol 56 (5) ◽  
pp. 1429-1440 ◽  
Author(s):  
Jennifer Lam ◽  
Kris Tjaden

Purpose: The authors investigated how clear speech instructions influence sentence intelligibility.

Method: Twelve speakers produced sentences in habitual, clear, hearing impaired, and overenunciate conditions. Stimuli were amplitude normalized and mixed with multitalker babble for orthographic transcription by 40 listeners. The main analysis investigated percentage-correct intelligibility scores as a function of the 4 conditions and speaker sex. Additional analyses included listener response variability, individual speaker trends, and an alternate intelligibility measure: proportion of content words correct.

Results: Relative to the habitual condition, the overenunciate condition was associated with the greatest intelligibility benefit, followed by the hearing impaired and clear conditions. Ten speakers followed this trend. The results indicated different patterns of clear speech benefit for male and female speakers. Greater listener variability was observed for speakers with inherently low habitual intelligibility compared to speakers with inherently high habitual intelligibility. Stable proportions of content words were observed across conditions.

Conclusions: Clear speech instructions affected the magnitude of the intelligibility benefit. The instruction to overenunciate may be most effective in clear speech training programs. The findings may help explain the range of clear speech intelligibility benefit previously reported. Listener variability analyses suggested the importance of obtaining multiple listener judgments of intelligibility, especially for speakers with inherently low habitual intelligibility.
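Mixing amplitude-normalized stimuli with babble at a fixed signal-to-noise ratio is typically done by matching RMS levels. The sketch below shows that general recipe; the function name is hypothetical and this is not the authors' stimulus-preparation code.

```python
import numpy as np

def mix_at_snr(speech, noise, snr_db):
    """Scale `noise` so the speech-to-noise ratio is `snr_db` (in dB),
    then add it to the speech.

    The noise is assumed to be at least as long as the speech and is
    truncated to the same length before mixing.
    """
    noise = noise[:len(speech)]
    rms_speech = np.sqrt(np.mean(speech**2))
    rms_noise = np.sqrt(np.mean(noise**2))
    # Noise RMS needed so that 20*log10(rms_speech / rms_noise) == snr_db.
    target_rms_noise = rms_speech / (10 ** (snr_db / 20))
    return speech + noise * (target_rms_noise / rms_noise)
```

At 0 dB SNR the scaled noise has the same RMS as the speech; each 10 dB decrease in SNR raises the noise RMS by a factor of about 3.16 relative to the speech.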


ORL ro ◽  
2016 ◽  
Vol 4 (1) ◽  
pp. 64-65
Author(s):  
Mădălina Georgescu ◽  
Violeta Necula ◽  
Sebastian Cozma

Hearing loss is a frequently encountered sensory handicap with a major and complex impact not only on the hearing-impaired person but also on their family and society. The large number of hard-of-hearing persons justifies the recognition of hearing loss as a public health issue, which obliges appropriate health policies to offer each hearing-impaired person health services comparable to those available in Europe. These can be achieved through: appropriate legislation for mandatory universal newborn hearing screening; a national program for follow-up of hearing-impaired children up to school age; a national register of hard-of-hearing persons; smooth access to rehabilitation methods; and an appropriate number of audiologists, trained through public programs of education in the field of audiology to deliver health services at European standards.


2021 ◽  
Vol 25 ◽  
pp. 233121652110144
Author(s):  
Ilja Reinten ◽  
Inge De Ronde-Brons ◽  
Rolph Houben ◽  
Wouter Dreschler

Single microphone noise reduction (NR) in hearing aids can provide a subjective benefit even when there is no objective improvement in speech intelligibility. A possible explanation lies in a reduction of listening effort. Previously, we showed that response times (a proxy for listening effort) to an auditory-only dual-task were reduced by NR in normal-hearing (NH) listeners. In this study, we investigate if the results from NH listeners extend to the hearing-impaired (HI), the target group for hearing aids. In addition, we assess the relevance of the outcome measure for studying and understanding listening effort. Twelve HI subjects were asked to sum two digits of a digit triplet in noise. We measured response times to this task, as well as subjective listening effort and speech intelligibility. Stimuli were presented at three signal-to-noise ratios (SNR; –5, 0, +5 dB) and in quiet. Stimuli were processed with ideal or nonideal NR, or unprocessed. The effect of NR on response times in HI listeners was significant only in conditions where speech intelligibility was also affected (–5 dB SNR). This is in contrast to the previous results with NH listeners. There was a significant effect of SNR on response times for HI listeners. The response time measure was reasonably correlated (R₁₄₂ = 0.54) to subjective listening effort and showed a sufficient test–retest reliability. This study thus presents an objective, valid, and reliable measure for evaluating an aspect of listening effort of HI listeners.


2021 ◽  
Vol 25 ◽  
pp. 233121652097802
Author(s):  
Emmanuel Ponsot ◽  
Léo Varnet ◽  
Nicolas Wallaert ◽  
Elza Daoud ◽  
Shihab A. Shamma ◽  
...  

Spectrotemporal modulations (STM) are essential features of speech signals that make them intelligible. While their encoding has been widely investigated in neurophysiology, we still lack a full understanding of how STMs are processed at the behavioral level and how cochlear hearing loss impacts this processing. Here, we introduce a novel methodological framework based on psychophysical reverse correlation deployed in the modulation space to characterize the mechanisms underlying STM detection in noise. We derive perceptual filters for young normal-hearing and older hearing-impaired individuals performing a detection task of an elementary target STM (a given product of temporal and spectral modulations) embedded in other masking STMs. Analyzed with computational tools, our data show that both groups rely on a comparable linear (band-pass)–nonlinear processing cascade, which can be well accounted for by a temporal modulation filter bank model combined with cross-correlation against the target representation. Our results also suggest that the modulation mistuning observed for the hearing-impaired group results primarily from broader cochlear filters. Yet, we find idiosyncratic behaviors that cannot be captured by cochlear tuning alone, highlighting the need to consider variability originating from additional mechanisms. Overall, this integrated experimental-computational approach offers a principled way to assess suprathreshold processing distortions in each individual and could thus be used to further investigate interindividual differences in speech intelligibility.
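The classification-image idea behind psychophysical reverse correlation can be illustrated with a simulated linear observer: averaging the random masker energy on "yes" trials and subtracting the average on "no" trials recovers, up to scale, the filter the observer applied. The toy sketch below uses a 16-channel modulation space and simulated responses; all names and parameters are illustrative assumptions, not the authors' analysis pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "true" perceptual filter over 16 modulation channels:
# the observer weights only channels 6-9.
true_filter = np.zeros(16)
true_filter[6:10] = 1.0

n_trials = 20_000
masker = rng.normal(size=(n_trials, 16))   # random masking STM energy per trial

# Simulated linear observer: weighted masker energy plus internal noise,
# answering "yes" whenever the decision variable exceeds its criterion.
decision = masker @ true_filter + rng.normal(scale=2.0, size=n_trials)
response = decision > 0

# Classification image: mean masker on "yes" trials minus mean on "no" trials.
perceptual_filter = masker[response].mean(axis=0) - masker[~response].mean(axis=0)
```

With enough trials, the classification image is largest in exactly the channels the simulated observer weighted, which is the sense in which reverse correlation "derives perceptual filters" from detection responses in noise.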

