Optimum Reverberation for Speech Intelligibility for Normal and Hearing-Impaired Listeners in Realistic Classrooms Using Auralization

2007 ◽  
Vol 14 (3) ◽  
pp. 163-177 ◽  
Author(s):  
Wonyoung Yang ◽  
Murray Hodgson

The objective of this study was to use auralization techniques to investigate the optimal reverberation for speech intelligibility for normal-hearing and hearing-impaired adult listeners in classrooms with non-diffuse sound fields. This extended a previous study involving rooms with diffuse sound fields to more realistic rooms. Modified Rhyme Test (MRT) signals were auralized in six virtual classroom configurations with different reverberation times. Each classroom contained a speech source, a listener at a receiver position, and a noise source located between the talker and the listener. Two speech- and noise-source output-level differences (0 and +4 dB) were tested. Subjects performed speech-intelligibility tests in the virtual classrooms to identify the reverberation time that gave the best results in each case. For both normal and hearing-impaired listeners, the optimal reverberation time was generally non-zero, and increased with decreased speech-to-noise level difference. Hearing-impaired subjects apparently required more early energy than normal-hearing subjects. The optimal reverberation time for speech intelligibility in classrooms is not necessarily zero, as is commonly believed. The optimal value is generally non-zero, and varies with the room, the locations of the speech and noise sources and the listener, and the noise level.

1976 ◽  
Vol 19 (2) ◽  
pp. 279-289 ◽  
Author(s):  
Randall B. Monsen

Although it is well known that the speech produced by the deaf is generally of low intelligibility, the sources of this low speech intelligibility have generally been ascribed either to aberrant articulation of phonemes or inappropriate prosody. This study was designed to determine to what extent a nonsegmental aspect of speech, formant transitions, may differ in the speech of the deaf and of the normal hearing. The initial second formant transitions of the vowels /i/ and /u/ after labial and alveolar consonants (/b, d, f/) were compared in the speech of six normal-hearing and six hearing-impaired adolescents. In the speech of the hearing-impaired subjects, the second formant transitions may be reduced both in time and in frequency. At its onset, the second formant may be nearer to its eventual target frequency than in the speech of the normal subjects. Since formant transitions are important acoustic cues for the adjacent consonants, reduced F2 transitions may be an important factor in the low intelligibility of the speech of the deaf.


Author(s):  
Paris Binos

Vocants are facially neutral precursors to speech. These speechlike vocalizations appear during the precursors to mature phonology known as "protophones". The prosodic feature of nucleus duration plays a crucial role in the shift from prelexical to mature speech, since speech intelligibility is closely related to the control of duration. The aim of this work is to determine whether cochlear implants (CIs) positively trigger language acquisition and the development of verbal skills. Recent findings from the literature are compared with the performance of two Greek congenitally hearing-impaired infants who were matched with three normal-hearing (NH) infants. This work highlighted an important weakness in the prosodic abilities of young infants with CIs.


2021 ◽  
Author(s):  
Marlies Gillis ◽  
Lien Decruy ◽  
Jonas Vanthornhout ◽  
Tom Francart

We investigated the impact of hearing loss on the neural processing of speech. Using a forward-modelling approach, we compared the neural responses to continuous speech of 14 adults with sensorineural hearing loss with those of age-matched normal-hearing peers.

Compared to their normal-hearing peers, hearing-impaired listeners had increased neural tracking and delayed neural responses to continuous speech in quiet. The latency also increased with the degree of hearing loss. As speech understanding decreased, neural tracking decreased in both populations; however, a significantly different trend was observed for the latency of the neural responses. For normal-hearing listeners, the latency increased with increasing background-noise level; for hearing-impaired listeners, this increase was not observed.

Our results support the view that the latency of the neural response indicates the efficiency of neural speech processing. Hearing-impaired listeners process speech in silence less efficiently than normal-hearing listeners. Our results suggest that this reduction in neural speech-processing efficiency is a gradual effect that occurs as hearing deteriorates. Moreover, the efficiency of neural speech processing in hearing-impaired listeners is already at its lowest level when listening to speech in quiet, whereas normal-hearing listeners show a further decrease in efficiency when the noise level increases.

From our results, it is apparent that sound amplification does not solve hearing loss: even when intelligibility is apparently perfect, hearing-impaired listeners process speech less efficiently.
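The forward-modelling approach mentioned in this abstract is commonly implemented as a regularized linear regression that predicts the recorded EEG from time-lagged copies of the speech envelope; the correlation between predicted and actual EEG then serves as the "neural tracking" score, and the lag at which the fitted weights peak gives a response latency. The sketch below illustrates the idea on synthetic single-channel data; the function names, lag range, ridge parameter, and simulated delay are illustrative assumptions, not the authors' pipeline (real analyses use multichannel EEG and cross-validated regularization).

```python
import numpy as np

def lagged_matrix(env, lags):
    """Stack time-lagged copies of the stimulus envelope as columns."""
    n = len(env)
    X = np.zeros((n, len(lags)))
    for j, lag in enumerate(lags):
        X[lag:, j] = env[:n - lag] if lag > 0 else env
    return X

def forward_model(env, eeg, lags, ridge=10.0):
    """Fit TRF weights mapping envelope lags -> EEG via ridge regression."""
    X = lagged_matrix(env, lags)
    w = np.linalg.solve(X.T @ X + ridge * np.eye(X.shape[1]), X.T @ eeg)
    pred = X @ w
    tracking = np.corrcoef(pred, eeg)[0, 1]  # "neural tracking" score
    return w, tracking

# Synthetic demo (illustrative only): the "EEG" is the envelope
# delayed by ~100 ms (13 samples at 128 Hz) plus noise.
rng = np.random.default_rng(0)
fs = 128
env = rng.standard_normal(fs * 60)           # 60 s of white-noise "envelope"
true_delay = 13
eeg = np.roll(env, true_delay) + 0.5 * rng.standard_normal(len(env))

lags = list(range(0, 32))                    # 0-242 ms at 128 Hz
w, tracking = forward_model(env, eeg, lags)
peak_latency_ms = 1000 * int(np.argmax(np.abs(w))) / fs
```

Run on this synthetic data, the fitted weights peak near the simulated 100 ms delay and the tracking correlation is well above chance, mirroring how latency and tracking are read out of such models.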


1988 ◽  
Vol 31 (2) ◽  
pp. 166-177 ◽  
Author(s):  
Alf Gabrielsson ◽  
Bo N. Schenkman ◽  
Björn Hagerman

Four speech programs and two music programs were reproduced using five different frequency responses: one flat and the others combinations of reductions at lower frequencies and/or increases at higher frequencies. Twelve hearing-impaired (HI) and eight normal-hearing (NH) subjects listened monaurally to the reproductions at comfortable listening level and judged the sound quality on seven perceptual scales and a scale for total impression. Speech intelligibility was measured for phonetically balanced (PB) word lists and for sentences in noise. Significant differences among the reproductions appeared in practically all scales. The most preferred system was characterized by a flat response at lower frequencies and a 6 dB/octave increase thereafter. There were certain differences between the NH and HI listeners in the judgments of the other systems. The intelligibility of PB word lists did not differ among the systems, and the S/N threshold for the sentences in noise only distinguished the flat response as worse than all others for the HI listeners. There was little correspondence between intelligibility measures and sound quality measures. The latter provided more information and distinctions among systems.

