The effect of perceived sound quality of speech in noisy speech perception by normal hearing and hearing impaired listeners

Author(s): Sara Akbarzadeh, Sungmin Lee, Fei Chen, Chin-Tuan Tan

2021, Vol 37 (1)
Author(s): Mona Abdel-Fattah Hegazi, Aya Mohammed Saad, Mona Sameeh Khodeir
Abstract
Background: Lipreading is considered an important skill that varies considerably among normal-hearing (NH) and hearing-impaired (HI) children. It is well known that NH children use audition as the primary sensory modality for speech perception, whereas HI children rely primarily on lipreading cues. Moreover, speech perception is a multisensory process that involves attention to auditory signals as well as visual articulatory movements, and the integration of auditory and visual signals occurs naturally and automatically in normal individuals of all ages. Most research has shown that lipreading is a natural and important skill needed for language acquisition in HI children; it also helps HI children perceive speech, acquire spoken language, and acquire phonology. Arabic-language tools for assessing the lipreading ability of HI children are lacking, so this study was conducted to develop a test suitable for assessing the lipreading ability of hearing-impaired children in Arabic-speaking countries. The constructed lipreading test was administered to 160 Arabic-speaking Egyptian children: 100 typically developing NH children and 60 HI children. Participants' responses were statistically analyzed to assess the test's validity and reliability and to compare lipreading ability between the NH and HI children. Percentile ranks were established to provide an estimate of lipreading ability in children.
Results: Statistically significant differences were found between the NH and HI children on all subtotal and total scores of the Arabic lipreading test, and the test showed good validity and reliability.
Conclusions: The Arabic lipreading test is a valid and reliable test that can be applied to assess the lipreading ability of Arabic-speaking children with HI.


2018, Vol 370, pp. 189-200
Author(s): Tine Goossens, Charlotte Vercammen, Jan Wouters, Astrid van Wieringen

1998, Vol 103 (5), pp. 3063-3063
Author(s): Carl C. Crandell, Gary W. Siebein, Martin A. Gold, Mary Jo Hasell, Philip Abbott, ...

1998, Vol 41 (5), pp. 1073-1087
Author(s): Aaron J. Parkinson, Wendy S. Parkinson, Richard S. Tyler, Mary W. Lowder, Bruce J. Gantz

Sixteen experienced cochlear implant patients with a wide range of speech perception abilities received the SPEAK processing strategy in the Nucleus Spectra-22 cochlear implant. Speech perception was assessed in quiet and in noise with SPEAK and with the patients' previous strategies (for most, Multipeak) at the study onset, as well as after using SPEAK for 6 months. Comparisons were made within and across the two test sessions to elucidate possible learning effects. Patients were also asked to rate the strategies on seven speech recognition and sound quality scales. After 6 months' experience with SPEAK, patients showed significantly improved mean performance on a range of speech recognition measures in quiet and in noise. When mean subjective ratings were compared over time, there were no significant differences between strategies; however, many individuals rated the SPEAK strategy better on two or more of the seven subjective measures. Ratings for "appreciation of music" and "quality of my own voice" in particular were generally higher for SPEAK. Improvements were realized by patients with a wide range of speech perception abilities, including those with little or no open-set speech recognition.


QJM, 2020, Vol 113 (Supplement_1)
Author(s): A M Saad, M A Hegazi, M S Khodeir

Abstract
Background: Lip-reading is considered an important skill that varies considerably among normal-hearing and hearing-impaired (HI) children. It helps HI children perceive speech, acquire spoken language, and acquire phonological awareness. Speech perception is a multisensory process that involves attention to auditory signals as well as visual articulatory movements, and the integration of auditory and visual signals occurs naturally and automatically in normal individuals across all ages. Much research suggests that normal-hearing children use audition as the primary sensory modality for speech perception, whereas HI children rely primarily on lip-reading cues.
Aim of the Work: To compare lip-reading ability between normal-hearing and HI children.
Participants and Methods: This comparative descriptive case-control study was applied to 60 hearing-impaired children (cases) and 60 normal-hearing children (controls) matched for age and gender, with an age range of 3-8 years. The Egyptian Arabic Lip-reading Test (EALRT) was administered to all children.
Results: There was a statistically significant difference between the total mean EALRT scores of the normal-hearing and HI children.
Conclusion: The results show that normal-hearing children are better lip-readers than HI children of the matched age range.


1984, Vol 27 (4), pp. 571-577
Author(s): Igor V. Nábělek

In an earlier experiment on the intelligibility of amplitude-compressed speech, subjects could not hear a difference between noncompressed and compressed speech under some conditions of compression. Therefore, compression conditions were determined in which the quality of the two types of speech could be distinguished. When the average speech level was 10 dB above a masking noise, the compression ratio (CR) was 2.5, and the attack time (Ta) was 3 ms, the release time (Tr) had to be shorter than 120 ms for trained normal-hearing subjects to achieve discrimination. With longer attack times and/or higher compression ratios, the critical release time increased, and thus the range in which discrimination was observed also increased (for CR = 5 and Ta = 10 ms, the critical Tr was 360 ms). The discrimination of our hearing-impaired subjects was much worse than that of the normal-hearing subjects; for example, speech processed with CR = 10, Ta = 1 ms, and Tr = 10 ms could be distinguished from noncompressed speech by only 50% of the impaired subjects.
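The parameters in this abstract (compression ratio, attack time Ta, release time Tr) describe a standard envelope-follower amplitude compressor. The following is a minimal Python sketch of that general technique, purely for illustration; the threshold value and the specific envelope-follower design are assumptions, not the processor used in the study:

```python
import math

def compress(samples, rate, ratio, attack_ms, release_ms, threshold=0.1):
    """Simple feed-forward amplitude compression.

    Levels above `threshold` are reduced by the compression ratio;
    the envelope follower's time constants model attack and release.
    """
    # One-pole smoothing coefficients derived from the time constants.
    atk = math.exp(-1.0 / (rate * attack_ms / 1000.0))
    rel = math.exp(-1.0 / (rate * release_ms / 1000.0))
    env = 0.0
    out = []
    for x in samples:
        level = abs(x)
        # Envelope rises with the attack constant, falls with release.
        coef = atk if level > env else rel
        env = coef * env + (1.0 - coef) * level
        if env > threshold:
            # Above threshold, output level grows 1/ratio as fast as input.
            gain = (threshold + (env - threshold) / ratio) / env
        else:
            gain = 1.0
        out.append(x * gain)
    return out
```

A shorter release time lets the gain recover faster after loud segments, which is why the discriminability of compressed speech in the study depended jointly on CR, Ta, and Tr.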


1990, Vol 33 (1), pp. 163-173
Author(s): Brian E. Walden, Allen A. Montgomery, Robert A. Prosek, David B. Hawkins

Intersensory biasing occurs when cues in one sensory modality influence the perception of discrepant cues in another modality. Visual biasing of auditory stop consonant perception was examined in two related experiments in an attempt to clarify the role of hearing impairment on susceptibility to visual biasing of auditory speech perception. Fourteen computer-generated acoustic approximations of consonant-vowel syllables forming a /ba-da-ga/ continuum were presented for labeling as one of the three exemplars, via audition alone and in synchrony with natural visual articulations of /ba/ and of /ga/. Labeling functions were generated for each test condition showing the percentage of /ba/, /da/, and /ga/ responses to each of the 14 synthetic syllables. The subjects of the first experiment were 15 normal-hearing and 15 hearing-impaired observers. The hearing-impaired subjects demonstrated a greater susceptibility to biasing from visual cues than did the normal-hearing subjects. In the second experiment, the auditory stimuli were presented in a low-level background noise to 15 normal-hearing observers. A comparison of their labeling responses with those from the first experiment suggested that hearing-impaired persons may develop a propensity to rely on visual cues as a result of long-term hearing impairment. The results are discussed in terms of theories of intersensory bias.

