Psychometric Functions of the One-Syllable Word Recognition with Monitored Live Voice versus Recorded Presentation for Hearing Impaired Adults

2007 ◽  
Vol 3 (2) ◽  
pp. 122-130
Author(s):  
Hye Jeong Baek ◽  
Junghak Lee
1974 ◽  
Vol 17 (2) ◽  
pp. 194-202 ◽  
Author(s):  
Norman P. Erber

A recorded list of 25 spondaic words was administered monaurally through earphones to 72 hearing-impaired children to evaluate their comprehension of “easy” speech material. The subjects ranged in age from eight to 16 years, and their average pure-tone thresholds (500-1000-2000 Hz) ranged in level from 52 to 127 dB (ANSI, 1969). Most spondee-recognition scores either were high (70 to 100% correct) or low (0 to 30% correct). The degree of overlap in thresholds between the high-scoring and the low-scoring groups differed as a function of the method used to describe the audiogram. The pure-tone average of 500-1000-2000 Hz was a good, but not perfect, predictor of spondee-recognition ability. In general, children with average pure-tone thresholds better than about 85 dB HTL (ANSI, 1969) scored high, and those with thresholds poorer than about 100 dB scored low. Spondee-recognition scores, however, could not be predicted with accuracy for children whose audiograms fell between 85 and 100 dB HTL.
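The prediction rule described above can be sketched as a small function: compute the three-frequency pure-tone average (PTA) and map it onto the reported score groups. This is a minimal illustration of the reported thresholds, not code from the study; the function and variable names are hypothetical.

```python
def pure_tone_average(t500, t1000, t2000):
    """Average of pure-tone thresholds (dB HTL) at 500, 1000, and 2000 Hz."""
    return (t500 + t1000 + t2000) / 3

def predicted_spondee_outcome(pta_db):
    """Predict the spondee-recognition group from PTA (dB HTL, ANSI 1969)."""
    if pta_db < 85:
        return "high"         # scores tended toward 70-100% correct
    elif pta_db > 100:
        return "low"          # scores tended toward 0-30% correct
    return "indeterminate"    # 85-100 dB: scores could not be predicted

print(predicted_spondee_outcome(pure_tone_average(70, 75, 80)))  # "high"
```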


2021 ◽  
Vol 2 ◽  
Author(s):  
Lasse Embøl ◽  
Carl Hutters ◽  
Andreas Junker ◽  
Daniel Reipur ◽  
Ali Adjorlu ◽  
...  

Cochlear implants (CIs) enable hearing in individuals with sensorineural hearing loss, albeit with difficulties in speech perception and sound localization. In noisy environments, these difficulties are disproportionately greater for CI users than for listeners with no reported hearing loss. Parents of children with CIs are motivated to experience what CIs sound like, but options to do so are limited. This study proposes using virtual reality to simulate having CIs in two contrasting school settings: a noisy playground and a quiet classroom. To investigate differences between hearing conditions, an evaluation used a between-subjects design with 15 parents (10 female, 5 male; age M = 38.5, SD = 6.6) of children with CIs; the parents themselves had no reported hearing loss. In the virtual environment, a word recognition and sound localization test using an open-set speech corpus compared simulated unilateral CI, simulated bilateral CI, and normal hearing conditions in both settings. Results of both tests indicate that noise influences word recognition more than it influences sound localization, but ultimately affects both. Furthermore, the simulated bilateral CI condition was equal or significantly superior to the simulated unilateral CI condition in both tests. A follow-up qualitative evaluation showed that the simulation enabled users to achieve a better understanding of what it means to be a hearing-impaired child.


Sign language is a visual language that uses body postures and facial expressions. It is generally used by hearing-impaired people as a source of communication. According to the World Health Organization (WHO), around 466 million people (5% of the world population) have hearing and speech impairments. Hearing people generally do not understand sign language, and hence there is a communication gap between hearing-impaired and other people. Phonemic scripts such as HamNoSys notation were developed to describe sign language using symbols. With developments in the field of artificial intelligence, we are now able to overcome these limitations of communication between people using different languages. A sign language translating system converts sign to text or speech, whereas a sign language generating system converts speech or text to sign language. Sign language generating systems were developed so that hearing people can use them to display signs to hearing-impaired people. This survey consists of a comparative study of the approaches and techniques used to generate sign language. We have discussed the general architecture and applications of sign language generating systems.


1980 ◽  
Vol 68 (S1) ◽  
pp. S5-S6
Author(s):  
Donald E. Morgan ◽  
Donald D. Dirks ◽  
Therese M. Velde

2000 ◽  
Vol 108 (5) ◽  
pp. 2602-2603
Author(s):  
Sumiko Takayanagi ◽  
Donald D. Dirks ◽  
Anahita Moshfegh ◽  
P. Douglas Noffsinger ◽  
Stephen A. Fausti

1990 ◽  
Vol 55 (3) ◽  
pp. 417-426 ◽  
Author(s):  
Randall C. Beattie ◽  
Judy A. Zipp

Characteristics of the range of intensities yielding PB Max and of the threshold for monosyllabic words (PBT) were investigated in 110 elderly subjects with mild-to-moderate sensorineural hearing loss. Word recognition functions were generated using the Auditec recordings of the CID W-22 words with 50 words per level. The results indicated that (a) the range of intensities yielding PB Max was approximately 33 dB at a level corresponding to 12% below PB Max, (b) the PB Max range decreased as the magnitude of hearing loss increased, (c) testing at the loudness discomfort level was likely to provide a more accurate estimate of PB Max than testing at the most comfortable listening level, (d) word recognition scores should be obtained at a minimum of two intensities in order to estimate PB Max, (e) the PBT in dB SL re the spondaic threshold increased as the steepness of the audiogram increased, and (f) the PBT should not be considered unusual unless it exceeds the predicted value by about 14 dB.
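The notion of a "range of intensities yielding PB Max" above can be illustrated with a short sketch: take the maximum word-recognition score across presentation levels as PB Max, then collect the levels whose scores fall within a fixed window (12% in the abstract) below it. The data and helper names here are illustrative, not from the original study.

```python
def pb_max_and_range(levels_db, scores_pct, window_pct=12):
    """Return (PB Max, levels whose scores fall within `window_pct` of PB Max)."""
    pb_max = max(scores_pct)
    cutoff = pb_max - window_pct
    in_range = [lvl for lvl, s in zip(levels_db, scores_pct) if s >= cutoff]
    return pb_max, in_range

# Illustrative word-recognition function (percent correct at each level):
levels = [40, 50, 60, 70, 80, 90]   # presentation level, dB HL
scores = [20, 56, 78, 88, 86, 80]   # percent correct

pb_max, plateau = pb_max_and_range(levels, scores)
print(pb_max, plateau)  # PB Max and the levels scoring within 12% of it
```

Testing at only one level risks landing outside this plateau, which is why the abstract recommends obtaining scores at a minimum of two intensities.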


1988 ◽  
Vol 31 (2) ◽  
pp. 265-271 ◽  
Author(s):  
Richard W. Harris ◽  
Robert H. Brey ◽  
Martin S. Robinette ◽  
Douglas M. Chabries ◽  
Richard W. Christiansen ◽  
...  

A two-microphone adaptive digital noise cancellation technique was used to improve the word-recognition ability of normally hearing and hearing-impaired subjects in the presence of varying amounts of multitalker speech babble noise and speech spectrum noise. Signal-to-noise ratios varied from −8 dB to +12 dB in 4 dB increments. The adaptive noise cancellation technique reduced both the speech babble and speech spectrum noises by 18 to 22 dB. This reduction in noise resulted in average improvements in word recognition, at the poorest signal-to-noise ratios, ranging from 37% to 50% for the normally hearing subjects and 27% to 40% for the hearing-impaired subjects. Improvements in word recognition in the presence of speech babble noise as a result of adaptive filtering were just as large or larger than improvements found in the presence of speech spectrum noise. The amount of improvement in word-recognition scores was most pronounced at the least favorable signal-to-noise ratios.
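A two-microphone canceller of this kind can be sketched with a standard LMS adaptive filter: the primary microphone carries speech plus noise, the reference microphone carries correlated noise alone, and the filter learns to subtract the noise component. This is a generic LMS illustration, not the filter used in the study; parameter values and signal lengths are illustrative.

```python
import math
import random

def lms_cancel(primary, reference, n_taps=8, mu=0.01):
    """Subtract an adaptively filtered copy of `reference` from `primary` (LMS)."""
    w = [0.0] * n_taps
    out = []
    for n in range(len(primary)):
        # Most recent n_taps reference samples (zero-padded at the start)
        x = [reference[n - k] if n - k >= 0 else 0.0 for k in range(n_taps)]
        y = sum(wi * xi for wi, xi in zip(w, x))            # noise estimate
        e = primary[n] - y                                  # error = cleaned sample
        w = [wi + 2 * mu * e * xi for wi, xi in zip(w, x)]  # LMS weight update
        out.append(e)
    return out

# Demo: a sinusoidal "speech" signal buried in correlated noise.
random.seed(0)
noise = [random.gauss(0, 1) for _ in range(4000)]
speech = [math.sin(2 * math.pi * 0.01 * n) for n in range(4000)]
primary = [s + 0.8 * x for s, x in zip(speech, noise)]
cleaned = lms_cancel(primary, noise)
# After adaptation, residual noise power falls well below the input noise power.
```

Because the reference noise is uncorrelated with the speech, the filter converges toward cancelling only the noise path, which is the mechanism behind the 18 to 22 dB reductions reported above.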

