Information Processing of Visually Presented Picture and Word Stimuli by Young Hearing-Impaired and Normal-Hearing Children

1976 ◽  
Vol 19 (4) ◽  
pp. 628-638 ◽  
Author(s):  
Ronald R. Kelly ◽  
C. Tomlinson-Keasey

Eleven hearing-impaired children and 11 normal-hearing children (mean age = 4 years 11 months) were visually presented familiar items in either picture or word form. Subjects were asked to recognize the stimuli they had seen from cue cards consisting of pictures or words. They were then asked to recall the sequence of stimuli by arranging the cue cards selected. The hearing-impaired group and normal-hearing subjects performed differently with the picture/picture (P/P) and word/word (W/W) modes in the recognition phase. The hearing-impaired subjects performed equally well with both modes (P/P and W/W), while the normal-hearing subjects did significantly better on the P/P mode. Furthermore, the normal-hearing group showed no difference in processing like modes (P/P and W/W) when compared to unlike modes (W/P and P/W). In contrast, the hearing-impaired subjects did better on like modes. The results were interpreted, in part, as supporting the position that young normal-hearing children dual code their visual information better than hearing-impaired children.

2002 ◽  
Vol 45 (5) ◽  
pp. 1027-1038 ◽  
Author(s):  
Rosalie M. Uchanski ◽  
Ann E. Geers ◽  
Athanassios Protopapas

Exposure to modified speech has been shown to benefit children with language-learning impairments with respect to their language skills (M. M. Merzenich et al., 1998; P. Tallal et al., 1996). In the study by Tallal and colleagues, the speech modification consisted of both slowing down and amplifying fast, transitional elements of speech. In this study, we examined whether the benefits of modified speech could be extended to provide intelligibility improvements for children with severe-to-profound hearing impairment who wear sensory aids. In addition, the separate effects on intelligibility of slowing down and amplifying speech were evaluated. Two groups of listeners were employed: 8 severe-to-profoundly hearing-impaired children and 5 children with normal hearing. Four speech-processing conditions were tested: (1) natural, unprocessed speech; (2) envelope-amplified speech; (3) slowed speech; and (4) both slowed and envelope-amplified speech. For each condition, three types of speech materials were used: words in sentences, isolated words, and syllable contrasts. To degrade the performance of the normal-hearing children, all testing was completed with a noise background. Results from the hearing-impaired children showed that all varieties of modified speech yielded either equivalent or poorer intelligibility than unprocessed speech. For words in sentences and isolated words, the slowing-down of speech had no effect on intelligibility scores, whereas envelope amplification, both alone and combined with slowing-down, yielded significantly lower scores. Intelligibility results from normal-hearing children listening in noise were somewhat similar to those from hearing-impaired children. For isolated words, the slowing-down of speech had no effect on intelligibility, whereas envelope amplification degraded intelligibility. For both subject groups, speech processing had no statistically significant effect on syllable discrimination.
In summary, without extensive exposure to the speech processing conditions, children with impaired hearing and children with normal hearing listening in noise received no intelligibility advantage from either slowed speech or envelope-amplified speech.


Author(s):  
Elina Nirgianaki ◽  
Maria Bitzanaki

The present study investigates the acoustic characteristics of Greek vowels produced by hearing-impaired children with profound prelingual hearing loss and cochlear implants. The results revealed a significant difference in duration between vowels produced by hearing-impaired children and those produced by normal-hearing children. Stressed vowels were significantly longer than non-stressed vowels for both groups, while F0, F1, and F2 did not differ significantly between the two groups for any vowel, with the exception of /a/, which had a significantly higher F1 when produced by hearing-impaired children. Acoustic vowel spaces were similar for the two groups but shifted towards higher frequencies in the low-high dimension and somewhat reduced in the front-back dimension for the hearing-impaired group.
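Comparisons of acoustic vowel spaces like the one above are often quantified as an area in the F1–F2 plane. As an illustrative sketch only (the corner vowels and formant values below are hypothetical, not data from this study), the area spanned by mean formant points can be computed with the shoelace formula:

```python
def vowel_space_area(formants):
    """Area of the polygon spanned by (F2, F1) vowel points, via the shoelace formula.

    `formants` is a list of (F2, F1) pairs in Hz, given in polygon order.
    """
    n = len(formants)
    area = 0.0
    for i in range(n):
        x1, y1 = formants[i]
        x2, y2 = formants[(i + 1) % n]  # wrap around to close the polygon
        area += x1 * y2 - x2 * y1
    return abs(area) / 2.0

# Hypothetical mean (F2, F1) values in Hz for corner vowels /i, a, u/
corners = [(2200.0, 300.0), (1300.0, 800.0), (800.0, 350.0)]
area_hz2 = vowel_space_area(corners)  # area in Hz^2
```

A smaller area for one group would correspond to the "reduced" vowel space the abstract describes; a uniform shift of all points changes the space's location but not its area.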


QJM ◽  
2020 ◽  
Vol 113 (Supplement_1) ◽  
Author(s):  
A M Saad ◽  
M A Hegazi ◽  
M S Khodeir

Abstract Background Lip-reading is considered an important skill that varies considerably between normal-hearing and hearing-impaired (HI) children. It helps HI children to perceive speech, acquire spoken language, and acquire phonological awareness. Speech perception is considered a multisensory process that involves attention to auditory signals as well as visual articulatory movements. Integration of auditory and visual signals occurs naturally and automatically in normal individuals across all ages. Many studies have suggested that normal-hearing children use audition as the primary sensory modality for speech perception, whereas HI children use lip-reading cues as the primary sensory modality. Aim of the Work The aim of this study is to compare lip-reading ability between normal-hearing and HI children. Participants and methods This is a comparative descriptive case-control study. It was applied to 60 hearing-impaired children (cases) and 60 normal-hearing children (controls) of the same age and gender, with an age range of 3–8 years. The Egyptian Arabic Lip-reading Test (EALRT) was applied to all children. Results There was a statistically significant difference between the total mean EALRT scores of normal-hearing and HI children. Conclusion The results of the study showed that normal-hearing children are better lip-readers than HI children of the matched age range.


2000 ◽  
Vol 43 (4) ◽  
pp. 902-914 ◽  
Author(s):  
Patricia G. Stelmachowicz ◽  
Brenda M. Hoover ◽  
Dawna E. Lewis ◽  
Reinier W. L. Kortekaas ◽  
Andrea L. Pittman

In this study, the influence of stimulus context and audibility on sentence recognition was assessed in 60 normal-hearing children, 23 hearing-impaired children, and 20 normal-hearing adults. Performance-intensity (PI) functions were obtained for 60 semantically correct and 60 semantically anomalous sentences. For each participant, an audibility index (AI) was calculated at each presentation level, and a logistic function was fitted to rau-transformed percent-correct values to estimate the SPL and AI required to achieve 70% performance. For both types of sentences, there was a systematic age-related shift in the PI functions, suggesting that young children require a higher AI to achieve performance equivalent to that of adults. Improvement in performance with the addition of semantic context was statistically significant only for the normal-hearing 5-year-olds and adults. Data from the hearing-impaired children showed age-related trends that were similar to those of the normal-hearing children, with the majority of individual data falling within the 5th and 95th percentile of normal. The implications of these findings in terms of hearing-aid fitting strategies for young children are discussed.
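The analysis described above (a logistic function fitted to rau-transformed percent-correct values, then inverted to find the level for 70% performance) can be sketched minimally as follows. This assumes Studebaker's (1985) rationalized arcsine transform and a generic logistic psychometric function; the parameter values are hypothetical, not the ones fitted in the study:

```python
import math

def rau(correct, n):
    """Rationalized arcsine units (Studebaker, 1985) for `correct` of `n` items."""
    theta = (math.asin(math.sqrt(correct / (n + 1)))
             + math.asin(math.sqrt((correct + 1) / (n + 1))))
    return (146.0 / math.pi) * theta - 23.0

def logistic(level, midpoint, slope, floor=0.0, ceiling=100.0):
    """Logistic performance-intensity function, in percent correct."""
    return floor + (ceiling - floor) / (1.0 + math.exp(-slope * (level - midpoint)))

def level_for(target, midpoint, slope, floor=0.0, ceiling=100.0):
    """Invert the logistic analytically: level at which `target` percent is reached."""
    p = (target - floor) / (ceiling - floor)
    return midpoint - math.log(1.0 / p - 1.0) / slope

# Hypothetical fitted parameters: midpoint 55 dB SPL, slope 0.2 per dB
spl_70 = level_for(70.0, midpoint=55.0, slope=0.2)
```

In practice the midpoint and slope would be estimated from each participant's rau-transformed scores across presentation levels; the same inversion applies to an audibility-index axis to obtain the AI needed for 70% performance.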


1978 ◽  
Vol 21 (2) ◽  
pp. 372-386 ◽  
Author(s):  
Kathleen E. Crandall

Spontaneous sign-language samples were collected in a controlled interactive situation from 20 young hearing-impaired children and their mothers. Inflectional morphemes in the samples were described by cheremic attributes and classified for syntactic function within utterances. Inflectional morpheme productivity did not increase significantly with age; mean manual English morphemes per utterance did increase with age. The first six inflectional morphemes used by the children studied were the same as those used by normal-hearing children. A good predictor of the child’s use of inflectional morphemes was the mother’s use of these morphemes.


2016 ◽  
Vol 30 (3) ◽  
pp. 340-344 ◽  
Author(s):  
Narges Jafari ◽  
Michael Drinnan ◽  
Reyhane Mohamadi ◽  
Fariba Yadegari ◽  
Mandana Nourbakhsh ◽  
...  

1972 ◽  
Vol 15 (2) ◽  
pp. 413-422 ◽  
Author(s):  
Norman P. Erber

The consonants /b, d, g, k, m, n, p, t/ were presented to normal-hearing, severely hearing-impaired, and profoundly deaf children through auditory, visual, and combined auditory-visual modalities. Through lipreading alone, all three groups were able to discriminate between the places of articulation (bilabial, alveolar, velar) but not within each place category. When they received acoustic information only, normal-hearing children recognized the consonants nearly perfectly, and severely hearing-impaired children distinguished accurately between voiceless plosives, voiced plosives, and nasal consonants. However, the scores of the profoundly deaf group were low, and they perceived even voicing and nasality cues unreliably. Although both the normal-hearing and the severely hearing-impaired groups achieved nearly perfect recognition scores through simultaneous auditory-visual reception, the performance of the profoundly deaf children was only slightly better than that which they demonstrated through lipreading alone.


1971 ◽  
Vol 14 (4) ◽  
pp. 793-803 ◽  
Author(s):  
Agnes H. Ling

Ear asymmetry for dichotic digits was used in an attempt to estimate speech laterality in 19 children with impaired hearing and 19 with normal hearing. Sequences of digits were also presented monaurally. The normal-hearing group was significantly superior to the hearing-impaired in the recall of both monaural and dichotic digits. No ear advantage was observed for either group on the monaural test. Right-ear dichotic scores were significantly superior for the normal-hearing group, but intersubject variability resulted in a nonsignificant right-ear trend for the hearing-impaired group, with individuals showing marked right- or left-ear advantage. No correlation was found between degree of ear asymmetry on the dichotic test and vocabulary scores for hearing-impaired subjects. Both members of a dichotic pair were rarely reported by hearing-impaired subjects, with one digit apparently masking or suppressing the other. It was concluded that speech lateralization could not safely be inferred from dichotic digit scores of hearing-impaired children.
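Ear asymmetry in dichotic-listening studies is commonly summarized with a laterality index. The formulation below is one widely used convention and is an assumption for illustration; the abstract does not state which index, if any, was computed:

```python
def laterality_index(right_correct, left_correct):
    """Percentage laterality index for dichotic scores.

    Positive values indicate a right-ear advantage, negative a left-ear
    advantage. This (R - L) / (R + L) form is a common convention, assumed
    here for illustration.
    """
    total = right_correct + left_correct
    if total == 0:
        return 0.0  # no correct reports in either ear: no measurable asymmetry
    return 100.0 * (right_correct - left_correct) / total

# e.g. 30 right-ear vs. 20 left-ear correct reports -> right-ear advantage
advantage = laterality_index(30, 20)
```

The abstract's caution applies here as well: high intersubject variability means a group-mean index can be nonsignificant even when individual children show marked asymmetry in either direction.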

