All Cues Are Not Created Equal: The Case for Facilitating the Acquisition of Typical Weighting Strategies in Children With Hearing Loss

2015 · Vol 58 (2) · pp. 466-480
Author(s): Joanna H. Lowenstein, Susan Nittrouer

Purpose: One task of childhood involves learning to optimally weight acoustic cues in the speech signal in order to recover phonemic categories. This study examined the extent to which spectral degradation, as associated with cochlear implants, might interfere. The three goals were to measure, for adults and children, (a) cue weighting with spectrally degraded signals, (b) sensitivity to degraded cues, and (c) word recognition for degraded signals.
Method: Twenty-three adults and 36 children (10 and 8 years old) labeled spectrally degraded stimuli from /bɑ/-to-/wɑ/ continua varying in formant rise time (FRT) and amplitude rise time (ART). They also discriminated degraded stimuli from FRT and ART continua and recognized words.
Results: A developmental increase in the weight assigned to FRT in labeling was clearly observed, along with a slight decrease in the weight assigned to ART. Sensitivity to these degraded cues, as measured by the discrimination task, could not explain variability in cue weighting. FRT cue weighting explained significant variability in word recognition; ART cue weighting did not.
Conclusion: Spectral degradation affects children more than adults, but that degradation cannot explain the greater reduction in children's weighting of FRT. It is suggested that auditory training could strengthen the weighting of spectral cues for implant recipients.
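Cue weighting in labeling tasks of this kind is often estimated by regressing listeners' binary responses on the two cue values and comparing standardized coefficients. The sketch below is a hypothetical illustration of that general approach, not the authors' actual analysis; the function name and the use of plain gradient ascent are assumptions.

```python
import numpy as np

def cue_weights(frt, art, resp, lr=0.1, steps=5000):
    """Fit a logistic model resp ~ FRT + ART by gradient ascent on the
    log-likelihood; the standardized coefficients index cue weights."""
    X = np.column_stack([
        np.ones_like(frt),                      # intercept
        (frt - frt.mean()) / frt.std(),         # standardized FRT cue
        (art - art.mean()) / art.std(),         # standardized ART cue
    ])
    w = np.zeros(3)
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-X @ w))        # predicted P("wa" response)
        w += lr * X.T @ (resp - p) / len(resp)  # average-gradient step
    return {"FRT": w[1], "ART": w[2]}
```

A listener who relies mainly on FRT would show a much larger FRT than ART coefficient; comparing such coefficients across age groups is one way to quantify the developmental shift described above.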

2015 · Vol 58 (3) · pp. 1077-1092
Author(s): Susan Nittrouer, Joanna H. Lowenstein

Purpose: Children must develop optimal perceptual weighting strategies for processing speech in their first language. Hearing loss can interfere with that development, especially if cochlear implants are required. The three goals of this study were to measure, for children with and without hearing loss, (a) cue weighting for a manner distinction, (b) sensitivity to those cues, and (c) real-world communication functions.
Method: One hundred and seven children (43 with normal hearing [NH], 17 with hearing aids [HAs], and 47 with cochlear implants [CIs]) performed several tasks: labeling of stimuli from /bɑ/-to-/wɑ/ continua varying in formant rise time (FRT) and amplitude rise time (ART), discrimination of ART, word recognition, and phonemic awareness.
Results: Children with hearing loss were less attentive overall to acoustic structure than children with NH. Children with CIs, but not those with HAs, weighted FRT less and ART more than children with NH. Sensitivity could not explain cue weighting. FRT cue weighting explained significant amounts of variability in word recognition and phonemic awareness; ART cue weighting did not.
Conclusion: Signal degradation inhibits access to spectral structure for children with CIs, but cannot explain their delayed development of optimal weighting strategies. Auditory training could strengthen the weighting of spectral cues for children with CIs, thus aiding spoken language acquisition.


Author(s): Laurence Bruggeman, Julien Millasseau, Ivan Yuen, Katherine Demuth

Purpose: Children with hearing loss (HL), including those with hearing aids (HAs) and cochlear implants (CIs), often have difficulties contrasting words like "beach" versus "peach" and "dog" versus "dock" due to challenges producing systematic voicing contrasts. Even when acoustic contrasts are present, these may not be perceived as such by others. This can cause miscommunication, leading to poor self-esteem and social isolation. Acoustic evidence is therefore needed to determine whether these children have established distinct voicing categories before entering school, and whether misperceptions are due to a lack of phonological representations or to a still-maturing implementation system. The findings should help inform more effective early intervention.
Method: Participants included 14 children with HL (eight HA users, five CI users, and one bimodal user) and 20 with normal hearing, all English-speaking preschoolers. In an elicited imitation task, they produced consonant–vowel–consonant minimal pair words that contrasted voicing in word-initial (onset) or word-final (coda) position at all three places of articulation (PoAs).
Results: Overall, children with HL showed acoustically distinct voicing categories for both onsets and codas at all three PoAs. Contrasts were less systematic for codas than for onsets, as also confirmed by adults' perceptual ratings.
Conclusions: Preschoolers with HL produce acoustic differences for voiced versus voiceless onsets and codas, indicating distinct phonological representations for both. Nonetheless, codas were less accurately perceived by adult raters, especially when produced by CI users. This suggests protracted development of the phonetic implementation of codas, for which CI users in particular may benefit from targeted intervention.


2012 · Vol 23 (3) · pp. 206-221
Author(s): Ann M. Rothpletz, Frederic L. Wightman, Doris J. Kistler

Background: Self-monitoring has been shown to be an essential skill for various aspects of our lives, including our health, education, and interpersonal relationships. Likewise, the ability to monitor one's speech reception in noisy environments may be a fundamental skill for communication, particularly for those who are often confronted with challenging listening environments, such as students and children with hearing loss.
Purpose: The purpose of this project was to determine whether normal-hearing children, normal-hearing adults, and children with cochlear implants can monitor their listening ability in noise and recognize when they are not able to perceive spoken messages.
Research Design: Participants were administered an Objective-Subjective listening task in which their subjective judgments of their ability to understand sentences from the Coordinate Response Measure corpus, presented in speech-spectrum noise, were compared to their objective performance on the same task.
Study Sample: Participants included 41 normal-hearing children, 35 normal-hearing adults, and 10 children with cochlear implants.
Data Collection and Analysis: On the Objective-Subjective listening task, the level of the masker noise remained constant at 63 dB SPL, while the level of the target sentences varied over a 12 dB range in a block of trials. Psychometric functions, relating proportion correct (Objective condition) and proportion perceived as intelligible (Subjective condition) to target/masker ratio (T/M), were estimated for each participant. Thresholds were defined as the T/M required to produce 51% correct (Objective condition) and 51% perceived as intelligible (Subjective condition). Discrepancy scores between listeners' threshold estimates in the Objective and Subjective conditions served as an index of self-monitoring ability. In addition, the normal-hearing children were administered tests of cognitive skills and academic achievement, and results from these measures were compared to findings on the Objective-Subjective listening task.
Results: Nearly half of the children with normal hearing significantly overestimated their listening-in-noise ability on the Objective-Subjective listening task, compared to less than 9% of the adults. There was a significant correlation between age and results on the Objective-Subjective task, indicating that the younger children in the sample (ages 7–12 years) tended to overestimate their listening ability more than the adolescents and adults did. Among the children with cochlear implants, eight of the 10 participants significantly overestimated their listening ability (compared to 13 of the 24 normal-hearing children in the same age range). We did not find a significant relationship between results on the Objective-Subjective listening task and performance on the given measures of academic achievement or intelligence.
Conclusions: Findings from this study suggest that many children with normal hearing and children with cochlear implants often fail to recognize when they encounter conditions in which their listening ability is compromised. These results may have practical implications for classroom learning, particularly for children with hearing loss in mainstream settings.
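The threshold-and-discrepancy analysis described above can be sketched as follows. This is a hypothetical illustration, assuming a two-parameter logistic psychometric function fit by grid search; the study's actual fitting procedure may have differed, and the function names are invented.

```python
import numpy as np

def fit_logistic(tm, prop,
                 slopes=np.linspace(0.1, 3.0, 30),
                 mids=np.linspace(-12.0, 12.0, 241)):
    """Least-squares grid search for p = 1 / (1 + exp(-k * (x - m)));
    returns the best-fitting midpoint m and slope k."""
    best_m, best_k, best_err = 0.0, 1.0, np.inf
    for k in slopes:
        for m in mids:
            pred = 1.0 / (1.0 + np.exp(-k * (tm - m)))
            err = np.sum((prop - pred) ** 2)
            if err < best_err:
                best_m, best_k, best_err = m, k, err
    return best_m, best_k

def threshold(m, k, target=0.51):
    """Invert the logistic to find the T/M giving the target proportion."""
    return m + np.log(target / (1.0 - target)) / k

def discrepancy(tm, p_objective, p_subjective):
    """Objective minus subjective threshold. Positive values mean the
    listener judges sentences intelligible at T/M levels below what they
    can actually understand, i.e., they overestimate their ability."""
    mo, ko = fit_logistic(tm, p_objective)
    ms, ks = fit_logistic(tm, p_subjective)
    return threshold(mo, ko) - threshold(ms, ks)
```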


Author(s): Paul D. Hatzigiannakoglou, Areti Okalidou

It is known that the development of auditory skills in children with hearing loss who use assistive listening devices requires training and practice. The aims of this research were (a) to describe auditory training software developed to help children with cochlear implants and/or hearing aids improve their auditory skills and monitor their performance, and (b) to demonstrate the usability of the auditory training tool. The software is mobile-based and uses virtual reality (VR) and immersive technology; to use it, the user must wear a VR headset. This technology was adopted because such devices are considered innovative and are especially popular among children. The software was tested on fourteen children with hearing loss, eleven of whom use a cochlear implant and three of whom use hearing aids. The results show that the children with hearing loss were able to play the game successfully. This positive outcome supports the use of VR and immersive technology in auditory training tools.


1991 · Vol 34 (3) · pp. 671-678
Author(s): Joan E. Sussman

This investigation examined the response strategies and discrimination accuracy of adults and children aged 5–10 years as the ratio of same to different trials was varied across three conditions of a "change/no-change" discrimination task. The conditions were as follows: (a) a ratio of one-third same to two-thirds different trials (33% same), (b) an equal ratio of same to different trials (50% same), and (c) a ratio of two-thirds same to one-third different trials (67% same). Stimuli were synthetic consonant–vowel syllables that changed along a place-of-articulation dimension by formant frequency transition. Results showed that all subjects changed their response strategies depending on the ratio of same-to-different trials. The most lax response pattern was observed for the 50% same condition, and the most conservative pattern was observed for the 67% same condition. Adult response patterns were the most conservative across conditions. Differences in discrimination accuracy as measured by P(C) were found, with the largest difference in the 5- to 6-year-old group and the smallest change in the adult group. These findings suggest that children's response strategies, like those of adults, can be manipulated by changing the ratio of same-to-different trials. Furthermore, interpretation of sensitivity measures must be referenced to task variables such as the ratio of same-to-different trials.
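A standard way to separate true sensitivity from the response-strategy (criterion) shifts described above is signal detection theory. The sketch below is a generic illustration of that framework, not the analysis used in the study; here a "hit" is responding "change" on a different trial and a "false alarm" is responding "change" on a same trial.

```python
from statistics import NormalDist

def sdt_measures(hits, n_different, false_alarms, n_same):
    """Return (d', criterion c) for a change/no-change task.
    Rates are clamped away from 0 and 1 so the z-transform stays finite."""
    nd = NormalDist()
    h = min(max(hits / n_different, 0.5 / n_different),
            1 - 0.5 / n_different)
    f = min(max(false_alarms / n_same, 0.5 / n_same),
            1 - 0.5 / n_same)
    z_h, z_f = nd.inv_cdf(h), nd.inv_cdf(f)
    d_prime = z_h - z_f                 # sensitivity, independent of bias
    criterion = -0.5 * (z_h + z_f)      # > 0: conservative ("no change" bias)
    return d_prime, criterion
```

Two listeners with identical d' can differ sharply in criterion, which is why a bias-prone summary such as P(C) must be interpreted relative to the same-to-different trial ratio.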


Author(s): Yung-Ting Tsou, Boya Li, Carin H. Wiefferink, Johan H. M. Frijns, Carolien Rieffe

Empathy enables people to share, understand, and show concern for others' emotions. However, this capacity may be more difficult to acquire for children with hearing loss due to limited social access, and the effect of hearing on empathic maturation has been unexplored. This four-wave longitudinal study investigated the development of empathy in children with and without hearing loss, and how this development is associated with early symptoms of psychopathology. Seventy-one children with hearing loss and cochlear implants (CI) and 272 typically hearing (TH) children participated (aged 1–5 years at Time 1). Parents rated their children's empathic skills (affective empathy, attention to others' emotions, prosocial actions, and emotion acknowledgment) and psychopathological symptoms (internalizing and externalizing behaviors). Children with CI and TH children were rated similarly on most of the empathic skills; yet fewer prosocial actions were reported in children with CI than in TH children. In both groups, affective empathy decreased with age, while prosocial actions and emotion acknowledgment increased with age and stabilized when children entered primary school. Attention to emotions increased with age in children with CI yet remained stable in TH children. Moreover, higher levels of affective empathy, lower levels of emotion acknowledgment, and a larger increase in attention to emotions over time were associated with more psychopathological symptoms in both groups. These findings highlight the importance of social access, from which children with CI can learn to process others' emotions more adaptively. Notably, interventions for psychopathology that tackle empathic responses may be beneficial for both groups alike.


2021 · pp. 1-19
Author(s): Julien Millasseau, Ivan Yuen, Laurence Bruggeman, Katherine Demuth

While voicing contrasts in word-onset position are acquired relatively early, much less is known about how and when they are acquired in word-coda position, where accurate production of these contrasts is also critical for distinguishing words (e.g., "dog" vs. "dock"). This study examined how the acoustic cues to coda voicing contrasts are realized in the speech of 4-year-old Australian English-speaking children. The results showed that children used acoustic cues similar to those of adults, including longer vowel duration and a more frequent voice bar for voiced stops, and longer closure and burst durations along with more frequent irregular pitch periods for voiceless stops. This suggests that 4-year-olds have acquired productive use of the acoustic cues to coda voicing contrasts, though their implementations are not yet fully adult-like. The findings have implications for understanding the development of phonological contrasts in populations for whom these may be challenging, such as children with hearing loss.


2016 · Vol 37 (1) · pp. 14-26
Author(s): Aaron C. Moberly, Joanna H. Lowenstein, Susan Nittrouer

1992 · Vol 35 (1) · pp. 192-200
Author(s): Michele L. Steffens, Rebecca E. Eilers, Karen Gross-Glenn, Bonnie Jallad

Speech perception was investigated in a carefully selected group of adult subjects with familial dyslexia. Perception of three synthetic speech continua was studied: /a/-//, in which steady-state spectral cues distinguished the vowel stimuli; /ba/-/da/, in which rapidly changing spectral cues were varied; and /sta/-/sa/, in which a temporal cue, silence duration, was systematically varied. These three continua, which differed with respect to the nature of the acoustic cues discriminating between the members of each pair, were used to assess subjects' abilities to use steady-state, dynamic, and temporal cues. Dyslexic and normal readers participated in one identification task and two discrimination tasks for each continuum. Results suggest that dyslexic readers required a greater silence duration than normal readers to shift their perception from /sa/ to /sta/. In addition, although the dyslexic subjects were able to label and discriminate the synthetic speech continua, they did not necessarily use the acoustic cues in the same manner as normal readers, and their overall performance was generally less accurate.

