Weighting of Acoustic Cues to a Manner Distinction by Children With and Without Hearing Loss

2015 ◽  
Vol 58 (3) ◽  
pp. 1077-1092 ◽  
Author(s):  
Susan Nittrouer ◽  
Joanna H. Lowenstein

Purpose Children must develop optimal perceptual weighting strategies for processing speech in their first language. Hearing loss can interfere with that development, especially if cochlear implants are required. The three goals of this study were to measure, for children with and without hearing loss: (a) cue weighting for a manner distinction, (b) sensitivity to those cues, and (c) real-world communication functions. Method One hundred and seven children (43 with normal hearing [NH], 17 with hearing aids [HAs], and 47 with cochlear implants [CIs]) performed several tasks: labeling of stimuli from /bɑ/-to-/wɑ/ continua varying in formant and amplitude rise time (FRT and ART), discrimination of ART, word recognition, and phonemic awareness. Results Children with hearing loss were less attentive overall to acoustic structure than children with NH. Children with CIs, but not those with HAs, weighted FRT less and ART more than children with NH. Sensitivity could not explain cue weighting. FRT cue weighting explained significant amounts of variability in word recognition and phonemic awareness; ART cue weighting did not. Conclusion Signal degradation inhibits access to spectral structure for children with CIs, but cannot explain their delayed development of optimal weighting strategies. Auditory training could strengthen the weighting of spectral cues for children with CIs, thus aiding spoken language acquisition.

2015 ◽  
Vol 58 (2) ◽  
pp. 466-480 ◽  
Author(s):  
Joanna H. Lowenstein ◽  
Susan Nittrouer

Purpose One task of childhood involves learning to optimally weight acoustic cues in the speech signal in order to recover phonemic categories. This study examined the extent to which spectral degradation, as associated with cochlear implants, might interfere. The 3 goals were to measure, for adults and children, (a) cue weighting with spectrally degraded signals, (b) sensitivity to degraded cues, and (c) word recognition for degraded signals. Method Twenty-three adults and 36 children (10 and 8 years old) labeled spectrally degraded stimuli from /bɑ/-to-/wɑ/ continua varying in formant and amplitude rise time (FRT and ART). They also discriminated degraded stimuli from FRT and ART continua, and recognized words. Results A developmental increase in the weight assigned to FRT in labeling was clearly observed, with a slight decrease in weight assigned to ART. Sensitivity to these degraded cues measured by the discrimination task could not explain variability in cue weighting. FRT cue weighting explained significant variability in word recognition; ART cue weighting did not. Conclusion Spectral degradation affects children more than adults, but that degradation cannot explain the greater diminishment in children's weighting of FRT. It is suggested that auditory training could strengthen the weighting of spectral cues for implant recipients.


Author(s):  
Paul D Hatzigiannakoglou ◽  
Areti Okalidou

It is known that the development of auditory skills in children with hearing loss who use assistive listening devices requires training and practice. The aims of this research were (a) to describe auditory training software developed to help children with cochlear implants and/or hearing aids improve their auditory skills and monitor their performance, and (b) to demonstrate the usability of the auditory training tool. The software is mobile-based and uses virtual reality (VR) and immersive technology; to use it, the user must wear a VR headset. This technology was adopted because such devices are considered innovative and are especially popular among children. The software was tested on fourteen hearing-impaired children: eleven used a cochlear implant and three used hearing aids. The results of this research show that the children with hearing loss were able to play the game successfully. This positive outcome supports the use of VR and immersive technology as auditory training tools.


Author(s):  
Laurence Bruggeman ◽  
Julien Millasseau ◽  
Ivan Yuen ◽  
Katherine Demuth

Purpose Children with hearing loss (HL), including those with hearing aids (HAs) and cochlear implants (CIs), often have difficulties contrasting words like “beach” versus “peach” and “dog” versus “dock” due to challenges producing systematic voicing contrasts. Even when acoustic contrasts are present, these may not be perceived as such by others. This can cause miscommunication, leading to poor self-esteem and social isolation. Acoustic evidence is therefore needed to determine if these children have established distinct voicing categories before entering school and if misperceptions are due to a lack of phonological representations or due to a still-maturing implementation system. The findings should help inform more effective early intervention. Method Participants included 14 children with HL (eight HA users, five CI users, and one bimodal) and 20 with normal hearing, all English-speaking preschoolers. In an elicited imitation task, they produced consonant–vowel–consonant minimal pair words that contrasted voicing in word-initial (onset) or word-final (coda) position at all three places of articulation (PoAs). Results Overall, children with HL showed acoustically distinct voicing categories for both onsets and codas at all three PoAs. Contrasts were less systematic for codas than for onsets, as also confirmed by adults' perceptual ratings. Conclusions Preschoolers with HL produce acoustic differences for voiced versus voiceless onsets and codas, indicating distinct phonological representations for both. Nonetheless, codas were less accurately perceived by adult raters, especially when produced by CI users. This suggests a protracted development of the phonetic implementation of codas, where CI users, in particular, may benefit from targeted intervention.


Author(s):  
Jace Wolfe ◽  
Mila Duke ◽  
Sharon Miller ◽  
Erin Schafer ◽  
Christine Jones ◽  
...  

Background: For children with hearing loss, the primary goal of hearing aids is to provide improved access to the auditory environment within the limits of hearing aid technology and the child’s auditory abilities. However, there are limited data examining aided speech recognition at very low (40 dBA) and low (50 dBA) presentation levels. Purpose: Due to the paucity of studies exploring aided speech recognition at low presentation levels for children with hearing loss, the present study aimed to 1) compare aided speech recognition at different presentation levels between groups of children with normal hearing and hearing loss, 2) explore the effects of aided pure tone average (PTA) and aided Speech Intelligibility Index (SII) on aided speech recognition at low presentation levels for children with hearing loss ranging in degree from mild to severe, and 3) evaluate the effect of increasing low-level gain on aided speech recognition of children with hearing loss. Research Design: In phase 1 of this study, a two-group, repeated-measures design was used to evaluate differences in speech recognition. In phase 2 of this study, a single-group, repeated-measures design was used to evaluate the potential benefit of additional low-level hearing aid gain for low-level aided speech recognition of children with hearing loss. Study Sample: The first phase of the study included 27 school-age children with mild to severe sensorineural hearing loss and 12 school-age children with normal hearing. The second phase included eight children with mild to moderate sensorineural hearing loss. Intervention: Prior to the study, children with hearing loss were fitted binaurally with digital hearing aids. 
Children in the second phase were fitted binaurally with digital study hearing aids and completed a trial period with two different gain settings: 1) gain required to match hearing aid output to prescriptive targets (i.e., primary program), and 2) a 6-dB increase in overall gain for low-level inputs relative to the primary program. In both phases of this study, real-ear verification measures were completed to ensure the hearing aid output matched prescriptive targets. Data Collection and Analysis: Phase 1 included monosyllabic word recognition and syllable-final plural recognition at three presentation levels (40, 50, and 60 dBA). Phase 2 compared speech recognition performance for the same test measures and presentation levels with two differing gain prescriptions. Results and Conclusions: In phase 1 of the study, aided speech recognition was significantly poorer in children with hearing loss at all presentation levels. Higher aided SII in the better ear (55 dB SPL input) was associated with higher CNC word recognition at a 40 dBA presentation level. In phase 2, increasing the hearing aid gain for low-level inputs provided a significant improvement in syllable-final plural recognition at very low-level inputs and resulted in a non-significant trend toward better monosyllabic word recognition at very low presentation levels. Additional research is needed to document the speech recognition difficulties children with hearing aids may experience with low-level speech in the real world, as well as the potential benefit or detriment of providing additional low-level hearing aid gain.


2017 ◽  
Vol 21 ◽  
pp. 233121651771037 ◽  
Author(s):  
Cara L. Wong ◽  
Teresa Y. C. Ching ◽  
Linda Cupples ◽  
Laura Button ◽  
Greg Leigh ◽  
...  

2012 ◽  
Vol 23 (06) ◽  
pp. 412-421 ◽  
Author(s):  
Laurie S. Eisenberg ◽  
Karen C. Johnson ◽  
Amy S. Martinez ◽  
Leslie Visser-Dumont ◽  
Dianne Hammes Ganguly ◽  
...  

Three clinical research projects are described that are relevant to pediatric hearing loss. The three projects fall into two distinct areas. The first area emphasizes clinical studies that track developmental outcomes in children with hearing loss; one project is specific to cochlear implants and the other to hearing aids. The second area addresses speech perception test development for very young children with hearing loss. Although these two lines of research are treated as separate areas, they begin to merge as new behavioral tests become useful in developing protocols for contemporary studies that address longitudinal follow-up of children with hearing loss.


2019 ◽  
Vol 62 (9) ◽  
pp. 3234-3247 ◽  
Author(s):  
Céline Hidalgo ◽  
Jacques Pesnot-Lerousseau ◽  
Patrick Marquis ◽  
Stéphane Roman ◽  
Daniele Schön

Purpose In this study, we investigate the temporal adaptation capacities of children with normal hearing and children with cochlear implants and/or hearing aids during verbal exchange. We also address the question of the efficacy of rhythmic training for temporal adaptation during speech interaction in children with hearing loss. Method We recorded electroencephalogram data in children while they named pictures delivered on a screen, in alternation with a virtual partner. We manipulated the virtual partner's speech rate (fast vs. slow) and the regularity of alternation (regular vs. irregular). The group of children with normal hearing was tested once, and the group of children with hearing loss was tested twice: once after 30 min of auditory training and once after 30 min of rhythmic training. Results Both groups of children adjusted their speech rate to that of the virtual partner and were sensitive to the regularity of alternation, with less accurate performance following irregular turns. Moreover, irregular turns elicited a negative event-related potential in both groups, showing a detection of temporal deviancy. Notably, the amplitude of this negative component positively correlated with accuracy in the alternation task. In children with hearing loss, the effect was more pronounced and long-lasting following rhythmic training compared with auditory training. Conclusion These results are discussed in terms of temporal adaptation abilities in speech interaction and suggest the use of rhythmic training to improve these skills in children with hearing loss.


2020 ◽  
Vol 63 (2) ◽  
pp. 552-568 ◽  
Author(s):  
Benjamin Davies ◽  
Nan Xu Rattanasone ◽  
Aleisha Davis ◽  
Katherine Demuth

Purpose Normal-hearing (NH) children acquire plural morphemes at different rates, with the segmental allomorphs /–s, –z/ (e.g., cat-s) being acquired before the syllabic allomorph /–əz/ (e.g., bus-es). Children with hearing loss (HL) have been reported to show delays in the production of plural morphology, raising the possibility that this might be due to challenges acquiring different types of lexical/morphological representations. This study therefore examined the comprehension of plural morphology by 3- to 7-year-olds with HL and compared this with performance by their NH peers. We also investigated comprehension as a function of wearing hearing aids (HAs) versus cochlear implants (CIs). Method Participants included 129 NH children aged 3–5 years and 25 children with HL aged 3–7 years (13 with HAs, 12 with CIs). All participated in a novel word two-alternative forced-choice task presented on an iPad. The task tested comprehension of the segmental (e.g., teps, mubz) and syllabic (e.g., kosses) plural, as well as their singular counterparts (e.g., tep, mub, koss). Results While the children with NH were above chance for all conditions, those with HL performed at chance. As a group, the performance of the children with HL did not improve with age. However, results suggest possible differences between children with HAs and those with CIs, where those with HAs appeared to be in the process of developing representations of consonant–vowel–consonant singulars. Conclusions Results suggest that preschoolers with HL do not yet have a robust representation of plural morphology for words they have not heard before. However, those with HAs are beginning to access the singular/plural system as they get older.


2017 ◽  
Vol 28 (04) ◽  
pp. 283-294 ◽  
Author(s):  
Rose Thomas Kalathottukaren ◽  
Suzanne C. Purdy ◽  
Elaine Ballard

Background: Auditory development in children with hearing loss, including the perception of prosody, depends on having adequate input from cochlear implants and/or hearing aids. Lack of adequate auditory stimulation can lead to delayed speech and language development. Nevertheless, prosody perception and production in people with hearing loss have received less attention than other aspects of language. The perception of auditory information conveyed through prosody using variations in the pitch, amplitude, and duration of speech is not usually evaluated clinically. Purpose: This study (1) compared prosody perception and production abilities in children with hearing loss and children with normal hearing; and (2) investigated the effect of age, hearing level, and musicality on prosody perception. Research Design: Participants were 16 children with hearing loss and 16 typically developing controls matched for age and gender. Fifteen of the children with hearing loss were tested while using amplification (n = 9 hearing aids, n = 6 cochlear implants). Six receptive subtests of the Profiling Elements of Prosody in Speech-Communication (PEPS-C), the Child Paralanguage subtest of the Diagnostic Analysis of Nonverbal Accuracy 2 (DANVA 2), and the Contour and Interval subtests of the Montreal Battery of Evaluation of Amusia (MBEA) were used. Audio recordings of the children’s reading samples were rated using a perceptual prosody rating scale by nine experienced listeners who were blinded to the children’s hearing status. Study Sample: Thirty-two children, 16 with hearing loss (mean age = 8.71 yr) and 16 age- and gender-matched typically developing children with normal hearing (mean age = 8.87 yr). Data Collection and Analysis: Assessments were completed in one session lasting 1–2 hours in a quiet room. Test items were presented using a laptop computer through a loudspeaker at a comfortable listening level.
For children with hearing loss using hearing instruments, all tests were completed with hearing devices set at their everyday listening setting. Results: All PEPS-C subtests and total scores were significantly lower for children with hearing loss compared to controls (p < 0.05). The hearing loss group performed more poorly than the control group in recognizing happy, sad, and fearful emotions in the DANVA 2 subtest. Musicality (composite MBEA scores and musical experience) was significantly correlated with prosody perception scores, but this link was not evident in the regression analyses. Regression modeling showed that age and hearing level (better ear pure-tone average) accounted for 55.4% and 56.7% of the variance in PEPS-C and DANVA 2 total scores, respectively. There was greater variability in the ratings of pitch, pitch variation, and overall impression of prosody in the hearing loss group compared to the control group. Prosody perception (PEPS-C and DANVA 2 total scores) and ratings of prosody production were not correlated. Conclusions: Children with hearing loss aged 7–12 yr had significant difficulties in understanding different aspects of prosody and were rated as having more atypical prosody overall than controls. These findings suggest that clinical assessment and speech–language therapy services for children with hearing loss should be expanded to target prosodic difficulties. Future studies should investigate whether musical training is beneficial for improving receptive prosody skills.

