Speech and Language Outcomes in Adults and Children with Cochlear Implants

2022, Vol. 8 (1), pp. 299-319
Author(s): Terrin N. Tamati, David B. Pisoni, Aaron C. Moberly

Cochlear implants (CIs) represent a significant engineering and medical milestone in the treatment of hearing loss for both adults and children. In this review, we provide a brief overview of CI technology, describe the benefits that CIs can provide to adults and children who receive them, and discuss the specific limitations and issues faced by CI users. We emphasize the relevance of CIs to the linguistics community by demonstrating how CIs successfully provide access to spoken language. Furthermore, CI research can inform our basic understanding of spoken word recognition in adults and spoken language development in children. Linguistics research can also help us address the major clinical issue of outcome variability and motivate the development of new clinical tools to assess the unique challenges of adults and children with CIs, as well as novel interventions for individuals with poor outcomes.

Author(s): Christina Blomquist, Rochelle S. Newman, Yi Ting Huang, Jan Edwards

Purpose: Children with cochlear implants (CIs) are more likely to struggle with spoken language than their age-matched peers with normal hearing (NH), and new language processing literature suggests that these challenges may be linked to delays in spoken word recognition. The purpose of this study was to investigate whether children with CIs use language knowledge via semantic prediction to facilitate recognition of upcoming words and help compensate for uncertainties in the acoustic signal.
Method: Five- to 10-year-old children with CIs heard sentences with an informative verb (draws) or a neutral verb (gets) preceding a target word (picture). The target referent was presented on a screen, along with a phonologically similar competitor (pickle). Children's eye gaze was recorded to quantify efficiency of access of the target word and suppression of phonological competition. Performance was compared to both an age-matched group and a vocabulary-matched group of children with NH.
Results: Children with CIs, like their peers with NH, used informative verbs to look more quickly to the target word and to look less to the phonological competitor. However, children with CIs demonstrated less efficient use of semantic cues relative to their peers with NH, even when matched for vocabulary ability.
Conclusions: Children with CIs use semantic prediction to facilitate spoken word recognition but do so to a lesser extent than children with NH. Children with CIs experience challenges in predictive spoken language processing above and beyond limitations from delayed vocabulary development. Children with CIs with better vocabulary ability demonstrate more efficient use of lexical-semantic cues. Clinical interventions focusing on building knowledge of words and their associations may support efficiency of spoken language processing for children with CIs.
Supplemental Material: https://doi.org/10.23641/asha.14417627
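As a rough illustration of how gaze data of this kind could be summarized (a minimal sketch only; the file name, column names, and analysis window are assumptions, not the authors' materials or analysis pipeline), consider the following Python snippet:

```python
# Minimal sketch (not the authors' actual pipeline): summarizing looks to the
# target vs. the phonological competitor after informative vs. neutral verbs,
# from trial-level eye-tracking samples. Column names and the window are assumed.
import pandas as pd

# Each row = one gaze sample: trial id, verb condition ("informative"/"neutral"),
# time in ms relative to target-word onset, and which picture is fixated.
samples = pd.read_csv("gaze_samples.csv")   # hypothetical file

# Restrict to an analysis window after target onset (e.g., 300-1500 ms),
# allowing roughly 200 ms for saccade programming.
window = samples[(samples.time_ms >= 300) & (samples.time_ms <= 1500)]

def fixation_proportions(df):
    """Proportion of samples on the target and on the phonological competitor."""
    n = len(df)
    return pd.Series({
        "prop_target": (df.fixated == "target").sum() / n,
        "prop_competitor": (df.fixated == "competitor").sum() / n,
    })

# Per-trial proportions, then averaged within each verb condition.
by_trial = window.groupby(["condition", "trial"]).apply(fixation_proportions)
by_condition = by_trial.groupby("condition").mean()
print(by_condition)
# A semantic-prediction effect would appear as higher prop_target and lower
# prop_competitor after informative verbs than after neutral verbs.
```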


2021
Author(s): Kelsey Klein, Elizabeth Walker, Bob McMurray

Objective: The objective of this study was to characterize the dynamics of real-time lexical access, including lexical competition among phonologically similar words, and semantic activation in school-age children with hearing aids (HAs) and children with cochlear implants (CIs). We hypothesized that developing spoken language via degraded auditory input would lead children with HAs or CIs to adapt their approach to spoken word recognition, especially by slowing down lexical access.
Design: Participants were children ages 9-12 years with normal hearing (NH), HAs, or CIs. Participants completed a Visual World Paradigm task in which they heard a spoken word and selected the matching picture from four options. Competitor items were either phonologically similar, semantically similar, or unrelated to the target word. As the target word unfolded, children's fixations to the target word, cohort competitor, rhyme competitor, semantically related item, and unrelated item were recorded as indices of ongoing lexical and semantic activation.
Results: Children with HAs and children with CIs showed slower fixations to the target, reduced fixations to the cohort, and increased fixations to the rhyme, relative to children with NH. This wait-and-see profile was more pronounced in the children with CIs than in the children with HAs. Children with HAs and children with CIs also showed delayed fixations to the semantically related item, though this delay was attributable to their delay in activating words in general, not to a distinct semantic source.
Conclusions: Children with HAs and children with CIs showed qualitatively similar patterns of real-time spoken word recognition. Findings suggest that developing spoken language via degraded auditory input causes long-term cognitive adaptations in how listeners recognize spoken words, regardless of the type of hearing device used. Delayed lexical activation directly led to delayed semantic activation in children with HAs and CIs. This delay in semantic processing may impact these children's ability to understand connected speech in everyday life.
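To make the "wait-and-see" profile concrete, the sketch below (Python; the data layout, bin size, and threshold are illustrative assumptions, not the study's actual analysis) builds fixation time-course curves per picture type and estimates how much later target looks rise in each listener group:

```python
# Minimal sketch (assumed data layout, not the study's pipeline): fixation
# time-course curves per item type, plus a simple rise-latency estimate.
import numpy as np
import pandas as pd

samples = pd.read_csv("vwp_samples.csv")   # hypothetical: group, time_ms, fixated

BIN_MS = 50
ITEM_TYPES = ["target", "cohort", "rhyme", "semantic", "unrelated"]

def time_course(df):
    """Proportion of fixations to each item type in successive time bins."""
    df = df.assign(bin=(df.time_ms // BIN_MS) * BIN_MS)
    counts = df.groupby("bin").fixated.value_counts(normalize=True).unstack(fill_value=0)
    return counts.reindex(columns=ITEM_TYPES, fill_value=0)

def target_rise_latency(curve, threshold=0.5):
    """First time bin at which the target fixation proportion exceeds threshold."""
    above = curve.index[curve["target"] > threshold]
    return above.min() if len(above) else np.nan

for group, df in samples.groupby("group"):   # e.g., NH, HA, CI
    curve = time_course(df)
    print(group, "target rises above 50% at", target_rise_latency(curve), "ms")
# A wait-and-see profile would show up as a later rise latency and a flatter
# early cohort curve for the HA and CI groups than for the NH group.
```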


2019, Vol. 9 (2), pp. 153
Author(s): Frush Holt

Radical advancements in hearing technology over the last 30 years have given some deaf and hard-of-hearing (DHH) children the auditory access necessary to acquire spoken language when paired with high-quality early intervention. However, meaningful achievement gaps in reading and spoken language persist despite the engineering marvel of modern hearing aids and cochlear implants. Moreover, there is enormous unexplained variability in spoken language and literacy outcomes. Aspects of signal processing in both hearing aids and cochlear implants are discussed as they relate to spoken language outcomes in preschool and school-age children. In suggesting areas for future research, a case is made not only for expanding the search for mechanisms of influence on outcomes beyond traditional device- and child-related factors, but also for framing that search within biopsychosocial systems theories. This theoretical approach incorporates systems of risk factors across many levels, as well as the bidirectional and complex ways in which factors influence each other. The combination of sophisticated hearing technology and a fuller understanding of the complex environmental and biological factors that shape development will help maximize spoken language outcomes in DHH children and contribute to laying the groundwork for successful literacy and academic development.


2017, Vol. 2 (9), pp. 10-24
Author(s): Jena McDaniel, Stephen Camarata

Purpose: We review the evidence for attenuating visual input during intervention to enhance auditory development and ultimately improve spoken language outcomes in children with cochlear implants.
Background: Isolating the auditory sense is a long-standing tradition in many approaches to teaching children with hearing loss. However, the evidence base for this practice is surprisingly limited and not straightforward. We review four bodies of evidence that inform whether visual input inhibits auditory development in children with cochlear implants: (a) audiovisual benefits for speech perception and understanding in individuals with typical hearing, (b) audiovisual integration development in children with typical hearing, (c) sensory deprivation and neural plasticity, and (d) audiovisual processing in individuals with hearing loss.
Conclusions: Although there is a compelling theoretical rationale for reducing visual input to enhance auditory development, there is also a strong theoretical argument for simultaneous multisensory auditory and visual input to potentially enhance outcomes in children with hearing loss. Despite widespread and long-standing practice recommendations to limit visual input, there is a paucity of evidence supporting this recommendation and no evidence that simultaneous multisensory input is deleterious to children with cochlear implants. These findings have important implications for optimizing spoken language outcomes in children with cochlear implants.


2007, Vol. 5 (4), pp. 250-261
Author(s): Karen Iler Kirk, Marcia J. Hay-McCutcheon, Rachael Frush Holt, Sujuan Gao, Rong Qi, ...

2014, Vol. 42 (4), pp. 843-872
Author(s): Susannah V. Levi

Research with adults has shown that spoken language processing is improved when listeners are familiar with talkers' voices, a phenomenon known as the familiar talker advantage. The current study explored whether this ability extends to school-age children, who are still acquiring language. Children were familiarized with the voices of three German–English bilingual talkers and were tested on the speech of six bilinguals, three of whom were familiar. Results revealed that children do show improved spoken language processing when they are familiar with the talkers, but this improvement was limited to highly familiar lexical items. This restriction of the familiar talker advantage is attributed to differences in the representation of highly familiar and less familiar lexical items. In addition, children did not exhibit accent-general learning; despite having been exposed to German-accented talkers during training, there was no improvement for novel German-accented talkers.


2012, Vol. 35 (2), pp. 333-370
Author(s): Susan Nittrouer, Joanna H. Lowenstein

Cochlear implants allow many individuals with profound hearing loss to understand spoken language, even though the impoverished signals provided by these devices poorly preserve the acoustic attributes long believed to support recovery of phonetic structure. Consequently, questions may be raised about whether traditional psycholinguistic theories rely too heavily on phonetic segments to explain linguistic processing while ignoring potential roles of other forms of acoustic structure. This study tested that possibility. Adults and children (8 years old) performed two tasks: phonemic awareness, which requires explicit segmentation, and short-term memory, a linguistic task thought to operate more efficiently with well-defined phonetic segments. Stimuli were unprocessed (UP) signals, amplitude envelopes (AE) analogous to implant signals, and unprocessed signals in noise (NOI) that provided a degraded signal for comparison. Adults' results for short-term recall were similar for UP and NOI but worse for AE stimuli, whereas the phonemic awareness task revealed the opposite pattern across AE and NOI. Children's results for short-term recall showed similar decrements in performance for AE and NOI compared to UP, even though only NOI stimuli showed diminished results for segmentation. The conclusion was that traditional accounts may be too focused on phonetic segments, a possibility that implant designers and clinicians need to consider.
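The amplitude-envelope (AE) stimuli described here are conceptually similar to noise-vocoded speech. Below is a minimal Python sketch of that kind of processing, assuming a handful of analysis bands and a 50 Hz envelope cutoff; it is illustrative only, not the authors' exact stimulus-generation procedure:

```python
# Minimal sketch of a noise vocoder: keep only band-wise amplitude envelopes,
# the kind of "AE" signal the abstract describes as analogous to cochlear
# implant input. Band edges and cutoffs are illustrative assumptions.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def bandpass(x, lo, hi, fs, order=4):
    b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, x)

def lowpass(x, cutoff, fs, order=4):
    b, a = butter(order, cutoff / (fs / 2), btype="low")
    return filtfilt(b, a, x)

def noise_vocode(signal, fs, band_edges=(100, 392, 1005, 2294, 5000), env_cutoff=50):
    """Replace fine structure with band-limited noise modulated by each band's envelope."""
    rng = np.random.default_rng(0)
    out = np.zeros_like(signal, dtype=float)
    for lo, hi in zip(band_edges[:-1], band_edges[1:]):
        band = bandpass(signal, lo, hi, fs)
        env = lowpass(np.abs(hilbert(band)), env_cutoff, fs)   # amplitude envelope
        carrier = bandpass(rng.standard_normal(len(signal)), lo, hi, fs)
        out += env * carrier                                    # envelope-modulated noise
    return out / np.max(np.abs(out))    # normalize to avoid clipping
```

The design choice that matters for the study's contrast is that temporal envelopes are preserved in each band while spectral fine structure is discarded, which is also what limits the phonetic detail delivered by an implant.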


2009, Vol. 21 (1), pp. 169-179
Author(s): Chotiga Pattamadilok, Laetitia Perre, Stéphane Dufau, Johannes C. Ziegler

Literacy changes the way the brain processes spoken language. Most psycholinguists believe that orthographic effects on spoken language are either strategic or restricted to meta-phonological tasks. We used event-related brain potentials (ERPs) to investigate the locus and the time course of orthographic effects on spoken word recognition in a semantic task. Participants were asked to decide whether a given word belonged to a semantic category (body parts). On no-go trials, words were presented that were either orthographically consistent or inconsistent. Orthographic inconsistency (i.e., multiple spellings of the same phonology) could occur either in the first or the second syllable. The ERP data showed a clear orthographic consistency effect that preceded lexical access and semantic effects. Moreover, the onset of the orthographic consistency effect was time-locked to the arrival of the inconsistency in a spoken word, which suggests that orthography influences spoken language in a time-dependent manner. The present data join recent evidence from brain imaging showing orthographic activation in spoken language tasks. Our results extend those findings by showing that orthographic activation occurs early and affects spoken word recognition in a semantic task that does not require the explicit processing of orthographic or phonological structure.
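As a hedged illustration of how the onset of such a consistency effect could be located (a minimal NumPy sketch under an assumed sampling rate, epoch window, and amplitude criterion, not the study's actual ERP pipeline):

```python
# Minimal sketch: average EEG epochs per condition and find the first sample
# where the consistent/inconsistent difference stays above a simple criterion.
# Sampling rate, epoch window, and criterion values are assumptions.
import numpy as np

FS = 500                                 # sampling rate in Hz (assumed)
t = np.arange(-0.1, 0.8, 1 / FS)         # epoch time axis, seconds from word onset

def erp(epochs):
    """Average trials (n_trials, n_samples) into a single ERP waveform."""
    return epochs.mean(axis=0)

def effect_onset(consistent, inconsistent, criterion_uv=1.0, min_dur_ms=20):
    """First time point where |difference wave| exceeds criterion for min_dur_ms."""
    diff = np.abs(erp(inconsistent) - erp(consistent))
    min_samples = int(min_dur_ms / 1000 * FS)
    above = diff > criterion_uv
    for i in range(len(above) - min_samples):
        if above[i:i + min_samples].all():
            return t[i]
    return None

# consistent / inconsistent would be (n_trials, n_samples) arrays from one
# electrode; an earlier onset for first-syllable than for second-syllable
# inconsistencies would indicate time-locking to the inconsistent part of the word.
```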

