Lexical effects on spoken word recognition performance among Mandarin-speaking children with normal hearing and cochlear implants

2010 ◽  
Vol 74 (8) ◽  
pp. 883-890 ◽  
Author(s):  
Nan Mai Wang ◽  
Che-Ming Wu ◽  
Karen Iler Kirk
2010 ◽  
Vol 31 (1) ◽  
pp. 102-114 ◽  
Author(s):  
Vidya Krull ◽  
Sangsook Choi ◽  
Karen Iler Kirk ◽  
Lindsay Prusick ◽  
Brian French

Author(s):  
Christina Blomquist ◽  
Rochelle S. Newman ◽  
Yi Ting Huang ◽  
Jan Edwards

Purpose: Children with cochlear implants (CIs) are more likely to struggle with spoken language than their age-matched peers with normal hearing (NH), and recent language processing literature suggests that these challenges may be linked to delays in spoken word recognition. The purpose of this study was to investigate whether children with CIs use language knowledge via semantic prediction to facilitate recognition of upcoming words and help compensate for uncertainties in the acoustic signal.
Method: Five- to 10-year-old children with CIs heard sentences with an informative verb (draws) or a neutral verb (gets) preceding a target word (picture). The target referent was presented on a screen, along with a phonologically similar competitor (pickle). Children's eye gaze was recorded to quantify efficiency of access of the target word and suppression of phonological competition. Performance was compared to both an age-matched group and a vocabulary-matched group of children with NH.
Results: Children with CIs, like their peers with NH, demonstrated use of informative verbs to look more quickly to the target word and look less to the phonological competitor. However, children with CIs demonstrated less efficient use of semantic cues relative to their peers with NH, even when matched for vocabulary ability.
Conclusions: Children with CIs use semantic prediction to facilitate spoken word recognition but do so to a lesser extent than children with NH. Children with CIs experience challenges in predictive spoken language processing above and beyond limitations from delayed vocabulary development. Children with CIs with better vocabulary ability demonstrate more efficient use of lexical-semantic cues. Clinical interventions focusing on building knowledge of words and their associations may support efficiency of spoken language processing for children with CIs.
Supplemental Material: https://doi.org/10.23641/asha.14417627


2007 ◽  
Vol 5 (4) ◽  
pp. 250-261 ◽  
Author(s):  
Karen Iler Kirk ◽  
Marcia J. Hay-Mccutcheon ◽  
Rachael Frush Holt ◽  
Sujuan Gao ◽  
Rong Qi ◽  
...  

Author(s):  
Cynthia G. Clopper ◽  
Janet B. Pierrehumbert ◽  
Terrin N. Tamati

Abstract
Lexical neighborhood density is a well-known factor affecting phonological categorization in spoken word recognition. The current study examined the interaction between lexical neighborhood density and dialect variation in spoken word recognition in noise. The stimulus materials were real English words produced in two regional American English dialects. To manipulate lexical neighborhood density, target words were selected so that predicted phonological confusions across dialects resulted in real English words in the word-competitor condition and did not result in real English words in the nonword-competitor condition. Word and vowel recognition performance were more accurate in the nonword-competitor condition than the word-competitor condition for both talker dialects. An examination of the responses to specific vowels revealed the role of dialect variation in eliciting this effect. When the predicted phonological confusions were real lexical neighbors, listeners could respond with either the target word or the confusable minimal pair, and were more likely than expected to produce a minimal pair differing from the target by one vowel. When the predicted phonological confusions were not real words, however, the listeners exhibited less lexical competition and responded with the target word or a minimal pair differing by one consonant.


2012 ◽  
Vol 23 (06) ◽  
pp. 464-475 ◽  
Author(s):  
Karen Iler Kirk ◽  
Lindsay Prusick ◽  
Brian French ◽  
Chad Gotch ◽  
Laurie S. Eisenberg ◽  
...  

Under natural conditions, listeners use both auditory and visual speech cues to extract meaning from speech signals containing many sources of variability. However, traditional clinical tests of spoken word recognition routinely employ isolated words or sentences produced by a single talker in an auditory-only presentation format. The more central cognitive processes used during multimodal integration, perceptual normalization, and lexical discrimination that may contribute to individual variation in spoken word recognition performance are not assessed in conventional tests of this kind. In this article, we review our past and current research activities aimed at developing a series of new assessment tools designed to evaluate spoken word recognition in children who are deaf or hard of hearing. These measures are theoretically motivated by a current model of spoken word recognition and also incorporate “real-world” stimulus variability in the form of multiple talkers and presentation formats. The goal of this research is to enhance our ability to estimate real-world listening skills and to predict benefit from sensory aid use in children with varying degrees of hearing loss.


2021 ◽  
Author(s):  
Kelsey Klein ◽  
Elizabeth Walker ◽  
Bob McMurray

Objective: The objective of this study was to characterize the dynamics of real-time lexical access, including lexical competition among phonologically similar words, and semantic activation in school-age children with hearing aids (HAs) and children with cochlear implants (CIs). We hypothesized that developing spoken language via degraded auditory input would lead children with HAs or CIs to adapt their approach to spoken word recognition, especially by slowing down lexical access.
Design: Participants were children ages 9-12 years old with normal hearing (NH), HAs, or CIs. Participants completed a Visual World Paradigm task in which they heard a spoken word and selected the matching picture from four options. Competitor items were either phonologically similar, semantically similar, or unrelated to the target word. As the target word unfolded, children's fixations to the target word, cohort competitor, rhyme competitor, semantically related item, and unrelated item were recorded as indices of ongoing lexical and semantic activation.
Results: Children with HAs and children with CIs showed slower fixations to the target, reduced fixations to the cohort competitor, and increased fixations to the rhyme competitor, relative to children with NH. This wait-and-see profile was more pronounced in the children with CIs than in the children with HAs. Children with HAs and children with CIs also showed delayed fixations to the semantically related item, though this delay was attributable to their delay in activating words in general, not to a distinct semantic source.
Conclusions: Children with HAs and children with CIs showed qualitatively similar patterns of real-time spoken word recognition. Findings suggest that developing spoken language via degraded auditory input causes long-term cognitive adaptations in how listeners recognize spoken words, regardless of the type of hearing device used. Delayed lexical activation directly led to delayed semantic activation in children with HAs and CIs. This delay in semantic processing may impact these children's ability to understand connected speech in everyday life.

