Relations Among Linguistic and Cognitive Skills and Spoken Word Recognition in Adults With Cochlear Implants

2004 ◽  
Vol 47 (3) ◽  
pp. 496-508 ◽  
Author(s):  
Elizabeth A. Collison ◽  
Benjamin Munson ◽  
Arlene Earley Carney

This study examined spoken word recognition in adults with cochlear implants (CIs) to determine the extent to which linguistic and cognitive abilities predict variability in speech-perception performance. Both a traditional consonant-vowel-consonant (CVC)-repetition measure and a gated-word recognition measure (F. Grosjean, 1996) were used. Stimuli in the gated-word-recognition task varied in neighborhood density. Adults with CIs repeated CVC words less accurately than did age-matched adults with normal hearing sensitivity (NH). In addition, adults with CIs required more acoustic information to recognize gated words than did adults with NH. Neighborhood density had a smaller influence on gated-word recognition by adults with CIs than on recognition by adults with NH. With the exception of 1 outlying participant, standardized, norm-referenced measures of cognitive and linguistic abilities were not correlated with word-recognition measures. Taken together, these results do not support the hypothesis that cognitive and linguistic abilities predict variability in speech-perception performance in a heterogeneous group of adults with CIs. Findings are discussed in light of the potential role of auditory perception in mediating relations among cognitive and linguistic skill and spoken word recognition.

2021 ◽  
Author(s):  
Katrina Sue McClannahan ◽  
Amelia Mainardi ◽  
Austin Luor ◽  
Yi-Fang Chiu ◽  
Mitchell S. Sommers ◽  
...  

Background: Difficulty understanding speech is a common complaint of older adults. In quiet, speech perception is often assumed to be relatively automatic. In background noise, however, higher-level cognitive processes play a more substantial role in successful communication. Cognitive resources are often limited in adults with dementia, which may therefore hamper word recognition. Objective: The goal of this study was to determine the impact of mild dementia on spoken word recognition in quiet and in noise. Methods: Participants were adults aged 53–86 years with (n = 16) or without (n = 32) dementia symptoms, as classified by a clinical dementia rating scale. Participants performed a word identification task with two levels of neighborhood density, in quiet and in speech-shaped noise at two signal-to-noise ratios (SNRs): +6 dB and +3 dB. Our hypothesis was that listeners with mild dementia would have more difficulty with speech perception in noise under conditions that tax cognitive resources. Results: Listeners with mild dementia had poorer speech perception accuracy in both quiet and noise, a difference that held after accounting for differences in age and hearing level. Notably, even in quiet, adults with dementia symptoms correctly identified words only about 80% of the time. However, phonological neighborhood density did not affect identification performance for either group. Conclusion: These results affirm the difficulty that listeners with mild dementia have with spoken word recognition, both in quiet and in background noise, consistent with a key role of cognitive resources in spoken word identification. However, the impact of neighborhood density in these listeners is less clear.


Author(s):  
Christina Blomquist ◽  
Rochelle S. Newman ◽  
Yi Ting Huang ◽  
Jan Edwards

Purpose: Children with cochlear implants (CIs) are more likely to struggle with spoken language than their age-matched peers with normal hearing (NH), and new language processing literature suggests that these challenges may be linked to delays in spoken word recognition. The purpose of this study was to investigate whether children with CIs use language knowledge via semantic prediction to facilitate recognition of upcoming words and help compensate for uncertainties in the acoustic signal. Method: Five- to 10-year-old children with CIs heard sentences with an informative verb (draws) or a neutral verb (gets) preceding a target word (picture). The target referent was presented on a screen, along with a phonologically similar competitor (pickle). Children's eye gaze was recorded to quantify efficiency of access of the target word and suppression of phonological competition. Performance was compared to both an age-matched group and a vocabulary-matched group of children with NH. Results: Children with CIs, like their peers with NH, demonstrated use of informative verbs to look more quickly to the target word and look less to the phonological competitor. However, children with CIs demonstrated less efficient use of semantic cues relative to their peers with NH, even when matched for vocabulary ability. Conclusions: Children with CIs use semantic prediction to facilitate spoken word recognition but do so to a lesser extent than children with NH. Children with CIs experience challenges in predictive spoken language processing above and beyond limitations from delayed vocabulary development. Children with CIs with better vocabulary ability demonstrate more efficient use of lexical-semantic cues. Clinical interventions focusing on building knowledge of words and their associations may support efficiency of spoken language processing for children with CIs. Supplemental Material: https://doi.org/10.23641/asha.14417627


2020 ◽  
Author(s):  
Sarah Elizabeth Margaret Colby ◽  
Bob McMurray

Purpose: Listening effort is quickly becoming an important metric for assessing speech perception in less-than-ideal situations. However, the relationship between the construct of listening effort and the measures used to assess it remains unclear. We compared two measures of listening effort: a cognitive dual task and a physiological pupillometry task. We sought to investigate the relationship between these measures of effort and whether engaging effort impacts speech accuracy. Method: In Experiment 1, 30 participants completed a dual task and a pupillometry task that were carefully matched in stimuli and design. The dual task consisted of a spoken word recognition task and a visual match-to-sample task. In the pupillometry task, pupil size was monitored while participants completed a spoken word recognition task. Both tasks presented words at three levels of listening difficulty (unmodified, 8-channel vocoding, and 4-channel vocoding) and provided response feedback on every trial. We refined the pupillometry task in Experiment 2 (n = 31); crucially, participants no longer received response feedback. Finally, we ran a new group of participants on both tasks in Experiment 3 (n = 30). Results: In Experiment 1, accuracy in the visual task decreased with increased listening difficulty in the dual task, but pupil size was sensitive to accuracy and not listening difficulty. After feedback was removed in Experiment 2, changes in pupil size were predicted by listening difficulty, suggesting the task was now sensitive to engaged effort. Both tasks were sensitive to listening difficulty in Experiment 3, but there was no relationship between the tasks, and neither task predicted speech accuracy. Conclusions: Consistent with previous work, we found little evidence for a relationship between different measures of listening effort. We also found no evidence that effort predicts speech accuracy, suggesting that engaging more effort does not lead to improved speech recognition. Cognitive and physiological measures of listening effort are likely sensitive to different aspects of the construct.


Author(s):  
David B. Pisoni ◽  
Susannah V. Levi

This article examines how new approaches—coupled with previous insights—provide a new framework for questions that deal with the nature of phonological and lexical knowledge and representation, processing of stimulus variability, and perceptual learning and adaptation. First, it outlines the traditional view of speech perception and identifies some problems with assuming such a view, in which only abstract representations exist. The article then discusses some new approaches to speech perception that retain detailed information in the representations. It also considers a view which rejects abstraction altogether, but shows that such a view has difficulty dealing with a range of linguistic phenomena. After providing a brief discussion of some new directions in linguistics that encode both detailed information and abstraction, the article concludes by discussing the coupling of speech perception and spoken word recognition.


2007 ◽  
Vol 5 (4) ◽  
pp. 250-261 ◽  
Author(s):  
Karen Iler Kirk ◽  
Marcia J. Hay-Mccutcheon ◽  
Rachael Frush Holt ◽  
Sujuan Gao ◽  
Rong Qi ◽  
...  

1989 ◽  
Vol 17 (5) ◽  
pp. 525-535 ◽  
Author(s):  
Monique Radeau ◽  
José Morais ◽  
Agnès Dewier
