Individual Differences in Lexical Access Among Cochlear Implant Users

2020 ◽  
Vol 63 (1) ◽  
pp. 286-304
Author(s):  
Leanne Nagels ◽  
Roelien Bastiaanse ◽  
Deniz Başkent ◽  
Anita Wagner

Purpose: The current study investigates how individual differences in cochlear implant (CI) users' sensitivity to word–nonword differences, reflecting lexical uncertainty, relate to their reliance on sentential context for lexical access in processing continuous speech.
Method: Fifteen CI users and 14 normal-hearing (NH) controls participated in an auditory lexical decision task (Experiment 1) and a visual-world paradigm task (Experiment 2). Experiment 1 tested participants' reliance on lexical statistics, and Experiment 2 studied how sentential context affects the time course and patterns of lexical competition leading to lexical access.
Results: In Experiment 1, CI users had lower accuracy scores and longer reaction times than NH listeners, particularly for nonwords. In Experiment 2, CI users' lexical competition patterns were, on average, similar to those of NH listeners, but the patterns of individual CI users varied greatly. Individual CI users' word–nonword sensitivity (Experiment 1) explained differences in the reliance on sentential context to resolve lexical competition, whereas clinical speech perception scores explained competition with phonologically related words.
Conclusions: The general analysis of CI users' lexical competition patterns showed merely quantitative differences from NH listeners in the time course of lexical competition, but our additional analysis revealed more qualitative differences in CI users' strategies for processing speech. Individuals' word–nonword sensitivity explained different parts of individual variability than clinical speech perception scores did. These results stress, particularly for heterogeneous clinical populations such as CI users, the importance of investigating individual differences in addition to group averages, as they can be informative for clinical rehabilitation.
Supplemental Material: https://doi.org/10.23641/asha.11368106

2015 ◽  
Vol 10 (2) ◽  
pp. 247-270 ◽  
Author(s):  
Svetlana V. Cook ◽  
Kira Gor

Previous research on phonological priming in a Lexical Decision Task (LDT) has demonstrated that second language (L2) learners do not show the inhibition typical for native (L1) speakers that results from lexical competition, but rather a reversed effect – facilitation (Gor, Cook, & Jackson, 2010). The present study investigates the source of the reversed priming effect and addresses two possible causes: a deficit in lexical representations and a processing constraint. Twenty-three advanced learners of Russian participated in two experiments. The monolingual Russian LDT with priming addressed the processing constraint by manipulating the interstimulus interval (ISI, 350 ms and 500 ms). The translation task evaluated the robustness of lexical representations at both the phonolexical level (whole-word phonological representation) and the level of form-to-meaning mapping, thereby addressing the lexical deficit. L2 learners did not benefit from an increased ISI, indicating a lack of support for the processing constraint. However, the study found evidence for the representational deficit: when L2 familiarity with the words is controlled and L2 representations are robust, L2 learners demonstrate native-like processing accompanied by inhibition; however, when the words have fragmented (or fuzzy) representations, L2 lexical access is unfaithful and is accompanied by reduced lexical competition, leading to facilitation effects.


2019 ◽  
Vol 23 ◽  
pp. 233121651983662 ◽  
Author(s):  
Robert H. Pierzycki ◽  
Charlotte Corner ◽  
Claire A. Fielden ◽  
Pádraig T. Kitterick

Clinical observations suggest that tinnitus may interfere with programming cochlear implants (CIs), the process of optimizing the transmission of acoustic information to support speech perception with a CI. Despite tinnitus being highly prevalent among CI users, its effects on CI programming remain poorly understood. This study characterized the nature, time course, and impact of tinnitus effects encountered by audiologists and patients during programming appointments. Semistructured interviews with six CI audiologists were analyzed thematically to identify tinnitus effects on programming and related coping strategies. Cross-sectional surveys with 67 adult CI patients with tinnitus and 20 CI audiologists in the United Kingdom examined the prevalence and time course of those effects. Programming parameters established at CI activation appointments of 10 patients with tinnitus were compared with those of 10 patients without tinnitus. On average, 80% of audiologists and 45% of patients reported that tinnitus makes measurements of threshold (T) levels more difficult because patients confuse their tinnitus with CI stimulation. Difficulties appeared most common at CI activation appointments, at which T levels were significantly higher in patients with tinnitus. On average, 26% of patients reported being afraid of "loud" CI stimulation worsening tinnitus, affecting measurements of loudest comfortable (C) stimulation levels, and 34% of audiologists reported observing similar effects. Patients and audiologists reported that tinnitus makes programming appointments more difficult and tiring for patients. The findings suggest that specific strategies may be needed when programming CIs for patients with tinnitus, but further research is required to assess the potential impact on outcomes, including speech perception.


2001 ◽  
Vol 16 (5-6) ◽  
pp. 507-534 ◽  
Author(s):  
Delphine Dahan ◽  
James S. Magnuson ◽  
Michael K. Tanenhaus ◽  
Ellen M. Hogan

2020 ◽  
Author(s):  
Francis Xavier Smith ◽  
Bob McMurray

Objectives: One key challenge in word recognition is the temporary ambiguity in the signal created by the fact that speech unfolds over time. Research with normal-hearing (NH) listeners reveals that this temporary ambiguity is resolved through incremental processing of the signal and competition among possible lexical candidates. Post-lingually deafened cochlear implant (CI) users show incremental processing and competition similar to NH listeners, but with slight delays. However, even brief delays could lead to drastic changes when compounded across multiple words in a sentence. This study asks whether words presented in non-informative sentence contexts are processed differently than words presented in isolation, and whether any differences are shared by NH listeners and CI users or the groups exhibit different patterns.
Design: Across two visual world paradigm experiments, listeners heard words presented either in isolation or in non-informative sentence contexts ("click on the…"). Listeners selected the picture corresponding to the target word from among four items including the target word (e.g., mustard), a cohort competitor (e.g., mustache), a rhyme competitor (e.g., custard), and an unrelated item (e.g., penguin). During this task, eye movements were tracked as an index of the relative lexical activation of each object type during word recognition. Subjects included 65 CI users and 48 NH controls across both experiments.
Results: Both CI users and NH controls were largely accurate at recognizing the words both in sentence contexts and in isolation. The time course of lexical activation (indexed by the fixations) differed substantially between groups. CI users were delayed in fixating the target relative to NH controls. Additionally, CI users showed less competition from cohorts (whereas previous studies have often found increased competition) compared to NH controls. However, CI users took longer to suppress the cohort and suppressed it less fully than the NH controls. For both CI users and NH controls, embedding words in sentences led to more immediacy in lexical access, as observed by increases in cohort competition relative to when words were presented in isolation. However, CI users were not differentially affected by the sentences.
Conclusions: Unlike prior work, in both sentences and isolated words, CI users appeared to exhibit a "wait-and-see" strategy, in which lexical access is delayed to minimize early competition. However, they simultaneously sustained competitor activation late in the trial, possibly to preserve flexibility. This hybrid profile has not been observed previously. Both CI users and NH controls more heavily weight early information when target words are presented in sentence contexts. However, CI users (but not NH listeners) also commit less fully to the target when words are presented in sentence context, potentially keeping options open if they need to recover from a misperception. This mix of patterns reflects a lexical system that is highly flexible and adapts to fit the needs of the listener.
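The visual world paradigm analysis described above rests on fixation proportions: in each time bin, the share of gaze samples landing on the target, cohort, rhyme, and unrelated pictures. A minimal sketch of that summary step is below; this is not the authors' analysis code, and the binning parameters, item names, and gaze samples are all hypothetical.

```python
from collections import Counter

def fixation_proportions(trials, objects, bin_ms=250, window_ms=1000):
    """For each time bin, the proportion of gaze samples on each object type.

    `trials` is a list of trials; each trial is a list of
    (time_ms, fixated_object) gaze samples.
    """
    n_bins = window_ms // bin_ms
    counts = [Counter() for _ in range(n_bins)]
    totals = [0] * n_bins
    for trial in trials:
        for t, obj in trial:
            b = min(int(t // bin_ms), n_bins - 1)  # clamp late samples to last bin
            counts[b][obj] += 1
            totals[b] += 1
    return [{o: (counts[b][o] / totals[b] if totals[b] else 0.0) for o in objects}
            for b in range(n_bins)]

# Two toy trials: early looks split between the target ("mustard") and its
# cohort competitor ("mustache"), with later looks settling on the target.
trials = [
    [(50, "mustache"), (300, "mustard"), (600, "mustard"), (900, "mustard")],
    [(80, "mustard"), (320, "mustache"), (620, "mustard"), (950, "mustard")],
]
objects = ["mustard", "mustache", "custard", "penguin"]
props = fixation_proportions(trials, objects)
```

Curves of these per-bin proportions over time are what reveal the delayed target fixations and prolonged cohort activation reported for CI users.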


2017 ◽  
Vol 39 (1) ◽  
pp. 225-256 ◽  
Author(s):  
TESSA SPÄTGENS ◽  
ROB SCHOONEN

Using a semantic priming experiment, the influence of lexical access and knowledge of semantic relations on reading comprehension was studied in Dutch monolingual and bilingual minority children. Both context-independent semantic relations in the form of category coordinates and context-dependent semantic relations involving concepts that co-occur in certain contexts were tested in an auditory animacy decision task, along with lexical access. Reading comprehension and the control variables vocabulary size, decoding skill, and mental processing speed were tested by means of standardized tasks. Mixed-effects modeling was used to obtain individual priming scores and to study the effect of individual differences in the various predictor variables on the reading scores. Semantic priming was observed for the coordinate pairs but not the context-dependently related pairs, and neither context-independent priming nor lexical access predicted reading comprehension. Only vocabulary size significantly contributed to the reading scores, emphasizing the importance of the number of words known for reading comprehension. Finally, the results show that the monolingual and bilingual children perform similarly on all measures, suggesting that in the current Dutch context, language status may not be highly predictive of vocabulary knowledge and reading comprehension skill.
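A semantic priming score is, at its simplest, the difference between mean reaction times after unrelated versus related primes. The sketch below shows that simple difference score with hypothetical reaction times; note that the study above derived individual priming scores from mixed-effects models rather than from raw condition differences.

```python
from statistics import mean

def priming_effect(rts_related, rts_unrelated):
    """Priming effect in ms: positive values mean faster responses
    after a semantically related prime than after an unrelated one."""
    return mean(rts_unrelated) - mean(rts_related)

# Hypothetical reaction times (ms) for one child, by prime type.
# Coordinate pairs (e.g., category coordinates) show a clear effect;
# context-dependently related pairs show essentially none.
coordinate_related = [612, 598, 630, 605]
coordinate_unrelated = [655, 640, 662, 648]
context_related = [641, 650, 638, 660]
context_unrelated = [644, 652, 640, 655]

coord_priming = priming_effect(coordinate_related, coordinate_unrelated)
context_priming = priming_effect(context_related, context_unrelated)
```

In this toy example the coordinate condition yields a sizeable facilitation while the context-dependent condition yields almost none, mirroring the pattern of results reported above.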


2021 ◽  
Author(s):  
Joel I. Berger ◽  
Phillip E. Gander ◽  
Subong Kim ◽  
Adam T. Schwalje ◽  
Jihwan Woo ◽  
...  

Objectives: Understanding speech in noise (SiN) is a complex task that recruits multiple cortical subsystems. Individuals vary in their ability to understand SiN. This variability cannot be explained by simple peripheral hearing profiles, but recent work by our group (Kim et al., 2021, Neuroimage) highlighted central neural factors underlying the variance in SiN ability in normal-hearing (NH) subjects. The current study examined neural predictors of SiN ability in a large cohort of cochlear implant (CI) users, with the long-term goal of developing a simple electrophysiological correlate that could be implemented in clinics.
Design: We recorded electroencephalography (EEG) in 114 post-lingually deafened CI users while they completed the California Consonant Test (CCT), a word-in-noise task. In many subjects, data were also collected on two other commonly used clinical measures of speech perception: a word-in-quiet task (Consonant-Nucleus-Consonant [CNC] words) and a sentence-in-noise task (AzBio sentences). Neural activity was assessed at a single vertex electrode (Cz) to maximize generalizability to clinical situations. The N1-P2 complex of event-related potentials (ERPs) at this location was included in multiple linear regression analyses, along with several other demographic and hearing factors, as predictors of SiN performance.
Results: In general, there was good agreement between the scores on the three speech perception tasks. ERP amplitudes did not predict AzBio performance, which was predicted by the duration of device use, low-frequency hearing thresholds, and age. However, ERP amplitudes were strong predictors of performance on both word recognition tasks: the CCT (conducted simultaneously with EEG recording) and the CNC (conducted offline). These correlations held even after accounting for known predictors of performance, including residual low-frequency hearing thresholds. In CI users, better performance was predicted by an increased cortical response to the target word, in contrast to previous reports in normal-hearing subjects, in whom speech perception ability was accounted for by the ability to suppress noise.
Conclusions: These data indicate a neurophysiological correlate of speech-in-noise performance that can be captured relatively easily within the clinic, revealing a richer profile of an individual's hearing performance than psychoacoustic measures alone. The results also highlight important differences between sentence and word recognition measures of performance and suggest that individual differences in these measures may be underwritten by different mechanisms. Finally, the contrast with prior reports of NH listeners in the same task suggests that CI users' performance may be explained by a different weighting of neural processes than NH listeners'.
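The multiple linear regression described above, in essence, fits a speech score as a weighted sum of an ERP amplitude and demographic/hearing covariates. A minimal ordinary-least-squares sketch is below; this is not the authors' code, the predictor names are only illustrative, and all data values are synthetic, generated from a known linear rule so the fit recovers the coefficients exactly (real data would of course be noisy).

```python
def ols_fit(X, y):
    """Ordinary least squares via the normal equations (X^T X) b = X^T y,
    solved with Gaussian elimination and partial pivoting."""
    n, p = len(X), len(X[0])
    xtx = [[sum(X[i][a] * X[i][b] for i in range(n)) for b in range(p)]
           for a in range(p)]
    xty = [sum(X[i][a] * y[i] for i in range(n)) for a in range(p)]
    for col in range(p):
        pivot = max(range(col, p), key=lambda r: abs(xtx[r][col]))
        xtx[col], xtx[pivot] = xtx[pivot], xtx[col]
        xty[col], xty[pivot] = xty[pivot], xty[col]
        for r in range(col + 1, p):
            f = xtx[r][col] / xtx[col][col]
            for c in range(col, p):
                xtx[r][c] -= f * xtx[col][c]
            xty[r] -= f * xty[col]
    beta = [0.0] * p
    for r in range(p - 1, -1, -1):
        beta[r] = (xty[r] - sum(xtx[r][c] * beta[c]
                                for c in range(r + 1, p))) / xtx[r][r]
    return beta

# Columns: intercept, N1-P2 amplitude (µV), age (decades), device use (years).
X = [
    [1.0, 4.2, 6.1, 3.0],
    [1.0, 6.8, 5.5, 7.0],
    [1.0, 3.1, 7.2, 1.5],
    [1.0, 7.5, 6.0, 9.0],
    [1.0, 5.0, 6.5, 4.0],
]
# Synthetic word-in-noise scores generated as 20 + 8*amp - 2*age + 1.5*use.
y = [45.9, 73.9, 32.65, 81.5, 53.0]

beta = ols_fit(X, y)  # recovers [20.0, 8.0, -2.0, 1.5]
```

The sign and magnitude of the amplitude coefficient is the quantity of clinical interest here: a positive weight corresponds to the finding that larger cortical responses to the target word predict better word-in-noise performance.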


2014 ◽  
Vol 35 (3) ◽  
pp. 137-143 ◽  
Author(s):  
Lindsay M. Niccolai ◽  
Thomas Holtgraves

This research examined differences in the perception of emotion words as a function of individual differences in subclinical levels of depression and anxiety. Participants completed measures of depression and anxiety and performed a lexical decision task for words varying in affective valence (but equated for arousal) that were presented briefly to the right or left visual field. Participants with a lower level of depression demonstrated hemispheric asymmetry with a bias toward words presented to the left hemisphere, but participants with a higher level of depression displayed no hemispheric differences. Participants with a lower level of depression also demonstrated a bias toward positive words, a pattern that did not occur for participants with a higher level of depression. A similar pattern occurred for anxiety. Overall, this study demonstrates how variability in levels of depression and anxiety can influence the perception of emotion words, with patterns that are consistent with past research.


1997 ◽  
Author(s):  
Paul D. Allopenna ◽  
James S. Magnuson ◽  
Michael K. Tanenhaus
