Talker Adaptation and Lexical Difficulty Impact Word Recognition in Adults with Cochlear Implants

2021 ◽  
pp. 1-10
Author(s):  
Terrin N. Tamati ◽  
Aaron C. Moberly

Introduction: Talker-specific adaptation facilitates speech recognition in normal-hearing listeners. This study examined talker adaptation in adult cochlear implant (CI) users. Three hypotheses were tested: (1) high-performing adult CI users show improved word recognition following exposure to a talker (“talker adaptation”), particularly for lexically hard words, (2) individual performance is determined by auditory sensitivity and neurocognitive skills, and (3) individual performance relates to real-world functioning. Methods: Fifteen high-performing, post-lingually deaf adult CI users completed a word recognition task consisting of 6 single-talker blocks (3 female/3 male native English speakers); words were lexically “easy” and “hard.” Recognition accuracy was assessed “early” and “late” (first vs. last 10 trials); adaptation was assessed as the difference between late and early accuracy. Participants also completed measures of spectral-temporal processing and neurocognitive skills, as well as real-world measures of multiple-talker sentence recognition and quality of life (QoL). Results: CI users showed limited talker adaptation overall, but performance improved for lexically hard words. Stronger spectral-temporal processing and neurocognitive skills were weakly to moderately associated with more accurate word recognition and greater talker adaptation for hard words. Finally, word recognition accuracy for hard words was moderately related to multiple-talker sentence recognition and QoL. Conclusion: Findings demonstrate a limited talker adaptation benefit for recognition of hard words in adult CI users. Both auditory sensitivity and neurocognitive skills contribute to performance, suggesting additional benefit from adaptation for individuals with stronger skills. Finally, processing differences related to talker adaptation and lexical difficulty may be relevant to real-world functioning.
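
As a minimal sketch of how the adaptation score described above could be computed (late minus early accuracy within each single-talker block), assuming a hypothetical trial-level table; none of these column names come from the paper:

```python
import pandas as pd

def adaptation_scores(trials: pd.DataFrame) -> pd.DataFrame:
    """Per-subject, per-talker adaptation: 'late' (last 10 trials) minus
    'early' (first 10 trials) accuracy. Expects hypothetical columns
    'subject', 'talker', 'trial_index' (within block), 'correct' (0/1)."""
    def per_block(block: pd.DataFrame) -> pd.Series:
        block = block.sort_values("trial_index")
        early = block.head(10)["correct"].mean()   # first 10 trials
        late = block.tail(10)["correct"].mean()    # last 10 trials
        return pd.Series({"early": early, "late": late,
                          "adaptation": late - early})
    return (trials.groupby(["subject", "talker"])
                  .apply(per_block)
                  .reset_index())
```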

2020 ◽  
Author(s):  
Merel C. Wolf ◽  
Antje S. Meyer ◽  
Caroline F. Rowland ◽  
Florian Hintz

Language users encounter words in at least two different modalities. Arguably, the most frequent encounters are in spoken or written form. Previous research has shown that – compared to the spoken modality – written language features more difficult words. Thus, frequent reading might have effects on word recognition. In the present study, we investigated 1) whether input modality (spoken, written, or bimodal) has an effect on word recognition accuracy, 2) whether this modality effect interacts with word difficulty, 3) whether the interaction of word difficulty and reading experience impacts word recognition accuracy, and 4) whether this interaction is influenced by input modality. To do so, we re-analysed a dataset that had been collected during the development of a vocabulary test to assess in which modality test words should be presented. Participants had carried out a word recognition task in which non-words and words of varying difficulty were presented in auditory, visual, and audio-visual modalities. In addition to this main experiment, participants had completed a receptive vocabulary test and an author recognition test to measure their reading experience. Our re-analyses did not reveal evidence for an effect of input modality on word recognition accuracy, nor for interactions of modality with word difficulty or reading experience. Word difficulty did, however, interact with reading experience: frequent readers were more accurate in recognizing difficult words than individuals who read less frequently. Practical implications are discussed.


2021 ◽  
Vol 7 (1) ◽  
Author(s):  
Merel C. Wolf ◽  
Antje S. Meyer ◽  
Caroline F. Rowland ◽  
Florian Hintz

Language users encounter words in at least two different modalities. Arguably, the most frequent encounters are in spoken or written form. Previous research has shown that – compared to the spoken modality – written language features more difficult words. An important question is whether input modality has effects on word recognition accuracy. In the present study, we investigated whether input modality (spoken, written, or bimodal) affected word recognition accuracy and whether such a modality effect interacted with word difficulty. Moreover, we tested whether the participants’ reading experience interacted with word difficulty and whether this interaction was influenced by modality. We re-analyzed data from 48 Dutch university students, collected during the development of a vocabulary test to assess in which modality test words should be presented. Participants carried out a word recognition task in which non-words and words of varying difficulty were presented in auditory, visual, and audio-visual modalities. In addition, they completed a receptive vocabulary test and an author recognition test to measure their exposure to literary texts. Our re-analyses showed that word difficulty interacted with reading experience: frequent readers (i.e., those with more exposure to written texts) were more accurate in recognizing difficult words than individuals who read less frequently. However, there was no evidence for an effect of input modality on word recognition accuracy, nor for interactions with word difficulty or reading experience. Thus, in our study, input modality did not influence word recognition accuracy. We discuss the implications of this finding and describe possibilities for future research involving other groups of participants and/or different languages.
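
The kind of accuracy model this re-analysis implies could be sketched as follows. The original study likely used (generalized) mixed-effects models with participant and item random effects; this simplified stand-in fits a plain logistic regression, and all column names are hypothetical:

```python
import statsmodels.formula.api as smf

def fit_accuracy_model(df):
    """df: one row per trial with hypothetical columns 'correct' (0/1),
    'modality' (auditory/visual/audiovisual), 'difficulty', and
    'reading_exp' (e.g., author recognition test score)."""
    # Modality x difficulty and difficulty x reading-experience interactions,
    # mirroring the questions tested in the abstract above.
    formula = "correct ~ C(modality) * difficulty + difficulty * reading_exp"
    return smf.logit(formula, data=df).fit()
```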


2018 ◽  
Vol 35 (5) ◽  
pp. 527-539 ◽  
Author(s):  
Kirk N. Olsen ◽  
William Forde Thompson ◽  
Iain Giblin

Death Metal music with violent themes is characterized by vocalizations with unnaturally low fundamental frequencies and high levels of distortion and roughness. These attributes decrease the signal-to-noise ratio, rendering linguistic content difficult to understand and leaving the impression of growling, screaming, or other non-linguistic vocalizations associated with aggression and fear. Here, we compared the ability of fans and non-fans of Death Metal to accurately perceive sung words extracted from Death Metal music. We also examined whether music training confers an additional benefit to intelligibility. In a 2 × 2 between-subjects factorial design (fans/non-fans, musicians/non-musicians), four groups of participants (n = 16 per group) were presented with 24 sung words (one per trial), extracted from recordings by the popular American Death Metal band Cannibal Corpse. On each trial, participants completed a four-alternative forced-choice word recognition task. Intelligibility (word recognition accuracy) was above chance for all groups and was significantly higher for fans (65.88%) than for non-fans (51.04%). Within the fan group, intelligibility for musicians and non-musicians was statistically similar. Within the non-fan group, intelligibility was significantly greater for musicians than for non-musicians. Results are discussed in the context of perceptual learning and the benefits of expertise for decoding linguistic information in suboptimal acoustic conditions.
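
Since the task was four-alternative forced choice, chance performance is 25%; one simple way to verify "above chance" for a single participant is an exact binomial test, as in this illustrative sketch (the score below is made up, not from the study):

```python
from scipy.stats import binomtest

n_trials = 24    # 24 sung words, one per trial
n_correct = 16   # hypothetical score for one participant

# Test whether accuracy exceeds the 4AFC chance level of 0.25.
result = binomtest(n_correct, n_trials, p=0.25, alternative="greater")
print(f"p = {result.pvalue:.4f}")  # small p -> above-chance recognition
```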


2021 ◽  
Author(s):  
Ava Kiai ◽  
Lucia Melloni

Statistical learning (SL) allows individuals to rapidly detect regularities in the sensory environment. We replicated previous findings showing that adult participants become sensitive to the implicit structure in a continuous speech stream of repeating tri-syllabic pseudowords within minutes, as measured by standard tests in the SL literature: a target detection task and a 2AFC word recognition task. Consistent with previous findings, we found only a weak correlation between these two measures of learning, leading us to question whether there is overlap between the information captured by these two tasks. Representational similarity analysis on reaction times measured during the target detection task revealed that reaction time data reflect sensitivity to transitional probability, triplet position, word grouping, and duplet pairings of syllables. However, individual performance on the word recognition task was not predicted by similarity measures derived for any of these four features. We conclude that online detection tasks provide richer and multi-faceted information about the SL process, as compared with 2AFC recognition tasks, and may be preferable for gaining insight into the dynamic aspects of SL.
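
A minimal sketch of the representational similarity logic described above: derive a dissimilarity matrix from reaction-time patterns and correlate it with a model dissimilarity matrix built from a candidate feature (e.g., transitional probability). The data here are random placeholders, not the study's:

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n_items = 12                                # e.g., syllable conditions
rt_profile = rng.normal(size=(n_items, 5))  # per-item RT pattern (placeholder)
feature = rng.normal(size=(n_items, 1))     # one candidate model feature

rt_rdm = pdist(rt_profile, metric="euclidean")  # condensed data RDM
model_rdm = pdist(feature, metric="euclidean")  # condensed model RDM

# Rank-order correlation between data and model dissimilarities.
rho, p = spearmanr(rt_rdm, model_rdm)
print(f"rho = {rho:.2f}, p = {p:.3f}")
```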


2020 ◽  
Vol 5 (2) ◽  
pp. 504
Author(s):  
Matthias Omotayo Oladele ◽  
Temilola Morufat Adepoju ◽  
Olaide Abiodun Olatoke ◽  
Oluwaseun Adewale Ojo

Yorùbá is one of the three main languages spoken in Nigeria. It is a tonal language that marks accents on its vowels. The Yorùbá alphabet has twenty-five (25) letters, one of which is a digraph (GB). Because typing handwritten Yorùbá documents is difficult, there is a need for a handwriting recognition system that can convert handwritten text to digital format. This study discusses an offline Yorùbá handwritten word recognition system (OYHWR) that recognizes Yorùbá uppercase letters. Handwritten characters and words were obtained from different writers using the paint application and M708 graphics tablets. The characters were used for training and the words were used for testing. The images were pre-processed, and geometric features were extracted using zoning and gradient-based feature extraction. Geometric features are the different line types that form a particular character, such as vertical, horizontal, and diagonal lines. The geometric features used were the number of horizontal lines, number of vertical lines, number of right-diagonal lines, number of left-diagonal lines, total length of all horizontal lines, total length of all vertical lines, total length of all right-slanting lines, total length of all left-slanting lines, and the area of the skeleton. Each character was divided into 9 zones, and gradient-based feature extraction was used to extract the horizontal and vertical components and the geometric features in each zone. The words were fed into a support vector machine classifier, and performance was evaluated based on recognition accuracy. The support vector machine is inherently a two-class classifier, so a multiclass variant, the least-squares support vector machine (LSSVM), was used for word recognition. The one-vs-one strategy and an RBF kernel were used, and the recognition accuracies obtained for the tested words were 66.7%, 83.3%, 85.7%, 87.5%, and 100%. The low recognition rate for some words could be a result of similarity in the extracted features.
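
A hedged sketch of the zoning step and the classification stage follows. The paper uses a least-squares SVM (LSSVM) with a one-vs-one strategy and an RBF kernel; scikit-learn's standard SVC (one-vs-one internally, RBF by default) stands in here, and the per-zone density feature is a simplification of the gradient and geometric features described above:

```python
import numpy as np
from sklearn.svm import SVC

def zone_features(img: np.ndarray, grid: int = 3) -> np.ndarray:
    """Split a binarized character image into grid x grid zones
    (9 zones for grid=3) and compute a simple density feature per zone."""
    h, w = img.shape
    feats = []
    for i in range(grid):
        for j in range(grid):
            zone = img[i*h//grid:(i+1)*h//grid, j*w//grid:(j+1)*w//grid]
            feats.append(zone.mean())  # fraction of foreground pixels
    return np.asarray(feats)

# Hypothetical training data: random binary "images" for 4 character
# classes, 5 samples each (placeholders, not the study's dataset).
X = np.vstack([zone_features(np.random.rand(32, 32) > 0.5)
               for _ in range(20)])
y = np.repeat(np.arange(4), 5)

clf = SVC(kernel="rbf", decision_function_shape="ovo").fit(X, y)
print(clf.predict(X[:3]))
```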


1999 ◽  
Vol 110 (8) ◽  
pp. 1378-1387 ◽  
Author(s):  
P Walla ◽  
W Endl ◽  
G Lindinger ◽  
W Lalouschek ◽  
L Deecke ◽  
...  

2005 ◽  
Vol 36 (3) ◽  
pp. 219-229 ◽  
Author(s):  
Peggy Nelson ◽  
Kathryn Kohnert ◽  
Sabina Sabur ◽  
Daniel Shaw

Purpose: Two studies were conducted to investigate the effects of classroom noise on attention and speech perception in native Spanish-speaking second graders learning English as their second language (L2) as compared to English-only-speaking (EO) peers. Method: Study 1 measured children’s on-task behavior during instructional activities with and without soundfield amplification. Study 2 measured the effects of noise (+10 dB signal-to-noise ratio) using an experimental English word recognition task. Results: Study 1 revealed no significant condition (pre- vs. post-amplification) or group differences in observed on-task behavior. The main finding from Study 2 was that word recognition performance declined significantly for both the L2 and EO groups in the noise condition; however, the impact was disproportionately greater for the L2 group. Clinical Implications: Children learning in their L2 appear to be at a distinct disadvantage when listening in rooms with typical noise and reverberation. Speech-language pathologists and audiologists should collaborate to inform teachers, help reduce classroom noise, increase signal levels, and improve access to spoken language for L2 learners.


2018 ◽  
Author(s):  
Tim Schoof ◽  
Pamela Souza

Objective: Older hearing-impaired adults typically experience difficulties understanding speech in noise. Most hearing aids address this issue using digital noise reduction. While noise reduction does not necessarily improve speech recognition, it may reduce the resources required to process the speech signal. Those available resources may, in turn, aid the ability to perform another task while listening to speech (i.e., multitasking). This study examined to what extent changing the strength of digital noise reduction in hearing aids affects the ability to multitask. Design: Multitasking was measured using a dual-task paradigm combining a speech recognition task and a visual monitoring task. The speech recognition task involved sentence recognition in the presence of six-talker babble at signal-to-noise ratios (SNRs) of 2 and 7 dB. Participants were fitted with commercially available hearing aids programmed with three noise reduction settings: off, mild, and strong. Study sample: 18 hearing-impaired older adults. Results: There were no effects of noise reduction on the ability to multitask or on the ability to recognize speech in noise. Conclusions: Adjusting noise reduction settings in the clinic may not invariably improve performance on such tasks.


2021 ◽  
pp. 1-10
Author(s):  
Ward R. Drennan

Introduction: Normal-hearing people often have complaints about the ability to recognize speech in noise. Such disabilities are not typically assessed with conventional audiometry. Suprathreshold temporal deficits might contribute to reduced word recognition in noise as well as reduced temporally based binaural release of masking for speech. Extended high-frequency audibility (>8 kHz) has also been shown to contribute to speech perception in noise. The primary aim of this study was to compare conventional audiometric measures with measures that could reveal subclinical deficits. Methods: Conventional and extended high-frequency audiometry was done with 119 normal-hearing people ranging in age from 18 to 72. The ability to recognize words in noise was evaluated with and without differences in temporally based spatial cues. A low-uncertainty, closed-set word recognition task was used to limit cognitive influences. Results: In normal-hearing listeners, the ability to recognize words in noise decreases significantly with increasing pure-tone average (PTA). On average, signal-to-noise ratios worsened by 5.7 and 6.0 dB over the normal range for the diotic and dichotic conditions, respectively. When controlling for age, a significant relationship remained in the diotic condition. Measurement error was estimated at 1.4 and 1.6 dB for the diotic and dichotic conditions, respectively. Controlling for both PTA and age, extended high-frequency PTAs (EHF-PTAs) showed significant partial correlations with SNR50 (the signal-to-noise ratio yielding 50% correct word recognition) in both conditions (ρ = 0.30 and 0.23). Temporally based binaural release of masking worsened by 1.94 dB from 18 to 72 years of age but showed no significant relationship with either PTA. Conclusions: All three assessments in this study demonstrated hearing problems independently of those observed in conventional audiometry. Considerable degradations in word recognition in noise were observed as PTAs increased within the normal range. The use of an efficient words-in-noise measure might help identify functional hearing problems in individuals who are traditionally considered normal-hearing. Extended audiometry provided additional predictive power for word recognition in noise independent of both PTA and age. Temporally based binaural release of masking for word recognition decreased with age independent of PTAs within the normal range, indicating multiple mechanisms of age-related decline with potential clinical impact.
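
As a rough illustration of the partial-correlation analysis reported above (EHF-PTA vs. SNR50, controlling for conventional PTA and age), the sketch below residualizes rank-transformed variables, since Spearman's ρ is reported; the column names and helper function are hypothetical, not taken from the paper:

```python
import numpy as np
import pandas as pd
from scipy.stats import pearsonr, rankdata

def partial_spearman(df: pd.DataFrame, x: str, y: str, covars: list):
    """Spearman partial correlation of x and y controlling for covars:
    rank-transform, regress out covariates, correlate the residuals."""
    ranks = df[[x, y] + covars].apply(rankdata)
    Z = np.column_stack([np.ones(len(df)), ranks[covars]])
    resid_x = ranks[x] - Z @ np.linalg.lstsq(Z, ranks[x], rcond=None)[0]
    resid_y = ranks[y] - Z @ np.linalg.lstsq(Z, ranks[y], rcond=None)[0]
    return pearsonr(resid_x, resid_y)  # (rho-like estimate, p-value)

# Hypothetical usage with made-up column names:
# rho, p = partial_spearman(df, x="ehf_pta", y="snr50",
#                           covars=["pta", "age"])
```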

