Psychometric function for speech-in-noise tests accounts for word-recognition deficits in older listeners

2021 ◽  
Vol 149 (4) ◽  
pp. 2337-2352
Author(s):  
Bernhard Ross ◽  
Simon Dobri ◽  
Annette Schumann

2019 ◽  
Vol 63 (2) ◽  
pp. 381-403 ◽  
Author(s):  
Giovanna Morini ◽  
Rochelle S. Newman

The question of whether bilingualism leads to advantages or disadvantages in linguistic abilities has been debated for many years. It is unclear whether growing up with one versus two languages is related to variations in the ability to process speech in the presence of background noise. We present findings from a word recognition and a word learning task with monolingual and bilingual adults. Bilinguals appear to be less accurate than monolinguals at identifying familiar words in the presence of white noise. However, the bilingual “disadvantage” identified during word recognition was not present when listeners were asked to acquire novel word-object relations trained either in noise or in quiet. This work suggests that linguistic experience and the demands associated with the type of task both play a role in listeners' ability to process speech in noise.


2019 ◽  
Vol 145 (4) ◽  
pp. EL284-EL290 ◽  
Author(s):  
Kathryn A. Sobon ◽  
Nardine M. Taleb ◽  
Emily Buss ◽  
John H. Grose ◽  
Lauren Calandruccio

2014 ◽  
Vol 35 (2) ◽  
pp. 236-245 ◽  
Author(s):  
Lu-Feng Shi ◽  
Nancy A. Zaki

2014 ◽  
Vol 57 (1) ◽  
pp. 327-337 ◽  
Author(s):  
Mallory Baker ◽  
Emily Buss ◽  
Adam Jacks ◽  
Crystal Taylor ◽  
Lori J. Leibold

Purpose: This study evaluated the degree to which children benefit from the acoustic modifications made by talkers when they produce speech in noise. Method: A repeated measures design compared the speech perception performance of children (5–11 years) and adults in a 2-talker masker. Target speech was produced in a 2-talker background or in quiet. In Experiment 1, recognition with the 2 target sets was assessed using an adaptive spondee identification procedure. In Experiment 2, the benefit of speech produced in a 2-talker background was assessed using an open-set, monosyllabic word recognition task at a fixed signal-to-noise ratio (SNR). Results: Children performed more poorly than adults, regardless of whether the target speech was produced in quiet or in a 2-talker background. A small improvement in the SNR required to identify spondees was observed for both children and adults using speech produced in a 2-talker background (Experiment 1). Similarly, average open-set word recognition scores were 11 percentage points higher for both age groups using speech produced in a 2-talker background compared with quiet (Experiment 2). Conclusion: The results indicate that children can use the acoustic modifications of speech produced in a 2-talker background to improve masked speech perception, as previously demonstrated for adults.
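
The adaptive procedure in Experiment 1 converges on the SNR needed for a criterion level of spondee identification by making the task harder after correct responses and easier after errors. Below is a minimal sketch of a generic 1-down/1-up track of this kind, with a simulated listener standing in for real responses; the starting SNR, step size, stopping rule, and listener model are illustrative assumptions, not the authors' protocol.

```python
# Minimal sketch of a generic 1-down/1-up adaptive SNR track
# (converges on ~50% correct). All parameters are illustrative.
import random

def present_trial(snr_db):
    """Placeholder listener: in a real test this would play a spondee
    in the masker at snr_db and score the response."""
    p_correct = 1.0 / (1.0 + 10 ** (-(snr_db + 8.0) / 6.0))
    return random.random() < p_correct

def adaptive_snr_track(start_snr=10.0, step=2.0, n_reversals=8):
    snr, direction, reversals = start_snr, 0, []
    while len(reversals) < n_reversals:
        new_direction = -1 if present_trial(snr) else +1  # harder if correct
        if direction != 0 and new_direction != direction:
            reversals.append(snr)  # track direction flipped: a reversal
        direction = new_direction
        snr += new_direction * step
    # Threshold estimate: mean SNR at the later reversals
    return sum(reversals[2:]) / len(reversals[2:])

print(f"estimated SNR for criterion performance: {adaptive_snr_track():.1f} dB")
```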


2021 ◽  
Author(s):  
Joel I. Berger ◽  
Phillip E. Gander ◽  
Subong Kim ◽  
Adam T. Schwalje ◽  
Jihwan Woo ◽  
...  

Objectives: Understanding speech in noise (SiN) is a complex task that recruits multiple cortical subsystems. Individuals vary in their ability to understand SiN. This cannot be explained by simple peripheral hearing profiles, but recent work by our group (Kim et al., 2021, Neuroimage) highlighted central neural factors underlying the variance in SiN ability in normal-hearing (NH) subjects. The current study examined neural predictors of SiN ability in a large cohort of cochlear-implant (CI) users, with the long-term goal of developing a simple electrophysiological correlate that could be implemented in clinics. Design: We recorded electroencephalography (EEG) in 114 post-lingually deafened CI users while they completed the California Consonant Test (CCT), a word-in-noise task. In many subjects, data were also collected on two other commonly used clinical measures of speech perception: a word-in-quiet task (Consonant-Nucleus-Consonant [CNC] words) and a sentence-in-noise task (AzBio sentences). Neural activity was assessed at a single vertex electrode (Cz) to maximize generalizability to clinical situations. The N1-P2 complex of event-related potentials (ERPs) at this location was included in multiple linear regression analyses, along with several other demographic and hearing factors, as predictors of SiN performance. Results: In general, there was good agreement between the scores on the three speech perception tasks. ERP amplitudes did not predict AzBio performance, which was instead predicted by the duration of device use, low-frequency hearing thresholds, and age. However, ERP amplitudes were strong predictors of performance on both word recognition tasks: the CCT (which was conducted simultaneously with the EEG recording) and the CNC (conducted offline). These correlations held even after accounting for known predictors of performance, including residual low-frequency hearing thresholds. In CI users, better performance was predicted by an increased cortical response to the target word, in contrast to previous reports in normal-hearing subjects, in whom speech perception ability was accounted for by the ability to suppress noise. Conclusions: These data indicate a neurophysiological correlate of SiN performance that can be relatively easily captured within the clinic, thereby revealing a richer profile of an individual’s hearing performance than shown by psychoacoustic measures alone. These results also highlight important differences between sentence and word recognition measures of performance and suggest that individual differences in these measures may be underwritten by different mechanisms. Finally, the contrast with prior reports of NH listeners on the same task suggests that CI users’ performance may be explained by a different weighting of neural processes than in NH listeners.
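
The regression approach described, predicting a behavioral word-in-noise score from the Cz N1-P2 amplitude alongside demographic and hearing factors, can be sketched as follows. The data file and column names are hypothetical placeholders, not the study's materials.

```python
# Sketch of a multiple linear regression of the kind described:
# an ERP amplitude plus demographic/hearing predictors of a word-in-noise
# score. Column names are assumed, illustrative placeholders.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("ci_users.csv")  # assumed: one row per CI user

model = smf.ols(
    "cct_score ~ n1p2_amplitude + age + device_use_years + low_freq_threshold",
    data=df,
).fit()
# The coefficient on n1p2_amplitude tests whether the cortical response
# predicts performance after accounting for the other factors.
print(model.summary())
```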


Author(s):  
Mohammad Ebrahim Mahdavi ◽  
Atefeh Rabiei

Background and Aim: Evaluation of the word recognition score requires multiple lists that are similar in difficulty. No such word lists currently exist for the Persian language. The aim of this study was to construct several lists of Persian monosyllabic words with psychometric homogeneity. Methods: The most common monosyllabic words were collected from a book of Persian word frequencies. The selected monosyllabic consonant-vowel-consonant (CVC) words were presented in random order to 30 normal-hearing participants aged 18 to 25 years. The presentation level ranged from 0 to 40 dB in 8 dB increments. The characteristics of the psychometric function were determined for each word using logistic regression. Results: The Persian CVC monosyllabic words had different difficulty levels, with thresholds varying from 2.8 to 37.2 dB HL and slopes from 2.3 to 16.4%/dB. Conclusion: The final result of the present study is three full lists of monosyllabic words with a CVC syllabic structure that have the same mean threshold and slope of the psychometric function. The 25-word half-lists of each full list are also similar in their psychometric characteristics.
Keywords: psychometric function; Persian monosyllabic words; speech audiometry
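
The per-word threshold and slope values reported here come from logistic fits to proportion-correct data across presentation levels. A minimal sketch of such a fit, assuming illustrative data and scipy's curve_fit rather than the authors' actual analysis:

```python
# Sketch: fit a logistic psychometric function to one word's
# proportion-correct data across presentation levels. Data are invented.
import numpy as np
from scipy.optimize import curve_fit

def logistic(level, threshold, k):
    """Proportion correct vs. presentation level.
    threshold = level at 50% correct; k = growth rate (per dB)."""
    return 1.0 / (1.0 + np.exp(-k * (level - threshold)))

# Levels 0-40 dB in 8 dB steps, with hypothetical proportions correct
levels = np.array([0.0, 8.0, 16.0, 24.0, 32.0, 40.0])
p_correct = np.array([0.03, 0.10, 0.43, 0.80, 0.97, 1.00])

(threshold, k), _ = curve_fit(logistic, levels, p_correct, p0=[20.0, 0.3])

# The logistic is steepest at the 50% point, where its slope is k/4
# in proportion/dB, i.e. 25*k in %/dB -- the metric in the abstract.
slope_pct_per_db = 25.0 * k
print(f"threshold = {threshold:.1f} dB, slope = {slope_pct_per_db:.1f} %/dB")
```

With real data, the trial-level binary responses would typically be fit directly (binomial logistic regression), but the threshold/slope parameterization used to match lists is the same.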


2020 ◽  
Vol 29 (4) ◽  
pp. 916-929
Author(s):  
Yihsin Tai ◽  
Fatima T. Husain

Purpose: Difficulties in speech-in-noise understanding are often reported in individuals with tinnitus. Building on our previous findings that speech-in-noise performance is correlated with subjective loudness of tinnitus, this study aimed to investigate the effect of tinnitus pitch on consonant recognition in noise. Method: Pure-tone audiometry and the Quick Speech-in-Noise Test were conducted on 66 participants categorized into four groups by their hearing sensitivity and self-report of tinnitus. Consonant recognition scores at various frequency ranges were obtained at the 5 dB SNR condition of the Quick Speech-in-Noise Test. Participants with tinnitus also completed a tinnitus pitch-matching procedure. Correlation analyses were conducted between tinnitus pitch and the frequency of the worst consonant recognition, and error rates were compared by word position within the sentence. Results: Regardless of hearing sensitivity, tinnitus pitch did not correlate with the frequency of the worst consonant recognition. Sentence-initial word recognition was affected by hearing loss, whereas sentence-final word recognition was affected by neither hearing loss nor tinnitus. In contrast to individuals with normal hearing, participants with hearing loss varied in full-sentence recognition, with those reporting tinnitus exhibiting significantly higher error rates. Conclusions: The findings suggest that the effect of tinnitus on consonant recognition in noise may involve higher-level functions more than perceptual characteristics of tinnitus. Furthermore, for individuals with speech-in-noise concerns, clinical evaluation should address both hearing sensitivity and the presence of tinnitus. Future speech-in-noise studies should incorporate cognitive tests and, possibly, brain imaging to parse out the contribution of cognitive factors, such as cognitive control, to speech-in-noise understanding in tinnitus.
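
The pitch analysis described reduces to a correlation between each participant's matched tinnitus pitch and the frequency region of their worst consonant recognition. A minimal sketch with invented values; the log-frequency scaling is our assumption, not stated in the abstract.

```python
# Sketch: correlate tinnitus pitch with the frequency band of worst
# consonant recognition, per participant. Values are illustrative only.
import numpy as np
from scipy.stats import pearsonr

tinnitus_pitch_hz = np.array([4000, 6000, 8000, 3000, 6000, 8000])
worst_band_hz = np.array([2000, 4000, 3000, 4000, 2000, 3000])

# Frequencies compared on a log scale (an assumption on our part)
r, p = pearsonr(np.log2(tinnitus_pitch_hz), np.log2(worst_band_hz))
print(f"r = {r:.2f}, p = {p:.3f}")
```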


2017 ◽  
Vol 7 (1) ◽  
Author(s):  
Mohsin Ahmed Shaikh ◽  
Lisa Fox-Thomas ◽  
Denise Tucker

This study aimed to investigate differences between ears in performance on a monaural word-recognition-in-noise test among individuals across a broad range of ages assessed for (C)APD. Word recognition scores in quiet and in speech noise were collected retrospectively from the medical files of 107 individuals between the ages of 7 and 30 years who were diagnosed with (C)APD. No ear advantage was found on the word-recognition-in-noise task for age groups under 10 years; performance in both ears was equally poor. Right-ear performance improved across age groups, with scores of individuals above age 10 years falling within the normal range. In contrast, left-ear performance remained essentially stable, and in the impaired range, across all age groups. These findings indicate poor left-hemisphere dominance for speech perception in noise in children below the age of 10 years with (C)APD, whereas a right-ear advantage on this monaural speech-in-noise task was observed for individuals 10 years and older.


2008 ◽  
Vol 19 (06) ◽  
pp. 507-518 ◽  
Author(s):  
Rachel McArdle ◽  
Richard H. Wilson

Purpose: To analyze the 50% correct recognition data from the Wilson et al (this issue) study, obtained from 24 listeners with normal hearing, and to examine whether acoustic, phonetic, or lexical variables can predict recognition performance for monosyllabic words presented in speech-spectrum noise. Research Design: The specific variables were as follows: (a) acoustic variables (i.e., effective root-mean-square sound pressure level and duration), (b) phonetic variables (i.e., consonant features such as manner, place, and voicing for initial and final phonemes; vowel phonemes), and (c) lexical variables (i.e., word frequency, word familiarity, neighborhood density, and neighborhood frequency). Data Collection and Analysis: This descriptive, correlational study examined the influence of acoustic, phonetic, and lexical variables on speech-recognition-in-noise performance. Results: Regression analysis demonstrated that 45% of the variance in the 50% point was accounted for by acoustic and phonetic variables, whereas only 3% of the variance was accounted for by lexical variables. These findings suggest that monosyllabic word recognition in noise is more dependent on bottom-up processing than on top-down processing. Conclusions: The results suggest that when speech-in-noise testing is used in a pre- and post-hearing-aid-fitting format, monosyllabic words may be sensitive to changes in audibility resulting from amplification.
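
The variance-partitioning result (45% for acoustic/phonetic predictors vs. 3% for lexical predictors) comes from regressing each word's 50% point on the two predictor sets. A minimal sketch of that comparison with hypothetical predictor columns, not the authors' data:

```python
# Sketch: compare variance in each word's 50% point explained by
# acoustic/phonetic vs. lexical predictors. Column names are assumed.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("word_50pct_points.csv")  # assumed: one row per word

r2_acoustic_phonetic = smf.ols(
    "snr_50 ~ rms_level + duration + C(initial_manner) + C(vowel)", data=df
).fit().rsquared
r2_lexical = smf.ols(
    "snr_50 ~ word_frequency + familiarity + neighborhood_density", data=df
).fit().rsquared

print(f"acoustic+phonetic R^2 = {r2_acoustic_phonetic:.2f}, "
      f"lexical R^2 = {r2_lexical:.2f}")
```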

