Closed-Set Effects in Consonant Confusion Patterns

1989 ◽  
Vol 32 (4) ◽  
pp. 944-948 ◽  
Author(s):  
Theodore S. Bell ◽  
Donald D. Dirks ◽  
Gail E. Kincaid

Invariance of error patterns in confusion matrices of varying dimensions was examined. Normal-hearing young adults were presented with closed-set arrangements of digitized syllable tokens, spoken by 1 male and 1 female talker and selected from a set of 14 consonants (stops and fricatives). Each consonant was paired with the vowel /a/ in a vowel-consonant format and presented at three intensity levels. Patterns of errors among voiceless stops and among voiced fricatives were dependent on the set of alternatives. Voiceless fricatives and voiced stops were not significantly affected by the number of response alternatives. Speaker differences, individual differences among listeners, and implications relating to the generalization of confusion data collected in small closed-set arrangements are discussed.
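
The invariance question above reduces to comparing row-normalized error distributions extracted from confusion matrices of different sizes. Below is a minimal Python sketch of one way to make that comparison; the matrices and labels are hypothetical, not the study's data, and the simple correlation check stands in for whatever invariance statistic the authors used.

```python
import numpy as np

# Hypothetical confusion matrices: rows = stimulus, columns = response,
# entries = response counts. A 4-alternative set, then a 3-alternative set.
conf_large = np.array([
    [50, 20, 25,  5],   # /p/
    [15, 60, 20,  5],   # /t/
    [20, 25, 50,  5],   # /k/
    [ 5,  5,  5, 85],   # /f/
])
conf_small = np.array([
    [55, 22, 23],       # /p/
    [18, 62, 20],       # /t/
    [21, 27, 52],       # /k/
])

def error_pattern(conf):
    """Each row's error distribution: zero the correct (diagonal) cells and
    renormalize the remaining confusions so every row sums to 1."""
    errors = conf.astype(float).copy()
    np.fill_diagonal(errors, 0.0)
    row_sums = errors.sum(axis=1, keepdims=True)
    return errors / np.where(row_sums == 0, 1.0, row_sums)

# Closed-set effect check: is the /p, t, k/ error pattern measured in the
# 4-alternative set the same as the pattern measured in the 3-alternative set?
subset = [0, 1, 2]
large = error_pattern(conf_large)[np.ix_(subset, subset)]
small = error_pattern(conf_small)
r = np.corrcoef(large.ravel(), small.ravel())[0, 1]
print(f"correlation between error patterns: {r:.2f}")
```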

1997 ◽  
Vol 28 (1) ◽  
pp. 77-85 ◽  
Author(s):  
Carole E. Johnson ◽  
Ramona L. Stein ◽  
Alicia Broadway ◽  
Tamatha S. Markwalter

The purpose of this study was to assess the consonant and vowel identification abilities of 12 children with minimal high-frequency hearing loss, 12 children with normal hearing, and 12 young adults with normal hearing, using nonsense syllables recorded in a classroom with a reverberation time of 0.7 s in two conditions: (1) quiet and (2) noise (+13 dB S/N against multi-talker babble). The young adults achieved significantly higher mean consonant and vowel identification scores than both groups of children. The children with normal hearing had significantly higher mean consonant identification scores in quiet than the children with minimal high-frequency hearing loss, but the groups' performances did not differ in noise. Further, the two groups of children did not differ in vowel identification performance. Listeners' responses to consonant stimuli were converted to confusion matrices and submitted to a sequential information analysis (SINFA; Wang & Bilger, 1973). The SINFA determined that the amount of information transmitted, both overall and for individual features, differed as a function of listener group and listening condition.
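
For readers unfamiliar with SINFA, its building block is the mean information transmitted from stimulus to response, computed from a confusion matrix (Miller & Nicely, 1955), optionally after pooling the matrix by a phonetic feature. The sketch below shows those two steps on hypothetical counts; the full sequential procedure, which iteratively partials out already-extracted features, is not reproduced here.

```python
import numpy as np

def transmitted_information(conf):
    """Mean information transmitted (bits) between stimulus and response,
    estimated from a confusion matrix of counts:
    T = sum_ij p_ij * log2(p_ij / (p_i. * p_.j))."""
    p = conf / conf.sum()
    p_stim = p.sum(axis=1, keepdims=True)   # row (stimulus) marginals
    p_resp = p.sum(axis=0, keepdims=True)   # column (response) marginals
    with np.errstate(divide="ignore", invalid="ignore"):
        terms = p * np.log2(p / (p_stim * p_resp))
    return np.nansum(terms)                 # empty cells contribute zero

def pool_by_feature(conf, feature):
    """Collapse a confusion matrix to 2x2 by a binary feature (e.g., voicing),
    the pooling step used for feature-level transmission."""
    feature = np.asarray(feature)
    pooled = np.zeros((2, 2))
    for a in (0, 1):
        for b in (0, 1):
            pooled[a, b] = conf[np.ix_(feature == a, feature == b)].sum()
    return pooled

# Hypothetical 4-consonant confusion matrix and voicing feature.
conf = np.array([[40, 8, 2, 0], [10, 35, 0, 5], [3, 0, 42, 5], [0, 4, 8, 38]])
voicing = [0, 0, 1, 1]  # first two consonants voiceless, last two voiced
print(transmitted_information(conf))                             # overall T
print(transmitted_information(pool_by_feature(conf, voicing)))   # T for voicing
```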


2019 ◽  
Vol 30 (02) ◽  
pp. 153-161 ◽  
Author(s):  
Peter Torre ◽  
Mark B. Reed

Most young adults report using personal audio systems (PAS) with earphones as part of their daily activities. PAS exposure is intermittent, and research examining the levels at which these young adults listen is increasing. On average, preferred listening levels are below what would be considered at risk in an occupational setting.

The purpose of this study was to evaluate how two questions predicted preferred listening level in young adults with normal hearing; specifically, whether these young adults could identify whether or not they listen at a high level.

One hundred and sixty young adults (111 women, 49 men) with normal hearing completed a questionnaire about PAS listening habits and then had preferred listening level assessed using a probe microphone system while listening to 1 hour of music through earphones.

Otoscopy, tympanometry, and pure-tone thresholds were completed in a randomly determined test ear. As part of the Risk Factors Survey, two closed-set questions were completed. First, "For a typical day, what is the most common volume used during this day?" with the response options "Low," "Medium/Comfortable," "Loud," or "Very Loud." Second, "Do you listen to your personal music system at a volume where you…" with the response options "Easily hear people," "Have a little trouble hearing people," "Have a lot of trouble hearing people," or "Cannot hear people." Using a probe microphone, chosen listening level (A-weighted, with a diffuse-field correction and a conversion to free-field equivalent [L_DFeq]) was calculated over 1 hour while the participant listened to music with earphones. Sensitivity and specificity were determined to see how well young adults could identify themselves as listening at a high level (>85 dBA) or not. Linear regression analyses were performed to determine the amount of variance explained by the two survey questions as predictors of measured L_DFeq.

Almost half of the participants reported a longest single use of a PAS of <1 hour daily, and more than half reported listening at a medium/comfortable volume and having a little trouble hearing people. Mean L_DFeq was 72.5 dBA, with young adult men having a significantly higher mean L_DFeq (76.5 dBA) than young adult women (70.8 dBA). Sensitivity was 88.9% and specificity was 70.6% for the question asking about volume on a typical day. For the question asking about being able to hear other people while listening to music, sensitivity was 83.3% and specificity was 82.5%. Two variables, listening volume on a typical day and sex, accounted for 28.4% of the variability associated with L_DFeq; the answer to the question about being able to hear others, together with sex, accounted for 22.8% of the variability associated with L_DFeq.

About 11% of young adults in the present study listen to a PAS with earphones at a high level (>85 dBA) in a quiet background. The participants who do listen at a high level, however, do well at self-reporting this risk behavior in survey questions.
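
The two headline statistics here, sensitivity/specificity against the >85 dBA criterion and variance explained by a survey answer, are straightforward to compute. A minimal sketch on hypothetical data follows; the study's actual models also included sex as a predictor, which is omitted here for brevity.

```python
import numpy as np

# Hypothetical participants: measured listening level (dBA) and the survey
# answer, coded 1 if the participant reported "Loud"/"Very Loud", else 0.
measured_ldfeq = np.array([68.0, 72.5, 90.1, 86.3, 74.2, 88.0, 65.9, 79.4])
reported_loud  = np.array([0,    0,    1,    1,    0,    1,    0,    1])

at_risk = measured_ldfeq > 85.0   # the >85 dBA criterion used in the study

tp = np.sum(at_risk & (reported_loud == 1))
fn = np.sum(at_risk & (reported_loud == 0))
tn = np.sum(~at_risk & (reported_loud == 0))
fp = np.sum(~at_risk & (reported_loud == 1))

sensitivity = tp / (tp + fn)   # share of true high-level listeners who said so
specificity = tn / (tn + fp)   # share of moderate listeners who said so
print(f"sensitivity={sensitivity:.1%}, specificity={specificity:.1%}")

# Variance in measured level explained by the survey answer: R^2 from an
# ordinary least-squares fit with intercept.
X = np.column_stack([np.ones_like(measured_ldfeq), reported_loud])
beta, *_ = np.linalg.lstsq(X, measured_ldfeq, rcond=None)
resid = measured_ldfeq - X @ beta
r2 = 1 - resid.var() / measured_ldfeq.var()
print(f"R^2 = {r2:.3f}")
```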


2017 ◽  
Vol 28 (03) ◽  
pp. 222-231 ◽  
Author(s):  
Riki Taitelbaum-Swead ◽  
Michal Icht ◽  
Yaniv Mama

In recent years, the effect of cognitive abilities on the achievements of cochlear implant (CI) users has been evaluated. Some studies have suggested that gaps between CI users and normal-hearing (NH) peers in cognitive tasks are modality specific and occur only in auditory tasks.

The present study focused on the effect of learning modality (auditory, visual) and auditory feedback on word memory in young adults who were prelingually deafened and received CIs before the age of 5 yr, and in their NH peers.

A production effect (PE) paradigm was used, in which participants learned familiar study words by vocal production (saying aloud) or by no production (silent reading or listening). Words were presented (1) in the visual modality (written) and (2) in the auditory modality (heard). CI users performed the visual condition twice: once with the implant ON and once with it OFF. All conditions were followed by free recall tests.

Twelve young adults, long-term CI users implanted between ages 1.7 and 4.5 yr who scored ≥50% on a monosyllabic consonant-vowel-consonant open-set test with their implants, were enrolled. A group of 14 age-matched NH young adults served as the comparison group.

For each condition, we calculated the proportion of study words recalled. Mixed-measures analyses of variance were carried out with group (NH, CI) as a between-subjects variable and learning condition (aloud or silent) as a within-subject variable. Following this, paired-sample t tests were used to evaluate the PE size (the difference between aloud and silent words) and overall recall ratios (aloud and silent words combined) in each of the learning conditions.

With visual word presentation, young adults with CIs (regardless of implant status, CI-ON or CI-OFF) showed memory performance comparable to NH peers (and a similar PE). However, with auditory presentation, young adults with CIs showed poorer memory for nonproduced words (hence a larger PE) relative to their NH peers.

The results support the construct that young adults with CIs will benefit more from learning via the visual modality (reading) than via the auditory modality (listening). Importantly, vocal production can largely improve auditory word memory, especially for the CI group.
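
The PE-size and overall-recall computations described above amount to a within-participant contrast followed by a paired t test. A minimal sketch with hypothetical recall proportions (one learning condition, eight invented participants):

```python
import numpy as np
from scipy import stats

# Hypothetical per-participant recall proportions in one learning condition:
# words produced aloud vs. studied silently, one entry per participant.
aloud  = np.array([0.55, 0.48, 0.62, 0.51, 0.58, 0.45, 0.60, 0.52])
silent = np.array([0.42, 0.40, 0.50, 0.39, 0.47, 0.35, 0.49, 0.44])

# Production-effect size: within-participant advantage for produced words,
# tested with a paired-samples t test as in the study.
pe = aloud - silent
t, p = stats.ttest_rel(aloud, silent)
print(f"mean PE = {pe.mean():.3f}, t({len(pe) - 1}) = {t:.2f}, p = {p:.4f}")

# Overall recall ratio for the condition (aloud and silent words combined).
print(f"overall recall = {np.mean((aloud + silent) / 2):.3f}")
```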


1978 ◽  
Vol 43 (2) ◽  
pp. 200-207 ◽  
Author(s):  
Grace Haugland Bargstadt ◽  
John M. Hutchinson ◽  
Michael A. Nerbonne

This investigation provides a preliminary evaluation of the use of the video articulator, a phonemic recognition device for the hearing impaired. The subjects were five young adults with normal hearing and vision (corrected) who were matched with respect to age, sex, dialect, education, and phonological sophistication. Each subject received 150 min of programmed training to learn the video configurations of the eight English fricatives, both in isolation and in consonant-vowel contexts. Following the training period, the subjects were given a test to determine adequacy of learning and retention of the video configurations for the training stimuli, in the absence of auditory cues. The subjects' responses were analyzed using a common covariance measure. The results demonstrated generally low transmission values for consonants in isolation. Moreover, identification of consonants in context was less accurate. The subjects, as a group, had greater difficulty in recognizing the productions of other subjects than in recognizing their own utterances. The clinical implications of these findings are discussed.


2021 ◽  
pp. 1-10
Author(s):  
Ward R. Drennan

Introduction: Normal-hearing people often have complaints about the ability to recognize speech in noise. Such disabilities are not typically assessed with conventional audiometry. Suprathreshold temporal deficits might contribute to reduced word recognition in noise as well as reduced temporally based binaural release of masking for speech. Extended high-frequency audibility (>8 kHz) has also been shown to contribute to speech perception in noise. The primary aim of this study was to compare conventional audiometric measures with measures that could reveal subclinical deficits.

Methods: Conventional and extended high-frequency audiometry was done with 119 normal-hearing people ranging in age from 18 to 72. The ability to recognize words in noise was evaluated with and without differences in temporally based spatial cues. A low-uncertainty, closed-set word recognition task was used to limit cognitive influences.

Results: In normal-hearing listeners, word recognition in noise ability decreases significantly with increasing pure-tone average (PTA). On average, signal-to-noise ratios worsened by 5.7 and 6.0 dB over the normal range for the diotic and dichotic conditions, respectively. When controlling for age, a significant relationship remained in the diotic condition. Measurement error was estimated at 1.4 and 1.6 dB for the diotic and dichotic conditions, respectively. Controlling for both PTA and age, EHF-PTAs showed significant partial correlations with SNR50 in both conditions (ρ = 0.30 and 0.23). Temporally based binaural release of masking worsened with age by 1.94 dB from 18 to 72 years old but showed no significant relationship with either PTA.

Conclusions: All three assessments in this study demonstrated hearing problems independently of those observed in conventional audiometry. Considerable degradations in word recognition in noise abilities were observed as PTAs increased within the normal range. The use of an efficient words-in-noise measure might help identify functional hearing problems for individuals who are traditionally considered normal hearing. Extended audiometry provided additional predictive power for word recognition in noise independent of both the PTA and age. Temporally based binaural release of masking for word recognition decreased with age independent of PTAs within the normal range, indicating multiple mechanisms of age-related decline with potential clinical impact.
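
The partial correlations reported for EHF-PTA and SNR50 control for PTA and age simultaneously. One standard way to compute such a statistic is to regress each variable on the covariates and correlate the residuals; the sketch below does exactly that on simulated data whose structure merely mimics the study's variables.

```python
import numpy as np

def partial_corr(x, y, covariates):
    """Partial correlation of x and y controlling for the covariates:
    correlate the residuals left after regressing each on the covariates."""
    Z = np.column_stack([np.ones(len(x))] + list(covariates))
    def residualize(v):
        beta, *_ = np.linalg.lstsq(Z, np.asarray(v, float), rcond=None)
        return v - Z @ beta
    return np.corrcoef(residualize(x), residualize(y))[0, 1]

# Simulated listeners; values are invented, not the study's data.
rng = np.random.default_rng(0)
n = 119
age = rng.uniform(18, 72, n)
pta = rng.normal(10, 5, n)                    # conventional PTA, dB HL
ehf_pta = 0.4 * age + rng.normal(20, 10, n)   # extended high-frequency PTA
snr50 = 0.1 * ehf_pta + rng.normal(0, 2, n)   # SNR for 50% word recognition

# EHF audibility vs. words-in-noise, controlling for both PTA and age.
print(partial_corr(ehf_pta, snr50, [pta, age]))
```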


2020 ◽  
Vol 24 ◽  
pp. 233121652093054 ◽  
Author(s):  
Tali Rotman ◽  
Limor Lavie ◽  
Karen Banai

Challenging listening situations (e.g., when speech is rapid or noisy) result in substantial individual differences in speech perception. We propose that rapid auditory perceptual learning is one of the factors contributing to those individual differences. To explore this proposal, we assessed rapid perceptual learning of time-compressed speech in young adults with normal hearing and in older adults with age-related hearing loss. We also assessed the contribution of this learning as well as that of hearing and cognition (vocabulary, working memory, and selective attention) to the recognition of natural-fast speech (NFS; both groups) and speech in noise (younger adults). In young adults, rapid learning and vocabulary were significant predictors of NFS and speech in noise recognition. In older adults, hearing thresholds, vocabulary, and rapid learning were significant predictors of NFS recognition. In both groups, models that included learning fitted the speech data better than models that did not include learning. Therefore, under adverse conditions, rapid learning may be one of the skills listeners could employ to support speech recognition.
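
The model-comparison claim, that models including rapid learning fit the speech data better than models without it, corresponds to comparing nested regression models. A minimal sketch on simulated data (variable names and effect sizes are invented, and the authors' models also included hearing and other cognitive predictors):

```python
import numpy as np

def r_squared(X, y):
    """R^2 of an ordinary least-squares fit with intercept."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

# Simulated young-adult data: predictors of natural-fast speech recognition.
rng = np.random.default_rng(1)
n = 40
vocabulary = rng.normal(0, 1, n)
rapid_learning = rng.normal(0, 1, n)
nfs_recognition = 0.5 * vocabulary + 0.4 * rapid_learning + rng.normal(0, 1, n)

# Does adding rapid learning to a vocabulary-only model improve the fit?
base = r_squared(vocabulary[:, None], nfs_recognition)
full = r_squared(np.column_stack([vocabulary, rapid_learning]), nfs_recognition)
print(f"R^2 without learning: {base:.3f}; with learning: {full:.3f}")
```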

