Journal of the American Academy of Audiology
Latest Publications


TOTAL DOCUMENTS: 1792 (five years: 312)
H-INDEX: 50 (five years: 4)
Published by: American Academy of Audiology
ISSN: 2157-3107, 1050-0545

Author(s): Mariana Lopes Martins, Melyssa Kellyane Cavalcanti Galdino, Bernardino Fernández Calvo, Fátima Cristina Alves Branco-Barreiro, Thiago Monteiro Paiva Fernandes, ...

Background: Psychiatric conditions are common in individuals with tinnitus, and the ways individuals cope with such conditions, as well as their personality, can influence the characteristics of tinnitus. Purpose: The study aims to investigate the direct and indirect effects of resilience, personality traits, and psychiatric symptoms on tinnitus perception. Research Design: Descriptive, cross-sectional, and observational field study involving quantitative results. Study Sample: Thirty-seven individuals with chronic tinnitus for more than six months who sought a tinnitus care service (mean age = 44.6 years; SD = 11.7 years). Data Collection and Analysis: A tinnitus-specific anamnesis, the Adult Self-Report, the Resilience Scale, the Big Five Inventory, the Tinnitus Handicap Inventory (M = 45.0; SD = 24.1), and a visual analog scale (M = 6.4; SD = 2.7) were used. Psychoacoustic measurements of tinnitus (loudness: M = 25.4; SD = 12.8) were performed to characterize the condition in terms of pitch and loudness. The study analyzed the relationships between tinnitus (annoyance, severity, and loudness), psychiatric symptoms, personality, and resilience. Results: Resilience did not influence tinnitus severity (BCa: -1.12 to 0.51), annoyance (BCa: -0.10 to 0.11), or loudness (BCa: -0.44 to 0.28) when mediated by anxiety and depression. There was only a direct effect of resilience on annoyance (t = -2.14, p = 0.03; BCa: -0.10 to 0.11). There was no direct influence of anxiety and depression on tinnitus severity (b = 0.53, p > 0.05), annoyance (b = -0.01, p > 0.05), or loudness (b = 0.11, p > 0.05). However, there was an association of personality traits (neuroticism) with tinnitus severity (b = 1.16, 95% CI: 0.15-2.17; t = 2.53, p = 0.02) and annoyance (b = 0.12, 95% CI: 0.003-0.24; t = 2.09, p = 0.04). Conclusions: Resilience and psychiatric symptoms had no indirect influence on tinnitus annoyance, severity, or loudness; the only effects observed were a direct association between resilience and annoyance and an association of the neuroticism trait with tinnitus annoyance and severity. Our results suggest that it is essential to identify patients with high neuroticism so that personalized treatment can be developed.
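The results above are based on a mediation analysis with bias-corrected and accelerated (BCa) bootstrap confidence intervals for the indirect effects. The sketch below shows how an indirect (mediated) effect can be bootstrapped; it uses simple percentile intervals rather than the BCa correction, and the variable names (resilience, anxiety/depression, severity) and data are illustrative, not taken from the study.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)

def indirect_effect(x, m, y):
    """a*b indirect effect: x -> m (path a), then m -> y controlling for x (path b)."""
    a = sm.OLS(m, sm.add_constant(x)).fit().params[1]
    b = sm.OLS(y, sm.add_constant(np.column_stack([x, m]))).fit().params[2]
    return a * b

def bootstrap_ci(x, m, y, n_boot=5000, alpha=0.05):
    """Percentile bootstrap interval for the indirect effect (BCa correction omitted)."""
    n = len(x)
    est = np.empty(n_boot)
    for i in range(n_boot):
        idx = rng.integers(0, n, n)            # resample cases with replacement
        est[i] = indirect_effect(x[idx], m[idx], y[idx])
    return np.percentile(est, [100 * alpha / 2, 100 * (1 - alpha / 2)])

# Illustrative (made-up) data: resilience -> anxiety/depression -> tinnitus severity
n = 37
resilience = rng.normal(size=n)
anxiety_depression = -0.4 * resilience + rng.normal(size=n)
severity = 0.5 * anxiety_depression + rng.normal(size=n)

print(bootstrap_ci(resilience, anxiety_depression, severity))
```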


Author(s): Jace Wolfe, Mila Duke, Sharon Miller, Erin Schafer, Christine Jones, ...

Background: For children with hearing loss, the primary goal of hearing aids is to provide improved access to the auditory environment within the limits of hearing aid technology and the child’s auditory abilities. However, there are limited data examining aided speech recognition at very low (40 dBA) and low (50 dBA) presentation levels. Purpose: Given the paucity of studies exploring aided speech recognition at low presentation levels for children with hearing loss, the present study aimed to 1) compare aided speech recognition at different presentation levels between groups of children with normal hearing and children with hearing loss, 2) explore the effects of aided pure-tone average (PTA) and aided Speech Intelligibility Index (SII) on aided speech recognition at low presentation levels for children with hearing loss ranging in degree from mild to severe, and 3) evaluate the effect of increasing low-level gain on aided speech recognition of children with hearing loss. Research Design: In phase 1 of this study, a two-group, repeated-measures design was used to evaluate differences in speech recognition. In phase 2, a single-group, repeated-measures design was used to evaluate the potential benefit of additional low-level hearing aid gain for low-level aided speech recognition of children with hearing loss. Study Sample: The first phase of the study included 27 school-age children with mild to severe sensorineural hearing loss and 12 school-age children with normal hearing. The second phase included eight children with mild to moderate sensorineural hearing loss. Intervention: Prior to the study, children with hearing loss were fitted binaurally with digital hearing aids. Children in the second phase were fitted binaurally with digital study hearing aids and completed a trial period with two different gain settings: 1) the gain required to match hearing aid output to prescriptive targets (i.e., the primary program), and 2) a 6-dB increase in overall gain for low-level inputs relative to the primary program. In both phases of this study, real-ear verification measures were completed to ensure the hearing aid output matched prescriptive targets. Data Collection and Analysis: Phase 1 included monosyllabic word recognition and syllable-final plural recognition at three presentation levels (40, 50, and 60 dBA). Phase 2 compared speech recognition performance for the same test measures and presentation levels with the two differing gain prescriptions. Results and Conclusions: In phase 1 of the study, aided speech recognition was significantly poorer in children with hearing loss at all presentation levels. Higher aided SII in the better ear (55 dB SPL input) was associated with higher consonant-nucleus-consonant (CNC) word recognition at the 40 dBA presentation level. In phase 2, increasing the hearing aid gain for low-level inputs provided a significant improvement in syllable-final plural recognition at very low-level inputs and resulted in a non-significant trend toward better monosyllabic word recognition at very low presentation levels. Additional research is needed to document the speech recognition difficulties children with hearing aids may experience with low-level speech in the real world, as well as the potential benefit or detriment of providing additional low-level hearing aid gain.


Author(s): Marc Brennan, Ryan McCreery, John Massey

Background: Adults and children with sensorineural hearing loss (SNHL) have trouble understanding speech in rooms with reverberation when using hearing aid amplification. While the use of amplitude compression signal processing in hearing aids may contribute to this difficulty, there is conflicting evidence on the effects of amplitude compression settings on speech recognition. Less clear is the effect of a fast release time for adults and children with SNHL when using compression ratios derived from a prescriptive procedure. Purpose: To determine whether release time affects speech recognition in reverberation for children and adults with SNHL, and to determine whether these effects of release time and reverberation can be predicted using indices of audibility or of temporal and spectral distortion. Research Design: A quasi-experimental cohort study. Participants used a hearing aid simulator set to the Desired Sensation Level algorithm m[i/o] for three different amplitude compression release times. Reverberation was simulated using three different reverberation times. Participants: Participants were 20 children and 16 adults with SNHL. Data Collection and Analyses: Participants were seated in a sound-attenuating booth, and nonsense syllable recognition was measured. Predictions of speech recognition were made using indices of audibility, temporal distortion, and spectral distortion, and the effects of release time and reverberation were analyzed using linear mixed models. Results: While nonsense syllable recognition decreased in reverberation, release time did not significantly affect nonsense syllable recognition. Participants with lower audibility were more susceptible to the negative effect of reverberation on nonsense syllable recognition. Conclusions: We have extended previous work on the effects of reverberation on aided speech recognition to children with SNHL. Variations in release time did not affect the understanding of speech. An index of audibility best predicted nonsense syllable recognition in reverberation; clinically, these results suggest that patients with less audibility are more susceptible to the effects of reverberation on nonsense syllable recognition.
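The abstract above describes analyzing the effects of release time and reverberation with linear mixed models. A minimal sketch of that kind of model in Python with statsmodels follows; the column names, simulated scores, and simple random-intercept structure are assumptions for illustration, not the authors' actual data or model specification.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Illustrative long-format data: one row per participant x condition
# (these simulated scores are assumptions, not the study's data).
rng = np.random.default_rng(0)
subjects = np.repeat(np.arange(36), 9)                        # 36 listeners, 9 conditions each
release = np.tile(np.repeat(["fast", "medium", "slow"], 3), 36)
reverb = np.tile([0.0, 0.5, 1.0], 108)                        # reverberation time in seconds
score = 70 - 15 * reverb + rng.normal(0, 5, size=subjects.size)

df = pd.DataFrame({"subject": subjects, "release_time": release,
                   "reverb_time": reverb, "score": score})

# Linear mixed model: fixed effects of release time and reverberation,
# with a random intercept per participant (repeated measures).
model = smf.mixedlm("score ~ C(release_time) * reverb_time", df, groups=df["subject"])
print(model.fit().summary())
```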


Author(s): Susan Gordon-Hickey, Melinda Freyaldenhoven Bryan

Abstract Background: The acceptable noise level (ANL) is the maximum level of background noise that an individual is willing to accept while listening to speech. The type of background noise does not affect ANL results except for music (Gordon-Hickey & Moore, 2007; Nàbĕlek et al., 1991). Purpose: The purpose of this study was to determine whether ANL differs by music genre or music genre preference. Research Design: A repeated-measures experimental design was employed. Study Sample: Thirty-three young adults with normal hearing served as listeners. Data Collection and Analysis: The most comfortable listening level (MCL) and background noise level (BNL) were measured with twelve-talker babble and five music samples from different genres: blues, classical, country, jazz, and rock. Additionally, music preference was evaluated via rank ordering of genres and by completion of the STOMP questionnaire. Results: Results indicated that ANL for music differed based on music genre; however, the difference was unrelated to music genre preference. Also, those with low ANLs tended to prefer the Intense and Rebellious music-preference dimension compared with those with high ANLs. Conclusions: For instrumental music, ANL was lower for blues and rock music than for classical, country, and jazz. The differences identified were not related to music genre preference; however, this finding may be related to the Intense and Rebellious music-preference dimension. Future work should evaluate the psychological variables that make up the music-preference dimensions to determine whether they relate to ANL.
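ANL is conventionally computed as the difference between the most comfortable listening level and the highest acceptable background noise level (ANL = MCL − BNL). A minimal sketch with made-up values, not the study's data:

```python
# Acceptable noise level (ANL) is conventionally defined as MCL minus BNL;
# lower ANLs mean the listener accepts relatively more background noise.
def acceptable_noise_level(mcl_db: float, bnl_db: float) -> float:
    return mcl_db - bnl_db

# Illustrative (made-up) values for one listener and one background signal:
mcl = 55.0   # most comfortable listening level for speech, dB HL
bnl = 48.0   # highest acceptable background noise level, dB HL
print(acceptable_noise_level(mcl, bnl))  # ANL = 7 dB
```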


2021, Vol. 32 (08), pp. 521-527
Author(s): Yang-Soo Yoon, George Whitaker, Yune S. Lee

Abstract Background Cochlear implant technology allows acoustic and electric stimulation to be combined across ears (bimodal) or within the same ear (electric acoustic stimulation [EAS]). The mechanisms used to integrate speech acoustics may differ between bimodal and EAS hearing, and the configuration of hearing loss might be an important factor for that integration. Thus, it is important to differentiate the effects of different configurations of hearing loss on the bimodal or EAS benefit in speech perception (the difference in performance between combined acoustic and electric stimulation and the better stimulation alone). Purpose Using acoustic simulation, we determined how consonant recognition was affected by different configurations of hearing loss in bimodal and EAS hearing. Research Design A mixed design was used with one between-subject variable (simulated bimodal group vs. simulated EAS group) and one within-subject variable (acoustic stimulation alone, electric stimulation alone, and combined acoustic and electric stimulation). Study Sample Twenty adult subjects with normal hearing (10 per group) were recruited. Data Collection and Analysis Consonant recognition was measured unilaterally or bilaterally in quiet. For the acoustic stimulation, four different simulations of hearing loss were created by band-pass filtering consonants with a fixed lower cutoff frequency of 100 Hz and one of four upper cutoff frequencies: 250, 500, 750, and 1,000 Hz. For the electric stimulation, an eight-channel noise vocoder was used to generate a typical spectral mismatch, using fixed input (200–7,000 Hz) and output (1,000–7,000 Hz) frequency ranges. The effects of simulated hearing loss on consonant recognition were compared between the two groups. Results Significant bimodal and EAS benefits occurred regardless of the configuration of hearing loss and hearing technology (bimodal vs. EAS). Place information was transmitted better in EAS hearing than in bimodal hearing. Conclusion These results suggest that the configuration of hearing loss is not a significant factor in integrating consonant information between acoustic and electric stimulation. The results also suggest that the mechanisms used to integrate consonant information may be similar between bimodal and EAS hearing.
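The electric-stimulation condition above is generated with an eight-channel noise vocoder whose output bands are shifted relative to its analysis bands. A generic sketch of such a vocoder follows; the filter order, log-spaced band edges, and Hilbert-envelope extraction are assumptions for illustration, not the authors' exact implementation.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def noise_vocoder(x, fs, n_channels=8, in_range=(200, 7000), out_range=(1000, 7000)):
    """Simple noise vocoder: analysis bands span in_range, and each band's
    temporal envelope is re-imposed on a noise carrier filtered into a band
    spanning out_range, producing the kind of spectral mismatch described above."""
    def edges(lo, hi, n):
        return np.geomspace(lo, hi, n + 1)          # log-spaced band edges

    def bandpass(sig, lo, hi):
        sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        return sosfiltfilt(sos, sig)

    in_edges = edges(*in_range, n_channels)
    out_edges = edges(*out_range, n_channels)
    noise = np.random.default_rng(0).standard_normal(len(x))

    y = np.zeros(len(x))
    for ch in range(n_channels):
        band = bandpass(x, in_edges[ch], in_edges[ch + 1])
        env = np.abs(hilbert(band))                 # temporal envelope of the band
        carrier = bandpass(noise, out_edges[ch], out_edges[ch + 1])
        y += env * carrier
    return y / (np.max(np.abs(y)) + 1e-12)          # normalize to avoid clipping
```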


2021, Vol. 32 (08), pp. 547-554
Author(s): Soha N. Garadat, Ana'am Alkharabsheh, Nihad A. Almasri, Abdulrahman Hagr

Abstract Background Speech audiometry materials are widely available in many different languages. However, there are no known standardized materials for the assessment of speech recognition in Arabic-speaking children. Purpose The aim of the study was to develop and validate phonetically balanced and psychometrically equivalent monosyllabic word recognition lists for children through a picture identification task. Research Design A prospective repeated-measures design was used. Monosyllabic words were chosen from children's storybooks and evaluated for familiarity. The selected words were then divided into four phonetically balanced word lists. The final lists were evaluated for homogeneity and equivalency. Study Sample Ten adults and 32 children with normal hearing sensitivity were recruited. Data Collection and Analyses Lists were presented to the adult subjects in 5 dB increments from 0 to 60 dB hearing level. Individual data were then fitted with a sigmoid function from which the 50% thresholds, slopes at the 50% point, and slopes between the 20 and 80% points were derived to determine list psychometric properties. Lists were next presented to the children in two separate sessions to assess their equivalency, validity, and reliability. Data were subjected to a mixed-design analysis of variance. Results No statistically significant difference was found among the word lists. Conclusion This study provided evidence that the monosyllabic word lists have comparable psychometric characteristics and reliability. This supports the constructed speech corpus as a valid tool for assessing speech recognition in Arabic-speaking children.
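The psychometric analysis above fits a sigmoid function to percent-correct scores as a function of presentation level and derives the 50% threshold and slope. A minimal sketch of such a fit, using made-up data rather than the study's:

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(level_db, threshold_db, slope):
    """Sigmoid psychometric function: percent correct vs. presentation level."""
    return 100.0 / (1.0 + np.exp(-slope * (level_db - threshold_db)))

# Illustrative (made-up) data: percent-correct word recognition per level
levels = np.arange(0, 65, 5)   # dB HL, 5 dB increments from 0 to 60
pct_correct = np.array([0, 2, 5, 12, 25, 42, 60, 75, 86, 93, 97, 99, 100])

(threshold, slope), _ = curve_fit(logistic, levels, pct_correct, p0=[30.0, 0.2])
print(f"50% threshold ≈ {threshold:.1f} dB HL")
# Slope of the fitted function at the 50% point is 100*slope/4, in %/dB:
print(f"slope at 50% ≈ {25 * slope:.1f} %/dB")
```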


2021, Vol. 32 (08), pp. 528-536
Author(s): Jessica H. Lewis, Irina Castellanos, Aaron C. Moberly

Abstract Background Recent models theorize that neurocognitive resources are deployed differently during speech recognition depending on task demands, such as the severity of degradation of the signal or the modality (auditory vs. audiovisual [AV]). This concept is particularly relevant to the adult cochlear implant (CI) population, considering the large variability among CI users in their spectro-temporal processing abilities. However, disentangling the effects of individual differences in spectro-temporal processing and neurocognitive skills on speech recognition in clinical populations of adult CI users is challenging. Thus, this study investigated the relationship between neurocognitive functions and recognition of spectrally degraded speech in a group of young adult normal-hearing (NH) listeners. Purpose The aim of this study was to manipulate the degree of spectral degradation and the modality of speech presented to young adult NH listeners to determine whether the deployment of neurocognitive skills would be affected. Research Design Correlational study design. Study Sample Twenty-one NH college students. Data Collection and Analysis Participants listened to sentences in three spectral-degradation conditions: no degradation (clear sentences), moderate degradation (8-channel noise-vocoded), and high degradation (4-channel noise-vocoded). Thirty sentences were presented in an auditory-only (A-only) modality and in an AV modality. Visual assessments from the National Institutes of Health Toolbox Cognitive Battery were completed to evaluate working memory, inhibition-concentration, cognitive flexibility, and processing speed. Analyses of variance compared speech recognition performance across spectral-degradation conditions and modalities. Bivariate correlation analyses were performed between speech recognition performance and the neurocognitive measures in the various test conditions. Results Main effects on sentence recognition were found for degree of degradation (p < 0.001) and modality (p < 0.001). Inhibition-concentration skills correlated moderately (r = 0.45, p = 0.02) with recognition scores for sentences that were moderately degraded in the A-only condition. No correlations were found between neurocognitive scores and AV speech recognition scores. Conclusions Inhibition-concentration skills are deployed differentially during sentence recognition, depending on the level of signal degradation. Additional studies will be required to examine these relations in actual clinical populations such as adult CI users.
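The correlational analysis above relates neurocognitive scores to sentence recognition within each test condition. A minimal sketch of one such bivariate correlation, using made-up scores rather than the study's data:

```python
import numpy as np
from scipy.stats import pearsonr

# Illustrative (made-up) scores for one condition, not the study's data:
inhibition_concentration = np.array([45, 52, 38, 60, 49, 55, 41, 58, 50, 47])
sentence_recognition_pct = np.array([62, 70, 55, 78, 66, 72, 58, 75, 68, 64])

# Pearson correlation between the neurocognitive measure and recognition scores
r, p = pearsonr(inhibition_concentration, sentence_recognition_pct)
print(f"r = {r:.2f}, p = {p:.3f}")
```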


2021, Vol. 32 (08), pp. 478-486
Author(s): Lisa G. Potts, Soo Jang, Cory L. Hillis

Abstract Background For cochlear implant (CI) recipients, speech recognition in noise is consistently poorer than recognition in quiet. Directional processing improves performance in noise and can be activated automatically based on acoustic scene analysis. The use of adaptive directionality with CI recipients is new and has not been investigated thoroughly, especially with the recipients' preferred everyday signal processing, dynamic range, and/or noise reduction. Purpose This study utilized CI recipients' preferred everyday signal processing to evaluate four directional microphone options in a noisy environment and determine which option provides the best speech recognition in noise. A greater understanding of automatic directionality could ultimately improve CI recipients' speech-in-noise performance and better guide clinicians in programming. Study Sample Twenty-six unilateral and seven bilateral CI recipients with a mean age of 66 years and approximately 4 years of CI experience were included. Data Collection and Analysis Speech-in-noise performance was measured using eight loudspeakers in a 360-degree array, with HINT sentences presented in restaurant noise. Four directional options were evaluated (automatic [SCAN], adaptive [Beam], fixed [Zoom], and Omni-directional) with the participants' everyday signal processing options active. A mixed-model analysis of variance (ANOVA) and pairwise comparisons were performed. Results Automatic directionality (SCAN) resulted in the best speech-in-noise performance, although it was not significantly better than Beam. Omni-directional performance was significantly poorer than that of the three other directional options. The number of participants who performed best with each of the four directional options varied, with 16 performing best with automatic directionality. The majority of participants did not perform best with their everyday directional option. Conclusion The individual variability seen in this study suggests that CI recipients should try different directional options to find their ideal program. However, for recipients who are not motivated to try different programs, automatic directionality is an appropriate everyday processing option.
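The analysis above pairs a mixed-model ANOVA with pairwise comparisons among the four directional options. A minimal sketch of the pairwise step, using paired t-tests with a Holm correction on made-up scores (the study's actual post-hoc procedure is not specified here, so this is an assumption for illustration):

```python
from itertools import combinations
import numpy as np
from scipy.stats import ttest_rel
from statsmodels.stats.multitest import multipletests

# Illustrative (made-up) speech-in-noise scores (% correct) per participant,
# one array per directional option; not the study's data.
rng = np.random.default_rng(1)
scores = {
    "SCAN": rng.normal(70, 10, 33),
    "Beam": rng.normal(68, 10, 33),
    "Zoom": rng.normal(63, 10, 33),
    "Omni": rng.normal(55, 10, 33),
}

# Paired t-tests for all six pairs, with Holm correction for multiple comparisons
pairs = list(combinations(scores, 2))
raw_p = [ttest_rel(scores[a], scores[b]).pvalue for a, b in pairs]
reject, adj_p, _, _ = multipletests(raw_p, method="holm")

for (a, b), p, sig in zip(pairs, adj_p, reject):
    print(f"{a} vs {b}: adjusted p = {p:.3f}{' *' if sig else ''}")
```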

