The use of construct validity to establish predictive relationships between hearing aid performance and speech understanding

1982 ◽  
Vol 71 (S1) ◽  
pp. S74-S74
Author(s):  
Joseph Smaldino
2015 ◽  
Vol 26 (10) ◽  
pp. 872-884 ◽  
Author(s):  
Yu-Hsiang Wu ◽  
Elizabeth Stangl ◽  
Xuyang Zhang ◽  
Ruth A. Bentler

Background: Ecological momentary assessment (EMA) is a methodology involving repeated assessments/surveys to collect data describing respondents’ current or very recent experiences and related contexts in their natural environments. The use of EMA in audiology research is growing. Purpose: This study examined the construct validity (i.e., the degree to which a measurement reflects what it is intended to measure) of EMA in terms of measuring speech understanding and related listening context. Experiment 1 investigated the extent to which individuals can accurately report their speech recognition performance and characterize the listening context in controlled environments. Experiment 2 investigated whether the data aggregated across multiple EMA surveys conducted in uncontrolled, real-world environments would reveal a valid pattern that was consistent with the established relationships between speech understanding, hearing aid use, listening context, and lifestyle. Research Design: This is an observational study. Study Sample: Twelve and twenty-seven adults with hearing impairment participated in Experiments 1 and 2, respectively. Data Collection and Analysis: In the laboratory testing of Experiment 1, participants estimated their speech recognition performance in settings wherein the signal-to-noise ratio was fixed or constantly varied across sentences. In the field testing, participants reported the listening context (e.g., noisiness level) of several semicontrolled real-world conversations. Their reports were compared with (1) the context described by normal-hearing observers and (2) the background noise level measured using a sound level meter. In Experiment 2, participants repeatedly reported the degree of speech understanding, hearing aid use, and listening context using paper-and-pencil journals in their natural environments for 1 week. They also carried noise dosimeters to measure the sound level.
The associations between (1) speech understanding, hearing aid use, and listening context, (2) dosimeter sound level and self-reported noisiness level, and (3) dosimeter data and lifestyle quantified using the journals were examined. Results: For Experiment 1, the reported and measured speech recognition scores were highly correlated across all test conditions (r = 0.94 to 0.97). The field testing results revealed that most listening context properties reported by the participants were highly consistent with those described by the observers (74–95% consistency), except for noisiness rating (58%). Nevertheless, higher noisiness ratings were associated with higher background noise levels. For Experiment 2, the EMA results revealed several associations: better speech understanding was associated with the use of hearing aids, front-located speech, and lower dosimeter sound level; higher noisiness ratings were associated with higher dosimeter sound levels; listeners with more diverse lifestyles tended to have higher dosimeter sound levels. Conclusions: Adults with hearing impairment were able to report their listening experiences, such as speech understanding, and characterize listening context in controlled environments with reasonable accuracy. The pattern of the data aggregated across multiple EMA surveys conducted in a wide range of uncontrolled real-world environments was consistent with established knowledge in audiology. The two experiments suggested that, regarding speech understanding and related listening contexts, EMA reflects what it is intended to measure, supporting its construct validity in audiology research.
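The agreement in Experiment 1 between reported and measured speech recognition scores is summarized by a Pearson correlation coefficient (r = 0.94 to 0.97). As an illustration only, using hypothetical percent-correct scores rather than the study's data, such a coefficient can be computed as:

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical reported vs. measured speech recognition scores (% correct)
reported = [40, 55, 62, 70, 85, 90]
measured = [38, 57, 60, 74, 83, 92]
r = pearson_r(reported, measured)
```

Values near 1 indicate that participants' self-estimates track the measured scores closely, which is the pattern the study reports across its test conditions.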


JAMA ◽  
2017 ◽  
Vol 318 (1) ◽  
pp. 89 ◽  
Author(s):  
Nicholas S. Reed ◽  
Joshua Betz ◽  
Nicole Kendig ◽  
Margaret Korczak ◽  
Frank R. Lin

1994 ◽  
Vol 3 (2) ◽  
pp. 59-64 ◽  
Author(s):  
Stephanie A. Davidson ◽  
Colleen M. Noe

Ten experienced hearing aid users were tested to evaluate an assistive listening device inductively coupled to three different hearing aids—their own BTE hearing aid and associated telecoil, a programmable hearing aid with the telecoil programmed using the manufacturer's algorithm, and the same programmable hearing aid with the telecoil programmed so that the real-ear gain obtained with the hearing aid-assistive listening device combination matched a prescriptive target. Results indicated that modifying the telecoil response to match a prescriptive target can result in enhanced speech understanding and higher preference rankings.


2021 ◽  
Vol 42 (03) ◽  
pp. 295-308
Author(s):  
David A. Fabry ◽  
Achintya K. Bhowmik

Abstract: This article details ways that machine learning and artificial intelligence technologies are being integrated in modern hearing aids to improve speech understanding in background noise and provide a gateway to overall health and wellness. Discussion focuses on how Starkey incorporates automatic and user-driven optimization of speech intelligibility with onboard hearing aid signal processing and machine learning algorithms, smartphone-based deep neural network processing, and wireless hearing aid accessories. The article concludes with a review of health and wellness tracking capabilities that are enabled by embedded sensors and artificial intelligence.


2020 ◽  
Author(s):  
Solveig Christina Voss ◽  
M Kathleen Pichora-Fuller ◽  
Ieda Ishida ◽  
April Emily Pereira ◽  
Julia Seiter ◽  
...  

Background: Conventional directional hearing aid microphone technology can obstruct listening intentions when the talker and listener walk side by side. The purpose of the current study was to evaluate hearing aids that use a motion sensor to address listening needs during walking. Methods: Participants were 22 older adults with moderate-to-severe hearing loss and experience using hearing aids. Each participant completed two walks in randomized order, one walk with each of two hearing aid programs: 1) a conventional classifier that activated an adaptive, multiband beamformer in loud environments and 2) a classifier that additionally utilized motion-based beamformer steering. Participants walked along a predefined track and completed tasks assessing speech understanding and environmental awareness. Results: Most participants preferred the motion-based beamformer steering for speech understanding, environmental awareness, overall listening, and sound quality (p < 0.05). Additionally, measures of speech understanding (p < 0.01) and localization of sound stimuli (p < 0.05) were significantly better with the motion-based beamformer steering than with the conventional classifier. Conclusion: The results suggest that hearing aid users benefit from classifiers that use motion sensor input to adapt the signal processing according to the user's activity. The real-world setup of this study had limitations but also high ecological validity.


2019 ◽  
pp. 1357633X1988354 ◽  
Author(s):  
Frederic Venail ◽  
Marie C Picot ◽  
Gregory Marin ◽  
Sylvain Falinower ◽  
Jacques Samson ◽  
...  

Introduction: Current literature does not provide strong evidence that remote programming of hearing aids is effective, despite its increasing use by audiologists. We tested speech perception outcomes, real-ear insertion gain, and changes in self-perceived hearing impairment after face-to-face and remote programming of hearing aids in a randomized multicentre, single-blind crossover study. Methods: Adult experienced hearing aid users were enrolled during routine follow-up visits to audiology clinics. Hearing aids were programmed both face to face and remotely, then participants randomly received either the face-to-face or remote settings in a blinded manner and were evaluated 5 weeks later. Participants then received the other settings and were evaluated 5 weeks later. Results: Data from 52 of 60 participants were analysed. We found excellent concordance in performance of hearing aids programmed face to face and remotely for speech understanding in quiet (phonetically balanced kindergarten test – intraclass correlation coefficient of 0.92 (95% confidence interval: 0.87–0.95)), and good concordance in performance for speech understanding in noise (phonetically balanced kindergarten +5 dB signal-to-noise ratio – intraclass correlation coefficient of 0.71 (95% confidence interval: 0.55–0.82)). Face-to-face and remote programming took 10 minutes (±2.9) and 10 minutes (±2.8), respectively. Real-ear insertion gains were highly correlated for input sound at 50, 65 and 80 dB sound pressure levels. The programming type did not affect the abbreviated profile of hearing aid questionnaire scores. Conclusions: In experienced hearing aid users, face-to-face and remote programming of hearing aids give similar results in terms of speech perception, with no increase in the time spent on patients' care and no difference in self-reported hearing benefit. ClinicalTrials.gov Identifier: NCT02589561
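The concordance figures above are intraclass correlation coefficients. As a sketch only (the exact ICC model used by the study is not specified here, and the data below are hypothetical), a one-way random-effects ICC(1,1) for two measurements per subject can be computed as:

```python
def icc_oneway(ratings_a, ratings_b):
    """One-way random-effects ICC(1,1) for two measurements per subject.

    ICC = (MSB - MSW) / (MSB + (k - 1) * MSW), with k = 2 measurements.
    """
    k = 2
    n = len(ratings_a)
    pairs = list(zip(ratings_a, ratings_b))
    grand = sum(a + b for a, b in pairs) / (n * k)
    subject_means = [(a + b) / k for a, b in pairs]
    # Between-subjects mean square
    msb = k * sum((m - grand) ** 2 for m in subject_means) / (n - 1)
    # Within-subject mean square
    msw = sum((a - m) ** 2 + (b - m) ** 2
              for (a, b), m in zip(pairs, subject_means)) / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)

# Hypothetical paired scores (e.g., face-to-face vs. remote settings)
icc = icc_oneway([10, 20, 30, 40], [12, 19, 31, 38])
```

An ICC near 1 indicates that most variance lies between subjects rather than between the two programming conditions, i.e., the two settings behave interchangeably.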

