Extraction of listening effort correlates in the oscillatory EEG activity: Investigation of different hearing aid configurations

Author(s):  
Corinna Bernarding ◽  
Ronny Hannemann ◽  
David Herrmann ◽  
Daniel J. Strauss ◽  
Farah I. Corona-Strauss
2017 ◽  
Vol 28 (09) ◽  
pp. 810-822 ◽  
Author(s):  
Benjamin J. Kirby ◽  
Judy G. Kopun ◽  
Meredith Spratford ◽  
Clairissa M. Mollak ◽  
Marc A. Brennan ◽  
...  

Abstract
Sloping hearing loss limits the audibility of high-frequency sounds for many hearing aid users. Signal processing algorithms that shift high-frequency sounds to lower frequencies have been introduced in hearing aids to address this challenge by improving the audibility of high-frequency sounds.

This study examined speech perception performance, listening effort, and subjective sound quality ratings with conventional hearing aid processing and a new frequency-lowering signal processing strategy called frequency composition (FC) in adults and children.

Participants wore the study hearing aids in two signal processing conditions (conventional processing versus FC) at an initial laboratory visit and subsequently at home during two approximately six-week-long trials, with the order of conditions counterbalanced across individuals in a double-blind paradigm.

Participants were children (N = 12, 7 female, mean age = 12.0 years, SD = 3.0) and adults (N = 12, 6 female, mean age = 56.2 years, SD = 17.6) with bilateral sensorineural hearing loss who were full-time hearing aid users.

Individual performance with each type of processing was assessed using speech perception tasks, a measure of listening effort, and subjective sound quality surveys at an initial visit. At the conclusion of each subsequent at-home trial, participants were retested in the laboratory. Linear mixed-effects analyses were completed for each outcome measure with signal processing condition, age group, visit (pre-home versus post-home trial), and measures of aided audibility as predictors.

Overall, there were few significant differences in speech perception, listening effort, or subjective sound quality between FC and conventional processing, few effects of listener age, and few longitudinal changes in performance. Listeners preferred FC to conventional processing on one of six subjective sound quality metrics. Better speech perception performance was consistently related to higher aided audibility.

These results indicate that when high-frequency speech sounds are made audible with conventional processing, speech recognition ability and listening effort are similar between conventional processing and FC. Despite the lack of benefit to speech perception, some listeners still preferred FC, suggesting that qualitative measures should be considered when evaluating candidacy for this signal processing strategy.
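To make the analysis approach concrete, the following is a minimal, hypothetical Python sketch of a linear mixed-effects model of the kind described in the abstract: a random intercept per participant and fixed effects for condition, age group, visit, and aided audibility. This is not the authors' code; the variable names, scoring, and synthetic data are illustrative assumptions only.

```python
# Hypothetical sketch (not the authors' analysis): linear mixed-effects model
# with condition, age group, visit, and aided audibility as fixed effects and
# a random intercept per participant. All data below are synthetic.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
rows = []
for subj in range(24):                                   # 12 children + 12 adults, as in the abstract
    age_group = "child" if subj < 12 else "adult"
    for condition in ("conventional", "FC"):
        for visit in ("pre_home", "post_home"):
            audibility = rng.uniform(0.4, 0.9)               # hypothetical aided-audibility index
            score = 60 + 30 * audibility + rng.normal(0, 5)  # synthetic outcome (e.g., % correct)
            rows.append(dict(subject=subj, age_group=age_group,
                             condition=condition, visit=visit,
                             audibility=audibility, score=score))
df = pd.DataFrame(rows)

# Fixed effects mirror the predictors named in the abstract; groups= gives the
# per-participant random intercept.
model = smf.mixedlm("score ~ condition + age_group + visit + audibility",
                    data=df, groups=df["subject"])
print(model.fit().summary())
```

In practice one such model would be fit separately for each outcome measure (speech perception, listening effort, sound quality ratings).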


2019 ◽  
Vol 10 ◽  
Author(s):  
Serena Scarpelli ◽  
Aurora D'Atri ◽  
Chiara Bartolacci ◽  
Anastasia Mangiaruga ◽  
Maurizio Gorgoni ◽  
...  

Neuroscience ◽  
2017 ◽  
Vol 346 ◽  
pp. 81-93 ◽  
Author(s):  
J.H. Chien ◽  
L. Colloca ◽  
A. Korzeniewska ◽  
J.J. Cheng ◽  
C.M. Campbell ◽  
...  

2017 ◽  
Vol 11 (3) ◽  
pp. 203-215 ◽  
Author(s):  
Corinna Bernarding ◽  
Daniel J. Strauss ◽  
Ronny Hannemann ◽  
Harald Seidler ◽  
Farah I. Corona-Strauss

2020 ◽  
Author(s):  
Cora Kubetschek ◽  
Christoph Kayser

Abstract
Many studies speak in favor of a rhythmic mode of listening, by which the encoding of acoustic information is structured by rhythmic neural processes on a time scale of about 1 to 4 Hz. Indeed, psychophysical data suggest that humans do not sample acoustic information in extended soundscapes uniformly, but rather weigh the evidence at different moments for their perceptual decision on a time scale of about 2 Hz. Here we test the critical prediction that such rhythmic perceptual sampling is directly related to the state of ongoing brain activity prior to the stimulus. Human participants judged the direction of frequency sweeps in 1.2-s-long soundscapes while their EEG was recorded. Computing the perceptual weights attributed to different epochs within these soundscapes, contingent on the phase or power of pre-stimulus oscillatory EEG activity, revealed a direct link between the 4 Hz EEG phase and power prior to the stimulus and the phase of the rhythmic component of these perceptual weights. Hence, the temporal pattern by which acoustic information is sampled over time for behavior is directly related to pre-stimulus brain activity in the delta/theta band. These results close a gap in the mechanistic picture linking ongoing delta-band activity to its role in shaping the segmentation and perceptual influence of subsequent acoustic information.
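The perceptual-weight analysis described above can be illustrated with a small, hypothetical Python sketch: single-trial choices are regressed on the time-resolved stimulus evidence (a logistic-regression kernel), separately for trials grouped by pre-stimulus 4 Hz EEG phase. This is not the authors' pipeline; the data are synthetic and all names, epoch counts, and filter settings are assumptions for illustration.

```python
# Hypothetical sketch (not the authors' pipeline): estimate perceptual weights
# per epoch by logistic regression of choices on stimulus evidence, split by
# pre-stimulus 4 Hz EEG phase. All data below are synthetic.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n_trials, n_epochs, fs = 400, 12, 100            # 12 evidence epochs per 1.2 s soundscape (assumption)
evidence = rng.normal(size=(n_trials, n_epochs)) # per-epoch sweep-direction evidence (synthetic)
true_w = np.hanning(n_epochs)                    # an arbitrary "true" temporal weighting
choice = (evidence @ true_w + rng.normal(0, 1, n_trials) > 0).astype(int)

# Pre-stimulus EEG (1 s at fs Hz), band-passed around 4 Hz; take the phase at stimulus onset.
prestim = rng.normal(size=(n_trials, fs))
b, a = butter(3, [3 / (fs / 2), 5 / (fs / 2)], btype="band")
phase = np.angle(hilbert(filtfilt(b, a, prestim, axis=1), axis=1))[:, -1]

# Estimate perceptual weights (logistic-regression kernels) separately per pre-stimulus phase bin.
for lo, hi, label in [(-np.pi, 0.0, "phase in [-pi, 0)"), (0.0, np.pi, "phase in [0, pi)")]:
    idx = (phase >= lo) & (phase < hi)
    weights = LogisticRegression().fit(evidence[idx], choice[idx]).coef_.ravel()
    print(label, np.round(weights, 2))
```

Comparing the rhythmic component of these per-bin weight profiles against the pre-stimulus phase is the kind of contingency the abstract refers to; with real data, finer phase bins and a permutation-based significance test would typically be used.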

