Effects of Hearing Aid Noise Reduction on Early and Late Cortical Representations of Competing Talkers in Noise

2021 · Vol. 15
Author(s): Emina Alickovic, Elaine Hoi Ning Ng, Lorenz Fiedler, Sébastien Santurette, Hamish Innes-Brown, ...

Objectives: Previous research using non-invasive (magnetoencephalography, MEG) and invasive (electrocorticography, ECoG) neural recordings has demonstrated the progressive and hierarchical representation and processing of complex multi-talker auditory scenes in the auditory cortex. Early responses (<85 ms) in primary-like areas appear to represent the individual talkers with almost equal fidelity and are independent of attention in normal-hearing (NH) listeners. However, late responses (>85 ms) in higher-order non-primary areas selectively represent the attended talker with significantly higher fidelity than unattended talkers in NH and hearing-impaired (HI) listeners. Motivated by these findings, the objective of this study was to investigate the effect of a noise reduction (NR) scheme in a commercial hearing aid (HA) on the representation of complex multi-talker auditory scenes in distinct hierarchical stages of the auditory cortex, using high-density electroencephalography (EEG).

Design: We addressed this issue by investigating early (<85 ms) and late (>85 ms) EEG responses recorded in 34 HI subjects fitted with HAs. The HA NR was either on or off while the participants listened to a complex auditory scene. Participants were instructed to attend to one of two simultaneous talkers in the foreground while multi-talker babble noise played in the background (+3 dB SNR). After each trial, a two-choice question about the content of the attended speech was presented.

Results: Using a stimulus reconstruction approach, our results suggest that the attention-related enhancement of the neural representations of the target and masker talkers in the foreground, as well as the suppression of the background noise, is significantly affected by the NR scheme at distinct hierarchical stages. In the early responses, the NR scheme enhanced the representation of the foreground and of the entire acoustic scene, an enhancement driven by a better representation of the target speech. In the late responses, the target talker was selectively represented in HI listeners; here, the NR scheme enhanced the representations of both the target and masker speech in the foreground and suppressed the representation of the background noise. The EEG time window also had a significant effect on the strength of the cortical representation of the target and masker.

Conclusion: Together, our analyses of the early and late responses obtained from HI listeners support the existing view of hierarchical processing in the auditory cortex. Our findings demonstrate the benefits of an NR scheme on the representation of complex multi-talker auditory scenes in different areas of the auditory cortex in HI listeners.
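Stimulus reconstruction of the kind referenced above is commonly implemented as a regularized linear backward model that maps time-lagged multichannel EEG onto a speech envelope; the correlation between reconstructed and actual envelopes then serves as the fidelity measure compared across conditions such as NR on and off. The study's exact pipeline is not given here, so the following is only a minimal sketch: the ridge solver, lag span, lag convention, and all dimensions are illustrative assumptions.

```python
import numpy as np

def lagged_design(eeg, max_lag):
    """Stack time-lagged copies of every EEG channel (lags 0..max_lag samples)."""
    n_t, n_ch = eeg.shape
    X = np.zeros((n_t, n_ch * (max_lag + 1)))
    for lag in range(max_lag + 1):
        X[lag:, lag * n_ch:(lag + 1) * n_ch] = eeg[:n_t - lag]
    return X

def train_decoder(eeg, envelope, max_lag=32, lam=1e3):
    """Ridge-regression backward model: EEG (time x channels) -> speech envelope."""
    X = lagged_design(eeg, max_lag)
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ envelope)

def reconstruction_accuracy(eeg, envelope, w, max_lag=32):
    """Pearson correlation between the reconstructed and actual envelopes."""
    return np.corrcoef(lagged_design(eeg, max_lag) @ w, envelope)[0, 1]

# Toy data: 64-channel EEG, 60 s at 64 Hz. Each channel tracks the envelope
# with an 8-sample offset that falls inside the decoder's lag window.
rng = np.random.default_rng(0)
env = rng.random(60 * 64)
eeg = np.roll(env, -8)[:, None] * rng.random(64) + 0.5 * rng.standard_normal((60 * 64, 64))
w = train_decoder(eeg, env)
print(f"reconstruction r = {reconstruction_accuracy(eeg, env, w):.2f}")
```

Restricting the decoder's lags to windows before or after 85 ms would give the early/late split used above, and comparing r for target, masker, and background across NR on/off parallels the reported contrasts.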

2017
Author(s): Krishna C. Puvvada, Jonathan Z. Simon

Abstract: The ability to parse a complex auditory scene into perceptual objects is facilitated by a hierarchical auditory system. Successive stages in the hierarchy transform an auditory scene of multiple overlapping sources from peripheral, tonotopically based representations in the auditory nerve into perceptually distinct, auditory-object-based representations in auditory cortex. Here, using magnetoencephalography (MEG) recordings from human subjects, both men and women, we investigate how a complex acoustic scene consisting of multiple speech sources is represented in distinct hierarchical stages of auditory cortex. Using systems-theoretic methods of stimulus reconstruction, we show that the primary-like areas in auditory cortex contain predominantly spectrotemporal representations of the entire auditory scene. Here, both attended and ignored speech streams are represented with almost equal fidelity, and a global representation of the full auditory scene with all its streams is a better candidate neural representation than individual streams represented separately. In contrast, we also show that higher-order auditory cortical areas represent the attended stream separately, and with significantly higher fidelity, than unattended streams. Furthermore, the unattended background streams are more faithfully represented as a single unsegregated background object than as separated objects. Taken together, these findings demonstrate the progression of the representations and processing of a complex acoustic scene up through the hierarchy of human auditory cortex.

Significance Statement: Using magnetoencephalography (MEG) recordings from human listeners in a simulated cocktail party environment, we investigate how a complex acoustic scene consisting of multiple speech sources is represented in separate hierarchical stages of auditory cortex. We show that the primary-like areas in auditory cortex use a predominantly spectrotemporal representation of the entire auditory scene, with both attended and ignored speech streams represented with almost equal fidelity. In contrast, we show that higher-order auditory cortical areas represent an attended speech stream separately from, and with significantly higher fidelity than, unattended speech streams. Furthermore, the unattended background streams are represented as a single undivided background object rather than as distinct background objects.
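The model comparison described here (a global scene representation versus separately represented streams) can be illustrated by decoding competing envelope targets from the same simulated response and comparing their fidelities. A minimal sketch; the instantaneous ridge regression and all signal dimensions are assumptions for illustration.

```python
import numpy as np

def ridge_fit(X, y, lam=1e2):
    """Least-squares fit with a ridge penalty: weights mapping X -> y."""
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

def fidelity(X, y, w):
    """Correlation between the prediction X @ w and the target y."""
    return np.corrcoef(X @ w, y)[0, 1]

rng = np.random.default_rng(1)
n_t, n_ch = 4000, 32
s1, s2 = rng.random(n_t), rng.random(n_t)   # two speech-stream envelopes
scene = s1 + s2                             # unsegregated full-scene envelope

# Simulated "primary-like" response tracking the summed scene, not the streams.
meg = scene[:, None] * rng.random(n_ch) + rng.standard_normal((n_t, n_ch))

# The summed scene should reconstruct better than either individual stream,
# mirroring the model comparison in the abstract.
for name, target in [("stream 1", s1), ("stream 2", s2), ("summed scene", scene)]:
    w = ridge_fit(meg, target)
    print(f"{name:12s} r = {fidelity(meg, target, w):.2f}")
```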


2020 · Vol. 24 · pp. 233121652096086
Author(s): Mengfan Wu, Raul Sanchez-Lopez, Mouhamad El-Haj-Ali, Silje G. Nielsen, Michal Fereczkowski, ...

Effective hearing aid (HA) rehabilitation requires personalization of the HA fitting parameters, but in current clinical practice only the gain prescription is typically individualized. To optimize the fitting process, advanced HA settings such as noise reduction and microphone directionality can also be tailored to individual hearing deficits. In two earlier studies, an auditory test battery and a data-driven approach were developed that allow hearing-impaired listeners to be classified into four auditory profiles. Because these profiles were found to be characterized by markedly different hearing abilities, it was hypothesized that more tailored HA fittings would lead to better outcomes for such listeners. Here, we explored potential interactions between the four auditory profiles and HA outcome as assessed with three different measures (speech recognition, overall quality, and noise annoyance) and six HA processing strategies with various noise reduction, directionality, and compression settings. Using virtual acoustics, a realistic speech-in-noise environment was simulated. The stimuli were generated using a HA simulator and presented to 49 habitual HA users who had previously been profiled. The four auditory profiles differed clearly in terms of their mean aided speech reception thresholds, implying different needs in terms of signal-to-noise ratio improvement. However, no clear interactions with the tested HA processing strategies were found. Overall, these findings suggest that the auditory profiles can capture some of the individual differences in HA processing needs, and that further research is required to identify suitable HA solutions for them.
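The profiling itself came from a separate data-driven analysis of an auditory test battery. As a rough illustration of the idea only (the published work used its own data-driven method, not necessarily k-means, and the two score dimensions here are invented), one could partition listeners as follows:

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(2)
# Illustrative test-battery outcomes per listener: one "audibility-related"
# and one "distortion-related" score. Both dimensions are assumptions.
scores = rng.standard_normal((49, 2))

# Partition the 49 listeners into four auditory profiles.
profiles = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(scores)
print(np.bincount(profiles))  # number of listeners assigned to each profile
```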


Author(s): Isiaka Ajewale Alimi

Digital hearing aids address the issues of noise and poor speech intelligibility associated with analogue types. One of the main functions of the digital signal processor (DSP) in a digital hearing aid is noise reduction, which can be achieved with speech enhancement algorithms that in turn improve system performance and flexibility. However, studies have shown that the quality of experience (QoE) with some current hearing aids falls short of expectations in noisy environments due to interfering sound, background noise, and reverberation, and it has been suggested that the noise reduction features of the DSP can be improved accordingly. Recently, we proposed an adaptive spectral subtraction algorithm to enhance the performance of communication systems and to address the musical noise generated by the conventional spectral subtraction algorithm; its effectiveness has been confirmed by different objective and subjective evaluations. In this study, the adaptive spectral subtraction algorithm is implemented using a noise-estimation algorithm suited to highly non-stationary noisy environments, in place of the voice activity detection (VAD) employed in our previous work, owing to its effectiveness. In addition, the signal-to-residual spectrum ratio (SR) is implemented to control amplification distortion and thereby improve speech intelligibility. The results show that the proposed scheme gives comparatively better performance and can easily be employed in digital hearing aid systems to improve speech quality and intelligibility.
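For reference, conventional magnitude spectral subtraction with an over-subtraction factor and a spectral floor (the floor being the usual defense against musical noise) can be sketched as below. This is a generic textbook form, not the author's adaptive algorithm: the fixed leading-frame noise estimate stands in for the continuous noise-estimation (or VAD-based) update discussed above, and all parameter values are illustrative.

```python
import numpy as np

def spectral_subtraction(x, fs, frame=512, hop=256, alpha=2.0, beta=0.02,
                         noise_frames=10):
    """Basic magnitude spectral subtraction with over-subtraction (alpha)
    and a spectral floor (beta) that limits musical noise. The noise
    spectrum is estimated once from the first `noise_frames` frames,
    assumed speech-free; a real system would update it continuously."""
    win = np.hanning(frame)
    n_frames = 1 + (len(x) - frame) // hop
    out = np.zeros(len(x))
    noise = np.mean([np.abs(np.fft.rfft(win * x[i*hop:i*hop+frame]))
                     for i in range(noise_frames)], axis=0)
    for i in range(n_frames):
        seg = win * x[i*hop:i*hop+frame]
        spec = np.fft.rfft(seg)
        mag, phase = np.abs(spec), np.angle(spec)
        clean = mag - alpha * noise            # over-subtract the noise estimate
        clean = np.maximum(clean, beta * mag)  # spectral floor vs. musical noise
        # Resynthesize the frame and overlap-add with a synthesis window.
        out[i*hop:i*hop+frame] += np.fft.irfft(clean * np.exp(1j*phase), frame) * win
    return out

# Toy usage: a tone that starts at 0.3 s in steady noise, sampled at 16 kHz.
fs = 16000
t = np.arange(fs) / fs
noisy = np.sin(2*np.pi*440*t) * (t > 0.3) + 0.3*np.random.default_rng(3).standard_normal(fs)
enhanced = spectral_subtraction(noisy, fs)
```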


1990 · Vol. 64 (3) · pp. 888-902
Author(s): R. Rajan, L. M. Aitkin, D. R. Irvine

1. The organization of azimuthal sensitivity of units across the dorsoventral extent of primary auditory cortex (AI) was studied in electrode penetrations made along frequency-band strips of AI. Azimuthal sensitivity for each unit was represented by a mean azimuth function (MF) calculated from all azimuth functions obtained to characteristic-frequency (CF) stimuli at intensities 20 dB or more above threshold. MFs were classified as contra-field, ipsi-field, central-field, omnidirectional, or multipeaked, according to the criteria established in the companion paper (Rajan et al. 1990).

2. The spatial distribution of three types of MFs was not random across frequency-band strips: for contra-field, ipsi-field, and central-field MFs there was a significant tendency for functions of the same type to cluster in sequentially encountered units. Occasionally, repeated clusters of a particular MF type could be found along a frequency-band strip. In contrast, the spatial distribution of omnidirectional MFs along frequency-band strips appeared to be random.

3. Apart from the clustering of MF types, there were also regions along a frequency-band strip in which the type of MF encountered changed rapidly between units isolated over short distances. Most often such changes took the form of irregular, rapid juxtapositions of MF types; less frequently they appeared to be more systematic transitions from one MF type to another. In contrast to these changes in azimuthal sensitivity seen in electrode penetrations oblique to the cortical surface, much less change was seen in the azimuthal sensitivity displayed by successively isolated units in penetrations made normal to the cortical surface.

4. To determine whether some significant feature or features of azimuthal sensitivity shifted in a more continuous and/or systematic manner along frequency-band strips, azimuthal sensitivity was quantified in terms of the peak-response azimuth (PRA) of the MFs of successive units and of the azimuthal range over which the peaks occurred in the individual azimuth functions contributing to each MF (the peak-response range). Across experiments, shifts in these peak measures for successively isolated units along a frequency-band strip generally fell into one of four categories: 1) shifts across the entire frontal hemifield; 2) clustering in the contralateral quadrant; 3) clustering in the ipsilateral quadrant; and 4) clustering about the midline. In two cases more than one of these four patterns was found along a frequency-band strip. (ABSTRACT TRUNCATED AT 400 WORDS)
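The MF classes above depend on detailed criteria from the companion paper; a toy stand-in that keys only on peak location and modulation depth (thresholds invented, multipeaked functions ignored for brevity) might look like:

```python
import numpy as np

def classify_mf(azimuths, rates, omni_tol=0.25):
    """Toy classifier for a mean azimuth function (MF). The actual criteria
    are those of Rajan et al. (1990); this simplified stand-in keys on how
    modulated the function is across azimuth and where its peak falls.
    Contralateral azimuths are taken as positive (sign convention assumed)."""
    rates = np.asarray(rates, float)
    depth = (rates.max() - rates.min()) / rates.max()
    if depth < omni_tol:                 # nearly flat across azimuth
        return "omnidirectional"
    peak_az = azimuths[np.argmax(rates)]
    if abs(peak_az) <= 20:               # peak near the midline
        return "central-field"
    return "contra-field" if peak_az > 0 else "ipsi-field"

# A unit peaking at +60 degrees classifies as contra-field.
az = np.arange(-90, 91, 15)
print(classify_mf(az, 10 + 40 * np.exp(-((az - 60) / 30.0) ** 2)))
```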


2016 · Vol. 27 (09) · pp. 732-749
Author(s): Gabriel Aldaz, Sunil Puria, Larry J. Leifer

Background: Previous research has shown that hearing aid wearers can successfully self-train their instruments' gain-frequency response and compression parameters in everyday situations. Combining hearing aids with a smartphone introduces additional computing power, memory, and a graphical user interface that may enable greater setting personalization. To explore the benefits of self-training with a smartphone-based hearing system, a parameter space was chosen with four possible combinations of microphone mode (omnidirectional and directional) and noise reduction state (active and off). The baseline for comparison was the "untrained system," that is, the manufacturer's algorithm for automatically selecting microphone mode and noise reduction state based on acoustic environment. The "trained system" first learned each individual's preferences, self-entered via a smartphone in real-world situations, to build a trained model. The system then predicted the optimal setting (among available choices) using an inference engine, which considered the trained model and current context (e.g., sound environment, location, and time). Purpose: To develop a smartphone-based prototype hearing system that can be trained to learn preferred user settings, and to determine whether user study participants showed a preference for trained over untrained system settings. Research Design: An experimental within-participants study. Participants used a prototype hearing system—comprising two hearing aids, an Android smartphone, and a body-worn gateway device—for ~6 weeks. Study Sample: Sixteen adults with mild-to-moderate sensorineural hearing loss (HL) (ten males, six females; mean age = 55.5 yr). Fifteen had ≥6 mo of experience wearing hearing aids, and 14 had previous experience using smartphones. Intervention: Participants were fitted and instructed to perform daily comparisons of settings ("listening evaluations") through a smartphone-based software application called Hearing Aid Learning and Inference Controller (HALIC). In the four-week-long training phase, HALIC recorded individual listening preferences along with sensor data from the smartphone—including environmental sound classification, sound level, and location—to build trained models. In the subsequent two-week-long validation phase, participants performed blinded listening evaluations comparing settings predicted by the trained system ("trained settings") to those suggested by the hearing aids' untrained system ("untrained settings"). Data Collection and Analysis: We analyzed data collected on the smartphone and hearing aids during the study. We also obtained audiometric and demographic information. Results: Overall, the 15 participants with valid data significantly preferred trained settings to untrained settings (paired-samples t test). Seven participants had a significant preference for trained settings, while one had a significant preference for untrained settings (binomial test). The remaining seven participants had nonsignificant preferences. Pooling data across participants, the proportion of times that each setting was chosen in a given environmental sound class was on average very similar. However, breaking down the data by participant revealed strong and idiosyncratic individual preferences. Fourteen participants reported positive feelings of clarity, competence, and mastery when training via HALIC. Conclusions: The obtained data, as well as subjective participant feedback, indicate that smartphones could become viable tools to train hearing aids. Individuals who are tech savvy and have milder HL seem well suited to take advantage of the benefits offered by training with a smartphone.
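The trained model plus inference engine can be pictured as an ordinary supervised classifier over logged context features. HALIC's actual engine is not described here, so the feature set, the model choice, and the setting encoding below are all assumptions for illustration.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# Logged listening evaluations: (sound-class id, level in dB SPL, hour of day)
# -> preferred setting index among the four mic-mode/NR combinations.
X = np.array([[0, 55, 9], [0, 58, 10], [1, 72, 13], [1, 75, 19],
              [2, 65, 12], [2, 68, 18], [0, 52, 21], [1, 70, 20]])
y = np.array([0, 0, 3, 3, 2, 2, 0, 3])  # 0 = omni/NR off ... 3 = directional/NR on

# "Training phase": fit a model to the user's self-entered preferences.
engine = DecisionTreeClassifier(max_depth=3).fit(X, y)

# "Validation phase": predict the optimal setting for the current context.
print(engine.predict([[1, 73, 12]]))
```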


2012 · Vol. 23 (08) · pp. 606-615
Author(s): HaiHong Liu, Hua Zhang, Ruth A. Bentler, Demin Han, Luo Zhang

Background: Transient noise can be disruptive for people wearing hearing aids. Ideally, transient noise should be detected and controlled by the signal processor without disrupting speech and other intended input signals. A technology for detecting and controlling transient noises in hearing aids was evaluated in this study. Purpose: The purpose of this study was to evaluate the effectiveness of a transient noise reduction strategy on various transient noises and to determine whether the strategy has a negative impact on the sound quality of intended speech inputs. Research Design: This was a quasi-experimental study involving 24 hearing aid users. Each participant was asked to rate speech clarity, transient noise loudness, and overall impression for speech stimuli under the algorithm-on and algorithm-off conditions. During the evaluation, three types of stimuli were used: transient noises, speech, and background noises. The transient noises included "knife on a ceramic board," "mug on a tabletop," "office door slamming," "car door slamming," and "pen tapping on countertop." The speech sentences used for the test were presented by a male speaker in Mandarin. The background noises included "party noise" and "traffic noise." All of these sounds were combined into five listening situations: (1) speech only, (2) transient noise only, (3) speech and transient noise, (4) background noise and transient noise, and (5) speech, background noise, and transient noise. Results: There was no significant difference in the ratings of speech clarity between algorithm-on and algorithm-off (t-test, p = 0.103). Further analysis revealed that speech clarity was significantly better at 70 dB SPL than at 55 dB SPL (p < 0.001). For transient noise loudness, under the algorithm-off condition the percentages of subjects rating the transient noise as somewhat soft, appropriate, somewhat loud, and too loud were 0.2, 47.1, 29.6, and 23.1%, respectively; under algorithm-on, the corresponding percentages were 3.0, 72.6, 22.9, and 1.4%. A significant difference in the ratings of transient noise loudness was found between algorithm-on and algorithm-off (t-test, p < 0.001). For overall impression of the speech stimuli, under the algorithm-off condition the percentages of subjects rating the algorithm as not helpful at all, somewhat helpful, helpful, and very helpful were 36.5, 20.8, 33.9, and 8.9%, respectively; under algorithm-on, the corresponding percentages were 35.0, 19.3, 30.7, and 15.0%. Statistical analysis revealed a significant difference in the ratings of overall impression: the algorithm-on condition was rated significantly more helpful for speech understanding than algorithm-off (t-test, p < 0.001). Conclusions: The transient noise reduction strategy appropriately controlled the loudness of most transient noises without degrading sound quality, which could be beneficial to hearing aid wearers.
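The evaluated strategy is proprietary, but transient controllers of this general kind flag abrupt envelope onsets and briefly pull the gain down before the aid's slower compression can react. A toy sketch under that assumption, with invented parameter values:

```python
import numpy as np

def suppress_transients(x, fs, factor=12.0, atten=0.25, hold_ms=5):
    """Toy transient controller: detect samples where the smoothed envelope
    rises far faster than its typical slope, then briefly attenuate.
    All parameters are illustrative, not the evaluated hearing-aid algorithm."""
    k = max(1, int(0.001 * fs))                      # ~1 ms envelope smoother
    env = np.convolve(np.abs(x), np.ones(k) / k, mode="same")
    rise = np.diff(env, prepend=env[0])              # per-sample envelope slope
    floor = np.median(np.abs(rise)) + 1e-12          # typical slope magnitude
    gain = np.ones(len(x))
    hold = max(1, int(hold_ms * fs / 1000))
    for i in np.flatnonzero(rise > factor * floor):  # abrupt onsets only
        gain[i:i + hold] = atten                     # clamp gain briefly
    return x * gain

# Toy usage: a "pen tap" click embedded in ongoing noise at 16 kHz.
fs = 16000
rng = np.random.default_rng(4)
sig = 0.1 * rng.standard_normal(fs)
sig[8000:8040] += 1.0
cleaned = suppress_transients(sig, fs)
```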


2019 · pp. 83-100
Author(s): György Buzsáki

To effectively send a message, a single neuron must cooperate with its peers. Such cooperation can be achieved by synchronizing their spikes within the time window limited by the ability of the downstream reader neuron to integrate the incoming signals. Therefore, the cell assembly, defined from the point of view of the reader neuron, can be considered a unit of neuronal communication, a "neuronal letter." Acting in assemblies has several advantages. A cooperative assembly partnership effectively tolerates spike rate variation in individual cells because the total excitatory effect of the assembly is what matters to the reader mechanism. Interacting assembly members can compute probabilities rather than convey deterministic information and can robustly tolerate noise even if the individual members respond probabilistically.
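The noise-tolerance argument can be made concrete with a toy simulation: if a reader neuron needs some number of coincident spikes within its integration window, the assembly's summed output crosses that threshold far more reliably than any single probabilistic member fires. All numbers below are purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(5)
n_members, trials, reader_threshold = 50, 10000, 18

# Each assembly member spikes within the reader's integration window with
# probability 0.5 on a given trial (individual rates are unreliable).
spikes = rng.random((trials, n_members)) < 0.5
summed = spikes.sum(axis=1)  # total excitation delivered to the reader

print(f"single member fires:    {spikes[:, 0].mean():.2f}")                    # ~0.50
print(f"assembly drives reader: {(summed >= reader_threshold).mean():.2f}")    # ~0.98
```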


Author(s): Mustafa Al-Shamsi, Serge Jennes

Introduction: Burn disasters represent a real challenge to burn centers worldwide. Several burn disasters with considerable numbers of casualties have occurred in Belgium in the past, and the positioning of burn centers is a significant issue to account for in burn disaster preparedness and response. The objective of this study was to identify the geographic coverage and accessibility of the burn centers in Belgium under a burn disaster scenario. Method: A cross-sectional secondary analysis was performed using data from the Belgian Burn Association and the Belgian Department of Statistics. Data were analyzed using ArcGIS, a geographic information system tool, to identify the coverage of burn centers within a half-hour driving time and the access times of both the district populations and the disaster-prone areas to the individual burn centers. Results: Around 7.3 million people (65%) are covered within a half-hour driving time of a burn center. However, accessibility to the individual burn centers varies across regions and provinces. Conclusion: There is a slight over-supply of burn centers in the middle part of the country, contrasted by an under-supply and poor accessibility for the population living near the borders, particularly in the southern part of the country. This study provides a benchmark for stakeholders in Belgium and other industrialized countries to consider the coverage and accessibility of burn centers as part of preparation and planning for future burn disasters.
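The coverage figure reported above reduces to a simple computation once drive times are known: the fraction of population whose nearest center is within 30 minutes. The study used ArcGIS network analysis, which is far richer; the sketch below only illustrates the final aggregation step, and every drive time and population value is invented.

```python
import numpy as np

# District-to-center drive times in minutes (rows: districts, cols: centers).
drive_min = np.array([[12, 45, 60],
                      [25, 28, 70],
                      [55, 40, 20],
                      [80, 75, 35]])
population = np.array([2.1e6, 1.4e6, 0.9e6, 0.6e6])  # per district

# A district is covered if its nearest burn center is within 30 minutes.
covered = drive_min.min(axis=1) <= 30
share = population[covered].sum() / population.sum()
print(f"population covered within 30 min: {share:.0%}")
```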


2015 · Vol. 113 (2) · pp. 475-486
Author(s): Melanie A. Kok, Daniel Stolzberg, Trecia A. Brown, Stephen G. Lomber

Current models of hierarchical processing in auditory cortex have been based principally on anatomical connectivity, while functional interactions between individual regions have remained largely unexplored. Previous cortical deactivation studies in the cat have addressed functional reciprocal connectivity between primary auditory cortex (A1) and other, hierarchically lower fields. The present study sought to assess the functional contribution of inputs along multiple stages of the current hierarchical model to a higher-order area, the dorsal zone (DZ) of auditory cortex, in the anaesthetized cat. Cryoloops were placed over A1 and the posterior auditory field (PAF). Multiunit neuronal responses to noise-burst and tonal stimuli were recorded in DZ during cortical deactivation of each field individually and in concert. Deactivation of A1 suppressed peak neuronal responses in DZ regardless of stimulus and resulted in increased minimum thresholds and reduced absolute bandwidths of tone-frequency receptive fields in DZ. PAF deactivation had less robust effects on DZ firing rates and receptive fields than A1 deactivation, and the effects of combined A1/PAF cooling were largely driven by A1 deactivation at the population level. These results provide physiological support for the current anatomically based model of both serial and parallel processing schemes in auditory cortical hierarchical organization.

