binaural summation
Recently Published Documents

TOTAL DOCUMENTS: 48 (FIVE YEARS: 2)
H-INDEX: 16 (FIVE YEARS: 0)

2021 · Vol 12 · Author(s): Iko Pieper, Manfred Mauermann, Birger Kollmeier, Stephan D. Ewert

The individual loudness perception of a patient plays an important role in hearing aid satisfaction and use in daily life. Hearing aid fitting and development might benefit from individualized loudness models (ILMs), enabling better adaptation of the processing to individual needs. The central question is whether ILMs require additional parameters beyond the non-linear cochlear gain loss and linear attenuation common to existing loudness models for the hearing impaired (HI). Here, loudness perception was measured in eight normal-hearing (NH) and eight HI listeners in conditions ranging from monaural narrowband to binaural broadband, to systematically assess spectral and binaural loudness summation and their interdependence. A binaural summation stage was devised, with empirical monaural loudness judgments serving as input. While NH listeners showed binaural inhibition, in line with the literature, binaural summation and its inter-subject variability were increased in HI listeners, indicating the need for individualized binaural summation. Toward ILMs, a recent monaural loudness model was extended with the suggested binaural stage, and the number and type of additional parameters required to describe and predict individual loudness were assessed. In addition to one parameter for the individual amount of binaural summation, a bandwidth-dependent monaural parameter was required to successfully account for individual spectral summation.
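The abstract does not reproduce the model equations. Purely as an illustration of what a one-parameter binaural summation stage could look like, the following sketch combines two monaural loudness values with a power-sum rule; the function name, the power-sum form, and the parameter `p` are assumptions for illustration, not the authors' model:

```python
def binaural_loudness(left_sone, right_sone, p=1.5):
    """Hypothetical one-parameter binaural summation stage.

    Combines two monaural loudness values (in sone) with a power sum:
    p = 1 gives perfect summation (binaural loudness equals the sum of
    the monaural values), while p > 1 gives partial summation, i.e.
    binaural inhibition as reported for NH listeners. Fitting p per
    listener would capture an individual amount of binaural summation.
    """
    return (left_sone ** p + right_sone ** p) ** (1.0 / p)

# Diotic tone with a monaural loudness of 1 sone in each ear:
print(binaural_loudness(1.0, 1.0, p=1.0))  # 2.0 (perfect summation)
print(binaural_loudness(1.0, 1.0, p=1.5))  # ~1.59 (partial summation)
print(binaural_loudness(1.0, 0.0, p=1.5))  # 1.0 (monaural unchanged)
```

A single fitted exponent of this kind is one common way to interpolate between perfect summation and inhibition; the actual stage in the paper may be parameterized differently.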



Author(s): Feike de Graaff, Robert H. Eikelboom, Cathy Sucher, Sophia E. Kramer, Cas Smits


2020 · Vol 14 · Author(s): Tobias Balkenhol, Elisabeth Wallhäusser-Franke, Nicole Rotter, Jérôme J. Servais

Cochlear implants (CI) improve hearing for the severely hearing impaired. With an extension of implantation candidacy, many CI listeners today use a hearing aid on the contralateral ear, referred to as bimodal listening. It is uncertain, however, whether the brains of bimodal listeners can combine the electrical and acoustical sound information, and how much CI experience is needed to achieve improved performance with bimodal listening. Patients with bilateral sensorineural hearing loss undergoing implant surgery were tested on their ability to understand speech in quiet and in noise, before and again 3 and 6 months after provision of a CI. Results of these bimodal listeners were compared to age-matched, normal-hearing controls (NH). The benefit of adding a contralateral hearing aid was calculated in terms of head shadow, binaural summation, binaural squelch, and spatial release from masking from the results of a sentence recognition test. Beyond that, bimodal benefit was estimated from the differences in amplitudes and latencies of the N1, P2, and N2 potentials of the brain's auditory evoked response (AEP) to speech. Data from fifteen participants contributed to the results. CI provision resulted in significant improvement of speech recognition with the CI ear, and in taking advantage of the head shadow effect for understanding speech in noise. Some amount of binaural processing was suggested by a positive binaural summation effect 6 months post-implantation that correlated significantly with the symmetry of pure tone thresholds. Moreover, a significant negative correlation existed between binaural summation and the latency of the P2 potential. With CI experience, the morphology of the N1 and P2 potentials in the AEP response approximated that of NH, whereas N2 remained different. Significant AEP differences between monaural and binaural processing were shown for NH and for bimodal listeners 6 months post-implantation. Although the grand-averaged difference in N1 amplitude between monaural and binaural listening was similar for NH and the bimodal group, source localization showed group-dependent differences in auditory and speech-relevant cortex, suggesting different processing in the bimodal listeners.
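The four benefit measures named in this abstract are conventionally expressed as differences between speech reception thresholds (SRTs, in dB SNR) measured in different device/noise configurations. The exact test geometry used in the study is not given here, so the configuration labels and formulas below follow one common convention and are illustrative only:

```python
def bimodal_benefits(srt):
    """Compute bimodal benefit measures (in dB) from SRTs.

    `srt` maps (devices, noise_side) -> SRT in dB SNR; a lower SRT
    means better performance, so a positive difference is a benefit.
    The keys ("ci" alone vs "bimodal"; noise from the "front", the
    "ci_side", or the "ha_side") are hypothetical labels.
    """
    return {
        # Benefit of moving noise from the CI side to the far side (CI alone).
        "head_shadow": srt[("ci", "ci_side")] - srt[("ci", "ha_side")],
        # Benefit of adding the hearing aid with co-located speech and noise.
        "summation": srt[("ci", "front")] - srt[("bimodal", "front")],
        # Benefit of adding the hearing aid on the side of the noise.
        "squelch": srt[("ci", "ha_side")] - srt[("bimodal", "ha_side")],
        # Benefit of spatially separating speech and noise (bimodal).
        "srm": srt[("bimodal", "front")] - srt[("bimodal", "ci_side")],
    }

# Made-up example SRTs (dB SNR):
srt = {
    ("ci", "front"): 2.0, ("ci", "ci_side"): 6.0, ("ci", "ha_side"): -1.0,
    ("bimodal", "front"): 1.0, ("bimodal", "ci_side"): -2.0,
    ("bimodal", "ha_side"): -2.5,
}
benefits = bimodal_benefits(srt)
# head_shadow = 7.0, summation = 1.0, squelch = 1.5, srm = 3.0 (dB)
```

Note that definitions of squelch and spatial release from masking vary across studies, so a comparison with the paper's numbers would require its exact measurement geometry.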



2020 · Vol I (1) · pp. 15-18 · Author(s): Georgios K Panagiotopoulos

In the past it was mistakenly believed that Unilateral Sensorineural Hearing Loss (USNHL) or even Single Sided Deafness (SSD) could not have a notable negative effect on the average adult, and that a child with USNHL would eventually develop typically and adequately, with no particular challenges. Today, it is well established that both children and adults with USNHL and SSD have more difficulty locating sound sources than their normal-hearing peers, attributable to the concomitant loss of the cues used for localization: interaural time differences and interaural intensity differences, the latter especially for high-frequency sounds. Moreover, USNHL and SSD patients lack the binaural benefits that permit people with bilateral Normal Hearing (NH) to perform relatively well in challenging listening environments. These benefits encompass binaural summation, which improves speech perception, and binaural release from masking, which facilitates word recognition in noise. Emerging treatment strategies, involving various types of amplification, Assistive Listening Devices (ALDs), and Cochlear Implantation, can greatly broaden our overall approach to USNHL and/or SSD. Nevertheless, the most recent evidence indicates that prompt and adequate intervention is crucial to promote optimal outcomes.
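The two localization cues mentioned above can be made concrete with a short sketch: the interaural time difference (ITD) can be estimated from the lag of the cross-correlation peak between the two ear signals, and the interaural intensity (level) difference from their energy ratio. This is a generic signal-processing illustration, not anything taken from the article:

```python
import numpy as np

def interaural_cues(left, right, fs):
    """Estimate ITD (seconds) and interaural level difference (dB)
    from a pair of equal-length ear signals sampled at rate fs.

    ITD: lag of the cross-correlation peak; positive means the
    right-ear signal lags the left, i.e. the source is on the left.
    ILD: 10*log10 of the right/left energy ratio.
    """
    n = len(left)
    lag = np.argmax(np.correlate(right, left, mode="full")) - (n - 1)
    itd = lag / fs
    ild = 10 * np.log10(np.sum(right ** 2) / np.sum(left ** 2))
    return itd, ild

# A noise burst reaching the right ear 24 samples later and 6 dB softer:
fs = 48000
rng = np.random.default_rng(0)
left = rng.standard_normal(4096)
right = 0.5 * np.roll(left, 24)
itd, ild = interaural_cues(left, right, fs)
# itd = 0.0005 s (0.5 ms), ild ~ -6.02 dB
```

A listener with USNHL effectively loses one of the two inputs to this comparison, which is why both cues degrade together.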



2020 · Vol 10 (1) · Author(s): D. H. Baker, G. Vilidaite, E. McClarnon, E. Valkova, A. Bruno, ...


2018 · Author(s): D.H. Baker, G. Vilidaite, E. McClarnon, E. Valkova, A. Bruno, ...

The brain combines sounds from the two ears, but what is the algorithm used to achieve this summation of signals? Here we combine psychophysical amplitude modulation discrimination and steady-state electroencephalography (EEG) data to investigate the architecture of binaural combination for amplitude-modulated tones. Discrimination thresholds followed a ‘dipper’ shaped function of pedestal modulation depth, and were consistently lower for binaural than monaural presentation of modulated tones. The EEG responses were greater for binaural than monaural presentation of modulated tones, and when a masker was presented to one ear, it produced only weak suppression of the response to a signal presented to the other ear. Both data sets were well-fit by a computational model originally derived for visual signal combination, but with suppression between the two channels (ears) being much weaker than in binocular vision. We suggest that the distinct ecological constraints on vision and hearing can explain this difference, if it is assumed that the brain avoids over-representing sensory signals originating from a single object. These findings position our understanding of binaural summation in a broader context of work on sensory signal combination in the brain, and delineate the similarities and differences between vision and hearing.
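The paper's fitted model is not reproduced here, but the general shape of a two-channel gain-control combination model of this family can be sketched as follows. The parameter values, and in particular the weight `w` of cross-channel suppression, are made-up placeholders: `w` near 1 would resemble binocular vision, while `w << 1` mimics the weak interaural suppression this abstract reports:

```python
def combined_response(mL, mR, w=0.1, p=2.4, q=2.0, Z=1.0):
    """Generic two-channel gain-control signal combination (sketch).

    Each channel's excitation (modulation depth mL, mR) is raised to
    power p and divisively normalized by a constant Z, its own
    activity, and the other channel's activity weighted by w; the two
    normalized channel responses are then summed. Having p > q gives
    the accelerating-then-compressive response that produces
    'dipper'-shaped discrimination functions.
    """
    rL = mL ** p / (Z + mL ** q + w * mR ** q)
    rR = mR ** p / (Z + mR ** q + w * mL ** q)
    return rL + rR

mono = combined_response(1.0, 0.0)            # one ear stimulated
bino = combined_response(1.0, 1.0)            # both ears: larger response
strong = combined_response(1.0, 1.0, w=1.0)   # vision-like suppression
# With weak suppression (w = 0.1) the binaural response is nearly the
# sum of two monaural ones; with w = 1.0 the advantage shrinks.
```

This illustrates the abstract's central point: the same architecture covers both senses, and only the strength of cross-channel suppression needs to differ.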



2018 · Vol 57 (7) · pp. 493-501 · Author(s): Vishakha W. Rawool, Madaline Parrill


2017 · Vol 141 (5) · pp. 3636-3636 · Author(s): Gregory M. Ellis, Pavel Zahorik


2016 · Vol 37 (5) · pp. 499-503 · Author(s): Bradley W. Kesser, Erika D. Cole, Lincoln C. Gray


2015 · Vol 138 (3) · pp. 1889-1890 · Author(s): Colin J. Novak, Jeremy Charbonneau

