Sound Localization Under Perturbed Binaural Hearing

2007 ◽  
Vol 97 (1) ◽  
pp. 715-726 ◽  
Author(s):  
Marc M. Van Wanrooij ◽  
A. John Van Opstal

This paper reports on the acute effects of a monaural plug on directional hearing in the horizontal (azimuth) and vertical (elevation) planes of human listeners. Sound localization behavior was tested with rapid head-orienting responses toward brief high-pass filtered (>3 kHz; HP) and broadband (0.5–20 kHz; BB) noises, with sound levels between 30 and 60 dB, A-weighted (dBA). To deny listeners any consistent azimuth-related head-shadow cues, stimuli were randomly interleaved. A plug immediately degraded azimuth performance, as evidenced by a sound level–dependent shift (“bias”) of responses contralateral to the plug, and a level-dependent change in the slope of the stimulus–response relation (“gain”). Although the azimuth bias and gain were highly correlated, they could not be predicted from the plug's acoustic attenuation. Interestingly, listeners performed best for low-intensity stimuli at their normal-hearing side. These data demonstrate that listeners rely on monaural spectral cues for sound-source azimuth localization as soon as the binaural difference cues break down. The elevation response components were also affected by the plug: elevation gain depended on both stimulus azimuth and sound level and, as for azimuth, localization was best for low-intensity stimuli at the hearing side. Our results show that the neural computation of elevation incorporates a binaural weighting process that relies on the perceived, rather than the actual, sound-source azimuth. We conjecture that sound localization ensues from a weighting of all acoustic cues for both azimuth and elevation, in which the weights may be partially determined, and rapidly updated, by the reliability of the particular cue.
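
The azimuth “gain” and “bias” referred to above are the slope and offset of a linear fit of response azimuth against target azimuth, typically computed per sound level. The sketch below illustrates such a fit on simulated data; it is not the authors' analysis code.

```python
# Minimal sketch, assuming simulated data: estimating the azimuth "gain"
# (slope) and "bias" (offset) of the stimulus-response relation by ordinary
# least squares, as is typically done per sound level.
import numpy as np

def gain_and_bias(target_az_deg, response_az_deg):
    """Fit response = gain * target + bias and return (gain, bias)."""
    gain, bias = np.polyfit(target_az_deg, response_az_deg, deg=1)
    return gain, bias

# Simulated plugged listener: responses compressed (gain < 1) and shifted
# toward the side contralateral to the plug (nonzero bias).
rng = np.random.default_rng(0)
targets = rng.uniform(-75, 75, size=100)                  # target azimuths (deg)
responses = 0.6 * targets + 20 + rng.normal(0, 5, size=100)
print(gain_and_bias(targets, responses))                  # roughly (0.6, 20)
```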

2004 ◽  
Vol 61 (7) ◽  
pp. 1057-1061 ◽  
Author(s):  
Arthur N. Popper ◽  
Dennis T.T. Plachta ◽  
David A. Mann ◽  
Dennis Higgs

Abstract A number of species of clupeid fish, including blueback herring, American shad, and gulf menhaden, can detect and respond to ultrasonic sounds up to at least 180 kHz, whereas other clupeids, including bay anchovies and Spanish sardines, do not appear to detect sounds above about 4 kHz. Although the location for ultrasound detection has not been proven conclusively, there is a growing body of physiological, developmental, and anatomical evidence suggesting that one end organ of the inner ear, the utricle, is likely to be the detector. The utricle is a region of the inner ear that is very similar in all vertebrates studied to date, except for clupeid fish, where it is highly specialized. Behavioural studies of the responses of American shad to ultrasound demonstrate that they show a graded series of responses depending on the sound level and, to a lesser degree, on the frequency of the stimulus. Low-intensity stimuli elicit a non-directional movement of the fish, whereas somewhat higher sound levels elicit a directional movement away from the sound source. Still higher level sounds produce a “wild” chaotic movement of the fish. These responses do not occur until shad have developed the adult utricle that has a three-part sensory epithelium. We speculate that the response of the American shad (and, presumably, other clupeids that can detect ultrasound) to ultrasound evolved to help these species detect and avoid a major predator – echolocating cetaceans. As dolphins echolocate, the fish are able to hear the sound at over 100 m. If the dolphins detect the fish and come closer, the nature of the behavioural response of the fish changes in order to exploit different avoidance strategies and lower the chance of being eaten by the predators.


2016 ◽  
Vol 115 (1) ◽  
pp. 193-207 ◽  
Author(s):  
Mitchell L. Day ◽  
Bertrand Delgutte

At lower levels of sensory processing, the representation of a stimulus feature in the response of a neural population can vary in complex ways across different stimulus intensities, potentially changing the amount of feature-relevant information in the response. How higher-level neural circuits could implement feature decoding computations that compensate for these intensity-dependent variations remains unclear. Here we focused on neurons in the inferior colliculus (IC) of unanesthetized rabbits, whose firing rates are sensitive to both the azimuthal position of a sound source and its sound level. We found that the azimuth tuning curves of an IC neuron at different sound levels tend to be linear transformations of each other. These transformations could either increase or decrease the mutual information between source azimuth and spike count with increasing level for individual neurons, yet population azimuthal information remained constant across the absolute sound levels tested (35, 50, and 65 dB SPL), as inferred from the performance of a maximum-likelihood neural population decoder. We harnessed evidence of level-dependent linear transformations to reduce the number of free parameters in the creation of an accurate cross-level population decoder of azimuth. Interestingly, this decoder predicts monotonic azimuth tuning curves, broadly sensitive to contralateral azimuths, in neurons at higher levels in the auditory pathway.
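
The maximum-likelihood population decoder mentioned above can be illustrated with an independent-Poisson spike-count model. The sketch below uses invented sigmoidal tuning curves and a hypothetical population; it is a simplified illustration, not the decoder fitted to the recorded IC data.

```python
# Simplified sketch of a maximum-likelihood population decoder of azimuth
# from spike counts, assuming independent Poisson firing. Tuning curves
# and population parameters here are invented, not recorded data.
import numpy as np

azimuths = np.linspace(-90, 90, 37)                       # candidate azimuths (deg)

def tuning(az, midpoint, slope, r_max=50.0, r_min=2.0):
    """Monotonic (sigmoidal) azimuth tuning curve, spikes/s."""
    return r_min + (r_max - r_min) / (1.0 + np.exp(-(az - midpoint) / slope))

# Hypothetical population: neurons with different midpoints and slopes.
rng = np.random.default_rng(1)
midpoints = rng.uniform(-60, 60, size=40)
slopes = rng.uniform(10, 30, size=40)
rates = np.array([tuning(azimuths, m, s) for m, s in zip(midpoints, slopes)])  # (neurons, azimuths)

def decode_ml(spike_counts, duration_s=0.2):
    """Return the azimuth maximizing the Poisson log likelihood."""
    lam = rates * duration_s                              # expected counts per azimuth
    loglik = (spike_counts[:, None] * np.log(lam) - lam).sum(axis=0)
    return azimuths[np.argmax(loglik)]

true_az = 30.0
counts = rng.poisson(np.array([tuning(true_az, m, s) for m, s in zip(midpoints, slopes)]) * 0.2)
print(decode_ml(counts))                                  # close to 30
```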


2021 ◽  
Vol 70 (2) ◽  
pp. 65-73
Author(s):  
Miroslav Veselý ◽  
Břetislav Gál ◽  
Jiří Hložek ◽  
František Silný ◽  
Jan Hanák

Overview Introduction: Bonebridge is an implantable direct bone-conduction hearing system. The aim of this work is to present pilot results of rehabilitation of single-sided deafness using this system. Material and methods: Analysis of three patients with single-sided deafness who underwent Bonebridge (BB) implantation in 2018 at the Department of Otorhinolaryngology and Head and Neck Surgery of St. Anna Hospital in Brno. Evaluation parameters: the Bern Benefit in Single-Sided Deafness Questionnaire, an experimental examination of directional hearing, and a hearing-in-noise test. Results: Questionnaire: On a visual analog scale ranging from –5 to +5 points, the average rating was +2.4 points, so listening was rated as easier with Bonebridge than without a hearing aid. The ability to locate the sound source was rated at 4 points by one respondent and at 0–1 points by the other two. Examination of spatial hearing: without a hearing aid, the ability to locate the sound source was significantly impaired in all examined patients. With Bonebridge, at a tolerated deviation of 45°, the success rate of sound-source localization was 75–100% over the 0–360° range in the horizontal plane. Hearing-in-noise test: the greatest improvement in intelligibility (by 30–100%) was achieved with Bonebridge at an SNR of –5 dB. Conclusion: Bonebridge cannot restore binaural hearing in patients with single-sided deafness; it is a pseudo-binaural correction. Like other implantable bone-conduction systems, Bonebridge is beneficial for patients with single-sided deafness in a variety of listening situations. Experimental audiological tests demonstrated the contribution of Bonebridge to understanding sentences in acoustic noise and to improving the ability to locate the sound source. However, validation of the results would require a larger number of probands. Keywords: single-sided deafness – BAHD – Bonebridge – bone conduction hearing implant – hearing in noise – directional hearing test


2014 ◽  
Vol 111 (5) ◽  
pp. 930-938 ◽  
Author(s):  
Michael Kyweriga ◽  
Whitney Stewart ◽  
Michael Wehr

How does the brain accomplish sound localization with invariance to total sound level? Sensitivity to interaural level differences (ILDs) is first computed at the lateral superior olive (LSO) and is observed at multiple levels of the auditory pathway, including the central nucleus of the inferior colliculus (ICC) and auditory cortex. In the LSO, this ILD sensitivity is level-dependent, such that ILD response functions shift toward the ipsilateral (excitatory) ear with increasing sound level. Thus, early in the processing pathway, changes in firing rate could indicate changes in sound location, sound level, or both. In the ICC, while ILD responses can shift toward either ear in individual neurons, there is no net ILD response shift at the population level. In behavioral studies of human sound localization acuity, ILD sensitivity is invariant to increasing sound levels. Level-invariant sound localization would therefore suggest a transformation in level sensitivity between the LSO and the perception of sound sources. Whether this transformation is completed at the level of the ICC or continues at higher levels remains unclear. It also remains unknown whether perceptual sound localization is level-invariant in rats, as it is in humans. We asked whether ILD sensitivity is level-invariant in rat auditory cortex. We performed single-unit and whole-cell recordings in rat auditory cortex under ketamine anesthesia and measured responses to white noise bursts presented through sealed earphones at a range of ILDs. Surprisingly, we found that with increasing sound levels ILD responses shifted toward the ipsilateral ear (which is typically inhibitory), regardless of whether cells preferred ipsilateral, contralateral, or binaural stimuli. Voltage-clamp recordings suggest that synaptic inhibition does not contribute substantially to this transformation in level sensitivity. We conclude that the level invariance of ILD sensitivity seen in behavioral studies is not present in rat auditory cortex.
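
The level-dependent shift described for the LSO (and found here in cortex) can be pictured with a simple parametric model: a sigmoidal ILD response function whose midpoint moves toward the ipsilateral ear as average binaural level rises. The sketch below is purely illustrative; the parameters and the linear shift rule are assumptions, not the recorded neurons' behavior.

```python
# Illustrative sketch, not fitted to data: a sigmoidal ILD response function
# whose midpoint shifts toward the ipsilateral ear as average binaural level
# (ABL) increases. All parameters are hypothetical.
import numpy as np

def ild_response(ild_db, abl_db, r_max=40.0, slope_db=5.0, shift_per_db=0.5):
    """Firing rate (spikes/s) vs. ILD (positive = ipsilateral louder).

    The midpoint is assumed to move linearly toward ipsilateral ILDs as
    ABL rises above a 50-dB reference level.
    """
    midpoint = shift_per_db * (abl_db - 50.0)
    return r_max / (1.0 + np.exp(-(ild_db - midpoint) / slope_db))

ilds = np.arange(-30, 31, 10)
for abl in (40, 60, 80):
    print(abl, np.round(ild_response(ilds, abl), 1))
```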


Energies ◽  
2021 ◽  
Vol 14 (12) ◽  
pp. 3446
Author(s):  
Muhammad Usman Liaquat ◽  
Hafiz Suliman Munawar ◽  
Amna Rahman ◽  
Zakria Qadir ◽  
Abbas Z. Kouzani ◽  
...  

Sound localization is a field of signal processing that deals with identifying the origin of a detected sound signal, which involves determining the direction and distance of the sound source. Useful applications exist in speech enhancement, communication, radar, and the medical field. The experimental arrangement requires microphone arrays to record the sound signal. Some methods use ad hoc microphone arrays because of their demonstrated advantages over other arrays. In this research project, existing sound localization methods were reviewed to analyze the advantages and disadvantages of each. A novel sound localization routine was formulated that combines the direction of arrival (DOA) of the sound signal with location estimation in three-dimensional space to precisely locate a sound source. The experimental arrangement consists of four microphones and a single sound source. Previously, sound sources have been localized using six or more microphones, and localization precision has been shown to increase with the number of microphones. In this research, however, we minimized the number of microphones to reduce the complexity of the algorithm and the computation time. The method is novel in the field of sound source localization in that it uses fewer resources while providing results on par with more complex methods that require more microphones and additional tools to locate the sound source. The average accuracy of the system is found to be 96.77%, with an error factor of 3.8%.
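
A common building block for DOA estimation with small microphone arrays is generalized cross-correlation with phase transform (GCC-PHAT): estimate the time difference of arrival (TDOA) between a microphone pair, then convert it to an angle using the known spacing. The sketch below is a generic illustration of that step with synthetic signals and an assumed spacing; it is not the authors' algorithm.

```python
# Generic sketch (not the paper's algorithm): TDOA estimation between two
# microphones via GCC-PHAT, converted to a far-field azimuth for a known
# microphone spacing. Signals and spacing are synthetic/assumed.
import numpy as np

def gcc_phat_tdoa(sig, ref, fs):
    """Return the estimated delay (s) of `sig` relative to `ref`."""
    n = len(sig) + len(ref)
    cross = np.fft.rfft(sig, n=n) * np.conj(np.fft.rfft(ref, n=n))
    cross /= np.abs(cross) + 1e-12              # PHAT weighting: keep phase only
    cc = np.fft.irfft(cross, n=n)
    max_shift = n // 2
    cc = np.concatenate((cc[-max_shift:], cc[:max_shift + 1]))
    return (np.argmax(np.abs(cc)) - max_shift) / fs

def tdoa_to_azimuth(tau_s, mic_distance_m, c=343.0):
    """Far-field azimuth (deg) for a two-microphone pair."""
    return np.degrees(np.arcsin(np.clip(c * tau_s / mic_distance_m, -1.0, 1.0)))

# Synthetic example: the second microphone receives the signal 5 samples later.
fs = 16000
rng = np.random.default_rng(2)
ref = rng.normal(size=4096)
sig = np.concatenate((np.zeros(5), ref[:-5]))
tau = gcc_phat_tdoa(sig, ref, fs)
print(tau, tdoa_to_azimuth(tau, mic_distance_m=0.2))      # ~0.31 ms, ~32 deg
```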


2021 ◽  
Vol 10 (14) ◽  
pp. 3078
Author(s):  
Sara Akbarzadeh ◽  
Sungmin Lee ◽  
Chin-Tuan Tan

In multi-speaker environments, cochlear implant (CI) users may attend to a target sound source in a different manner from normal-hearing (NH) individuals during a conversation. This study investigated the effect of conversational sound levels on the mechanisms of selective auditory attention adopted by CI and NH listeners and how those levels affect their daily conversation. Nine CI users (five bilateral, three unilateral, and one bimodal) and eight NH listeners participated in this study. Behavioral speech recognition scores were collected using a matrix sentence test, and neural tracking of the speech envelope was recorded using electroencephalography (EEG). Speech stimuli were presented at three different levels (75, 65, and 55 dB SPL) in the presence of two maskers from three spatially separated loudspeakers. Different combinations of assisted/impaired hearing modes were evaluated for CI users, and the outcomes were analyzed in three categories: electric hearing only, acoustic hearing only, and electric + acoustic hearing. Our results showed that increasing the conversational sound level degraded selective auditory attention in electric hearing. On the other hand, increasing the sound level improved selective auditory attention in the acoustic hearing group. In the NH listeners, however, increasing the sound level did not cause a significant change in auditory attention. Our results imply that the effect of sound level on selective auditory attention varies with hearing mode and that loudness control is necessary for CI users to attend to conversation with ease.
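
Neural tracking of the speech envelope, as used here, is commonly quantified by correlating the EEG with a low-pass-filtered amplitude envelope of the stimulus (often after training a linear reconstruction or forward model). The sketch below is a minimal envelope-correlation version with synthetic signals; the sampling rates, filter cutoff, and data are assumptions, not the study's pipeline.

```python
# Minimal sketch, not the study's pipeline: quantifying neural tracking as
# the Pearson correlation between an EEG channel and the low-pass-filtered
# Hilbert envelope of the speech stimulus. All signals here are synthetic.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

EEG_FS = 250        # Hz, assumed EEG sampling rate
AUDIO_FS = 16000    # Hz, assumed audio sampling rate

def speech_envelope(audio, audio_fs=AUDIO_FS, out_fs=EEG_FS):
    """Broadband amplitude envelope, low-pass filtered (<8 Hz) and resampled."""
    env = np.abs(hilbert(audio))
    b, a = butter(4, 8.0 / (audio_fs / 2), btype="low")
    env = filtfilt(b, a, env)
    idx = np.round(np.arange(0, len(env), audio_fs / out_fs)).astype(int)
    return env[idx[idx < len(env)]]

def tracking_score(eeg, envelope):
    """Pearson correlation between EEG and stimulus envelope."""
    n = min(len(eeg), len(envelope))
    return np.corrcoef(eeg[:n], envelope[:n])[0, 1]

# Synthetic demo: noise carrier modulated at ~3 Hz, EEG that weakly follows it.
rng = np.random.default_rng(3)
t = np.arange(AUDIO_FS * 10) / AUDIO_FS
audio = rng.normal(size=t.size) * (1.0 + 0.8 * np.sin(2 * np.pi * 3 * t))
env = speech_envelope(audio)
eeg = 0.3 * env + rng.normal(scale=env.std(), size=env.size)
print(tracking_score(eeg, env))                            # positive correlation
```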


2020 ◽  
Vol 10 (1) ◽  
Author(s):  
Piotr F. Czempik ◽  
Agnieszka Jarosińska ◽  
Krystyna Machlowska ◽  
Michał P. Pluta

Abstract Sleep disruption is common in patients in the intensive care unit (ICU). The aim of the study was to measure sound levels during sleep-protected time in the ICU, determine sources of sound, and assess the impact of sound levels and patient-related factors on the duration and quality of patients' sleep. The study was performed between 2018 and 2019. A commercially available smartphone application was used to measure ambient sound levels. Sleep duration was measured using the Patient's Sleep Behaviour Observational Tool. Sleep quality was assessed using the Richards-Campbell Sleep Questionnaire (RCSQ). The study population comprised 18 (58%) men and 13 (42%) women. There were numerous sources of sound. The median duration of sleep was 5 (IQR 3.5–5.7) hours. The median score on the RCSQ was 49 (IQR 28–71) out of 100 points. Sound levels were negatively correlated with sleep duration. The cut-off peak sound level, above which sleep duration was shorter than the mean sleep duration in the cohort, was 57.9 dB. Simple smartphone applications can be useful for estimating sound levels in the ICU. There are numerous sources of sound in the ICU, and individual units should identify and eliminate their own. Sources of sound producing peak sound levels above 57.9 dB may lead to shorter sleep and should be eliminated from the ICU environment. Sound levels had no effect on sleep quality.
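
The reported negative association between sound level and sleep duration corresponds to a simple across-patient correlation analysis. The sketch below illustrates that kind of analysis on hypothetical data; it is not the study's dataset or statistical code, and the 57.9 dB cutoff comes from the paper, not from this simulation.

```python
# Illustrative sketch on hypothetical data (not the study's records):
# correlating per-patient peak sound level with observed sleep duration.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(4)
n_patients = 31
peak_db = rng.uniform(45, 75, size=n_patients)            # peak sound level (dB)
# Assume sleep duration drops as peak level rises, plus individual variation.
sleep_h = np.clip(8.5 - 0.08 * peak_db + rng.normal(0, 0.8, n_patients), 0, None)

rho, p = spearmanr(peak_db, sleep_h)
print(f"Spearman rho = {rho:.2f}, p = {p:.4f}")            # expect rho < 0
print(f"mean sleep duration = {sleep_h.mean():.1f} h")
```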


2000 ◽  
Author(s):  
C. Gibbons ◽  
R. N. Miles

Abstract A miniature silicon condenser microphone diaphragm has been designed that exhibits good predicted directionality, sensitivity, and reliability. The design is based on the structure of the ear of the fly Ormia ochracea, which achieves highly directional hearing through mechanical coupling of the eardrums. The diaphragm, measuring 1 mm × 2 mm × 20 microns, is intended to be fabricated from polysilicon through microelectromechanical micromachining. It was designed using the finite-element method in ANSYS in order to build the necessary mode shapes and frequencies into the mechanical behavior of the design. Through postprocessing of the ANSYS data, the diaphragm's response to an arbitrary sound source, its sensitivity, its robustness, and its Articulation Index - Directivity Index (AI-DI) were predicted. The design should yield a sensitivity as high as 100 mV/Pa and an AI-DI of 4.764, with a Directivity Index as high as 6 between 1.5 and 5 kHz. The diaphragm structure is predicted to be able to withstand a sound pressure level of 151.74 dB. The sound level that would result in collapse of the capacitive sensor is 129.9 dB. The equivalent sound level due to the self-noise of the microphone is predicted to be 30.8 dBA.
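
As a quick sanity check on the quoted figures, sound pressure level in dB SPL (re 20 µPa) converts to pascals as p = 20 µPa · 10^(SPL/20); the snippet below applies this standard conversion to the 151.74 dB withstand level and the 129.9 dB collapse level.

```python
# Worked conversion of the quoted dB SPL figures to sound pressure in pascals
# (re 20 micropascals); this is standard arithmetic, not data from the paper.
P_REF = 20e-6  # Pa

for spl_db in (151.74, 129.9):
    pressure_pa = P_REF * 10 ** (spl_db / 20)
    print(f"{spl_db} dB SPL ≈ {pressure_pa:.1f} Pa")
# Prints roughly 772.7 Pa and 62.5 Pa, respectively.
```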


PEDIATRICS ◽  
1975 ◽  
Vol 56 (4) ◽  
pp. 617-617
Author(s):  
Gösta Blennow ◽  
Nils W. Svenningsen ◽  
Bengt Almquist

Recently we reported results from studies of incubator noise levels.1 It was found that in certain types of incubators the noise was considerable, and attention was called to the sound level in the construction of new incubators. Recently we had the opportunity to study an improved model of Isolette Infant Incubator Model C-86 where the mechanical noise from the electrically powered motor has been partially eliminated. With this modification it has been possible to lower the low-frequency sound levels to a certain degree in comparison to the levels registered in our study.


2016 ◽  
Vol 116 (6) ◽  
pp. 2550-2563 ◽  
Author(s):  
Calum Alex Grimsley ◽  
David Brian Green ◽  
Shobhana Sivaramakrishnan

The coding of sound level by ensembles of neurons improves the accuracy with which listeners identify how loud a sound is. In the auditory system, the rate at which neurons fire in response to changes in sound level is shaped by local networks. Voltage-gated conductances alter local output by regulating neuronal firing, but their role in modulating responses to sound level is unclear. We tested the effects of L-type calcium channels (CaL: CaV1.1–1.4) on sound-level coding in the central nucleus of the inferior colliculus (ICC) in the auditory midbrain. We characterized the contribution of CaL to the total calcium current in brain slices and then examined its effects on rate-level functions (RLFs) in vivo using single-unit recordings in awake mice. CaL is a high-threshold current and comprises ∼50% of the total calcium current in ICC neurons. In vivo, CaL activates at sound levels that evoke high firing rates. In RLFs that increase monotonically with sound level, CaL boosts spike rates at high sound levels and increases the maximum firing rate achieved. In different populations of RLFs that change nonmonotonically with sound level, CaL either suppresses or enhances firing at sound levels that evoke maximum firing. CaL multiplies the gain of monotonic RLFs with dynamic range and divides the gain of nonmonotonic RLFs with the width of the RLF. These results suggest that a single broad class of calcium channels activates enhancing and suppressing local circuits to regulate the sensitivity of neuronal populations to sound level.
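
The multiplicative and divisive gain changes described above can be pictured with a simple parametric rate-level function (RLF). The sketch below is a schematic illustration with invented parameters; it is not the authors' model or data.

```python
# Schematic sketch with invented parameters (not the authors' model): a
# sigmoidal monotonic rate-level function and a hypothetical divisive gain
# change of the kind attributed to the CaL contribution.
import numpy as np

def rlf(level_db, r_max=80.0, threshold_db=40.0, slope_db=8.0, spont=2.0):
    """Monotonic rate-level function: firing rate (spikes/s) vs. level (dB SPL)."""
    return spont + r_max / (1.0 + np.exp(-(level_db - threshold_db) / slope_db))

levels = np.arange(0, 91, 10)
control = rlf(levels)
# Hypothetical 40% reduction of the driven (above-spontaneous) response,
# i.e., a gain change rather than a threshold shift.
reduced_gain = 2.0 + 0.6 * (control - 2.0)
print(np.round(control, 1))
print(np.round(reduced_gain, 1))
```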

