Creating a sound field for manipulating interaural level differences separately or independently of the interaural time differences and vice versa

1988 ◽  
Vol 83 (S1) ◽  
pp. S121-S121
Author(s):  
George F. Kuhn
2021 ◽  
Vol 25 ◽  
pp. 233121652110311
Author(s):  
Sam Watson ◽  
Søren Laugesen ◽  
Bastian Epp

An aided sound-field auditory steady-state response (ASSR) has the potential to be used to objectively validate hearing-aid (HA) fittings in clinics. Each aided ear should ideally be tested independently, but it is suspected that binaural testing may be used by clinics to reduce test time. This study simulates dichotic ASSR sound-field conditions to examine the risk of making false judgments due to unchecked binaural effects. Unaided ASSRs were recorded with a clinical two-channel electroencephalography (EEG) system for 15 normal-hearing subjects using a three-band CE-Chirp® stimulus. It was found that the noise-corrected power of a response harmonic can be suppressed by up to 10 dB by introducing large interaural time differences equal to half the period of the stimulus envelope, which may occur in unilateral HA users. These large interaural time differences also changed the distribution of ASSR power across the scalp, resulting in dramatically altered topographies. This would lead to considerably lower measured response power and possibly nondetections, suggesting that even well-fitted HAs could be judged poorly fitted (a false referral), whereas monaural ASSR tests would pass. No effect was found for simulated lateralizations of the stimulus, which is beneficial for a proposed aided ASSR approach. Full-scalp ASSR recordings match previously found 40 Hz topographies but demonstrate suppression of cortical ASSR sources when using stimuli in interaural envelope antiphase.
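The worst-case interaural delay described above (an ITD equal to half the envelope period) can be sketched numerically. This is a minimal illustration, not the study's analysis; the envelope rates below are assumed for illustration, as the abstract does not state the exact CE-Chirp band rates:

```python
def worst_case_itd_ms(envelope_rate_hz: float) -> float:
    """ITD (ms) equal to half the stimulus envelope period: the delay that
    puts the two ears' envelopes in antiphase and maximally suppresses
    a dichotic ASSR."""
    period_ms = 1000.0 / envelope_rate_hz
    return period_ms / 2.0

# Hypothetical envelope rates for illustration
for rate in (40.0, 90.0):
    print(f"{rate:.0f} Hz envelope -> antiphase ITD of {worst_case_itd_ms(rate):.2f} ms")
```

For a 40 Hz envelope this gives 12.5 ms, far beyond acoustic head delays but achievable via hearing-aid processing latency, which is why the abstract flags unilateral HA users.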


2019 ◽  
Vol 9 (6) ◽  
pp. 1226 ◽  
Author(s):  
Thomas McKenzie ◽  
Damian Murphy ◽  
Gavin Kearney

Ambisonics is a spatial audio technique appropriate for dynamic binaural rendering due to its sound field rotation and transformation capabilities, which have made it popular for virtual reality applications. An issue with low-order Ambisonics is that interaural level differences (ILDs) are often reproduced with lower values when compared to head-related impulse responses (HRIRs), which reduces lateralization and spaciousness. This paper introduces a method of Ambisonic ILD Optimization (AIO), a pre-processing technique to bring the ILDs produced by virtual loudspeaker binaural Ambisonic rendering closer to those of HRIRs. AIO is evaluated objectively for Ambisonic orders up to fifth order against a reference dataset of HRIRs for all locations on the sphere, via estimated ILD and spectral difference, and perceptually through listening tests using both simple and complex scenes. Results show that AIO produces an overall improvement for all tested orders of Ambisonics, though the benefits are greatest at first and second order.
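The "estimated ILD" metric used for the objective evaluation can be illustrated with a broadband energy-ratio estimate between the two ear signals. This is a generic sketch, not the paper's exact estimator (which may be frequency-weighted); the function name and toy signals are hypothetical:

```python
import numpy as np

def estimated_ild_db(left: np.ndarray, right: np.ndarray) -> float:
    """Broadband ILD estimate: ratio of left- to right-ear signal energy, in dB.
    Positive values mean the left ear is louder."""
    e_left = np.sum(left ** 2)
    e_right = np.sum(right ** 2)
    return 10.0 * np.log10(e_left / e_right)

# Toy binaural pair: right channel attenuated by 6 dB relative to left
rng = np.random.default_rng(0)
sig = rng.standard_normal(1024)
print(estimated_ild_db(sig, sig * 10 ** (-6 / 20)))  # ≈ 6.0 dB
```

Comparing such estimates between a low-order Ambisonic rendering and the reference HRIRs quantifies the ILD under-reproduction that AIO is designed to correct.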


1969 ◽  
Vol 12 (1) ◽  
pp. 5-38 ◽  
Author(s):  
Donald D. Dirks ◽  
Richard H. Wilson

A series of five experiments was conducted to investigate the effects of spatial separation of speakers on the intelligibility of spondaic and PB words in noise and the identification of synthetic sentences in noise and competing message. Conditions in which the spatial location of the speakers produced interaural time differences ranked highest in intelligibility. The rank order of other conditions was dependent on the S/N ratio at the monaural near ear. Separations of only 10° between the speech and noise sources resulted in measurable changes in intelligibility. The binaural intelligibility scores were enhanced substantially over the monaural near ear results in conditions where an interaural time difference was present. This effect was more pronounced when spondaic words or sentences were used rather than PB words. The implications of this result were related to the interaural time difference and the frequency range of the critical information in the primary message. Although the initial experiments were facilitated by recording through an artificial head, almost identical results were obtained in the final experiment when subjects were tested in the sound field.


2011 ◽  
Vol 106 (2) ◽  
pp. 974-985 ◽  
Author(s):  
Sean J. Slee ◽  
Eric D. Young

Previous studies have demonstrated that single neurons in the central nucleus of the inferior colliculus (ICC) are sensitive to multiple sound localization cues. We investigated the hypothesis that ICC neurons are specialized to encode multiple sound localization cues that are aligned in space (as would naturally occur from a single broadband sound source). Sound localization cues including interaural time differences (ITDs), interaural level differences (ILDs), and spectral shapes (SSs) were measured in a marmoset monkey. Virtual space methods were used to generate stimuli with aligned and misaligned combinations of cues while recording in the ICC of the same monkey. Mutual information (MI) between spike rates and stimuli for aligned versus misaligned cues were compared. Neurons with best frequencies (BFs) less than ∼11 kHz mostly encoded information about a single sound localization cue, ITD or ILD depending on frequency, consistent with the dominance of ear acoustics by either ITD or ILD at those frequencies. Most neurons with BFs >11 kHz encoded information about multiple sound localization cues, usually ILD and SS, and were sensitive to their alignment. In some neurons MI between stimuli and spike responses was greater for aligned cues, while in others it was greater for misaligned cues. If SS cues were shifted to lower frequencies in the virtual space stimuli, a similar result was found for neurons with BFs <11 kHz, showing that the cue interaction reflects the spectra of the stimuli and not a specialization for representing SS cues. In general the results show that ICC neurons are sensitive to multiple localization cues if they are simultaneously present in the frequency response area of the neuron. However, the representation is diffuse in that there is not a specialization in the ICC for encoding aligned sound localization cues.
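The mutual-information comparison described above can be illustrated with a minimal plug-in MI estimator over discrete stimulus and response labels. This is a simplified stand-in for the paper's analysis, not its actual method; the function name and toy data are hypothetical:

```python
import numpy as np
from collections import Counter

def mutual_information_bits(stimuli, responses) -> float:
    """Plug-in estimate of mutual information (bits) between discrete
    stimulus labels and discretized spike-rate responses."""
    n = len(stimuli)
    p_s = Counter(stimuli)                 # stimulus marginal counts
    p_r = Counter(responses)               # response marginal counts
    p_sr = Counter(zip(stimuli, responses))  # joint counts
    mi = 0.0
    for (s, r), count in p_sr.items():
        p_joint = count / n
        # p_joint * log2( p_joint / (p(s) * p(r)) )
        mi += p_joint * np.log2(p_joint * n * n / (p_s[s] * p_r[r]))
    return mi

# Perfectly informative toy responses: 1 bit for two equiprobable stimuli
print(mutual_information_bits([0, 0, 1, 1], [5, 5, 9, 9]))  # 1.0
```

Comparing such estimates for responses to aligned versus misaligned cue combinations is the logic behind the paper's test of whether ICC neurons prefer spatially consistent cues.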


2020 ◽  
Vol 148 (4) ◽  
pp. EL307-EL313
Author(s):  
L. Papet ◽  
M. Raymond ◽  
N. Boyer ◽  
N. Mathevon ◽  
N. Grimault

2021 ◽  
Vol 9 ◽  
Author(s):  
Andrew C. Mason

Insects are often small relative to the wavelengths of sounds they need to localize, which presents a fundamental biophysical problem. Understanding novel solutions to this limitation can provide insights for biomimetic technologies. Such an approach has been successful using the fly Ormia ochracea (Diptera: Tachinidae) as a model. O. ochracea is a parasitoid species whose larvae develop as internal parasites within crickets (Gryllidae). In nature, female flies find singing male crickets by phonotaxis, despite severe constraints on directional hearing due to their small size. A physical coupling between the two tympanal membranes allows the flies to obtain information about sound source direction with high accuracy because it generates interaural time differences (ITDs) and interaural level differences (ILDs) in tympanal vibrations that are exaggerated relative to the small arrival-time difference at the two ears, which is the only directional cue available in the sound stimulus. In this study, I demonstrate that pure time differences in the neural responses to sound stimuli are sufficient for auditory directionality in O. ochracea.
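The severity of the biophysical problem can be seen by computing the acoustic arrival-time difference for a small ear spacing. This is an illustrative sketch using the standard far-field delay formula Δt = d·sin(θ)/c; the ~0.5 mm spacing for O. ochracea is an approximate figure assumed here for illustration:

```python
import math

def arrival_time_difference_us(ear_spacing_m: float, azimuth_deg: float,
                               speed_of_sound: float = 343.0) -> float:
    """Far-field acoustic arrival-time difference at the two ears,
    in microseconds: dt = d * sin(theta) / c."""
    return ear_spacing_m * math.sin(math.radians(azimuth_deg)) / speed_of_sound * 1e6

# ~0.5 mm ear spacing (approximate for O. ochracea) vs. ~0.18 m (human-like)
print(arrival_time_difference_us(0.0005, 90.0))  # ≈ 1.5 µs
print(arrival_time_difference_us(0.18, 90.0))    # ≈ 525 µs
```

A delay on the order of a microsecond is far below typical neural timing precision, which is why the mechanical coupling that amplifies it into larger ITDs and ILDs in tympanal vibration is essential.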


1969 ◽  
Vol 12 (3) ◽  
pp. 650-664 ◽  
Author(s):  
Donald D. Dirks ◽  
Richard H. Wilson

Differences in speech intelligibility and identification between binaural, monaural near ear, and monaural far ear conditions were studied in the sound field. Scores from listeners with normal hearing and with sensorineural losses were evaluated in sound field conditions (unaided) and under conditions of hearing aid amplification (aided). For both conditions, listeners with sensorineural hearing loss obtained a binaural advantage similar to that found for normal listeners. The binaural advantage could be demonstrated only when the primary and/or competing signal sources were located at an azimuth which resulted in interaural time differences for at least one of the signals. When the signals arrived simultaneously at the ears from the same loudspeaker, no binaural advantage was obtained. Differences in intelligibility and identification scores between monaural near ear and far ear conditions (6.0 dB) were almost twice as large as those found between binaural listening and monaural near ear listening (3.3 dB).


Perception ◽  
10.1068/p3293 ◽  
2002 ◽  
Vol 31 (7) ◽  
pp. 875-885 ◽  
Author(s):  
Dennis P Phillips ◽  
Susan E Hall ◽  
Susan E Boehnke ◽  
Leanna E D Rutherford

Auditory saltation is a misperception of the spatial location of repetitive, transient stimuli. It arises when clicks at one location are followed in perfect temporal cadence by identical clicks at a second location. This report describes two psychophysical experiments designed to examine the sensitivity of auditory saltation to different stimulus cues for auditory spatial perception. Experiment 1 was a dichotic study in which six different six-click train stimuli were used to generate the saltation effect. Clicks lateralised by using interaural time differences and clicks lateralised by using interaural level differences produced equivalent saltation effects, confirming an earlier finding. Switching the stimulus cue from an interaural time difference to an interaural level difference (or the reverse) mid-train was inconsequential to the saltation illusion. Experiment 2 was a free-field study in which subjects rated the illusory motion generated by clicks emitted from two sound sources symmetrically disposed around the interaural axis, i.e., on the same cone of confusion in the auditory hemifield opposite one ear. Stimuli in such positions produce spatial location judgments that are based more heavily on monaural spectral information than on binaural computations. The free-field stimuli produced robust saltation. The data from both experiments are consistent with the view that auditory saltation can emerge from spatial processing, irrespective of the stimulus cue information used to determine click laterality or location.


1973 ◽  
Vol 16 (2) ◽  
pp. 267-270 ◽  
Author(s):  
John H. Mills ◽  
Seija A. Talo ◽  
Gloria S. Gordon

Groups of monaural chinchillas trained in behavioral audiometry were exposed in a diffuse sound field to an octave-band noise centered at 4.0 kHz. The growth of temporary threshold shift (TTS) at 5.7 kHz from zero to an asymptote (TTS∞) required about 24 hours, and the growth of TTS at 5.7 kHz from an asymptote to a higher asymptote, about 12–24 hours. TTS∞ can be described by the equation TTS∞ = 1.6(SPL − A), where A = 47. These results are consistent with those previously reported in this journal by Carder and Miller and by Mills and Talo. Whereas the decay of TTS∞ to zero required about three days, the decay of TTS∞ to a lower TTS∞ required about three to seven days. The decay of TTS∞ in noise, therefore, appears to require slightly more time than the decay of TTS∞ in the quiet. However, for a given level of noise, the magnitude of TTS∞ is the same regardless of whether the TTS asymptote is approached from zero, from a lower asymptote, or from a higher asymptote.
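The asymptotic-TTS equation above is a simple linear relation and can be evaluated directly. A minimal sketch (the function name is hypothetical; the constants are taken from the abstract):

```python
def tts_asymptote_db(spl_db: float, a_db: float = 47.0) -> float:
    """Asymptotic temporary threshold shift from the abstract's relation:
    TTS_inf = 1.6 * (SPL - A), with A = 47 dB for this noise band."""
    return 1.6 * (spl_db - a_db)

print(tts_asymptote_db(80.0))  # ≈ 52.8 dB for an 80 dB SPL octave-band noise
```

Note that the relation predicts zero asymptotic shift at SPL = A = 47 dB, consistent with A acting as an effective threshold level for producing TTS.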

