Short-Latency, Goal-Directed Movements of the Pinnae to Sounds That Produce Auditory Spatial Illusions

2010 · Vol 103 (1) · pp. 446-457
Author(s): Daniel J. Tollin, Elizabeth M. McClaine, Tom C. T. Yin

The precedence effect (PE) is an auditory spatial illusion whereby two identical sounds presented from two separate locations with a delay between them are perceived as a single fused sound source whose position depends on the value of the delay. By training cats with operant conditioning to look at sound sources, we have previously shown that cats experience the PE much as humans do. For delays less than ±400 μs, cats exhibit summing localization, the perception of a “phantom” sound located between the sources. Consistent with localization dominance, for delays from 400 μs to ∼10 ms, cats orient toward the leading source location only, with little influence of the lagging source. Finally, echo threshold is reached for delays >10 ms, where cats first begin to orient to the lagging source. Some have hypothesized that the neural mechanisms producing facets of the PE, such as localization dominance and echo threshold, likely arise at cortical levels. To test this hypothesis, we measured both pinna positions, which were not under any behavioral constraint, and eye position in cats, and found that the pinna orientations to stimuli producing each of the three phases of the PE illusion were similar to the gaze responses. Although both eye and pinna movements behaved in a manner that reflected the PE, the strikingly short latencies of the pinna movements (∼30 ms) suggest a subcortical basis for the PE and make direct cortical involvement unlikely.
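
The three delay regimes described above can be summarized compactly. The following minimal sketch is illustrative only: the 400 μs and 10 ms boundaries are the approximate values reported here for cats, and the transitions between regimes are gradual in practice.

```python
def precedence_effect_phase(isd_us):
    """Map a lead-lag interstimulus delay (microseconds) onto the three
    perceptual regimes of the precedence effect (approximate cat values)."""
    isd = abs(isd_us)
    if isd < 400:
        return "summing localization"    # fused phantom image between the sources
    elif isd <= 10_000:
        return "localization dominance"  # fused image at the leading source
    else:
        return "breakdown of fusion"     # above echo threshold; lag becomes audible

for delay_us in (100, 2_000, 25_000):
    print(delay_us, "->", precedence_effect_phase(delay_us))
```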

2004 · Vol 92 (6) · pp. 3286-3297
Author(s): Daniel J. Tollin, Luis C. Populin, Tom C. T. Yin

Several auditory spatial illusions, collectively called the precedence effect (PE), occur when transient sounds are presented from two different spatial locations but separated in time by an interstimulus delay (ISD). For ISDs in the range of localization dominance (<10 ms), a single fused sound is typically located near the leading source location only, as if the location of the lagging source were suppressed. For longer ISDs, both the leading and lagging sources can be heard and localized, and the shortest ISD at which this occurs is called the echo threshold. Previous physiological studies of the extracellular responses of single neurons in the inferior colliculus (IC) of anesthetized cats and unanesthetized rabbits, using sounds known to elicit the PE, have shown correlates of these phenomena, although the physiologically measured echo thresholds differed. Here we recorded in the IC of awake, behaving cats using stimuli that we have shown to evoke behavioral responses consistent with the precedence effect. For small ISDs, responses to the lag were reduced or eliminated, consistent with psychophysical data showing that sound localization is based on the leading source. At longer ISDs, the responses to the lagging source recovered at ISDs comparable to psychophysically measured echo thresholds. Thus it appears that anesthesia, and not species differences, accounts for the discrepancies among the earlier studies.


2013 · Vol 280 (1769) · pp. 20131428
Author(s): Ludwig Wallmeier, Nikodemus Geßele, Lutz Wiegrebe

Several studies have shown that blind humans can gather spatial information through echolocation. However, when localizing sound sources, the precedence effect suppresses the spatial information carried by echoes and thereby conflicts with effective echolocation. This study investigates the interaction of echolocation and echo suppression in terms of discrimination suppression in virtual acoustic space. In the ‘Listening’ experiment, sighted subjects discriminated between positions of a single sound source, or of the leading or the lagging of two sources. In the ‘Echolocation’ experiment, the sources were replaced by reflectors. Here, the same subjects evaluated echoes generated in real time from self-produced vocalizations and thereby discriminated between positions of a single reflector, or of the leading or the lagging of two reflectors. Two key results were observed. First, sighted subjects can learn to discriminate positions of reflective surfaces echo-acoustically with accuracy comparable to sound source discrimination. Second, in the Listening experiment, the presence of the leading source affected discrimination of lagging sources much more than vice versa. In the Echolocation experiment, however, the presence of both the lead and the lag strongly affected discrimination. These data show that the classically described asymmetry in the perception of leading and lagging sounds is strongly diminished in an echolocation task. Additional control experiments showed that the effect is owing both to the direct sound of the vocalization, which precedes the echoes, and to the fact that the subjects actively vocalize in the echolocation task.
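
The basic idea behind such echo-acoustic stimuli can be pictured with a toy model: a reflector produces a delayed, attenuated copy of the self-produced vocalization, added to the direct sound that precedes it. The sketch below is illustrative only; the distance, reflectance, and sampling rate are assumed values, and the direction-dependent filtering used in real virtual-acoustic-space rendering is omitted.

```python
import numpy as np

def add_reflector_echo(vocalization, fs, distance_m, reflectance=0.5, c=343.0):
    """Toy single-reflector model: the echo is a delayed, attenuated copy of
    the vocalization (round-trip travel time), mixed with the direct sound."""
    delay_n = int(round(2.0 * distance_m / c * fs))        # round trip to the reflector
    out = np.zeros(len(vocalization) + delay_n)
    out[:len(vocalization)] += vocalization                # direct sound (the "lead")
    out[delay_n:delay_n + len(vocalization)] += reflectance * vocalization  # echo (the "lag")
    return out

fs = 48_000
vocalization = np.random.randn(int(0.005 * fs))            # 5 ms stand-in for a tongue click
rendered = add_reflector_echo(vocalization, fs, distance_m=1.7)
```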


Author(s): Huakang Li, Jie Huang, Minyi Guo, Qunfei Zhao

Mobile robots that communicate with people would benefit from being able to detect sound sources to help localize interesting events in real-life settings. We propose using a spherical robot with four microphones to determine the spatial locations of multiple sound sources in ordinary rooms. Arrival-time disparities derived from phase-difference histograms are used to calculate the time differences between microphones. A precedence-effect model suppresses the influence of echoes in reverberant environments. To integrate the spatial cues from the different microphones, we map the correlations between microphone pairs onto a 3D map corresponding to the azimuth and elevation of the sound-source direction. Experimental results indicate that the proposed system, with the Echo Avoidance (EA) model, resolves the distribution of sound sources clearly and precisely, even for concurrent sources in reverberant environments.
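
The time-difference step can be approximated with a standard pairwise estimator. The sketch below uses generalized cross-correlation with PHAT weighting as a stand-in for the paper's phase-difference histograms; it is not the authors' implementation, and it omits the precedence-effect (echo-avoidance) stage and the multi-pair integration onto the azimuth-elevation map.

```python
import numpy as np

def gcc_phat_tdoa(x, y, fs, max_tau):
    """Estimate the time difference of arrival between two microphone signals
    using generalized cross-correlation with PHAT weighting. Returns the TDOA
    in seconds (positive when x leads y)."""
    n = len(x) + len(y)
    R = np.fft.rfft(x, n=n) * np.conj(np.fft.rfft(y, n=n))
    R /= np.abs(R) + 1e-12                         # PHAT: keep phase, discard magnitude
    cc = np.fft.irfft(R, n=n)
    max_shift = int(fs * max_tau)
    cc = np.concatenate((cc[-max_shift:], cc[:max_shift + 1]))
    return (np.argmax(np.abs(cc)) - max_shift) / fs

# e.g. for two microphones 0.1 m apart at fs = 16 kHz:
#   tau = gcc_phat_tdoa(x, y, fs=16_000, max_tau=0.1 / 343.0)
# Far-field azimuth from one pair: theta = arcsin(c * tau / d), with spacing d.
```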


2009 · Vol 102 (2) · pp. 724-734
Author(s): Micheal L. Dent, Daniel J. Tollin, Tom C. T. Yin

Psychophysical experiments on the precedence effect (PE) in cats have shown that they localize pairs of auditory stimuli presented from different locations in space based on the spatial position of the stimuli and the interstimulus delay (ISD) between them, in a manner similar to humans. Cats exhibit localization dominance for pairs of transient stimuli with |ISDs| from ∼0.4 to 10 ms, summing localization for |ISDs| < 0.4 ms, and breakdown of fusion for |ISDs| > 10 ms, which is the approximate echo threshold. The neural correlates of the PE have been described in both anesthetized and unanesthetized animals at many levels, from auditory nerve to cortex. Single-unit recordings from the inferior colliculus (IC) and auditory cortex of cats demonstrate that neurons respond to both lead and lag sounds at ISDs above behavioral echo thresholds, but the response to the lag is reduced at shorter ISDs, consistent with localization dominance. Here the influence of the relative locations of the leading and lagging sources on the PE was measured behaviorally in a psychophysical task and physiologically in the IC of awake, behaving cats. At all configurations of lead-lag stimulus locations, the cats behaviorally exhibited summing localization, localization dominance, and breakdown of fusion. Recordings from the IC showed neural responses paralleling the behavioral measurements. Both the behavioral and the physiological results suggest systematically shorter echo thresholds when the stimuli are farther apart in space.


2003 · Vol 90 (4) · pp. 2149-2162
Author(s): Daniel J. Tollin, Tom C. T. Yin

The precedence effect (PE) describes several spatial perceptual phenomena that occur when similar sounds are presented from two different locations and separated by a delay. The mechanisms that produce the effect are thought to underlie the ability to localize sounds in reverberant environments. Although the physiological bases for the PE have been studied, little is known about how these sounds are localized by species other than humans. Here we used the search coil technique to measure the eye positions of cats trained to saccade to the apparent locations of sounds. To study the PE, brief broadband stimuli were presented from two locations with a delay between their onsets, the delayed sound being meant to simulate a single reflection. Although the cats accurately localized single sources, the apparent locations of the paired sources depended on the delay. First, the cats exhibited summing localization, the perception of a “phantom” sound located between the sources, for delays < ±400 μs for sources positioned in azimuth along the horizontal plane, but not for sources positioned in elevation along the sagittal plane. Second, consistent with localization dominance, for delays from 400 μs to about 10 ms, the cats oriented toward the leading source location only, with little influence of the lagging source, for both horizontally and vertically placed sources. Finally, the echo threshold was reached for delays >10 ms, where the cats first began to orient to the lagging source on some trials. These data reveal that cats experience the PE phenomena much as humans do.
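
The lead-lag paradigm itself is simple to prototype. Below is a minimal sketch, not the authors' stimulus code, that builds the two loudspeaker channels for one trial: an identical brief broadband burst is routed to a leading and a lagging channel, with the lag delayed by the chosen interstimulus delay to mimic a single reflection. Burst length, trial length, and sampling rate are illustrative assumptions.

```python
import numpy as np

def lead_lag_pair(isd_ms, fs=48_000, burst_ms=0.1, trial_ms=50.0, rng=None):
    """Return (lead_channel, lag_channel): identical broadband bursts, with the
    lag-channel copy delayed by the interstimulus delay. Playing the channels
    from loudspeakers at different azimuths yields the PE stimulus."""
    rng = np.random.default_rng() if rng is None else rng
    n_total = int(fs * trial_ms / 1000)
    burst = rng.standard_normal(int(fs * burst_ms / 1000))   # brief noise burst
    delay = int(round(fs * isd_ms / 1000))
    lead = np.zeros(n_total)
    lag = np.zeros(n_total)
    lead[:len(burst)] = burst
    lag[delay:delay + len(burst)] = burst                     # delayed identical copy
    return lead, lag

lead, lag = lead_lag_pair(isd_ms=2.0)   # within the localization-dominance range
```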


1999 · Vol 58 (3) · pp. 170-179
Author(s): Barbara S. Muller, Pierre Bovet

Twelve blindfolded subjects localized two different pure tones played in random order from eight sound sources in the horizontal plane. Subjects either did or did not have access to the information supplied by their pinnae (external ears) and by their head movements. We found that the pinnae, as well as head movements, had a marked influence on auditory localization performance with this type of sound. The effects of pinnae and head movements appeared to be additive; the absence of either factor produced the same loss of localization accuracy and much the same error pattern. Head-movement analysis showed that subjects turned their faces towards the emitting sound source, except for sources directly in front or directly behind, which were identified by turning the head to both sides. Head-movement amplitude increased smoothly as the sound source moved from the anterior to the posterior quadrant.


Sensors · 2021 · Vol 21 (2) · pp. 532
Author(s): Henglin Pu, Chao Cai, Menglan Hu, Tianping Deng, Rong Zheng, ...

Multiple blind sound source localization is a key technology for a myriad of applications such as robotic navigation and indoor localization. However, existing solutions can only locate a few sound sources simultaneously, owing to the limitation imposed by the number of microphones in an array. To this end, this paper proposes a novel multiple blind sound source localization algorithm using Source seParation and BeamForming (SPBF). Our algorithm overcomes the limitations of existing solutions and can locate more blind sources than there are microphones in the array. Specifically, we propose a novel microphone layout that enables salient multiple-source separation while still preserving the sources' arrival-time information. We then perform source localization via beamforming on each demixed source. This design minimizes mutual interference between different sound sources, thereby enabling finer angle of arrival (AoA) estimation. To further enhance localization performance, we design a new spectral weighting function that increases the signal-to-noise ratio, allowing a relatively narrow beam and thus finer AoA estimation. Simulation experiments under typical indoor conditions demonstrate a maximum AoA estimation error of only 4° even with up to 14 sources.
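
As a rough illustration of the beamforming stage, the sketch below scans candidate azimuths with a frequency-domain delay-and-sum beamformer over a linear array and returns the angle of maximum output power. This is a generic sketch, not the SPBF pipeline, which first demixes the sources and applies the proposed spectral weighting; the array geometry and parameters here are assumptions.

```python
import numpy as np

def delay_and_sum_aoa(mic_signals, mic_positions, fs, c=343.0,
                      angles_deg=np.arange(-90, 91)):
    """Return the azimuth (degrees) whose delay-and-sum beam output has maximum
    power. mic_signals: (n_mics, n_samples); mic_positions: microphone
    x-coordinates in metres along a linear array."""
    mic_positions = np.asarray(mic_positions, dtype=float)
    n = mic_signals.shape[1]
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)                   # (n_freqs,)
    spectra = np.fft.rfft(mic_signals, axis=1)               # (n_mics, n_freqs)
    best_angle, best_power = None, -np.inf
    for az in angles_deg:
        delays = mic_positions * np.sin(np.deg2rad(az)) / c  # per-mic delay (s)
        steering = np.exp(2j * np.pi * freqs[None, :] * delays[:, None])
        power = np.sum(np.abs(np.sum(spectra * steering, axis=0)) ** 2)
        if power > best_power:
            best_angle, best_power = az, power
    return best_angle
```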


Author(s): Simone Spagnol, Michele Geronazzo, Davide Rocchesso, Federico Avanzini

Purpose – The purpose of this paper is to present a system for customized binaural audio delivery based on the extraction of relevant features from a 2-D representation of the listener's pinna.
Design/methodology/approach – The most significant pinna contours are extracted by means of multi-flash imaging, and they provide values for the parameters of a structural head-related transfer function (HRTF) model. The HRTF model spatializes a given sound file according to the listener's head orientation, tracked by sensor-equipped headphones, relative to the virtual sound source.
Findings – A preliminary localization test shows that the model is able to statically render the elevation of a virtual sound source better than non-individual HRTFs.
Research limitations/implications – The results encourage a deeper analysis of the psychoacoustic impact of the individualized HRTF model on the perceived elevation of virtual sound sources.
Practical implications – The model has low complexity and is suitable for implementation on mobile devices. The resulting hardware/software package will hopefully give any user easy, low-tech access to custom spatial audio.
Originality/value – The authors show that custom binaural audio can be successfully delivered without the need for cumbersome subjective measurements.
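
The rendering step can be pictured with a generic binaural sketch: convolve the mono source with the left and right head-related impulse responses for the source direction relative to the tracked head orientation. The hrir_bank lookup, yaw-only tracking, and the placeholder filters are illustrative assumptions; the paper instead derives its filters from a structural model parameterized by the extracted pinna contours.

```python
import numpy as np

def render_binaural(mono, head_yaw_deg, source_az_deg, hrir_bank):
    """Spatialize a mono signal by convolving it with the left/right head-related
    impulse responses for the source azimuth relative to the tracked head yaw.
    hrir_bank maps an integer azimuth in degrees to an (hrir_left, hrir_right) pair."""
    relative_az = int(round(source_az_deg - head_yaw_deg)) % 360
    hrir_left, hrir_right = hrir_bank[relative_az]
    return np.stack([np.convolve(mono, hrir_left),
                     np.convolve(mono, hrir_right)])          # (2, N) binaural signal

# Placeholder filter bank (random IRs); a measured or model-derived set would be used.
hrir_bank = {az: (np.random.randn(128) * 0.01, np.random.randn(128) * 0.01)
             for az in range(360)}
binaural = render_binaural(np.random.randn(4800), head_yaw_deg=20.0,
                           source_az_deg=45.0, hrir_bank=hrir_bank)
```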


2021 · Vol 263 (6) · pp. 894-906
Author(s): Yannik Weber, Matthias Behrendt, Tobias Gohlke, Albert Albers

Preliminary work by the IPEK - Institute of Product Engineering at KIT has shown that the simulated pass-by measurement for exterior-noise homologation of vehicles has relevant optimization potential: the measurement can be carried out in smaller halls and with a smaller measurement setup than the standard requires, and thus with less construction cost and effort. A prerequisite for this, however, is the scaling of the entire setup. For the scaling in turn, the sound sources of the vehicle must be combined into a single point sound source, the acoustic centre. Previous approaches for conventional drives assume a static centre in the front part of the vehicle. For complex drive topologies, e.g. hybrid drives, and unsteady driving conditions, however, this assumption is no longer valid. Therefore, with the help of an acoustic camera, a method for localizing the dominant sound sources of the vehicle was developed, together with a software-based application for combining them into an acoustic centre. The method can take stationary, unsteady, and sudden events into account in the calculation of the acoustic centre, which shifts accordingly. Using substitute sound sources and two vehicles, the method and the measurement technology used were examined and verified for their applicability.
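
One simple way to picture the "combining" step is an energy-weighted centroid of the localized sources, recomputed per time frame so that the centre can move with unsteady or sudden events. The power weighting and the example positions and levels below are illustrative assumptions, not the IPEK implementation.

```python
import numpy as np

def acoustic_centre(positions, levels_db):
    """Combine localized sources into a single point source as the
    sound-power-weighted centroid of their positions. Recomputing this per
    time frame lets the centre move with unsteady or sudden events."""
    positions = np.asarray(positions, dtype=float)       # (n_sources, 3) coordinates in m
    powers = 10.0 ** (np.asarray(levels_db) / 10.0)      # dB -> linear power weights
    return (powers[:, None] * positions).sum(axis=0) / powers.sum()

# e.g. tyre, engine, and exhaust contributions (hypothetical values)
centre = acoustic_centre([[1.0, 0.6, 0.3], [2.5, 0.0, 0.5], [-1.8, 0.4, 0.3]],
                         [92.0, 88.0, 85.0])
```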

