Spatial Stimulus Cue Information Supplying Auditory Saltation

Perception ◽  
10.1068/p3293 ◽  
2002 ◽  
Vol 31 (7) ◽  
pp. 875-885 ◽  
Author(s):  
Dennis P Phillips ◽  
Susan E Hall ◽  
Susan E Boehnke ◽  
Leanna E D Rutherford

Auditory saltation is a misperception of the spatial location of repetitive, transient stimuli. It arises when clicks at one location are followed in perfect temporal cadence by identical clicks at a second location. This report describes two psychophysical experiments designed to examine the sensitivity of auditory saltation to different stimulus cues for auditory spatial perception. Experiment 1 was a dichotic study in which six different six-click train stimuli were used to generate the saltation effect. Clicks lateralised by using interaural time differences and clicks lateralised by using interaural level differences produced equivalent saltation effects, confirming an earlier finding. Switching the stimulus cue from an interaural time difference to an interaural level difference (or the reverse) in mid train was inconsequential to the saltation illusion. Experiment 2 was a free-field study in which subjects rated the illusory motion generated by clicks emitted from two sound sources symmetrically disposed around the interaural axis, ie on the same cone of confusion in the auditory hemifield opposite one ear. Stimuli in such positions produce spatial location judgments that are based more heavily on monaural spectral information than on binaural computations. The free-field stimuli produced robust saltation. The data from both experiments are consistent with the view that auditory saltation can emerge from spatial processing, irrespective of the stimulus cue information used to determine click laterality or location.
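The dichotic stimuli described above can be sketched in code. The sketch below generates a stereo six-click train in which each click is lateralised by either an ITD or an ILD, so a mid-train cue switch is simply a change in the per-click cue list; all parameters (sample rate, click duration, inter-onset interval, ITD and ILD magnitudes) are illustrative, not taken from the paper:

```python
import numpy as np

FS = 44_100  # sample rate in Hz; illustrative, not from the paper

def click(fs=FS, dur_ms=1.0):
    """A rectangular click lasting dur_ms milliseconds."""
    return np.ones(int(fs * dur_ms / 1000))

def dichotic_train(n_clicks=6, ioi_ms=50.0, itd_us=500.0, ild_db=10.0,
                   cue=("itd",) * 6, fs=FS):
    """Stereo click train; each click is lateralised by ITD or ILD.

    cue[i] selects the lateralisation cue for click i, so a mid-train
    cue switch is e.g. cue=("itd",)*3 + ("ild",)*3.
    """
    ioi = int(fs * ioi_ms / 1000)        # inter-onset interval in samples
    itd = int(fs * itd_us / 1e6)         # ITD in samples
    gain = 10 ** (-ild_db / 20)          # linear attenuation of the far ear
    c = click(fs)
    n = n_clicks * ioi + itd + len(c)
    left, right = np.zeros(n), np.zeros(n)
    for i, kind in enumerate(cue[:n_clicks]):
        t = i * ioi
        if kind == "itd":                # delay the far (right) ear
            left[t:t + len(c)] += c
            right[t + itd:t + itd + len(c)] += c
        else:                            # attenuate the far (right) ear
            left[t:t + len(c)] += c
            right[t:t + len(c)] += c * gain
    return left, right
```

With the default parameters each click is lateralised toward the left ear; a train mixing the two cues mid-stream differs only in its `cue` tuple, which mirrors the cue-switching manipulation of Experiment 1.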

1969 ◽  
Vol 12 (1) ◽  
pp. 5-38 ◽  
Author(s):  
Donald D. Dirks ◽  
Richard H. Wilson

A series of five experiments was conducted to investigate the effects of spatial separation of speakers on the intelligibility of spondaic and PB words in noise and the identification of synthetic sentences in noise and competing message. Conditions in which the spatial location of the speakers produced interaural time differences ranked highest in intelligibility. The rank order of the other conditions depended on the S/N ratio at the monaural near ear. Separations of only 10° between the speech and noise sources produced measurable changes in intelligibility. Binaural intelligibility scores were substantially enhanced over the monaural near-ear results in conditions where an interaural time difference was present. This enhancement was more pronounced for spondaic words and sentences than for PB words, a result related to the interaural time difference and to the frequency range of the critical information in the primary message. Although the initial experiments were facilitated by recording through an artificial head, almost identical results were obtained in the final experiment, in which subjects were tested in the sound field.


1992 ◽  
Vol 68 (6) ◽  
pp. 2063-2076 ◽  
Author(s):  
H. Wagner ◽  
T. Takahashi

1. We studied the sensitivity of auditory neurons in the barn owl's brain stem to the direction of apparent acoustic motion. Motion stimuli were generated with an array of seven free-field speakers (Fig. 2). Motion-direction sensitivity was determined by comparing the number of spikes evoked by counterclockwise (CCW) motion with the number of spikes evoked by clockwise (CW) motion. A directionality index (DI) was defined to quantify the measurements. The statistical significance of the directional bias was determined by a chi-squared (χ²) test that used the responses to stationary sounds as the null hypothesis. 2. During the search for acoustic neurons, dichotic stimuli were presented via earphones, and the sensitivity of the units to interaural time difference (ITD), interaural level difference (ILD), and frequency was measured. After a unit had been isolated, its response to moving and stationary free-field stimuli was recorded. Most of the neurons that responded to dichotic stimulation also responded to free-field stimulation. At 61 of the 211 recording sites, the response was motion-direction sensitive. 3. The spontaneous activity of all neurons was low, so that some 95% of the recorded activity was due to stimulus-evoked excitation. 4. Neurons sensitive to the direction of motion were found in many nuclei of the auditory pathway, such as the nuclei of the lateral lemniscus, the subnuclei of the inferior colliculus (IC), and the optic tectum (OT) (Figs. 3 and 5-8, Table 1). 5. In 61% of the motion-direction-sensitive neurons, the response to motion in the preferred direction was equal to the response to stationary sounds, whereas in 75% of the neurons, the response to motion in the null direction was lower than the response to stationary sounds (Table 2, Fig. 6). This observation suggests null-direction inhibition as one important factor in generating motion-direction sensitivity. 6. Neurons with high motion-direction sensitivity usually responded phasically, whereas tonically active neurons exhibited low motion-direction sensitivity (Fig. 9). 7. Velocity tuning was broad (Fig. 7). A shallow peak appeared around 310 degrees/s within the range tested (125-1,200 degrees/s, 33 cells). 8. A silent gap between the bursts from successive speakers caused a decrease in motion-direction sensitivity. This decrease was linear with gap duration and depended on the apparent velocity (Figs. 10-13). (ABSTRACT TRUNCATED AT 400 WORDS)
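A directionality index of the kind described above is commonly computed as a contrast between the two motion directions, DI = (CCW − CW)/(CCW + CW); the paper's exact definition may differ, so treat this as a sketch:

```python
def directionality_index(ccw_spikes, cw_spikes):
    """Contrast-style directionality index in [-1, 1].

    One common form (the paper's exact definition may differ):
    DI = (CCW - CW) / (CCW + CW), where CCW and CW are total spike
    counts for the two motion directions. Positive values favour CCW.
    """
    ccw, cw = float(sum(ccw_spikes)), float(sum(cw_spikes))
    if ccw + cw == 0:
        return 0.0          # no spikes in either direction: no bias
    return (ccw - cw) / (ccw + cw)
```

A unit firing 22 spikes to CCW sweeps and 6 to CW sweeps would score DI ≈ 0.57, while a direction-insensitive unit scores 0.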


2019 ◽  
Vol 122 (2) ◽  
pp. 737-748 ◽  
Author(s):  
Erol J. Ozmeral ◽  
David A. Eddins ◽  
Ann Clock Eddins

Cortical encoding of auditory space relies on two major peripheral cues, interaural time difference (ITD) and interaural level difference (ILD) of the sounds arriving at a listener’s ears. In much of the precortical auditory pathway, ITD and ILD cues are processed independently, and it is assumed that cue integration is a higher order process. However, there remains debate on how ITDs and ILDs are encoded in the cortex and whether they share a common mechanism. The present study used electroencephalography (EEG) to measure evoked cortical potentials from narrowband noise stimuli with imposed binaural cue changes. Previous studies have similarly tested ITD shifts to demonstrate that neural populations broadly favor one spatial hemifield over the other, which is consistent with an opponent-channel model that computes the relative activity between broadly tuned neural populations. However, it is still a matter of debate whether the same coding scheme applies to ILDs and, if so, whether processing the two binaural cues is distributed across similar regions of the cortex. The results indicate that ITD and ILD cues have similar neural signatures with respect to the monotonic responses to shift magnitude; however, the direction of the shift did not elicit responses equally across cues. Specifically, ITD shifts evoked greater responses for outward than inward shifts, independently of the spatial hemifield of the shift, whereas ILD-shift responses were dependent on the hemifield in which the shift occurred. Active cortical structures showed only minor overlap between responses to cues, suggesting the two are not represented by the same pathway. NEW & NOTEWORTHY Interaural time differences (ITDs) and interaural level differences (ILDs) are critical to locating auditory sources in the horizontal plane. 
The higher order perceptual feature of auditory space is thought to be encoded together by these binaural differences, yet evidence of their integration in cortex remains elusive. Although present results show some common effects between the two cues, key differences were observed that are not consistent with an ITD-like opponent-channel process for ILD encoding.
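The opponent-channel model referred to above can be sketched minimally: two broadly tuned hemifield channels (modelled here as sigmoids over azimuth with an assumed slope) whose relative activity yields a monotonic code for lateral position. The channel shapes and parameters are illustrative, not fitted to the EEG data:

```python
import math

def channel_rate(azimuth_deg, preferred_side, slope=0.05):
    """Broadly tuned hemifield channel: a sigmoid over azimuth (a sketch).

    Positive azimuth = right of midline. The slope value is an assumption.
    """
    s = 1.0 if preferred_side == "right" else -1.0
    return 1.0 / (1.0 + math.exp(-slope * s * azimuth_deg))

def opponent_readout(azimuth_deg):
    """Relative activity of the two channels: zero at midline and
    monotonically increasing toward the right hemifield."""
    return (channel_rate(azimuth_deg, "right")
            - channel_rate(azimuth_deg, "left"))
```

Because each channel is broadly tuned rather than sharply place-coded, a shift in source azimuth changes the *difference* between the two population rates, which is the quantity this model proposes the cortex reads out.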


1999 ◽  
Vol 82 (1) ◽  
pp. 164-175 ◽  
Author(s):  
Kevin A. Davis ◽  
Ramnarayan Ramachandran ◽  
Bradford J. May

Single units in the central nucleus of the inferior colliculus (ICC) of unanesthetized decerebrate cats can be grouped into three distinct types (V, I, and O) according to the patterns of excitation and inhibition revealed in contralateral frequency response maps. This study extends the description of these response types by assessing their ipsilateral and binaural response map properties. Here the nature of ipsilateral inputs is evaluated directly using frequency response maps and compared with results obtained from methods that rely on sensitivity to interaural level differences (ILDs). In general, there is a one-to-one correspondence between observed ipsilateral input characteristics and those inferred from ILD manipulations. Type V units receive ipsilateral excitation and show binaural facilitation (EE properties); type I and type O units receive ipsilateral inhibition and show binaural excitatory/inhibitory (EI) interactions. Analyses of binaural frequency response maps show that these ILD effects extend over the entire receptive field of ICC units. Thus the range of frequencies that elicits excitation from type V units is expanded with increasing levels of ipsilateral stimulation, whereas the excitatory bandwidth of type I and O units decreases under the same binaural conditions. For the majority of ICC units, application of bicuculline, an antagonist for GABA-A-mediated inhibition, does not alter the basic effects of binaural stimulation; rather, it primarily increases spontaneous and maximum discharge rates. These results support our previous interpretations of the putative dominant inputs to ICC response types and have important implications for midbrain processing of competing free-field sounds that reach the listener with different directional signatures.


eLife ◽  
2019 ◽  
Vol 8 ◽  
Author(s):  
Antje Ihlefeld ◽  
Nima Alamatsaz ◽  
Robert M Shapley

Human sound localization is an important computation performed by the brain. Models of sound localization commonly assume that sound lateralization from interaural time differences is level invariant. Here we observe that two prevalent theories of sound localization make opposing predictions. The labelled-line model encodes location through tuned representations of spatial location and predicts that perceived direction is level invariant. In contrast, the hemispheric-difference model encodes location through spike rate and predicts that perceived direction becomes medially biased at low sound levels. Behavioral experiments find that softer sounds are perceived closer to the midline than louder sounds, favoring rate-coding models of human sound localization. Analogously, visual depth perception, which is based on interocular disparity, depends on the contrast of the target. The similar results in hearing and vision suggest that the brain may use a canonical computation of location: encoding perceived location through population spike rate relative to baseline.
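The hemispheric-difference prediction, that perceived direction collapses toward the midline at low sound levels, can be reproduced in a toy rate-code model. Everything below (the level-independent baseline rate, the linear rate-vs-ITD growth, the level scaling) is an assumption for illustration, not the authors' fitted model:

```python
def hemi_rates(itd_us, level_gain):
    """Toy hemispheric rate code: each hemisphere's rate grows with the
    ITD favouring its side and scales with overall sound level.
    Parameters are illustrative, not fitted to data."""
    base = 1.0                                   # level-independent baseline
    right = base + level_gain * max(0.0, itd_us) / 100.0
    left = base + level_gain * max(0.0, -itd_us) / 100.0
    return left, right

def perceived_laterality(itd_us, level_gain):
    """Normalised hemispheric difference in [-1, 1]. Because the baseline
    does not scale with level, the readout shrinks toward midline (0)
    as level_gain falls: the predicted medial bias for soft sounds."""
    left, right = hemi_rates(itd_us, level_gain)
    return (right - left) / (right + left)
```

For a fixed 500 µs ITD, lowering `level_gain` moves the readout toward zero, which is the medial bias the behavioral data favoured; a labelled-line (place) code would instead report the same direction at every level.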


2012 ◽  
Vol 25 (0) ◽  
pp. 113
Author(s):  
Flavia Cardini ◽  
Patrick Haggard ◽  
Elisabetta Ladavas

In the Visual Enhancement of Touch (VET), simply viewing one’s hand improves tactile spatial perception, even though vision is non-informative. While previous studies had suggested that looking at another person’s hand could also enhance tactile perception, no previous study had systematically investigated the differences between viewing one’s own body and someone else’s. The aim of this study was to shed light on the relation between visuo–tactile interactions and the self-other distinction. In Experiment 1 we manipulated the spatial location at which a hand was seen. Viewing one’s hand enhanced tactile acuity relative to viewing a neutral object, but only when the image of the hand was spatially aligned with the actual location of the participant’s unseen hand. The VET effect did not occur when one’s hand was viewed at a location other than that experienced proprioceptively. In contrast, viewing another’s hand enhanced tactile perception irrespective of spatial location. In Experiment 2, we used a multisensory stimulation technique, known as Visual Remapping of Touch, to reduce the perceived spatial misalignment of vision and touch. When participants saw an image of their own hand being touched at the same time as the tactile stimulation, the reduction in perceived misalignment caused the VET effect to return, even though the spatial location of the images was not consistent with the actual body posture. Our results suggest that multisensory modulation of touch depends on a representation of one’s body that is fundamentally spatial in nature. In contrast, the representation of others is free from this spatial constraint.


1999 ◽  
Vol 09 (05) ◽  
pp. 441-446 ◽  
Author(s):  
André van Schaik ◽  
Craig Jin ◽  
Simon Carlile

In this work we study the influence and relationship of five different acoustical cues in the human sound localisation process. These cues are: interaural time delay, interaural level difference, interaural spectrum, monaural spectrum, and band-edge spectral contrast. Of particular interest was the synthesis and integration of the different cues to produce a coherent and robust percept of spatial location. The relative weighting and role of the different cues were investigated using band-pass filtered white noise with frequency ranges (in kHz) of 0.3–5, 0.3–7, 0.3–10, 0.3–14, 3–8, 4–9, and 7–14. These stimuli provided varying amounts of spectral information and physiologically detectable temporal information, thus probing the localisation process under varying sound conditions. Three subjects with normal hearing in both ears each performed five trials of 76 test positions for each of these stimuli in an anechoic room. All subjects showed systematic mislocalisation of most of these stimuli. The locations to which stimuli were mislocalised varied among subjects, but in a manner systematically related to the five acoustical cues. These cues were correlated with each subject's localisation responses on an individual basis, with the results suggesting that the internal weighting of the spectral cues may vary with the sound condition.


2005 ◽  
Vol 93 (6) ◽  
pp. 3390-3400 ◽  
Author(s):  
W. R. D’Angelo ◽  
S. J. Sterbing ◽  
E.-M. Ostapoff ◽  
S. Kuwada

A major cue for the localization of sound in space is the interaural time difference (ITD). We examined the role of inhibition in the shaping of ITD responses in the inferior colliculus (IC) by iontophoretically ejecting γ-aminobutyric acid (GABA) antagonists and GABA itself using a multibarrel pipette. The GABA antagonists block inhibition, whereas the applied GABA provides a constant level of inhibition. The effects on ITD responses were evaluated before, during and after the application of the drugs. If GABA-mediated inhibition is involved in shaping ITD tuning in IC neurons, then applying additional amounts of this inhibitory transmitter should alter ITD tuning. Indeed, for almost all neurons tested, applying GABA reduced the firing rate and consequently sharpened ITD tuning. Conversely, blocking GABA-mediated inhibition increased the activity of IC neurons, often reduced the signal-to-noise ratio and often broadened ITD tuning. Blocking GABA could also alter the shape of the ITD function and shift its peak suggesting that the role of inhibition is multifaceted. These effects indicate that GABAergic inhibition at the level of the IC is important for ITD coding.
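As background to the ITD cue studied above, the classic spherical-head (Woodworth) approximation relates source azimuth to ITD; the head radius and speed of sound below are typical textbook values, not taken from the paper:

```python
import math

def woodworth_itd(azimuth_deg, head_radius_m=0.0875, c=343.0):
    """Spherical-head (Woodworth) ITD approximation:

        ITD = (r / c) * (theta + sin(theta)),  theta = azimuth in radians.

    Returns the ITD in seconds. head_radius_m and c are typical values
    (8.75 cm head radius, 343 m/s speed of sound), not from the paper.
    """
    theta = math.radians(azimuth_deg)
    return head_radius_m / c * (theta + math.sin(theta))
```

For a source at 90° azimuth this yields roughly 650 µs, about the largest ITD an adult human head produces, which is the scale of the cue whose neural coding the GABA experiments probe.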

