Acuity of Sound Localisation: A Topography of Auditory Space. II. Pinna Cues Absent

Perception, 1984, Vol 13 (5), pp. 601-617
Author(s): Simon R Oldfield, Simon P A Parker

The acuity of azimuth and elevation discrimination was measured under conditions in which the localisation cues provided by the pinnae were removed. Four subjects localised a sound source (white noise through a speaker) which varied in position over a range of elevations (-40° to +40°) and azimuths (0° to 180°), at 10° intervals, on the left side of the head. Pinna cues were removed by the insertion of individually cast moulds into both pinnae; each mould had an access hole to the auditory canal. The absolute and algebraic azimuth and elevation errors were measured for all subjects at each position of the source. The variability of azimuth and elevation error was also computed. The performance of the subjects was compared to their performance under normal hearing conditions. Insertion of the pinna moulds was found to substantially increase elevation error and the number of front/back reversals, confirming the importance of the cues provided by the pinnae in these discriminations. However, the increase in elevation error did not result in a corresponding increase in azimuth error. These findings support the proposition that azimuth and elevation discrimination are coded independently.
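The three error measures used in these experiments are straightforward to compute; a minimal sketch in Python, with hypothetical target and judged azimuths standing in for the study's raw data (which are not reproduced here):

```python
import numpy as np

# Hypothetical target and judged azimuths (degrees); illustrative only,
# not data from the study.
target_azimuth = np.array([0.0, 30.0, 60.0, 90.0, 120.0])
judged_azimuth = np.array([5.0, 25.0, 70.0, 85.0, 135.0])

# Algebraic (signed) error preserves the direction of the mislocalisation;
# absolute error reflects overall acuity regardless of direction.
algebraic_error = judged_azimuth - target_azimuth
absolute_error = np.abs(algebraic_error)

# Variability of the error, here taken as the sample standard deviation.
variability = np.std(algebraic_error, ddof=1)
```

The same three measures apply unchanged to the elevation judgments.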

Perception, 1984, Vol 13 (5), pp. 581-600
Author(s): Simon R Oldfield, Simon P A Parker

Eight subjects were required to localise a sound source (white noise through a speaker) which varied in position on both sides of the head over a range of elevations (-40° to +40°) and azimuths (0° to 180°) at 10° intervals. The perceived position of the source was indicated by pointing a special gun. Depression of the trigger activated a photographic system which recorded two views of the subject, the sound source, and the gun. The absolute and algebraic azimuth and elevation errors were measured for all subjects at each position of the source. The variability of azimuth and elevation error was also computed. In a second experiment, four of the same subjects performed the same task but in this case visually located the sources. This experiment provided an estimate of the inherent motor error in the pointing task. No differences in localisation acuity between sides were found, but there were significant differences between the front and back regions. Azimuth and elevation error were well matched and low in the front. However, azimuth error increased in the regions behind the head, particularly for azimuth positions 120° to 160°. Larger increases were found for positions in the upper elevations of this region. Elevation error also increased in the upper elevations behind the head. A comparison of the auditory and visual data indicates that this pattern of error is not due to motor factors. The results are discussed in relation to the structural characteristics of the pinnae and the modifications that they impose on incoming sound energy.


2002, Vol 87 (4), pp. 1749-1762
Author(s): Shigeto Furukawa, John C. Middlebrooks

Previous studies have demonstrated that the spike patterns of cortical neurons vary systematically as a function of sound-source location such that the response of a single neuron can signal the location of a sound source throughout 360° of azimuth. The present study examined specific features of spike patterns that might transmit information related to sound-source location. Analysis was based on responses of well-isolated single units recorded from cortical area A2 in α-chloralose-anesthetized cats. Stimuli were 80-ms noise bursts presented from loudspeakers in the horizontal plane; source azimuths ranged through 360° in 20° steps. Spike patterns were averaged across samples of eight trials. A competitive artificial neural network (ANN) identified sound-source locations by recognizing spike patterns; the ANN was trained using the learning vector quantization learning rule. The information about stimulus location that was transmitted by spike patterns was computed from joint stimulus-response probability matrices. Spike patterns were manipulated in various ways to isolate particular features. Full-spike patterns, which contained all spike-count information and spike timing with 100-μs precision, transmitted the most stimulus-related information. Transmitted information was sensitive to disruption of spike timing on a scale of more than ∼4 ms and was reduced by an average of ∼35% when spike-timing information was obliterated entirely. In a condition in which all but the first spike in each pattern were eliminated, transmitted information decreased by an average of only ∼11%. In many cases, that condition showed essentially no loss of transmitted information. Three unidimensional features were extracted from spike patterns. Of those features, spike latency transmitted ∼60% more information than that transmitted either by spike count or by a measure of latency dispersion. Information transmission by spike patterns recorded on single trials was substantially reduced compared with the information transmitted by averages of eight trials. In a comparison of averaged and nonaveraged responses, however, the information transmitted by latencies was reduced by only ∼29%, whereas information transmitted by spike counts was reduced by 79%. Spike counts clearly are sensitive to sound-source location and could transmit information about sound-source locations. Nevertheless, the present results demonstrate that the timing of the first poststimulus spike carries a substantial amount, probably the majority, of the location-related information present in spike patterns. The results indicate that any complete model of the cortical representation of auditory space must incorporate the temporal characteristics of neuronal response patterns.
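Transmitted information of the kind computed here is the mutual information of a joint stimulus-response probability matrix; a minimal sketch of that calculation (the ANN classifier itself is omitted, and the toy matrices below are illustrative, not data from the study):

```python
import numpy as np

def transmitted_information(joint):
    """Mutual information (bits) from a joint stimulus-response
    probability matrix; rows index stimuli, columns index responses."""
    joint = joint / joint.sum()                    # normalise to probabilities
    p_stim = joint.sum(axis=1, keepdims=True)      # marginal over responses
    p_resp = joint.sum(axis=0, keepdims=True)      # marginal over stimuli
    nz = joint > 0                                 # skip zero cells (log 0)
    expected = p_stim @ p_resp                     # independence prediction
    return float(np.sum(joint[nz] * np.log2(joint[nz] / expected[nz])))

# Toy matrices for two equiprobable source azimuths: perfect identification
# transmits 1 bit; chance-level responding transmits 0 bits.
perfect = np.array([[0.5, 0.0],
                    [0.0, 0.5]])
chance = np.array([[0.25, 0.25],
                   [0.25, 0.25]])
```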


2016, Vol 41 (3), pp. 437-447
Author(s): Dominik Storek, Frantisek Rund, Petr Marsalek

This paper analyses the performance of the Differential Head-Related Transfer Function (DHRTF), an alternative transfer function for headphone-based virtual sound source positioning within the horizontal plane. This experimental one-channel function is used to reduce processing and avoid timbre coloration while preserving the signal features important for sound localisation. The positioning algorithm employing the DHRTF is compared to two other common positioning methods: amplitude panning and HRTF processing. Results of a theoretical comparison and of a quality assessment of the methods by subjective listening tests are presented. The tests focus on distinctive aspects of the positioning methods: spatial impression, timbre coloration, and loudness fluctuations. The results show that the DHRTF positioning method is applicable, with very promising performance; it avoids the perceptible channel coloration that occurs with the HRTF method, and it delivers spatial impression more successfully than the simple amplitude panning method.
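Of the three methods compared, amplitude panning is the simplest baseline; a minimal sketch using a constant-power (sine/cosine) law, which is one common variant (the listening tests may have used a different panning law):

```python
import numpy as np

def constant_power_pan(mono, azimuth_deg, span_deg=90.0):
    """Pan a mono signal between left and right headphone channels with a
    constant-power law. azimuth_deg runs from -span_deg/2 (full left)
    to +span_deg/2 (full right); span_deg is an illustrative choice."""
    theta = (azimuth_deg / span_deg + 0.5) * (np.pi / 2)  # map to 0..pi/2
    left = np.cos(theta) * mono
    right = np.sin(theta) * mono
    return left, right

# At the centre position both channels get equal gain (cos 45° = sin 45°),
# so the summed power stays constant as the source moves.
l, r = constant_power_pan(np.ones(4), 0.0)
```

Because panning applies only a frequency-independent gain per channel, it cannot introduce the channel coloration at issue with HRTF filtering, which is what the timbre comparison in the listening tests probes.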


1979, Vol 44 (3), pp. 354-362
Author(s): Jeffrey L. Danhauer, Jonathan G. Leppler

Thirty-five normal-hearing listeners' speech discrimination scores were obtained for the California Consonant Test (CCT) in four noise competitors: (1) a four-talker complex (FT), (2) a nine-talker complex developed at Bowling Green State University (BGMTN), (3) cocktail party noise (CPN), and (4) white noise (WN). Five listeners received the CCT stimuli mixed ipsilaterally with each of the competing noises at one of seven different signal-to-noise ratios (S/Ns). Articulation functions were plotted for each noise competitor. Statistical analysis revealed that the noise types produced few differences on the CCT scores over most of the S/Ns tested, but that noise competitors similar to peripheral maskers (CPN and WN) had less effect on the scores at more severe levels than competitors more similar to perceptual maskers (FT and BGMTN). Results suggest that the CCT should be sufficiently difficult even without the presence of a noise competitor for normal-hearing listeners in many audiologic testing situations. Levels that should approximate CCT maximum discrimination (D-Max) scores for normal listeners are suggested for use when clinic time does not permit the establishment of articulation functions. The clinician should determine the S/N of the CCT tape itself before establishing listening levels.
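Mixing the test stimuli with a competitor at a prescribed S/N amounts to scaling the noise relative to the speech power before summing the two into one channel; a minimal sketch (the signal contents and sample count are illustrative):

```python
import numpy as np

def mix_at_snr(speech, noise, snr_db):
    """Scale the noise so the speech-to-noise power ratio equals snr_db,
    then mix the two ipsilaterally into a single channel."""
    p_speech = np.mean(speech ** 2)
    p_noise = np.mean(noise ** 2)
    gain = np.sqrt(p_speech / (p_noise * 10 ** (snr_db / 10)))
    return speech + gain * noise

# Illustrative stand-ins for a speech token and a noise competitor.
rng = np.random.default_rng(0)
speech = rng.standard_normal(48000)
noise = rng.standard_normal(48000)
mixed = mix_at_snr(speech, noise, snr_db=0.0)  # equal speech and noise power
```

This also illustrates the abstract's closing caution: the tape itself carries some S/N, so the effective ratio at the listener's ear is set by the recording as well as by any added competitor.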


2013, Vol 133 (5), pp. 2876-2882
Author(s): William A. Yost, Louise Loiselle, Michael Dorman, Jason Burns, Christopher A. Brown

2001, Vol 21 (12), pp. 4408-4415
Author(s): Rick L. Jenison, Jan W. H. Schnupp, Richard A. Reale, John F. Brugge

2015, Vol 20 (3), pp. 183-188
Author(s): Michael F. Dorman, Daniel Zeitler, Sarah J. Cook, Louise Loiselle, William A. Yost, ...

In this report, we used filtered noise bands to constrain listeners' access to interaural level differences (ILDs) and interaural time differences (ITDs) in a sound source localization task. The samples of interest were listeners with single-sided deafness (SSD) who had been fit with a cochlear implant in the deafened ear (SSD-CI). The comparison samples included listeners with normal hearing and bimodal hearing, i.e. with a cochlear implant in 1 ear and low-frequency acoustic hearing in the other ear. The results indicated that (i) sound source localization was better in the SSD-CI condition than in the SSD condition, (ii) SSD-CI patients rely on ILD cues for sound source localization, (iii) SSD-CI patients show functional localization abilities within 1-3 months after device activation and (iv) SSD-CI patients show better sound source localization than bimodal CI patients but, on average, poorer localization than normal-hearing listeners. One SSD-CI patient showed a level of localization within normal limits. We provide an account for the relative localization abilities of the groups by reference to the differences in access to ILD cues.
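The ILD cue central to this account can be estimated directly from the two ear signals as the level ratio in dB; a minimal sketch with a simulated head-shadow attenuation (the 0.5 far-ear gain is an illustrative value, not taken from the study):

```python
import numpy as np

def ild_db(left, right):
    """Interaural level difference in dB (positive = left ear louder)."""
    rms_left = np.sqrt(np.mean(left ** 2))
    rms_right = np.sqrt(np.mean(right ** 2))
    return 20 * np.log10(rms_left / rms_right)

# A noise burst reaching the near (left) ear directly, with the far ear
# attenuated by half to mimic head shadow: roughly a +6 dB ILD.
rng = np.random.default_rng(1)
burst = rng.standard_normal(8000)
left, right = burst, 0.5 * burst
```

High-pass filtering the noise bands preserves this cue while removing usable ITDs, which is how the report constrains listeners to ILD-based localization.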


2016, Vol 115 (4), pp. 2237-2245
Author(s): Hannah M. Krüger, Thérèse Collins, Bernhard Englitz, Patrick Cavanagh

Orienting our eyes to a light, a sound, or a touch occurs effortlessly, despite the fact that sound and touch have to be converted from head- and body-based coordinates to eye-based coordinates to do so. We asked whether the oculomotor representation is also used for localization of sounds even when there is no saccade to the sound source. To address this, we examined whether saccades introduced similar errors of localization judgments for both visual and auditory stimuli. Sixteen subjects indicated the direction of a visual or auditory apparent motion seen or heard between two targets presented either during fixation or straddling a saccade. Compared with the fixation baseline, saccades introduced errors in direction judgments for both visual and auditory stimuli: in both cases, apparent motion judgments were biased in the direction of the saccade. These saccade-induced effects across modalities raise the possibility of shared, cross-modal location coding for perception and action.


2015, Vol 20 (Suppl. 1), pp. 31-37
Author(s): Ruth M. Reeder, Jamie Cadieux, Jill B. Firszt

The study objective was to quantify the abilities of children with unilateral hearing loss (UHL) on measures that address known deficits for this population, i.e. speech understanding in quiet and noise, and sound localisation. Noise conditions varied by noise type and source location. Parent reports of real-world abilities were also obtained. Performance was compared to gender- and age-matched normal-hearing (NH) peers. UHL performance was poorer and more variable than that of NH peers. Among the findings, age correlated with localisation ability for UHL but not NH participants. Low-frequency hearing in the better ear of UHL children was associated with performance in noise; however, there was no such relation for NH children. Considerable variability was evident in the outcomes of children with UHL and needs to be understood as future treatment options are considered.

