Auditory cortex lesions and interaural intensity and phase-angle discrimination in cats

1979 ◽  
Vol 42 (6) ◽  
pp. 1518-1526 ◽  
Author(s):  
J. L. Cranford

1. A currently unresolved question concerning the effects of auditory decortication on sound localization is whether operated animals retain a normal capacity for discriminating the small interaural differences in phase angle or intensity that result from the spatial separation of sound sources relative to the head. The present experiment was designed to provide data relevant to this question. 2. Four normal and three operated cats (bilateral ablations of AI, AII, Ep, SII, and I-T), wearing stereo headsets, were tested with an active-avoidance procedure for detecting reversals in the interaural phase-angle or intensity relations of binaural 1-kHz tones. For both groups of cats, the detection thresholds for interaural intensity and phase angle were close to 1 dB and 5 degrees, respectively. 3. In addition, both unoperated and operated cats exhibited positive transfer from the original lateralization task, which involved detecting interaural reversals of phase angle or intensity, to a new test that required them to identify, in an absolute sense, which ear received the leading or louder signal. 4. Thus, the present investigation provides additional evidence that the neocortex has no primary sensory role in sound localization.
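As a back-of-the-envelope check on the reported thresholds, the ~5-degree interaural phase-angle threshold at 1 kHz can be converted into an equivalent interaural time difference. This is a sketch of the standard phase-to-ITD conversion, not a computation from the original study:

```python
# Convert an interaural phase-angle threshold into an equivalent
# interaural time difference (ITD): one full cycle (360 deg) at
# frequency f spans 1/f seconds.
def phase_to_itd(phase_deg: float, freq_hz: float) -> float:
    """Equivalent ITD in seconds for a given interaural phase angle."""
    return (phase_deg / 360.0) / freq_hz

# The cats' ~5-degree threshold at 1 kHz corresponds to roughly 14 microseconds.
itd_us = phase_to_itd(5.0, 1000.0) * 1e6
print(f"5 deg at 1 kHz ~= {itd_us:.1f} us ITD")
```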

2021 ◽  
Author(s):  
Guus C. van Bentum ◽  
John Van Opstal ◽  
Marc Mathijs van Wanrooij

Sound localization and identification are challenging in acoustically rich environments, and the relation between these two processes is still poorly understood. As natural sound sources rarely occur exactly simultaneously, we wondered whether the auditory system can identify ('what') and localize ('where') two spatially separated sounds with synchronous onsets. While listeners typically report hearing a single source at an averaged location, one study found that both sounds may be accurately localized if listeners are explicitly told that two sources exist. We tested here whether simultaneous source identification (one vs. two) and localization are possible by letting listeners choose to make either one or two head-orienting saccades toward the perceived location(s). Results show that listeners could identify two sounds only when they were presented on different sides of the head, and that identification accuracy increased with their spatial separation. Notably, listeners were unable to accurately localize either sound, irrespective of whether one or two sounds were identified. Instead, the first (or only) response always landed near the average location, while second responses were unrelated to the targets. We conclude that localization of synchronous sounds in the absence of prior information is impossible. We discuss the possibility that the putative cortical 'what' pathway does not transmit relevant information to the 'where' pathway. We examine how a broadband interaural correlation cue could help to correctly identify the presence of two sounds without enabling their localization, and we propose that the persistent averaging behavior reveals that the 'where' system intrinsically assumes that synchronous sounds originate from a single source.
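The broadband interaural correlation cue discussed above can be illustrated with a toy simulation. Assuming a pure-delay (ITD-only) lateralization model with independent noise sources, which is a simplification of the actual experimental setup, the maximum normalized interaural correlation stays near 1 for a single source but drops markedly when two spatially separated sources are summed:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20000  # samples per source

def binaural(sources):
    """Sum independent noise sources; lateralize each by a pure delay
    (ITD in samples; circular shift keeps the toy model simple)."""
    left = np.zeros(n)
    right = np.zeros(n)
    for src, itd in sources:
        left += src
        right += np.roll(src, itd)
    return left, right

def max_interaural_correlation(left, right, max_lag=40):
    """Maximum of the normalized interaural cross-correlation over lags."""
    l = (left - left.mean()) / left.std()
    r = (right - right.mean()) / right.std()
    return max(np.dot(l, np.roll(r, k)) / n for k in range(-max_lag, max_lag + 1))

one_source = [(rng.standard_normal(n), 20)]
two_sources = [(rng.standard_normal(n), 20), (rng.standard_normal(n), -20)]

c_one = max_interaural_correlation(*binaural(one_source))   # close to 1
c_two = max_interaural_correlation(*binaural(two_sources))  # clearly lower (~0.5)
```

A correlation well below 1 signals that more than one source is present, even though the correlation peak by itself does not say where either source is.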


2019 ◽  
Vol 62 (3) ◽  
pp. 745-757 ◽  
Author(s):  
Jessica M. Wess ◽  
Joshua G. W. Bernstein

Purpose: For listeners with single-sided deafness, a cochlear implant (CI) can improve speech understanding by giving the listener access to the ear with the better target-to-masker ratio (TMR; head shadow) or by providing interaural difference cues to facilitate the perceptual separation of concurrent talkers (squelch). CI simulations presented to listeners with normal hearing examined how these benefits could be affected by interaural differences in loudness growth in a speech-on-speech masking task.
Method: Experiment 1 examined a target–masker spatial configuration where the vocoded ear had a poorer TMR than the nonvocoded ear. Experiment 2 examined the reverse configuration. Generic head-related transfer functions simulated free-field listening. Compression or expansion was applied independently to each vocoder channel (power-law exponents: 0.25, 0.5, 1, 1.5, or 2).
Results: Compression reduced the benefit provided by the vocoded ear in both experiments. There was some evidence that expansion increased squelch in Experiment 1 but reduced the benefit in Experiment 2, where the vocoded ear provided a combination of head-shadow and squelch benefits.
Conclusions: The effects of compression and expansion are interpreted in terms of envelope distortion and changes in the vocoded-ear TMR (for head shadow) or changes in perceived target–masker spatial separation (for squelch). The compression parameter is a candidate for clinical optimization to improve single-sided deafness CI outcomes.
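The power-law manipulation of loudness growth described in the Method can be sketched as follows. This is a generic illustration of per-channel power-law compression and expansion of an envelope, with hypothetical values, not the authors' vocoder implementation:

```python
import numpy as np

def powerlaw_envelope(env, exponent, ref=1.0):
    """Apply a power law to a nonnegative channel envelope relative to a
    reference level: exponent < 1 compresses loudness growth, 1 leaves it
    unchanged, and exponent > 1 expands it (cf. the 0.25-2 range tested)."""
    env = np.asarray(env, dtype=float)
    return ref * (env / ref) ** exponent

env = np.array([0.1, 0.5, 1.0, 2.0])       # hypothetical envelope samples
compressed = powerlaw_envelope(env, 0.5)   # dynamic range halved in dB
expanded = powerlaw_envelope(env, 2.0)     # dynamic range doubled in dB
```

In dB terms a power-law exponent simply scales the envelope's dynamic range, which is why exponents below 1 flatten interaural level contrasts carried by the vocoded ear.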


Acta Acustica ◽  
2020 ◽  
Vol 5 ◽  
pp. 3
Author(s):  
Aida Hejazi Nooghabi ◽  
Quentin Grimal ◽  
Anthony Herrel ◽  
Michael Reinwald ◽  
Lapo Boschi

We implement a new algorithm to model acoustic wave propagation through and around a dolphin skull, using the k-Wave software package [1]. The equation of motion is integrated numerically in a complex three-dimensional structure via a pseudospectral scheme which, importantly, accounts for lateral heterogeneities in the mechanical properties of bone. Modeling wave propagation in the skull of dolphins contributes to our understanding of how their sound localization and echolocation mechanisms work. Dolphins are known to be highly effective at localizing sound sources; in particular, they have been shown to be equally sensitive to changes in the elevation and azimuth of a sound source, whereas other studied species, e.g. humans, are much more sensitive to the latter than to the former. A laboratory experiment conducted by our team on a dry skull [2] showed that sound reverberated in bone could play an important role in enhancing localization accuracy, and it has been speculated that the dolphin sound localization system relies in part on the analysis of this information. We employ our new numerical model to simulate the response of the same skull used by [2] to sound sources at a wide and dense set of locations on the vertical plane. This work is the first step towards a new tool for modeling source (echo)location in dolphins; in future work, it will allow us to explore a wide variety of emitted signals and anatomical features.
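The core of a pseudospectral scheme like the one used here is computing spatial derivatives in the Fourier domain, which gives spectral accuracy on a periodic grid. A minimal one-dimensional sketch in generic numpy, not k-Wave's actual API:

```python
import numpy as np

# Spectral derivative on a periodic grid: differentiate u(x) by multiplying
# its Fourier transform by i*k, then transforming back.
n = 64
dx = 2 * np.pi / n
x = np.arange(n) * dx
u = np.sin(x)

k = 2 * np.pi * np.fft.fftfreq(n, d=dx)           # angular wavenumbers
dudx = np.real(np.fft.ifft(1j * k * np.fft.fft(u)))

# For smooth periodic functions the error is near machine precision,
# which is why pseudospectral solvers need far fewer grid points per
# wavelength than finite-difference schemes.
err = np.max(np.abs(dudx - np.cos(x)))
```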


2011 ◽  
Vol 7 (6) ◽  
pp. 836-839 ◽  
Author(s):  
Josefin Starkhammar ◽  
Patrick W. Moore ◽  
Lois Talmadge ◽  
Dorian S. Houser

Recent recordings of dolphin echolocation using a dense array of hydrophones suggest that the echolocation beam is dynamic and can at times consist of a single dominant peak, while at other times it consists of forward projected primary and secondary peaks with similar energy, partially overlapping in space and frequency bandwidth. The spatial separation of the peaks provides an area in front of the dolphin, where the spectral magnitude slopes drop off quickly for certain frequency bands. This region is potentially used to optimize prey localization by directing the maximum pressure slope of the echolocation beam at the target, rather than the maximum pressure peak. The dolphin was able to steer the beam horizontally to a greater extent than previously described. The complex and dynamic sound field generated by the echolocating dolphin may be due to the use of two sets of phonic lips as sound sources, or an unknown complexity in the sound propagation paths or acoustic properties of the forehead tissues of the dolphin.
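The idea of aiming the steepest part of the beam, rather than its peak, at a target can be illustrated with a toy two-lobe beam model. The Gaussian lobes, widths, and offsets below are hypothetical and are not fitted to the recorded data:

```python
import numpy as np

# Toy beam: two overlapping lobes (Gaussians in azimuth, degrees) of
# similar energy, as described for the dual-peak echolocation beam.
az = np.linspace(-40, 40, 801)
beam = np.exp(-((az - 6) / 8) ** 2) + 0.9 * np.exp(-((az + 6) / 8) ** 2)

# The direction of maximum level slope differs from the pressure maximum;
# pointing the slope at a target makes small angular errors produce large
# level changes in the echo.
slope = np.gradient(beam, az)
steepest = az[np.argmax(np.abs(slope))]
peak = az[np.argmax(beam)]
```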


Acta Acustica ◽  
2021 ◽  
Vol 5 ◽  
pp. 60
Author(s):  
Mathias Dietz ◽  
Jörg Encke ◽  
Kristin I Bracklo ◽  
Stephan D Ewert

Differences between the interaural phase of a noise and a target tone improve detection thresholds. The maximum masking release is obtained for detecting an antiphasic tone (Sπ) in diotic noise (N0). It has been shown in several studies that this benefit gradually declines as an interaural time delay (ITD) is applied to the noise. This decline has been attributed to the reduced interaural coherence of the noise. Here, we report detection thresholds for a 500 Hz tone in masking noise with ITDs up to 8 ms and bandwidths from 25 to 1000 Hz. Reducing the noise bandwidth from 100 to 50 and 25 Hz increased the masking release for 8-ms ITD, as expected for increasing temporal coherence with decreasing bandwidth. For bandwidths of 100–1000 Hz no significant difference in masking release was observed. Detection thresholds with these wider-band noises had an ITD dependence that is fully described by the temporal coherence imposed by the typical monaurally determined auditory-filter bandwidth. A binaural model based on interaural phase-difference fluctuations accounts for the data without using delay lines.
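The bandwidth dependence reported here follows from the interaural coherence of delayed noise: for an ideal flat band of width B centered on fc, the zero-lag interaural correlation after an ITD of τ is approximately sinc(Bτ)·cos(2πfc·τ). A toy numerical check, using generic sampling parameters rather than the authors' stimulus code:

```python
import numpy as np

rng = np.random.default_rng(1)
fs = 48000  # arbitrary sampling rate for the sketch

def bandnoise(bw_hz, fc=500.0, dur=2.0):
    """Gaussian noise band-limited around fc by zeroing FFT bins."""
    n = int(fs * dur)
    spec = np.fft.rfft(rng.standard_normal(n))
    f = np.fft.rfftfreq(n, 1 / fs)
    spec[(f < fc - bw_hz / 2) | (f > fc + bw_hz / 2)] = 0
    return np.fft.irfft(spec, n)

def coherence_after_itd(noise, itd_s):
    """Zero-lag interaural correlation after delaying one ear by the ITD."""
    d = int(round(itd_s * fs))
    l, r = noise[:-d], noise[d:]
    return np.dot(l, r) / np.sqrt(np.dot(l, l) * np.dot(r, r))

# At 8-ms ITD a 25-Hz band stays highly coherent, while a 1000-Hz band
# decorrelates, mirroring the bandwidth effect on masking release.
narrow = coherence_after_itd(bandnoise(25), 8e-3)
wide = coherence_after_itd(bandnoise(1000), 8e-3)
```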


2021 ◽  
Vol 17 (11) ◽  
pp. e1009569
Author(s):  
Julia C. Gorman ◽  
Oliver L. Tufte ◽  
Anna V. R. Miller ◽  
William M. DeBello ◽  
José L. Peña ◽  
...  

Emergent response properties of sensory neurons depend on circuit connectivity and somatodendritic processing. Neurons of the barn owl's external nucleus of the inferior colliculus (ICx) display emergent spatial selectivity. These neurons use interaural time difference (ITD) as a cue for the horizontal direction of sound sources. ITD is detected by upstream brainstem neurons with narrow frequency tuning, resulting in spatially ambiguous responses. This spatial ambiguity is resolved by ICx neurons integrating inputs over frequency, a processing step relevant to sound localization across species. Previous models have predicted that ICx neurons function as point neurons that linearly integrate inputs across frequency. However, the complex dendritic trees and spines of ICx neurons raise the question of whether this prediction is accurate. Data from in vivo intracellular recordings of ICx neurons were used to address this question. The results revealed diverse frequency-integration properties: some ICx neurons showed responses consistent with the point-neuron hypothesis, while others were consistent with nonlinear dendritic integration. Modeling showed that varied connectivity patterns and forms of dendritic processing may underlie the observed frequency-integration behavior. These results corroborate the ability of neurons with complex dendritic trees to implement diverse linear and nonlinear integration of synaptic inputs, which is relevant for adaptive coding and learning and supports a fundamental mechanism in sound localization.
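The contrast between the point-neuron hypothesis and nonlinear dendritic integration can be sketched with a toy model. The weights are hypothetical and a tanh branch nonlinearity stands in for dendritic saturation; this is not the authors' biophysical model:

```python
import numpy as np

def point_neuron(channel_inputs, weights):
    """Point-neuron hypothesis: a single weighted sum over frequency channels."""
    return float(np.dot(weights, channel_inputs))

def dendritic(channel_inputs, weights, gain=1.0):
    """Toy nonlinear alternative: each frequency channel drives its own
    dendritic branch, which saturates (tanh) before the somatic sum."""
    return float(np.sum(np.tanh(gain * weights * channel_inputs)))

x = np.array([1.0, 2.0, 3.0])   # hypothetical per-channel drive
w = np.array([0.5, 0.3, 0.2])   # hypothetical synaptic weights

linear_resp = point_neuron(x, w)   # scales linearly with input strength
branch_resp = dendritic(x, w)      # sublinear for strong drive
```

Doubling the input doubles the point-neuron response exactly, while the branch model grows less than twofold, the kind of signature that distinguishes the two hypotheses in the recordings.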

