Sound frequency-invariant neural coding of a frequency-dependent cue to sound source location

2015 · Vol 114 (1) · pp. 531-539
Author(s): Heath G. Jones, Andrew D. Brown, Kanthaiah Koka, Jennifer L. Thornton, Daniel J. Tollin

The century-old duplex theory of sound localization posits that low- and high-frequency sounds are localized with two different acoustical cues, interaural time and level differences (ITDs and ILDs), respectively. While behavioral studies in humans and behavioral and neurophysiological studies in a variety of animal models have largely supported the duplex theory, behavioral sensitivity to ILD is curiously invariant across the audible spectrum. Here we demonstrate that auditory midbrain neurons in the chinchilla (Chinchilla lanigera) also encode ILDs in a frequency-invariant manner, efficiently representing the full range of acoustical ILDs experienced as a joint function of sound source frequency, azimuth, and distance. We further show, using Fisher information, that nominal “low-frequency” and “high-frequency” ILD-sensitive neural populations can discriminate ILD with similar acuity, yielding neural ILD discrimination thresholds for near-midline sources comparable to behavioral discrimination thresholds estimated for chinchillas. These findings thus suggest a revision to the duplex theory and reinforce ecological and efficiency principles that hold that neural systems have evolved to encode the spectrum of biologically relevant sensory signals to which they are naturally exposed.
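As an illustration of the Fisher-information approach used here, a minimal sketch with an invented sigmoidal ILD tuning curve and assumed Poisson spiking (none of the parameters are from the paper):

```python
import numpy as np

# Hypothetical sigmoidal ILD tuning curve for one midbrain neuron
# (all parameters invented for illustration).
ild = np.linspace(-30, 30, 601)            # ILD in dB
rate = 5 + 45 / (1 + np.exp(-ild / 8.0))   # firing rate, spikes/s

# Fisher information for Poisson spiking: FI(x) = f'(x)^2 / f(x).
d_rate = np.gradient(rate, ild)
fi = d_rate**2 / rate

# An ideal observer's discrimination threshold scales as 1/sqrt(FI),
# so acuity is best where the tuning curve is steep relative to its rate.
threshold = 1.0 / np.sqrt(fi)
print(f"best acuity near ILD = {ild[np.argmax(fi)]:.1f} dB")
```

Summing such per-neuron FI curves over a population and inverting gives the population discrimination threshold compared against behavior in the study.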

2020 · Vol 10 (18) · pp. 6356
Author(s): Sina Mojtahedi, Engin Erzin, Pekcan Ungan

A sound source at non-zero azimuth gives rise to interaural time and level differences (ITD and ILD). Studies of the auditory system imply that these cues are encoded in different parts of the brain but combined into a single lateralization percept, as evidenced by experiments demonstrating trading between them. According to the duplex theory of sound lateralization, ITD and ILD play the more significant role for low-frequency and high-frequency stimulation, respectively. In this study, ITD and ILD values extracted from generic head-related transfer functions were imposed on a complex sound consisting of two low- and seven high-frequency tones. Two-alternative forced-choice behavioral tests were employed to assess accuracy in identifying a change in lateralization. Based on a diversity-combination model and using the error-rate data obtained from the tests, the weights of the ITD and ILD cues in their integration were determined, incorporating a bias observed for inward shifts. The weights of the two cues were found to change with the azimuth of the sound source: while ILD appears to be the optimal cue for azimuths near the midline, the ITD and ILD weights become balanced for azimuths far from the midline.
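A minimal sketch of how per-cue error rates can be turned into combination weights, in the spirit of a diversity combiner (the paper's actual model also incorporates the inward-shift bias, omitted here; the error rates are invented):

```python
import math
from scipy.stats import norm

# Hypothetical single-cue error rates from a 2AFC lateralization task
# (illustrative values, not taken from the paper).
p_err_itd, p_err_ild = 0.20, 0.10

# Convert each error rate to a sensitivity index d' (2AFC: d' = sqrt(2)*z),
# then weight each cue by its squared d', as an optimal combiner would.
d_itd = math.sqrt(2) * norm.ppf(1 - p_err_itd)
d_ild = math.sqrt(2) * norm.ppf(1 - p_err_ild)

w_itd = d_itd**2 / (d_itd**2 + d_ild**2)
w_ild = 1 - w_itd
print(f"ITD weight: {w_itd:.2f}, ILD weight: {w_ild:.2f}")
```

With these invented rates the more reliable ILD cue receives the larger weight, mirroring the near-midline result reported above.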


2017 · Vol 114 (5) · pp. 1201-1206
Author(s): Magdalena Wojtczak, Anahita H. Mehta, Andrew J. Oxenham

In modern Western music, melody is commonly conveyed by pitch changes in the highest-register voice, whereas meter or rhythm is often carried by instruments with lower pitches. An intriguing and recently suggested possibility is that the custom of assigning rhythmic functions to lower-pitched instruments may have emerged because of fundamental properties of the auditory system that result in superior time encoding for low pitches. Here we compare rhythm and synchrony perception between low- and high-frequency tones, using both behavioral and EEG techniques. Both methods were consistent in showing no superiority of time encoding for low over high frequencies. However, listeners were consistently more sensitive to timing differences between two nearly synchronous tones when the high-frequency tone followed the low-frequency tone than vice versa. The results demonstrate no low-frequency superiority in timing judgments but reveal a robust asymmetry in the perception and neural coding of synchrony, reflecting greater tolerance for delays of low- relative to high-frequency sounds. We propose that this asymmetry exists to compensate for inherent and variable time delays in cochlear processing, as well as for the acoustical properties of sound sources in the natural environment, thereby providing veridical perceptual experiences of simultaneity.


Loquens · 2019 · Vol 5 (2) · pp. 054
Author(s): María Cuesta, Pedro Cobo

Although tinnitus, the conscious perception of a sound without a sound source external or internal to the body, is highly correlated with hearing loss, the precise nature of that correlation remains unknown. People with high-pitch tinnitus tend to have high-frequency hearing loss and, conversely, low-pitch tinnitus is mostly associated with low-frequency hearing loss. However, many subjects with low- or high-frequency losses do not develop tinnitus. Thus, studies relating audiometric characteristics to tinnitus features remain relevant. This article presents a correlational study of audiometric and tinnitus variables in a sample of 34 subjects, paying special attention to the heterogeneous subtypes of both audiometry shape and tinnitus etiology. Our results, which concur with others previously published, demonstrate that the tinnitus pitch (the main frequency of the tinnitus spectrum) in subjects with steeply sloping high-frequency and continuously sloping hearing losses is highly correlated with the frequency at which the hearing loss reaches 50 dB HL.
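The predictor used here, the frequency at which hearing loss reaches 50 dB HL, can be read off an audiogram by interpolation; a sketch with an invented steeply sloping audiogram:

```python
import numpy as np

# Hypothetical steeply sloping high-frequency audiogram (values invented).
freqs_hz = np.array([250, 500, 1000, 2000, 4000, 8000])
loss_db = np.array([10, 10, 15, 35, 60, 75])   # dB HL

# Interpolate on a log-frequency axis, since audiometric frequencies
# are log-spaced, to find where the loss first reaches 50 dB HL.
log_f50 = np.interp(50, loss_db, np.log2(freqs_hz))
f50_hz = 2.0 ** log_f50
print(f"hearing loss reaches 50 dB HL near {f50_hz:.0f} Hz")
```

In the correlational analysis above, this frequency would then be compared against each subject's matched tinnitus pitch.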


2020
Author(s): Bo Yeong Won, Martha Forloines, Zhiheng Zhou, Joy Geng

The ability to suppress distractions is essential to successful completion of goal-directed behaviors. Several behavioral studies have recently provided strong evidence that learned suppression may be particularly efficient in reducing distractor interference. Expectations about a distractor’s repeated location, color, or even presence are rapidly learned and used to attenuate interference. In this study, we use a visual search paradigm in which a color singleton, which is known to capture attention, occurs within blocks with high or low frequency. The behavioral results show reduced singleton interference during the high compared to the low frequency block (Won et al., 2019). The fMRI results provide evidence that the attenuation of distractor interference is supported by changes in singleton, target, and non-salient distractor representations within retinotopic visual cortex. These changes in visual cortex are accompanied by findings that singleton-present trials compared to non-singleton trials produce greater activation in bilateral parietal cortex, indicative of attentional capture, in low frequency, but not high frequency, blocks. Together, these results suggest that the readout of saliency signals associated with an expected color singleton from visual cortex is suppressed, resulting in less competition for attentional priority in frontoparietal attentional control regions.


2011 · Vol 106 (5) · pp. 2698-2708
Author(s): Shigeyuki Kuwada, Brian Bishop, Caitlin Alex, Daniel W. Condit, Duck O. Kim

Despite decades of research devoted to the study of inferior colliculus (IC) neurons' tuning to sound-source azimuth, there remain many unanswered questions because no previous study has examined azimuth tuning over a full range of 360° azimuths at a wide range of stimulus levels in an unanesthetized preparation. Furthermore, a comparison of azimuth tuning to binaural and contralateral ear stimulation over ranges of full azimuths and widely varying stimulus levels has not previously been reported. To fill this void, we have conducted a study of azimuth tuning in the IC of the unanesthetized rabbit over a 300° range of azimuths at stimulus levels of 10–50 dB above neural threshold to both binaural and contralateral ear stimulation using virtual auditory space stimuli. This study provides systematic evidence for neural coding of azimuth. We found the following: 1) level-tolerant azimuth tuning was observed in the top 35% of IC neurons ranked by vector strength and in the top 15% ranked by vector angle; 2) preserved azimuth tuning to binaural stimulation at high stimulus levels was created as a consequence of binaural facilitation in the contralateral sound field and binaural suppression in the ipsilateral sound field; 3) the direction of azimuth tuning to binaural stimulation was primarily in the contralateral sound field, and its center shifted laterally toward −90° with increasing stimulus level; 4) at 10 dB, azimuth tuning to binaural and contralateral stimulation was similar, indicating that it was mediated by monaural mechanisms; and 5) at higher stimulus levels, azimuth tuning to contralateral ear stimulation was severely degraded. These findings form a foundation for understanding neural mechanisms of localizing sound-source azimuth.
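Vector strength and vector angle, the two tuning metrics ranked in this study, can be computed by treating the azimuth tuning curve as a circular distribution of firing rates; a sketch with an invented tuning curve over the 300° azimuth range:

```python
import numpy as np

# Hypothetical azimuth tuning curve over a 300° range (rates invented),
# peaking in the contralateral field near -90°.
az_deg = np.arange(-150, 151, 15)
rate = np.clip(20 + 30 * np.cos(np.radians(az_deg + 90)), 0, None)

# Rate-weighted circular mean: its magnitude is the vector strength
# (0 = untuned, 1 = perfectly tuned), its angle the preferred azimuth.
az = np.radians(az_deg)
z = np.sum(rate * np.exp(1j * az)) / np.sum(rate)
vector_strength = np.abs(z)
vector_angle = np.degrees(np.angle(z))
print(f"VS = {vector_strength:.2f}, preferred azimuth = {vector_angle:.1f}°")
```

Level tolerance would then be assessed by how little these two quantities change as the stimulus level rises from 10 to 50 dB above threshold.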


2021
Author(s): Iwona Kudłacik, Jan Kapłon

High-rate GNSS (HR-GNSS) observations are used for high-precision applications that require tracking point-position changes over short intervals, such as earthquake analysis or structural health monitoring. We aim to apply HR-GNSS observations to the monitoring of mining tremors, where dynamic displacement amplitudes reach at most a few tens of millimetres. The study analyses several mining tremors of magnitude 3-4 in Poland, recorded within the EPOS-PL project.

The HR-GNSS position is obtained at over 1 Hz in kinematic mode with relative or absolute approaches. For short periods (up to several minutes) the positioning accuracy is very high, but the displacement time series suffer from low-frequency fluctuations. They therefore cannot be applied directly to the analysis of seismic phenomena, and it is necessary to filter out low- and high-frequency noise.

In this study, we discuss methods for reducing the noise in HR-GNSS displacement time series so as to obtain precise and physically correct results with reference to seismological observations, which for dynamic position changes are an order of magnitude more accurate. We present band-pass filtering with automatic filtration limits based on occupied-bandwidth detection, and the discrete wavelet transform with multiresolution analysis. The noise correction increases the correlation coefficient by over 40%, reaching values above 0.8. Moreover, we tested a basic Kalman filter for integrating two sensors, HR-GNSS and an accelerometer, to recover the actual displacements of the station during a small earthquake (a mining tremor); the usefulness of this algorithm for the assumed purpose was confirmed. The algorithm reduces the noise in HR-GNSS results and, at the same time, minimizes potential seismograph drift and the errors caused by the seismograph's limited dynamic range. An unquestionable advantage is the possibility of obtaining a displacement time series at high frequency (equal to the frequency of the seismograph observations, e.g. 250 Hz) showing the full range of station motion: both the dynamic and the static displacements caused by an earthquake.
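A sketch of the band-pass filtering step, with an invented sampling rate, invented band limits, and a synthetic displacement series (the study derives its limits automatically from occupied-bandwidth detection):

```python
import numpy as np
from scipy.signal import butter, filtfilt

fs = 20.0                                  # assumed HR-GNSS rate, Hz
t = np.arange(0, 60, 1 / fs)

# Synthetic displacement (mm): a 2 Hz tremor signal contaminated by
# a slow low-frequency fluctuation and white noise.
rng = np.random.default_rng(0)
tremor = 5.0 * np.sin(2 * np.pi * 2.0 * t)
drift = 10.0 * np.sin(2 * np.pi * 0.02 * t)
series = tremor + drift + rng.normal(0.0, 1.0, t.size)

# Zero-phase band-pass (0.5-5 Hz, assumed limits) removes the drift
# and part of the high-frequency noise without shifting the waveform.
b, a = butter(4, [0.5, 5.0], btype="bandpass", fs=fs)
filtered = filtfilt(b, a, series)

corr = np.corrcoef(filtered, tremor)[0, 1]
print(f"correlation with the true tremor signal: {corr:.2f}")
```

Zero-phase filtering (forward-backward `filtfilt`) matters here because a phase shift would break the comparison with the seismograph record.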


2014 · Vol 111 (2) · pp. 300-312
Author(s): Mark M. G. Walton, Edward G. Freedman

Primates explore a visual scene through a succession of saccades. Much of what is known about the neural circuitry that generates these movements has come from neurophysiological studies using subjects with their heads restrained. Horizontal saccades and the horizontal components of oblique saccades are associated with high-frequency bursts of spikes in medium-lead burst neurons (MLBs) and long-lead burst neurons (LLBNs) in the paramedian pontine reticular formation. For LLBNs, the high-frequency burst is preceded by a low-frequency prelude that begins 12–150 ms before saccade onset. In terms of the lead time between the onset of prelude activity and saccade onset, the anatomical projections, and the movement field characteristics, LLBNs are a heterogeneous group of neurons. Whether this heterogeneity reflects multiple functional subclasses is an open question. One possibility is that some may carry signals related to head movement. We recorded from LLBNs while monkeys performed head-unrestrained gaze shifts, during which the kinematics of the eye and head components were dissociable. Many cells had peak firing rates that never exceeded 200 spikes/s for gaze shifts of any vector. The activity of these low-frequency cells often persisted beyond the end of the gaze shift and was usually related to head-movement kinematics. A subset was tested during head-unrestrained pursuit and showed clear modulation in the absence of saccades. These “low-frequency” cells were intermingled with MLBs and traditional LLBNs and may represent a separate functional class carrying signals related to head movement.


Author(s): G. Y. Fan, J. M. Cowley

It is well known that structural information about the specimen is not always faithfully transferred through the electron microscope. First, the spatial-frequency spectrum is modulated by the transfer function (TF) at the focal plane. Second, the spectrum suffers a high-frequency cut-off imposed by the aperture (or, effectively, by damping terms such as chromatic aberration). While these effects have little consequence for imaging crystal periodicity as long as the low-order Bragg spots lie inside the aperture (although the contrast may be reversed), they can completely change the appearance of images of amorphous materials. Because the spectrum of an amorphous material is continuous, modulating it emphasizes some components while weakening others. In particular, the cut-off of high-frequency components, which contribute to an amorphous image just as strongly as low-frequency components, can have a fundamental effect. This can be illustrated through computer simulation. Imaging a white-noise object with an electron microscope free of TF limitations gives Fig. 1a, which is obtained by Fourier transformation of a constant amplitude combined with random phases generated by computer.
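The white-noise object of Fig. 1a can be generated as described, by inverse Fourier transforming a constant-amplitude spectrum with random phases; a sketch (array size and bands are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 256

# Constant-amplitude spectrum with computer-generated random phases.
phases = rng.uniform(0, 2 * np.pi, (n, n))
spectrum = np.exp(1j * phases)

# The real part of the inverse transform serves as the white-noise object.
image = np.fft.ifft2(spectrum).real

# Its power spectrum is statistically flat: low- and high-frequency
# radial bands carry the same average power.
power = np.abs(np.fft.fft2(image))**2
f = np.fft.fftfreq(n)
fy, fx = np.meshgrid(f, f, indexing="ij")
r = np.hypot(fy, fx)
low = power[(r > 0.05) & (r < 0.25)].mean()
high = power[r >= 0.25].mean()
print(f"low/high band mean power ratio: {low / high:.2f}")
```

Multiplying `spectrum` by a simulated transfer function and aperture mask before the inverse transform would reproduce the modulation and cut-off effects discussed above.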


Author(s): M. T. Postek, A. E. Vladar

Fully automated or semi-automated scanning electron microscopes (SEMs) are now commonly used in semiconductor production and other forms of manufacturing. The industry requires that an automated instrument be routinely capable of 5 nm resolution (or better) at 1.0 kV accelerating voltage for the measurement of nominal 0.25-0.35 micrometer semiconductor critical dimensions. Testing and proving that the instrument performs at this level on a day-by-day basis is an industry need and concern that has been the object of a study at NIST; the fundamentals and results are discussed in this paper.

In scanning electron microscopy, two of the most important instrument parameters are the size and shape of the primary electron beam, and any image taken in a scanning electron microscope is the result of the interaction between the sample and the electron probe. The low-frequency changes in the video signal collected from the sample contain information about the larger features, and the high-frequency changes carry information about finer details. The sharper the image, the larger the number of high-frequency components making up that image. Fast Fourier Transform (FFT) analysis of an SEM image can therefore be employed to provide qualitative and, ultimately, quantitative information regarding SEM image quality.
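A sketch of the FFT sharpness idea (hypothetical white-noise "image" and cutoff, not the NIST procedure itself): the fraction of spectral power above a radial-frequency cutoff drops when the image is blurred.

```python
import numpy as np

def high_freq_fraction(image, cutoff=0.25):
    """Fraction of the image's spectral power above a normalized radial frequency."""
    power = np.abs(np.fft.fft2(image))**2
    fy, fx = np.meshgrid(np.fft.fftfreq(image.shape[0]),
                         np.fft.fftfreq(image.shape[1]), indexing="ij")
    r = np.hypot(fy, fx)
    return power[r > cutoff].sum() / power.sum()

rng = np.random.default_rng(2)
sharp = rng.normal(size=(128, 128))
# A 5-pixel running mean acts as a low-pass filter, i.e. a blur.
blurred = sum(np.roll(sharp, k, axis=1) for k in range(-2, 3)) / 5

print(f"sharp: {high_freq_fraction(sharp):.2f}, "
      f"blurred: {high_freq_fraction(blurred):.2f}")
```

Tracking such a metric over repeated images of a reference sample is one way a production SEM's day-to-day performance could be monitored.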


1992 · Vol 1 (4) · pp. 52-55
Author(s): Gail L. MacLean, Andrew Stuart, Robert Stenstrom

Differences in real ear sound pressure levels (SPLs) with three portable stereo system (PSS) earphones (supraaural [Sony Model MDR-44], semiaural [Sony Model MDR-A15L], and insert [Sony Model MDR-E225]) were investigated. Twelve adult men served as subjects. Frequency response, high frequency average (HFA) output, peak output, peak output frequency, and overall RMS output for each PSS earphone were obtained with a probe tube microphone system (Fonix 6500 Hearing Aid Test System). Results indicated a significant difference in mean RMS outputs with nonsignificant differences in mean HFA outputs, peak outputs, and peak output frequencies among PSS earphones. Differences in mean overall RMS outputs were attributed to differences in low-frequency effects that were observed among the frequency responses of the three PSS earphones. It is suggested that one cannot assume equivalent real ear SPLs, with equivalent inputs, among different styles of PSS earphones.

