Selectivity for space and time in early areas of the auditory dorsal stream in the rhesus monkey

2014 · Vol 111 (8) · pp. 1671–1685
Author(s): Paweł Kuśmierek, Josef P. Rauschecker

The respective roles of the ventral and dorsal cortical processing streams are still under discussion in both vision and audition. We characterized neural responses in the caudal auditory belt cortex, an early dorsal-stream region of the macaque. We found fast neural responses with elevated temporal precision as well as neurons selective to sound location. These populations were partly segregated: neurons in a caudomedial area followed temporal stimulus structure more precisely but were less selective to spatial location, and response latencies in this area were even shorter than in primary auditory cortex. Neurons in a caudolateral area showed higher selectivity for sound-source azimuth and elevation, but their responses were slower and followed temporal sound structure less faithfully. In contrast to the primary area and other regions studied previously, latencies of caudal belt neurons were not negatively correlated with best frequency. Our results suggest that two functional substreams may exist within the auditory dorsal stream.
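As a rough illustration of the kinds of per-neuron summary measures discussed above, the Python sketch below computes a simple spatial selectivity index and the correlation between response latency and best frequency. All values, the neuron counts, and the particular index definition are placeholders assumed for illustration, not the study's data or analysis code.

```python
# Minimal sketch (synthetic data, not the study's recordings) of two per-neuron measures:
# a spatial selectivity index across sound-source azimuths and the latency-vs-BF correlation.
import numpy as np

rng = np.random.default_rng(0)
n_neurons, n_azimuths = 50, 8

rates = rng.gamma(shape=2.0, scale=10.0, size=(n_neurons, n_azimuths))  # spikes/s per azimuth
latency_ms = rng.uniform(6.0, 40.0, n_neurons)                          # hypothetical onset latencies
best_freq_khz = 10 ** rng.uniform(-1.0, 1.5, n_neurons)                 # ~0.1-32 kHz, log-spaced

# One common selectivity index: 1 - mean/max response; 0 = untuned, -> 1 = very selective.
spatial_selectivity = 1.0 - rates.mean(axis=1) / rates.max(axis=1)

# A negative r here would mean shorter latencies at higher best frequencies.
r = np.corrcoef(np.log10(best_freq_khz), latency_ms)[0, 1]
print(f"mean spatial selectivity: {spatial_selectivity.mean():.2f}")
print(f"latency vs. log-BF correlation: r = {r:.2f}")
```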

2013 · Vol 25 (2) · pp. 175–187
Author(s): Jihoon Oh, Jae Hyung Kwon, Po Song Yang, Jaeseung Jeong

Neural responses in early sensory areas are influenced by top–down processing. In the visual system, early visual areas have been shown to actively participate in top–down processing on the basis of their topographical properties. Although it has been suggested that the auditory cortex is involved in top–down control, functional evidence of topographic modulation is still lacking. Here, we show that mental auditory imagery for familiar melodies induces significant activation in the frequency-responsive areas of the primary auditory cortex (PAC). This activation is related to the characteristics of the imagery: when subjects were asked to imagine high-frequency melodies, we observed increased activation in the high- versus low-frequency response area; when the subjects were asked to imagine low-frequency melodies, the opposite was observed. Furthermore, we found that, among the tonotopic subfields of the PAC, area A1 is more closely related to the observed frequency-specific modulation than area R. Our findings suggest that top–down processing in the auditory cortex relies on a mechanism similar to that used in the perception of external auditory stimuli, comparable to that in the early visual system.
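The sketch below illustrates, on simulated values only, the kind of tonotopic region-of-interest contrast described above (imagery condition crossed with high- versus low-frequency voxel preference). The voxel labels, GLM beta values, and effect sizes are hypothetical assumptions and do not come from the study's fMRI pipeline.

```python
# Illustrative sketch only: contrasting imagery-evoked activity in high- vs.
# low-frequency-preferring voxels of PAC, using made-up voxel labels and betas.
import numpy as np

rng = np.random.default_rng(1)
n_vox = 200
is_high_freq_voxel = rng.random(n_vox) > 0.5   # tonotopic label from a separate localizer

# Simulated betas consistent with the reported effect: imagery boosts the matching region.
beta_imagine_high = np.where(is_high_freq_voxel, rng.normal(0.8, 0.3, n_vox),
                             rng.normal(0.3, 0.3, n_vox))
beta_imagine_low = np.where(is_high_freq_voxel, rng.normal(0.3, 0.3, n_vox),
                            rng.normal(0.8, 0.3, n_vox))

def tonotopic_contrast(betas, high_mask):
    """Mean beta in high-frequency voxels minus mean beta in low-frequency voxels."""
    return betas[high_mask].mean() - betas[~high_mask].mean()

print(f"imagining high-pitched melodies: {tonotopic_contrast(beta_imagine_high, is_high_freq_voxel):+.2f}")
print(f"imagining low-pitched melodies:  {tonotopic_contrast(beta_imagine_low, is_high_freq_voxel):+.2f}")
```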


2000 · Vol 84 (3) · pp. 1453–1463
Author(s): Jos J. Eggermont

Responses of single units and multi-units in primary auditory cortex were recorded for gap-in-noise stimuli with different durations of the leading noise burst. Both firing-rate and inter-spike-interval representations were evaluated. The minimum detectable gap decreased in exponential fashion with the duration of the leading burst to reach an asymptote for durations of 100 ms. Despite the fact that the leading and trailing noise bursts had the same frequency content, the dependence on leading-burst duration was correlated with psychophysical estimates of across-frequency-channel gap thresholds (i.e., leading and trailing bursts with different frequency content) in humans. The duration of the leading burst plus that of the gap was represented in the all-order inter-spike-interval histograms of cortical neurons. The recovery functions of cortical neurons could be modeled on the basis of fast synaptic depression and the after-hyperpolarization produced by the onset response to the leading noise burst. This suggests that the minimum gap representation in the firing pattern of neurons in primary auditory cortex, and minimum gap detection in behavioral tasks, are largely determined by properties intrinsic to those cells or, potentially, to subcortical cells.
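To make the shape of the reported relationship concrete, the sketch below fits an exponential-decay-to-asymptote curve to gap thresholds as a function of leading-burst duration. The threshold values and the fitted parameters are illustrative assumptions, not the paper's data.

```python
# Minimal sketch: minimum detectable gap decreasing roughly exponentially with
# leading-burst duration toward an asymptote, fitted to hypothetical thresholds.
import numpy as np
from scipy.optimize import curve_fit

lead_ms = np.array([5.0, 10.0, 20.0, 50.0, 100.0, 200.0, 500.0])  # leading-burst duration
gap_thresh_ms = np.array([40.0, 25.0, 15.0, 8.0, 5.0, 4.5, 4.2])  # hypothetical gap thresholds

def exp_decay(t, asymptote, amplitude, tau):
    """Gap threshold = asymptote + amplitude * exp(-t / tau)."""
    return asymptote + amplitude * np.exp(-t / tau)

(asymptote, amplitude, tau), _ = curve_fit(exp_decay, lead_ms, gap_thresh_ms,
                                           p0=(4.0, 40.0, 20.0))
print(f"asymptotic gap threshold ~{asymptote:.1f} ms, recovery time constant ~{tau:.0f} ms")
```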


2009 · Vol 102 (3) · pp. 1606–1622
Author(s): Paweł Kuśmierek, Josef P. Rauschecker

Responses of neural units in two areas of the medial auditory belt (middle medial area [MM] and rostral medial area [RM]) were tested with tones, noise bursts, monkey calls (MC), and environmental sounds (ES) in microelectrode recordings from two alert rhesus monkeys. For comparison, recordings were also performed in two core areas (primary auditory area [A1] and rostral area [R]) of the auditory cortex. All four fields showed cochleotopic organization, with best (center) frequency [BF(c)] gradients running in opposite directions in A1 and MM compared with R and RM. The medial belt, located medially to the core areas, was characterized by a stronger preference for band-pass noise than for pure tones. Response latencies were shorter in the two more posterior (middle) areas, MM and A1, than in the two rostral areas, R and RM, reaching values as low as 6 ms for high BF(c) in MM and A1, and depended strongly on BF(c). The medial belt areas exhibited higher selectivity to all stimuli, in particular to noise bursts, than the core areas. Increased selectivity to tones and noise bursts was also found in the anterior fields; the opposite was true for highly temporally modulated ES. Analysis of the structure of neural responses revealed that neurons were driven by low-level acoustic features in all fields. Thus, medial belt areas RM and MM have to be considered early stages of auditory cortical processing. The anteroposterior difference in temporal processing indices suggests that R and RM may belong to a different hierarchical level or a different computational network than A1 and MM.
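The sketch below shows, on synthetic firing rates, one common way such a noise-versus-tone preference could be quantified as a bounded index. The index formula, unit counts, and all response values are assumptions for illustration, not the authors' analysis.

```python
# Sketch with synthetic responses (not the recorded units): a preference index
# contrasting best band-pass-noise and best pure-tone responses per unit.
import numpy as np

rng = np.random.default_rng(2)
n_units = 100
best_noise_rate = rng.gamma(2.0, 12.0, n_units)  # hypothetical best band-pass-noise response
best_tone_rate = rng.gamma(2.0, 10.0, n_units)   # hypothetical best pure-tone response

# Index in [-1, 1]: positive values indicate a preference for noise over tones.
pref_index = (best_noise_rate - best_tone_rate) / (best_noise_rate + best_tone_rate)
print(f"median noise-vs-tone preference index: {np.median(pref_index):+.2f}")
```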


2020 · Vol 117 (45) · pp. 28442–28451
Author(s): Monzilur Rahman, Ben D. B. Willmore, Andrew J. King, Nicol S. Harper

Sounds are processed by the ear and the central auditory pathway. These processing steps are biologically complex, and many aspects of the transformation from sound waveform to cortical response remain unclear. To understand this transformation, we combined models of the auditory periphery with various encoding models to predict auditory cortical responses to natural sounds. The cochlear models ranged from detailed biophysical simulations of the cochlea and auditory nerve to simple spectrogram-like approximations of the information processing in these structures. For three different stimulus sets, we tested the capacity of these models to predict the time course of single-unit neural responses recorded in ferret primary auditory cortex. We found that simple models based on a log-spaced spectrogram with approximately logarithmic compression performed similarly to the best biophysically detailed models of the auditory periphery, and did so more consistently across diverse natural and synthetic sounds. Furthermore, we demonstrated that including approximations of the three categories of auditory nerve fiber in these simple models can substantially improve prediction, particularly when combined with a network encoding model. Our findings imply that the properties of the auditory periphery and central pathway may together result in a simpler-than-expected functional transformation from ear to cortex. Thus, much of the detailed biological complexity seen in the auditory periphery does not appear to be important for understanding the cortical representation of sound.
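A minimal sketch of the kind of simple front end described above, a log-spaced spectrogram with approximately logarithmic compression, is given below. It is written from the abstract as an illustration; the function name, channel count, window length, and all other parameter choices are assumptions, not the authors' implementation.

```python
# Hedged sketch of a log-spaced, approximately log-compressed spectrogram front end.
import numpy as np

def log_spaced_spectrogram(wave, fs, n_fft=1024, hop=512, n_chans=32,
                           f_min=500.0, eps=1e-6):
    """Frame the waveform, take FFT power, pool into log-spaced frequency channels,
    then apply approximately logarithmic compression."""
    n_frames = 1 + (len(wave) - n_fft) // hop
    frames = np.stack([wave[i * hop: i * hop + n_fft] for i in range(n_frames)])
    power = np.abs(np.fft.rfft(frames * np.hanning(n_fft), axis=1)) ** 2
    freqs = np.fft.rfftfreq(n_fft, d=1.0 / fs)
    edges = np.geomspace(f_min, fs / 2, n_chans + 1)          # log-spaced channel edges
    cochleagram = np.stack([power[:, (freqs >= lo) & (freqs < hi)].sum(axis=1)
                            for lo, hi in zip(edges[:-1], edges[1:])], axis=1)
    return np.log(cochleagram + eps)                          # ~logarithmic compression

# Usage on a synthetic one-second sound (4-kHz tone in noise):
fs = 44100
t = np.arange(fs) / fs
wave = np.sin(2 * np.pi * 4000 * t) + 0.3 * np.random.default_rng(3).standard_normal(fs)
X = log_spaced_spectrogram(wave, fs)
print(X.shape)   # (time frames, 32 log-spaced channels)
```

In the framework the abstract describes, a representation of this sort would then be fed into an encoding model (e.g., a linear or network model) fitted to predict the time course of cortical responses.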


2013 · Vol 110 (9) · pp. 2140–2151
Author(s): Justin D. Yao, Peter Bremen, John C. Middlebrooks

The rat is a widely used species for study of the auditory system. Psychophysical results from rats have shown an inability to discriminate sound source locations within a lateral hemifield, despite fairly sharp near-midline acuity. We tested the hypothesis that those characteristics of the rat's sound localization psychophysics are evident in the spatial sensitivity of its cortical neurons. In addition, we sought quantitative descriptions of in vivo spatial sensitivity of cortical neurons that would support development of an in vitro experimental model to study cortical mechanisms of spatial hearing. We assessed the spatial sensitivity of single- and multiple-neuron responses in the primary auditory cortex (A1) of urethane-anesthetized rats. Free-field noise bursts were varied throughout 360° of azimuth in the horizontal plane at sound levels from 10 to 40 dB above neural thresholds. All neurons encountered in A1 displayed contralateral-hemifield spatial tuning in that they responded strongly to contralateral sound source locations, their responses cut off sharply for locations near the frontal midline, and they showed weak or no responses to ipsilateral sources. Spatial tuning was quite stable across a 30-dB range of sound levels. Consistent with rat psychophysical results, a linear discriminator analysis of spike counts exhibited high spatial acuity for near-midline sounds and poor discrimination for off-midline locations. Hemifield spatial tuning is the most common pattern across all mammals tested previously. The homogeneous population of neurons in rat area A1 will make an excellent system for study of the mechanisms underlying that pattern.
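The sketch below illustrates, with synthetic spike counts, the logic of a pairwise spike-count discriminator over adjacent azimuths: discrimination is best where the tuning function changes fastest, i.e., near the midline for hemifield-tuned neurons. The tuning curve, trial counts, and threshold classifier are simplified assumptions rather than the study's analysis code.

```python
# Minimal sketch of a spike-count-based discriminator for adjacent azimuth pairs,
# in the spirit of the linear-discriminator analysis above. All counts are synthetic.
import numpy as np

rng = np.random.default_rng(4)
azimuths = np.arange(-180, 180, 20)                 # 18 azimuths around the horizontal plane
n_trials = 20

# Hypothetical contralateral-hemifield tuning: strong responses for contra (negative)
# azimuths, with a sharp cutoff near the frontal midline.
mean_counts = 20.0 / (1.0 + np.exp(azimuths / 15.0))
counts = rng.poisson(mean_counts, size=(n_trials, azimuths.size))

def pair_pc(a, b):
    """Percent correct for telling two azimuths apart with one spike-count threshold."""
    thresh = 0.5 * (a.mean() + b.mean())
    lo, hi = (a, b) if a.mean() < b.mean() else (b, a)
    return 0.5 * ((lo < thresh).mean() + (hi >= thresh).mean())

pc = [pair_pc(counts[:, i], counts[:, i + 1]) for i in range(azimuths.size - 1)]
print(f"best pairwise discrimination near {azimuths[int(np.argmax(pc))]} deg azimuth")
```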


1998 · Vol 80 (5) · pp. 2743–2764
Author(s): Jos J. Eggermont

Eggermont, Jos J. Representation of spectral and temporal sound features in three cortical fields of the cat. Similarities outweigh differences. J. Neurophysiol. 80: 2743–2764, 1998. This study investigates the degree of similarity of three different auditory cortical areas with respect to the coding of periodic stimuli. Simultaneous single- and multiunit recordings in response to periodic stimuli were made from primary auditory cortex (AI), anterior auditory field (AAF), and secondary auditory cortex (AII) in the cat to address the following questions: Is there, within each cortical area, a difference in the temporal coding of periodic click trains, amplitude-modulated (AM) noise bursts, and AM tone bursts? Is there a difference in this coding between the three cortical fields? Is the coding based on the temporal modulation transfer function (tMTF) the same as that based on the all-order interspike-interval (ISI) histogram? Is the perceptual distinction between rhythm and roughness for AM stimuli related to a temporal versus spatial representation of AM frequency in auditory cortex? Are interarea differences in temporal response properties related to differences in frequency tuning? The results showed that: 1) AM stimuli produce much higher best modulation frequencies (BMFs) and limiting rates than periodic click trains. 2) For periodic click trains and AM noise, the BMFs and limiting rates were not significantly different across the three areas. However, for AM tones the BMFs and limiting rates were about a factor of 2 lower in AAF than in the other areas. 3) The representation of stimulus periodicity in ISIs resulted in significantly lower mean BMFs and limiting rates compared with those estimated from the tMTFs. The difference was relatively small for periodic click trains but quite large for both AM stimuli, especially in AI and AII. 4) Modulation frequencies <20 Hz were represented in the ISIs, suggesting that rhythm is coded in auditory cortex in a temporal fashion. 5) In general, only a modest interdependence of spectral and temporal response properties was found in AI and AII. The BMFs were correlated positively with characteristic frequency in AAF. The limiting rate was positively correlated with the frequency-tuning-curve bandwidth in AI and AII but not in AAF. Only in AAF was a correlation between BMF and minimum latency found. Thus, whereas differences were found in frequency-tuning-curve bandwidth and minimum response latencies among the three areas, the coding of periodic stimuli in these areas was fairly similar, with the exception of the very poor representation of AM tones in AII. This suggests a strong parallel processing organization in auditory cortex.
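The sketch below illustrates the two periodicity measures being compared: a temporal modulation transfer function (here approximated by vector strength across candidate modulation frequencies, with the BMF taken as its peak) and an all-order interspike-interval histogram. The spike train and stimulus parameters are synthetic assumptions for illustration only.

```python
# Sketch of a vector-strength-based tMTF (peak -> BMF) and an all-order ISI histogram,
# computed on a made-up spike train loosely locked to a 16-Hz modulation.
import numpy as np

rng = np.random.default_rng(5)

def vector_strength(spike_times_s, mod_freq_hz):
    """Phase locking of spikes to one modulation frequency (0 = none, 1 = perfect)."""
    phases = 2.0 * np.pi * mod_freq_hz * spike_times_s
    return np.abs(np.mean(np.exp(1j * phases)))

def all_order_isi_hist(spike_times_s, max_interval_s=0.1, bin_s=0.001):
    """Histogram of all forward spike-time differences, not just first-order intervals."""
    diffs = spike_times_s[None, :] - spike_times_s[:, None]
    diffs = diffs[(diffs > 0) & (diffs <= max_interval_s)]
    return np.histogram(diffs, bins=np.arange(0.0, max_interval_s + bin_s, bin_s))[0]

# Hypothetical spike train: one jittered spike per cycle of a 16-Hz modulation, for 2 s.
spikes = np.sort(np.arange(0.0, 2.0, 1.0 / 16.0) + rng.normal(0.0, 0.003, 32))
tmtf = {f: vector_strength(spikes, f) for f in (4, 8, 16, 32, 64)}
bmf = max(tmtf, key=tmtf.get)
isi_peak_ms = int(np.argmax(all_order_isi_hist(spikes)))
print("tMTF:", {f: round(v, 2) for f, v in tmtf.items()}, "-> BMF:", bmf, "Hz")
print("all-order ISI histogram peaks near", isi_peak_ms, "ms")
```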


2003 · Vol 89 (2) · pp. 1024–1038
Author(s): Richard A. Reale, Rick L. Jenison, John F. Brugge

Transient sounds were delivered from different directions in virtual acoustic space while recording from single neurons in primary auditory cortex (AI) of cats under general anesthesia. The intensity level of the sound source was varied parametrically to determine the operating characteristics of the spatial receptive field. The spatial receptive field was constructed from the onset latency of the response to a sound at each sampled direction. Spatial gradients of response latency composing a receptive field are due partially to a systematic co-dependence on sound-source direction and intensity level. Typically, at any given intensity level, the distribution of response latency within the receptive field was unimodal with a range of approximately 3–4 ms, although for some cells and some levels the spread could be as much as 20 ms or as little as 2 ms. Response latency, averaged across directions, differed among neurons at the same intensity level and among intensity levels for the same neuron. Generally, increases in intensity level resulted in decreases in the mean and variance of the latency distribution, which follows an inverse Gaussian distribution. Receptive-field models based on response latency are developed using multiple parameters (azimuth, elevation, intensity), validated with Monte Carlo simulation, and their spatial filtering is described using spherical harmonic analysis. Observations from an ensemble of modeled receptive fields are obtained by linking the inverse Gaussian density to the probabilistic inverse problem of estimating sound-source direction and intensity. Upper bounds on acuity are derived from the ensemble using Fisher information, and the predicted patterns of estimation errors are related to psychophysical performance.
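As an illustration of the latency model above, the sketch below simulates inverse-Gaussian (Wald) distributed onset latencies and recovers the distribution's parameters with the standard closed-form maximum-likelihood estimates. The fixed transmission delay and all parameter values are assumptions, not fitted data from the study.

```python
# Sketch: onset latencies treated as inverse-Gaussian (Wald) distributed; parameters
# recovered with the closed-form MLE. Simulated values only.
import numpy as np

rng = np.random.default_rng(6)
fixed_delay_ms = 12.0                                            # assumed fixed delay
latencies_ms = fixed_delay_ms + rng.wald(mean=3.0, scale=30.0, size=200)  # hypothetical data

def invgauss_mle(x):
    """Closed-form MLE for an inverse Gaussian: mean mu and shape lam (variance = mu**3 / lam)."""
    mu = x.mean()
    lam = x.size / np.sum(1.0 / x - 1.0 / mu)
    return mu, lam

mu, lam = invgauss_mle(latencies_ms - fixed_delay_ms)
print(f"mu = {mu:.2f} ms, lambda = {lam:.1f}, implied variance = {mu ** 3 / lam:.2f} ms^2")
```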


1998 · Vol 96 (1-2) · pp. 87–105
Author(s): Manuel Martín-Loeches, Berenice Valdés, Gregorio Gómez-Jarabo, Francisco J. Rubia

1997 · Vol 181 (6) · pp. 615–633
Author(s): J. R. Mendelson, C. E. Schreiner, M. L. Sutter
