Spectro-Temporal Response Field Characterization With Dynamic Ripples in Ferret Primary Auditory Cortex

2001 ◽  
Vol 85 (3) ◽  
pp. 1220-1234 ◽  
Author(s):  
Didier A. Depireux ◽  
Jonathan Z. Simon ◽  
David J. Klein ◽  
Shihab A. Shamma

To understand the neural representation of broadband, dynamic sounds in primary auditory cortex (AI), we characterize responses using the spectro-temporal response field (STRF). The STRF describes, predicts, and fully characterizes the linear dynamics of neurons in response to sounds with rich spectro-temporal envelopes. It is computed from the responses to elementary “ripples,” a family of sounds with drifting sinusoidal spectral envelopes. The collection of responses to all elementary ripples is the spectro-temporal transfer function. The complex spectro-temporal envelope of any broadband, dynamic sound can be expressed as the linear sum of individual ripples. Previous experiments using ripples with downward drifting spectra suggested that the transfer function is separable, i.e., it is reducible into a product of purely temporal and purely spectral functions. Here we measure the responses to upward and downward drifting ripples, assuming separability within each direction, to determine whether the total bidirectional transfer function is fully separable. In general, the combined transfer function for the two directions is not symmetric, and hence units in AI are not, in general, fully separable. Consequently, many AI units have complex response properties such as sensitivity to direction of motion, though most inseparable units are not strongly directionally selective. We show that for most neurons, the lack of full separability stems from differences between the upward and downward spectral cross-sections but not from the temporal cross-sections; this places strong constraints on the neural inputs of these AI units.
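
Separability, as used here, means the transfer function factors into an outer product of one purely temporal and one purely spectral function. A standard way to quantify this (not necessarily the exact metric used in the paper) is to measure how much of the transfer function's power is captured by its leading singular value. Below is a minimal Python sketch of such a rank-1 separability index; the ripple velocities, densities, and cross-sections are hypothetical placeholders.

```python
# A minimal sketch (not the authors' analysis code): testing whether a measured
# bidirectional ripple transfer function is fully separable, i.e. expressible as
# an outer product of one purely temporal and one purely spectral function.

import numpy as np

def separability_index(T):
    """Fraction of power captured by the best rank-1 (separable) approximation.

    T : 2-D complex array, rows indexed by ripple velocity w (Hz, both signs
        for upward/downward drift), columns by ripple density Omega (cyc/oct).
    Returns a value in (0, 1]; 1 means T(w, Omega) = F(w) * G(Omega) exactly.
    """
    s = np.linalg.svd(T, compute_uv=False)      # singular values, descending
    return (s[0] ** 2) / np.sum(s ** 2)

# Example with a synthetic, fully separable transfer function (placeholder values):
w = np.linspace(-24, 24, 13)                         # ripple velocities (Hz)
omega = np.linspace(0.2, 1.4, 7)                     # ripple densities (cyc/oct)
F = np.exp(-np.abs(w) / 12) * np.exp(1j * 0.1 * w)   # temporal cross-section
G = np.exp(-omega)                                   # spectral cross-section
T_sep = np.outer(F, G)
print(separability_index(T_sep))                     # ~1.0 (fully separable)
```

An index well below 1 for the combined upward/downward transfer function would indicate the kind of full inseparability the abstract describes.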

2007 ◽  
Vol 97 (1) ◽  
pp. 144-158 ◽  
Author(s):  
Boris Gourévitch ◽  
Jos J. Eggermont

This study describes the neural representation of cat vocalizations, both natural and altered with respect to carrier and envelope, as well as time-reversed, in four different areas of the auditory cortex. Multiunit activity recorded in primary auditory cortex (AI) of anesthetized cats mainly occurred at onsets (<200-ms latency) and at subsequent major peaks of the vocalization envelope and was significantly inhibited during the stationary course of the stimuli. The first 200 ms of processing appears crucial for discrimination of a vocalization in AI. The dorsal and ventral parts of AI appear to have different roles in coding vocalizations. The dorsal part potentially discriminated carrier-altered meows, whereas the ventral part showed differences primarily in its response to natural and time-reversed meows. In the posterior auditory field, the different temporal response types of neurons, as determined by their poststimulus time histograms, showed discrimination for carrier alterations in the meow. Sustained-firing neurons in the posterior ectosylvian gyrus (EP) could discriminate temporal envelope alterations of the meow, as well as its time-reversed version, in part on the basis of neural synchrony. These findings suggest an important role of EP in detecting the information conveyed by alterations of vocalizations. Discrimination of the neural responses to different alterations of vocalizations could be based on firing rate, type of temporal response, or neural synchrony, suggesting that all of these are likely used simultaneously in the processing of natural and altered conspecific vocalizations.


eNeuro ◽  
2016 ◽  
Vol 3 (3) ◽  
pp. ENEURO.0071-16.2016 ◽  
Author(s):  
Yonatan I. Fishman ◽  
Christophe Micheyl ◽  
Mitchell Steinschneider

1998 ◽  
Vol 80 (5) ◽  
pp. 2743-2764 ◽  
Author(s):  
Jos J. Eggermont

Eggermont, Jos J. Representation of spectral and temporal sound features in three cortical fields of the cat. Similarities outweigh differences. J. Neurophysiol. 80: 2743–2764, 1998. This study investigates the degree of similarity of three different auditory cortical areas with respect to the coding of periodic stimuli. Simultaneous single- and multiunit recordings in response to periodic stimuli were made from primary auditory cortex (AI), anterior auditory field (AAF), and secondary auditory cortex (AII) in the cat to address the following questions: Is there, within each cortical area, a difference in the temporal coding of periodic click trains, amplitude-modulated (AM) noise bursts, and AM tone bursts? Is there a difference in this coding between the three cortical fields? Is the coding based on the temporal modulation transfer function (tMTF) and on the all-order interspike-interval (ISI) histogram the same? Is the perceptual distinction between rhythm and roughness for AM stimuli related to a temporal versus spatial representation of AM frequency in auditory cortex? Are interarea differences in temporal response properties related to differences in frequency tuning? The results showed that: 1) AM stimuli produce much higher best modulation frequencies (BMFs) and limiting rates than periodic click trains. 2) For periodic click trains and AM noise, the BMFs and limiting rates were not significantly different for the three areas. However, for AM tones the BMFs and limiting rates were about a factor of 2 lower in AAF compared with the other areas. 3) The representation of stimulus periodicity in ISIs resulted in significantly lower mean BMFs and limiting rates compared with those estimated from the tMTFs. The difference was relatively small for periodic click trains but quite large for both AM stimuli, especially in AI and AII. 4) Modulation frequencies <20 Hz were represented in the ISIs, suggesting that rhythm is coded in auditory cortex in temporal fashion. 5) In general, only a modest interdependence of spectral and temporal response properties was found in AI and AII. The BMFs were correlated positively with characteristic frequency in AAF. The limiting rate was positively correlated with the frequency-tuning curve bandwidth in AI and AII but not in AAF. Only in AAF was a correlation found between BMF and minimum latency. Thus whereas differences were found in the frequency-tuning curve bandwidth and minimum response latencies among the three areas, the coding of periodic stimuli in these areas was fairly similar with the exception of the very poor representation of AM tones in AII. This suggests a strong parallel processing organization in auditory cortex.
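
For readers unfamiliar with the two representations being compared, the sketch below illustrates, under simplifying assumptions, how a phase-locking (vector-strength) value at one modulation frequency, a common ingredient of a tMTF, and an all-order interspike-interval histogram can be computed from a list of spike times. The exact estimators, window lengths, and bin widths used in the study may differ, and the spike times shown are synthetic.

```python
# A minimal sketch (not the paper's analysis code) of a vector-strength-based
# tMTF point and an all-order interspike-interval histogram.

import numpy as np

def vector_strength(spike_times, mod_freq):
    """Phase locking of spikes to a modulation frequency (0 = none, 1 = perfect)."""
    phases = 2 * np.pi * mod_freq * np.asarray(spike_times)
    return np.abs(np.mean(np.exp(1j * phases)))

def all_order_isi_histogram(spike_times, max_interval=0.1, bin_width=0.001):
    """Histogram of all forward spike-time differences up to max_interval (s)."""
    t = np.sort(np.asarray(spike_times))
    diffs = t[:, None] - t[None, :]
    diffs = diffs[(diffs > 0) & (diffs <= max_interval)]
    bins = np.arange(0, max_interval + bin_width, bin_width)
    counts, _ = np.histogram(diffs, bins=bins)
    return counts, bins

# Synthetic example: spikes roughly locked to a 20-Hz amplitude modulation
rng = np.random.default_rng(0)
spikes = rng.choice(np.arange(0, 1, 0.05), size=40) + rng.normal(0, 0.002, 40)
print(vector_strength(spikes, 20.0))           # high value: strong phase locking
counts, bins = all_order_isi_histogram(spikes)  # peaks near multiples of 50 ms
```

Repeating the vector-strength calculation across modulation frequencies yields a tMTF, whereas the ISI histogram captures periodicity purely from intervals between spikes, which is why the two measures can give different BMFs and limiting rates.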


2014 ◽  
Vol 315 ◽  
pp. 1-9 ◽  
Author(s):  
James B. Fallon ◽  
Robert K. Shepherd ◽  
David A.X. Nayagam ◽  
Andrew K. Wise ◽  
Leon F. Heffer ◽  
...  

2021 ◽  
Author(s):  
Pilar Montes-Lourido ◽  
Manaswini Kar ◽  
Stephen V David ◽  
Srivatsun Sadagopan

Early in auditory processing, neural responses faithfully reflect acoustic input. At higher stages of auditory processing, however, neurons become selective for particular call types, eventually leading to specialized regions of cortex that preferentially process calls at the highest auditory processing stages. We previously proposed that an intermediate step in how non-selective responses are transformed into call-selective responses is the detection of informative call features. But how neural selectivity for informative call features emerges from non-selective inputs, whether feature selectivity gradually emerges over the processing hierarchy, and how stimulus information is represented in non-selective and feature-selective populations remain open questions. In this study, using unanesthetized guinea pigs, a highly vocal and social rodent, as an animal model, we characterized the neural representation of calls in three auditory processing stages: the thalamus (vMGB) and the thalamorecipient (L4) and superficial (L2/3) layers of primary auditory cortex (A1). We found that neurons in vMGB and A1 L4 did not exhibit call-selective responses and responded throughout the call durations. However, A1 L2/3 neurons showed high call selectivity, with about a third of neurons responding to only one or two call types. These A1 L2/3 neurons responded only to restricted portions of calls, suggesting that they were highly selective for call features. Receptive fields of these A1 L2/3 neurons showed complex spectrotemporal structures that could underlie their high call-feature selectivity. Information-theoretic analysis revealed that in A1 L4, stimulus information was distributed across the population and spread out over the call durations. In contrast, in A1 L2/3, individual neurons showed brief bursts of high stimulus-specific information and conveyed high levels of information per spike. These data demonstrate that a transformation in the neural representation of calls occurs between A1 L4 and A1 L2/3, leading to the emergence of a feature-based representation of calls in A1 L2/3. Our data thus suggest that observed cortical specializations for call processing emerge in A1, and set the stage for further mechanistic studies.
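
As one illustration of the kind of information-theoretic quantity described above, the sketch below estimates the mutual information between call identity and a neuron's spike count in a single time bin and expresses it per spike. This is a generic, bias-uncorrected estimator with hypothetical trial counts, not the authors' actual pipeline.

```python
# A minimal sketch (not the authors' pipeline) of estimating mutual information
# between call identity and a neuron's binned spike count, in bits and bits/spike.

import numpy as np

def mutual_information(counts_by_stim):
    """MI (bits) between stimulus identity and spike count.

    counts_by_stim : list of 1-D int arrays, one array of single-trial spike
                     counts per call type (equal stimulus probabilities and
                     equal trial counts assumed).
    """
    all_counts = np.concatenate(counts_by_stim)
    values = np.unique(all_counts)
    p_stim = 1.0 / len(counts_by_stim)
    p_r = np.array([np.mean(all_counts == v) for v in values])  # marginal P(r)
    mi = 0.0
    for trials in counts_by_stim:
        p_r_given_s = np.array([np.mean(trials == v) for v in values])
        nz = p_r_given_s > 0
        mi += p_stim * np.sum(p_r_given_s[nz] * np.log2(p_r_given_s[nz] / p_r[nz]))
    return mi

# Hypothetical example: two call types, one driving more spikes in this bin
counts_call_A = np.array([3, 4, 3, 5, 4, 3])
counts_call_B = np.array([0, 1, 0, 0, 1, 1])
mi_bits = mutual_information([counts_call_A, counts_call_B])
bits_per_spike = mi_bits / np.mean(np.concatenate([counts_call_A, counts_call_B]))
print(mi_bits, bits_per_spike)
```

Computing such a quantity bin by bin is one way to reveal the brief, high-information bursts in A1 L2/3 versus the temporally distributed information in A1 L4 that the abstract describes.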


Epilepsia ◽  
2005 ◽  
Vol 46 (2) ◽  
pp. 171-178 ◽  
Author(s):  
Pamela A. Valentine ◽  
G. Campbell Teskey ◽  
Jos J. Eggermont

2019 ◽  
Vol 121 (3) ◽  
pp. 785-798 ◽  
Author(s):  
Zhenling Zhao ◽  
Lanlan Ma ◽  
Yifei Wang ◽  
Ling Qin

Discriminating biologically relevant sounds is crucial for survival. The neurophysiological mechanisms that mediate this process must register both the reward significance and the physical parameters of acoustic stimuli. Previous experiments have revealed that the primary function of the auditory cortex (AC) is to provide a neural representation of the acoustic parameters of sound stimuli. However, how the brain associates acoustic signals with reward remains unresolved. The amygdala (AMY) and medial prefrontal cortex (mPFC) play key roles in emotion and learning, but it is unknown whether AMY and mPFC neurons are involved in sound discrimination or how their roles differ from those of AC neurons. To examine this, we recorded neural activity in the primary auditory cortex (A1), AMY, and mPFC of cats while they performed a Go/No-go task to discriminate sounds with different temporal patterns. We found that the activity of A1 neurons faithfully coded the temporal patterns of sound stimuli; this activity was not affected by the cats’ behavioral choices. The neural representation of stimulus patterns decreased in the AMY, but the neural activity increased when the cats were preparing to discriminate the sound stimuli and waiting for reward. Neural activity in the mPFC did not represent sound patterns, but it showed a clear association with reward and was modulated by the cats’ behavioral choices. Our results indicate that the initial auditory representation in A1 is gradually transformed into a stimulus–reward association in the AMY and mPFC to ultimately generate a behavioral choice.

NEW & NOTEWORTHY We compared the characteristics of neural activity in primary auditory cortex (A1), amygdala (AMY), and medial prefrontal cortex (mPFC) while cats performed the same auditory discrimination task. Our results show that there is a gradual transformation of the neural code from a faithful temporal representation of the stimulus in A1, which is insensitive to behavioral choices, to an association with the predicted reward in AMY and mPFC, which, to some extent, is correlated with the animal’s behavioral choice.

