Neural Representation of Concurrent Vowels in Macaque Primary Auditory Cortex

eNeuro ◽  
2016 ◽  
Vol 3 (3) ◽  
pp. ENEURO.0071-16.2016 ◽  
Author(s):  
Yonatan I. Fishman ◽  
Christophe Micheyl ◽  
Mitchell Steinschneider


2001 ◽  
Vol 85 (3) ◽  
pp. 1220-1234 ◽  
Author(s):  
Didier A. Depireux ◽  
Jonathan Z. Simon ◽  
David J. Klein ◽  
Shihab A. Shamma

To understand the neural representation of broadband, dynamic sounds in primary auditory cortex (AI), we characterize responses using the spectro-temporal response field (STRF). The STRF describes, predicts, and fully characterizes the linear dynamics of neurons in response to sounds with rich spectro-temporal envelopes. It is computed from the responses to elementary “ripples,” a family of sounds with drifting sinusoidal spectral envelopes. The collection of responses to all elementary ripples is the spectro-temporal transfer function. The complex spectro-temporal envelope of any broadband, dynamic sound can be expressed as the linear sum of individual ripples. Previous experiments using ripples with downward drifting spectra suggested that the transfer function is separable, i.e., it is reducible into a product of purely temporal and purely spectral functions. Here we measure the responses to upward and downward drifting ripples, assuming separability within each direction, to determine if the total bidirectional transfer function is fully separable. In general, the combined transfer function for the two directions is not symmetric, and hence units in AI are not, in general, fully separable. Consequently, many AI units have complex response properties such as sensitivity to direction of motion, though most inseparable units are not strongly directionally selective. We show that for most neurons, the lack of full separability stems from differences between the upward and downward spectral cross-sections but not from the temporal cross-sections; this places strong constraints on the neural inputs of these AI units.
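
Full separability has a compact linear-algebra reading: the transfer function, written as a matrix H over (frequency, time), factors into an outer product of a purely spectral and a purely temporal function exactly when H has rank 1. The sketch below illustrates this test with a singular-value "separability index"; the synthetic STRF and the energy-fraction metric are illustrative assumptions, not the authors' code.

```python
import numpy as np

# Illustrative transfer function on a (frequency x time) grid.
freqs = np.linspace(0.0, 1.0, 40)
times = np.linspace(0.0, 0.25, 100)
spectral = np.exp(-((freqs - 0.5) ** 2) / 0.02)                    # purely spectral factor
temporal = np.sin(2 * np.pi * 20 * times) * np.exp(-times / 0.05)  # purely temporal factor

H_separable = np.outer(spectral, temporal)  # rank 1 => fully separable
H_inseparable = H_separable + 0.5 * np.outer(np.roll(spectral, 10),
                                             np.roll(temporal, 30))

def separability_index(H):
    """Fraction of energy captured by the first singular-value pair.
    Equals 1.0 iff H factors as (spectral) x (temporal)."""
    s = np.linalg.svd(H, compute_uv=False)
    return s[0] ** 2 / np.sum(s ** 2)

print(f"separable:   {separability_index(H_separable):.3f}")    # ~1.000
print(f"inseparable: {separability_index(H_inseparable):.3f}")  # < 1.0
```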


2019 ◽  
Vol 121 (3) ◽  
pp. 785-798 ◽  
Author(s):  
Zhenling Zhao ◽  
Lanlan Ma ◽  
Yifei Wang ◽  
Ling Qin

Discriminating biologically relevant sounds is crucial for survival. The neurophysiological mechanisms that mediate this process must register both the reward significance and the physical parameters of acoustic stimuli. Previous experiments have revealed that the primary function of the auditory cortex (AC) is to provide a neural representation of the acoustic parameters of sound stimuli. However, how the brain associates acoustic signals with reward remains unresolved. The amygdala (AMY) and medial prefrontal cortex (mPFC) play key roles in emotion and learning, but it is unknown whether AMY and mPFC neurons are involved in sound discrimination or how the roles of AMY and mPFC neurons differ from those of AC neurons. To examine this, we recorded neural activity in the primary auditory cortex (A1), AMY, and mPFC of cats while they performed a Go/No-go task to discriminate sounds with different temporal patterns. We found that the activity of A1 neurons faithfully coded the temporal patterns of sound stimuli; this activity was not affected by the cats’ behavioral choices. The neural representation of stimulus patterns decreased in the AMY, but the neural activity increased when the cats were preparing to discriminate the sound stimuli and waiting for reward. Neural activity in the mPFC did not represent sound patterns, but it showed a clear association with reward and was modulated by the cats’ behavioral choices. Our results indicate that the initial auditory representation in A1 is gradually transformed into a stimulus–reward association in the AMY and mPFC to ultimately generate a behavioral choice. NEW & NOTEWORTHY We compared the characteristics of neural activity in the primary auditory cortex (A1), amygdala (AMY), and medial prefrontal cortex (mPFC) while cats performed the same auditory discrimination task. Our results show that there is a gradual transformation of the neural code from a faithful temporal representation of the stimulus in A1, which is insensitive to behavioral choices, to an association with the predicted reward in the AMY and mPFC, which, to some extent, is correlated with the animal’s behavioral choice.
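
A standard way to separate stimulus coding from choice coding in such data is an ROC analysis on single-trial spike counts: group trials by stimulus for a stimulus AUC, and, within a fixed stimulus, by the animal's choice for a "choice probability." A minimal sketch on synthetic trials (the rates, trial counts, and 80%-correct behavior are invented for illustration; this is not the authors' analysis pipeline):

```python
import numpy as np

rng = np.random.default_rng(0)

def auc(counts_a, counts_b):
    """Area under the ROC curve for two spike-count distributions.
    0.5 = indistinguishable; near 0 or 1 = strong discrimination."""
    a, b = np.asarray(counts_a), np.asarray(counts_b)
    greater = (b[:, None] > a[None, :]).mean()
    ties = (b[:, None] == a[None, :]).mean()
    return greater + 0.5 * ties

# Synthetic "A1-like" unit: firing follows the stimulus, not the choice.
n_trials = 400
stim = rng.integers(0, 2, n_trials)                   # 0 = No-go, 1 = Go sound
correct = rng.random(n_trials) < 0.8                  # 80% correct behavior
choice = np.where(correct, stim, 1 - stim)
counts = rng.poisson(10 + 8 * stim)                   # stimulus-driven rate

stim_auc = auc(counts[stim == 0], counts[stim == 1])  # high: codes the sound
go = stim == 1                                        # fix the stimulus, vary choice
choice_prob = auc(counts[go & (choice == 0)], counts[go & (choice == 1)])  # ~0.5
print(f"stimulus AUC: {stim_auc:.2f}, choice probability: {choice_prob:.2f}")
```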


2019 ◽  
Author(s):  
Ediz Sohoglu ◽  
Sukhbinder Kumar ◽  
Maria Chait ◽  
Timothy D. Griffiths

Using fMRI and multivariate pattern analysis, we determined whether acoustic features are represented by independent or integrated neural codes in human cortex. Male and female listeners heard band-pass noise varying simultaneously in spectral (frequency) and temporal (amplitude-modulation [AM] rate) features. In the superior temporal plane, changes in multivoxel activity due to frequency were largely invariant with respect to AM rate (and vice versa), consistent with an independent representation. In contrast, in posterior parietal cortex, neural representation was exclusively integrated and tuned to specific conjunctions of frequency and AM features. Direct between-region comparisons show that whereas independent coding of frequency and AM weakened with increasing levels of the hierarchy, integrated coding strengthened at the transition between non-core and parietal cortex. Our findings support the notion that primary auditory cortex can represent component acoustic features in an independent fashion and suggest a role for parietal cortex in feature integration and the structuring of acoustic input.

Significance statement: A major goal for neuroscience is discovering the sensory features to which the brain is tuned and how those features are integrated into cohesive perception. We used whole-brain human fMRI and a statistical modeling approach to quantify the extent to which sound features are represented separately or in an integrated fashion in cortical activity patterns. We show that frequency and AM rate, two acoustic features that are fundamental to characterizing biologically important sounds such as speech, are represented separately in primary auditory cortex but in an integrated fashion in parietal cortex. These findings suggest that representations in primary auditory cortex can be simpler than previously thought and also implicate a role for parietal cortex in integrating features for coherent perception.
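
The independence claim maps onto a cross-classification test: train a decoder to read out frequency from patterns measured at one AM rate, then test it on patterns measured at a different AM rate; generalization implies a frequency code that is invariant to AM (and symmetrically for AM across frequencies). A minimal sketch on synthetic "voxel" patterns (the generative model, dimensions, and effect sizes are assumptions for illustration):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n_vox, n_trials = 50, 80

# Toy independent code: frequency and AM rate modulate disjoint voxel subsets.
freq_axis = np.zeros(n_vox); freq_axis[:25] = 1.0   # voxels carrying frequency
am_axis = np.zeros(n_vox);   am_axis[25:] = 1.0     # voxels carrying AM rate

def patterns(freq_label, am_label):
    mean = 0.8 * freq_label * freq_axis + 0.8 * am_label * am_axis
    return mean + rng.normal(0.0, 1.0, (n_trials, n_vox))

# Train a frequency decoder at AM rate 0; test it at AM rate 1.
X_train = np.vstack([patterns(0, 0), patterns(1, 0)])
X_test  = np.vstack([patterns(0, 1), patterns(1, 1)])
y = np.repeat([0, 1], n_trials)

clf = LogisticRegression(max_iter=1000).fit(X_train, y)
print(f"cross-AM generalization: {clf.score(X_test, y):.2f}")  # high => independent code
```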


2018 ◽  
Author(s):  
Nikolas A. Francis ◽  
Diego Elgueda ◽  
Bernhard Englitz ◽  
Jonathan B. Fritz ◽  
Shihab A. Shamma

Rapid task-related plasticity is a neural correlate of selective attention in primary auditory cortex (A1). Top-down feedback from higher-order cortex may drive task-related plasticity in A1, characterized by enhanced neural representation of behaviorally meaningful sounds during auditory task performance. Since intracortical connectivity is greater within A1 layers 2/3 (L2/3) than in layers 4-6 (L4-6), we hypothesized that enhanced representation of behaviorally meaningful sounds might be greater in A1 L2/3 than L4-6. To test this hypothesis and study the laminar profile of task-related plasticity, we trained 2 ferrets to detect pure tones while we recorded laminar activity across a 1.8 mm depth in A1. In each experiment, we analyzed current-source densities (CSDs), high-gamma local field potentials (LFPs), and multi-unit spiking in response to identical acoustic stimuli during both passive listening and active task performance. We found that neural responses to auditory targets were enhanced during task performance, and target enhancement was greater in L2/3 than in L4-6. Spectrotemporal receptive fields (STRFs) computed from CSDs, high-gamma LFPs, and multi-unit spiking showed similar increases in auditory target selectivity, also greatest in L2/3. Our results suggest that activity within intracortical networks plays a key role in shaping the underlying neural mechanisms of selective attention.
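
Current-source density is conventionally estimated as the negative second spatial derivative of the laminar LFP across equally spaced contacts, which localizes current sinks and sources across depth. A minimal sketch on a synthetic laminar recording (the contact spacing, the smoothing-free second-difference estimator, and the toy sink profile are assumptions, not the authors' exact method):

```python
import numpy as np

# Synthetic laminar LFP: channels x time, spanning ~1.8 mm of cortical depth.
n_ch, n_t, spacing_mm = 24, 500, 0.075
t = np.linspace(0.0, 0.5, n_t)
depth_profile = np.exp(-((np.arange(n_ch) - 8) ** 2) / 8.0)  # localized mid-layer sink
lfp = -np.outer(depth_profile, np.exp(-((t - 0.05) ** 2) / 1e-4))

# CSD ~ -d2(phi)/dz2, here via a second difference along the depth axis
# (one channel is lost at each edge of the probe).
csd = -np.diff(lfp, n=2, axis=0) / spacing_mm ** 2
print(lfp.shape, csd.shape)  # (24, 500) -> (22, 500)
```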


2000 ◽  
Vol 83 (4) ◽  
pp. 1856-1863 ◽  
Author(s):  
Syed A. Chowdhury ◽  
Nobuo Suga

In a search phase of echolocation, big brown bats, Eptesicus fuscus, emit biosonar pulses at a rate of 10/s and listen to echoes. When a short acoustic stimulus was repetitively delivered at this rate, the reorganization of the frequency map of the primary auditory cortex took place at and around the neurons tuned to the frequency of the acoustic stimulus. Such reorganization became larger when the acoustic stimulus was paired with electrical stimulation of the cortical neurons tuned to the frequency of the acoustic stimulus. This reorganization was mainly due to the decrease in the best frequencies of the neurons that had best frequencies slightly higher than those of the electrically stimulated cortical neurons or the frequency of the acoustic stimulus. Neurons with best frequencies slightly lower than those of the acoustically and/or electrically stimulated neurons slightly increased their best frequencies. These changes resulted in the over-representation of the repetitively delivered acoustic stimulus. Because the over-representation resulted in under-representation of other frequencies, the changes increased the contrast of the neural representation of the acoustic stimulus. Best frequency shifts for over-representation were associated with sharpening of the frequency-tuning curves of 25% of the neurons studied. Because of the increases in both the contrast of neural representation and the sharpness of tuning, the over-representation of the acoustic stimulus is accompanied by an improvement in the analysis of the acoustic stimulus.
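
The reported reorganization amounts to tracking each neuron's best frequency, i.e., the peak of its frequency-tuning curve, before and after stimulation, and relating the sign of the shift to the stimulated frequency. A toy sketch (the Gaussian tuning shapes and the 2-kHz shift are invented purely for illustration):

```python
import numpy as np

freqs_khz = np.linspace(20.0, 80.0, 241)   # test frequencies
stim_freq = 50.0                           # frequency of the repeated/paired stimulus

def tuning_curve(bf_khz, bw_khz=4.0):
    """Idealized Gaussian frequency-tuning curve."""
    return np.exp(-((freqs_khz - bf_khz) ** 2) / (2 * bw_khz ** 2))

def best_frequency(curve):
    return freqs_khz[np.argmax(curve)]

# A neuron tuned slightly above the stimulus shifts its BF downward toward it,
# expanding the cortical representation of the stimulated frequency.
before = tuning_curve(54.0)
after = tuning_curve(52.0)                 # assumed 2-kHz downward shift
print(f"BF: {best_frequency(before):.1f} kHz -> {best_frequency(after):.1f} kHz "
      f"(stimulus at {stim_freq:.1f} kHz)")
```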


PLoS Biology ◽  
2021 ◽  
Vol 19 (6) ◽  
pp. e3001299
Author(s):  
Pilar Montes-Lourido ◽  
Manaswini Kar ◽  
Stephen V. David ◽  
Srivatsun Sadagopan

Early in auditory processing, neural responses faithfully reflect acoustic input. At higher stages of auditory processing, however, neurons become selective for particular call types, eventually leading to specialized regions of cortex that preferentially process calls at the highest auditory processing stages. We previously proposed that an intermediate step in how nonselective responses are transformed into call-selective responses is the detection of informative call features. But how neural selectivity for informative call features emerges from nonselective inputs, whether feature selectivity gradually emerges over the processing hierarchy, and how stimulus information is represented in nonselective and feature-selective populations remain open questions. In this study, using unanesthetized guinea pigs (GPs), a highly vocal and social rodent, as an animal model, we characterized the neural representation of calls in 3 auditory processing stages—the thalamus (ventral medial geniculate body, vMGB), and thalamorecipient (L4) and superficial layers (L2/3) of primary auditory cortex (A1). We found that neurons in vMGB and A1 L4 did not exhibit call-selective responses and responded throughout the call durations. However, A1 L2/3 neurons showed high call selectivity, with about a third of neurons responding to only 1 or 2 call types. These A1 L2/3 neurons only responded to restricted portions of calls, suggesting that they were highly selective for call features. Receptive fields of these A1 L2/3 neurons showed complex spectrotemporal structures that could underlie their high call feature selectivity. Information theoretic analysis revealed that in A1 L4, stimulus information was distributed over the population and was spread out over the call durations. In contrast, in A1 L2/3, individual neurons showed brief bursts of high stimulus-specific information and conveyed high levels of information per spike. These data demonstrate that a transformation in the neural representation of calls occurs between A1 L4 and A1 L2/3, leading to the emergence of a feature-based representation of calls in A1 L2/3. Our data thus suggest that observed cortical specializations for call processing emerge in A1 and set the stage for further mechanistic studies.
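
"Information per spike" in analyses like this is typically the Shannon mutual information between stimulus identity and spike count, divided by the mean count. A minimal plug-in estimate from empirical joint probabilities (the call set, Poisson responses, and binning are synthetic; the paper's exact estimator may differ, e.g., in bias correction):

```python
import numpy as np

rng = np.random.default_rng(2)
n_calls, n_trials = 4, 200

# Synthetic feature-selective unit: fires strongly to only one call type.
rates = np.array([8.0, 0.5, 0.5, 0.5])
counts = np.stack([rng.poisson(r, n_trials) for r in rates])   # calls x trials

def mutual_information_bits(counts):
    """Plug-in I(stimulus; spike count) from empirical joint probabilities."""
    n_bins = counts.max() + 1
    joint = np.stack([np.bincount(row, minlength=n_bins)
                      for row in counts]).astype(float)
    joint /= joint.sum()
    p_s = joint.sum(axis=1, keepdims=True)   # p(stimulus)
    p_r = joint.sum(axis=0, keepdims=True)   # p(spike count)
    nz = joint > 0
    return np.sum(joint[nz] * np.log2(joint[nz] / (p_s @ p_r)[nz]))

info = mutual_information_bits(counts)
print(f"{info:.2f} bits; {info / counts.mean():.2f} bits per spike")
```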


2007 ◽  
Vol 97 (1) ◽  
pp. 144-158 ◽  
Author(s):  
Boris Gourévitch ◽  
Jos J. Eggermont

This study shows the neural representation of cat vocalizations, natural and altered with respect to carrier and envelope, as well as time-reversed, in four different areas of the auditory cortex. Multiunit activity recorded in primary auditory cortex (AI) of anesthetized cats mainly occurred at onsets (<200-ms latency) and at subsequent major peaks of the vocalization envelope and was significantly inhibited during the stationary course of the stimuli. The first 200 ms of processing appears crucial for discrimination of a vocalization in AI. The dorsal and ventral parts of AI appear to have different roles in coding vocalizations. The dorsal part potentially discriminated carrier-altered meows, whereas the ventral part showed differences primarily in its response to natural and time-reversed meows. In the posterior auditory field, the different temporal response types of neurons, as determined by their poststimulus time histograms, showed discrimination for carrier alterations in the meow. Sustained-firing neurons in the posterior ectosylvian gyrus (EP) could discriminate, by neural synchrony among other cues, temporal envelope alterations of the meow and time reversal thereof. These findings suggest an important role of EP in the detection of information conveyed by alterations of vocalizations. Discrimination of the neural responses to different alterations of vocalizations could be based on firing rate, type of temporal response, or neural synchrony, suggesting that all of these are likely used simultaneously in the processing of natural and altered conspecific vocalizations.
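
Neural synchrony between simultaneously recorded units is commonly summarized by the peak of a cross-correlogram of binned spike trains. A small sketch on synthetic trains (the 1-ms bins, firing rates, 2-ms lag, and geometric-mean normalization are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(3)
n_bins = 10_000                                        # 1-ms bins, 10 s of data

# Two units with shared drive: B repeats half of A's spikes at a 2-ms lag.
a = (rng.random(n_bins) < 0.02).astype(float)          # ~20 spikes/s
shared = np.roll(a, 2) * (rng.random(n_bins) < 0.5)
b = np.clip(shared + (rng.random(n_bins) < 0.01), 0.0, 1.0)

def cross_correlogram(x, y, max_lag=20):
    """Coincidence counts of y relative to x at each lag, normalized by
    the geometric mean of the two spike counts."""
    lags = np.arange(-max_lag, max_lag + 1)
    cc = np.array([np.sum(x * np.roll(y, -lag)) for lag in lags])
    return lags, cc / np.sqrt(x.sum() * y.sum())

lags, cc = cross_correlogram(a, b)
print(f"peak at {lags[np.argmax(cc)]} ms lag")         # ~ +2 ms, matching the delay
```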

