Multivoxel codes for representing and integrating acoustic features in human cortex

2019 ◽  
Author(s):  
Ediz Sohoglu ◽  
Sukhbinder Kumar ◽  
Maria Chait ◽  
Timothy D. Griffiths

Abstract
Using fMRI and multivariate pattern analysis, we determined whether acoustic features are represented by independent or integrated neural codes in human cortex. Male and female listeners heard band-pass noise varying simultaneously in spectral (frequency) and temporal (amplitude-modulation [AM] rate) features. In the superior temporal plane, changes in multivoxel activity due to frequency were largely invariant with respect to AM rate (and vice versa), consistent with an independent representation. In contrast, in posterior parietal cortex, neural representation was exclusively integrated and tuned to specific conjunctions of frequency and AM features. Direct between-region comparisons showed that whereas independent coding of frequency and AM weakened with increasing levels of the hierarchy, integrated coding strengthened at the transition between non-core and parietal cortex. Our findings support the notion that primary auditory cortex can represent component acoustic features in an independent fashion and suggest a role for parietal cortex in feature integration and the structuring of acoustic input.

Significance statement
A major goal for neuroscience is discovering the sensory features to which the brain is tuned and how those features are integrated into cohesive perception. We used whole-brain human fMRI and a statistical modeling approach to quantify the extent to which sound features are represented separately or in an integrated fashion in cortical activity patterns. We show that frequency and AM rate, two acoustic features that are fundamental to characterizing biologically important sounds such as speech, are represented separately in primary auditory cortex but in an integrated fashion in parietal cortex. These findings suggest that representations in primary auditory cortex can be simpler than previously thought and also implicate a role for parietal cortex in integrating features for coherent perception.
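A minimal Python sketch of the cross-decoding logic behind the independence test described above: train a classifier to decode frequency at one AM rate and test it at the other. High transfer accuracy indicates AM-invariant (independent) frequency coding; chance-level transfer despite good within-condition decoding would indicate an integrated, conjunction-tuned code. The simulated data, trial counts, and informative-voxel layout are illustrative assumptions, not the authors' actual pipeline.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n_trials, n_voxels = 160, 200

# Simulated multivoxel patterns for a 2 (frequency) x 2 (AM rate) design.
freq = rng.integers(0, 2, n_trials)            # frequency label (low/high)
am = rng.integers(0, 2, n_trials)              # AM-rate label (slow/fast)
patterns = rng.normal(size=(n_trials, n_voxels))
patterns[:, :50] += freq[:, None]              # voxels carrying frequency info
patterns[:, 50:100] += am[:, None]             # voxels carrying AM info

# Independence test: train a frequency decoder at one AM rate,
# test it at the other. High transfer accuracy = AM-invariant coding.
train, test = am == 0, am == 1
clf = LogisticRegression(max_iter=1000).fit(patterns[train], freq[train])
print("cross-AM transfer accuracy:",
      accuracy_score(freq[test], clf.predict(patterns[test])))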

2022 ◽  
Author(s):  
Kaushik J Lakshminarasimhan ◽  
Eric Avila ◽  
Xaq Pitkow ◽  
Dora E Angelaki

Success in many real-world tasks depends on our ability to dynamically track hidden states of the world. To understand the underlying neural computations, we recorded brain activity in the posterior parietal cortex (PPC) of monkeys navigating by optic flow to a hidden target location within a virtual environment, without explicit position cues. In addition to sequential neural dynamics and strong interneuronal interactions, we found that the hidden state (the monkey's displacement from the goal) was encoded in single neurons and could be dynamically decoded from population activity. The decoded estimates predicted navigation performance on individual trials. Task manipulations that perturbed the world model induced substantial changes in neural interactions and modified the neural representation of the hidden state, while representations of sensory and motor variables remained stable. The findings were recapitulated by a task-optimized recurrent neural network model, suggesting that neural interactions in PPC embody the world model to consolidate information and track task-relevant hidden states.
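A hedged sketch of the population-decoding idea in this abstract: read out a hidden, continuously varying task variable (distance to goal) from firing rates with a cross-validated linear decoder. The data generation and the ridge readout are illustrative assumptions; the authors' actual decoder and preprocessing may differ.

import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n_bins, n_neurons = 500, 60

# Hidden state: distance to goal, drifting over time (simulated).
distance = np.cumsum(rng.normal(size=n_bins))

# Population rates: each neuron linearly tuned to distance, plus noise.
tuning = rng.normal(1.0, 0.3, n_neurons)
rates = distance[:, None] * tuning + rng.normal(size=(n_bins, n_neurons))

# Cross-validated linear readout of the hidden state from the population.
r2 = cross_val_score(Ridge(alpha=1.0), rates, distance, cv=5, scoring="r2")
print("decoded-state R^2 (5-fold CV):", r2.mean())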


eNeuro ◽  
2016 ◽  
Vol 3 (3) ◽  
pp. ENEURO.0071-16.2016 ◽  
Author(s):  
Yonatan I. Fishman ◽  
Christophe Micheyl ◽  
Mitchell Steinschneider

2021 ◽  
Author(s):  
Pilar Montes-Lourido ◽  
Manaswini Kar ◽  
Stephen V David ◽  
Srivatsun Sadagopan

Early in auditory processing, neural responses faithfully reflect acoustic input. At higher stages of auditory processing, however, neurons become selective for particular call types, eventually leading, at the highest processing stages, to specialized cortical regions that preferentially process calls. We previously proposed that an intermediate step in how non-selective responses are transformed into call-selective responses is the detection of informative call features. But how neural selectivity for informative call features emerges from non-selective inputs, whether feature selectivity gradually emerges over the processing hierarchy, and how stimulus information is represented in non-selective and feature-selective populations remain open questions. In this study, using unanesthetized guinea pigs, a highly vocal and social rodent, as an animal model, we characterized the neural representation of calls in three auditory processing stages: the thalamus (vMGB), and the thalamorecipient (L4) and superficial layers (L2/3) of primary auditory cortex (A1). We found that neurons in vMGB and A1 L4 did not exhibit call-selective responses and responded throughout the call durations. However, A1 L2/3 neurons showed high call selectivity, with about a third of neurons responding to only one or two call types. These A1 L2/3 neurons responded only to restricted portions of calls, suggesting that they were highly selective for call features. Receptive fields of these A1 L2/3 neurons showed complex spectrotemporal structures that could underlie their high call feature selectivity. Information-theoretic analysis revealed that, in A1 L4, stimulus information was distributed over the population and was spread out over the call durations. In contrast, in A1 L2/3, individual neurons showed brief bursts of high stimulus-specific information and conveyed high levels of information per spike. These data demonstrate that a transformation in the neural representation of calls occurs between A1 L4 and A1 L2/3, leading to the emergence of a feature-based representation of calls in A1 L2/3. Our data thus suggest that the observed cortical specializations for call processing emerge in A1, and they set the stage for further mechanistic studies.
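A brief sketch of the kind of information-theoretic quantity this abstract refers to: the mutual information between call identity and a neuron's spike count, normalized per spike. The simulated feature-selective neuron and the plug-in estimator are assumptions for illustration, not the authors' exact method.

import numpy as np
from sklearn.metrics import mutual_info_score

rng = np.random.default_rng(2)

# Simulated responses: 400 trials of 8 call types; the neuron fires
# extra spikes only for call type 3 (a feature-selective response).
calls = rng.integers(0, 8, 400)
counts = rng.poisson(1 + 2 * (calls == 3))

# Plug-in mutual information between call identity and spike count,
# converted from nats to bits, then normalized per spike.
mi_bits = mutual_info_score(calls, counts) / np.log(2)
print("MI:", round(mi_bits, 3), "bits;",
      round(mi_bits / counts.mean(), 3), "bits/spike")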


F1000Research ◽  
2018 ◽  
Vol 7 ◽  
pp. 142 ◽  
Author(s):  
Ayan Sengupta ◽  
Stefan Pollmann ◽  
Michael Hanke

Spatial filtering strategies, combined with multivariate decoding analysis of BOLD images, have been used to investigate the nature of the neural signal underlying the discriminability of brain activity patterns evoked by sensory stimulation -- primarily in the visual cortex. Reported evidence indicates that such signals are spatially broadband in nature and are not primarily composed of fine-grained activation patterns. However, it is unclear whether this is a general property of the BOLD signal or whether it is specific to the details of the employed analyses and stimuli. Here we analyzed publicly available, high-resolution 7T fMRI data on the BOLD response to musical genres in primary auditory cortex, matching the analysis of a previously conducted study on decoding visual orientation from V1. The results show that the pattern of decoding accuracies with respect to different types and levels of spatial filtering is comparable to that obtained from V1, despite considerable differences in the respective cortical circuitry.
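A compact sketch of the filtering-then-decoding logic: low-pass the activity patterns at several spatial widths and track decoding accuracy at each level. A fine-grained signal loses accuracy as the filter widens, whereas a spatially broadband signal stays decodable across a range of filter sizes. The 1-D "cortical sheet," the Gaussian filter, and the simulated genre signal are illustrative assumptions, not the study's analysis code.

import numpy as np
from scipy.ndimage import gaussian_filter1d
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
n_trials, n_voxels = 120, 300

# Simulated voxel patterns on a 1-D "cortical sheet": the genre label adds
# a fixed fine-grained spatial pattern on top of trial noise.
labels = rng.integers(0, 2, n_trials)
signal = rng.normal(0, 0.5, n_voxels)
patterns = rng.normal(size=(n_trials, n_voxels)) + labels[:, None] * signal

# Decode after spatial low-pass filtering at increasing widths (in voxels).
for sigma in (0, 1, 2, 4, 8):
    x = gaussian_filter1d(patterns, sigma, axis=1) if sigma else patterns
    acc = cross_val_score(LogisticRegression(max_iter=1000), x, labels, cv=5)
    print(f"sigma={sigma}: accuracy={acc.mean():.2f}")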


2019 ◽  
Vol 9 (1) ◽  
Author(s):  
A. Bosco ◽  
R. Breveglieri ◽  
M. Filippini ◽  
C. Galletti ◽  
P. Fattori

2016 ◽  
Vol 28 (1) ◽  
pp. 125-139 ◽  
Author(s):  
James E. Kragel ◽  
Sean M. Polyn

Neuroimaging studies of recognition memory have identified distinct patterns of cortical activity associated with two sets of cognitive processes: Recollective processes supporting retrieval of information specifying a probe item's original source are associated with the posterior hippocampus, ventral posterior parietal cortex, and medial pFC. Familiarity processes supporting the correct identification of previously studied probes (in the absence of a recollective response) are associated with activity in anterior medial temporal lobe (MTL) structures including the perirhinal cortex and anterior hippocampus, in addition to lateral prefrontal and dorsal posterior parietal cortex. Here, we address an open question in the cognitive neuroscientific literature: To what extent are these same neurocognitive processes engaged during an internally directed memory search task like free recall? We recorded fMRI activity while participants performed a series of free recall and source recognition trials, and we used a combination of univariate and multivariate analysis techniques to compare neural activation profiles across the two tasks. Univariate analyses showed that posterior MTL regions were commonly associated with recollective processes during source recognition and with free recall responses. Prefrontal and posterior parietal regions were commonly associated with familiarity processes and free recall responses, whereas anterior MTL regions were only associated with familiarity processes during recognition. In contrast with the univariate results, free recall activity patterns characterized using multivariate pattern analysis did not reliably match the neural patterns associated with recollective processes. However, these free recall patterns did reliably match patterns associated with familiarity processes, supporting theories of memory in which common cognitive mechanisms support both item recognition and free recall.
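A minimal illustration of the pattern-matching step described above: correlate a free-recall activity pattern against recognition-derived "recollection" and "familiarity" templates. The templates and patterns below are simulated (constructed to mirror the reported familiarity match); this sketches only the comparison statistic, not the authors' full multivariate procedure.

import numpy as np

rng = np.random.default_rng(4)
n_voxels = 150

# Simulated templates from the recognition task and one free-recall
# pattern built to resemble the familiarity template.
recollection = rng.normal(size=n_voxels)
familiarity = rng.normal(size=n_voxels)
recall = familiarity + 0.7 * rng.normal(size=n_voxels)

def pattern_match(a, b):
    # Pearson correlation as the pattern-similarity statistic.
    a, b = a - a.mean(), b - b.mean()
    return (a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))

print("recall vs recollection:", round(pattern_match(recall, recollection), 3))
print("recall vs familiarity: ", round(pattern_match(recall, familiarity), 3))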


2001 ◽  
Vol 85 (3) ◽  
pp. 1220-1234 ◽  
Author(s):  
Didier A. Depireux ◽  
Jonathan Z. Simon ◽  
David J. Klein ◽  
Shihab A. Shamma

To understand the neural representation of broadband, dynamic sounds in primary auditory cortex (AI), we characterize responses using the spectro-temporal response field (STRF). The STRF describes, predicts, and fully characterizes the linear dynamics of neurons in response to sounds with rich spectro-temporal envelopes. It is computed from the responses to elementary “ripples,” a family of sounds with drifting sinusoidal spectral envelopes. The collection of responses to all elementary ripples is the spectro-temporal transfer function. The complex spectro-temporal envelope of any broadband, dynamic sound can be expressed as the linear sum of individual ripples. Previous experiments using ripples with downward drifting spectra suggested that the transfer function is separable, i.e., it is reducible into a product of purely temporal and purely spectral functions. Here we measure the responses to upward and downward drifting ripples, assuming separability within each direction, to determine if the total bidirectional transfer function is fully separable. In general, the combined transfer function for two directions is not symmetric, and hence units in AI are not, in general, fully separable. Consequently, many AI units have complex response properties such as sensitivity to direction of motion, though most inseparable units are not strongly directionally selective. We show that for most neurons, the lack of full separability stems from differences between the upward and downward spectral cross-sections but not from the temporal cross-sections; this places strong constraints on the neural inputs of these AI units.
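A short sketch of the separability test implied here: a fully separable transfer function is a rank-1 matrix, STRF(t, f) = g(t)h(f), so the fraction of power captured by the first singular value of the measured matrix indexes separability. The synthetic STRF below is an assumption for illustration; the authors work from measured ripple responses.

import numpy as np

rng = np.random.default_rng(5)
t = np.linspace(0, 0.25, 50)                     # time (s)
f = np.linspace(0, 5, 40)                        # spectral axis (octaves)

# Synthetic separable STRF: outer product of temporal and spectral parts.
g = np.exp(-t / 0.05) * np.sin(2 * np.pi * 16 * t)   # temporal cross-section
h = np.exp(-((f - 2.5) ** 2) / 0.5)                  # spectral cross-section
strf = np.outer(g, h) + 0.05 * rng.normal(size=(t.size, f.size))

# Rank-1 test: share of power in the first singular value (1.0 = separable).
s = np.linalg.svd(strf, compute_uv=False)
print("separability index:", round(float(s[0] ** 2 / np.sum(s ** 2)), 3))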


2012 ◽  
Vol 24 (9) ◽  
pp. 1896-1907 ◽  
Author(s):  
I-Hui Hsieh ◽  
Paul Fillmore ◽  
Feng Rong ◽  
Gregory Hickok ◽  
Kourosh Saberi

Frequency modulation (FM) is an acoustic feature of nearly all complex sounds. Directional FM sweeps are especially pervasive in speech, music, animal vocalizations, and other natural sounds. Although the existence of FM-selective cells in the auditory cortex of animals has been documented, evidence in humans remains equivocal. Here we used multivariate pattern analysis to identify cortical selectivity for the direction of a multitone FM sweep. This method distinguishes one pattern of neural activity from another within the same ROI, even when the overall level of activity is similar, allowing for direct identification of FM-specialized networks. Standard contrast analysis showed that, despite robust activity in auditory cortex, no clusters of activity were associated with up versus down sweeps. Multivariate pattern classification, however, identified two brain regions as selective for FM direction: the right primary auditory cortex on the supratemporal plane and the left anterior region of the superior temporal gyrus. These findings are the first to directly demonstrate the existence of FM direction selectivity in the human auditory cortex.
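A toy demonstration of the methodological point made above: two conditions can be matched in mean ROI activity (no univariate contrast) while the fine-grained spatial pattern remains decodable. The data are simulated; the voxel counts and effect sizes are arbitrary assumptions.

import numpy as np
from scipy.stats import ttest_ind
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(6)
n, v = 100, 100

# A zero-mean spatial pattern: condition means over the ROI are matched.
pattern = rng.choice([-1.0, 1.0], v) * 0.4
pattern -= pattern.mean()                        # force exactly zero ROI mean
up = rng.normal(size=(n, v)) + pattern           # "up" sweep trials
down = rng.normal(size=(n, v)) - pattern         # "down" sweep trials

# Univariate contrast on mean ROI activity: no reliable difference.
print("t-test p on ROI means:", ttest_ind(up.mean(1), down.mean(1)).pvalue)

# Multivariate classifier on full patterns: well above chance.
x, y = np.vstack([up, down]), np.repeat([0, 1], n)
acc = cross_val_score(LogisticRegression(max_iter=1000), x, y, cv=5)
print("pattern decoding accuracy:", acc.mean())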


2010 ◽  
Vol 104 (6) ◽  
pp. 3494-3509 ◽  
Author(s):  
Barbara Heider ◽  
Anushree Karnik ◽  
Nirmala Ramalingam ◽  
Ralph M. Siegel

Visually guided hand movements in primates require an interconnected network of various cortical areas. Single-unit firing rates from area 7a and dorsal prelunate (DP) neurons of macaque posterior parietal cortex (PPC) were recorded during reaching movements to targets at variable locations and under different eye position conditions. In the eye position–varied task, the reach target was always foveated; thus eye position varied with reach target location. In the retinal-varied task, the monkey reached to targets at variable retinotopic locations while eye position was kept constant in the center. Spatial tuning was examined with respect to temporal (task epoch) and contextual (task condition) aspects, and response fields were compared. The analysis showed distinct tuning types. The majority of neurons changed their gain field tuning and retinotopic tuning between different phases of the task. Between the onset of visual stimulation and the preparatory phase (before the go signal), about half of the neurons altered their firing rate significantly. Spatial response fields during the preparation and initiation epochs were strongly influenced by the task condition (eye position–varied vs. retinal-varied), supporting a strong role of eye position during visually guided reaching. DP neurons, classically considered visual, showed reach-related modulation similar to that of 7a neurons. This study shows that both area 7a and DP are modulated during reaching behavior in primates. The various tuning types in both areas suggest distinct populations recruiting different circuits during visually guided reaching.
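A hedged sketch of a standard gain-field model consistent with this abstract's framing: Gaussian retinotopic tuning multiplicatively scaled by a planar function of eye position, so identical retinal stimulation yields different firing rates at different fixations. All parameter values are illustrative assumptions, not fits to the recorded neurons.

import numpy as np

def response(ret_x, ret_y, eye_x, eye_y):
    # Gaussian retinotopic tuning centered at (5, 0) deg, sigma = 4 deg.
    tuning = np.exp(-((ret_x - 5.0) ** 2 + ret_y ** 2) / (2 * 4.0 ** 2))
    # Planar eye-position gain field multiplying the retinal tuning.
    gain = 1.0 + 0.04 * eye_x - 0.02 * eye_y
    return 20.0 * tuning * gain                  # firing rate, spikes/s

# Identical retinal target, two fixation positions: the rate differs,
# which is the signature of an eye-position gain field.
print(response(5, 0, eye_x=-10, eye_y=0))        # 12.0 spikes/s
print(response(5, 0, eye_x=10, eye_y=0))         # 28.0 spikes/s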

