Parametric control of flexible timing through low-dimensional neural manifolds

2021 ◽  
Author(s):  
Manuel Beiran ◽  
Nicolas Meirhaeghe ◽  
Hansem Sohn ◽  
Mehrdad Jazayeri ◽  
Srdjan Ostojic

Biological brains possess an unparalleled ability to generalize adaptive behavioral responses from only a few examples. How neural processes enable this capacity to extrapolate is a fundamental open question. A prominent but underexplored hypothesis suggests that generalization is facilitated by a low-dimensional organization of collective neural activity. Here we tested this hypothesis in the framework of flexible timing tasks, where dynamics play a key role. Examining trained recurrent neural networks, we found that confining the dynamics to a low-dimensional subspace allowed tonic inputs to parametrically control the overall input-output transform and enabled smooth extrapolation to inputs well beyond the training range. Reverse-engineering and theoretical analyses demonstrated that this parametric control of extrapolation relies on a mechanism where tonic inputs modulate the dynamics along non-linear manifolds in activity space while preserving their geometry. Comparisons with neural data from behaving monkeys confirmed the geometric and dynamical signatures of this mechanism.
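As a minimal sketch of this mechanism, consider a one-dimensional ramp-to-threshold model in which a tonic input sets the speed of flow toward a threshold. The model, threshold, and parameter values are illustrative assumptions, not the paper's trained networks:

```python
def interval_from_tonic_input(u, theta=1.0, dt=0.001):
    """Integrate a one-dimensional ramp x' = u until x crosses theta.

    A toy stand-in for the mechanism in the abstract: a tonic input u
    rescales the speed of the dynamics along a fixed trajectory, so the
    produced interval varies parametrically (roughly t = theta / u).
    """
    x, t = 0.0, 0.0
    while x < theta:
        x += u * dt  # tonic input sets the flow speed
        t += dt
    return t

# Larger tonic input -> faster dynamics -> shorter produced interval;
# the relation extrapolates smoothly outside any "training" range of u.
ts = [interval_from_tonic_input(u) for u in (0.5, 1.0, 2.0)]
```

Because the input only rescales speed without changing the trajectory's geometry, outputs for unseen inputs fall on the same smooth parametric curve.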

2019 ◽  
Vol 10 (1) ◽  
Author(s):  
Aishwarya Parthasarathy ◽  
Cheng Tang ◽  
Roger Herikstad ◽  
Loong Fah Cheong ◽  
Shih-Cheng Yen ◽  
...  

Abstract Maintenance of working memory is thought to involve the activity of prefrontal neuronal populations with strong recurrent connections. However, it was recently shown that distractors evoke a morphing of the prefrontal population code, even when memories are maintained throughout the delay. How can a morphing code maintain time-invariant memory information? We hypothesized that dynamic prefrontal activity contains time-invariant memory information within a subspace of neural activity. Using an optimization algorithm, we found a low-dimensional subspace that contains time-invariant memory information. This information was reduced in trials where the animals made errors in the task, and was also found in periods of the trial not used to find the subspace. A bump attractor model replicated these properties, and provided predictions that were confirmed in the neural data. Our results suggest that the high-dimensional responses of prefrontal cortex contain subspaces where different types of information can be simultaneously encoded with minimal interference.
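The subspace idea can be sketched on synthetic data. Here the optimization algorithm from the abstract is replaced by a simple difference-of-time-averaged-means direction, and all dimensions and noise levels are made-up assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
n_neurons, n_time = 50, 40

# Synthetic population activity: a fixed memory pattern per condition
# plus a strong time-varying, condition-independent dynamic component
# (the "morphing" code).
mem = rng.normal(size=(2, n_neurons))        # two memoranda
dyn = rng.normal(size=(n_time, n_neurons))   # shared temporal dynamics
X = dyn[None] + mem[:, None, :] + 0.1 * rng.normal(size=(2, n_time, n_neurons))

# Stand-in for the optimization: the difference of time-averaged
# condition means defines a one-dimensional subspace carrying
# time-invariant memory information.
w = (X[0] - X[1]).mean(axis=0)
w /= np.linalg.norm(w)

proj = X @ w                                 # shape: (condition, time)
# Along w, the two memoranda stay separated at every time point even
# though the full population code changes over time.
separated = bool(np.all(proj[0] - proj[1] > 0) or np.all(proj[1] - proj[0] > 0))
```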


2021 ◽  
Author(s):  
Javier Orlandi ◽  
Mohammad Abdolrahmani ◽  
Ryo Aoki ◽  
Dmitry Lyamzin ◽  
Andrea Benucci

Abstract Choice information appears in the brain as distributed signals with top-down and bottom-up components that together support decision-making computations. In sensory and associative cortical regions, the presence of choice signals, their strength, and area specificity are known to be elusive and changeable, limiting a cohesive understanding of their computational significance. In this study, examining the mesoscale activity in mouse posterior cortex during a complex visual discrimination task, we found that broadly distributed choice signals defined a decision variable in a low-dimensional embedding space of multi-area activations, particularly along the ventral visual stream. The subspace they defined was near-orthogonal to concurrently represented sensory and motor-related activations, and it was modulated by task difficulty and contextually by the animals’ attention state. To mechanistically relate choice representations to decision-making computations, we trained recurrent neural networks with the animals’ choices and found an equivalent decision variable whose context-dependent dynamics agreed with those of the neural data. In conclusion, our results demonstrated an independent decision variable broadly represented in the posterior cortex, controlled by task features and cognitive demands. Its dynamics reflected decision computations, possibly linked to context-dependent feedback signals used for probabilistic-inference computations in variable animal-environment interactions.


2021 ◽  
Author(s):  
Corson N Areshenkoff ◽  
Daniel J Gale ◽  
Joe Y Nashed ◽  
Dominic Standage ◽  
John Randall Flanagan ◽  
...  

Humans vary greatly in their motor learning abilities, yet little is known about the neural mechanisms that underlie this variability. Recent neuroimaging and electrophysiological studies demonstrate that large-scale neural dynamics inhabit a low-dimensional subspace or manifold, and that learning is constrained by this intrinsic manifold architecture. Here we asked, using functional MRI, whether subject-level differences in neural excursion from manifold structure can explain differences in learning across participants. We had subjects perform a sensorimotor adaptation task in the MRI scanner on two consecutive days, allowing us to assess their learning performance across days, as well as continuously measure brain activity. We find that the overall neural excursion from manifold activity in both cognitive and sensorimotor brain networks is associated with differences in subjects' patterns of learning and relearning across days. These findings suggest that off-manifold activity provides an index of the relative engagement of different neural systems during learning, and that intersubject differences in patterns of learning and relearning across days are related to reconfiguration processes in cognitive and sensorimotor networks during learning.
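The notion of excursion from manifold structure can be sketched by treating the manifold as a principal subspace and measuring the off-manifold residual at each timepoint. The linear-manifold assumption and component count here are simplifications chosen for illustration:

```python
import numpy as np

def manifold_excursion(X, n_components=3):
    """Per-timepoint distance of neural activity from a low-dimensional
    manifold, estimated here as the top principal subspace of X
    (timepoints x channels). Excursion is the norm of the residual
    after projecting each timepoint onto that subspace.
    """
    Xc = X - X.mean(axis=0)
    # Principal axes from the SVD of the centered data.
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    P = Vt[:n_components]
    on_manifold = Xc @ P.T @ P        # projection onto the manifold
    return np.linalg.norm(Xc - on_manifold, axis=1)

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 20)) @ rng.normal(size=(20, 20))
exc = manifold_excursion(X, n_components=3)
```

A subject-level excursion index could then be, e.g., the mean of `exc` over a learning epoch, compared across participants.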


Author(s):  
Samuel Melton ◽  
Sharad Ramanathan

Abstract Motivation: Recent technological advances produce a wealth of high-dimensional descriptions of biological processes, yet extracting meaningful insight and mechanistic understanding from these data remains challenging. For example, in developmental biology, the dynamics of differentiation can now be mapped quantitatively using single-cell RNA sequencing, yet it is difficult to infer molecular regulators of developmental transitions. Here, we show that discovering informative features in the data is crucial for statistical analysis as well as making experimental predictions. Results: We identify features based on their ability to discriminate between clusters of the data points. We define a class of problems in which linear separability of clusters is hidden in a low-dimensional space. We propose an unsupervised method to identify the subset of features that define a low-dimensional subspace in which clustering can be conducted. This is achieved by averaging over discriminators trained on an ensemble of proposed cluster configurations. We then apply our method to single-cell RNA-seq data from mouse gastrulation, and identify 27 key transcription factors (out of 409 total), 18 of which are known to define cell states through their expression levels. In this inferred subspace, we find clear signatures of known cell types that eluded classification prior to discovery of the correct low-dimensional subspace. Availability and implementation: https://github.com/smelton/SMD. Supplementary information: Supplementary data are available at Bioinformatics online.
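The averaging-over-discriminators idea can be sketched as follows. The 2-means proposal generator and least-squares discriminator are stand-ins chosen for brevity, not the authors' exact method (their implementation lives at the linked repository):

```python
import numpy as np

def two_means(X, rng, n_iter=10):
    """Tiny 2-means clustering used to propose cluster configurations."""
    c = X[rng.choice(len(X), size=2, replace=False)].copy()
    labels = np.zeros(len(X), dtype=int)
    for _ in range(n_iter):
        d = np.linalg.norm(X[:, None, :] - c[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for k in (0, 1):
            if np.any(labels == k):
                c[k] = X[labels == k].mean(axis=0)
    return labels

def feature_scores(X, n_proposals=50, seed=0):
    """Score each feature by its average absolute weight in linear
    discriminators trained on an ensemble of proposed cluster
    configurations; high-scoring features define the subspace in
    which clustering can be conducted.
    """
    rng = np.random.default_rng(seed)
    Xc = X - X.mean(axis=0)
    scores = np.zeros(X.shape[1])
    for _ in range(n_proposals):
        y = two_means(Xc, rng).astype(float) * 2 - 1   # proposed split as +/-1
        w, *_ = np.linalg.lstsq(Xc, y, rcond=None)     # linear discriminator
        scores += np.abs(w)
    return scores / n_proposals

rng = np.random.default_rng(1)
# Feature 0 carries a hidden two-cluster structure; features 1-9 are noise.
membership = np.repeat([0, 1], 100)
X = rng.normal(size=(200, 10))
X[:, 0] += 6.0 * membership
scores = feature_scores(X)
```

On this toy data, the informative feature receives the highest average discriminator weight, mirroring how the method surfaces key transcription factors.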


1997 ◽  
Vol 78 (5) ◽  
pp. 2254-2268 ◽  
Author(s):  
Hisao Nishijo ◽  
Ralph Norgren

Nishijo, Hisao and Ralph Norgren. Parabrachial neural coding of taste stimuli in awake rats. J. Neurophysiol. 78: 2254–2268, 1997. In awake, behaving rats, the activity of 74 single neurons in the pontine parabrachial nucleus (PBN) was recorded in response to sapid stimulation by 15 chemicals. Of these, 44 taste cells were tested with all 15 stimuli. Based on their responsiveness to 4 standard stimuli, these neurons were categorized as follows: 23 NaCl-best, 15 sucrose-best, 5 citric acid–best, and 1 quinine HCl-best. Several forms of multivariate analyses indicated that the taste responses matched both the behavioral responses to and, less well, the chemical structure of, the sapid stimuli. A hierarchical cluster analysis of the neurons substantially confirmed the best-stimulus categorization, but separated the NaCl-best cells into those that responded more to Na+-containing salts and those that responded more to Cl−-containing salts. The cells that responded best to the Na+ moiety actually were somewhat more correlated with the sucrose-best cells than with those that responded to the Cl−-containing stimuli. Citric acid–best neurons and the lone quinine-best unit formed a single cluster of neurons that responded well to acids, as well as to NH4Cl and, to a lesser extent, NaNO3. A factor analysis of the neuronal response profiles revealed that three factors accounted for 78.8% of the variance in the sample. Similar analyses of the stimuli suggested that PBN neurons respond to four or five sets of stimuli related by their chemical makeup or by human psychophysical reports. The capacity of rats to make these discriminations has been documented by other behavioral studies in which rodents generalize across sapid chemicals within each of 5 stimulus categories. Furthermore, a simulation analysis of the neural data replicated behavioral results that used amiloride, a Na+ channel blocker, in which rats generalized NaCl to non-Na+, Cl− salts. 
Thus, across a variety of analyses, the activity of PBN taste neurons in awake rats tracks the animals' behavioral responses to a variety of chemical stimuli.


2020 ◽  
Vol 12 (18) ◽  
pp. 2979
Author(s):  
Le Sun ◽  
Chengxun He ◽  
Yuhui Zheng ◽  
Songze Tang

During signal sampling and digital imaging, hyperspectral images (HSI) inevitably suffer contamination by mixed noise. This degradation considerably reduces the fidelity and efficiency of subsequent applications. Recently, as a formidable tool for image processing, low-rank regularization has been widely extended to the restoration of HSI. Meanwhile, further exploration of the non-local self-similarity of low-rank images has proven useful in exploiting the spatial redundancy of HSI, and better preservation of spatial-spectral features is achieved under combined low-rank and non-local regularization. However, existing methods generally regularize the original space of the HSI; exploration of the intrinsic properties of a subspace, which leads to better denoising performance, is relatively rare. To address these challenges, a joint method of subspace low-rank learning and non-local 4-D transform filtering, named SLRL4D, is put forward for HSI restoration. Technically, the original HSI is projected into a low-dimensional subspace. Then, spectral and spatial correlations are explored simultaneously by imposing low-rank learning and non-local 4-D transform filtering on the subspace. An algorithm based on the alternating direction method of multipliers is designed to solve the formulated convex signal-noise isolation problem. Finally, experiments on multiple datasets are conducted to illustrate the accuracy and efficiency of SLRL4D.
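The subspace-projection step at the heart of such methods can be sketched as follows. The low-rank learning and non-local 4-D filtering that operate inside the subspace are replaced here by plain rank truncation, so this illustrates only the projection idea, not SLRL4D itself:

```python
import numpy as np

def subspace_project(Y, k=3):
    """Project a noisy HSI matrix (bands x pixels) onto a k-dimensional
    spectral subspace and reconstruct. In methods like SLRL4D, the
    low-dimensional coefficients Z would additionally be refined by
    low-rank learning and non-local filtering before back-projection.
    """
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    E = U[:, :k]          # orthogonal spectral basis
    Z = E.T @ Y           # low-dimensional representation
    return E @ Z          # back-projection to band space

rng = np.random.default_rng(3)
A = rng.normal(size=(30, 3)) @ rng.normal(size=(3, 1000))  # clean rank-3 HSI
Y = A + 0.1 * rng.normal(size=A.shape)                     # noisy observation
X_hat = subspace_project(Y, k=3)

# Projection removes the noise energy lying outside the spectral subspace.
err_noisy = np.linalg.norm(Y - A)
err_denoised = np.linalg.norm(X_hat - A)
```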


2006 ◽  
Vol 03 (01) ◽  
pp. 45-51
Author(s):  
YANWEI PANG ◽  
ZHENGKAI LIU ◽  
YUEFANG SUN

Subspace-based face recognition methods aim to find a low-dimensional subspace of face appearance embedded in a high-dimensional image space. Different methods differ in their motivations and objective functions. The objective function of the proposed method combines the ideas of linear Laplacian eigenmaps and linear discriminant analysis. The actual computation of the subspace reduces to a maximum eigenvalue problem. A major advantage of the proposed method over traditional methods is that it utilizes both the local manifold structure and the discriminant information of the training data. Experimental results on the AR face database demonstrate the effectiveness of the proposed method.
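A rough sketch of such a combined objective, with an LDA-style discriminant term and a Laplacian-eigenmaps-style locality term, is shown below. The weighting scheme and heat-kernel affinity are assumptions made for illustration, but as in the paper the subspace falls out of a largest-eigenvalue problem:

```python
import numpy as np

def combined_subspace(X, y, heat_sigma=4.0, alpha=0.5, dim=2):
    """Find a projection maximizing between-class scatter against a blend
    of within-class scatter (LDA) and a graph-Laplacian locality penalty
    (Laplacian eigenmaps), via an eigenvalue problem.
    """
    n, d = X.shape
    mu = X.mean(axis=0)
    Sb = np.zeros((d, d))                     # between-class scatter
    Sw = np.zeros((d, d))                     # within-class scatter
    for c in np.unique(y):
        Xc = X[y == c]
        mc = Xc.mean(axis=0)
        Sb += len(Xc) * np.outer(mc - mu, mc - mu)
        Sw += (Xc - mc).T @ (Xc - mc)
    # Heat-kernel affinity graph and its Laplacian (locality term).
    D2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=2)
    W = np.exp(-D2 / (2.0 * heat_sigma ** 2))
    L = np.diag(W.sum(axis=1)) - W
    denom = (1 - alpha) * Sw + alpha * (X.T @ L @ X) + 1e-6 * np.eye(d)
    # Maximum-eigenvalue problem for the combined criterion.
    vals, vecs = np.linalg.eig(np.linalg.solve(denom, Sb))
    order = np.argsort(-vals.real)
    return vecs[:, order[:dim]].real

rng = np.random.default_rng(4)
y = np.repeat([0, 1], 50)
X = rng.normal(size=(100, 5))
X[:, 2] += 5.0 * y                            # classes differ along axis 2
V = combined_subspace(X, y)
proj = X @ V[:, 0]
```

On this toy data, the leading direction separates the two classes while the Laplacian term discourages projections that tear apart nearby samples.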

