Cortical control of virtual self-motion using task-specific subspaces

Author(s):  
Karen E Schroeder ◽  
Sean M Perkins ◽  
Qi Wang ◽  
Mark M Churchland

Abstract: Brain-machine interfaces (BMIs) for reaching have enjoyed continued performance improvements, yet there remains significant need for BMIs that control other movement classes. The question of how to decode neural activity is inextricably linked with the intrinsic covariance structure of that activity, which may depend strongly upon movement class. Here, we develop a self-motion BMI based on cortical activity as monkeys cycle a hand-held pedal to progress along a virtual track. Unlike during reaching, there were no high-variance dimensions that directly correlated with to-be-decoded variables. Yet this challenge yielded an opportunity: we could decode a single variable – self-motion – by non-linearly leveraging structure that spanned many high-variance neural dimensions. Online BMI-control success rates approached those during manual control. Our results argue that decoding can and should be task-specific, and suggest a broad principle: even when the decoded output is low-dimensional, it can be beneficial to leverage a multi-dimensional high-variance subspace.
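The decoding strategy described above can be illustrated with a minimal sketch (not the authors' actual decoder): project population activity onto its high-variance principal components, then fit a nonlinear regressor from that multi-dimensional projection to a single self-motion variable. The arrays `rates` and `self_motion` below are hypothetical placeholders.

```python
# Minimal sketch (not the authors' decoder): decode a single self-motion
# variable by non-linearly combining projections onto many high-variance
# neural dimensions. `rates` (timepoints x neurons) and `self_motion`
# (timepoints,) are hypothetical placeholder data.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPRegressor

rates = np.random.rand(5000, 100)          # placeholder binned firing rates
self_motion = np.random.rand(5000)         # placeholder behavioural signal

# High-variance subspace: keep the top principal components.
pca = PCA(n_components=10).fit(rates)
z = pca.transform(rates)                   # (timepoints x 10) latent trajectory

# Nonlinear readout from the multi-dimensional subspace to one variable.
decoder = MLPRegressor(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
decoder.fit(z[:4000], self_motion[:4000])
predicted = decoder.predict(z[4000:])
```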

2021 ◽  
pp. JN-RM-2687-20
Author(s):  
Karen E Schroeder ◽  
Sean M Perkins ◽  
Qi Wang ◽  
Mark M Churchland

2013 ◽  
Vol 461 ◽  
pp. 565-569 ◽  
Author(s):  
Fang Wang ◽  
Kai Xu ◽  
Qiao Sheng Zhang ◽  
Yi Wen Wang ◽  
Xiao Xiang Zheng

Brain-machine interfaces (BMIs) decode the cortical neural spikes of paralyzed patients to control external devices for the purpose of movement restoration. Neuroplasticity induced by performing a relatively complex, multistep task helps improve the performance of a BMI system. Reinforcement learning (RL) allows the BMI system to interact with the environment and learn the task adaptively without a teacher signal, which is more appropriate for paralyzed patients. In this work, we propose to apply Q(λ)-learning to multistep goal-directed tasks using the user's neural activity. Neural data were recorded from M1 of a monkey manipulating a joystick in a center-out task. Compared with a supervised learning approach, significant BMI control was achieved, with correct directional decoding in 84.2% and 81% of the trials from naïve states. The results demonstrate that the BMI system was able to complete the task by interacting with the environment, indicating that RL-based methods have the potential to yield more natural BMI systems.
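A minimal sketch of the Q(λ) update rule named above, using accumulating eligibility traces over a tabular state-action space; the state and action discretisations here are hypothetical stand-ins, not the paper's neural-state encoding.

```python
# Minimal tabular Q(lambda) sketch with accumulating eligibility traces.
# States and actions are hypothetical discretisations (e.g. binned neural
# states and cursor directions), not the paper's actual encoding.
import numpy as np

n_states, n_actions = 16, 8
alpha, gamma, lam, epsilon = 0.1, 0.9, 0.8, 0.1
Q = np.zeros((n_states, n_actions))
E = np.zeros_like(Q)                      # eligibility traces

def step(state, action):
    """Placeholder environment: random next state, reward for action 0."""
    return np.random.randint(n_states), float(action == 0)

state = 0
for t in range(1000):
    action = (np.random.randint(n_actions) if np.random.rand() < epsilon
              else int(Q[state].argmax()))
    next_state, reward = step(state, action)
    td_error = reward + gamma * Q[next_state].max() - Q[state, action]
    E[state, action] += 1.0               # accumulate trace for the visited pair
    Q += alpha * td_error * E             # update all traced state-action pairs
    E *= gamma * lam                      # decay traces (Watkins' variant would also
                                          # zero traces after exploratory actions)
    state = next_state
```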


1993 ◽  
Vol 54 (1) ◽  
pp. 93-107 ◽  
Author(s):  
Fred H. Previc ◽  
Robert V. Kenyon ◽  
Erwin R. Boer ◽  
Beverly H. Johnson

2018 ◽  
Author(s):  
Stefano Recanatesi ◽  
Gabriel Koch Ocker ◽  
Michael A. Buice ◽  
Eric Shea-Brown

Abstract: The dimensionality of a network’s collective activity is of increasing interest in neuroscience. This is because dimensionality provides a compact measure of how coordinated network-wide activity is, in terms of the number of modes (or degrees of freedom) that it can independently explore. A low number of modes suggests a compressed low-dimensional neural code and reveals interpretable dynamics [1], while findings of high dimension may suggest flexible computations [2, 3]. Here, we address the fundamental question of how dimensionality is related to connectivity, in both autonomous and stimulus-driven networks. Working with a simple spiking network model, we derive three main findings. First, the dimensionality of global activity patterns can be strongly, and systematically, regulated by local connectivity structures. Second, the dimensionality is a better indicator than average correlations in determining how constrained neural activity is. Third, stimulus-evoked neural activity interacts systematically with neural connectivity patterns, leading to network responses of either greater or lesser dimensionality than the stimulus.

Author summary: New recording technologies are producing an amazing explosion of data on neural activity. These data reveal the simultaneous activity of hundreds or even thousands of neurons. In principle, the activity of these neurons could explore a vast space of possible patterns. This is what is meant by high-dimensional activity: the number of degrees of freedom (or “modes”) of multineuron activity is large, perhaps as large as the number of neurons themselves. In practice, estimates of dimensionality differ strongly from case to case, and do so in interesting ways across experiments, species, and brain areas. The outcome is important for much more than just accurately describing neural activity: findings of low dimension have been proposed to allow data compression, denoising, and easily readable neural codes, while findings of high dimension have been proposed as signatures of powerful and general computations. So what is it about a neural circuit that leads to one case or the other? Here, we derive a set of principles that inform how the connectivity of a spiking neural network determines the dimensionality of the activity that it produces. These show that, in some cases, highly localized features of connectivity have strong control over a network’s global dimensionality—an interesting finding in the context of, e.g., learning rules that occur locally. We also show how dimension can be much different than first meets the eye with typical “pairwise” measurements, and how stimuli and intrinsic connectivity interact in shaping the overall dimension of a network’s response.
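One common way to quantify the dimensionality discussed above is the participation ratio of the eigenvalue spectrum of the activity covariance; the sketch below applies that definition to synthetic data and is illustrative rather than a reproduction of the paper's analysis.

```python
# Sketch: estimate activity dimensionality with the participation ratio,
# PR = (sum_i lambda_i)^2 / sum_i lambda_i^2, where lambda_i are eigenvalues
# of the neuron-by-neuron covariance of activity. This is one common
# definition; the paper's exact measure may differ in detail.
import numpy as np

def participation_ratio(activity):
    """activity: (timepoints x neurons) array of firing rates or spike counts."""
    cov = np.cov(activity, rowvar=False)
    eigvals = np.linalg.eigvalsh(cov)
    eigvals = np.clip(eigvals, 0.0, None)        # guard against numerical negatives
    return eigvals.sum() ** 2 / (eigvals ** 2).sum()

# Example: 200 neurons whose activity is driven by only 5 shared latent modes.
latents = np.random.randn(10000, 5)
mixing = np.random.randn(5, 200)
activity = latents @ mixing + 0.1 * np.random.randn(10000, 200)
print(participation_ratio(activity))             # roughly 5, far below 200
```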


2021 ◽  
Vol 17 (2) ◽  
pp. e1008621
Author(s):  
Barbara Feulner ◽  
Claudia Clopath

Neural activity is often low-dimensional and dominated by only a few prominent neural covariation patterns. It has been hypothesised that these covariation patterns could form the building blocks used for fast and flexible motor control. Supporting this idea, recent experiments have shown that monkeys can learn to adapt their neural activity in motor cortex on a timescale of minutes, given that the change lies within the original low-dimensional subspace, also called the neural manifold. However, the neural mechanism underlying this within-manifold adaptation remains unknown. Here, we show in a computational model that modification of recurrent weights, driven by a learned feedback signal, can account for the observed behavioural difference between within- and outside-manifold learning. Our findings give a new perspective, showing that recurrent weight changes do not necessarily lead to a change in the neural manifold. On the contrary, successful learning is naturally constrained to a common subspace.
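A rough way to operationalise "within-manifold" in data, assuming the manifold is the span of the top principal components of baseline activity, is to measure how much of an activity change projects into that subspace; the sketch below is illustrative and is not the authors' network model.

```python
# Sketch: quantify whether a change in population activity stays within the
# original low-dimensional neural manifold, taken here to be the span of the
# top principal components of baseline activity. All data are hypothetical.
import numpy as np
from sklearn.decomposition import PCA

baseline = np.random.randn(2000, 50)             # pre-learning activity (time x neurons)

pca = PCA(n_components=8).fit(baseline)
basis = pca.components_                          # (8 x neurons) manifold basis

# Construct post-learning activity whose mean shift lies along the first PC.
shift = 2.0 * basis[0]
adapted = baseline + shift + 0.1 * np.random.randn(2000, 50)

delta = adapted.mean(axis=0) - baseline.mean(axis=0)   # mean activity change
within = basis.T @ (basis @ delta)               # component of the change inside the manifold
frac_within = np.linalg.norm(within) ** 2 / np.linalg.norm(delta) ** 2
print(f"fraction of activity change within the manifold: {frac_within:.2f}")  # near 1
```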


2016 ◽  
Author(s):  
Ming Bo Cai ◽  
Nicolas W. Schuck ◽  
Jonathan Pillow ◽  
Yael Niv

Abstract: In neuroscience, the similarity matrix of neural activity patterns in response to different sensory stimuli or under different cognitive states reflects the structure of neural representational space. Existing methods derive point estimates of neural activity patterns from noisy neural imaging data, and the similarity is calculated from these point estimates. We show that this approach translates structured noise from the estimated patterns into spurious bias structure in the resulting similarity matrix, which is especially severe when the signal-to-noise ratio is low and experimental conditions cannot be fully randomized in a cognitive task. We propose an alternative Bayesian framework for computing representational similarity in which we treat the covariance structure of neural activity patterns as a hyper-parameter in a generative model of the neural data, and directly estimate this covariance structure from imaging data while marginalizing over the unknown activity patterns. Converting the estimated covariance structure into a correlation matrix offers an unbiased estimate of neural representational similarity. Our method can also simultaneously estimate a signal-to-noise map that informs where the learned representational structure is supported more strongly, and the learned covariance matrix can be used as a structured prior to constrain Bayesian estimation of neural activity patterns.
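The bias described above can be reproduced in a toy simulation: when design regressors overlap in time, least-squares pattern estimates inherit correlated noise, so correlating those estimates yields off-diagonal structure even though the true patterns are uncorrelated. The sketch below is illustrative and is not the authors' Bayesian implementation.

```python
# Toy illustration (not the authors' method): estimating activity patterns
# from noisy data and then correlating the estimates introduces spurious
# similarity structure when the true patterns are uncorrelated.
import numpy as np

rng = np.random.default_rng(0)
n_voxels, n_conditions, n_timepoints = 200, 4, 60

true_patterns = rng.standard_normal((n_conditions, n_voxels))   # uncorrelated patterns

# Design with temporally overlapping (hence correlated) regressors.
design = np.zeros((n_timepoints, n_conditions))
for c in range(n_conditions):
    design[c * 12 : c * 12 + 24, c] = 1.0

data = design @ true_patterns + 5.0 * rng.standard_normal((n_timepoints, n_voxels))

# Point-estimate patterns by least squares, then correlate the estimates.
est_patterns, *_ = np.linalg.lstsq(design, data, rcond=None)
naive_similarity = np.corrcoef(est_patterns)
print(np.round(naive_similarity, 2))   # off-diagonal structure appears despite
                                       # uncorrelated true patterns
```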


2018 ◽  
Author(s):  
Ming Bo Cai ◽  
Nicolas W. Schuck ◽  
Jonathan W. Pillow ◽  
Yael Niv

Abstract: The activity of neural populations in the brains of humans and animals can exhibit vastly different spatial patterns when faced with different tasks or environmental stimuli. The degree of similarity between these neural activity patterns in response to different events is used to characterize the representational structure of cognitive states in a neural population. The dominant methods of investigating this similarity structure first estimate neural activity patterns from noisy neural imaging data using linear regression, and then examine the similarity between the estimated patterns. Here, we show that this approach introduces spurious bias structure in the resulting similarity matrix, in particular when applied to fMRI data. This problem is especially severe when the signal-to-noise ratio is low and in cases where experimental conditions cannot be fully randomized in a task. We propose Bayesian Representational Similarity Analysis (BRSA), an alternative method for computing representational similarity, in which we treat the covariance structure of neural activity patterns as a hyper-parameter in a generative model of the neural data. By marginalizing over the unknown activity patterns, we can directly estimate this covariance structure from imaging data. This method offers significant reductions in bias and allows estimation of neural representational similarity with previously unattained levels of precision at low signal-to-noise ratio. The probabilistic framework allows for jointly analyzing data from a group of participants. The method can also simultaneously estimate a signal-to-noise ratio map that shows where the learned representational structure is supported more strongly. Both this map and the learned covariance matrix can be used as a structured prior for maximum a posteriori estimation of neural activity patterns, which can be further used for fMRI decoding. We make our tool freely available in the Brain Imaging Analysis Kit (BrainIAK).

Author summary: We show the severity of the bias introduced when performing representational similarity analysis (RSA) based on neural activity patterns estimated within imaging runs. Our Bayesian RSA method significantly reduces the bias and can learn a shared representational structure across multiple participants. We also demonstrate its extension as a new multi-class decoding tool.


2020 ◽  
Author(s):  
Mohammad R. Rezaei ◽  
Alex E. Hadjinicolaou ◽  
Sydney S. Cash ◽  
Uri T. Eden ◽  
Ali Yousefi

Abstract: The Bayesian state-space neural encoder-decoder modeling framework is an established solution to reveal how changes in brain dynamics encode physiological covariates like movement or cognition. Although the framework is increasingly being applied to progress the field of neuroscience, its application to modeling high-dimensional neural data continues to be a challenge. Here, we propose a novel solution that avoids the complexity of encoder models that characterize high-dimensional data as a function of the underlying state processes. We build a discriminative model to estimate state processes as a function of current and previous observations of neural activity. We then develop the filter and parameter estimation solutions for this new class of state-space modeling framework called the “direct decoder” model. We apply the model to decode movement trajectories of a rat in a W-shaped maze from the ensemble spiking activity of place cells and achieve comparable performance to modern decoding solutions, without needing an encoding step in the model development. We further demonstrate how a dynamical auto-encoder can be built using the direct decoder model; here, the underlying state process links the high-dimensional neural activity to the behavioral readout. The dynamical auto-encoder can optimally estimate the low-dimensional dynamical manifold which represents the relationship between brain and behavior.
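A toy version of the "direct decoder" idea regresses the behavioural state directly on current and lagged neural observations rather than first fitting an encoding model of the spikes; the trajectory, tuning, and lag count below are hypothetical placeholders, and the paper's full filter and parameter-estimation machinery is not reproduced.

```python
# Toy discriminative decoder sketch (not the authors' full state-space model):
# regress the behavioural state on current and previous spike-count vectors.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(1)
n_time, n_cells, n_lags = 5000, 40, 5

position = np.cumsum(rng.standard_normal(n_time))            # placeholder 1-D trajectory
tuning = rng.standard_normal(n_cells)                         # placeholder tuning weights
spikes = rng.poisson(np.exp(0.05 * np.outer(position, tuning)).clip(max=20))

# Stack current and previous spike-count vectors as decoder features.
features = np.hstack([np.roll(spikes, lag, axis=0) for lag in range(n_lags)])
features, target = features[n_lags:], position[n_lags:]

split = 4000
decoder = Ridge(alpha=1.0).fit(features[:split], target[:split])
decoded = decoder.predict(features[split:])                   # decoded trajectory
```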


eLife ◽  
2014 ◽  
Vol 3 ◽  
Author(s):  
Yong Gu ◽  
Dora E Angelaki ◽  
Gregory C DeAngelis

Trial-by-trial covariations between neural activity and perceptual decisions (quantified by choice probability, CP) have been used to probe the contribution of sensory neurons to perceptual decisions. CPs are thought to be determined both by selective decoding of neural activity and by the structure of correlated noise among neurons, but the respective roles of these factors in creating CPs have been controversial. We used biologically constrained simulations to explore this issue, taking advantage of a peculiar pattern of CPs exhibited by multisensory neurons in area MSTd that represent self-motion. Although models that relied on correlated noise or selective decoding could both account for the peculiar pattern of CPs, predictions of the selective decoding model were substantially more consistent with various features of the neural and behavioral data. While correlated noise is essential to observe CPs, our findings suggest that selective decoding of neuronal signals also plays an important role.
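Choice probability itself is conventionally computed as the area under the ROC curve separating a neuron's responses grouped by the animal's choice, which equals the normalised Mann-Whitney U statistic; the sketch below illustrates that computation on synthetic responses and is not the paper's simulation code.

```python
# Sketch: choice probability (CP) as the area under the ROC curve separating
# a neuron's responses on trials grouped by the animal's choice. The
# normalised Mann-Whitney U statistic equals the ROC area.
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(2)
rates_pref = rng.normal(21.0, 5.0, size=120)    # responses on preferred-choice trials
rates_null = rng.normal(20.0, 5.0, size=110)    # responses on null-choice trials

u_stat, _ = mannwhitneyu(rates_pref, rates_null, alternative="two-sided")
cp = u_stat / (len(rates_pref) * len(rates_null))
print(f"choice probability: {cp:.3f}")          # 0.5 indicates no choice-related signal
```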


2019 ◽  
Author(s):  
Albert K. You ◽  
Bing Liu ◽  
Abhimanyu Singhal ◽  
Suraj Gowda ◽  
Helene Moorman ◽  
...  

Summary: One hallmark of natural motor control is the brain’s ability to adapt to perturbations ranging from temporary visual-motor rotations to paresis caused by stroke. These adaptations require modifications of learned neural patterns that can span the time-course of minutes to months. Previous work with brain-machine interfaces (BMIs) has shown that over learning, neurons consolidate firing activity onto low-dimensional neural subspaces, and additional studies have shown that neurons require longer timescales to adapt to task perturbations that require neural activity outside of these subspaces. However, it is unclear how the motor cortex adapts alongside task changes that do not require modifications of the existing neural subspace over learning. To answer this question, five nonhuman primates were used in three BMI experiments, which allowed us to track how specific populations of neurons changed firing patterns as task performance improved. In each experiment, neural activity was transformed into cursor kinematics using decoding algorithms that were periodically readapted based on natural arm movements or visual feedback. We found that decoder changes caused neurons to increase exploratory-like patterns on within-day timescales without hindering previously consolidated patterns, regardless of task performance. The flexible modulation of these exploratory patterns, in contrast to relatively stable consolidated activity, suggests a simultaneous exploration-exploitation strategy that adapts existing neural patterns during learning.
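The periodic decoder readaptation described above can be sketched, under loose assumptions, as refitting a linear neural-to-kinematics mapping at intervals from whatever supervisory signal is available; the placeholder data and ridge decoder below stand in for the experiments' actual closed-loop adaptation algorithms.

```python
# Sketch of periodic decoder readaptation: refit a linear mapping from neural
# activity to cursor velocity at intervals. Illustrative only; the spike counts
# and intended-velocity signal are hypothetical placeholders.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(3)
n_blocks, block_len, n_units = 10, 500, 60

decoder = Ridge(alpha=1.0)
for block in range(n_blocks):
    spikes = rng.poisson(3.0, size=(block_len, n_units))       # placeholder recordings
    intended_velocity = rng.standard_normal((block_len, 2))    # placeholder training signal
    decoder.fit(spikes, intended_velocity)                     # periodic readaptation
    cursor_velocity = decoder.predict(spikes)                  # readout used for control
```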

