Recurrent neural network models of multi-area computation underlying decision-making

2019 ◽  
Author(s):  
Michael Kleinman ◽  
Chandramouli Chandrasekaran ◽  
Jonathan C. Kao

Cognition emerges from coordinated computations across multiple brain areas. However, elucidating these computations within and across brain regions is challenging because intra- and inter-area connectivity are typically unknown. To study coordinated computation, we trained multi-area recurrent neural networks (RNNs) to discriminate the dominant color of a checkerboard and output decision variables reflecting a direction decision, a task previously used to investigate decision-related dynamics in the dorsal premotor cortex (PMd) of monkeys. We found that multi-area RNNs, trained with neurophysiological connectivity constraints and Dale’s law, recapitulated decision-related dynamics observed in PMd. The RNN solved the task through a dynamical mechanism in which the direction decision was computed and output, via precisely oriented dynamics, on an axis nearly orthogonal to the checkerboard color inputs. This orthogonal direction information was preferentially propagated through alignment with inter-area connections; color information, in contrast, was filtered out. These results suggest that cortex uses modular computation to generate minimal sufficient representations of task information. Finally, we used multi-area RNNs to produce experimentally testable hypotheses for computations that occur within and across multiple brain areas, enabling new insights into distributed computation in neural systems.
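The connectivity constraints described above can be made concrete with a small sketch. The code below is illustrative, not the authors' published model: it assumes three equal-sized areas, an 80% excitatory fraction, and sparse feedforward/feedback masks between adjacent areas only, then parameterizes a rate RNN whose weights obey Dale's law (all outgoing weights of a unit share that unit's sign).

```python
import numpy as np

rng = np.random.default_rng(0)
n_areas = 3
n_per = 30                 # assumed units per area
N = n_areas * n_per
frac_exc = 0.8             # assumed fraction of excitatory units

# Dale's law: each unit is excitatory (+1) or inhibitory (-1)
sign = np.where(rng.random(N) < frac_exc, 1.0, -1.0)
D = np.diag(sign)

# Block connectivity mask: dense within areas, sparse connections
# between adjacent areas only (an assumed constraint, in the spirit
# of the multi-area architecture described in the abstract)
M = np.zeros((N, N))
for a in range(n_areas):
    s = slice(a * n_per, (a + 1) * n_per)
    M[s, s] = 1.0                                    # within-area
    if a + 1 < n_areas:
        t = slice((a + 1) * n_per, (a + 2) * n_per)
        M[t, s] = rng.random((n_per, n_per)) < 0.1   # feedforward
        M[s, t] = rng.random((n_per, n_per)) < 0.05  # feedback

# Column j inherits the sign of presynaptic unit j, enforcing Dale's law
W_raw = rng.standard_normal((N, N)) / np.sqrt(N)
W = (np.abs(W_raw) * M) @ D

def rnn_step(x, u, W_in, dt=0.1, tau=1.0):
    """One Euler step of a rate RNN: tau * dx/dt = -x + W r + W_in u."""
    r = np.tanh(x)
    return x + (dt / tau) * (-x + W @ r + W_in @ u)
```

During training, only the magnitudes would be optimized while the sign matrix `D` and mask `M` stay fixed, so the learned solution cannot violate the anatomical constraints.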

2013 ◽  
Vol 2013 ◽  
pp. 1-18 ◽  
Author(s):  
Seth A. Herd ◽  
Kai A. Krueger ◽  
Trenton E. Kriete ◽  
Tsung-Ren Huang ◽  
Thomas E. Hazy ◽  
...  

We address strategic cognitive sequencing, the “outer loop” of human cognition: how the brain decides what cognitive process to apply at a given moment to solve complex, multistep cognitive tasks. We argue that this topic has been neglected, relative to its importance, for systematic reasons, but that recent work on how individual brain systems accomplish their computations has set the stage for productively addressing how brain regions coordinate over time to accomplish our most impressive thinking. We present four preliminary neural network models. The first addresses how the prefrontal cortex (PFC) and basal ganglia (BG) cooperate to perform trial-and-error learning of short sequences; the second, how several areas of PFC learn to predict likely reward, and how this contributes to the BG making decisions at the level of strategies. The third addresses how PFC, BG, parietal cortex, and hippocampus can work together to memorize sequences of cognitive actions from instruction (or “self-instruction”). The last shows how a constraint-satisfaction process can find useful plans: the PFC maintains current and goal states and associates from both to find a “bridging” state, an abstract plan. We discuss how these processes could work together to produce strategic cognitive sequencing and discuss future directions in this area.


2021 ◽  
Author(s):  
Weinan Sun ◽  
Madhu Advani ◽  
Nelson Spruston ◽  
Andrew Saxe ◽  
James E Fitzgerald

Our ability to remember the past is essential for guiding our future behavior. Psychological and neurobiological features of declarative memories are known to transform over time in a process known as systems consolidation. While many theories have sought to explain the time-varying role of hippocampal and neocortical brain areas, the computational principles that govern these transformations remain unclear. Here we propose a theory of systems consolidation in which hippocampal-cortical interactions serve to optimize generalizations that guide future adaptive behavior. We use mathematical analysis of neural network models to characterize fundamental performance tradeoffs in systems consolidation, revealing that memory components should be organized according to their predictability. The theory shows that multiple interacting memory systems can outperform just one, normatively unifying diverse experimental observations and making novel experimental predictions. Our results suggest that the psychological taxonomy and neurobiological organization of declarative memories reflect a system optimized for behaving well in an uncertain future.


2015 ◽  
Vol 27 (10) ◽  
pp. 1981-1999 ◽  
Author(s):  
Lang Chen ◽  
Timothy T. Rogers

Theories about the neural bases of semantic knowledge range between two poles, one proposing that distinct brain regions are innately dedicated to different conceptual domains and the other suggesting that all concepts are encoded within a single network. Category-sensitive functional activations in the fusiform cortex of the congenitally blind have been taken to support the former view, but they also raise several puzzles. We use neural network models to assess a hypothesis that spans the two poles: the distinctive functional activation patterns reflect the base connectivity of a domain-general semantic network. Both similarities and differences between sighted and congenitally blind groups can emerge through learning in a neural network, but only in architectures adopting real anatomical constraints. Surprisingly, the same constraints suggest a novel account of a quite different phenomenon: the dyspraxia observed in patients with semantic impairments from anterior temporal pathology. From this work, we suggest that the cortical semantic network is wired not to encode knowledge of distinct conceptual domains but to promote learning about both conceptual and affordance structure in the environment.


2021 ◽  
Author(s):  
Miao Cao ◽  
Daniel Galvis ◽  
Simon Vogrin ◽  
William Woods ◽  
Sara Vogrin ◽  
...  

Modelling the interactions that arise from neural dynamics in seizure genesis is challenging but important in the effort to improve the success of epilepsy surgery. Dynamical network models developed from physiological evidence offer insights into the rapidly evolving brain networks of an epileptic seizure. A major limitation of previous studies in this field is their dependence on invasive cortical recordings, with constrained spatial sampling of the brain regions that might be involved in seizure dynamics. Here, we propose a novel approach, virtual intracranial electroencephalography (ViEEG), that combines non-invasive ictal magnetoencephalographic imaging (MEG), dynamical network models and a virtual resection technique. In this proof-of-concept study, we show that ViEEG signals reconstructed from MEG alone preserve the critical temporospatial characteristics needed for dynamical approaches to identify brain areas involved in seizure generation. Our findings demonstrate the advantages of non-invasive ViEEG over the current presurgical ‘gold standard’, intracranial electroencephalography (iEEG). Our approach promises to optimise the surgical strategy for patients with complex refractory focal epilepsy.


2020 ◽  
Author(s):  
Matthew G. Perich ◽  
Charlotte Arlt ◽  
Sofia Soares ◽  
Megan E. Young ◽  
Clayton P. Mosher ◽  
...  

Behavior arises from the coordinated activity of numerous anatomically and functionally distinct brain regions. Modern experimental tools allow unprecedented access to large neural populations spanning many interacting regions brain-wide. Yet understanding such large-scale datasets requires both scalable computational models to extract meaningful features of inter-region communication and principled theories to interpret those features. Here, we introduce Current-Based Decomposition (CURBD), an approach for inferring brain-wide interactions using data-constrained recurrent neural network models that directly reproduce experimentally obtained neural data. CURBD leverages the functional interactions inferred by such models to reveal directional currents between multiple brain regions. We first show that CURBD accurately isolates inter-region currents in simulated networks with known dynamics. We then apply CURBD to multi-region neural recordings obtained from mice during running, macaques during Pavlovian conditioning, and humans during memory retrieval, demonstrating the widespread applicability of CURBD for untangling the brain-wide interactions underlying behavior in a variety of neural datasets.
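The core decomposition step can be sketched in a few lines. Given the weight matrix of a trained, data-constrained RNN whose units are partitioned by brain region, the total input current to a target region splits exactly into per-source-region contributions. The function name and interface below are illustrative, not the published CURBD API:

```python
import numpy as np

def decompose_currents(W, rates, regions):
    """Split the total recurrent input to each target region into the
    contribution from every source region.

    W       : (N, N) trained recurrent weights, W[i, j] is j -> i
    rates   : (N, T) unit firing rates over T time steps
    regions : dict mapping region name -> array of unit indices
    Returns a dict {(source, target): (n_target, T) current array}.
    """
    currents = {}
    for tgt, ti in regions.items():
        for src, si in regions.items():
            # current into tgt units contributed by src units alone
            currents[(src, tgt)] = W[np.ix_(ti, si)] @ rates[si]
    return currents
```

Because the decomposition is linear, summing the currents from all source regions recovers the full recurrent drive to the target region, which makes the per-source terms directly interpretable as directed inter-region currents.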


2000 ◽  
Vol 12 (2) ◽  
pp. 433-450 ◽  
Author(s):  
Maxim Khaikine ◽  
Klaus Holthausen

We describe an analytical framework for neural systems that adapt their internal structure on the basis of subjective probabilities constructed by computation over randomly received input signals. The approach is principled, with the key property that it defines a probability density model under which the convergence of the adaptation process can be studied. In particular, the derived algorithm can be applied to approximation problems such as the estimation of probability densities or the recognition of regression functions, and these approximation algorithms can easily be extended to higher-dimensional cases. Certain neural network models (e.g., topological feature maps and associative networks) can be derived from our approach.


2021 ◽  
Author(s):  
Mengting Fang ◽  
Craig Poskanzer ◽  
Stefano Anzellotti

Cognitive tasks engage multiple brain regions, and studying how these regions interact is key to understanding the neural bases of cognition. Standard approaches to modeling the interactions between brain regions rely on univariate statistical dependence, but newly developed methods can capture multivariate dependence. Multivariate Pattern Dependence (MVPD) is a powerful and flexible approach that trains and tests multivariate models of the interactions between brain regions using independent data. In this article, we introduce PyMVPD: an open-source toolbox for Multivariate Pattern Dependence. The toolbox includes pre-implemented linear regression models and artificial neural network models of the interactions between regions, and it is designed to be easily customizable. We demonstrate example applications of PyMVPD using well-studied seed regions such as the fusiform face area (FFA) and the parahippocampal place area (PPA), and we compare the performance of different model architectures. Overall, artificial neural networks outperform linear regression. Importantly, the best-performing architecture is region-dependent: MVPD subdivides cortex into distinct, contiguous regions whose interactions with FFA and PPA are best captured by different models.
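The MVPD idea itself (fit a multivariate map from seed-region patterns to target-region patterns on one set of runs, then score it on an independent run) can be sketched without the toolbox. This is a plain NumPy illustration of the linear-regression variant, not the PyMVPD API:

```python
import numpy as np

def mvpd_linear(seed_train, target_train, seed_test, target_test):
    """Linear multivariate pattern dependence.

    Fits target = seed @ B (plus intercept) by least squares on
    training runs, then returns variance explained (R^2) per target
    voxel on an independent test run.
    Inputs are (timepoints, voxels) arrays.
    """
    X = np.column_stack([seed_train, np.ones(len(seed_train))])
    B, *_ = np.linalg.lstsq(X, target_train, rcond=None)
    Xt = np.column_stack([seed_test, np.ones(len(seed_test))])
    pred = Xt @ B
    ss_res = ((target_test - pred) ** 2).sum(axis=0)
    ss_tot = ((target_test - target_test.mean(axis=0)) ** 2).sum(axis=0)
    return 1.0 - ss_res / ss_tot
```

Using held-out runs for evaluation is what distinguishes MVPD from in-sample connectivity measures: the score reflects generalizable statistical dependence rather than overfit correlation. Swapping the linear map for a small neural network gives the nonlinear variants compared in the article.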


2019 ◽  
Vol 116 (43) ◽  
pp. 21854-21863 ◽  
Author(s):  
Tim C. Kietzmann ◽  
Courtney J. Spoerer ◽  
Lynn K. A. Sörensen ◽  
Radoslaw M. Cichy ◽  
Olaf Hauk ◽  
...  

The human visual system is an intricate network of brain regions that enables us to recognize the world around us. Despite its abundant lateral and feedback connections, object processing is commonly viewed and studied as a feedforward process. Here, we measure and model the rapid representational dynamics across multiple stages of the human ventral stream using time-resolved brain imaging and deep learning. We observe substantial representational transformations during the first 300 ms of processing within and across ventral-stream regions. Categorical divisions emerge in sequence, cascading forward and in reverse across regions, and Granger causality analysis suggests bidirectional information flow between regions. Finally, recurrent deep neural network models clearly outperform parameter-matched feedforward models in terms of their ability to capture the multiregion cortical dynamics. Targeted virtual cooling experiments on the recurrent deep network models further substantiate the importance of their lateral and top-down connections. These results establish that recurrent models are required to understand information processing in the human ventral stream.


2000 ◽  
Vol 12 (8) ◽  
pp. 1743-1772 ◽  
Author(s):  
Wolfgang Maass ◽  
Eduardo D. Sontag

Experimental data show that biological synapses behave quite differently from the symbolic synapses in all common artificial neural network models. Biological synapses are dynamic; their “weight” changes on a short timescale by several hundred percent, depending on the past input to the synapse. In this article we address the question of how this inherent synaptic dynamics (which should not be confused with long-term learning) affects the computational power of a neural network. In particular, we analyze computations on temporal and spatiotemporal patterns, and we give a complete mathematical characterization of all filters that can be approximated by feedforward neural networks with dynamic synapses. It turns out that even with just a single hidden layer, such networks can approximate a very rich class of nonlinear filters: all filters that can be characterized by Volterra series. This result is robust to various changes in the model of synaptic dynamics. Our characterization also provides a new complexity hierarchy, related to the cost of implementation in neural systems, for all nonlinear filters approximable by Volterra series.
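To make "weight changes on a short timescale" concrete, here is one common variant of the Tsodyks-Markram short-term plasticity model, in which the effective synaptic efficacy on each spike is the product of a depleting resource variable and a facilitating release probability. The specific update rule and parameter values are illustrative assumptions, not taken from this article:

```python
import numpy as np

def tsodyks_markram(spikes, dt=1.0, U=0.5, tau_rec=800.0, tau_fac=20.0):
    """Short-term synaptic dynamics (one common Tsodyks-Markram variant).

    spikes  : boolean array, True where a presynaptic spike occurs
    Returns the effective efficacy u * x at each spike time.
    x (resources) depletes with use and recovers with tau_rec;
    u (release probability) facilitates and relaxes with tau_fac.
    """
    x, u = 1.0, U
    efficacies = []
    for s in spikes:
        x += dt * (1.0 - x) / tau_rec   # resource recovery
        u += dt * (U - u) / tau_fac     # relaxation toward baseline
        if s:
            efficacies.append(u * x)    # momentary "weight"
            x -= u * x                  # deplete resources
            u += U * (1.0 - u)          # facilitate release
    return np.array(efficacies)
```

With these (depression-dominated) parameters, a rapid spike train produces a steadily shrinking effective weight, which is exactly the history dependence that the article's Volterra-series characterization captures in filter form.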


2018 ◽  
Author(s):  
Amir Dezfouli ◽  
Richard Morris ◽  
Fabio Ramos ◽  
Peter Dayan ◽  
Bernard W. Balleine

Neuroscience studies of human decision-making commonly involve subjects completing a decision-making task while BOLD signals are recorded using fMRI. Hypotheses are tested about which brain regions mediate the effect of past experience, such as rewards, on future actions. One standard approach is model-based fMRI data analysis, in which a model is fitted to the behavioral data, i.e., a subject’s choices, and the neural data are then parsed to find brain regions whose BOLD signals relate to the model’s internal signals. However, the internal mechanics of such purely behavioral models are not constrained by the neural data and therefore might miss or mischaracterize aspects of the brain. To address this limitation, we introduce a new method using recurrent neural network models that are flexible enough to be fitted jointly to the behavioral and neural data. We trained a model so that its internal states were suitably related to neural activity during the task, while at the same time its output predicted the next action a subject would execute. We then used the fitted model to create a novel visualization of the relationship between activity in brain regions at different times following a reward and the choices the subject subsequently made. Finally, we validated our method on a previously published dataset: the model recovered the underlying neural substrates that were discovered by explicit model engineering in the previous work, and it also derived new results regarding the temporal pattern of brain activity.
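The joint-fitting idea reduces to a two-term objective: a behavioral likelihood on the subject's choices plus a term tying the RNN's internal states to the recorded neural signals. The function below is a minimal sketch of such an objective; the name, the squared-error neural term, and the trade-off parameter `alpha` are assumptions for illustration, not the authors' exact loss:

```python
import numpy as np

def joint_loss(model_states, model_logits, neural, choices, alpha=0.5):
    """Illustrative joint objective for behavioral + neural fitting.

    model_states : (T, K) RNN internal states
    model_logits : (T, A) unnormalized action scores per trial
    neural       : (T, K) recorded neural signals aligned to states
    choices      : (T,) index of the action the subject actually took
    alpha        : assumed trade-off between the two terms
    """
    # behavioral term: negative log-likelihood of the subject's choices
    logp = model_logits - np.log(
        np.exp(model_logits).sum(axis=1, keepdims=True))
    nll = -logp[np.arange(len(choices)), choices].mean()
    # neural term: internal states should track measured activity
    mse = ((model_states - neural) ** 2).mean()
    return alpha * nll + (1.0 - alpha) * mse
```

Minimizing only the first term recovers a purely behavioral model; the second term is what constrains the model's internal mechanics by the neural data, which is the limitation the method is designed to address.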

