Stable thalamocortical learning between medial-dorsal thalamus and cortical attractor networks captures cognitive flexibility

2021 ◽  
Author(s):  
Siwei Qiu

Abstract
Primates and rodents continually acquire, adapt, and transfer knowledge and skills, leading to goal-directed behavior over their lifespan. When context switches slowly, animals learn via slow processes; when context switches rapidly, they learn via fast processes. We build a biologically realistic model with modules organized like a distributed computing system. Specifically, we emphasize the role of slow-timescale thalamocortical learning between the prefrontal cortex (PFC) and the medial dorsal thalamus (MD). Previous work [1] has provided experimental evidence for a classification of cell ensembles in the medial dorsal thalamus, with each class encoding a different context. However, the mechanism by which such a classification is learned remains unclear. In this work, we show that this learning can be self-organizing in the manner of an automaton (a distributed computing system), via a combination of Hebbian learning and homeostatic synaptic scaling. We show that, in the simple case of two contexts, the hierarchically structured network can perform context-based decision making and switch smoothly between contexts. Our learning rule creates synaptic competition [2] between the thalamic cells, producing winner-take-all activity. Our theory shows that the capacity of such a learning process depends on the total number of task-related hidden variables and is limited by the system size N. We also derive theoretically the effective functional connectivity as a function of an order parameter determined by the thalamocortical coupling structure.
Significance Statement
Animals need to adapt to dynamically changing environments and make decisions based on changing contexts. Here we propose a combination of neural circuit structure and learning mechanisms to account for such behaviors. Specifically, we built a reservoir computing network augmented with a Hebbian learning rule and a synaptic scaling mechanism acting between the prefrontal cortex and the medial dorsal (MD) thalamus. This model shows that the MD thalamus is crucial for such context-based decision making. We also use dynamical mean field theory to predict the effective neural circuit. Furthermore, theoretical analysis predicts that the capacity of such a network increases with the network size and the total number of task-related latent variables.
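To make the proposed mechanism concrete, here is a minimal sketch (not the authors' model) of how Hebbian learning combined with homeostatic synaptic scaling can drive winner-take-all context coding in a small MD-like population driven by PFC-like input. The population sizes, learning rate, softmax readout, and the two synthetic context patterns are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

N_pfc, N_md = 200, 10                       # PFC (cortical) and MD (thalamic) sizes (assumed)
W = rng.normal(0.0, 0.1, (N_md, N_pfc))     # thalamocortical weights, MD <- PFC
eta, target_norm = 0.05, 1.0                # Hebbian rate and homeostatic target norm (assumed)

# Two synthetic "context" patterns carried by PFC activity
contexts = rng.normal(0.0, 1.0, (2, N_pfc))

def md_activity(x, W, beta=5.0):
    """Soft winner-take-all response of MD units to cortical input x."""
    h = W @ x
    e = np.exp(beta * (h - h.max()))
    return e / e.sum()

for trial in range(4000):
    c = (trial // 1000) % 2                              # slowly alternating context
    x = contexts[c] + 0.3 * rng.normal(0.0, 1.0, N_pfc)  # noisy PFC activity
    y = md_activity(x, W)
    W += eta * np.outer(y, x)                            # Hebbian update: pre/post coactivation
    W *= target_norm / np.linalg.norm(W, axis=1, keepdims=True)  # homeostatic synaptic scaling

# After learning, distinct MD units should respond preferentially to each context.
print(np.argmax(md_activity(contexts[0], W)), np.argmax(md_activity(contexts[1], W)))
```

The scaling step keeps each MD unit's total incoming weight fixed, so strengthening synapses driven by one context necessarily weakens the others; that is the synaptic competition which yields winner-take-all context coding in this sketch.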

2010 ◽  
Vol 22 (6) ◽  
pp. 1399-1444 ◽  
Author(s):  
Michael Pfeiffer ◽  
Bernhard Nessler ◽  
Rodney J. Douglas ◽  
Wolfgang Maass

We introduce a framework for decision making in which the learning of decision making is reduced to its simplest and biologically most plausible form: Hebbian learning on a linear neuron. We cast our Bayesian-Hebb learning rule as reinforcement learning in which certain decisions are rewarded and prove that each synaptic weight will on average converge exponentially fast to the log odds of receiving a reward when its pre- and postsynaptic neurons are active. In our simple architecture, a particular action is selected from the set of candidate actions by a winner-take-all operation. The global reward assigned to this action then modulates the update of each synapse. Apart from this global reward signal, our reward-modulated Bayesian Hebb rule is a pure Hebb update that depends only on the coactivation of the pre- and postsynaptic neurons, not on the weighted sum of all presynaptic inputs to the postsynaptic neuron as in the perceptron learning rule or the Rescorla-Wagner rule. This simple approach to action-selection learning requires that information about sensory inputs be presented to the Bayesian decision stage in a suitably preprocessed form resulting from other adaptive processes (acting on a larger timescale) that detect salient dependencies among input features. Hence our proposed framework for fast learning of decisions also provides interesting new hypotheses regarding neural codes and computational goals of cortical areas that provide input to the final decision stage.
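The convergence claim can be illustrated with a toy simulation. The sketch below follows only the qualitative description in the abstract: a winner-take-all stage selects an action, and a weight between an active input and the chosen action's neuron is nudged up on rewarded trials and down on unrewarded ones, with update magnitudes chosen so the fixed point sits at the log odds of reward. The toy environment, exploration scheme, and exact update coefficients are assumptions for illustration, not the rule as published.

```python
import numpy as np

rng = np.random.default_rng(1)

n_features, n_actions = 8, 3
W = np.zeros((n_actions, n_features))      # one linear neuron per candidate action
eta = 0.02

# Toy environment: reward probability depends on the chosen action and active features
p_reward = rng.uniform(0.1, 0.9, (n_actions, n_features))

for trial in range(20000):
    x = (rng.random(n_features) < 0.5).astype(float)     # binary, preprocessed input features
    if rng.random() < 0.1:                                # occasional exploration (assumption)
        a = int(rng.integers(n_actions))
    else:
        a = int(np.argmax(W @ x))                         # winner-take-all action selection
    p = p_reward[a] @ x / max(x.sum(), 1.0)               # toy reward probability for this trial
    r = float(rng.random() < p)                           # global, binary reward signal
    active = x > 0                                        # inputs coactive with the chosen action
    if r == 1.0:
        W[a, active] += eta * (1.0 + np.exp(-W[a, active]))   # rewarded: raise the log-odds estimate
    else:
        W[a, active] -= eta * (1.0 + np.exp(W[a, active]))    # unrewarded: lower it

# Each updated weight drifts toward log(P(reward | input active, action chosen) / P(no reward | ...)).
```

Because each synapse's update depends only on whether its presynaptic input was active, which action won, and the global reward, no weighted sum over all inputs is needed, which is the stated contrast with perceptron-style and Rescorla-Wagner updates.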


2017 ◽  
Author(s):  
Grace W. Lindsay ◽  
Mattia Rigotti ◽  
Melissa R. Warden ◽  
Earl K. Miller ◽  
Stefano Fusi

Abstract
Complex cognitive behaviors, such as context-switching and rule-following, are thought to be supported by prefrontal cortex (PFC). Neural activity in PFC must thus be specialized to specific tasks while retaining flexibility. Nonlinear ‘mixed’ selectivity is an important neurophysiological trait for enabling complex and context-dependent behaviors. Here we investigate (1) the extent to which PFC exhibits computationally relevant properties such as mixed selectivity and (2) how such properties could arise via circuit mechanisms. We show that PFC cells recorded from male and female rhesus macaques during a complex task show a moderate level of specialization and structure that is not replicated by a model in which cells receive random feedforward inputs. While random connectivity can be effective at generating mixed selectivity, the data show significantly more mixed selectivity than predicted by a model with otherwise matched parameters. A simple Hebbian learning rule applied to the random connectivity, however, increases mixed selectivity and allows the model to match the data more accurately. To explain how learning achieves this, we provide analysis along with a clear geometric interpretation of the impact of learning on selectivity. After learning, the model also matches the data on measures of noise, response density, clustering, and the distribution of selectivities. Of two styles of Hebbian learning tested, the simpler and more biologically plausible option better matches the data. These modeling results give intuition about how neural properties important for cognition can arise in a circuit and make clear experimental predictions regarding how various measures of selectivity would evolve during animal training.
Significance Statement
Prefrontal cortex (PFC) is a brain region believed to support the ability of animals to engage in complex behavior. How neurons in this area respond to stimuli, and in particular to combinations of stimuli ("mixed selectivity"), is a topic of interest. Although models with random feedforward connectivity are capable of creating computationally relevant mixed selectivity, such a model does not match the levels of mixed selectivity seen in the data analyzed in this study. Adding simple Hebbian learning to the model increases mixed selectivity to the correct level and makes the model match the data on several other relevant measures. This study thus offers predictions on how mixed selectivity and other properties evolve with training.
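As a rough, self-contained illustration of the comparison described above, the sketch below builds a population that randomly mixes a stimulus variable and a context variable, applies a simple Hebbian update to those random feedforward weights, and measures a crude index of nonlinear mixing before and after. The encoding, ReLU nonlinearity, particular Hebbian variant, and selectivity index are all assumptions made for illustration; they are not the recordings, model, or analyses of the paper.

```python
import numpy as np

rng = np.random.default_rng(2)

n_stim, n_ctx, n_cells = 4, 2, 200
eta, steps = 0.02, 500

def input_pattern(s, c):
    """Concatenated one-hot codes for stimulus identity and task context (assumed encoding)."""
    x = np.zeros(n_stim + n_ctx)
    x[s] = 1.0
    x[n_stim + c] = 1.0
    return x

W = rng.normal(0.0, 1.0, (n_cells, n_stim + n_ctx))   # random feedforward connectivity

def responses(W):
    """ReLU responses for every (stimulus, context) combination: shape (n_stim, n_ctx, n_cells)."""
    return np.array([[np.maximum(W @ input_pattern(s, c), 0.0)
                      for c in range(n_ctx)] for s in range(n_stim)])

def mixed_selectivity(W):
    """Crude index of nonlinear mixing: variance left over after an additive stim + context model."""
    r = responses(W)
    additive = r.mean(1, keepdims=True) + r.mean(0, keepdims=True) - r.mean((0, 1), keepdims=True)
    return np.mean((r - additive) ** 2)

print("before learning:", mixed_selectivity(W))
for _ in range(steps):                                # Hebbian shaping of the random weights
    s, c = rng.integers(n_stim), rng.integers(n_ctx)
    x = input_pattern(s, c)
    y = np.maximum(W @ x, 0.0)
    W += eta * np.outer(y - y.mean(), x)              # one simple Hebbian variant (assumption)
    W /= np.linalg.norm(W, axis=1, keepdims=True)     # keep weights bounded
print("after learning:", mixed_selectivity(W))
```

Whether the index rises here depends on the assumed parameters; the point is only to show the shape of the before/after comparison that the abstract describes.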


2021 ◽  
pp. 1-33
Author(s):  
Kevin Berlemont ◽  
Jean-Pierre Nadal

Abstract In experiments on perceptual decision making, individuals learn a categorization task through trial-and-error protocols. We explore the capacity of a decision-making attractor network to learn a categorization task through reward-based, Hebbian-type modifications of the weights incoming from the stimulus encoding layer. For the latter, we assume a standard layer of a large number of stimulus-specific neurons. Within the general framework of Hebbian learning, we have hypothesized that the learning rate is modulated by the reward at each trial. Surprisingly, we find that when the coding layer has been optimized in view of the categorization task, such reward-modulated Hebbian learning (RMHL) fails to extract efficiently the category membership. In previous work, we showed that the attractor neural networks' nonlinear dynamics accounts for behavioral confidence in sequences of decision trials. Taking advantage of these findings, we propose that learning is controlled by confidence, as computed from the neural activity of the decision-making attractor network. Here we show that this confidence-controlled, reward-based Hebbian learning efficiently extracts categorical information from the optimized coding layer. The proposed learning rule is local and, in contrast to RMHL, does not require storing the average rewards obtained on previous trials. In addition, we find that the confidence-controlled learning rule achieves near-optimal performance. In accordance with this result, we show that the learning rule approximates a gradient descent method on a reward-maximizing cost function.
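A schematic sketch of the idea is given below, with a toy two-population race standing in for the decision-making attractor network: confidence is read off the final activities of the two populations and modulates a reward-based Hebbian update of the weights from the coding layer, so no stored average reward is needed. The coding-layer tuning curves, the race dynamics, the confidence readout, and the particular (reward minus confidence) modulation are assumptions for illustration only, not the rule analyzed in the paper.

```python
import numpy as np

rng = np.random.default_rng(3)

n_coding, n_choices = 50, 2
W = rng.normal(0.0, 0.1, (n_choices, n_coding))   # coding layer -> decision populations
eta = 0.05

# Assumed coding layer: neurons tuned to a 1-D stimulus, category boundary at 0
prefs = np.linspace(-1.0, 1.0, n_coding)
def coding_activity(stim):
    return np.exp(-(stim - prefs) ** 2 / (2 * 0.2 ** 2))

def attractor_decision(x, W, T=200, dt=0.1, noise=0.3):
    """Toy winner-take-all race standing in for the attractor network dynamics."""
    r = np.zeros(n_choices)
    for _ in range(T):
        inp = W @ x - 0.5 * r[::-1]                    # feedforward drive plus mutual inhibition
        r += dt * (-r + np.maximum(inp, 0.0)) + noise * np.sqrt(dt) * rng.normal(size=n_choices)
        r = np.maximum(r, 0.0)
    choice = int(np.argmax(r))
    confidence = abs(r[0] - r[1]) / (r.sum() + 1e-9)   # confidence read from the network state
    return choice, confidence

for trial in range(3000):
    stim = rng.uniform(-1.0, 1.0)
    x = coding_activity(stim)
    choice, conf = attractor_decision(x, W)
    reward = 1.0 if (stim > 0) == (choice == 1) else 0.0
    # Confidence-controlled, reward-based Hebbian update on the chosen population's weights:
    # the change is scaled by the mismatch between the obtained reward and the network's own
    # confidence, so confidence plays the role usually played by a stored average reward.
    W[choice] += eta * (reward - conf) * x
```

The only quantities entering the update are the presynaptic coding activity, the identity of the winning population, the trial's reward, and the confidence read from the network itself, which is what makes the rule local and free of stored reward averages in this sketch.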


2020 ◽  
Author(s):  
Kevin Berlemont ◽  
Jean-Pierre Nadal

Abstract
In experiments on perceptual decision-making, individuals learn a categorization task through trial-and-error protocols. We explore the capacity of a decision-making attractor network to learn a categorization task through reward-based, Hebbian-type modifications of the weights incoming from the stimulus encoding layer. For the latter, we assume a standard layer of a large number of stimulus-specific neurons. Within the general framework of Hebbian learning, authors have hypothesized that the learning rate is modulated by the reward at each trial. Surprisingly, we find that, when the coding layer has been optimized in view of the categorization task, such reward-modulated Hebbian learning (RMHL) fails to extract efficiently the category membership. In previous work, we showed that the attractor neural networks' nonlinear dynamics accounts for behavioral confidence in sequences of decision trials. Taking advantage of these findings, we propose that learning is controlled by confidence, as computed from the neural activity of the decision-making attractor network. Here we show that this confidence-controlled, reward-based Hebbian learning efficiently extracts categorical information from the optimized coding layer. The proposed learning rule is local and, in contrast to RMHL, does not require storing the average rewards obtained on previous trials. In addition, we find that the confidence-controlled learning rule achieves near-optimal performance.


Author(s):  
Lee Peyton ◽  
Alfredo Oliveros ◽  
Doo-Sup Choi ◽  
Mi-Hyeon Jang

Abstract
Psychiatric illness is a prevalent and highly debilitating disorder, and more than 50% of the general population in both middle- and high-income countries experience at least one psychiatric disorder at some point in their lives. As we continue to learn how pervasive psychiatric episodes are in society, we must acknowledge that psychiatric disorders are not solely relegated to a small group of predisposed individuals but rather occur in significant portions of all societal groups. Several distinct brain regions have been implicated in neuropsychiatric disease. These brain regions include corticolimbic structures, which regulate executive function and decision making (e.g., the prefrontal cortex), as well as striatal subregions known to control motivated behavior under normal and stressful conditions. Importantly, the corticolimbic neural circuitry includes the hippocampus, a critical brain structure that sends projections to both the cortex and striatum to coordinate learning, memory, and mood. In this review, we will discuss past and recent discoveries of how neurobiological processes in the hippocampus and corticolimbic structures work in concert to control executive function, memory, and mood in the context of mental disorders.


Life ◽  
2021 ◽  
Vol 11 (4) ◽  
pp. 310
Author(s):  
Shih-Chia Chang ◽  
Ming-Tsang Lu ◽  
Tzu-Hui Pan ◽  
Chiao-Shan Chen

Although the electronic health (e-health) cloud computing system is a promising innovation, its adoption in the healthcare industry has been slow. This study investigated the adoption of e-health cloud computing systems in the healthcare industry and considered security functions, management, cloud service delivery, and cloud software for e-health cloud computing systems. Although numerous studies have identified factors affecting e-health cloud computing systems, few comprehensive reviews of these factors and their relations have been conducted. Therefore, this study investigated the relations between the factors affecting e-health cloud computing systems by using a multiple-criteria decision-making technique that combines the decision-making trial and evaluation laboratory (DEMATEL), the DEMATEL-based analytic network process (DANP), and modified VIKOR (VlseKriterijumska Optimizacija I Kompromisno Resenje) approaches. The intended level of adoption of an e-health cloud computing system can be determined with the proposed approach. The results of a case study of the Taiwanese healthcare industry indicated that the cloud management function must be enhanced first and that cost effectiveness is the most significant factor in the adoption of e-health cloud computing. This result is valuable for allocating resources to decrease performance gaps in the Taiwanese healthcare industry.
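For readers unfamiliar with the first stage of this pipeline, the sketch below shows the core DEMATEL computation on an invented direct-influence matrix over the four factor groups named above (security, management, cloud service delivery, cloud software). The ratings are made up for illustration, and the subsequent DANP weighting and modified-VIKOR gap analysis used in the study are not shown.

```python
import numpy as np

# Hypothetical expert ratings (0-4) of how strongly each factor influences the others;
# rows/columns: security, management, cloud service delivery, cloud software.
A = np.array([
    [0, 3, 2, 1],
    [2, 0, 3, 2],
    [1, 2, 0, 3],
    [1, 1, 2, 0],
], dtype=float)

n = A.shape[0]
s = max(A.sum(axis=1).max(), A.sum(axis=0).max())   # normalization constant
D = A / s                                           # normalized direct-relation matrix
T = D @ np.linalg.inv(np.eye(n) - D)                # total-relation matrix T = D (I - D)^-1

r = T.sum(axis=1)            # total influence exerted by each factor
c = T.sum(axis=0)            # total influence received by each factor
prominence = r + c           # overall importance of the factor
relation = r - c             # net causer (> 0) versus net receiver (< 0)

for i, name in enumerate(["security", "management", "service delivery", "software"]):
    print(f"{name:17s} prominence={prominence[i]:.2f} relation={relation[i]:+.2f}")
```

In the combined approach, the total-relation matrix also feeds the DANP step to derive criteria weights, and modified VIKOR then ranks the gaps between current and aspired performance levels.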

