Modulating the Use of Multiple Memory Systems in Value-based Decisions with Contextual Novelty

2019 ◽  
Vol 31 (10) ◽  
pp. 1455-1467 ◽  
Author(s):  
Katherine Duncan ◽  
Annika Semmler ◽  
Daphna Shohamy

With multiple learning and memory systems at its disposal, the human brain can represent the past in many ways, from extracting regularities across similar experiences (incremental learning) to storing rich, idiosyncratic details of individual events (episodic memory). The unique information carried by these neurologically distinct forms of memory can bias our behavior in different directions, raising crucial questions about how these memory systems interact to guide choice and about the factors that cause one to dominate. Here, we devised a new approach to estimate how decisions are independently influenced by episodic memories and incremental learning. Furthermore, we identified a biologically motivated factor that biases the use of different memory types: the detection of novelty versus familiarity. Consistent with computational models of cholinergic memory modulation, we find that choices are more influenced by episodic memories following the recognition of an unrelated familiar image but more influenced by incrementally learned values after the detection of a novel image. Together, this work provides a new behavioral tool enabling the disambiguation of key memory behaviors thought to be supported by distinct neural systems, while also identifying a theoretically important and broadly applicable manipulation to bias the arbitration between these two sources of memory.
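The arbitration the abstract describes lends itself to a simple formal sketch. Below is a minimal, hypothetical model in which an option's choice probability is driven by a weighted mixture of an incrementally learned (delta-rule) value and a value retrieved from a single episode, with the mixture weight standing in for the novelty/familiarity modulation. The weight w, the logistic choice rule, and all parameter values are illustrative assumptions, not the authors' fitted model.

```python
import numpy as np

def incremental_update(q, reward, alpha=0.1):
    """Delta-rule (incremental) value update."""
    return q + alpha * (reward - q)

def choice_prob(q_incremental, v_episodic, w, beta=3.0):
    """Choice probability from a weighted mix of incrementally learned value
    and the value retrieved from a single episodic memory. The episodic
    weight w (0..1) stands in for the novelty/familiarity modulation."""
    v = w * v_episodic + (1 - w) * q_incremental
    return 1.0 / (1.0 + np.exp(-beta * v))  # logistic choice rule

# Hypothetical example: familiarity raises w, novelty lowers it.
q = incremental_update(q=0.2, reward=0.0)  # incrementally learned value
v_epi = 1.0                                # reward recalled from one episode
print(choice_prob(q, v_epi, w=0.7))        # episodic-dominated choice
print(choice_prob(q, v_epi, w=0.2))        # incrementally dominated choice
```

Fitting w separately per trial type would give the kind of independent estimate of each memory system's influence that the abstract reports.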

Proceedings ◽  
2020 ◽  
Vol 47 (1) ◽  
pp. 25
Author(s):  
Mark Burgin ◽  
Eugene Eberbach ◽  
Rao Mikkilineni

Cloud computing makes the necessary resources available to the appropriate computation, improving the scaling, resiliency, and efficiency of computations. This makes cloud computing a new paradigm for computation, one that upgrades its artificial intelligence (AI) to a higher order. To explore cloud computing with theoretical tools, we use cloud automata as a new model of computation. Higher-level AI requires infusing features of the human brain into AI systems, such as the ability to learn incrementally all the time. Consequently, we propose computational models that exhibit incremental learning without stopping (sentience). These features are inherent in reflexive Turing machines, inductive Turing machines, and limit Turing machines.
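The idea of incremental learning without stopping can be illustrated with a small sketch. The generator below never halts by design; in the spirit of inductive and limit Turing machines, its result is the output that eventually stabilizes. This is only an informal illustration of limit computation, not a formal inductive Turing machine; the streaming-mean task and the tolerance threshold are assumptions.

```python
import random

def inductive_estimate(stream, tolerance=1e-4):
    """Sketch of limit computation: the process is never declared finished;
    its 'result' is whatever output eventually stabilizes. Here the machine
    incrementally estimates the mean of an endless data stream."""
    mean, n = 0.0, 0
    for x in stream:
        n += 1
        previous, mean = mean, mean + (x - mean) / n  # online delta update
        yield mean, abs(mean - previous) < tolerance  # output while running

# A reader observes the outputs and takes the latest stabilized one as the
# result -- the computation itself never stops learning.
data = (random.gauss(5.0, 1.0) for _ in range(100_000))
result = None
for estimate, stable in inductive_estimate(data):
    if stable:
        result = estimate
print(f"limit estimate: {result:.3f}")
```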


2018 ◽  
Author(s):  
Hyojeong Kim ◽  
Margaret L. Schlichting ◽  
Alison R. Preston ◽  
Jarrod A. Lewis-Peacock

The human brain constantly anticipates the future based on memories of the past. Encountering a familiar situation reactivates memories of previous encounters, which can trigger a prediction of what comes next to facilitate responsiveness. However, a prediction error can lead to pruning of the offending memory, a process that weakens its representation in the brain and leads to forgetting. Our goal in this study was to evaluate whether memories are spared from pruning in situations that allow for more abstract yet reliable predictions. We hypothesized that when the category, but not the identity, of a new stimulus can be anticipated, this will reduce pruning of existing memories and also reduce encoding of the specifics of new memories. Participants viewed a sequence of objects, some of which reappeared multiple times ("cues"), always followed by novel items. Half of the cues were followed by new items from different (unpredictable) categories, while the others were followed by new items from a single (predictable) category. Pattern classification of fMRI data was used to identify category-specific predictions after each cue. Pruning was observed only in unpredictable contexts, while encoding of new items suffered more in predictable contexts. These findings demonstrate that how episodic memories are updated is influenced by the reliability of abstract-level predictions in familiar contexts.
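The decoding step mentioned here can be sketched as a standard multivoxel pattern analysis pipeline: train a classifier on labeled voxel patterns, then read out category evidence from post-cue patterns as a proxy for what the brain is predicting. The classifier choice, the synthetic data, and all array shapes below are placeholder assumptions; only the overall pattern-classification logic follows the abstract.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical training data: voxel patterns from a localizer, labeled by
# visual category (e.g., 0=faces, 1=scenes, 2=objects).
rng = np.random.default_rng(0)
X_train = rng.normal(size=(200, 500))   # 200 trials x 500 voxels
y_train = rng.integers(0, 3, size=200)  # category label per trial

clf = LogisticRegression(max_iter=1000)
clf.fit(X_train, y_train)

# Patterns from the post-cue window of the sequence task: classifier
# evidence for each category indexes how strongly it is being predicted.
X_cue = rng.normal(size=(40, 500))
prediction_evidence = clf.predict_proba(X_cue)  # trials x categories
print(prediction_evidence[0])                   # evidence after one cue
```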


Author(s):  
Patricia L Lockwood ◽  
Miriam C Klein-Flügge

Social neuroscience aims to describe the neural systems that underpin social cognition and behaviour. Over the past decade, researchers have begun to combine computational models with neuroimaging to link social computations to the brain. Inspired by approaches from reinforcement learning theory, which describes how decisions are driven by the unexpectedness of outcomes, accounts of the neural basis of prosocial learning, observational learning, mentalizing and impression formation have been developed. Here we provide an introduction for researchers who wish to use these models in their studies. We consider both theoretical and practical issues related to their implementation, with a focus on specific examples from the field.
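A minimal version of the reinforcement learning account referenced here is the delta rule: values are updated in proportion to the prediction error, i.e., the unexpectedness of the outcome. The sketch below is generic; the self/other split and the specific learning rates are illustrative assumptions rather than a model taken from the paper.

```python
def rescorla_wagner(outcomes, alpha):
    """Value learning driven by prediction errors (delta rule)."""
    v, values = 0.5, []
    for r in outcomes:
        pe = r - v          # prediction error: unexpectedness of the outcome
        v = v + alpha * pe  # update value toward the observed outcome
        values.append(v)
    return values

# Hypothetical prosocial-learning variant: the same update, but outcomes
# benefit another person, and the learning rate may differ for self vs other.
self_outcomes  = [1, 1, 0, 1, 1]
other_outcomes = [1, 0, 1, 1, 0]
v_self  = rescorla_wagner(self_outcomes,  alpha=0.3)
v_other = rescorla_wagner(other_outcomes, alpha=0.15)  # slower 'other' learning
print(v_self[-1], v_other[-1])
```

Comparing fitted learning rates across such conditions is one way computational parameters can be linked to neural data in this literature.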


Author(s):  
Pranava Bhat

The domain of engineering has always taken inspiration from the biological world. Understanding the functionality of the human brain has long been a key area of interest and has driven many advances in computing systems. The computational capability of the human brain, per unit power and per unit volume, exceeds that of the best current supercomputers. Mimicking the physics of computation used by the nervous system and the brain can bring a paradigm shift to computing systems. Bridging computing and neural systems in this way is termed neuromorphic computing, and it is bringing revolutionary changes to computing hardware. Neuromorphic computing systems have seen swift progress in the past decades, with many organizations introducing a variety of designs, implementation methodologies, and prototype chips. This paper discusses the parameters considered in advanced neuromorphic computing systems and the tradeoffs between them. There have also been attempts to build computer models of neurons, and advances in hardware implementation are fuelling applications in the field of machine learning. This paper presents the applications of these modern computing systems in machine learning.
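As one concrete instance of the computer models of neurons mentioned above, below is a leaky integrate-and-fire neuron, among the simplest spiking models used in neuromorphic work: the membrane voltage integrates input current, leaks toward rest, and emits a spike when it crosses threshold. All parameter values are illustrative.

```python
import numpy as np

def lif_neuron(input_current, dt=1e-3, tau=0.02, v_rest=-0.065,
               v_thresh=-0.050, v_reset=-0.065, r_m=1e7):
    """Leaky integrate-and-fire neuron (voltages in volts, currents in amps).
    Returns the spike times produced by the given input current trace."""
    v = v_rest
    spikes = []
    for i, current in enumerate(input_current):
        dv = (-(v - v_rest) + r_m * current) / tau  # leak + driven input
        v += dv * dt
        if v >= v_thresh:
            spikes.append(i * dt)  # record spike time
            v = v_reset            # reset membrane after spiking
    return spikes

# A constant 2 nA drive for 200 ms produces a regular spike train.
current = np.full(200, 2e-9)
print(lif_neuron(current))
```

Trading off model fidelity against power and area (e.g., this simple model versus multi-compartment neurons) is exactly the kind of design parameter the paper's tradeoff discussion concerns.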


Brain ◽  
1993 ◽  
Vol 116 (4) ◽  
pp. 903-919 ◽  
Author(s):  
D. Perani ◽  
S. Bressi ◽  
S. F. Cappa ◽  
G. Vallar ◽  
M. Alberoni ◽  
...  


2020 ◽  
pp. 184-211
Author(s):  
Donna L. Korol

Estrogens produce robust yet mixed effects on cognition, at times enhancing learning and memory, at times impairing them, and at still other times having no measurable effect. When viewed through a multiple memory systems lens, these variable actions of estrogenic compounds are explained in part by the strategies required and the neural systems tapped during learning and memory. Estrogens tend to promote hippocampus-sensitive functions yet impair striatum-sensitive functions through the activation of multiple estrogen receptor subtypes. Thus, exposures to estrogens carry both cognitive costs and benefits, which illuminates an important notion: depleting circulating ovarian hormones is not singularly detrimental to learning but can instead lead to learning improvements, depending upon the type of task at hand. Approaching the effects of estrogens on problem-solving from this perspective may provide important insight into the range of cognitive health risks that may accompany menopause and hormone therapies.

