Fast and Slow Learning in Human-Like Intelligence

2021 ◽  
pp. 316-337
Author(s):  
Denis Mareschal ◽  
Sam Blakeman

In this chapter we review the extent to which rapid one-shot learning, or fast-mapping, exists in human learning. We find that it exists in both children and adults, but that it is almost always accompanied by slow consolidated learning in which new knowledge is integrated with existing knowledge bases. Rapid learning is also present in a broad range of non-human species, particularly in the context of high reward values. We argue that reward prediction errors guide the extent to which fast or slow learning dominates, and present a Complementary Learning Systems neural network model (CTDL) of cortical/hippocampal learning that uses reward prediction errors to adjudicate between learning in the two systems. Developing human-like artificial intelligence will require implementing multiple learning and inference systems governed by a flexible control system with a capacity equal to that of human control systems.
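The gating idea in this abstract can be made concrete with a minimal sketch. This is not the authors' CTDL implementation; it only illustrates, under assumed names and thresholds, how a temporal-difference reward prediction error might adjudicate between a fast one-shot episodic store (hippocampal-like) and slow incremental value updates (cortical-like):

```python
# Illustrative sketch only, not the CTDL model itself. All names,
# thresholds, and learning rates below are hypothetical assumptions.

GAMMA = 0.9                # discount factor
ALPHA = 0.1                # slow (cortical-like) learning rate
SURPRISE_THRESHOLD = 0.5   # |RPE| above this triggers fast episodic storage

values = {}                # slowly learned state values
episodic_memory = []       # fast one-shot store

def rpe(state, reward, next_state):
    """Temporal-difference reward prediction error: r + gamma*V(s') - V(s)."""
    return reward + GAMMA * values.get(next_state, 0.0) - values.get(state, 0.0)

def learn(state, reward, next_state):
    """Gate between fast and slow learning based on RPE magnitude."""
    delta = rpe(state, reward, next_state)
    if abs(delta) > SURPRISE_THRESHOLD:
        # Surprising outcome: store the whole experience in one shot.
        episodic_memory.append((state, reward, next_state))
    # Slow learning always proceeds, integrating the outcome gradually.
    values[state] = values.get(state, 0.0) + ALPHA * delta
    return delta
```

On this sketch, a large unexpected reward both writes an episodic trace immediately and nudges the slow values, while small errors leave only the gradual update, mirroring the claim that fast learning dominates when reward prediction errors are large.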

2020 ◽  
Author(s):  
Kate Ergo ◽  
Luna De Vilder ◽  
Esther De Loof ◽  
Tom Verguts

Recent years have witnessed a steady increase in the number of studies investigating the role of reward prediction errors (RPEs) in declarative learning. Specifically, in several experimental paradigms RPEs drive declarative learning, with larger and more positive RPEs enhancing it. However, it is unknown whether this RPE must derive from the participant’s own response, or whether any RPE is sufficient to obtain the learning effect. To test this, we generated RPEs within a single experimental paradigm combining an agency and a non-agency condition. We observed no interaction between RPE and agency, suggesting that any RPE (irrespective of its source) can drive declarative learning. This result holds implications for declarative learning theory.


2021 ◽  
Author(s):  
Joseph Heffner ◽  
Jae-Young Son ◽  
Oriel FeldmanHall

People make decisions based on deviations from expected outcomes, known as prediction errors. Past work has focused on reward prediction errors, largely ignoring violations of expected emotional experiences—emotion prediction errors. We leverage a new method to measure real-time fluctuations in emotion as people decide to punish or forgive others. Across four studies (N=1,016), we reveal that emotion and reward prediction errors make distinguishable contributions to choice, such that emotion prediction errors exert the strongest impact during decision-making. We additionally find that a choice to punish or forgive can be decoded in less than a second from an evolving emotional response, suggesting that emotions swiftly influence choice. Finally, individuals reporting significant levels of depression exhibit selective impairments in using emotion—but not reward—prediction errors. Evidence that emotion prediction errors potently guide social behaviors challenges standard decision-making models that have focused solely on reward.


eLife ◽  
2016 ◽  
Vol 5 ◽  
Author(s):  
Hideyuki Matsumoto ◽  
Ju Tian ◽  
Naoshige Uchida ◽  
Mitsuko Watabe-Uchida

Dopamine is thought to regulate learning from appetitive and aversive events. Here we examined how optogenetically identified dopamine neurons in the lateral ventral tegmental area of mice respond to aversive events in different conditions. In low reward contexts, most dopamine neurons were exclusively inhibited by aversive events, and expectation reduced dopamine neurons’ responses to both reward and punishment. When a single odor predicted both reward and punishment, dopamine neurons’ responses to that odor reflected the integrated value of both outcomes. Thus, in low reward contexts, dopamine neurons signal value prediction errors (VPEs), integrating information about both reward and aversion in a common currency. In contrast, in high reward contexts, dopamine neurons acquired a short-latency excitation to aversive events that masked their VPE signaling. Our results demonstrate the importance of considering context when examining representations in dopamine neurons, and uncover different modes of dopamine signaling, each of which may be adaptive for different environments.


2017 ◽  
Vol 129 ◽  
pp. 265-272 ◽  
Author(s):  
Chad C. Williams ◽  
Cameron D. Hassall ◽  
Robert Trska ◽  
Clay B. Holroyd ◽  
Olave E. Krigolson

2020 ◽  
Vol 22 (8) ◽  
pp. 849-859
Author(s):  
Julian Macoveanu ◽  
Hanne L. Kjærstad ◽  
Henry W. Chase ◽  
Sophia Frangou ◽  
Gitte M. Knudsen ◽  
...  
