Faculty Opinions recommendation of Adaptive coding of reward value by dopamine neurons.

Author(s):  
Kent Berridge
Science ◽  
2005 ◽  
Vol 307 (5715) ◽  
pp. 1642-1645 ◽  
Author(s):  
P. N. Tobler

2019 ◽  
Vol 31 (10) ◽  
pp. 1443-1454 ◽  
Author(s):  
Jessica K. Stanek ◽  
Kathryn C. Dickerson ◽  
Kimberly S. Chiew ◽  
Nathaniel J. Clement ◽  
R. Alison Adcock

Anticipating rewards has been shown to enhance memory formation. Although substantial evidence implicates dopamine in this behavioral effect, the precise mechanisms remain ambiguous. Because dopamine nuclei have been associated with two distinct physiological signatures of reward prediction, we hypothesized two dissociable effects on memory formation. These two signatures are a phasic dopamine response immediately following a reward cue that encodes its expected value and a sustained, ramping response that has been demonstrated during high reward uncertainty [Fiorillo, C. D., Tobler, P. N., & Schultz, W. Discrete coding of reward probability and uncertainty by dopamine neurons. Science, 299, 1898–1902, 2003]. Here, we show in humans that the impact of reward anticipation on memory for an event depends on its timing relative to these physiological signatures. By manipulating reward probability (100%, 50%, or 0%) and the timing of the event to be encoded (just after the reward cue versus just before expected reward outcome), we demonstrated the predicted double dissociation: Early during reward anticipation, memory formation was improved by increased expected reward value, whereas late during reward anticipation, memory formation was enhanced by reward uncertainty. Notably, although the memory benefits of high expected reward in the early interval were consolidation dependent, the memory benefits of high uncertainty in the later interval were not. These findings support the view that expected reward benefits memory consolidation via phasic dopamine release. The novel finding of a distinct memory enhancement, temporally consistent with sustained anticipatory dopamine release, points toward new mechanisms of memory modulation by reward now ripe for further investigation.
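The design turns on a simple property of probabilistic rewards: for a fixed magnitude, expected value scales linearly with reward probability, while uncertainty (reward variance) is maximal at p = 0.5 and vanishes at p = 0 and p = 1, which is why the 50% condition isolates the uncertainty signal. A minimal sketch of these two quantities (function names are illustrative, not from the paper):

```python
def expected_value(p, magnitude=1.0):
    """Expected value of a cue predicting a reward of fixed magnitude
    with probability p: scales linearly with p."""
    return p * magnitude

def reward_uncertainty(p, magnitude=1.0):
    """Variance of a Bernoulli reward, p * (1 - p) * magnitude^2:
    maximal at p = 0.5, zero at p = 0 and p = 1."""
    return p * (1.0 - p) * magnitude ** 2

# The three probability conditions used in the study:
for p in (0.0, 0.5, 1.0):
    print(f"p={p}: value={expected_value(p)}, uncertainty={reward_uncertainty(p)}")
```

Note that value and uncertainty dissociate across the three conditions: the 100% cue has maximal value but zero uncertainty, while the 50% cue has intermediate value but maximal uncertainty.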


2016 ◽  
Vol 18 (1) ◽  
pp. 23-32 ◽  

Reward prediction errors consist of the differences between received and predicted rewards. They are crucial for basic forms of learning about rewards and make us strive for more rewards, an evolutionarily beneficial trait. Most dopamine neurons in the midbrain of humans, monkeys, and rodents signal a reward prediction error; they are activated by more reward than predicted (positive prediction error), remain at baseline activity for fully predicted rewards, and show depressed activity with less reward than predicted (negative prediction error). The dopamine signal increases nonlinearly with reward value and codes formal economic utility. Drugs of addiction generate, hijack, and amplify the dopamine reward signal and induce exaggerated, uncontrolled dopamine effects on neuronal plasticity. The striatum, amygdala, and frontal cortex also show reward prediction error coding, but only in subpopulations of neurons. Thus, the important concept of reward prediction errors is implemented in neuronal hardware.
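The three response patterns described above map onto the sign of a single quantity, the difference between received and predicted reward. A minimal sketch (the function name is illustrative):

```python
def prediction_error(received, predicted):
    """Reward prediction error: received minus predicted reward.
    Positive when reward exceeds the prediction, zero for fully
    predicted rewards, negative when reward falls short."""
    return received - predicted

# The three cases described in the abstract:
assert prediction_error(1.0, 0.5) > 0   # more reward than predicted: activation
assert prediction_error(0.5, 0.5) == 0  # fully predicted reward: baseline
assert prediction_error(0.0, 0.5) < 0   # less reward than predicted: depression
```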


eLife ◽  
2016 ◽  
Vol 5 ◽  
Author(s):  
Armin Lak ◽  
William R Stauffer ◽  
Wolfram Schultz

Economic theories posit reward probability as one of the factors defining reward value. Individuals learn the value of cues that predict probabilistic rewards from experienced reward frequencies. Building on the notion that responses of dopamine neurons increase with reward probability and expected value, we asked how dopamine neurons in monkeys acquire this value signal that may represent an economic decision variable. We found in a Pavlovian learning task that reward probability-dependent value signals arose from experienced reward frequencies. We then assessed neuronal response acquisition during choices among probabilistic rewards. Here, dopamine responses became sensitive to the value of both chosen and unchosen options. Both experiments also showed novelty responses of dopamine neurons that decreased as learning advanced. These results show that dopamine neurons acquire predictive value signals from the frequency of experienced rewards. This flexible and fast signal reflects a specific decision variable and could update neuronal decision mechanisms.
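A delta-rule simulation (my illustration, not the authors' model) shows how a cue's value can be acquired from experienced reward frequencies alone, converging toward the programmed reward probability:

```python
import random

def learn_cue_value(reward_prob, alpha=0.1, n_trials=2000, seed=0):
    """Delta-rule learning: on each trial the cue's value V moves toward
    the outcome by a fraction alpha of the prediction error (r - V), so V
    converges to the experienced reward frequency."""
    rng = random.Random(seed)
    v = 0.0
    history = []
    for _ in range(n_trials):
        r = 1.0 if rng.random() < reward_prob else 0.0  # probabilistic reward
        v += alpha * (r - v)                            # prediction-error update
        history.append(v)
    tail = history[-500:]  # average out trial-to-trial noise
    return sum(tail) / len(tail)
```

With enough trials the learned value tracks the programmed reward probability, matching the probability-dependent value signal described above.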


2010 ◽  
Vol 104 (2) ◽  
pp. 1068-1076 ◽  
Author(s):  
Ethan S. Bromberg-Martin ◽  
Masayuki Matsumoto ◽  
Simon Hong ◽  
Okihide Hikosaka

The reward value of a stimulus can be learned through two distinct mechanisms: reinforcement learning through repeated stimulus-reward pairings and abstract inference based on knowledge of the task at hand. The reinforcement mechanism is often identified with midbrain dopamine neurons. Here we show that a neural pathway controlling the dopamine system does not rely exclusively on either stimulus-reward pairings or abstract inference but instead uses a combination of the two. We trained monkeys to perform a reward-biased saccade task in which the reward values of two saccade targets were related in a systematic manner. Animals used each trial's reward outcome to learn the values of both targets: the target that had been presented and whose reward outcome had been experienced (experienced value) and the target that had not been presented but whose value could be inferred from the reward statistics of the task (inferred value). We then recorded from three populations of reward-coding neurons: substantia nigra dopamine neurons; a major input to dopamine neurons, the lateral habenula; and neurons that project to the lateral habenula, located in the globus pallidus. All three populations encoded both experienced values and inferred values. In some animals, neurons encoded experienced values more strongly than inferred values, and the animals showed behavioral evidence of learning faster from experience than from inference. Our data indicate that the pallidus-habenula-dopamine pathway signals reward values estimated through both experience and inference.
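The experienced/inferred distinction can be sketched with a toy update rule. Under the illustrative assumption that the two targets' rewards are complementary (when one yields the large reward, the other yields the small one), a single trial's outcome updates both the experienced value of the shown target and the inferred value of the unshown one (the names and the complement rule are a simplification, not the task's exact statistics):

```python
def update_values(values, shown, outcome, alpha=0.5):
    """Update a two-target value table from one trial.
    The shown target is updated from the experienced outcome; the unshown
    target is updated from the outcome inferred via the complement rule."""
    other = 1 - shown
    new = list(values)
    new[shown] += alpha * (outcome - new[shown])           # experienced value
    inferred_outcome = 1.0 - outcome                       # inferred from task structure
    new[other] += alpha * (inferred_outcome - new[other])  # inferred value
    return new

# One rewarded trial on target 0 raises its value and lowers target 1's:
print(update_values([0.5, 0.5], shown=0, outcome=1.0))  # [0.75, 0.25]
```

Setting a smaller alpha for the inferred update than for the experienced one would reproduce the asymmetry seen in some animals, which learned faster from experience than from inference.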


2020 ◽  
Author(s):  
Clarissa M. Liu ◽  
Ted M. Hsu ◽  
Andrea N. Suarez ◽  
Keshav S. Subramanian ◽  
Ryan A. Fatemi ◽  
...  

ABSTRACT
Oxytocin potently reduces food intake and is a potential target system for obesity treatment. A better understanding of the behavioral and neurobiological mechanisms mediating oxytocin's anorexigenic effects may guide more effective obesity pharmacotherapy development. The present study examined the effects of central (lateral intracerebroventricular [ICV]) administration of oxytocin in rats on motivated responding for palatable food. Various conditioning procedures were employed to measure distinct appetitive behavioral domains, including food seeking in the absence of consumption (conditioned place preference expression), impulsive responding for food (differential reinforcement of low rates of responding), effort-based appetitive decision making (high-effort palatable vs. low-effort bland food), and postingestive reward value encoding (incentive learning). Results reveal that ICV oxytocin potently reduces food-seeking behavior, impulsivity, and effort-based palatable food choice, yet does not influence encoding of postingestive reward value in the incentive learning task. To investigate a potential neurobiological mechanism mediating these behavioral outcomes, we utilized in vivo fiber photometry in ventral tegmental area (VTA) dopamine neurons to examine oxytocin's effect on phasic dopamine neuron responses to sucrose-predictive Pavlovian cues. Results reveal that ICV oxytocin significantly reduced food cue-evoked dopamine neuron activity. Collectively, these data reveal that central oxytocin signaling inhibits various obesity-relevant conditioned appetitive behaviors, potentially via reductions in food cue-driven phasic dopamine neural responses in the VTA.

Highlights
- Central oxytocin inhibits motivated responding for palatable food reinforcement
- Central oxytocin does not play a role in encoding postingestive reward value
- Central oxytocin blunts VTA dopamine neuron activity in response to food cues


2010 ◽  
Vol 68 ◽  
pp. e287
Author(s):  
Kazuki Enomoto ◽  
Naoyuki Matsumoto ◽  
Masahiko Haruno ◽  
Minoru Kimura
