The cost of obtaining rewards enhances the reward prediction error signal of midbrain dopamine neurons

2019 ◽  
Vol 10 (1) ◽  
Author(s):  
Shingo Tanaka ◽  
John P. O’Doherty ◽  
Masamichi Sakagami

2018 ◽  
Author(s):  
Rachel S. Lee ◽  
Marcelo G. Mattar ◽  
Nathan F. Parker ◽  
Ilana B. Witten ◽  
Nathaniel D. Daw

Abstract
Although midbrain dopamine (DA) neurons have been thought to primarily encode reward prediction error (RPE), recent studies have also found movement-related DAergic signals. For example, we recently reported that DA neurons in mice that project to the dorsomedial striatum are modulated by choices contralateral to the recording side. Here, we introduce, and ultimately reject, a candidate resolution for the puzzling RPE vs. movement dichotomy, by showing how seemingly movement-related activity might be explained by an action-specific RPE. By considering both choice and RPE on a trial-by-trial basis, we find that DA signals are modulated by contralateral choice in a manner that is distinct from RPE, implying that choice encoding is better explained by movement direction. This fundamental separation between RPE and movement encoding may help shed light on the diversity of functions and dysfunctions of the DA system.
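The "action-specific RPE" candidate considered in this abstract can be illustrated with a simple per-action Q-learning update; the two-choice task, reward probabilities, and learning rate below are hypothetical and are not the authors' model.

```python
import numpy as np

# Illustrative sketch only: a per-action Q-learning update in a
# hypothetical two-choice task. Under an action-specific RPE account,
# the dopamine response would scale with delta only for the action
# (e.g. the contralateral choice) that the error covers.
rng = np.random.default_rng(0)
alpha = 0.05                     # learning rate (assumed)
q = np.zeros(2)                  # learned value of left/right choice
p_reward = np.array([0.3, 0.7])  # hypothetical reward probabilities

for _ in range(2000):
    choice = rng.integers(2)
    reward = float(rng.random() < p_reward[choice])
    delta = reward - q[choice]   # action-specific prediction error
    q[choice] += alpha * delta

print(q)  # each entry should approach its arm's reward probability
```

Keeping the error action-specific is what lets the sketch separate choice direction from a single scalar RPE: each arm's value is updated only on trials where that arm was chosen.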


2007 ◽  
Vol 97 (4) ◽  
pp. 3036-3045 ◽  
Author(s):  
Signe Bray ◽  
John O'Doherty

Attractive faces can be considered to be a form of visual reward. Previous imaging studies have reported activity in reward structures including orbitofrontal cortex and nucleus accumbens during presentation of attractive faces. Given that these stimuli appear to act as rewards, we set out to explore whether it was possible to establish conditioning in human subjects by pairing presentation of arbitrary affectively neutral stimuli with subsequent presentation of attractive and unattractive faces. Furthermore, we scanned human subjects with functional magnetic resonance imaging (fMRI) while they underwent this conditioning procedure to determine whether a reward-prediction error signal is engaged during learning with attractive faces as is known to be the case for learning with other types of reward such as juice and money. Subjects showed changes in behavioral ratings to the conditioned stimuli (CS) when comparing post- to preconditioning evaluations, notably for those CSs paired with attractive female faces. We used a simple Rescorla-Wagner learning model to generate a reward-prediction error signal and entered this into a regression analysis with the fMRI data. We found significant prediction error-related activity in the ventral striatum during conditioning with attractive compared with unattractive faces. These findings suggest that an arbitrary stimulus can acquire conditioned value by being paired with pleasant visual stimuli just as with other types of reward such as money or juice. This learning process elicits a reward-prediction error signal in a main target structure of dopamine neurons: the ventral striatum. The findings we describe here may provide insights into the neural mechanisms tapped into by advertisers seeking to influence behavioral preferences by repeatedly exposing consumers to simple associations between products and rewarding visual stimuli such as pretty faces.
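The kind of model the authors describe can be sketched as a minimal Rescorla-Wagner update; the outcome value and learning rate below are assumed for illustration only.

```python
# Minimal Rescorla-Wagner sketch: the conditioned-stimulus value V is
# updated on each CS-face pairing by a reward-prediction error. The
# outcome coding (attractive face = 1.0) and learning rate are assumed.
alpha = 0.2          # learning rate (assumed)
reward = 1.0         # hypothetical value of the attractive-face outcome
V = 0.0              # conditioned-stimulus value
errors = []
for trial in range(20):
    delta = reward - V          # reward-prediction error
    errors.append(delta)
    V += alpha * delta          # Rescorla-Wagner update
print(round(V, 3))   # → 0.988: V approaches the outcome value
```

The trial-by-trial `delta` series is the regressor that would be entered into the fMRI analysis: large early in conditioning, shrinking as the CS value converges on the outcome value.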


2017 ◽  
Author(s):  
Matthew P.H. Gardner ◽  
Geoffrey Schoenbaum ◽  
Samuel J. Gershman

Abstract
Midbrain dopamine neurons are commonly thought to report a reward prediction error, as hypothesized by reinforcement learning theory. While this theory has been highly successful, several lines of evidence suggest that dopamine activity also encodes sensory prediction errors unrelated to reward. Here we develop a new theory of dopamine function that embraces a broader conceptualization of prediction errors. By signaling errors in both sensory and reward predictions, dopamine supports a form of reinforcement learning that lies between model-based and model-free algorithms. This account remains consistent with current canon regarding the correspondence between dopamine transients and reward prediction errors, while also accounting for new data suggesting a role for these signals in phenomena such as sensory preconditioning and identity unblocking, which ostensibly draw upon knowledge beyond reward predictions.


2018 ◽  
Vol 285 (1891) ◽  
pp. 20181645 ◽  
Author(s):  
Matthew P. H. Gardner ◽  
Geoffrey Schoenbaum ◽  
Samuel J. Gershman

Midbrain dopamine neurons are commonly thought to report a reward prediction error (RPE), as hypothesized by reinforcement learning (RL) theory. While this theory has been highly successful, several lines of evidence suggest that dopamine activity also encodes sensory prediction errors unrelated to reward. Here, we develop a new theory of dopamine function that embraces a broader conceptualization of prediction errors. By signalling errors in both sensory and reward predictions, dopamine supports a form of RL that lies between model-based and model-free algorithms. This account remains consistent with current canon regarding the correspondence between dopamine transients and RPEs, while also accounting for new data suggesting a role for these signals in phenomena such as sensory preconditioning and identity unblocking, which ostensibly draw upon knowledge beyond reward predictions.


F1000Research ◽  
2019 ◽  
Vol 8 ◽  
pp. 1680 ◽  
Author(s):  
Wolfram Schultz

The latest animal neurophysiology has revealed that the dopamine reward prediction error signal drives neuronal learning in addition to behavioral learning and reflects subjective reward representations beyond explicit contingency. The signal complies with formal economic concepts and functions in real-world consumer choice and social interaction. An early response component is influenced by physical impact, reward environment, and novelty but does not fully code prediction error. Some dopamine neurons are activated by aversive stimuli, which may reflect physical stimulus impact or true aversiveness, but they do not seem to code general negative value or aversive prediction error. The reward prediction error signal is complemented by distinct, heterogeneous, smaller and slower changes reflecting sensory and motor contributors to behavioral activation, such as substantial movement (as opposed to precise motor control), reward expectation, spatial choice, vigor, and motivation. The different dopamine signals seem to defy a simple unifying concept and should be distinguished to better understand phasic dopamine functions.


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Harry J. Stewardson ◽  
Thomas D. Sambrook

Abstract
Reinforcement learning in humans and other animals is driven by reward prediction errors: deviations between the amount of reward or punishment initially expected and that which is obtained. Temporal difference methods of reinforcement learning generate this reward prediction error at the earliest time at which a revision in reward or punishment likelihood is signalled, for example by a conditioned stimulus. Midbrain dopamine neurons, believed to compute reward prediction errors, generate this signal in response to both conditioned and unconditioned stimuli, as predicted by temporal difference learning. Electroencephalographic recordings of human participants have suggested that a component named the feedback-related negativity (FRN) is generated when this signal is carried to the cortex. If this is so, the FRN should be expected to respond equivalently to conditioned and unconditioned stimuli. However, very few studies have attempted to measure the FRN's response to unconditioned stimuli. The present study attempted to elicit the FRN in response to a primary aversive stimulus (electric shock) using a design that varied reward prediction error while holding physical intensity constant. The FRN was strongly elicited, but earlier and more transiently than typically seen, suggesting that it may incorporate processes other than the midbrain dopamine system.
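The temporal-difference property this abstract relies on, the prediction error migrating from the unconditioned to the conditioned stimulus over learning, can be sketched with a two-event trial; the learning rate, discount, and reward magnitude are assumed for illustration.

```python
# Tabular TD sketch of a CS -> reward trial. Early in training the
# prediction error occurs at reward delivery (the unconditioned
# stimulus); after learning it migrates to the conditioned stimulus.
# All parameters here are assumed for illustration.
alpha, gamma = 0.3, 1.0
V_cs = 0.0                       # learned value of the CS state
cs_errors, us_errors = [], []
for trial in range(50):
    delta_cs = gamma * V_cs      # error when the unexpected CS appears
    cs_errors.append(delta_cs)
    delta_us = 1.0 - V_cs        # error at reward (US) delivery
    us_errors.append(delta_us)
    V_cs += alpha * delta_us     # TD update of the CS value
print(round(cs_errors[-1], 2), round(us_errors[-1], 2))  # → 1.0 0.0
```

If the FRN indexes this signal, it should track `delta_cs` and `delta_us` equivalently, which is the prediction the shock study tests with an unconditioned aversive stimulus.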

