Phasic dopamine reinforces distinct striatal stimulus encoding in the olfactory tubercle driving dopaminergic reward prediction

2020 · Vol 11 (1)
Author(s): Lars-Lennart Oettl, Max Scheller, Carla Filosa, Sebastian Wieland, Franziska Haag, ...

2019
Author(s): Daniel J. Millman, Venkatesh N. Murthy

Abstract
Rodents can successfully learn multiple, novel stimulus-response associations after only a few repetitions when the contingencies predict reward. The circuits modified during such reinforcement learning to support decision making are not known, but the olfactory tubercle (OT) and posterior piriform cortex (pPC) are candidates for decoding reward category from olfactory sensory input and relaying this information to cognitive and motor areas. Here, we show that an explicit representation for reward category emerges in the OT within minutes of learning a novel odor-reward association, whereas the pPC lacks an explicit representation even after weeks of overtraining. The explicit reward category representation in OT is visible in the first sniff (50-100 ms) of an odor on each trial, and precedes the motor action. Together, these results suggest that coding of stimulus information required for reward prediction does not occur within olfactory cortex, but rather in circuits involving the olfactory striatum.


2020
Author(s): Kate Ergo, Luna De Vilder, Esther De Loof, Tom Verguts

Recent years have witnessed a steady increase in the number of studies investigating the role of reward prediction errors (RPEs) in declarative learning. In several experimental paradigms, RPEs drive declarative learning, with larger and more positive RPEs enhancing it. However, it is unknown whether the RPE must derive from the participant’s own response, or whether any RPE is sufficient to produce the learning effect. To test this, we generated RPEs in a single experimental paradigm that combined an agency and a non-agency condition. We observed no interaction between RPE and agency, suggesting that any RPE, irrespective of its source, can drive declarative learning. This result has implications for theories of declarative learning.
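The RPE construct referenced above is conventionally formalized as the difference between received and expected reward, updated by a delta rule. A minimal sketch, assuming a simple Rescorla-Wagner-style update (function names, learning rate, and values are illustrative and not taken from the study):

```python
# Illustrative reward prediction error (RPE) under a delta-rule update.
# All names and numbers are hypothetical, not from the experiment above.

def rpe(reward, expected_value):
    """RPE: actual reward minus expected reward (delta_t = r_t - V_t)."""
    return reward - expected_value

def update_value(expected_value, reward, learning_rate=0.1):
    """Move the value estimate toward the received reward by a fraction
    (learning_rate) of the prediction error."""
    return expected_value + learning_rate * rpe(reward, expected_value)

# A surprising reward (V = 0.2, r = 1.0) yields a large positive RPE,
# the condition the abstract associates with enhanced declarative learning.
v = 0.2
delta = rpe(1.0, v)       # 0.8: large, positive prediction error
v = update_value(v, 1.0)  # 0.28: value estimate moves toward the reward
```

On this formulation, an RPE is defined by the discrepancy itself, not by who produced the response, which is the distinction the agency manipulation probes.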


2021 · Vol 46 (6) · pp. 1487-1501
Author(s): Bo Yang, Yawen Ao, Ying Liu, Xuefen Zhang, Ying Li, ...

1977 · Vol 131 (2) · pp. 303-312
Author(s): Neil R. Krieger, John S. Kauer, Gordon M. Shepherd, Paul Greengard

1982 · Vol 8 (6) · pp. 711-719
Author(s): Rosalinda Guevara-Aguilar, Luis Pastor Solano-Flores, Olga Alejandra Donatti-Albarran, Hector Ulises Aguilar-Baturoni

2017 · Vol 37 (14) · pp. 3789-3798
Author(s): Dominik R. Bach, Mkael Symmonds, Gareth Barnes, Raymond J. Dolan
