2021
Author(s): Luke T Coddington, Sarah E Lindo, Joshua T Dudman

Recent success in training artificial agents and robots derives from a combination of direct learning of behavioral policies and indirect learning via value functions. Policy learning and value learning employ distinct algorithms that depend upon evaluation of performance errors and reward prediction errors, respectively. In animals, behavioral learning and the role of mesolimbic dopamine signaling have been extensively evaluated with respect to reward prediction errors; however, to date there has been little consideration of how direct policy learning might inform our understanding. Here we used a comprehensive dataset of orofacial and body movements to reveal how behavioral policies evolved as naive, head-restrained mice learned a trace conditioning paradigm. Simultaneous multi-regional measurement of dopamine activity revealed that individual differences in initial dopamine reward responses robustly predicted behavioral policy hundreds of trials later, whereas variation in reward prediction error encoding did not. These observations were remarkably well matched to the predictions of a neural network-based model of behavioral policy learning. This work provides strong evidence that phasic dopamine activity regulates policy learning from performance errors in addition to its role in value learning, and further expands the explanatory power of reinforcement learning models for animal learning.
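The abstract's central contrast — value learning driven by reward prediction errors versus direct policy learning driven by performance-weighted policy gradients — can be illustrated with a minimal sketch. This toy example is not the authors' model; it assumes a single rewarded action and uses a textbook delta-rule value update alongside a REINFORCE-style policy update, purely to show how the two algorithms use different error signals:

```python
import math
import random

random.seed(0)

def value_learning(rewards, alpha=0.1):
    """Indirect value learning: update an estimate V from reward
    prediction errors (RPEs), delta = r - V, as in TD-style learning."""
    V = 0.0
    rpes = []
    for r in rewards:
        delta = r - V          # reward prediction error
        V += alpha * delta
        rpes.append(delta)
    return V, rpes

def policy_learning(n_trials=2000, alpha=0.1):
    """Direct policy learning (REINFORCE-style): adjust a policy
    parameter theta along the reward-weighted gradient of log pi(a),
    with no value estimate and hence no RPE."""
    theta = 0.0
    for _ in range(n_trials):
        p = 1.0 / (1.0 + math.exp(-theta))     # P(choose action 1)
        a = 1 if random.random() < p else 0
        r = 1.0 if a == 1 else 0.0             # only action 1 is rewarded
        grad_logp = (1.0 - p) if a == 1 else -p  # d log pi(a) / d theta
        theta += alpha * r * grad_logp
    return theta

# Value learner: V converges toward the true reward, RPEs decay to zero.
V, rpes = value_learning([1.0] * 100)
# Policy learner: theta grows, so the rewarded action dominates.
theta = policy_learning()
```

In the value learner the RPE itself shrinks as predictions improve, which is the signature typically sought in dopamine recordings; in the policy learner there is no prediction term at all, and learning is driven by the covariance between action and reward — the kind of performance-error signal the abstract argues dopamine may also convey.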
