Striatal dynamics explain duration judgments

eLife ◽  
2015 ◽  
Vol 4 ◽  
Author(s):  
Thiago S. Gouvêa ◽  
Tiago Monteiro ◽  
Asma Motiwala ◽  
Sofia Soares ◽  
Christian K. Machens ◽  
...  

The striatum is an input structure of the basal ganglia implicated in several time-dependent functions including reinforcement learning, decision making, and interval timing. To determine whether striatal ensembles drive subjects' judgments of duration, we manipulated and recorded from striatal neurons in rats performing a duration categorization psychophysical task. We found that the dynamics of striatal neurons predicted duration judgments, and that simultaneously recorded ensembles could judge duration as well as the animal. Furthermore, striatal neurons were necessary for duration judgments, as muscimol infusions produced a specific impairment in animals' duration sensitivity. Lastly, we show that time as encoded by striatal populations ran faster or slower when rats judged a duration as longer or shorter, respectively. These results demonstrate that the speed with which the striatal population state changes supports the fundamental ability of animals to judge the passage of time.
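
The ensemble-decoding logic described here can be illustrated with a small sketch: a linear readout trained on population firing rates plays the role of the "ensemble judge" of duration. Everything below (the synthetic data, array sizes, the category boundary, and the LogisticRegression decoder) is an illustrative assumption, not the authors' analysis pipeline.

```python
# Hedged sketch: decode "long" vs. "short" judgments from a synthetic
# pseudo-population whose state drifts with elapsed time.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_neurons = 200, 40                 # hypothetical ensemble size
durations = rng.uniform(0.6, 2.4, n_trials)   # sampled interval lengths (s)
is_long = (durations > 1.5).astype(int)       # assumed category boundary

# Each neuron's rate drifts with elapsed time (plus noise), mimicking a
# population state that evolves over the interval.
slopes = rng.normal(0.0, 1.0, n_neurons)
rates = durations[:, None] * slopes[None, :] \
        + rng.normal(0.0, 1.0, (n_trials, n_neurons))

# The "ensemble judge": cross-validated accuracy of a linear readout.
acc = cross_val_score(LogisticRegression(), rates, is_long, cv=5).mean()
print(f"ensemble judgment accuracy: {acc:.2f}")
```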


2017 ◽  
Author(s):  
Rafal Bogacz

This paper proposes how neural circuits in vertebrates select actions on the basis of past experience and the current motivational state. According to the presented theory, the basal ganglia evaluate the utility of considered actions by combining their positive consequences (e.g. nutrition), scaled by the motivational state (e.g. hunger), with their negative consequences (e.g. effort). The theory suggests how the basal ganglia compute utility by combining the positive and negative consequences encoded in the synaptic weights of striatal Go and No-Go neurons with the motivational state carried by neuromodulators including dopamine. Furthermore, the theory suggests how striatal neurons learn separately about the consequences of actions, and how dopaminergic neurons themselves learn what level of activity they need to produce to optimize behaviour. The theory accounts for the effects of dopaminergic modulation on behaviour, patterns of synaptic plasticity in the striatum, and responses of dopaminergic neurons in diverse situations.
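
One way to make the proposed utility computation concrete is a toy readout in which dopamine scales learned payoffs against learned costs. The functional form below (dopamine-weighted Go drive minus No-Go drive) is an assumed simplification for illustration, not the paper's exact equations.

```python
# Hedged sketch: utility as a dopamine-scaled Go/No-Go combination.
import numpy as np

def utility(go_w, nogo_w, dopamine):
    """Net drive for each action: motivation-scaled payoff minus cost."""
    return dopamine * go_w - (1.0 - dopamine) * nogo_w

go_w   = np.array([2.0, 1.0])   # learned positive consequences (e.g. nutrition)
nogo_w = np.array([0.5, 1.5])   # learned negative consequences (e.g. effort)

for d in (0.2, 0.8):            # low vs. high motivational state (e.g. hunger)
    u = utility(go_w, nogo_w, d)
    print(f"dopamine={d}: utilities={u}, selected action={int(u.argmax())}")
```

Under this toy form, raising dopamine shifts choice toward actions with large payoffs even when their costs are substantial, which is the qualitative behaviour the theory attributes to motivational modulation.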


2011 ◽  
Vol 23 (1) ◽  
pp. 151-167 ◽  
Author(s):  
Ahmed A. Moustafa ◽  
Mark A. Gluck

Most existing models of dopamine and learning in Parkinson disease (PD) focus on simulating the role of basal ganglia dopamine in reinforcement learning. Much data argue, however, for a critical role for prefrontal cortex (PFC) dopamine in stimulus selection during attentional learning. Here, we present a new computational model that simulates performance in multicue category learning, such as the "weather prediction" task. The model addresses how PD and dopamine medications affect stimulus selection processes, which mediate reinforcement learning. In this model, PFC dopamine is key for attentional learning, whereas basal ganglia dopamine, consistent with other models, is key for reinforcement and motor learning. The model assumes that competitive dynamics among PFC neurons are the neural mechanism underlying stimulus selection with limited attentional resources, whereas competitive dynamics among striatal neurons are the neural mechanism underlying action selection. According to our model, PD is associated with decreased phasic and tonic dopamine levels in both PFC and basal ganglia. We assume that dopamine medications raise tonic dopamine levels in both the basal ganglia and PFC while decreasing the magnitude of phasic dopamine signaling in these brain structures. Increasing tonic dopamine levels in the simulated PFC enhances attentional shifting performance. The model provides a mechanistic account for several phenomena, including (a) that medicated PD patients are more impaired at multicue probabilistic category learning than unmedicated patients and (b) that medicated PD patients opt out of reversal when there are alternative and redundant cue dimensions.
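
A toy implementation helps separate the two roles the model assigns to dopamine. In the sketch below, attention weights stand in for PFC stimulus selection, a softmax gain stands in for tonic striatal dopamine, and a prediction error stands in for phasic dopamine; all parameter values and update rules are illustrative assumptions, not the published model.

```python
# Hedged sketch: attention-gated multicue category learning with
# competitive (softmax) action selection.
import numpy as np

rng = np.random.default_rng(1)
n_cues, n_actions = 4, 2
attn = np.ones(n_cues) / n_cues        # PFC-like attentional weights
W = np.zeros((n_actions, n_cues))      # striatal stimulus-action weights
alpha_bg, alpha_pfc = 0.10, 0.05       # learning rates (assumed)
tonic_da = 1.0                         # softmax gain, a stand-in for tonic DA

def run_trial(cues, correct_action):
    x = attn * cues                            # attention-gated input (PFC)
    q = W @ x                                  # striatal action values
    p = np.exp(tonic_da * q); p /= p.sum()     # competitive action selection
    a = rng.choice(n_actions, p=p)
    rpe = float(a == correct_action) - q[a]    # phasic-DA-like teaching signal
    W[a] += alpha_bg * rpe * x                 # reinforcement learning (BG)
    attn += alpha_pfc * rpe * cues * W[a]      # shift attention toward useful cues
    attn[:] = np.clip(attn, 1e-3, None)        # keep attention weights positive...
    attn /= attn.sum()                         # ...and normalized

# Cue 0 alone predicts the correct response; attention should migrate to it.
for _ in range(500):
    cues = rng.integers(0, 2, n_cues).astype(float)
    run_trial(cues, correct_action=int(cues[0]))
print(attn)   # the weight on the predictive cue should come to dominate
```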


2019 ◽  
Vol 121 (5) ◽  
pp. 1748-1760 ◽  
Author(s):  
John G. Mikhael ◽  
Samuel J. Gershman

The modulation of interval timing by dopamine (DA) has been well established over decades of research. The nature of this modulation, however, has remained controversial: although the pharmacological evidence has largely suggested that time intervals are overestimated with higher DA levels, more recent optogenetic work has shown the opposite effect. In addition, a large body of work has asserted DA's role as a "reward prediction error" (RPE), or a teaching signal that allows the basal ganglia to learn to predict future rewards in reinforcement learning tasks. Whether these two seemingly disparate accounts of DA may be related has remained an open question. By taking a reinforcement learning-based approach to interval timing, we show here that the RPE interpretation of DA naturally extends to its role as a modulator of timekeeping and, furthermore, that this view reconciles the seemingly conflicting observations. We derive a biologically plausible, DA-dependent plasticity rule that can modulate the rate of timekeeping in either direction and whose effect depends on the timing of the DA signal itself. This bidirectional update rule can account for the results from pharmacology and optogenetics, as well as for the behavioral effects of reward rate on interval timing and the temporal selectivity of striatal neurons. Hence, by adopting a single RPE interpretation of DA, our results take a step toward unifying computational theories of reinforcement learning and interval timing.

NEW & NOTEWORTHY: How does dopamine (DA) influence interval timing? A large body of pharmacological evidence has suggested that DA accelerates timekeeping mechanisms. However, recent optogenetic work has shown exactly the opposite effect. In this article, we relate DA's role in timekeeping to its most established role as a critical component of reinforcement learning. This allows us to derive a neurobiologically plausible framework that reconciles a large body of DA's temporal effects, including pharmacological, behavioral, electrophysiological, and optogenetic findings.
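
As a concrete (and heavily simplified) illustration of a bidirectional, DA-dependent rate update, the sketch below nudges a scalar "clock speed" with an RPE-like signal whose sign depends on whether reward arrives earlier or later than the learned subjective time. This is an assumed toy rule, not the plasticity rule derived in the paper.

```python
# Hedged sketch: bidirectional modulation of a subjective clock speed.
g = 1.0            # subjective clock speed (1.0 = veridical timekeeping)
eta = 0.05         # learning rate (assumed)
expected_t = 2.0   # learned reward time in subjective units

for true_t in (1.6, 1.6, 2.4, 2.4):     # actual reward times (s)
    subjective_t = g * true_t           # subjective elapsed time at reward
    rpe = expected_t - subjective_t     # early reward -> positive RPE
    g += eta * rpe                      # DA-like signal speeds or slows the clock
    print(f"reward at {true_t:.1f} s -> clock speed g = {g:.3f}")
```

Early rewards (positive RPE) speed the clock and late rewards slow it, so the same teaching signal can push timekeeping in either direction depending on when it arrives.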


2016 ◽  
Author(s):  
Kyle Dunovan ◽  
Timothy Verstynen

The flexibility of behavioral control is a testament to the brain's capacity for dynamically resolving uncertainty during goal-directed actions. This ability to select actions and learn from immediate feedback is driven by the dynamics of basal ganglia (BG) pathways. A growing body of empirical evidence conflicts with the traditional view that these pathways act as independent levers for facilitating (i.e., direct pathway) or suppressing (i.e., indirect pathway) motor output, suggesting instead that they engage in a dynamic competition during action decisions that computationally captures action uncertainty. Here we discuss the utility of encoding action uncertainty as a dynamic competition between opposing control pathways and provide evidence that this simple mechanism may have powerful implications for bridging neurocomputational theories of decision making and reinforcement learning.
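
The "dynamic competition" framing lends itself to a simple accumulator sketch: the decision variable is the running difference between direct- and indirect-pathway drive, and an action is gated only when that difference reaches a threshold. Parameter values and the diffusion form below are illustrative assumptions, not the authors' model.

```python
# Hedged sketch: direct vs. indirect pathway drive as a noisy accumulator.
import numpy as np

def bg_gate(v_direct, v_indirect, threshold=1.0, dt=0.001, noise=0.5,
            max_t=2.0, rng=np.random.default_rng(2)):
    """Accumulate (direct - indirect) drive; return gating time or None."""
    dv, t = 0.0, 0.0
    while t < max_t:
        dv += (v_direct - v_indirect) * dt \
              + noise * np.sqrt(dt) * rng.standard_normal()
        t += dt
        if dv >= threshold:
            return t           # facilitation wins: the action is gated
    return None                # suppression holds within the deadline

print(bg_gate(2.0, 0.5))       # low conflict: fast, reliable gating
print(bg_gate(1.2, 1.0))       # high conflict (uncertainty): slow or no gating
```

Because the gap between the two pathways sets the drift rate, action uncertainty maps directly onto slower and more variable decisions, which is the computational point the authors emphasize.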


2011 ◽  
Vol 23 (4) ◽  
pp. 817-851 ◽  
Author(s):  
Rafal Bogacz ◽  
Tobias Larsen

This article seeks to integrate two sets of theories describing action selection in the basal ganglia: reinforcement learning theories describing how organisms learn which actions to select to maximize reward, and decision-making theories proposing that the basal ganglia select actions on the basis of sensory evidence accumulated in the cortex. In particular, we present a model that integrates the actor-critic model of reinforcement learning and a model assuming that the cortico-basal-ganglia circuit implements a statistically optimal decision-making procedure. The values of cortico-striatal weights required for optimal decision making in our model differ from those provided by standard reinforcement learning models. Nevertheless, we show that an actor-critic model converges to the weights required for optimal decision making when biologically realistic limits on synaptic weights are introduced. We also describe the model's predictions concerning reaction times and neural responses during learning, and we discuss directions required for further integration of reinforcement learning and optimal decision-making theories.
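
The convergence claim can be illustrated with a minimal actor-critic in which the cortico-striatal (actor) weights are clipped at zero, a simple stand-in for the biologically realistic limits the article discusses. The task, learning rates, and clipping rule below are assumptions for illustration, not the paper's model.

```python
# Hedged sketch: actor-critic with a non-negativity limit on actor weights.
import numpy as np

rng = np.random.default_rng(3)
n_actions = 2
w = np.zeros(n_actions)            # actor: cortico-striatal weights
v = 0.0                            # critic: learned state value
alpha_actor, alpha_critic = 0.05, 0.10
p_reward = np.array([0.8, 0.3])    # assumed payoff probabilities

for _ in range(2000):
    p = np.exp(w); p /= p.sum()                # softmax action selection
    a = rng.choice(n_actions, p=p)
    r = float(rng.random() < p_reward[a])      # stochastic reward
    delta = r - v                              # prediction error (critic)
    v += alpha_critic * delta
    w[a] += alpha_actor * delta                # actor update
    w = np.maximum(w, 0.0)                     # biologically motivated floor
print(w, v)   # weights settle toward values favoring the richer action
```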


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Batel Yifrah ◽  
Ayelet Ramaty ◽  
Genela Morris ◽  
Avi Mendelsohn

Decision making can be shaped both by trial-and-error experience and by memory of unique contextual information. Moreover, both types of information can be acquired either through active experience or by observing others behave in similar situations. The interactions between the reinforcement learning parameters that inform decision updating and the formation of declarative memories in experienced and observational learning settings are, however, unknown. In the current study, participants took part in a probabilistic decision-making task involving situations that either yielded outcomes similar to those of an observed player or opposed them. By fitting alternative reinforcement learning models to each subject, we distinguished participants who learned similarly from experience and observation from those who assigned different weights to learning signals from the two sources. Participants who weighted their own experience differently from that of others displayed enhanced memory performance, as well as greater subjective memory strength, for episodes involving significant reward prospects. Conversely, the memory performance of participants who did not prioritize their own experience over that of others did not seem to be influenced by reinforcement learning parameters. These findings demonstrate that interactions between implicit and explicit learning systems depend on the means by which individuals weigh relevant information conveyed via experience and observation.
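
The model-fitting idea at the center of this study can be sketched as a Q-learner with separate learning rates for self-generated and observed outcomes; comparing the fitted rates per participant is what distinguishes the two groups described above. The function below is a hypothetical illustration, not the authors' fitted model.

```python
# Hedged sketch: one Q-update with source-dependent learning rates.
import numpy as np

def update_q(q, action, reward, alpha_exp, alpha_obs, observed=False):
    """Update one Q-value; the learning rate depends on the outcome's source."""
    alpha = alpha_obs if observed else alpha_exp
    q[action] += alpha * (reward - q[action])
    return q

q = np.zeros(2)
q = update_q(q, action=0, reward=1.0, alpha_exp=0.4, alpha_obs=0.1)            # own outcome
q = update_q(q, action=1, reward=1.0, alpha_exp=0.4, alpha_obs=0.1,
             observed=True)                                                    # observed outcome
print(q)   # asymmetric updates, e.g. [0.4, 0.1]
```

A participant whose fitted alpha_exp and alpha_obs are roughly equal corresponds to the "learned similarly from both sources" group; a large gap corresponds to the group whose memory performance tracked reward prospects.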

