Parafascicular Thalamic and Orbitofrontal Cortical Inputs to Striatum Represent States for Goal-Directed Action Selection

2021 · Vol 15
Author(s): Sandy Stayte, Amolika Dhungana, Bryce Vissel, Laura A. Bradfield

Several lines of evidence accrued over the last 5–10 years have converged to suggest that the parafascicular nucleus of the thalamus (PF) and the lateral orbitofrontal cortex (OFC) each represent or contribute to internal state/context representations that guide action selection in partially observable task situations. In rodents, inactivation of either structure selectively impairs performance in paradigms testing goal-directed action selection, but only when that action selection relies on state representations. Electrophysiological evidence suggests that each structure achieves this function via inputs onto cholinergic interneurons (CINs) in the dorsomedial striatum. Here, we briefly review these studies, then turn to anatomical evidence regarding the afferents of each structure and what they suggest about the specific features each contributes to internal state representations. Finally, we speculate as to whether this role is achieved interdependently through direct PF→OFC projections, or through the convergence of independent direct OFC and PF inputs onto striatal targets.

2017 · Vol 81 (4) · pp. 366-377
Author(s): Kelsey S. Zimmermann, John A. Yamin, Donald G. Rainnie, Kerry J. Ressler, Shannon L. Gourley

2019 · Vol 70 (1) · pp. 53-76
Author(s): Melissa J. Sharpe, Thomas Stalnaker, Nicolas W. Schuck, Simon Killcross, Geoffrey Schoenbaum, ...

Making decisions in environments with few choice options is easy: we select the action that results in the most valued outcome. Making decisions in more complex environments, where the same action can produce different outcomes in different conditions, is much harder. In such circumstances, we propose that accurate action selection relies on top-down control from the prelimbic and orbitofrontal cortices over striatal activity through distinct thalamostriatal circuits. We suggest that the prelimbic cortex exerts direct influence over medium spiny neurons in the dorsomedial striatum to represent the state space relevant to the current environment. The orbitofrontal cortex, by contrast, is argued to track the subject's position within that state space, likely through modulation of cholinergic interneurons.
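To make concrete why a state representation matters here, consider a minimal sketch (a toy example with assumed task names and values, not the authors' model): an agent whose action values are indexed by (state, action) can solve a task in which the same action is rewarded in one context but not another, while a state-blind agent cannot.

```python
import random

# Toy illustration (assumed task and values, not the authors' model): the
# same action earns reward in one context but not the other, so an agent
# must index its values by (state, action) to choose correctly.
ALPHA = 0.1  # learning rate (assumed)
REWARD = {("light", "press"): 1.0, ("light", "pull"): 0.0,
          ("dark", "press"): 0.0, ("dark", "pull"): 1.0}

q_state = {k: 0.0 for k in REWARD}      # values indexed by (state, action)
q_blind = {"press": 0.0, "pull": 0.0}   # values indexed by action alone

for _ in range(5000):
    state = random.choice(["light", "dark"])
    action = random.choice(["press", "pull"])
    r = REWARD[(state, action)]
    q_state[(state, action)] += ALPHA * (r - q_state[(state, action)])
    q_blind[action] += ALPHA * (r - q_blind[action])

print(q_state)  # ~1.0 for the rewarded pairing in each state, ~0.0 otherwise
print(q_blind)  # ~0.5 for both actions: no basis for selective choice
```

The state-aware agent separates the two contexts; the state-blind agent averages over them and has no basis for choosing differently in each.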


Open Biology · 2016 · Vol 6 (12) · Art. 160229
Author(s): E. Axel Gorostiza, Julien Colomb, Björn Brembs

Like a moth to a flame: phototaxis is an iconic example of an innate preference. Such preferences probably reflect evolutionary adaptations to predictable situations and have traditionally been conceptualized as hard-wired stimulus–response links. Perhaps for that reason, the century-old discovery of flexibility in Drosophila phototaxis has received little attention. Here, we report that across several different behavioural tests, light/dark preference tested in walking depends on various aspects of flight. If we temporarily compromise flying ability, walking photopreference reverses concomitantly. Neuronal activity in dopaminergic and octopaminergic circuits plays differential roles in photopreference, suggesting an involvement of these biogenic amines in this case of behavioural flexibility. We conclude that flies monitor their ability to fly, and that flying ability exerts a fundamental effect on action selection in Drosophila. This work suggests that even behaviours which appear simple and hard-wired comprise a value-driven decision-making stage that negotiates the external situation with the animal's internal state before an action is selected.


2020 · Vol 124 (2) · pp. 634-644
Author(s): Long Yang, Sotiris C. Masmanidis

While previous literature shows that both orbitofrontal cortex (OFC) and dorsomedial striatum (DMS) represent information relevant to selecting specific actions, few studies have directly compared neural signals between these areas. Here we compared OFC and DMS dynamics in mice performing a two-alternative choice task and found that the animal's choice could be decoded more accurately from DMS population activity. This work provides some of the first evidence that OFC and DMS differentially represent information about an animal's selected action.
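The decoding comparison described above follows a standard population-decoding recipe. Below is a hypothetical sketch with simulated spike counts and a generic cross-validated logistic-regression decoder (all names and parameters are illustrative assumptions, not the authors' pipeline):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Simulated data standing in for recorded activity: X is a trials x neurons
# matrix of spike counts, y is the animal's binary choice on each trial.
rng = np.random.default_rng(0)
n_trials, n_neurons = 200, 50
y = rng.integers(0, 2, n_trials)                  # simulated left/right choices
tuning = rng.normal(0.0, 1.0, n_neurons)          # per-neuron choice tuning
rates = 5.0 + np.outer(y, tuning).clip(min=-4.0)  # choice-modulated firing rates
X = rng.poisson(rates)                            # Poisson spike counts

# Cross-validated decoding accuracy: higher accuracy implies a stronger
# population-level representation of the selected action in that area.
decoder = LogisticRegression(max_iter=1000)
accuracy = cross_val_score(decoder, X, y, cv=5).mean()
print(f"decoding accuracy: {accuracy:.2f}")  # chance level is 0.5
```

Running the same decoder on trial-matched OFC and DMS populations is what licenses the comparison of decoding accuracy between areas.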


2019 · Art. 105971231989164
Author(s): Viet-Hung Dang, Ngo Anh Vien, TaeChoong Chung

Learning to make decisions in partially observable environments is a notoriously hard problem that requires a complex representation of the controller. In most work, the controller is designed as a non-linear mapping from a sequence of temporal observations to actions. These problems can, in principle, be formulated as partially observable Markov decision processes whose policies can be parameterised through recurrent neural networks. In this paper, we propose an alternative framework that (a) uses the long short-term memory (LSTM) encoder-decoder framework to learn an internal state representation of historical observations and then (b) integrates it into existing recurrent policy models to improve task performance. The LSTM encoder maps a history of observations onto a representation of the internal state. The LSTM decoder can perform two alternative decoding tasks: reconstructing the input observation sequence or predicting future observation sequences. The first decoder acts as an auto-encoder, guiding and constraining the learning of an internal state that is useful for the policy optimisation task. The second decoder uses the internal state learnt by the encoder to predict future observation sequences, making the network act like a non-linear predictive state representation model. Both decoders introduce constraints on the policy representation that help guide both policy optimisation and latent state representation learning. Integrating representation learning with policy optimisation aims to support more complex policies and improve the performance of policy learning.
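A minimal PyTorch sketch of the proposed architecture may help (dimensions, names, and the teacher-forcing detail are assumptions, not the authors' exact implementation): an LSTM encoder compresses the observation history into a latent internal state, and an LSTM decoder is trained either to reconstruct the input sequence (the auto-encoding variant) or to predict future observations (the predictive-state variant).

```python
import torch
import torch.nn as nn

class LSTMEncoderDecoder(nn.Module):
    """Sketch of the encoder-decoder; sizes are illustrative assumptions."""

    def __init__(self, obs_dim=16, hidden_dim=64):
        super().__init__()
        self.encoder = nn.LSTM(obs_dim, hidden_dim, batch_first=True)
        self.decoder = nn.LSTM(obs_dim, hidden_dim, batch_first=True)
        self.readout = nn.Linear(hidden_dim, obs_dim)

    def forward(self, history, targets):
        # Encode the observation history; (h, c) is the internal state that
        # would also be passed to the recurrent policy.
        _, (h, c) = self.encoder(history)
        # Teacher forcing: the decoder sees targets shifted one step right,
        # so each output predicts the next observation instead of copying it.
        dec_in = torch.cat([torch.zeros_like(targets[:, :1]), targets[:, :-1]], dim=1)
        out, _ = self.decoder(dec_in, (h, c))
        return self.readout(out), h

model = LSTMEncoderDecoder()
history = torch.randn(8, 20, 16)              # batch x time x obs_dim
pred, state = model(history, history)         # auto-encoding variant
loss = nn.functional.mse_loss(pred, history)  # reconstruction objective
```

For the predictive variant, `targets` would instead be the observations following `history`, turning the reconstruction loss into a prediction loss on future observations.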


2011 · Vol 23 (6) · pp. 1549-1566
Author(s): F. Gregory Ashby, Matthew J. Crossley

An essential component of skill acquisition is learning the environmental conditions in which that skill is relevant. This article proposes and tests a neurobiologically detailed theory of how such learning is mediated. The theory assumes that a key component of this learning is provided by the cholinergic interneurons in the striatum known as tonically active neurons (TANs). The TANs are assumed to exert a tonic inhibitory influence over cortical inputs to the striatum that prevents the execution of any striatal-dependent actions. The TANs learn to pause in rewarding environments, and this pause releases the striatal output neurons from this inhibitory effect, thereby facilitating the learning and expression of striatal-dependent behaviors. When rewards are no longer available, the TANs cease to pause, which protects striatal learning from decay. A computational version of this theory accounts for a variety of single-cell recording data and some classic behavioral phenomena, including fast reacquisition after extinction.
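The gating idea at the core of the theory can be sketched numerically (update rules and parameter values here are illustrative assumptions, not the published computational model): reward teaches the TANs to pause, the pause gates cortico-striatal plasticity, and when rewards stop the pause collapses, freezing rather than erasing the striatal weights.

```python
ALPHA = 0.2        # learning rate (assumed)
tan_pause = 0.0    # 0 = fully tonic TAN firing, 1 = full reward-trained pause
w_striatal = 0.0   # cortico-striatal weight for the trained action

def trial(reward_available: bool):
    global tan_pause, w_striatal
    r = 1.0 if reward_available else 0.0
    # TANs track reward availability and learn to pause in rewarding contexts.
    tan_pause += ALPHA * (r - tan_pause)
    # Striatal plasticity is gated by the pause: once the pause collapses in
    # extinction, the weight stops changing, protecting it from decay.
    w_striatal += ALPHA * tan_pause * (r - w_striatal)

for _ in range(50):
    trial(True)    # acquisition: pause and striatal weight both grow
acquired = w_striatal
for _ in range(50):
    trial(False)   # extinction: pause collapses, weight partially protected
print(f"w after extinction: {w_striatal:.2f} (acquired: {acquired:.2f})")
```

Because a substantial fraction of `w_striatal` survives extinction, relearning when reward returns starts well above zero, which is the model's account of fast reacquisition.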


2007 · Vol 1121 (1) · pp. 174-192
Author(s): S. B. Ostlund, B. W. Balleine

2014 · Vol 40 (4) · pp. 1027-1036
Author(s): Andrew M Swanson, Amanda G Allen, Lauren P Shapiro, Shannon L Gourley
