Model-based hierarchical reinforcement learning and human action control

2014 ◽  
Vol 369 (1655) ◽  
pp. 20130480 ◽  
Author(s):  
Matthew Botvinick ◽  
Ari Weinstein

Recent work has reawakened interest in goal-directed or ‘model-based’ choice, where decisions are based on prospective evaluation of potential action outcomes. Concurrently, there has been growing attention to the role of hierarchy in decision-making and action control. We focus here on the intersection between these two areas of interest, considering the topic of hierarchical model-based control. To characterize this form of action control, we draw on the computational framework of hierarchical reinforcement learning, using this to interpret recent empirical findings. The resulting picture reveals how hierarchical model-based mechanisms might play a special and pivotal role in human decision-making, dramatically extending the scope and complexity of human behaviour.
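
To make the framework concrete, here is a minimal Python sketch of model-based choice over "options", the temporally extended actions of hierarchical reinforcement learning. The toy "make coffee" options, their outcome models, and the state values are illustrative placeholders, not simulations from the article: each option carries a predictive model of where it terminates, and the agent chooses by prospectively evaluating those simulated outcomes.

    # Illustrative sketch of prospective evaluation over options (hierarchical,
    # model-based choice). All options, outcomes and values are toy placeholders.
    from dataclasses import dataclass
    from typing import Callable, Dict, List, Tuple

    State = str

    @dataclass
    class Option:
        """A temporally extended action: a name plus an outcome model mapping a
        start state to a list of (probability, termination_state, reward)."""
        name: str
        outcomes: Callable[[State], List[Tuple[float, State, float]]]

    def choose(state: State, options: Dict[str, Option],
               value: Callable[[State], float], gamma: float = 0.95) -> str:
        """Model-based choice: score each option by simulating its predicted
        outcomes and adding the discounted value of its termination state."""
        def score(opt: Option) -> float:
            return sum(p * (r + gamma * value(s2))
                       for p, s2, r in opt.outcomes(state))
        return max(options.values(), key=score).name

    # Toy usage: two high-level options for a hypothetical 'make coffee' task.
    options = {
        "boil_water": Option("boil_water", lambda s: [(1.0, "water_hot", -1.0)]),
        "grind_beans": Option("grind_beans", lambda s: [(0.9, "beans_ground", -1.0),
                                                        (0.1, s, -1.0)]),
    }
    value = lambda s: {"water_hot": 5.0, "beans_ground": 4.0}.get(s, 0.0)
    print(choose("start", options, value))  # -> boil_water

The point of the hierarchy is that an outcome model attaches to a whole subroutine rather than to a primitive action, which is what lets prospective evaluation scale to extended behaviour.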

2020 ◽  
Author(s):  
Clay B. Holroyd ◽  
Tom Verguts

Despite continual debate for the past thirty years about the function of anterior cingulate cortex (ACC), its key contribution to neurocognition remains unknown. Here we review computational models that illustrate three core principles of ACC function (related to hierarchy, world models and cost), as well as four constraints on the neural implementation of these principles (related to modularity, binding, encoding, and learning and regulation). These observations suggest a role for ACC in model-based hierarchical reinforcement learning, which instantiates a mechanism for motivating the execution of high-level plans.


Author(s):  
Hartwig Steusloff ◽  
Michael Decker

Extremely complex systems like the smart grid or autonomous cars need to meet society's high expectations regarding their safe operation. The human designer and operator becomes a “system component” as soon as responsible decision making is needed. Tacit knowledge and other human properties are of crucial relevance for situation-dependent decisions. The uniform modeling of technical systems and humans will benefit from ethical reflection. In this chapter, we describe human action with technical means and ask, on the one hand, for a comprehensive multidisciplinary technology assessment in order to produce supporting knowledge and methods for technical and societal decision making. On the other hand—and here is the focus—we propose a system life cycle approach which integrates the human in the loop and argue that it can be worthwhile to describe humans in a technical way in order to implement human decision making by means of the use case method. Ethical reflection and even ethically based technical decision making can support the effective control of convergent technology systems.


2021 ◽  
Author(s):  
Daoming Lyu ◽  
Fangkai Yang ◽  
Hugh Kwon ◽  
Bo Liu ◽  
Wen Dong ◽  
...  

Human-robot interactive decision-making is increasingly ubiquitous, and explainability is an influential factor in determining reliance on autonomy. However, it is not reasonable to trust systems beyond our comprehension, and typical machine learning and data-driven decision-making are black-box paradigms that impede explainability. It is therefore critical to establish computationally efficient decision-making mechanisms enhanced by explainability-aware strategies. To this end, we propose Trustworthy Decision-Making (TDM), an explainable neuro-symbolic approach that integrates symbolic planning into hierarchical reinforcement learning. The TDM framework enables subtask-level explainability through causally related, understandable subtasks. TDM also demonstrates the advantage of integrating symbolic planning with reinforcement learning, reaping the benefits of both worlds. Experimental results validate the effectiveness of the proposed method while improving the explainability of the decision-making process.
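
As a rough sketch of how such an integration can yield subtask-level explanations (this is not the authors' TDM code; the plan library, subtask names, and tabular Q-learner are hypothetical stand-ins), a symbolic planner can emit an ordered, human-readable sequence of subtasks, with a learned subpolicy handling each subtask's low-level control:

    # Sketch of a neuro-symbolic loop in the spirit of TDM; the plan library,
    # subtask names and Q-learner below are hypothetical stand-ins.
    import random
    from typing import Dict, List, Tuple

    def symbolic_plan(goal: str) -> List[str]:
        """Stand-in for a symbolic planner: an ordered, human-readable list of
        subtasks achieving the goal (the source of subtask-level explanations)."""
        plans = {"deliver_item": ["navigate_to_shelf", "pick_item",
                                  "navigate_to_dropoff"]}
        return plans[goal]

    class SubPolicy:
        """Tabular Q-learning subpolicy for one subtask's low-level control."""
        def __init__(self, actions: List[str], alpha: float = 0.1,
                     gamma: float = 0.99, eps: float = 0.1):
            self.q: Dict[Tuple[str, str], float] = {}
            self.actions, self.alpha = actions, alpha
            self.gamma, self.eps = gamma, eps

        def act(self, state: str) -> str:
            if random.random() < self.eps:                 # explore
                return random.choice(self.actions)
            return max(self.actions, key=lambda a: self.q.get((state, a), 0.0))

        def update(self, s: str, a: str, r: float, s2: str) -> None:
            best_next = max(self.q.get((s2, a2), 0.0) for a2 in self.actions)
            td = r + self.gamma * best_next - self.q.get((s, a), 0.0)
            self.q[(s, a)] = self.q.get((s, a), 0.0) + self.alpha * td

    def explain_then_execute(goal: str, policies: Dict[str, SubPolicy]) -> None:
        for subtask in symbolic_plan(goal):       # the plan is the explanation:
            print("executing subtask:", subtask)  # each step is a nameable goal
            # a real system would now roll out policies[subtask] to termination

    policies = {name: SubPolicy(["forward", "left", "right", "grip"])
                for name in symbolic_plan("deliver_item")}
    explain_then_execute("deliver_item", policies)

Because the plan is stated in symbolic subtasks, a user can ask why the robot acted as it did at the level of named goals, while the RL subpolicies remain free to learn reactive low-level behaviour.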


2020 ◽  
Vol 4 (3) ◽  
pp. 294-307 ◽  
Author(s):  
Ji-An Li ◽  
Daoyi Dong ◽  
Zhengde Wei ◽  
Ying Liu ◽  
Yu Pan ◽  
...  

2020 ◽  
Author(s):  
Milena Rmus ◽  
Samuel McDougle ◽  
Anne Collins

Reinforcement learning (RL) models have advanced our understanding of how animals learn and make decisions, and how the brain supports some aspects of learning. However, the neural computations that are explained by RL algorithms fall short of explaining many sophisticated aspects of human decision making, including the generalization of learned information, one-shot learning, and the synthesis of task information in complex environments. Instead, these aspects of instrumental behavior are assumed to be supported by the brain’s executive functions (EF). We review recent findings that highlight the importance of EF in learning. Specifically, we advance the theory that EF sets the stage for canonical RL computations in the brain, providing inputs that broaden their flexibility and applicability. Our theory has important implications for how to interpret RL computations in the brain and behavior.
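
A minimal sketch of this division of labour (our own toy construction, not a model from the review): treat EF as a state-construction step that hands the RL system a task-relevant state, so that generalization across superficially different observations comes from the EF input rather than from the delta-rule computation itself.

    # Toy sketch: executive function (EF) as a state-construction step feeding a
    # canonical RL update. The abstraction rule below is a hypothetical example.
    from collections import defaultdict

    def ef_abstraction(raw_obs: dict) -> str:
        """Hypothetical EF contribution: attend to the task-relevant feature and
        discard the rest, so observations sharing that feature share a value."""
        return raw_obs["relevant_dim"]

    values = defaultdict(float)
    alpha = 0.1  # learning rate

    def rl_update(raw_obs: dict, reward: float) -> None:
        state = ef_abstraction(raw_obs)                    # EF input to RL
        values[state] += alpha * (reward - values[state])  # canonical delta rule

    rl_update({"relevant_dim": "red", "irrelevant_dim": "square"}, 1.0)
    rl_update({"relevant_dim": "red", "irrelevant_dim": "circle"}, 1.0)
    print(values["red"])  # ≈ 0.19: the second trial generalized to the first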


2017 ◽  
Vol 1 ◽  
pp. 24-57 ◽  
Author(s):  
Woo-Young Ahn ◽  
Nathaniel Haines ◽  
Lei Zhang

Reinforcement learning and decision-making (RLDM) provide a quantitative framework and computational theories with which we can disentangle psychiatric conditions into the basic dimensions of neurocognitive functioning. RLDM offer a novel approach to assessing and potentially diagnosing psychiatric patients, and there is growing enthusiasm for both RLDM and computational psychiatry among clinical researchers. Such a framework can also provide insights into the brain substrates of particular RLDM processes, as exemplified by model-based analysis of data from functional magnetic resonance imaging (fMRI) or electroencephalography (EEG). However, researchers often find the approach too technical and have difficulty adopting it for their research. Thus, a critical need remains to develop a user-friendly tool for the wide dissemination of computational psychiatric methods. We introduce an R package called hBayesDM (hierarchical Bayesian modeling of Decision-Making tasks), which offers computational modeling of an array of RLDM tasks and social exchange games. The hBayesDM package offers state-of-the-art hierarchical Bayesian modeling, in which both individual and group parameters (i.e., posterior distributions) are estimated simultaneously in a mutually constraining fashion. At the same time, the package is extremely user-friendly: users can perform computational modeling, output visualization, and Bayesian model comparisons, each with a single line of code. Users can also extract the trial-by-trial latent variables (e.g., prediction errors) required for model-based fMRI/EEG. With the hBayesDM package, we anticipate that anyone with minimal knowledge of programming can take advantage of cutting-edge computational-modeling approaches to investigate the underlying processes of and interactions between multiple decision-making (e.g., goal-directed, habitual, and Pavlovian) systems. In this way, we expect that the hBayesDM package will contribute to the dissemination of advanced modeling approaches and enable a wide range of researchers to easily perform computational psychiatric research within different populations.
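
hBayesDM itself is an R package whose models are fit with Stan, so the following Python/numpy fragment is only a schematic of the hierarchical structure described above, with illustrative distributions and a toy bandit likelihood: individual-subject parameters are scored against group-level parameters, which is what makes the two levels mutually constraining.

    # Schematic of hierarchical ("partial pooling") estimation, not hBayesDM code.
    import numpy as np

    rng = np.random.default_rng(0)

    # Group level: mean and spread of the learning rate across subjects.
    group_mu, group_sigma = 0.3, 0.1

    # Individual level: each subject's learning rate is treated as a draw from
    # the group distribution, so noisy single-subject estimates shrink toward
    # group_mu.
    n_subjects = 20
    alphas = np.clip(rng.normal(group_mu, group_sigma, n_subjects), 0.01, 0.99)

    def log_prior(alpha: float) -> float:
        """Score an individual parameter against the group distribution; this
        is the 'mutually constraining' link between the two levels."""
        return -0.5 * ((alpha - group_mu) / group_sigma) ** 2

    def choice_loglik(alpha: float, choices, rewards, beta: float = 3.0) -> float:
        """Toy likelihood: a two-armed bandit subject learning by a delta rule
        and choosing through a softmax with inverse temperature beta."""
        q, ll = np.zeros(2), 0.0
        for c, r in zip(choices, rewards):
            p = np.exp(beta * q) / np.exp(beta * q).sum()  # softmax policy
            ll += np.log(p[c])
            q[c] += alpha * (r - q[c])                     # RL update
        return ll

    # One subject's (unnormalized) posterior is shaped jointly by both levels.
    choices, rewards = [0, 0, 1, 0], [1.0, 1.0, 0.0, 1.0]
    print(choice_loglik(alphas[0], choices, rewards) + log_prior(alphas[0]))

In hBayesDM these quantities are sampled with MCMC rather than evaluated pointwise, yielding full posterior distributions for the group and for every individual.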

