A Step-by-Step Tutorial on Active Inference and its Application to Empirical Data

2021
Author(s):  
Ryan Smith ◽  
Karl Friston ◽  
Christopher Whyte

The active inference framework, and in particular its recent formulation as a partially observable Markov decision process (POMDP), has gained increasing popularity in recent years as a useful approach for modelling neurocognitive processes. The framework is highly general and flexible: it can be customized to model virtually any cognitive process and, via its accompanying neural process theory, to simulate predicted neuronal responses. It affords both simulation experiments for proof of principle and behavioural modelling for empirical studies. However, few resources explain how to build and run these models in practice, which has limited their widespread use. Most introductions assume a technical background in programming, mathematics, and machine learning. In this paper we offer a step-by-step tutorial on how to build POMDPs, run simulations using standard MATLAB routines, and fit these models to empirical data. We assume a minimal background in programming and mathematics, thoroughly explain all equations, and provide exemplar scripts that can be customized for both theoretical and empirical studies. Our goal is to provide the reader with the requisite background knowledge and practical tools to apply active inference to their own research. We also provide optional technical sections and several appendices that offer the interested reader additional technical details. Together, these materials should equip the reader to use these models and to follow emerging advances in active inference research.
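
To make the (A, B, C, D) structure of such POMDP models concrete, here is a minimal sketch in Python rather than the tutorial's MATLAB, for a hypothetical two-state, two-observation problem. The array names mirror conventional active inference notation, but the numbers and the single-step belief update are purely illustrative, not taken from the paper.

```python
import numpy as np

# Minimal two-state, two-observation generative model in the conventional
# active inference (A, B, C, D) notation. The numbers are illustrative.
A = np.array([[0.9, 0.1],     # A: p(observation | hidden state)
              [0.1, 0.9]])
B = np.array([[0.8, 0.2],     # B: p(next state | current state), one action
              [0.2, 0.8]])
C = np.array([1.0, 0.0])      # C: (log) preferences over observations
D = np.array([0.5, 0.5])      # D: prior beliefs over initial hidden states

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def update_beliefs(o, prior):
    # Posterior over hidden states after observing outcome o:
    # q(s) is proportional to p(o | s) p(s), computed in log space for stability.
    return softmax(np.log(A[o] + 1e-16) + np.log(prior + 1e-16))

q = update_beliefs(o=0, prior=D)  # observe outcome 0 at the first time step
print(q)                          # posterior beliefs now favour state 0
```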

2019
Author(s):  
Anthony Guanxun Chen ◽  
David Benrimoh ◽  
Thomas Parr ◽  
Karl J. Friston

Abstract: This paper offers a formal account of policy learning, or habitual behavioural optimisation, under the framework of Active Inference. In this setting, habit formation becomes an autodidactic, experience-dependent process based upon what the agent sees itself doing. We focus on the effect of environmental volatility on habit formation by simulating artificial agents operating in a partially observable Markov decision process. Specifically, we used a ‘two-step’ maze paradigm in which the agent has to decide whether to go left or right to secure a reward. We observe that in volatile environments with numerous reward locations, the agents learn to adopt a generalist strategy, never forming a strong habitual preference for any maze direction. Conversely, in conservative or static environments, agents adopt a specialist strategy, forming strong preferences for policies that approach a small number of previously observed reward locations. The pros and cons of the two strategies are tested and discussed; in general, specialisation offers greater benefits, but only when contingencies are conserved over time. We consider the implications of this formal (Active Inference) account of policy learning for understanding the relationship between specialisation and habit formation.

Author Summary: Active inference is a theoretical framework that formalizes the behaviour of any organism in terms of a single imperative: to minimize surprise. Starting from this principle, we can construct simulations of simple “agents” (artificial organisms) that can infer causal relationships and learn. Here, we expand upon existing implementations of Active Inference by enabling synthetic agents to optimise the space of behavioural policies that they can pursue. Our results show that by adapting the probabilities of certain action sequences (which may correspond biologically to the phenomenon of synaptic plasticity), and by rejecting improbable sequences (synaptic pruning), the agents can begin to form habits. Furthermore, this habit formation is environment-dependent: some agents become specialised to a constant environment, while others adopt a more general strategy, each with sensible pros and cons. This work has potential applications in computational psychiatry, including behavioural phenotyping to better understand disorders.
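
As a rough illustration of the experience-dependent policy learning described above, the following Python sketch increments a Dirichlet concentration vector over policies whenever a policy is enacted (cf. synaptic plasticity) and prunes policies whose habit strength stays negligible (cf. synaptic pruning). The update rate, pruning threshold, and sampling scheme are our own simplifications, not the paper's model.

```python
import numpy as np

# Toy habit-formation loop: a Dirichlet concentration vector over policies
# is strengthened whenever a policy is enacted (cf. synaptic plasticity),
# and policies whose habit strength stays negligible are pruned
# (cf. synaptic pruning). Rates and thresholds are our own choices.
rng = np.random.default_rng(seed=0)
n_policies = 4
e = np.ones(n_policies)        # Dirichlet counts: initial habit strengths

def habit_prior(e):
    return e / e.sum()         # expected probability of each policy

for trial in range(200):
    # A full agent would also weigh expected free energy; sampling from the
    # habit prior alone isolates the rich-get-richer plasticity dynamics.
    pi = rng.choice(n_policies, p=habit_prior(e))
    e[pi] += 0.5               # strengthen the policy the agent enacted

keep = habit_prior(e) > 0.05   # prune policies that never became habitual
print(habit_prior(e), keep)
```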


2019
Author(s):  
Ryan Smith ◽  
Sahib Khalsa ◽  
Martin Paulus

Abstract
Background: Antidepressant medication adherence is among the most important problems in health care worldwide. Interventions designed to increase adherence have largely failed, pointing towards a critical need to better understand the underlying decision-making processes that contribute to adherence. A computational decision-making model that integrates empirical data with a fundamental action selection principle could be pragmatically useful in (1) making individual-level predictions about adherence, and (2) providing an explanatory framework that improves our understanding of non-adherence.
Methods: Here we formulate a partially observable Markov decision process model, based on the active inference framework, that can simulate several processes that plausibly influence adherence decisions.
Results: Using model simulations of the day-to-day decisions to take a prescribed selective serotonin reuptake inhibitor (SSRI), we show that several distinct parameters in the model can influence adherence decisions in predictable ways. These parameters include differences in policy depth (i.e., how far into the future one considers when deciding), decision uncertainty, beliefs about the predictability (stochasticity) of symptoms, beliefs about the magnitude and time course of symptom reductions and side effects, and the strength of acquired medication-taking habits.
Conclusions: Clarifying these influential factors will be an important first step toward empirically determining which of them contribute to non-adherence to antidepressants in individual patients. The model can also be seamlessly extended to simulate adherence to other medications (by incorporating the known symptom-reduction and side-effect trajectories of those medications), with the potential promise of identifying which medications may be best suited for different patients.
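
The following Python sketch illustrates, with toy numbers of our own, how two of the listed parameters could shape a single day's adherence decision: decision precision (a softmax inverse temperature) and habit strength (a bias toward the habitual medication-taking policy). It is not the paper's model, only a hedged illustration of the mechanism.

```python
import numpy as np

# Toy illustration of two model parameters named above: decision precision
# (softmax inverse temperature, gamma) and habit strength (a bias toward
# the habitual "take medication" policy). All values are placeholders.
def choice_probs(value_take, value_skip, gamma, habit_bias):
    v = np.array([value_take + habit_bias, value_skip])
    e = np.exp(gamma * (v - v.max()))
    return e / e.sum()          # [p(take), p(skip)]

# Early in treatment: side effects dominate and relief is not yet felt,
# so the expected value of taking the SSRI is negative.
print(choice_probs(value_take=-0.5, value_skip=0.0, gamma=4.0, habit_bias=0.0))
# The same values once a medication-taking habit has been acquired:
# adherence becomes the most probable choice despite the side effects.
print(choice_probs(value_take=-0.5, value_skip=0.0, gamma=4.0, habit_bias=1.0))
```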


Author(s):  
Debi A. LaPlante ◽  
Heather M. Gray ◽  
Pat M. Williams ◽  
Sarah E. Nelson

Abstract. Aims: To discuss and review the latest research related to gambling expansion. Method: We completed a literature review and empirical comparison of peer-reviewed findings related to gambling expansion and subsequent gambling-related changes among the population. Results: Although gambling expansion is associated with changes in gambling and gambling-related problems, empirical studies suggest that these effects are mixed and the available literature is limited. For example, the peer-reviewed literature suggests that most post-expansion gambling outcomes (i.e., 22 of 34 possible expansion outcomes; 64.7%) indicate no observable change or a decrease in gambling outcomes, and a minority (i.e., 12 of 34; 35.3%) indicate an increase. Conclusions: Empirical data related to gambling expansion suggest that its effects are more complex than frequently assumed; however, evidence-based intervention might help prepare jurisdictions to deal with potential consequences. Jurisdictions can develop and evaluate responsible gambling programs to try to mitigate the impacts of expanded gambling.


Author(s):  
Chaochao Lin ◽  
Matteo Pozzi

Optimal exploration of engineering systems can be guided by the principle of Value of Information (VoI), which accounts for the topological importance of components, their reliability, and the management costs. For series systems, in most cases higher inspection priority should be given to unreliable components. For redundant systems, such as parallel systems, analysis of one-shot decision problems shows that higher inspection priority should be given to more reliable components. This paper investigates the optimal exploration of redundant systems in long-term decision making with sequential inspection and repair. When the expected cumulative discounted cost is considered, it may become more efficient to give higher inspection priority to less reliable components, in order to preserve system redundancy. To investigate this problem, we develop a Partially Observable Markov Decision Process (POMDP) framework for sequential inspection and maintenance of redundant systems, in which VoI analysis is embedded in the optimal selection of exploratory actions. We investigate the use of alternative approximate POMDP solvers for parallel and more general systems, compare their computational complexity and performance, and show how the inspection priorities depend on the economic discount factor, the degradation rate, the inspection precision, and the repair cost.
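
As a concrete, one-shot illustration of the VoI principle embedded in the POMDP above, consider a single component in Python. The costs are made up and the inspection is assumed perfect, unlike the noisy, sequential setting the paper actually studies.

```python
# One-shot Value-of-Information calculation for inspecting one component,
# with made-up costs and a perfect (noise-free) inspection. VoI is the
# expected cost of the best decision made blind minus the expected cost
# when the decision can react to the inspection outcome.
p_fail = 0.3      # prior probability that the component has degraded
c_repair = 1.0    # cost of a preventive repair
c_failure = 5.0   # expected loss if a degraded component is left alone

# Without inspection: always repair, or never repair, whichever is cheaper.
cost_blind = min(c_repair, p_fail * c_failure)

# With a perfect inspection: repair exactly when degradation is found.
cost_informed = p_fail * min(c_repair, c_failure)

voi = cost_blind - cost_informed
print(f"VoI = {voi:.2f}")  # inspect only if VoI exceeds the inspection cost
```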


2018
Vol 15 (02)
pp. 1850011
Author(s):  
Frano Petric ◽  
Damjan Miklić ◽  
Zdenko Kovačić

The existing procedures for autism spectrum disorder (ASD) diagnosis are often time-consuming and tiresome both for highly trained human evaluators and for children, a burden that may be alleviated by using humanoid robots in the diagnostic process. Hence, this paper proposes a framework for robot-assisted ASD evaluation based on partially observable Markov decision process (POMDP) modelling, specifically POMDPs with mixed observability (MOMDPs). The POMDP is broadly used for modelling optimal sequential decision-making tasks under uncertainty. Inspired by the widely accepted Autism Diagnostic Observation Schedule (ADOS), we emulate ADOS through four tasks whose models incorporate observations of multiple social cues, such as eye contact, gestures, and utterances. Relying only on those observations, the robot assesses the child’s ASD-relevant functioning level (which is partially observable) within a particular task and provides human evaluators with readable information by partitioning its belief space. Finally, we evaluate the proposed MOMDP task models and demonstrate that chaining the tasks provides fine-grained outcome quantification, which could also increase the appeal of robot-assisted diagnostic protocols in the future.
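
The following Python sketch illustrates the two mechanisms highlighted above: a Bayesian belief update over the partially observable functioning level given observed social cues, and a coarse partitioning of the belief space into evaluator-readable labels. All probabilities and thresholds are illustrative placeholders, not values from the paper.

```python
import numpy as np

# Hidden state: the child's functioning level (0 = typical, 1 = ASD-relevant).
# Observations: binary social cues (e.g., eye contact made or not).
# All probabilities and thresholds below are illustrative placeholders.
p_cue = np.array([0.8, 0.3])   # p(cue observed | functioning level)

def update(belief, cue_observed):
    likelihood = p_cue if cue_observed else 1.0 - p_cue
    posterior = likelihood * belief
    return posterior / posterior.sum()

def readable(belief):
    # Partition the belief space into coarse, evaluator-friendly regions.
    if belief[1] > 0.7:
        return "ASD-relevant behaviour likely"
    if belief[1] < 0.3:
        return "typical behaviour likely"
    return "uncertain: continue with further tasks"

b = np.array([0.5, 0.5])
for cue in (False, False, True):   # two missed cues, then one observed cue
    b = update(b, cue)
print(b, readable(b))
```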


Author(s):  
Chuande Liu ◽  
Chuang Yu ◽  
Bingtuan Gao ◽  
Syed Awais Ali Shah ◽  
Adriana Tapus

Abstract: Telemanipulation in power stations commonly requires robots first to open doors and then to gain access to a new workspace. However, opened doors can easily be closed by disturbances, interrupting operations and potentially leading to collision damage. Although existing telemanipulation follows a highly efficient master–slave work pattern thanks to human-in-the-loop control, it is not trivial for a user to specify the optimal measures that guarantee safety. This paper investigates the safety-critical motion planning and control problem of balancing robotic safety against manipulation performance during work emergencies. Based on a dynamic workspace released by the closing door, the interactions between the workspace and the robot are analyzed using a partially observable Markov decision process, so that the balancing mechanism is executed as belief tree planning. To enact the resulting plans, in addition to telemanipulation actions we define three further safety-guaranteeing actions for self-protection: on guard, defense, and escape, triggered by estimated collision risk levels. Our experiments show that the proposed method can determine multiple solutions for balancing robotic safety and work efficiency during telemanipulation tasks.
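
A minimal Python sketch of the risk-triggered action selection described above; the thresholds and the scalar risk scale are hypothetical, standing in for the paper's belief-tree estimates of collision risk.

```python
# Risk-triggered behaviour selection: an estimated collision risk level in
# [0, 1] chooses among nominal telemanipulation and the three
# self-protective actions. Thresholds and the risk scale are hypothetical.
def select_action(collision_risk: float) -> str:
    if collision_risk < 0.25:
        return "telemanipulation"  # nominal master-slave operation
    if collision_risk < 0.50:
        return "on guard"          # slow down and monitor the closing door
    if collision_risk < 0.80:
        return "defense"           # brace or reposition to tolerate contact
    return "escape"                # retreat from the shrinking workspace

for risk in (0.1, 0.4, 0.7, 0.9):
    print(f"risk={risk:.1f} -> {select_action(risk)}")
```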

