Modeling Eye-Gaze Behavior of Electric Wheelchair Drivers via Inverse Reinforcement Learning

Author(s):  
Yamato Maekawa ◽  
Naoki Akai ◽  
Takatsugu Hirayama ◽  
Luis Yoichi Morales ◽  
Daisuke Deguchi ◽  
...  

2017 ◽  
Vol 137 (4) ◽  
pp. 667-673
Author(s):  
Shinji Tomita ◽  
Fumiya Hamatsu ◽  
Tomoki Hamagami

Author(s):  
Ritesh Noothigattu ◽  
Djallel Bouneffouf ◽  
Nicholas Mattei ◽  
Rachita Chandra ◽  
Piyush Madan ◽  
...  

Autonomous cyber-physical agents play an increasingly large role in our lives. To ensure that they behave in ways aligned with the values of society, we must develop techniques that allow these agents not only to maximize their reward in an environment, but also to learn and follow the implicit constraints of society. We detail a novel approach that uses inverse reinforcement learning to learn a set of unspecified constraints from demonstrations, and reinforcement learning to learn to maximize environmental rewards. A contextual-bandit-based orchestrator then picks between the two policies: constraint-based and environment-reward-based. The contextual bandit orchestrator allows the agent to mix policies in novel ways, taking the best actions from either a reward-maximizing or a constrained policy. In addition, the orchestrator is transparent about which policy is being employed at each time step. We test our algorithms using Pac-Man and show that the agent is able to learn to act optimally, act within the demonstrated constraints, and mix these two functions in complex ways.
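Where the abstract sketches the architecture at a high level, a toy illustration may help. The following is a minimal sketch (not the authors' implementation) of a contextual-bandit orchestrator in the LinUCB style: two arms, one per pretrained policy, each with a per-arm linear value model over a context vector. All class and function names here are hypothetical.

```python
import numpy as np

class PolicyOrchestrator:
    """Contextual bandit that picks between two pretrained policies
    (constraint-based vs. reward-maximizing) at each time step.
    LinUCB-style sketch; the surrounding agent code is assumed."""

    def __init__(self, n_arms=2, dim=8, alpha=1.0):
        self.alpha = alpha
        self.A = [np.eye(dim) for _ in range(n_arms)]    # per-arm Gram matrices
        self.b = [np.zeros(dim) for _ in range(n_arms)]  # per-arm reward sums

    def choose(self, x):
        # Upper-confidence estimate of each arm's value in context x.
        scores = []
        for A, b in zip(self.A, self.b):
            A_inv = np.linalg.inv(A)
            theta = A_inv @ b
            scores.append(theta @ x + self.alpha * np.sqrt(x @ A_inv @ x))
        return int(np.argmax(scores))  # 0 = constrained, 1 = reward-maximizing

    def update(self, arm, x, reward):
        # Standard LinUCB update for the arm that was played.
        self.A[arm] += np.outer(x, x)
        self.b[arm] += reward * x
```

At each step the agent would featurize the current state, call choose() to see which policy acts, execute that policy's action, and report the observed reward via update(); logging the chosen arm index also provides the per-step transparency the abstract mentions.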


2021 ◽  
Author(s):  
Stav Belogolovsky ◽  
Philip Korsunsky ◽  
Shie Mannor ◽  
Chen Tessler ◽  
Tom Zahavy

Abstract We consider the task of inverse reinforcement learning in contextual Markov decision processes (MDPs). In this setting, contexts, which define the reward and transition kernel, are sampled from a distribution. In addition, although the reward is a function of the context, it is not provided to the agent. Instead, the agent observes demonstrations from an optimal policy. The goal is to learn the reward mapping such that the agent will act optimally even when encountering previously unseen contexts, also known as zero-shot transfer. We formulate this problem as a non-differentiable convex optimization problem and propose a novel algorithm to compute its subgradients. Based on this scheme, we analyze several methods both theoretically, where we compare the sample complexity and scalability, and empirically. Most importantly, we show both theoretically and empirically that our algorithms perform zero-shot transfer (generalize to new and unseen contexts). Specifically, we present empirical experiments in a dynamic treatment regime, where the goal is to learn a reward function that explains the behavior of expert physicians based on recorded data of their treatment of patients diagnosed with sepsis.
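To make the subgradient scheme concrete, here is a minimal sketch under strong simplifying assumptions not taken from the paper: the reward is linear in both the context and the state features, and a planner oracle (the hypothetical solve_mdp_feats) returns the feature expectations of an optimal policy for a given reward. Each call takes one projected-subgradient step on the margin between the learner's and the expert's value.

```python
import numpy as np

def irl_subgradient_step(W, contexts, expert_feats, solve_mdp_feats, lr=0.1):
    """One projected-subgradient step for contextual IRL (sketch).

    W                 : reward mapping, reward weights r(c) = W @ c
    contexts          : list of context vectors c
    expert_feats      : expert feature expectations per context (from demos)
    solve_mdp_feats(r): hypothetical planner returning the feature
                        expectations of an optimal policy under reward r

    The per-context loss max_pi <r, mu_pi> - <r, mu_e> is convex in W
    (a max of linear functions minus a linear function), and its
    subgradient is the feature-expectation gap.
    """
    grad = np.zeros_like(W)
    for c, mu_e in zip(contexts, expert_feats):
        r = W @ c                   # reward weights for this context
        mu_pi = solve_mdp_feats(r)  # learner's optimal feature expectations
        grad += np.outer(mu_pi - mu_e, c)
    W = W - lr * grad / len(contexts)
    # Project onto a norm ball; note that in practice an additional
    # normalization constraint is needed to rule out the trivial W = 0.
    norm = np.linalg.norm(W)
    return W if norm <= 1.0 else W / norm
```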


2021 ◽  
Author(s):  
Amarildo Likmeta ◽  
Alberto Maria Metelli ◽  
Giorgia Ramponi ◽  
Andrea Tirinzoni ◽  
Matteo Giuliani ◽  
...  

Abstract In real-world applications, inferring the intentions of expert agents (e.g., human operators) can be fundamental to understanding how possibly conflicting objectives are managed, helping to interpret the demonstrated behavior. In this paper, we discuss how inverse reinforcement learning (IRL) can be employed to retrieve the reward function implicitly optimized by expert agents acting in real applications. Scaling IRL to real-world cases has proved challenging, as typically only a fixed dataset of demonstrations is available and further interactions with the environment are not allowed. For this reason, we resort to a class of truly batch model-free IRL algorithms and present three application scenarios: (1) high-level decision-making in highway driving, (2) inferring user preferences in a social network (Twitter), and (3) managing water release in Lake Como. For each of these scenarios, we provide a formalization, experiments, and a discussion interpreting the obtained results.
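The "truly batch" requirement means the reward must be inferred from logged trajectories alone, with no further environment interaction. As a sketch of working under that constraint, the following uses generic feature-expectation matching with a max-margin subgradient step; this illustrates the batch setting, not necessarily the specific algorithm class the paper adopts, and featurize is a hypothetical placeholder.

```python
import numpy as np

def batch_feature_expectations(trajectories, featurize, gamma=0.99):
    """Estimate discounted feature expectations from a fixed dataset only.
    trajectories: list of [(state, action), ...] tuples from the logs."""
    mus = []
    for traj in trajectories:
        mu = sum(gamma**t * featurize(s, a) for t, (s, a) in enumerate(traj))
        mus.append(mu)
    return np.mean(mus, axis=0)

def max_margin_weights(mu_expert, mu_others, n_iter=200, lr=0.05):
    """Reward weights w (||w|| <= 1) that most separate the expert's
    feature expectations from other logged behaviors (subgradient ascent
    on the margin min_o <w, mu_expert - mu_o>)."""
    w = np.zeros_like(mu_expert)
    for _ in range(n_iter):
        worst = max(mu_others, key=lambda mu: mu @ w)  # most competitive alternative
        w += lr * (mu_expert - worst)                  # margin subgradient
        norm = np.linalg.norm(w)
        if norm > 1.0:
            w /= norm
    return w
```

Everything above consumes only the fixed dataset: both the expert's and the alternative behaviors' feature expectations are estimated from logs, so no simulator or environment access is required.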


2021 ◽  
Vol 15 (1) ◽  
Author(s):  
Yuko Ishizaki ◽  
Takahiro Higuchi ◽  
Yoshitoki Yanagimoto ◽  
Hodaka Kobayashi ◽  
Atsushi Noritake ◽  
...  

Abstract
Background: Children with autism spectrum disorder (ASD) may experience difficulty adapting to daily life in preschool or school settings and are likely to develop psychosomatic symptoms. For a better understanding of the difficulties experienced daily by preschool children and adolescents with ASD, this study investigated differences in eye-gaze behavior in the classroom environment between children with ASD and those with typical development (TD).
Methods: The study evaluated 30 children with ASD and 49 children with TD. Participants were presented with images of a human face and a classroom scene. Eye tracking with an iView X system was used to measure and compare, between the two groups, the duration of gaze on specific regions of the visual stimuli.
Results: Compared with preschool children with TD, preschool children with ASD spent less time gazing at the eyes of the human face and at the object the teacher pointed at in the classroom image. Preschool children with TD who had no classroom experience tended to look at the object the teacher pointed at in the classroom image.
Conclusion: Children with ASD did not look at the eyes in the facial image or at the pointed-at object in the classroom image, which may indicate difficulty analyzing situations, understanding instruction in a classroom, or acting appropriately in a group. This suggests that this gaze behavior of children with ASD contributes to social maladaptation and psychosomatic symptoms. A therapeutic approach that focuses on joint attention is desirable for improving the ability of children with ASD to adapt to their social environment.
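As an illustration of the measurement described in the Methods, here is a minimal sketch of a dwell-time analysis: sum the time gaze samples spend inside a rectangular area of interest (AOI) per child, then compare the two groups with Welch's t-test. The AOI rectangle, data layout, and function names are hypothetical; the study's actual pipeline is not specified.

```python
import numpy as np
from scipy import stats

def dwell_time(gaze_xy, timestamps, aoi):
    """Total time (s) that gaze samples fall inside a rectangular AOI.
    gaze_xy: (N, 2) array of gaze coordinates; timestamps: (N,) seconds;
    aoi: (x0, y0, x1, y1) rectangle, e.g. the eye region or pointed object."""
    x0, y0, x1, y1 = aoi
    inside = (gaze_xy[:, 0] >= x0) & (gaze_xy[:, 0] <= x1) & \
             (gaze_xy[:, 1] >= y0) & (gaze_xy[:, 1] <= y1)
    dt = np.diff(timestamps, prepend=timestamps[0])  # per-sample duration
    return float(dt[inside].sum())

def compare_groups(dwell_asd, dwell_td):
    """Welch's t-test on one dwell-time value per child in each group."""
    t, p = stats.ttest_ind(dwell_asd, dwell_td, equal_var=False)
    return t, p
```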

