A formal methods approach to interpretable reinforcement learning for robotic planning

2019 ◽  
Vol 4 (37) ◽  
pp. eaay6276 ◽  
Author(s):  
Xiao Li ◽  
Zachary Serlin ◽  
Guang Yang ◽  
Calin Belta

Growing interest in reinforcement learning approaches to robotic planning and control raises concerns of predictability and safety of robot behaviors realized solely through learned control policies. In addition, formally defining reward functions for complex tasks is challenging, and faulty rewards are prone to exploitation by the learning agent. Here, we propose a formal methods approach to reinforcement learning that (i) provides a formal specification language that integrates high-level, rich, task specifications with a priori, domain-specific knowledge; (ii) makes the reward generation process easily interpretable; (iii) guides the policy generation process according to the specification; and (iv) guarantees the satisfaction of the (critical) safety component of the specification. The main ingredients of our computational framework are a predicate temporal logic specifically tailored for robotic tasks and an automaton-guided, safe reinforcement learning algorithm based on control barrier functions. Although the proposed framework is quite general, we motivate it and illustrate it experimentally for a robotic cooking task, in which two manipulators worked together to make hot dogs.
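As a rough illustration of the safety component, the sketch below shows a control-barrier-function style filter for a 1D single integrator; the system, barrier, and parameters are illustrative assumptions, not the paper's implementation.

```python
def cbf_filter(x, u_rl, x_max=1.0, alpha=2.0):
    """Minimally correct a learned action so the barrier h(x) = x_max - x stays nonnegative.

    For the 1D single integrator x_dot = u, the CBF condition h_dot + alpha*h >= 0
    reduces to u <= alpha * (x_max - x), so the filter simply clips the RL action.
    """
    u_safe_max = alpha * (x_max - x)
    return min(u_rl, u_safe_max)

# Near the boundary, an aggressive learned action is attenuated:
print(cbf_filter(x=0.95, u_rl=1.0))  # 0.1 (illustrative numbers)
```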

Electronics ◽  
2020 ◽  
Vol 9 (10) ◽  
pp. 1668
Author(s):  
Yuxiang Sun ◽  
Bo Yuan ◽  
Tao Zhang ◽  
Bojian Tang ◽  
Wanwen Zheng ◽  
...  

The reinforcement learning problem of complex action control in multi-player wargames has been a hot research topic in recent years. In this paper, a game system based on turn-based confrontation is designed and implemented with state-of-the-art deep reinforcement learning models. Specifically, we first design a Q-learning algorithm, based on the DQN (Deep Q Network), to achieve intelligent decision-making and model complex game behaviors. Then, a prior-knowledge-based algorithm, PK-DQN (Prior Knowledge-Deep Q Network), is introduced to improve the DQN algorithm, accelerating its convergence and improving its stability. The experiments validate the PK-DQN algorithm and show that its performance surpasses that of the conventional DQN algorithm. Furthermore, the PK-DQN algorithm is effective at defeating high-level rule-based opponents, which provides promising results for the exploration of smart chess and intelligent game deduction.
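One plausible reading of how prior knowledge can bias a DQN-style learner is sketched below; the blending rule, probabilities, and names are hypothetical illustrations, not the PK-DQN rule from the paper.

```python
import random

def select_action(q_values, prior_policy, state, epsilon=0.1, prior_prob=0.3):
    """Illustrative action selection blending a rule-based prior with a learned Q-network.

    With probability `prior_prob` (typically decayed over training), follow the
    hand-coded prior; otherwise act epsilon-greedily on the Q-values. A sketch of
    how domain knowledge can steer exploration, not the paper's exact mechanism.
    """
    if random.random() < prior_prob:
        return prior_policy(state)               # domain knowledge suggests an action
    if random.random() < epsilon:
        return random.randrange(len(q_values))   # random exploration
    return max(range(len(q_values)), key=lambda a: q_values[a])  # greedy on Q
```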


Sensors ◽  
2020 ◽  
Vol 20 (16) ◽  
pp. 4546
Author(s):  
Weiwei Zhao ◽  
Hairong Chu ◽  
Xikui Miao ◽  
Lihong Guo ◽  
Honghai Shen ◽  
...  

Multiple unmanned aerial vehicle (UAV) collaboration has great potential. To increase the intelligence and environmental adaptability of multi-UAV control, we study the application of deep reinforcement learning algorithms to multi-UAV cooperative control. To address the non-stationary environment caused by changing agent strategies in multi-agent reinforcement learning, the paper presents an improved multiagent reinforcement learning algorithm, the multiagent joint proximal policy optimization (MAJPPO) algorithm, with centralized learning and decentralized execution. The algorithm uses a moving window averaging method to give each agent a centralized state value function, so that the agents can collaborate more effectively. The improved algorithm enhances collaboration and increases the sum of reward values obtained by the multiagent system. To evaluate the performance of the algorithm, we use MAJPPO to complete a multi-UAV formation task and the crossing of multiple-obstacle environments. To simplify the control complexity of the UAV, we use a six-degree-of-freedom, 12-state dynamics model of the UAV with an attitude control loop. The experimental results show that the MAJPPO algorithm achieves better performance and better environmental adaptability.
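A minimal sketch of one way to read the moving-window averaging step, assuming each agent's value estimate is smoothed over a fixed window and then averaged into a shared baseline; the class, window size, and combination rule are illustrative assumptions.

```python
from collections import deque
import numpy as np

class WindowedCentralValue:
    """Sketch: smooth each agent's value estimate with a moving window, then average
    across agents to form a shared (centralized) baseline. One plausible reading of
    the MAJPPO description, not the authors' code."""

    def __init__(self, n_agents, window=50):
        self.buffers = [deque(maxlen=window) for _ in range(n_agents)]

    def update(self, per_agent_values):
        for buf, v in zip(self.buffers, per_agent_values):
            buf.append(v)

    def centralized_value(self):
        # mean of each agent's windowed average -> a smoothed joint baseline
        filled = [np.mean(buf) for buf in self.buffers if buf]
        return float(np.mean(filled)) if filled else 0.0
```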


2014 ◽  
Vol 136 (09) ◽  
pp. 36-41
Author(s):  
Krishnanand N. Kaipa ◽  
Joshua D. Langsfeld ◽  
Satyandra K. Gupta

This article elaborates on the concept of programming a robot by showing it how to do the job. This is often called “learning from demonstrations” or “imitation learning.” Labs at several institutions – for example, the Swiss Federal Institute of Technology at Lausanne, the University of Maryland, Massachusetts Institute of Technology, and Worcester Polytechnic Institute – are experimenting with technology that may one day make imitation learning common for machines. The underlying idea of this approach is to allow an agent to acquire the necessary details of how to perform a task by observing another agent (who already has the relevant expertise) perform the same task. Usually, the learning agent is a robot and the teaching agent is a human. Often, the goal of imitation learning approaches is to extract some high-level details about how to perform the task from recorded demonstrations. Research into imitation learning has achieved some impressive results, ranging from training unmanned helicopters to perform complex maneuvers to teaching robots general-purpose manipulation tasks.


2017 ◽  
Vol 1 (1) ◽  
pp. 21-42 ◽  
Author(s):  
Anestis Fachantidis ◽  
Matthew Taylor ◽  
Ioannis Vlahavas

In this article, we study the transfer learning model of action advice under a budget. We focus on reinforcement learning teachers providing action advice to heterogeneous students playing the game of Pac-Man under a limited advice budget. First, we examine several critical factors affecting advice quality in this setting, such as the average performance of the teacher, its variance and the importance of reward discounting in advising. The experiments show that the best performers are not always the best teachers and reveal the non-trivial importance of the coefficient of variation (CV) as a statistic for choosing policies that generate advice. The CV statistic relates variance to the corresponding mean. Second, the article studies policy learning for distributing advice under a budget. Whereas most methods in the relevant literature rely on heuristics for advice distribution, we formulate the problem as a learning one and propose a novel reinforcement learning algorithm capable of learning when to advise or not. The proposed algorithm is able to advise even when it does not have knowledge of the student’s intended action and needs significantly less training time compared to previous learning approaches. Finally, in this article, we argue that learning to advise under a budget is an instance of a more generic learning problem: Constrained Exploitation Reinforcement Learning.
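The CV criterion mentioned above is simple to compute; the snippet below, with made-up returns for two hypothetical teacher policies, shows how a lower-CV policy can be preferred over a higher-variance one.

```python
import numpy as np

def coefficient_of_variation(returns):
    """CV = std / mean of a policy's episode returns; a lower CV means more
    consistent performance relative to its average."""
    returns = np.asarray(returns, dtype=float)
    return returns.std() / returns.mean()

# Hypothetical candidate teacher policies and their Pac-Man returns.
teachers = {"A": [900, 950, 870, 930], "B": [1100, 400, 1500, 600]}
best = min(teachers, key=lambda k: coefficient_of_variation(teachers[k]))
print(best)  # "A": steadier returns make it the preferable advice source
```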


2020 ◽  
Vol 34 (09) ◽  
pp. 13659-13662
Author(s):  
Giuseppe De Giacomo ◽  
Luca Iocchi ◽  
Marco Favorito ◽  
Fabio Patrizi

In this work we have investigated the concept of a “restraining bolt”, inspired by science fiction. We have two distinct sets of features extracted from the world, one by the agent and one by the authority imposing restraining specifications on the behaviour of the agent (the “restraining bolt”). The two sets of features, and hence the models of the world attainable from them, are apparently unrelated, since they are of interest to independent parties; however, they both account for (aspects of) the same world. We have considered the case in which the agent is a reinforcement learning agent over a set of low-level (subsymbolic) features, while the restraining bolt is specified logically using linear temporal logic on finite traces (LTLf/LDLf) over a set of high-level symbolic features. We show formally, and illustrate with examples, that, under general circumstances, the agent can learn while shaping its goals to conform (as much as possible) to the restraining bolt specifications.
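A minimal sketch of the automaton-guided shaping idea, assuming the temporal logic specification has already been compiled into a small deterministic automaton over symbolic events; the events, transitions, and bonus value are invented for illustration.

```python
# Hypothetical 3-state automaton for "eventually pick up, then deliver"; state 2 is accepting.
DFA = {
    (0, "pickup"): 1,
    (1, "deliver"): 2,
}

def bolt_step(dfa_state, high_level_event, env_reward, bonus=1.0):
    """Advance the restraining-bolt automaton on the symbolic event and shape the
    low-level reward so the agent is pushed to conform to the specification."""
    next_state = DFA.get((dfa_state, high_level_event), dfa_state)
    shaped = env_reward + (bonus if next_state != dfa_state else 0.0)
    return next_state, shaped
```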


2020 ◽  
Vol 34 (03) ◽  
pp. 2561-2568
Author(s):  
Morgane Ayle ◽  
Jimmy Tekli ◽  
Julia El-Zini ◽  
Boulos El-Asmar ◽  
Mariette Awad

Research has shown that deep neural networks are able to help and assist human workers throughout the industrial sector via different computer vision applications. However, such data-driven learning approaches require a very large number of labeled training images in order to generalize well and achieve high accuracies that meet industry standards. Gathering and labeling large amounts of images is both expensive and time consuming, specifically for industrial use-cases. In this work, we introduce BAR (Bounding-box Automated Refinement), a reinforcement learning agent that learns to correct inaccurate bounding-boxes that are weakly generated by certain detection methods, or wrongly annotated by a human, using either an offline training method with Deep Reinforcement Learning (BAR-DRL), or an online one using Contextual Bandits (BAR-CB). Our agent limits the human intervention to correcting or verifying a subset of bounding-boxes instead of re-drawing new ones. Results on a car industry-related dataset and on the PASCAL VOC dataset show a consistent increase of up to 0.28 in the Intersection-over-Union of bounding-boxes with their desired ground-truths, while saving 30%-82% of human intervention time in either correcting or re-drawing inaccurate proposals.
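Since the refinement quality is measured by Intersection-over-Union, here is a standard IoU computation in corner coordinates; the example boxes are made up.

```python
def iou(box_a, box_b):
    """Intersection-over-Union of two boxes given as (x1, y1, x2, y2)."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union > 0 else 0.0

# e.g. a weak proposal vs. the ground truth it should be refined toward
print(iou((0, 0, 10, 10), (2, 2, 12, 12)))  # ~0.47
```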


Author(s):  
Hamid R. Tizhoosh ◽  

Reinforcement learning is a machine intelligence scheme for learning in highly dynamic, probabilistic environments. Through interaction with the environment, reinforcement agents learn optimal control policies, especially in the absence of a priori knowledge and/or a sufficiently large amount of training data. Despite its advantages, however, reinforcement learning suffers from a major drawback: high computational cost, because convergence to an optimal solution usually requires that all states be visited frequently to ensure that the policy is reliable. This is not always possible due to the complex, high-dimensional state space of many applications. This paper introduces opposition-based reinforcement learning, inspired by opposition-based learning, to speed up convergence. Considering opposite actions simultaneously enables individual states to be updated more than once, shortening exploration and expediting convergence. Three versions of the Q-learning algorithm are given as examples. Experimental results for grid world problems of different sizes demonstrate the superior performance of the proposed approach.
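A simplified sketch of the opposite-action update in a grid world, assuming the simulator can also report the transition and reward for the opposite action; the table layout and update form are illustrative and differ from the paper's exact formulations.

```python
from collections import defaultdict

OPPOSITE = {"up": "down", "down": "up", "left": "right", "right": "left"}

def oql_update(Q, s, a, r, s_next, r_opp, s_opp_next, alpha=0.1, gamma=0.95):
    """One step of a simplified opposition-based Q-learning update: besides the usual
    update for (s, a), also refresh (s, opposite(a)) using the opposite transition,
    so each visit to s improves two Q-table entries."""
    a_opp = OPPOSITE[a]
    td = r + gamma * max(Q[(s_next, b)] for b in OPPOSITE) - Q[(s, a)]
    Q[(s, a)] += alpha * td
    td_opp = r_opp + gamma * max(Q[(s_opp_next, b)] for b in OPPOSITE) - Q[(s, a_opp)]
    Q[(s, a_opp)] += alpha * td_opp

Q = defaultdict(float)  # state-action table, e.g. states are (row, col) tuples
```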


Author(s):  
Xiaoxiao Guo ◽  
Shiyu Chang ◽  
Mo Yu ◽  
Gerald Tesauro ◽  
Murray Campbell

Existing imitation learning approaches often require that the complete demonstration data, including sequences of actions and states, are available. In this paper, we consider a more realistic and difficult scenario where a reinforcement learning agent only has access to the state sequences of an expert, while the expert actions are unobserved. We propose a novel tensor-based model to infer the unobserved actions from the expert state sequences. The policy of the agent is then optimized via a hybrid objective combining reinforcement learning and imitation learning. We evaluated our hybrid approach on an illustrative domain and Atari games. The empirical results show that (1) the agents are able to leverage expert state sequences to learn faster than pure reinforcement learning baselines, (2) our tensor-based action inference model is advantageous compared to standard deep neural networks in inferring expert actions, and (3) the hybrid policy optimization objective is robust against noise in expert state sequences.
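A rough sketch of a hybrid objective in PyTorch, combining a policy-gradient term on the agent's own rollouts with a cross-entropy term toward actions inferred from expert states; the weighting and tensor names are assumptions, not the paper's exact loss.

```python
import torch
import torch.nn.functional as F

def hybrid_loss(logits, actions_taken, advantages, expert_actions, lam=0.5):
    """Sketch: policy-gradient loss on the agent's rollouts plus an imitation
    (cross-entropy) term toward inferred expert actions; `lam` balances the two."""
    log_probs = F.log_softmax(logits, dim=-1)                    # (batch, n_actions)
    chosen = log_probs.gather(1, actions_taken.unsqueeze(1)).squeeze(1)
    pg_loss = -(chosen * advantages).mean()                      # REINFORCE-style term
    il_loss = F.cross_entropy(logits, expert_actions)            # imitation term
    return pg_loss + lam * il_loss
```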


Author(s):  
Joshua Lye ◽  
Alisa Andrasek

This paper investigates the application of machine learning to the simulation of larger architectural aggregations formed through the recombination of discrete components. This is primarily explored by establishing hardcoded assembly and connection logics, which form the framework of architectural fitness conditions for the machine learning models. The key machine learning models researched are a combination of the deep reinforcement learning algorithm proximal policy optimization (PPO) and Generative Adversarial Imitation Learning (GAIL) in the Unity Machine Learning Agents toolkit. The goal of applying these models is to train the agent behaviours (discrete components) to learn specific connection logics, in order to achieve, through simulation, assembled architectural states that allow for spatial habitation.


Cloud computing has become the basic platform for most user applications in recent years. The increasing complexity of the cloud environment, driven by the continuous development of resources and applications, requires an integrated fault-tolerance approach to maintain quality of service. Focusing on reliability enhancement in an environment with dynamic changes such as the cloud, we developed a multi-agent scheduler that uses a Reinforcement Learning (RL) algorithm with Neural Fitted Q (NFQ) iteration to effectively schedule user requests. Our approach accounts for the queue buffer size of each resource by applying queueing theory to design a queue model in which each scheduler agent has its own queue that receives user requests from the global queue. A central learning agent is responsible for learning from the outputs of the scheduler agents and directing them through feedback obtained from the previous step. The dynamicity of the cloud environment is managed by employing a neural network that supports the reinforcement learning algorithm through a specified function. The numerical results demonstrate the efficiency of the proposed approach and the resulting enhancement in reliability.
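A bare-bones sketch of the queueing structure described above: a global queue feeds bounded per-resource queues, and the routing decision is the part a central RL/NFQ learner would improve. All names and the least-loaded heuristic are illustrative assumptions.

```python
from collections import deque

class SchedulerAgent:
    """Illustrative per-resource agent with a bounded local queue."""
    def __init__(self, buffer_size):
        self.queue = deque(maxlen=buffer_size)

    def accept(self, request):
        if len(self.queue) < self.queue.maxlen:
            self.queue.append(request)
            return True
        return False  # buffer full; the request stays in the global queue

def dispatch(global_queue, agents):
    """Drain the global queue into the least-loaded agent that still has space;
    in the paper this routing decision is what the central learner optimizes."""
    pending = deque()
    while global_queue:
        req = global_queue.popleft()
        target = min(agents, key=lambda a: len(a.queue))
        if not target.accept(req):
            pending.append(req)
    global_queue.extend(pending)
```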

