Multi-agent Deception in Attack-Defense Stochastic Game

Author(s):  
Xueting Li ◽  
Sha Yi ◽  
Katia Sycara


2011 ◽
Vol 14 (02) ◽  
pp. 251-278 ◽  
Author(s):  
SAM DEVLIN ◽  
DANIEL KUDENKO ◽  
MAREK GRZEŚ

This paper investigates the impact of reward shaping in multi-agent reinforcement learning as a way to incorporate domain knowledge about good strategies. In theory, potential-based reward shaping does not alter the Nash equilibria of a stochastic game; it affects only the exploration of the shaped agent. We empirically evaluate reward shaping in two problem domains within the context of RoboCup KeepAway by designing three reward-shaping schemes that encourage specific behaviour, such as keeping a minimum distance from other players on the same team or taking on specific roles. The results illustrate that reward shaping with multiple, simultaneously learning agents can reduce the time needed to learn a suitable policy and can alter the final group performance.
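To make the mechanism concrete, the sketch below shows potential-based shaping of the form F(s, s') = γΦ(s') − Φ(s) being added to the environment reward. The distance-based potential and the `state.own_pos` / `state.teammate_pos` fields are illustrative assumptions standing in for one of the paper's schemes (keeping a minimum distance from teammates); this is not the authors' exact implementation.

```python
import math

GAMMA = 0.99  # discount factor of the underlying stochastic game

def potential(state):
    """Hypothetical potential: highest in states where the keeper is at
    least MIN_DIST from its nearest teammate, one of the behaviours the
    paper's shaping schemes encourage."""
    MIN_DIST = 5.0
    nearest = min(math.dist(state.own_pos, p) for p in state.teammate_pos)
    return min(nearest / MIN_DIST, 1.0)

def shaped_reward(env_reward, state, next_state):
    """Potential-based shaping: F(s, s') = GAMMA * potential(s') - potential(s).
    Because F is a difference of potentials, adding it to the reward leaves
    the Nash equilibria of the stochastic game unchanged; it only biases
    the shaped agent's exploration toward high-potential states."""
    return env_reward + GAMMA * potential(next_state) - potential(state)
```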


2019 ◽  
Vol 66 ◽  
pp. 473-502 ◽  
Author(s):  
Xiaomin Lin ◽  
Stephen C. Adams ◽  
Peter A. Beling

This paper addresses the problem of multi-agent inverse reinforcement learning (MIRL) in a two-player general-sum stochastic game framework. Five variants of MIRL are considered: uCS-MIRL, advE-MIRL, cooE-MIRL, uCE-MIRL, and uNE-MIRL, each distinguished by its solution concept. Problem uCS-MIRL is a cooperative game in which the agents employ cooperative strategies that aim to maximize the total game value. In problem uCE-MIRL, agents are assumed to follow strategies that constitute a correlated equilibrium while maximizing total game value. Problem uNE-MIRL is similar to uCE-MIRL in total game value maximization, but it is assumed that the agents are playing a Nash equilibrium. Problems advE-MIRL and cooE-MIRL assume agents are playing an adversarial equilibrium and a coordination equilibrium, respectively. We propose novel approaches to address these five problems under the assumption that the game observer either knows or is able to accurately estimate the policies and solution concepts for players. For uCS-MIRL, we first develop a characteristic set of solutions ensuring that the observed bi-policy is a uCS and then apply a Bayesian inverse learning method. For uCE-MIRL, we develop a linear programming problem subject to constraints that define necessary and sufficient conditions for the observed policies to be correlated equilibria. The objective is to choose a solution that not only minimizes the total game value difference between the observed bi-policy and a local uCS, but also maximizes the scale of the solution. We apply a similar treatment to the problem of uNE-MIRL. The remaining two problems can be solved efficiently by taking advantage of solution uniqueness and setting up a convex optimization problem. Results are validated on various benchmark grid-world games.
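As a point of reference for the correlated-equilibrium constraints mentioned above, the sketch below solves the *forward* problem for a one-shot two-player game: it finds a correlated equilibrium maximizing total value via linear programming. The paper's uCE-MIRL setting inverts this logic in a stochastic game (recovering rewards given observed equilibrium play), so this illustrates only the constraint structure, not the authors' algorithm; the function name and setup are assumptions.

```python
import numpy as np
from scipy.optimize import linprog

def max_welfare_correlated_eq(A, B):
    """Find a correlated equilibrium of the bimatrix game (A, B) that
    maximizes total game value. A[i, j] and B[i, j] are the payoffs to
    players 1 and 2 under the joint action (i, j)."""
    m, n = A.shape
    idx = lambda i, j: i * n + j  # flatten joint action (i, j)

    rows = []
    # Player 1 incentive constraints: when action i is recommended,
    # deviating to any k must not pay, i.e. sum_j p(i,j)(A[k,j]-A[i,j]) <= 0.
    for i in range(m):
        for k in range(m):
            if k == i:
                continue
            row = np.zeros(m * n)
            for j in range(n):
                row[idx(i, j)] = A[k, j] - A[i, j]
            rows.append(row)
    # Player 2 incentive constraints, symmetrically.
    for j in range(n):
        for k in range(n):
            if k == j:
                continue
            row = np.zeros(m * n)
            for i in range(m):
                row[idx(i, j)] = B[i, k] - B[i, j]
            rows.append(row)

    res = linprog(
        c=-(A + B).flatten(),            # maximize total value
        A_ub=np.array(rows), b_ub=np.zeros(len(rows)),
        A_eq=np.ones((1, m * n)), b_eq=np.array([1.0]),  # sums to 1
        bounds=[(0, None)] * (m * n),
    )
    assert res.success
    return res.x.reshape(m, n)           # distribution over joint actions
```

A correlated equilibrium always exists (any Nash equilibrium is one), so the LP is always feasible; the inverse direction studied in the paper instead searches for rewards under which the *observed* bi-policy satisfies these constraints.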


Author(s):  
Friedrich Burkhard von der Osten ◽  
Michael Kirley ◽  
Tim Miller

The Theory of Mind provides a framework for an agent to predict the actions of adversaries by building an abstract model of their strategies using recursively nested beliefs. In this paper, we extend a recently introduced technique for opponent modeling based on Theory of Mind reasoning. Our extended multi-agent Theory of Mind model explicitly considers multiple opponents simultaneously. We introduce a stereotyping mechanism, which segments the agent population into sub-groups of agents with similar behavior; sub-group profiles then guide decision making in place of individual agent profiles. We evaluate our model using a multi-player stochastic game that presents agents with the challenge of unknown adversaries in a partially observable environment. Simulation results demonstrate that the model performs well under uncertainty and that stereotyping allows larger groups of agents to be modeled robustly. The findings reinforce prior results showing that Theory of Mind modeling is useful in many artificial intelligence applications.
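A minimal sketch of how such a stereotyping step might look follows, assuming each agent's behavior profile is an empirical action-frequency vector and using k-means to form the sub-groups; the feature choice and clustering method are illustrative assumptions, and the paper's exact segmentation criterion may differ.

```python
import numpy as np
from scipy.cluster.vq import kmeans2

def stereotype(behavior_profiles, n_groups):
    """Cluster per-agent behavior profiles (rows = agents, columns =
    empirical action frequencies) into sub-groups. Each centroid acts as
    a shared stereotype profile used for predictions in place of
    individual agent models."""
    centroids, labels = kmeans2(behavior_profiles, n_groups,
                                minit='++', seed=0)
    return centroids, labels

def predict_action(agent_id, labels, centroids, rng=None):
    """Predict an opponent's next action by sampling from its group's
    stereotype profile, treated here as a distribution over actions."""
    if rng is None:
        rng = np.random.default_rng()
    profile = np.clip(centroids[labels[agent_id]], 0, None)
    profile /= profile.sum()
    return rng.choice(len(profile), p=profile)
```

Replacing per-agent models with a handful of group profiles is what lets the approach scale: the number of models to maintain grows with the number of stereotypes rather than the number of opponents.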

