Reflective Oracles: A Foundation for Game Theory in Artificial Intelligence

Author(s):  
Benja Fallenstein ◽  
Jessica Taylor ◽  
Paul F. Christiano
Author(s):  
Kaisheng Wu ◽  
Liangda Fang ◽  
Liping Xiong ◽  
Zhao-Rong Lai ◽  
Yong Qiao ◽  
...  

Strategy representation and reasoning have recently received much attention in artificial intelligence. Impartial combinatorial games (ICGs) are an elementary and fundamental class of games in game theory. One of the challenging problems of ICGs is to construct winning strategies, and in particular generalized winning strategies that cover possibly infinitely many instances of an ICG. In this paper, we investigate synthesizing generalized winning strategies for ICGs. To this end, we first propose a logical framework to formalize ICGs based on the linear integer arithmetic fragment of the numeric part of PDDL. We then propose an approach to generating a winning formula that exactly captures the states from which the player can force a win. Finally, we compute winning strategies for ICGs based on the winning formula. Experimental results on several games demonstrate the effectiveness of our approach.
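To make the abstract's central notion concrete, here is a minimal sketch of a "winning region": the set of states from which the mover can force a win, computed by backward induction. The concrete game (remove 1 to 3 tokens; a player who cannot move loses) is a hypothetical illustration, not one of the paper's benchmarks, and the closed-form condition in the comment plays the role of the abstract's "winning formula".

```python
def winning_states(max_tokens, moves=(1, 2, 3)):
    """Backward induction: a state is winning iff some legal move
    reaches a losing state for the opponent."""
    win = [False] * (max_tokens + 1)  # win[0] = False: no move, mover loses
    for n in range(1, max_tokens + 1):
        win[n] = any(n - m >= 0 and not win[n - m] for m in moves)
    return win

win = winning_states(12)
# For this subtraction game the losing states are exactly the multiples
# of 4, i.e. the "winning formula" is n mod 4 != 0 -- a single formula
# covering infinitely many instances, as the abstract describes.
assert all(win[n] == (n % 4 != 0) for n in range(13))
```

The table produced by backward induction is finite; the point of a generalized winning strategy is to replace it with an arithmetic formula that holds for every instance size.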


AI Magazine ◽  
2012 ◽  
Vol 33 (3) ◽  
pp. 109
Author(s):  
Harith Alani ◽  
Bo An ◽  
Manish Jain ◽  
Takashi Kido ◽  
George Konidaris ◽  
...  

The Association for the Advancement of Artificial Intelligence, in cooperation with Stanford University’s Department of Computer Science, was pleased to present the 2012 Spring Symposium Series, held Monday through Wednesday, March 26–28, 2012, at Stanford University, Stanford, California, USA. The six symposia held were AI, The Fundamental Social Aggregation Challenge (cochaired by W. F. Lawless, Don Sofge, Mark Klein, and Laurent Chaudron); Designing Intelligent Robots (cochaired by George Konidaris, Byron Boots, Stephen Hart, Todd Hester, Sarah Osentoski, and David Wingate); Game Theory for Security, Sustainability, and Health (cochaired by Bo An and Manish Jain); Intelligent Web Services Meet Social Computing (cochaired by Tomas Vitvar, Harith Alani, and David Martin); Self-Tracking and Collective Intelligence for Personal Wellness (cochaired by Takashi Kido and Keiki Takadama); and Wisdom of the Crowd (cochaired by Caroline Pantofaru, Sonia Chernova, and Alex Sorokin). The papers of the six symposia were published in the AAAI technical report series.


AI Magazine ◽  
2019 ◽  
Vol 40 (1) ◽  
pp. 49-62 ◽  
Author(s):  
Sunny Fugate ◽  
Kimberly Ferguson-Walter

Traditional cyber security techniques have led to an asymmetric disadvantage for defenders. The defender must detect all possible threats at all times from all attackers and defend all systems against all possible exploitation. In contrast, an attacker needs only to find a single path to the defender’s critical information. In this article, we discuss how this asymmetry can be rebalanced by using cyber deception to change the attacker’s perception of the network environment and to lead attackers to false beliefs about which systems contain critical information or are critical to a defender’s computing infrastructure. We introduce game theory concepts and models to represent and reason over the use of cyber deception by the defender and the effect it has on attacker perception. Finally, we discuss techniques for combining artificial intelligence algorithms with game theory models to estimate hidden states of the attacker, using feedback through payoffs to learn how best to defend the system with cyber deception. It is our opinion that adaptive cyber deception is a necessary component of future information systems and networks. The techniques we present can simultaneously decrease the risks and impacts suffered by defenders and dramatically increase the costs and risks of detection for attackers. Such techniques are likely to play a pivotal role in defending national and international security concerns.
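The defender/attacker interaction described above can be sketched as a small game. In this toy model (all payoffs are hypothetical, chosen only for illustration), the defender randomizes over swapping the critical data with a decoy, and the attacker best-responds to that mixture; randomized deception makes every attack risky for the attacker.

```python
def defender_value(p):
    """Expected defender payoff when the defender deceives with
    probability p and the attacker best-responds to that mixture.
    Payoffs are hypothetical illustrations."""
    # Attacker's expected payoff for attacking host A vs. host B.
    atk_a = (1 - p) * 10 + p * (-5)   # hits real data vs. walks into a decoy
    atk_b = (1 - p) * (-1) + p * 6
    if atk_a >= atk_b:                # attacker chooses host A
        return (1 - p) * (-10) + p * 1
    return (1 - p) * 0 + p * (-8)     # attacker chooses host B

# Grid-search the defender's deception probability.
best_p = max((i / 100 for i in range(101)), key=defender_value)
# The optimum is interior: deceiving sometimes, but not always, keeps
# the attacker uncertain about where the critical data lives.
```

Even this two-host sketch shows the rebalancing argument: without deception (p = 0) the attacker's single path succeeds, while a calibrated mixture forces the attacker to gamble against the decoy.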


Author(s):  
Lorenzo Barberis Canonico ◽  
Christopher Flathmann ◽  
Nathan McNeese

There is an ever-growing literature on the power of prediction markets to harness “the wisdom of the crowd” from large groups of people. However, traditional prediction markets are not designed in a human-centered way, which often limits their potential. This creates an opportunity to apply a cognitive science perspective to enhancing the collective intelligence of the participants. We therefore propose a new model for prediction markets that integrates human factors, cognitive science, game theory, and machine learning to maximize collective intelligence. We do this by first identifying the connections between prediction markets and collective intelligence, then using human factors techniques to analyze our design, and finally describing the practical ways in which our design enables artificial intelligence to complement human intelligence.
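For background on how a prediction market aggregates individual beliefs into a collective price, here is a sketch of the logarithmic market scoring rule (LMSR), a standard automated market maker for prediction markets. This is textbook background, not the authors' proposed design; the liquidity parameter `b` and trade size are arbitrary illustrations.

```python
import math

def lmsr_cost(q, b=100.0):
    """LMSR cost function C(q) = b * log(sum_i exp(q_i / b))."""
    return b * math.log(sum(math.exp(qi / b) for qi in q))

def lmsr_price(q, i, b=100.0):
    """Instantaneous price of outcome i: the softmax of inventories,
    interpretable as the market's collective probability estimate."""
    z = sum(math.exp(qj / b) for qj in q)
    return math.exp(q[i] / b) / z

q = [0.0, 0.0]                  # shares outstanding for outcomes YES / NO
cost_before = lmsr_cost(q)
q[0] += 20.0                    # a trader buys 20 YES shares
trade_cost = lmsr_cost(q) - cost_before  # what the trader pays
# The YES price rises above 0.5, folding the trader's private
# information into the market's collective estimate.
assert lmsr_price(q, 0) > 0.5
```

A human-centered redesign of the kind the abstract proposes would sit on top of a mechanism like this, shaping how participants perceive prices and choose trades rather than changing the aggregation rule itself.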


2020 ◽  
Author(s):  
D.V. Shcherbakov ◽  
V.A. Kozlov

2018 ◽  
Vol 2 (4) ◽  
pp. 63 ◽  
Author(s):  
Hanno Hildmann

The context of the work presented in this article is the assessment and automated evaluation of human behaviour. To facilitate this, a formalism is presented that is unambiguous and can be implemented and interpreted in an automated manner. In the greater scheme of things, comparable behaviour evaluation requires comparable assessment scenarios, and to this end computer games are considered as controllable and abstract environments. Within this context, a model for behavioural AI is presented that was designed around three objectives: (a) being able to play rationally; (b) adhering to formally stated behaviour preferences; and (c) ensuring that very specific circumstances can be forced to arise within a game. The presented work is based on established models from behavioural psychology and formal logic, as well as approaches from game theory and related fields. The suggested model for behavioural AI has been used to implement and test a game, as well as AI players that exhibit specific behavioural preferences. The overall aim of this article is to enable readers to design their own AI implementation, using the formalisms and models they prefer and to whatever level of complexity they desire.
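A minimal sketch in the spirit of objectives (a) and (b) above: the agent plays rationally (it never gives up game value) but resolves ties between equally good moves according to a formally stated behaviour preference. The game, the evaluation function, and the "defensive" preference are hypothetical illustrations, not the article's own formalism.

```python
def choose_move(state, legal_moves, value, prefers):
    """Pick a highest-value move; among equally good moves, apply the
    behaviour preference predicate as a tie-breaker."""
    best = max(value(state, m) for m in legal_moves)
    rational = [m for m in legal_moves if value(state, m) == best]
    preferred = [m for m in rational if prefers(state, m)]
    return (preferred or rational)[0]

# Example: two moves of equal value; the stated preference for
# defensive play selects "defend" without sacrificing rationality.
value = lambda s, m: 1
prefers = lambda s, m: m == "defend"
assert choose_move(None, ["attack", "defend"], value, prefers) == "defend"
```

Because the preference only filters among value-maximizing moves, it shapes the observable behaviour without ever overriding rational play, which is the separation the three objectives require.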


2020 ◽  
Vol 4 (27) ◽  
Author(s):  
David L. Dowe ◽  
Nader Chmait

Author(s):  
Andrew Briggs ◽  
Hans Halvorson ◽  
Andrew Steane

The chapter poses questions about personhood, and explores them through some philosophy, extended examples from machine learning and artificial intelligence, and religious reflection. Parfit’s Reasons and Persons and the use of game theory are explored. The question of human free will is framed as centring on the issue of responsibility. Recent advances in AI, especially learning systems such as AlphaGo, are presented. These do not settle any fundamental questions about the nature of consciousness, but they do encourage us to ask what our attitude to autonomous machines should be. The discussion then turns to human evolutionary development, and to what makes humans distinctive, touching on scientific, philosophical, and theological issues. Some aspects of philosophy and theology can be productively approached through storytelling; this fruitful method is seen at work in the Bible. To be responsible lies at the heart of what it means to be human.
