A Dynamic Solution Concept to Cooperative Games with Fuzzy Coalitions

Author(s):  
Surajit Borkotokey


Author(s):  
MILAN MAREŠ ◽  
MILAN VLACH

The theory of cooperative games with vague cooperation is based on modelling fuzzy coalitions as fuzzy subsets of the set of all players, each of whom participates in a coalition with some part of their "power". Here, we suggest an alternative approach in which coalitions are formed by relatively compact groups of individual players, each group representing a specific common interest. An individual player may participate in several such groups and, as their member, in several coalitions. Our aim is to show that this alternative model of fuzzy coalitions, in spite of its seemingly higher complexity, offers a more sophisticated reflection of the structure of vague cooperation and of the relations underlying it.
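To make the classical notion concrete, the following sketch represents a fuzzy coalition as a map from players to membership degrees in [0, 1] and evaluates it with a naive proportional extension. The function name, the worth values, and the proportional rule are purely illustrative assumptions, not the model proposed in the paper:

```python
def fuzzy_coalition_worth(membership, solo_worth):
    """Naive proportional extension of a crisp worth function:
    scale each player's stand-alone worth by the degree with which
    they commit their "power" to the coalition.

    membership: dict player -> membership degree in [0, 1]
    solo_worth: dict player -> stand-alone worth (illustrative numbers)
    """
    return sum(deg * solo_worth[p] for p, deg in membership.items())


# Player p1 commits half of its power, p2 commits all of it.
worth = {"p1": 10.0, "p2": 6.0}
total = fuzzy_coalition_worth({"p1": 0.5, "p2": 1.0}, worth)
print(total)  # 0.5*10 + 1.0*6 = 11.0
```

In the alternative model discussed in the abstract, the primitive objects would instead be interest groups of players, with a player possibly appearing in several groups at once.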


2000 ◽  
Vol 02 (01) ◽  
pp. 47-65 ◽  
Author(s):  
JERZY A. FILAR ◽  
LEON A. PETROSJAN

We consider dynamic cooperative games in characteristic function form, in the sense that the characteristic function evolves over time according to a difference or differential equation that is influenced not only by the current ("instantaneous") characteristic function but also by the solution concept used to allocate the benefits of cooperation among the players. The latter can be any of the now-standard solution concepts of cooperative game theory but, for demonstration purposes, we focus on the core and the Shapley value. In the process, we introduce some new mechanisms by which players may regard the evolution of the cooperative game over time and analyse them with respect to the goal of attaining time consistency in either a discrete- or a continuous-time setting. In discrete time, we illustrate the phenomena that can arise when an allocation according to a given solution concept is used to adapt the values of coalitions at successive time points. In continuous time, we introduce the notion of an "instantaneous" game and its integration over time.
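For readers less familiar with the Shapley value mentioned above, here is a minimal sketch that computes it by averaging marginal contributions over all player orderings. The two-player game is a made-up example, not one from the article:

```python
from itertools import permutations
from math import factorial


def shapley(players, v):
    """Shapley value of a TU cooperative game: for each player,
    average its marginal contribution v(S ∪ {p}) - v(S) over all
    orderings in which the grand coalition can form.

    v: dict mapping frozenset coalitions to their worth.
    """
    phi = {p: 0.0 for p in players}
    for order in permutations(players):
        seen = frozenset()
        for p in order:
            phi[p] += v[seen | {p}] - v[seen]
            seen = seen | {p}
    n_fact = factorial(len(players))
    return {p: total / n_fact for p, total in phi.items()}


# Hypothetical 2-player game used only for illustration.
v = {frozenset(): 0.0, frozenset({"a"}): 1.0,
     frozenset({"b"}): 2.0, frozenset({"a", "b"}): 4.0}
print(shapley(["a", "b"], v))  # {'a': 1.5, 'b': 2.5}
```

In the dynamic setting of the abstract, an allocation like this would be fed back into the equation governing how the coalition values change between time points.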


Author(s):  
Johann Bauer ◽  
Mark Broom ◽  
Eduardo Alonso

The multi-population replicator dynamics is a dynamic approach to coevolving populations and multi-player games and is related to Cross learning. In general, not every equilibrium is a Nash equilibrium of the underlying game, and convergence is not guaranteed. In particular, no interior equilibrium can be asymptotically stable in the multi-population replicator dynamics, which can result, for example, in cyclic orbits around a single interior Nash equilibrium. We introduce a new notion of equilibria of replicator dynamics, called mutation limits, based on a naturally arising, simple form of mutation that is invariant under the specific choice of mutation parameters. We prove the existence of mutation limits for a large class of games and consider a particularly interesting subclass called attracting mutation limits. Attracting mutation limits are approximated in every (mutation-)perturbed replicator dynamics, hence they offer an approximate dynamic solution to the underlying game even if the original dynamics is not convergent. Thus, mutation stabilizes the system in certain cases and makes attracting mutation limits approximately attainable. Hence, attracting mutation limits are relevant as a dynamic solution concept for games. We observe that they bear some similarity to Q-learning in multi-agent reinforcement learning. Attracting mutation limits do not exist in all games, however, which raises the question of their characterization.
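The stabilizing effect of mutation can be seen numerically in matching pennies, whose unique interior Nash equilibrium is surrounded by cyclic orbits under the plain two-population replicator dynamics. The sketch below adds a uniform mutation term of rate `eps` (a common perturbation of this kind; the specific parameterisation here is our own, not necessarily the one used in the paper) and integrates with a simple Euler step:

```python
def step(x, y, dt=0.01, eps=0.1):
    """One Euler step of the two-population replicator dynamics for
    matching pennies (row payoffs A, column payoffs -A), perturbed by
    uniform mutation at rate eps pulling each mix toward (1/2, 1/2)."""
    A = [[1.0, -1.0], [-1.0, 1.0]]
    Ay = [sum(A[i][j] * y[j] for j in range(2)) for i in range(2)]
    Bx = [sum(-A[i][j] * x[i] for i in range(2)) for j in range(2)]
    fx = sum(x[i] * Ay[i] for i in range(2))   # row's mean payoff
    fy = sum(y[j] * Bx[j] for j in range(2))   # column's mean payoff
    nx = [x[i] + dt * (x[i] * (Ay[i] - fx) + eps * (0.5 - x[i]))
          for i in range(2)]
    ny = [y[j] + dt * (y[j] * (Bx[j] - fy) + eps * (0.5 - y[j]))
          for j in range(2)]
    return nx, ny


x, y = [0.9, 0.1], [0.2, 0.8]
for _ in range(20000):
    x, y = step(x, y)
# Without mutation the orbit cycles; with eps > 0 both mixes
# spiral in toward the interior equilibrium (0.5, 0.5).
```

Linearizing at the centre gives eigenvalues with real part -eps, so the mutation term turns the neutral cycling into a stable spiral, which is the intuition behind attracting mutation limits.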


2006 ◽  
Vol 28 (4) ◽  
pp. 685-703 ◽  
Author(s):  
Yuan Ju ◽  
Peter Borm ◽  
Pieter Ruys

Author(s):  
Alfredo Garro

Game Theory (Von Neumann & Morgenstern, 1944) is a branch of applied mathematics and economics that studies situations (games) in which self-interested interacting players act to maximize their returns; the return of each player therefore depends on his own behaviour and on the behaviours of the other players. Game Theory, which plays an important role in the social and political sciences, has recently drawn attention in new academic fields ranging from algorithmic mechanism design to cybernetics. However, a fundamental problem to solve for effectively applying Game Theory to real-world applications is the definition of well-founded solution concepts of a game and the design of efficient algorithms for their computation. A widely accepted solution concept for games in which any cooperation among the players must be self-enforcing (non-cooperative games) is the Nash Equilibrium. In particular, a Nash Equilibrium is a set of strategies, one for each player of the game, such that no player can benefit by changing his strategy unilaterally, i.e. while the other players keep their strategies unchanged (Nash, 1951). The problem of computing Nash Equilibria in non-cooperative games is considered one of the most important open problems in Complexity Theory (Papadimitriou, 2001). Daskalakis, Goldberg, and Papadimitriou (2005) showed that the problem of computing a Nash Equilibrium in a game with four or more players is complete for the complexity class PPAD (Polynomial Parity Argument, Directed version) (Papadimitriou, 1991); moreover, Chen and Deng extended this result to 2-player games (Chen & Deng, 2005).
However, even in the two-player case, the best known algorithm has an exponential worst-case running time (Savani & von Stengel, 2004); furthermore, if the computation of equilibria with simple additional properties is required, the problem immediately becomes NP-hard (Bonifaci, Di Iorio, & Laura, 2005; Conitzer & Sandholm, 2003; Gilboa & Zemel, 1989; Gottlob, Greco, & Scarcello, 2003). Motivated by these results, recent studies have dealt with the problem of efficiently computing Nash Equilibria by exploiting approaches based on the concepts of learning and evolution (Fudenberg & Levine, 1998; Maynard Smith, 1982). In these approaches the Nash Equilibria of a game are not computed statically but emerge from the evolution of a system composed of agents playing the game. In particular, after a number of rounds each agent will learn to play a strategy that, under the hypothesis of agents' rationality, will be one of the Nash Equilibria of the game (Benaim & Hirsch, 1999; Carmel & Markovitch, 1996). This article presents SALENE, a Multi-Agent System (MAS) for learning Nash Equilibria in non-cooperative games, which is based on the above-mentioned concepts.
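As a minimal illustration of equilibria emerging from repeated play (fictitious play here is a classical learning scheme chosen for its simplicity, not a description of SALENE's algorithm), the sketch below lets two agents repeatedly best-respond to each other's empirical strategy in a zero-sum game; for zero-sum games the empirical frequencies are known to converge to a Nash Equilibrium:

```python
def fictitious_play(A, rounds=10000):
    """Fictitious play in a two-player zero-sum game: row payoffs A,
    column payoffs -A. Each round, both agents best-respond to the
    opponent's empirical action frequencies; the frequencies converge
    to a Nash Equilibrium in zero-sum games (Robinson, 1951)."""
    n, m = len(A), len(A[0])
    cx, cy = [0] * n, [0] * m   # action counts of row / column agent
    cx[0] += 1
    cy[0] += 1                  # arbitrary opening moves
    for _ in range(rounds):
        i = max(range(n), key=lambda a: sum(A[a][j] * cy[j] for j in range(m)))
        j = max(range(m), key=lambda b: sum(-A[k][b] * cx[k] for k in range(n)))
        cx[i] += 1
        cy[j] += 1
    t = rounds + 1
    return [c / t for c in cx], [c / t for c in cy]


# Matching pennies: the unique Nash Equilibrium is the mixed
# strategy (1/2, 1/2) for both players.
x, y = fictitious_play([[1, -1], [-1, 1]])
```

The exponential worst-case bound cited above concerns exact computation; learning dynamics like this one trade exactness for an equilibrium that emerges approximately from play.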


Author(s):  
Daisuke Hatano ◽  
Yuichi Yoshida

In a cooperative game, the utility of a coalition of players is given by the characteristic function, and the goal is to find a stable division of the total utility among the players. In real-world applications, however, multiple scenarios may exist, each determining its own characteristic function, and it is unknown which scenario is more important. To handle such situations, the notion of multi-scenario cooperative games and several solution concepts have been proposed. However, computing the value divisions under those solution concepts is intractable in general. To resolve this issue, we focus on supermodular two-scenario cooperative games, in which the number of scenarios is two and the characteristic functions are supermodular, and study the computational aspects of a major solution concept called the preference core. First, we show that we can compute the value division in the preference core of a supermodular two-scenario game in polynomial time. Then, we reveal the relations among preference cores with different parameters. Finally, we provide more efficient algorithms for deciding the non-emptiness of the preference core for several specific supermodular two-scenario cooperative games, such as the airport game, the multicast tree game, and a special case of the generalized induced subgraph game.
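The two building blocks of the abstract, supermodularity of the characteristic function and stability of a division, can be checked by brute force on a small game. This sketch tests the standard definitions (the two-player game and the allocations are invented examples; the preference core itself, which compares two scenarios, is not implemented here):

```python
from itertools import chain, combinations


def subsets(players):
    """All coalitions (subsets) of the player set, as frozensets."""
    return [frozenset(s) for s in chain.from_iterable(
        combinations(players, r) for r in range(len(players) + 1))]


def is_supermodular(players, v):
    """v is supermodular if v(S∪T) + v(S∩T) >= v(S) + v(T) for all S, T."""
    subs = subsets(players)
    return all(v[s | t] + v[s & t] >= v[s] + v[t]
               for s in subs for t in subs)


def in_core(players, v, alloc):
    """alloc is in the (single-scenario) core if it distributes exactly
    v(grand coalition) and no coalition S can improve: sum over S >= v(S)."""
    grand = frozenset(players)
    if abs(sum(alloc[p] for p in players) - v[grand]) > 1e-9:
        return False
    return all(sum(alloc[p] for p in s) >= v[s] - 1e-9
               for s in subsets(players))


# Invented 2-player supermodular game.
players = ["a", "b"]
v = {frozenset(): 0, frozenset({"a"}): 1,
     frozenset({"b"}): 2, frozenset({"a", "b"}): 5}
print(is_supermodular(players, v))       # True
print(in_core(players, v, {"a": 2, "b": 3}))  # True: no coalition objects
```

Both checks enumerate all coalitions, so they are exponential in the number of players; the point of the paper is precisely that, under supermodularity, the two-scenario analogue admits polynomial-time computation.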

