EVOLUTIONARY GAMES IN MULTI-AGENT SYSTEMS OF WEIGHTED SOCIAL NETWORKS

2009 ◽  
Vol 20 (05) ◽  
pp. 701-710 ◽  
Author(s):  
WEN-BO DU ◽  
XIAN-BIN CAO ◽  
HAO-RAN ZHENG ◽  
HONG ZHOU ◽  
MAO-BIN HU

Much empirical evidence has shown that realistic networks are weighted. Compared with dynamics on unweighted networks, the dynamics on weighted networks often exhibit distinctly different phenomena. In this paper, we investigate evolutionary game dynamics (the prisoner's dilemma game and the snowdrift game) on a weighted social network consisting of rational agents, and we focus on the evolution of cooperation in the system. Simulation results show that the cooperation level is strongly affected by the weighted nature of the network. Moreover, the variation of the time series has also been investigated. Our work may be helpful in understanding cooperative behavior in social systems.
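The two games the paper names can be illustrated with a minimal payoff sketch. Everything below is an assumption for illustration: the parameter values, the common weak-PD and cost-to-benefit snowdrift parameterizations, and the choice to scale payoffs by edge weight are not taken from the paper itself.

```python
import numpy as np

# Illustrative payoff matrices (row player's payoff; 0 = cooperate, 1 = defect).
def prisoners_dilemma(b=1.2):
    # Weak prisoner's dilemma: R = 1, P = S = 0, T = b (1 < b <= 2).
    return np.array([[1.0, 0.0],
                     [b,   0.0]])

def snowdrift(r=0.5):
    # Snowdrift with cost-to-benefit ratio r: with beta = 1/(2r),
    # R = beta - 0.5, S = beta - 1, T = beta, P = 0.
    beta = 1.0 / (2.0 * r)
    return np.array([[beta - 0.5, beta - 1.0],
                     [beta,       0.0]])

def payoff(s_focal, s_opponent, matrix, weight=1.0):
    # One plausible weighted-network rule (an assumption here): scale the
    # payoff of each interaction by the weight of the connecting edge.
    return weight * matrix[s_focal, s_opponent]

C, D = 0, 1
pd = prisoners_dilemma()
print(payoff(D, C, pd, weight=2.0))  # defector exploiting a cooperator on a weight-2 edge
```

With `b = 1.2` and a weight-2 edge, the defector's single-round payoff is 2.4, twice the unweighted temptation payoff.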

Entropy ◽  
2020 ◽  
Vol 22 (4) ◽  
pp. 485 ◽  
Author(s):  
Sayantan Nag Chowdhury ◽  
Srilena Kundu ◽  
Maja Duh ◽  
Matjaž Perc ◽  
Dibakar Ghosh

Evolutionary game theory in the realm of network science appeals to many research communities, as it constitutes a popular theoretical framework for studying the evolution of cooperation in social dilemmas. Recent research has shown that cooperation is markedly more resilient in interdependent networks, where traditional network reciprocity can be further enhanced by various forms of interdependence between network layers. However, the role of mobility in interdependent networks has yet to gain its well-deserved attention. Here we consider an interdependent network model in which individuals in each layer follow different evolutionary games, and each player is a mobile agent that can move locally within its own layer to improve its fitness. Probabilistically, we also allow imitation of a neighbor on the other layer. We show that considering migration and stochastic imitation opens further fascinating gateways to cooperation on interdependent networks. Notably, cooperation can be promoted on both layers, even if cooperation without interdependence would be improbable on one of the layers due to adverse conditions. Our results provide a rationale for engineering better social systems at the interface of networks and human decision making under testing dilemmas.
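The stochastic cross-layer imitation described above can be sketched with the standard Fermi imitation rule. This is a generic sketch, not the paper's implementation: the parameter names (`p_cross`, `K`) and the dictionary-based agent representation are assumptions.

```python
import math
import random

def fermi_prob(payoff_self, payoff_model, K=0.1):
    # Fermi rule: probability of copying the model's strategy;
    # K is the noise (inverse selection intensity) parameter.
    return 1.0 / (1.0 + math.exp((payoff_self - payoff_model) / K))

def imitate(agent, layer_neighbors, interlayer_partner, p_cross=0.2, K=0.1):
    # With probability p_cross the imitation model is the partner on the
    # other layer; otherwise it is a random neighbor in the agent's own
    # layer. The strategy is then copied with Fermi probability.
    model = interlayer_partner if random.random() < p_cross \
        else random.choice(layer_neighbors)
    if random.random() < fermi_prob(agent["payoff"], model["payoff"], K):
        agent["strategy"] = model["strategy"]
```

Equal payoffs give a coin-flip imitation probability of 0.5, while a much fitter model is imitated almost surely, which is what lets successful strategies spread both within and across layers in this kind of model.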


Sensors ◽  
2021 ◽  
Vol 21 (8) ◽  
pp. 2642
Author(s):  
Godwin Asaamoning ◽  
Paulo Mendes ◽  
Denis Rosário ◽  
Eduardo Cerqueira

The study of multi-agent systems such as drone swarms has intensified due to their cooperative behavior. Nonetheless, automating the control of a swarm is challenging, as each drone operates under fluctuating wireless, networking, and environmental constraints. To tackle these challenges, we consider drone swarms as Networked Control Systems (NCS), where the control of the overall system is enclosed within a wireless communication network. This relies on a tight interconnection between the networking and computational systems, aiming to efficiently support the basic control functionality, namely data collection and exchange, decision-making, and the distribution of actuation commands. Based on a literature analysis, we did not find review papers about the design of drone swarms as NCS. In this review, we provide an overview of how to develop self-organized drone swarms as NCS via the integration of a networking system and a computational system. In this sense, we describe the properties of the proposed components of a drone swarm as an NCS in terms of networking and computational systems. We also analyze their integration to increase the performance of a drone swarm. Finally, we identify a potential design choice and a set of open research challenges for the integration of networking and computing in a drone swarm as an NCS.


Author(s):  
Kun Zhang ◽  
Yoichiro Maeda ◽  
Yasutake Takahashi

Research on multi-agent systems in which autonomous agents learn cooperative behavior has been the subject of rising expectations in recent years. We aim at generating group behavior in multi-agent systems whose agents have a high level of autonomous learning ability, like that of human beings, acquiring cooperative behavior through social interaction between agents. Sharing environment states among agents can improve their cooperative ability, and including the changing state of the environment in the shared information improves it further. On this basis, we use reward redistribution among agents to reinforce group behavior, and we propose a method of constructing a multi-agent system with an autonomous group-creation ability. This method strengthens the cooperative behavior of the group as social agents.
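Reward redistribution among agents can take many forms; one minimal sketch, under the assumption of a fixed contribution fraction and an equal split of the pooled rewards (the paper's exact rule may differ), is:

```python
def redistribute(rewards, share=0.3):
    # Each agent contributes a fraction `share` of its own reward to a
    # common pool, which is then split equally among the group. The total
    # reward is conserved, but successful agents subsidize the rest,
    # which reinforces behaving as a group.
    pool = share * sum(rewards)
    bonus = pool / len(rewards)
    return [(1.0 - share) * r + bonus for r in rewards]

print(redistribute([3.0, 0.0, 0.0]))
```

For example, a group in which one agent earns 3.0 and two earn nothing becomes approximately [2.4, 0.3, 0.3]: the total stays 3.0, but every member now has a stake in the joint outcome.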


Author(s):  
Katia Sycara ◽  
Paul Scerri ◽  
Anton Chechetka

In this chapter, we explore the use of evolutionary game theory (EGT) (Weibull, 1995; Taylor & Jonker, 1978; Nowak & May, 1993) to model the dynamics of adaptive opponent strategies for large populations of players. In particular, we explore the effects of information propagation through social networks in evolutionary games. The key underlying phenomenon that information diffusion aims to capture is that reasoning about the experiences of acquaintances can dramatically impact the dynamics of a society. We present experimental results from agent-based simulations that show the impact of diffusion through social networks on the player strategies of an evolutionary game and the sensitivity of the dynamics to features of the social network.


Synthese ◽  
2004 ◽  
Vol 139 (2) ◽  
pp. 297-330 ◽  
Author(s):  
Karl Tuyls ◽  
Ann Nowe ◽  
Tom Lenaerts ◽  
Bernard Manderick

Author(s):  
Johann Bauer ◽  
Mark Broom ◽  
Eduardo Alonso

The multi-population replicator dynamics is a dynamic approach to coevolving populations and multi-player games and is related to Cross learning. In general, not every equilibrium is a Nash equilibrium of the underlying game, and convergence is not guaranteed. In particular, no interior equilibrium can be asymptotically stable in the multi-population replicator dynamics, resulting, e.g., in cyclic orbits around a single interior Nash equilibrium. We introduce a new notion of equilibria of replicator dynamics, called mutation limits, based on a naturally arising, simple form of mutation, which is invariant under the specific choice of mutation parameters. We prove the existence of mutation limits for a large class of games and consider a particularly interesting subclass called attracting mutation limits. Attracting mutation limits are approximated in every (mutation-)perturbed replicator dynamics, hence they offer an approximate dynamic solution to the underlying game even if the original dynamics are not convergent. Thus, mutation stabilizes the system in certain cases and makes attracting mutation limits nearly attainable. Hence, attracting mutation limits are relevant as a dynamic solution concept for games. We observe that they bear some similarity to Q-learning in multi-agent reinforcement learning. Attracting mutation limits do not exist in all games, however, which raises the question of their characterization.
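The stabilizing effect of mutation described above can be illustrated numerically. The sketch below uses a simple uniform-mutation perturbation of the two-population replicator dynamics (one plausible form; the paper's precise mutation term may differ) on matching pennies, whose unperturbed dynamics cycle around the unique interior Nash equilibrium at (1/2, 1/2).

```python
import numpy as np

def replicator_mutation_step(x, y, A, B, mu=0.01, dt=0.01):
    # One Euler step of the two-population replicator dynamics with a
    # uniform mutation term:
    #   dx_i = x_i((Ay)_i - x.Ay) + mu(1/n - x_i), symmetrically for y.
    fx = A @ y        # fitness of the row player's pure strategies
    fy = B.T @ x      # fitness of the column player's pure strategies
    dx = x * (fx - x @ fx) + mu * (1.0 / x.size - x)
    dy = y * (fy - y @ fy) + mu * (1.0 / y.size - y)
    return x + dt * dx, y + dt * dy

# Matching pennies: zero-sum, single interior Nash equilibrium (1/2, 1/2).
A = np.array([[1.0, -1.0], [-1.0, 1.0]])
B = -A
x = np.array([0.9, 0.1])
y = np.array([0.9, 0.1])
for _ in range(20000):
    x, y = replicator_mutation_step(x, y, A, B, mu=0.05, dt=0.01)
```

Without mutation (`mu = 0`) the trajectory orbits the interior equilibrium indefinitely; with `mu > 0` the orbit spirals inward and the mixed strategies settle near (1/2, 1/2), illustrating how the perturbed dynamics approximate a solution even though the original dynamics do not converge.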

