Cooperation Enforcement and Collusion Resistance in Repeated Public Goods Games

Author(s):  
Kai Li ◽  
Dong Hao

Enforcing cooperation among self-interested agents is one of the main objectives in multi-agent systems. However, because many scenarios contain inherent social dilemmas, the free-rider problem may arise during agents' long-run interactions, and matters become even worse when self-interested agents collude with one another to obtain extra benefits. It is commonly accepted that in such social dilemmas there exists no simple strategy whereby an agent can simultaneously control the utility of each of her opponents and also promote mutual cooperation among all agents. Here, we show that such strategies do exist. In the conventional repeated public goods game, we identify them and find that, when confronted with such strategies, a single opponent can maximize his utility only via global cooperation, and no colluding alliance can gain the upper hand. Since full cooperation is individually optimal for every single opponent, stable cooperation among all players can be achieved. Moreover, we show experimentally that these strategies still promote cooperation even when the opponents are both self-learning and collusive.
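To make the underlying dilemma concrete (a minimal sketch of the standard public goods payoff, not the authors' strategy construction; the multiplier r and group size are assumed illustrative values):

```python
# Toy n-player public goods game: each player either contributes 1 unit or
# free-rides. Contributions are multiplied by r (1 < r < n) and shared equally,
# so free-riding is individually optimal each round even though mutual
# cooperation pays everyone more -- the social dilemma the paper addresses.

def payoffs(contributions, r=3.0):
    """Return each player's payoff given a list of 0/1 contributions."""
    n = len(contributions)
    share = r * sum(contributions) / n   # everyone receives an equal share
    return [share - c for c in contributions]

all_coop = payoffs([1, 1, 1, 1])     # everyone contributes
one_defects = payoffs([0, 1, 1, 1])  # player 0 free-rides
# The free-rider out-earns the cooperators, yet full cooperation
# beats full defection for everyone.
```

The strategies identified in the paper operate on the repeated version of this game, where future payoffs can be used as leverage.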

2008 ◽  
Vol 33 ◽  
pp. 551-574 ◽  
Author(s):  
S. De Jong ◽  
S. Uyttendaele ◽  
K. Tuyls

It is well-known that acting in an individually rational manner, according to the principles of classical game theory, may lead to sub-optimal solutions in a class of problems named social dilemmas. In contrast, humans generally do not have much difficulty with social dilemmas, as they are able to balance personal benefit and group benefit. As agents in multi-agent systems are regularly confronted with social dilemmas, for instance in tasks such as resource allocation, these agents may benefit from the inclusion of mechanisms thought to facilitate human fairness. Although many such mechanisms have already been implemented in a multi-agent systems context, their application is usually limited to rather abstract social dilemmas with a discrete set of available strategies (usually two). Given that many real-world examples of social dilemmas are actually continuous in nature, we extend this previous work to more general dilemmas, in which agents operate in a continuous strategy space. The social dilemma under study here is the well-known Ultimatum Game, in which an optimal solution is achieved if agents agree on a common strategy. We investigate whether a scale-free interaction network facilitates agents to reach agreement, especially in the presence of fixed-strategy agents that represent a desired (e.g. human) outcome. Moreover, we study the influence of rewiring in the interaction network. The agents are equipped with continuous-action learning automata and play a large number of random pairwise games in order to establish a common strategy. From our experiments, we may conclude that results obtained in discrete-strategy games can be generalized to continuous-strategy games to a certain extent: a scale-free interaction network structure allows agents to achieve agreement on a common strategy, and rewiring in the interaction network greatly enhances the agents' ability to reach agreement. However, it also becomes clear that some alternative mechanisms, such as reputation and volunteering, involve many subtleties and do not yield convincing beneficial effects in the continuous case.
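The continuous-strategy payoff rule at the heart of this setup can be sketched as follows (the learning automata and the scale-free network are omitted; this only shows the pairwise game, with a hypothetical unit pie):

```python
# Minimal continuous Ultimatum Game: a proposer offers a fraction p of a unit
# pie; the responder accepts iff p >= her acceptance threshold q. Both p and q
# are continuous in [0, 1], so "agreement on a common strategy" means the
# population converging to p ~= q.

def play(p, q):
    """Return (proposer, responder) payoffs for offer p against threshold q."""
    if p >= q:
        return 1.0 - p, p   # deal struck: the pie is split
    return 0.0, 0.0          # rejection: the pie is wasted

deal = play(0.5, 0.5)       # common strategy -> both earn
no_deal = play(0.2, 0.5)    # disagreement -> both earn nothing
```

A population playing many random pairwise games of this form has an incentive to converge on one shared value of p and q, which is the agreement the abstract refers to.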


2017 ◽  
Author(s):  
Bryce Morsky ◽  
Dervis Can Vural

Much research has focused on the deleterious effects of free-riding in public goods games, and on a variety of mechanisms that suppress cheating behavior. Here we argue that under certain conditions cheating behavior can be beneficial to the population. In a public goods game, cheaters do not pay the cost of the public goods, yet they receive the benefit. Although this free-riding harms the entire population in the long run, the success of cheaters may aid the population when there is a common enemy that antagonizes both cooperators and cheaters. Here we study models in which an immune system antagonizes a cooperating pathogen. We investigate three population dynamics models and determine under what conditions the presence of cheaters helps defeat the immune system. The mechanism of action is that a polymorphism of cheaters and altruists optimizes the average growth rate. Our results support a possible synergy between cooperators and cheaters in ecological public goods games.
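The claim that a cooperator/cheater polymorphism can maximize average growth admits a simple illustration (a toy growth-rate function chosen for this sketch, not one of the paper's three models; a saturating public-good benefit and a linear cooperation cost are assumed):

```python
# Toy average per-capita growth rate at cooperator fraction x in [0, 1]:
# the public-good benefit saturates in x while the cost, borne only by
# cooperators, grows linearly -- giving an interior optimum 0 < x* < 1,
# i.e. the polymorphic mix outgrows both pure populations.

def avg_growth(x, b=2.0, k=0.3, c=1.0):
    """Benefit b*x/(k+x) minus cooperation cost c*x (illustrative parameters)."""
    return b * x / (k + x) - c * x

xs = [i / 100 for i in range(101)]
best = max(xs, key=avg_growth)   # grid search for the optimal cooperator fraction
# With these assumed parameters, best lies strictly between 0 and 1.
```

Whether such a mix actually helps against the immune system depends, as the abstract notes, on the specific population dynamics.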


Author(s):  
Manfred Milinski

In a social dilemma the interest of the individual is in conflict with that of the group. However, individuals will help their group if they gain in reputation that pays off later. Future partners can observe cooperative or defecting behavior or, more likely, hear about it through gossip. In Indirect Reciprocity games, Public Goods games, and Trust games, gossip may be the only information a participant can use to decide whether she can trust her interaction partner and give away her holdings hoping for reciprocation. Even the mere potential for gossip can increase trust and trustworthiness, thus promoting cooperation. Gossip is a cheap mechanism for disciplining free riders, potentially even extortioners. The temptation to spread manipulative gossip defines the gossiper's dilemma. Psychological adaptations for assessing gossip veracity help to avoid being manipulated. The danger of false gossip is reduced when multiple gossip sources exist.
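The indirect-reciprocity logic can be sketched with a standard "standing" rule (an illustrative textbook formulation, not Milinski's experimental protocol; the good/bad labels are assumptions of this sketch):

```python
# Image scoring with a standing rule: a discriminating donor helps iff gossip
# reports the recipient as being in good standing, and observers update the
# donor's own reputation accordingly -- refusing help is excused only when the
# recipient was in bad standing.

def discriminator_helps(recipient_rep):
    """A discriminator helps only recipients reported as 'good'."""
    return recipient_rep == "good"

def update_reputation(donor_helped, recipient_rep):
    """New reputation of the donor after an observed (or gossiped) interaction."""
    if donor_helped or recipient_rep == "bad":
        return "good"   # helping, or justified refusal, preserves standing
    return "bad"        # refusing a good-standing recipient is punished

helped = discriminator_helps("good")              # help is given
rep = update_reputation(False, "good")            # unjustified refusal -> "bad"
```

Gossip enters as the channel through which `recipient_rep` is learned, which is why its veracity matters.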


2015 ◽  
Vol 282 (1798) ◽  
pp. 20141994 ◽  
Author(s):  
Miguel dos Santos

Cooperation in joint enterprises can easily break down when self-interests are in conflict with collective benefits, causing a tragedy of the commons. In such social dilemmas, the possibility for contributors to invest in a common pool-rewards fund, which will be shared exclusively among contributors, can be a powerful means of averting the tragedy, as long as the second-order dilemma (i.e. withdrawing contributions to reward funds) can be overcome (e.g. with second-order sanctions). However, the present paper reveals the vulnerability of such pool-rewarding mechanisms to the presence of reward funds raised by defectors and shared among them (i.e. anti-social rewarding), which causes cooperation to break down even when second-order sanctions are possible. I demonstrate that escaping this social trap requires the additional condition that coalitions of defectors fare poorly compared with pro-socials, with either (i) better rewarding abilities for the latter or (ii) reward funds that are contingent upon the public good produced beforehand, allowing groups of contributors to invest more in reward funds than groups of defectors. These results suggest that the establishment of cooperation through a collective positive incentive mechanism is highly vulnerable to anti-social rewarding and requires additional countermeasures to act in combination with second-order sanctions.
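The neutralizing effect of anti-social rewarding can be seen in a toy payoff comparison (a sketch with assumed parameter values, not the paper's evolutionary model; each faction pays d into its own fund, which is multiplied and split within the faction):

```python
# Public goods game with pool rewarding. Cooperators pay c into the public good
# (multiplied by r, shared by all n) and d into a pro-social reward fund split
# among cooperators. Under anti-social rewarding, defectors run a parallel fund
# split among themselves, which cancels the cooperators' reward advantage.

def payoff(cooperate, n_coop, n=4, r=3.0, c=1.0, d=0.5, reward_mult=2.0,
           antisocial=True):
    """Payoff of one focal player given n_coop cooperators among n (incl. self)."""
    share = r * c * n_coop / n                     # public-good share, paid to all
    if cooperate:
        return share - c - d + reward_mult * d    # pro-social fund pays back
    bonus = (reward_mult * d - d) if antisocial else 0.0
    return share + bonus                          # anti-social fund, if present

coop_full = payoff(True, 4)                       # everyone cooperates
deviate_anti = payoff(False, 3, antisocial=True)  # deviate, anti-social rewards on
deviate_pro = payoff(False, 3, antisocial=False)  # deviate, no anti-social rewards
# With anti-social rewarding, deviating pays more than cooperating;
# without it, pool rewarding sustains full cooperation.
```

This is the cooperation breakdown the abstract describes: symmetric reward funds leave the original free-riding advantage intact.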


2020 ◽  
Vol 34 (05) ◽  
pp. 7047-7054 ◽  
Author(s):  
Nicolas Anastassacos ◽  
Stephen Hailes ◽  
Mirco Musolesi

Social dilemmas have been widely studied to explain how humans are able to cooperate in society. Considerable effort has been invested in designing artificial agents for social dilemmas that incorporate explicit agent motivations chosen to favor coordinated or cooperative responses. The prevalence of this general approach points towards the importance of understanding both an agent's internal design and the external environment dynamics that facilitate cooperative behavior. In this paper, we investigate how partner selection can promote cooperative behavior between agents who are trained to maximize a purely selfish objective function. Our experiments reveal that agents trained with this dynamic learn a strategy that retaliates against defectors while promoting cooperation with other agents, resulting in a prosocial society.
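The core partner-selection mechanism can be sketched deterministically (an illustrative sketch, not the paper's reinforcement-learning setup; the agent names and the last-action memory are assumptions of this example):

```python
# With partner selection, selfish agents simply avoid anyone last observed
# defecting. A defector therefore loses all partners and forgoes future
# payoffs, so retaliation against defection emerges from a purely selfish
# objective -- no explicit prosocial motivation is built in.

def choose_partner(candidates, last_action):
    """Pick the first candidate not last seen defecting, else no one."""
    cooperators = [a for a in candidates if last_action.get(a) != "defect"]
    return cooperators[0] if cooperators else None

last = {"alice": "cooperate", "bob": "defect", "carol": "cooperate"}
pick = choose_partner(["bob", "alice", "carol"], last)   # bob is passed over
shunned = choose_partner(["bob"], last)                   # known defector: no game
```

In the paper this exclusion pressure is learned rather than hard-coded, but the incentive structure it creates is the same.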

