Optimal control about multi-agent wealth exchange and decision-making competence

2022 ◽  
Vol 417 ◽  
pp. 126772
Author(s):  
Lingling Wang ◽  
Shaoyong Lai ◽  
Rongmei Sun

2021 ◽  
Author(s):  
Arthur Campbell

Abstract An important task for organizations is establishing truthful communication between parties with differing interests. This task is made particularly challenging when the accuracy of the information is poorly observed or not observed at all. In these settings, incentive contracts based on the accuracy of information are not very effective. This paper considers an alternative mechanism, one that requires no signal of the accuracy of the information communicated, to provide incentives for truthful communication. Rather, an expert sacrifices future participation in decision-making to influence the current period’s decision in favour of their preferred project. This mechanism captures a notion often described as ‘political capital’, whereby an individual achieves their own preferred decision in the current period at the expense of being able to exert influence in future decisions (‘spending political capital’). When the first-best is not possible in this setting, I show that experts hold more influence than under the first-best and that, in a multi-agent extension, a finite team size is optimal. Together these results suggest that a small number of individuals hold excessive influence in organizations.


Symmetry ◽  
2020 ◽  
Vol 12 (4) ◽  
pp. 631
Author(s):  
Chunyang Hu

In this paper, deep reinforcement learning (DRL) and knowledge transfer are used to achieve effective control of the learning agent for confrontation in multi-agent systems. Firstly, a multi-agent Deep Deterministic Policy Gradient (DDPG) algorithm with parameter sharing is proposed to achieve multi-agent confrontation decision-making. During training, the information of the other agents is introduced into the critic network to improve the confrontation strategy. The parameter-sharing mechanism reduces the overhead of experience storage. In the DDPG algorithm, four neural networks generate the real-time actions and the Q-value function, and a momentum mechanism is used to optimize the training process and accelerate the convergence of the neural networks. Secondly, this paper introduces an auxiliary controller, based on a policy-based reinforcement learning (RL) method, that provides assistant decision-making for the game agent. In addition, an effective reward function helps agents balance their own losses against those of the enemy. Furthermore, this paper also uses the knowledge-transfer method to extend the learning model to more complex scenes and improve the generalization of the proposed confrontation model. Two confrontation decision-making experiments are designed to verify the effectiveness of the proposed method. In the small-scale task scenario, the trained agent successfully learns to fight the competitors and achieves a good winning rate. For large-scale confrontation scenarios, the knowledge-transfer method gradually improves the decision-making level of the learning agent.
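The three structural ingredients named in the abstract — parameter sharing across actors, a centralized critic that sees every agent's observations and actions, and a momentum-based update — can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the linear function approximators, dimensions, and hyperparameters (`lr`, `mu`) are assumptions standing in for the neural networks described in the text.

```python
def dot(w, x):
    """Inner product of a weight vector and a feature vector."""
    return sum(wi * xi for wi, xi in zip(w, x))

class SharedActor:
    """One parameter vector reused by every agent (parameter sharing)."""
    def __init__(self, dim):
        self.w = [0.0] * dim
        self.v = [0.0] * dim  # momentum buffer

    def act(self, obs):
        # Deterministic policy, DDPG-style: action is a function of obs.
        return dot(self.w, obs)

    def update(self, grad, lr=0.05, mu=0.9):
        # Momentum mechanism: v <- mu*v + grad ; w <- w - lr*v
        self.v = [mu * vi + gi for vi, gi in zip(self.v, grad)]
        self.w = [wi - lr * vi for wi, vi in zip(self.w, self.v)]

class CentralCritic:
    """Q(o_1..o_N, a_1..a_N): the critic is conditioned on the
    observations and actions of ALL agents, as in the abstract."""
    def __init__(self, n_agents, obs_dim):
        self.w = [0.0] * (n_agents * (obs_dim + 1))

    def features(self, obs_all, act_all):
        feats = []
        for obs, act in zip(obs_all, act_all):
            feats.extend(obs)   # each agent's observation
            feats.append(act)   # and its action
        return feats

    def q(self, obs_all, act_all):
        return dot(self.w, self.features(obs_all, act_all))
```

Because both agents call the same `SharedActor`, a gradient step taken on one agent's experience immediately benefits the other, which is why parameter sharing reduces the amount of experience each agent must store and learn from independently.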

