Research on Collaborative Control Method of Manufacturing Process Based on Distributed Multi-Agent Cooperation

Author(s):  
Zhipeng Li ◽  
Xuesong Jiang ◽  
Shuaishuai Yao ◽  
Dongwang Li
2021 ◽  
Vol 11 (4) ◽  
pp. 1816
Author(s):  
Luyu Liu ◽  
Qianyuan Liu ◽  
Yong Song ◽  
Bao Pang ◽  
Xianfeng Yuan ◽  
...  

Collaborative control of a dual-arm robot refers to avoiding collisions while working together to accomplish a task. To prevent the two arms from colliding, the control strategy of each robot arm must avoid competing with, and cooperate with, the other arm during motion planning. In this paper, a dual-arm deep deterministic policy gradient (DADDPG) algorithm is proposed based on deep reinforcement learning with multi-agent cooperation. First, the construction of the replay buffer in the hindsight experience replay algorithm is introduced, and the modeling and training method of the multi-agent deep deterministic policy gradient algorithm is explained. Second, a control strategy is assigned to each robotic arm, and the arms share their observations and actions. The dual-arm robot is trained under a mechanism of "rewarding cooperation and punishing competition". Finally, the effectiveness of the algorithm is verified in the Reach, Push, and Pick-up simulation environments built in this study. The experimental results show that a robot trained with the DADDPG algorithm can accomplish cooperative tasks. The algorithm lets the robots explore the action space autonomously while reducing competition between the arms, giving the collaborative robots better adaptability to coordination tasks.
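The "rewarding cooperation and punishing competition" mechanism can be illustrated with a minimal reward-shaping sketch. This is not the paper's implementation: the function name, parameter names, and the specific bonus/penalty values are our own illustrative assumptions. The key idea shown is that both arms receive the same shared scalar reward, so neither arm can profit at the other's expense.

```python
def dual_arm_reward(task_reward_a, task_reward_b, min_arm_distance,
                    safe_distance=0.05, coop_bonus=1.0, collision_penalty=5.0):
    """Shared reward under a 'reward cooperation, punish competition' scheme.

    Hypothetical shaping sketch: both arms receive the same scalar, so a
    step that helps one arm at the other's expense is never preferred.
    """
    reward = task_reward_a + task_reward_b
    # Cooperation bonus: both arms' sub-tasks succeeded this step.
    if task_reward_a > 0 and task_reward_b > 0:
        reward += coop_bonus
    # Competition penalty: the arms came closer than the safety margin.
    if min_arm_distance < safe_distance:
        reward -= collision_penalty
    return reward
```

In a DDPG-style training loop, this shared reward would be stored in the replay buffer alongside the joint observation and the actions of both arms.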


Games ◽  
2021 ◽  
Vol 12 (1) ◽  
pp. 8
Author(s):  
Gustavo Chica-Pedraza ◽  
Eduardo Mojica-Nava ◽  
Ernesto Cadena-Muñoz

Multi-Agent Systems (MAS) have been used to solve several optimization problems in control systems. MAS make it possible to model the interactions between agents and the complexity of the system, generating functional models that are closer to reality. However, these approaches assume that information between agents is always available, which amounts to assuming a full-information model. Growing attention has been paid to scenarios where information constraints are a relevant issue. In this sense, game-theoretic approaches appear as a useful technique that uses the concept of strategies to analyze agent interactions and maximize agent outcomes. In this paper, we propose a distributed learning-based control method that allows analyzing the effect of exploration in MAS. The dynamics obtained use Q-learning from reinforcement learning to incorporate exploration into the classic, exploration-free Replicator Dynamics equation. The Boltzmann distribution is then used to introduce the Boltzmann-Based Distributed Replicator Dynamics as a tool for controlling agents' behaviors. This distributed approach can be used in several engineering applications where communication constraints between agents matter. The behavior of the proposed method is analyzed using a smart-grid application for validation purposes. Results show that, despite the lack of full information about the system, controlling some parameters of the method yields behavior similar to that of traditional centralized approaches.
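The core idea of combining replicator dynamics with Boltzmann exploration can be sketched as follows. This is a simplified discrete-time illustration under our own assumptions, not the paper's Boltzmann-Based Distributed Replicator Dynamics: the population shares drift toward the Boltzmann (softmax) distribution over estimated Q-values, with the inverse temperature `beta` controlling the exploration/exploitation trade-off.

```python
import numpy as np

def boltzmann_replicator_step(x, q, beta=1.0, dt=0.1):
    """One Euler step of a Boltzmann-weighted replicator-style update (sketch).

    x    : current strategy shares (non-negative, summing to 1)
    q    : estimated Q-values/payoffs per strategy
    beta : inverse temperature; large beta -> exploitation,
           small beta -> exploration
    dt   : step size of the Euler discretization
    """
    x = np.asarray(x, dtype=float)
    q = np.asarray(q, dtype=float)
    # Boltzmann distribution over the Q-values (max-shifted for stability).
    soft = np.exp(beta * q - np.max(beta * q))
    soft /= soft.sum()
    # Shares drift toward the Boltzmann distribution.
    x_new = x + dt * (soft - x)
    return x_new / x_new.sum()
```

With `beta` large, the update concentrates on the best-performing strategy (pure exploitation); with `beta` near zero, it pulls the shares toward a uniform mix, which is how exploration enters the otherwise exploration-free replicator equation.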


2021 ◽  
pp. 107754632110340
Author(s):  
Jia Wu ◽  
Ning Liu ◽  
Wenyan Tang

This study investigates the tracking consensus problem for a class of unknown nonlinear multi-agent systems. A novel data-driven protocol for this problem is proposed using the model-free adaptive control method. To obtain a faster convergence speed, a one-step-ahead desired signal is introduced to construct the novel protocol. A switching communication topology is considered, which is not required to be strongly connected at all times. Through rigorous analysis, sufficient conditions are given to guarantee that the tracking errors of all agents converge under the novel protocol. Examples are given to validate the effectiveness of the results derived in this article.
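The flavor of such a data-driven protocol can be illustrated with a single-agent, single-input sketch of compact-form model-free adaptive control. This is not the paper's multi-agent protocol: the plant, the gains, and the projection-style estimator below are standard MFAC ingredients chosen for illustration. The control law uses the one-step-ahead desired signal `ref[k+1]`, mirroring the idea used to speed up convergence.

```python
def simulate_mfac(ref, rho=0.6, lam=1.0, eta=0.5, mu=1.0):
    """Compact-form model-free adaptive control, SISO sketch.

    ref : desired output trajectory; ref[k+1] (one step ahead) drives u[k].
    The plant below is an arbitrary unknown nonlinear system used only
    to exercise the controller; only input/output data reach the control law.
    """
    N = len(ref)
    y = [0.0] * N          # plant outputs
    u = [0.0] * N          # control inputs
    phi = [1.0] * N        # pseudo-partial-derivative (PPD) estimate
    for k in range(1, N - 1):
        dy = y[k] - y[k - 1]
        du = u[k - 1] - u[k - 2] if k >= 2 else u[k - 1]
        # Projection-style PPD estimation from measured I/O data only.
        phi[k] = phi[k - 1] + eta * du / (mu + du**2) * (dy - phi[k - 1] * du)
        # Control law driven by the one-step-ahead desired signal ref[k+1].
        u[k] = u[k - 1] + rho * phi[k] / (lam + phi[k]**2) * (ref[k + 1] - y[k])
        # "Unknown" nonlinear plant (illustrative only).
        y[k + 1] = y[k] / (1 + y[k]**2) + u[k]
    return y
```

In the paper's multi-agent setting, each agent would run such a data-driven law, with the tracking target built from the states of its neighbors under the switching topology.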


2018 ◽  
Author(s):  
Yanbin Zheng ◽  
Guangfu Ma ◽  
Linlin Wang ◽  
Pengxue Xi
