Pinning Decision in Interconnected Systems with Communication Disruptions under Multi-Agent Distributed Control Topology

2021, Vol 2 (1), pp. 18-36
Author(s): Samson S. Yu, Tat Kei Chau

In this study, we propose a decision-making strategy for pinning-based distributed multi-agent (PDMA) automatic generation control (AGC) in islanded microgrids subject to stochastic communication disruptions. The target microgrid is treated as a cyber-physical system, wherein the physical microgrid is modeled as an inverter-interfaced autonomous grid with a detailed dynamic formulation, and the communication network topology is regarded as a cyber-system independent of its physical connections. The primary goal of the proposed method is to decide the minimum number of generators to be pinned, and their identities, among all distributed generators (DGs). The pinning decisions are made with a genetic algorithm (GA) based on complex network theory, so as to synchronize and regulate the frequencies and voltages of all generator bus-bars in a PDMA control structure, i.e., without resorting to a central AGC agent. Thereafter, the mapping from cyber-system topology to pinning decision is constructed using a deep-learning (DL) technique, so that a new pinning decision can be made almost instantly upon detecting a new cyber-system topology after a stochastic communication disruption. The proposed decision-making approach is verified on a 10-generator, 38-bus microgrid through time-domain simulation for transient stability analysis. Simulations show that the proposed pinning decision-making method achieves robust frequency control with a minimum number of active communication channels.
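The abstract does not spell out the GA's objective, so the Python sketch below is only a minimal illustration of GA-based pinning selection. It assumes, as is common in complex-network pinning control, that fitness is the smallest eigenvalue of the grounded Laplacian L + diag(pins) of the communication graph (larger values indicate a graph that is easier to pin-synchronize); the ring topology and all GA parameters are hypothetical.

# Minimal GA-based pinning-selection sketch; the grounded-Laplacian fitness
# is an assumption standing in for the paper's complex-network criterion.
import numpy as np

rng = np.random.default_rng(0)

def grounded_laplacian_fitness(adj, pins):
    """Smallest eigenvalue of L + diag(pins): larger => easier to pin-synchronize."""
    L = np.diag(adj.sum(axis=1)) - adj
    return np.linalg.eigvalsh(L + np.diag(pins.astype(float)))[0]

def ga_pinning(adj, n_pins, pop_size=40, generations=100, p_mut=0.1):
    n = adj.shape[0]
    def random_individual():
        mask = np.zeros(n, dtype=int)
        mask[rng.choice(n, n_pins, replace=False)] = 1
        return mask
    pop = [random_individual() for _ in range(pop_size)]
    for _ in range(generations):
        scores = np.array([grounded_laplacian_fitness(adj, ind) for ind in pop])
        order = np.argsort(scores)[::-1]                  # maximize fitness
        elite = [pop[i] for i in order[: pop_size // 2]]
        children = []
        while len(children) < pop_size - len(elite):
            a, b = rng.choice(len(elite), 2, replace=False)
            # Uniform crossover, then repair to keep exactly n_pins pins.
            child = np.where(rng.random(n) < 0.5, elite[a], elite[b])
            if rng.random() < p_mut:
                child[rng.integers(n)] ^= 1
            ones = np.flatnonzero(child)
            if len(ones) > n_pins:
                child[rng.choice(ones, len(ones) - n_pins, replace=False)] = 0
            elif len(ones) < n_pins:
                zeros = np.flatnonzero(child == 0)
                child[rng.choice(zeros, n_pins - len(ones), replace=False)] = 1
            children.append(child)
        pop = elite + children
    best = max(pop, key=lambda ind: grounded_laplacian_fitness(adj, ind))
    return np.flatnonzero(best)

# Toy 10-node ring standing in for the cyber-system communication graph.
n = 10
adj = np.zeros((n, n))
for i in range(n):
    adj[i, (i + 1) % n] = adj[(i + 1) % n, i] = 1
print("pinned DGs:", ga_pinning(adj, n_pins=3))

In an online setting, the DL model described in the abstract would replace the GA loop at decision time, having been trained offline on (topology, pinning decision) pairs produced by runs like the one above.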

2021
Author(s): Arthur Campbell

Abstract An important task for organizations is establishing truthful communication between parties with differing interests. This task is made particularly challenging when the accuracy of the information is observed poorly or not at all; in these settings, incentive contracts based on the accuracy of information will not be very effective. This paper considers an alternative mechanism that provides incentives for truthful communication without requiring any signal of the accuracy of the information communicated. Rather, an expert sacrifices future participation in decision-making to influence the current period’s decision in favour of their preferred project. This mechanism captures a notion often described as ‘political capital’, whereby an individual achieves their own preferred decision in the current period at the expense of being able to exert influence in future decisions (‘spending political capital’). When the first-best is not possible in this setting, I show that experts hold more influence than under the first-best and that, in a multi-agent extension, a finite team size is optimal. Together these results suggest that a small number of individuals hold excessive influence in organizations.
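As a purely illustrative aid, and not the paper's formal model, the toy computation below contrasts an expert's discounted payoff from remaining a truthful participant indefinitely with a hypothetical one-off gain from "spending political capital": forcing the preferred project once and then losing participation in all future decisions. All payoff values and the functional form are invented for illustration.

# Toy two-option comparison (illustrative only; not the paper's model).
def expert_value(bias, delta, horizon=50):
    """Lifetime payoff of truthful participation vs spending capital now.

    bias:  extra payoff the expert gets when their preferred project is chosen.
    delta: per-period discount factor in (0, 1).
    """
    per_period_truthful = 1.0  # assumed payoff per period while participating
    truthful = sum(per_period_truthful * delta**t for t in range(horizon))
    spend = per_period_truthful + bias  # one-off gain, then exclusion
    return truthful, spend

for delta in (0.3, 0.6, 0.9):
    truthful, spend = expert_value(bias=2.0, delta=delta)
    choice = "stay truthful" if truthful >= spend else "spend capital"
    print(f"delta={delta}: truthful={truthful:.2f}, spend={spend:.2f} -> {choice}")

The pattern the toy model produces, that patient experts (high delta) preserve their capital while impatient ones spend it, is consistent with the intuition in the abstract, though the paper's equilibrium analysis is far richer.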


Author(s): Abdullahi Bala Kunya, Mehmet Argin, Yusuf Jibril, Yusuf Abubakar Shaaban

Abstract
Background: Automatic generation control (AGC) of a multi-area interconnected power system (IPS) is often designed with negligible cross-coupling between the load frequency control (LFC) and automatic voltage regulation (AVR) loops, because the AVR loop is considerably faster than the LFC loop. However, by introducing a slow optimal control action on the AVR, a positive damping effect can be achieved on the LFC loop, thereby improving the frequency control. In this paper, LFC synchronized with AVR in a three-area IPS is proposed. A model predictive controller (MPC) configured in a dense distributed pattern is used as the supplementary controller, owing to its online set-point tracking. The dynamics of the IPS subjected to multi-area step and random load disturbances are studied, and the efficacy of the developed scheme is ascertained by simulating the disturbed system in MATLAB/Simulink.
Results: Based on a comparative analysis of the system responses, it is established that cross-coupling the LFC loop with AVR yields reductions of 66.45% and 59.09% in the maximum frequency and tie-line power deviations, respectively, while the respective settling times are reduced by 29.68% and 22.77% compared with the uncoordinated control scheme. In addition, the standard deviation and variance of the integral time absolute error (ITAE) of the system's responses are reduced by 23.21% and 20.83%, respectively, compared with those obtained in a similar study.
Conclusions: The reduction in the maximum deviations and settling times of the system states indicates that introducing voltage control via the AVR loop improves the frequency control significantly, while the lower standard deviation and variance of the ITAE signify improved robustness of the developed algorithm. However, this improvement comes at the expense of controller size and computational complexity: in the uncoordinated scheme the control vector is one-dimensional, while in the coordinated scheme it is two-dimensional for each control area (CA).
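The figures above (maximum deviation, settling time, ITAE) come from MATLAB/Simulink; as a language-agnostic illustration, the Python sketch below shows one plausible way to post-process a frequency-deviation trace into those same metrics. The 2% settling band, its reference to the peak deviation, and the toy second-order-like response are all assumptions.

# Post-processing sketch for AGC response metrics (band and signal assumed).
import numpy as np

def response_metrics(t, y, settle_band=0.02):
    """Maximum absolute deviation, settling time, and ITAE of a response y(t)."""
    max_dev = np.max(np.abs(y))
    band = settle_band * max_dev           # 2% band around zero (assumption)
    outside = np.flatnonzero(np.abs(y) > band)
    settling_time = t[outside[-1]] if outside.size else t[0]
    itae = np.trapz(t * np.abs(y), t)      # integral of time-weighted |error|
    return max_dev, settling_time, itae

# Toy frequency deviation after a step load disturbance.
t = np.linspace(0, 20, 2001)
y = -0.02 * np.exp(-0.4 * t) * np.cos(2.0 * t)
max_dev, ts, itae = response_metrics(t, y)
print(f"max |df| = {max_dev:.4f} pu, settling time = {ts:.2f} s, ITAE = {itae:.4f}")

Running the same post-processing on coordinated and uncoordinated responses and taking percentage differences would reproduce comparisons of the kind reported in the Results section.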


Symmetry, 2020, Vol 12 (4), pp. 631
Author(s): Chunyang Hu

In this paper, deep reinforcement learning (DRL) and knowledge transfer are used to achieve effective control of the learning agent for confrontation in multi-agent systems. Firstly, a multi-agent Deep Deterministic Policy Gradient (DDPG) algorithm with parameter sharing is proposed to achieve multi-agent confrontation decision-making. During training, the information of other agents is introduced into the critic network to improve the confrontation strategy, and the parameter-sharing mechanism reduces the cost of experience storage. In the DDPG algorithm, four neural networks (the online and target actor and critic) generate the real-time actions and Q-value estimates, and a momentum mechanism is used to accelerate the convergence of training. Secondly, this paper introduces an auxiliary controller using a policy-based reinforcement learning (RL) method to provide assistant decision-making for the game agent, and an effective reward function helps agents balance the losses inflicted on the enemy against those of their own side. Furthermore, a knowledge-transfer method extends the learning model to more complex scenes and improves the generalization of the proposed confrontation model. Two confrontation decision-making experiments verify the effectiveness of the proposed method: in a small-scale task scenario, the trained agent successfully learns to fight its competitors and achieves a good winning rate, while for large-scale confrontation scenarios, the knowledge-transfer method gradually improves the decision-making level of the learning agent.
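A minimal PyTorch sketch of the two core ideas named above: parameter sharing (every agent reuses one actor and one critic, with four networks counting the targets) and a centralized critic that receives the other agents' observations and actions, trained with a momentum optimizer. Dimensions, layer sizes, and the SGD-with-momentum choice are assumptions, not the paper's exact configuration.

# Parameter-shared multi-agent DDPG sketch (sizes and optimizer assumed).
import torch
import torch.nn as nn

N_AGENTS, OBS_DIM, ACT_DIM = 3, 8, 2

class Actor(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(OBS_DIM, 64), nn.ReLU(),
            nn.Linear(64, ACT_DIM), nn.Tanh(),   # bounded continuous action
        )
    def forward(self, obs):
        return self.net(obs)                      # applies per agent

class CentralCritic(nn.Module):
    """Q(o_1..o_N, a_1..a_N): other agents' info enters the critic input."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(N_AGENTS * (OBS_DIM + ACT_DIM), 128), nn.ReLU(),
            nn.Linear(128, 1),
        )
    def forward(self, all_obs, all_act):
        return self.net(torch.cat([all_obs.flatten(1), all_act.flatten(1)], dim=1))

# Four networks as in DDPG: online/target actor and online/target critic,
# shared by every agent instead of one set per agent.
actor, critic = Actor(), CentralCritic()
target_actor, target_critic = Actor(), CentralCritic()
target_actor.load_state_dict(actor.state_dict())
target_critic.load_state_dict(critic.state_dict())

# SGD with momentum stands in for the paper's momentum-based training trick.
actor_opt = torch.optim.SGD(actor.parameters(), lr=1e-3, momentum=0.9)
critic_opt = torch.optim.SGD(critic.parameters(), lr=1e-3, momentum=0.9)

# One illustrative update on a random batch of joint transitions.
batch = 32
obs = torch.randn(batch, N_AGENTS, OBS_DIM)
act = torch.randn(batch, N_AGENTS, ACT_DIM)
rew = torch.randn(batch, 1)
next_obs = torch.randn(batch, N_AGENTS, OBS_DIM)

with torch.no_grad():
    next_act = target_actor(next_obs)                    # shared actor for all agents
    y = rew + 0.99 * target_critic(next_obs, next_act)   # TD target
critic_loss = nn.functional.mse_loss(critic(obs, act), y)
critic_opt.zero_grad(); critic_loss.backward(); critic_opt.step()

actor_loss = -critic(obs, actor(obs)).mean()             # deterministic policy gradient
actor_opt.zero_grad(); actor_loss.backward(); actor_opt.step()
print(f"critic_loss={critic_loss.item():.3f}, actor_loss={actor_loss.item():.3f}")

Because one set of weights serves all agents, the replay buffer stores joint transitions once rather than per agent, which is one plausible reading of the storage saving the abstract attributes to parameter sharing.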

