Physics Informed Reinforcement Learning for Power Grid Control using Augmented Random Search

2022
Author(s):
Kaveri Mahapatra
Xiaoyuan Fan
Xinya Li
Yunzhi Huang
Qiuhua Huang


2021
Vol 2021
pp. 1-9
Author(s):
Wei Guo
Kai Zhang
Xinjie Wei
Mei Liu

Short-term load forecasting is an important input to the planning and operation of the power grid, but current forecasting methods suffer from poor parameter adaptivity, making it difficult to meet the demand for efficient and accurate grid load forecasting. To address this problem, a short-term load forecasting method for the smart grid based on a multilayer network model is proposed. The method uses integrated empirical mode decomposition (IEMD) to produce orderly and reliable load-state data, providing high-quality data support for the prediction network. An enhanced network inception module adaptively adjusts the parameters of the deep neural network (DNN) prediction model to improve the fitting and tracking ability of the prediction network. At the same time, a hybrid particle swarm optimization algorithm further enhances the dynamic optimization of the deep reinforcement learning model's parameters, enabling accurate short-term load prediction for the smart grid. Simulation results show that the prediction model achieves a mean absolute percentage error $e_{\mathrm{MAPE}}$ of 10.01% and a root-mean-square error $e_{\mathrm{RMSE}}$ of 2.156 MW, demonstrating excellent curve-fitting and load-forecasting ability.
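As a rough sketch of the particle-swarm step described above, the following Python example tunes two hypothetical forecaster hyperparameters (learning rate and hidden-layer width) by minimising a black-box validation error. The objective, bounds, and PSO coefficients are illustrative assumptions; the paper's actual hybrid PSO variant and inception-based DNN are not reproduced here.

import numpy as np

rng = np.random.default_rng(0)

def validation_error(params):
    # Hypothetical stand-in for training the DNN forecaster and returning
    # its validation error; here a toy function with optimum at (0.01, 64).
    lr, hidden = params
    return (lr - 0.01) ** 2 * 1e4 + ((hidden - 64) / 64) ** 2

n_particles, n_iters = 20, 50
lo, hi = np.array([1e-4, 8.0]), np.array([1e-1, 256.0])   # assumed search bounds
pos = rng.uniform(lo, hi, size=(n_particles, 2))           # particle positions
vel = np.zeros_like(pos)
pbest = pos.copy()                                         # per-particle best
pbest_err = np.array([validation_error(p) for p in pos])
gbest = pbest[pbest_err.argmin()].copy()                   # swarm-wide best

for _ in range(n_iters):
    r1, r2 = rng.random((2, n_particles, 1))
    # Standard PSO velocity update: inertia + cognitive + social terms.
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, lo, hi)
    err = np.array([validation_error(p) for p in pos])
    improved = err < pbest_err
    pbest[improved], pbest_err[improved] = pos[improved], err[improved]
    gbest = pbest[pbest_err.argmin()].copy()

print("best (learning rate, hidden units):", gbest)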


2021
Vol 2021
pp. 1-8
Author(s):
Yanhua Yang
Ligang Yao

The safe and reliable operation of power grid equipment is the basis for ensuring the safe operation of the power system. At present, traditional periodic maintenance exhibits drawbacks such as under-maintenance and over-maintenance. A method for optimizing power grid equipment maintenance plans is therefore proposed, based on a multiagent deep reinforcement learning decision optimization algorithm. An optimization model of the equipment maintenance plan is constructed that accounts for both the reliability and the economics of grid operation, with maintenance constraints and grid security constraints as its constraints. Deep distributed recurrent Q-network (DDRQN) multiagent deep reinforcement learning is adopted to solve the optimization model: it combines the high-dimensional feature-extraction capability of deep learning with the decision-making capability of reinforcement learning to address the multiobjective decision problem of grid maintenance planning. Comparative case studies show that the proposed algorithm offers better optimization and decision-making ability as well as lower maintenance cost, and can therefore realize the optimal maintenance decision. The expected power shortage and maintenance cost obtained by the proposed method are 71.75 MW·h and 496,000 yuan, respectively.
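For readers unfamiliar with recurrent Q-networks, the following minimal PyTorch sketch shows the kind of per-agent model a DDRQN-style learner might use: a GRU over partial observations of the grid state, a linear head producing Q-values over maintenance actions, and one toy temporal-difference update. All dimensions, the random transition data, and the reward convention are assumptions, not the paper's actual setup.

import torch
import torch.nn as nn

class RecurrentQNet(nn.Module):
    def __init__(self, obs_dim, n_actions, hidden=64):
        super().__init__()
        self.gru = nn.GRU(obs_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_actions)

    def forward(self, obs_seq, h=None):
        # obs_seq: (batch, time, obs_dim) partial observations of grid state.
        out, h = self.gru(obs_seq, h)
        return self.head(out), h        # Q-values per timestep and action

obs_dim, n_actions = 8, 4               # assumed: equipment states, maintain/defer actions
net = RecurrentQNet(obs_dim, n_actions)
target = RecurrentQNet(obs_dim, n_actions)
target.load_state_dict(net.state_dict())
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

# One toy Q-learning update on random transition sequences; in practice the
# rewards would encode -(maintenance cost + outage penalty).
obs = torch.randn(16, 10, obs_dim)
actions = torch.randint(0, n_actions, (16, 10))
rewards = torch.randn(16, 10)
q, _ = net(obs)
q_taken = q.gather(-1, actions.unsqueeze(-1)).squeeze(-1)
with torch.no_grad():
    q_next, _ = target(obs)
    max_next = q_next.max(-1).values
# TD target: Q(s_t, a_t) should approach r_t + gamma * max_a Q(s_{t+1}, a).
loss = nn.functional.mse_loss(q_taken[:, :-1],
                              rewards[:, :-1] + 0.99 * max_next[:, 1:])
opt.zero_grad(); loss.backward(); opt.step()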


Energies
2019
Vol 12 (12)
pp. 2291
Author(s):
Ying Ji
Jianhui Wang
Jiacan Xu
Xiaoke Fang
Huaguang Zhang

Driven by the recent advances and applications of smart-grid technologies, our electric power grid is undergoing radical modernization. The microgrid (MG) plays an important role in this modernization by providing a flexible way to integrate distributed renewable energy resources (RES) into the power grid. However, distributed RES such as solar and wind can be highly intermittent and stochastic. These uncertain resources, combined with load demand, produce random variations on both the supply and the demand sides, making it difficult to operate an MG effectively. Focusing on this problem, this paper proposes a novel energy management approach for real-time scheduling of an MG that considers the uncertainty of load demand, renewable energy, and electricity price. Unlike conventional model-based approaches, which require a predictor to estimate the uncertainty, the proposed solution is learning-based and does not require an explicit model of the uncertainty. Specifically, MG energy management is modeled as a Markov Decision Process (MDP) with the objective of minimizing daily operating cost. A deep reinforcement learning (DRL) approach is developed to solve the MDP: a deep feedforward neural network approximates the optimal action-value function, and the deep Q-network (DQN) algorithm is used to train it. The proposed approach takes the state of the MG as input and directly outputs real-time generation schedules. Finally, case studies using real power-grid data from the California Independent System Operator (CAISO) demonstrate the effectiveness of the proposed approach.
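The following minimal PyTorch sketch illustrates the described DQN update: a feedforward network maps the MG state to action values for a small discrete set of generation schedules, and the temporal-difference target bootstraps from the best next-state action value. The state layout, action count, and toy transitions are assumptions, and the replay buffer and separate target network of full DQN are omitted for brevity.

import torch
import torch.nn as nn

state_dim, n_actions = 4, 5   # assumed: [load, RES output, price, hour]; 5 dispatch levels
gamma = 0.99

qnet = nn.Sequential(nn.Linear(state_dim, 64), nn.ReLU(),
                     nn.Linear(64, 64), nn.ReLU(),
                     nn.Linear(64, n_actions))
optimizer = torch.optim.Adam(qnet.parameters(), lr=1e-3)

def dqn_update(s, a, r, s_next):
    # r is the negative operating cost of the interval; the TD target
    # bootstraps from the best action value in the next state.
    q = qnet(s).gather(1, a.unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        target = r + gamma * qnet(s_next).max(1).values
    loss = nn.functional.mse_loss(q, target)
    optimizer.zero_grad(); loss.backward(); optimizer.step()
    return loss.item()

# Toy batch of transitions standing in for sampled replay-buffer data.
batch = 32
s = torch.rand(batch, state_dim)
a = torch.randint(0, n_actions, (batch,))
r = -torch.rand(batch)                # negative cost as reward
s_next = torch.rand(batch, state_dim)
print("TD loss:", dqn_update(s, a, r, s_next))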


Author(s):
Jorai Rijsdijk
Lichao Wu
Guilherme Perin
Stjepan Picek

Deep learning represents a powerful set of techniques for profiling side-channel analysis. Results from the last few years show that neural network architectures such as multilayer perceptrons and convolutional neural networks deliver strong attack performance, making it possible to break targets protected with various countermeasures. Considering that deep learning techniques commonly have a plethora of hyperparameters to tune, such top attack results can come at a high price in preparing the attack. This is especially problematic because the side-channel community commonly relies on random search or grid search to find good hyperparameters. In this paper, we propose to use reinforcement learning to tune convolutional neural network hyperparameters. Within our framework, we investigate the Q-learning paradigm and develop two reward functions based on side-channel metrics. We conduct an investigation on three commonly used datasets and two leakage models; the results show that reinforcement learning can find convolutional neural networks exhibiting top performance with a small number of trainable parameters. Our approach is automated and can easily be adapted to different datasets. Several of our newly developed architectures outperform the current state-of-the-art results. Finally, we make our source code publicly available at https://github.com/AISyLab/Reinforcement-Learning-for-SCA.
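A tabular Q-learning sketch in the spirit of this framework might select one hyperparameter value per step and receive a terminal reward for the resulting architecture. The search space and the stand-in reward below are assumptions; in the paper, the reward comes from side-channel metrics computed after training the CNN.

import random

choices = {                       # assumed toy search space
    "conv_filters": [4, 8, 16],
    "kernel_size": [3, 5, 11],
    "dense_units": [10, 20, 50],
}
slots = list(choices)
Q = {}                            # Q[(slot_index, value_index)] -> value estimate
alpha, gamma, eps = 0.1, 0.9, 0.2

def reward(config):
    # Hypothetical stand-in: in practice, train the CNN and score the attack
    # with a side-channel metric such as guessing entropy.
    return -abs(config["conv_filters"] - 8) - abs(config["dense_units"] - 20)

for episode in range(500):
    config, trajectory = {}, []
    for i, slot in enumerate(slots):
        if random.random() < eps:          # epsilon-greedy exploration
            j = random.randrange(len(choices[slot]))
        else:
            j = max(range(len(choices[slot])), key=lambda k: Q.get((i, k), 0.0))
        config[slot] = choices[slot][j]
        trajectory.append((i, j))
    r = reward(config)                     # terminal reward only
    for (i, j) in reversed(trajectory):
        Q[(i, j)] = Q.get((i, j), 0.0) + alpha * (r - Q.get((i, j), 0.0))
        r *= gamma                         # discount credit toward earlier choices

best = {}
for i, slot in enumerate(slots):
    j = max(range(len(choices[slot])), key=lambda k: Q.get((i, k), 0.0))
    best[slot] = choices[slot][j]
print("best configuration found:", best)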


2020
Vol 34 (04)
pp. 3243-3250
Author(s):
Thomas Barrett
William Clements
Jakob Foerster
Alex Lvovsky

Many real-world problems can be reduced to combinatorial optimization on a graph, where the subset or ordering of vertices that maximizes some objective function must be found. With such tasks often NP-hard and analytically intractable, reinforcement learning (RL) has shown promise as a framework for learning efficient heuristic methods to tackle these problems. Previous works construct the solution subset incrementally, adding one element at a time; however, the irreversible nature of this approach prevents the agent from revising its earlier decisions, which may be necessary given the complexity of the optimization task. We instead propose that the agent should seek to continuously improve the solution by learning to explore at test time. Our approach of exploratory combinatorial optimization (ECO-DQN) is, in principle, applicable to any combinatorial problem that can be defined on a graph. Experimentally, we show that our method produces state-of-the-art RL performance on the Maximum Cut problem. Moreover, because ECO-DQN can start from an arbitrary configuration, it can be combined with other search methods to further improve performance, which we demonstrate using a simple random search.
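The reversible-action idea can be sketched in plain Python: the solution is a vertex assignment, each action flips one vertex to the other side of the cut, and any flip can later be undone because every intermediate state remains a valid solution. Here a greedy explorer stands in for the learned Q-network, and the random weighted graph is a toy assumption.

import numpy as np

rng = np.random.default_rng(1)
n = 12
W = rng.random((n, n)); W = np.triu(W, 1); W = W + W.T   # random weighted graph
x = rng.integers(0, 2, n)                                # initial cut assignment

def cut_value(x):
    return sum(W[i, j] for i in range(n) for j in range(i + 1, n) if x[i] != x[j])

for step in range(5 * n):
    # Gain from flipping vertex v: currently-uncut edges at v become cut
    # (positive) and currently-cut edges become uncut (negative).
    same = (x[:, None] == x[None, :])
    gains = (W * same).sum(1) - (W * ~same).sum(1)
    v = int(gains.argmax())
    if gains[v] <= 0:          # a trained ECO-DQN agent would learn when to
        break                  # keep exploring past such local optima
    x[v] ^= 1                  # reversible: later steps may flip v back

print("final cut value:", cut_value(x))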


2020
Vol 12 (22)
pp. 9333
Author(s):  
Sangwook Han

This paper proposes a reinforcement learning-based approach that optimises bus and line control measures to solve the problem of short circuit currents in power systems. Expansion of power grids leads to concentrated power output and more lines for large-scale transmission, thereby increasing short circuit currents. These currents must be managed systematically through bus and line controls such as separating, merging, or moving a bus, line, or transformer. However, there are countless possible control schemes in an actual grid. Moreover, to comply with power system reliability standards, no bus may exceed its breaker capacity and no line or transformer may be overloaded; examining and selecting a plan therefore requires extensive time and effort. To solve these problems, this paper introduces reinforcement learning to optimise the control measures. By providing appropriate rewards for each control action, a policy is learned and the optimal control measure is obtained through value maximisation. In addition, a technique is presented that systematically defines the bus and line separation measures, limits the range of measures to those applicable to an actual power grid, and reduces the optimisation time while increasing the convergence probability, enabling use in actual power grid operation. In the future, this technique will contribute significantly to establishing power grid operation plans based on short circuit currents.
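As a toy illustration of the reward shaping described above, the following single-state Q-learning sketch treats each discrete action as a bus/line control measure and penalises breaker-capacity violations heavily. The measure set, fault-current effects, costs, and breaker limit are made-up stand-ins for an actual short-circuit study, not a real power-flow model.

import random

actions = ["split_bus", "merge_bus", "move_line", "no_op"]   # assumed measure set
delta_ka = {"split_bus": -6.0, "merge_bus": 4.0, "move_line": -2.0, "no_op": 0.0}
cost = {"split_bus": 3.0, "merge_bus": 2.0, "move_line": 5.0, "no_op": 0.0}
BREAKER_LIMIT = 50.0                           # assumed breaker capacity in kA

def step(current_ka, action):
    nxt = current_ka + delta_ka[action]        # effect on short-circuit level
    reward = -cost[action]                     # effort of the control measure
    if nxt > BREAKER_LIMIT:
        reward -= 100.0                        # reliability-standard violation
    return nxt, reward

Q = {a: 0.0 for a in actions}                  # single-state Q-table for brevity
alpha, eps = 0.1, 0.2
for episode in range(2000):
    a = random.choice(actions) if random.random() < eps else max(Q, key=Q.get)
    _, r = step(52.0, a)                       # assumed initial level of 52 kA
    Q[a] += alpha * (r - Q[a])                 # incremental value update

print("preferred measure:", max(Q, key=Q.get), Q)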

