Real-time Energy Management of Microgrid Using Reinforcement Learning

Author(s):
Wenzheng Bi, Yuankai Shu, Wei Dong, Qiang Yang

Energies, 2019, Vol 12 (12), pp. 2291
Author(s):
Ying Ji, Jianhui Wang, Jiacan Xu, Xiaoke Fang, Huaguang Zhang

Driven by the recent advances and applications of smart-grid technologies, the electric power grid is undergoing radical modernization. The microgrid (MG) plays an important role in this modernization by providing a flexible way to integrate distributed renewable energy resources (RES) into the power grid. However, distributed RES, such as solar and wind, can be highly intermittent and stochastic. These uncertain resources, combined with load demand, cause random variations on both the supply and the demand sides, making it difficult to operate an MG effectively. Addressing this problem, this paper proposes a novel energy management approach for real-time scheduling of an MG that accounts for the uncertainty of load demand, renewable generation, and electricity price. Unlike conventional model-based approaches, which require a predictor to estimate the uncertainty, the proposed solution is learning-based and does not require an explicit model of the uncertainty. Specifically, MG energy management is modeled as a Markov decision process (MDP) with the objective of minimizing the daily operating cost. A deep reinforcement learning (DRL) approach is developed to solve the MDP: a deep feedforward neural network is designed to approximate the optimal action-value function, and the deep Q-network (DQN) algorithm is used to train it. The proposed approach takes the state of the MG as input and directly outputs real-time generation schedules. Finally, case studies using real power-grid data from the California Independent System Operator (CAISO) demonstrate the effectiveness of the proposed approach.
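The abstract formulates MG scheduling as an MDP whose reward is the negative operating cost. As a minimal, self-contained sketch of that formulation, the following substitutes tabular Q-learning for the paper's deep Q-network and uses a toy battery-plus-grid model; every state, action, price, and load figure below is an illustrative assumption, not the paper's model or CAISO data:

```python
import random

# Hypothetical discretization: states are (hour, battery level); actions
# move the battery level down, hold, or up. Numbers are illustrative only.
HOURS = 24
LEVELS = 5            # discretized battery state of charge
ACTIONS = [-1, 0, 1]  # discharge / idle / charge by one level

def step(hour, soc, action, price):
    """One MDP transition: clip the battery level, pay for net grid import."""
    soc2 = min(max(soc + action, 0), LEVELS - 1)
    demand = 2.0                      # flat load, for illustration
    grid = demand + (soc2 - soc)     # charging buys extra energy from the grid
    cost = price[hour] * grid
    return (hour + 1) % HOURS, soc2, -cost  # reward = negative cost

def train(episodes=2000, alpha=0.1, gamma=0.95, eps=0.1, seed=0):
    rng = random.Random(seed)
    # Assumed two-tier tariff: expensive 08:00-20:00, cheap otherwise.
    price = [0.2 if 8 <= h < 20 else 0.1 for h in range(HOURS)]
    Q = {(h, s): [0.0] * len(ACTIONS)
         for h in range(HOURS) for s in range(LEVELS)}
    for _ in range(episodes):
        h, soc = 0, LEVELS // 2
        for _ in range(HOURS):  # one simulated day per episode
            if rng.random() < eps:                      # explore
                a = rng.randrange(len(ACTIONS))
            else:                                       # exploit
                a = max(range(len(ACTIONS)), key=lambda i: Q[(h, soc)][i])
            h2, soc2, r = step(h, soc, ACTIONS[a], price)
            # Standard Q-learning temporal-difference update.
            Q[(h, soc)][a] += alpha * (r + gamma * max(Q[(h2, soc2)])
                                       - Q[(h, soc)][a])
            h, soc = h2, soc2
    return Q, price

Q, price = train()
```

The DQN approach in the paper replaces the table `Q` with a feedforward network so that continuous states (forecast load, price, renewable output) can be handled without discretization; the TD update above becomes a regression loss on the same target.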


Author(s):  
Yujian Ye ◽  
Dawei Qiu ◽  
Jonathan Ward ◽  
Marcin Abram

The problem of real-time autonomous energy management is an application area receiving unprecedented attention from consumers, governments, academia, and industry. This paper showcases the first application of deep reinforcement learning (DRL) to real-time autonomous energy management for a multi-carrier energy system. The proposed approach is tailored to the nature of the energy management problem by posing it in multi-dimensional continuous state and action spaces, in order to coordinate power flows between different energy devices and to adequately capture the synergistic effect of couplings between different energy carriers. This fundamental contribution is a significant step forward from earlier approaches that only sought to control the power output of a single device and neglected the demand-supply coupling of different energy carriers. Case studies on a real-world scenario demonstrate that the proposed method significantly outperforms existing DRL methods as well as model-based control approaches, achieving the lowest energy cost and yielding energy management policies that adapt to system uncertainties.
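One practical detail behind a multi-dimensional continuous action in a multi-carrier system is that a raw action must be projected onto device limits while respecting the coupling between carriers. A minimal sketch, assuming a hypothetical combined-heat-and-power (CHP) unit with a fixed heat-to-power ratio plus a gas boiler; the function name, limits, and ratio are illustrative assumptions, not values from the paper:

```python
def project_action(action, p_max=5.0, h_ratio=1.2, boiler_max=4.0):
    """Clip a raw (CHP electrical power, boiler heat) action to device
    limits and return the resulting coupled energy flows per carrier.

    h_ratio is the assumed heat-to-power ratio of the CHP unit; all
    numbers are illustrative, not from the cited work.
    """
    chp_p, boiler_h = action
    chp_p = min(max(chp_p, 0.0), p_max)          # CHP electrical limit
    boiler_h = min(max(boiler_h, 0.0), boiler_max)  # boiler heat limit
    chp_h = h_ratio * chp_p  # heat output is coupled to electrical output
    return {"electricity": chp_p, "heat": chp_h + boiler_h}

# An out-of-range action is clipped, and the heat carrier still reflects
# the coupled CHP output:
flows = project_action((7.0, -1.0))
```

This coupling is why controlling a single device in isolation, as in the earlier approaches the paper criticizes, cannot capture the demand-supply interaction between carriers: raising CHP electrical output necessarily changes the heat balance as well.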


2016, Vol 171, pp. 372-382
Author(s):
Yuan Zou, Teng Liu, Dexing Liu, Fengchun Sun
