DeepComp: Deep reinforcement learning based renewable energy error compensable forecasting

2021 ◽ Vol 294 ◽ pp. 116970
Author(s): Jaeik Jeong, Hongseok Kim


Energies ◽ 2021 ◽ Vol 14 (9) ◽ pp. 2700
Author(s): Grace Muriithi, Sunetra Chowdhury

In the near future, microgrids will become more prevalent as they play a critical role in integrating distributed renewable energy resources into the main grid. Nevertheless, renewable energy sources such as solar and wind can be extremely volatile because they are weather dependent. These resources, coupled with demand, can lead to random variations on both the generation and load sides, thus complicating optimal energy management. In this article, a reinforcement learning approach has been proposed to deal with this non-stationary scenario, in which the energy management system (EMS) is modelled as a Markov decision process (MDP). A novel modification of the control problem has been presented that improves the use of energy stored in the battery so that the dynamic demand is not subjected to future high grid tariffs. A comprehensive reward function has also been developed that reduces the exploration of infeasible actions, thus improving the performance of the data-driven technique. A Q-learning algorithm is then proposed to minimize the operational cost of the microgrid under unknown future information. To assess the performance of the proposed EMS, a comparison study between a trading EMS model and a non-trading case is performed using a typical commercial load curve and PV profile over a 24-h horizon. Numerical simulation results indicate that the agent learns to select an optimized energy schedule that minimizes energy cost (the cost of power purchased from the utility plus battery wear cost) in all the studied cases. Comparing operational costs, the trading EMS model was found to reduce costs by 4.033% in the summer season and 2.199% in the winter season relative to the non-trading EMS.
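
A minimal sketch of the kind of tabular Q-learning described above, applied to a toy battery-scheduling MDP; the state discretization, tariff, load profile, and infeasible-action penalty are illustrative assumptions, not taken from the paper:

```python
import random
from collections import defaultdict

# Hypothetical discretization: state = (hour, battery level); action in {discharge, idle, charge}.
ACTIONS = [-1, 0, 1]                             # energy units moved into the battery per hour
MAX_SOC = 4                                      # battery holds 0..4 energy units
PRICE = [0.10] * 7 + [0.25] * 12 + [0.10] * 5    # toy 24-h tariff (USD/unit), peak 07:00-19:00
LOAD = [1] * 24                                  # flat 1-unit demand every hour (illustrative)

def step(hour, soc, action):
    """Apply an action, clip it to the feasible range, and return (cost, next_hour, next_soc)."""
    infeasible = not (0 <= soc + action <= MAX_SOC)
    action = max(-soc, min(action, MAX_SOC - soc))        # clip to what the battery allows
    grid_import = max(LOAD[hour] + action, 0)             # demand plus charging, minus discharging
    cost = PRICE[hour] * grid_import + (1.0 if infeasible else 0.0)  # penalize infeasible attempts
    return cost, (hour + 1) % 24, soc + action

Q = defaultdict(float)                                    # Q[(hour, soc, action)] -> expected cost
alpha, gamma, eps = 0.1, 0.95, 0.1

for episode in range(2000):
    hour, soc = 0, 2
    for _ in range(24):
        if random.random() < eps:
            a = random.choice(ACTIONS)                              # explore
        else:
            a = min(ACTIONS, key=lambda x: Q[(hour, soc, x)])       # exploit: lowest expected cost
        cost, next_hour, next_soc = step(hour, soc, a)
        best_next = min(Q[(next_hour, next_soc, x)] for x in ACTIONS)
        Q[(hour, soc, a)] += alpha * (cost + gamma * best_next - Q[(hour, soc, a)])
        hour, soc = next_hour, next_soc
```

Because the objective is cost minimization, the greedy action and the bootstrap target both take the minimum over the action values rather than the maximum used in reward-maximizing Q-learning.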


Author(s): Philip Odonkor, Kemper Lewis

Abstract In the wake of the increasing proliferation of renewable energy and distributed energy resources (DERs), grid designers and operators alike face several emerging challenges in curbing allocative grid inefficiencies and maintaining operational stability. One such challenge relates to the increased price volatility within real-time electricity markets, a result of the inherent intermittency of renewable energy. With this challenge, however, comes heightened economic interest in exploiting the arbitrage potential of price volatility for demand-side energy cost savings. To this end, this paper aims to maximize the arbitrage value of electricity through the optimal design of control strategies for DERs. Formulated as an arbitrage maximization problem using design optimization, and solved using reinforcement learning, the proposed approach is applied to shared DERs within multi-building residential clusters. We demonstrate its feasibility across three unique building cluster demand profiles, observing notable energy cost reductions over baseline values. This highlights a capability for generalized learning across multiple building clusters and the ability to design efficient arbitrage policies for energy cost minimization. Finally, the approach is shown to be computationally tractable, designing efficient strategies in approximately 5 hours of training over a simulation time horizon of 1 month.
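
As a rough illustration of the arbitrage idea (not the paper's design-optimization or reinforcement learning method), the sketch below evaluates a simple threshold policy that charges shared storage when the real-time price is below its daily mean and discharges when it is above; the prices, capacity, power limit, and efficiency are hypothetical:

```python
import statistics

def arbitrage_value(prices, capacity=10.0, power=2.0, efficiency=0.9):
    """Greedy threshold arbitrage: buy energy below the mean price, sell above it.
    Returns total profit and the resulting state-of-charge trace."""
    mean_price = statistics.mean(prices)
    soc, profit, trace = 0.0, 0.0, []
    for p in prices:
        if p < mean_price and soc < capacity:          # cheap hour: charge
            e = min(power, capacity - soc)
            soc += e * efficiency                      # conversion losses on the way in
            profit -= p * e
        elif p > mean_price and soc > 0.0:             # expensive hour: discharge
            e = min(power, soc)
            soc -= e
            profit += p * e
        trace.append(soc)
    return profit, trace

# Toy real-time price curve for one day (USD/kWh), purely illustrative
hourly_prices = [0.08] * 6 + [0.20] * 4 + [0.12] * 4 + [0.30] * 4 + [0.10] * 6
value, _ = arbitrage_value(hourly_prices)
print(f"arbitrage profit over 24 h: {value:.2f} USD")
```

A learned policy, as in the paper, would replace the fixed mean-price threshold with decisions conditioned on the cluster's demand profile and anticipated price dynamics.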


Sensors ◽ 2021 ◽ Vol 22 (1) ◽ pp. 270
Author(s): Mari Carmen Domingo

Unmanned Aerial Vehicle (UAV)-assisted cellular networks over the millimeter-wave (mmWave) frequency band can meet the requirements of high data rates and flexible coverage in next-generation communication networks. However, the high propagation loss and the use of a large number of antennas in mmWave networks give rise to high energy consumption, and UAVs are constrained by their low-capacity onboard batteries. Energy harvesting (EH) is a viable solution to reduce the energy cost of UAV-enabled mmWave networks. However, the random nature of renewable energy makes it challenging to maintain robust connectivity in UAV-assisted terrestrial cellular networks. Energy cooperation allows UAVs to transfer their excess energy to other UAVs whose energy reserves are depleted. In this paper, we propose a power allocation algorithm based on energy harvesting and energy cooperation to maximize the throughput of a UAV-assisted mmWave cellular network. Since the channel state is uncertain and the amount of harvested energy can be treated as a stochastic process, we propose a multi-agent deep reinforcement learning (DRL) algorithm, Multi-Agent Deep Deterministic Policy Gradient (MADDPG), to solve the renewable energy resource allocation problem for throughput maximization. The simulation results show that the proposed algorithm outperforms the Random Power (RP), Maximal Power (MP), and value-based Deep Q-Learning (DQL) algorithms in terms of network throughput.
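
A structural sketch of the MADDPG setup described above, with decentralized actors and a centralized critic (centralized training, decentralized execution); the observation and action dimensions, network sizes, and single gradient step are illustrative assumptions, and a full implementation would also train the critic against TD targets sampled from a replay buffer and maintain target networks:

```python
import torch
import torch.nn as nn

# Hypothetical sizes: each UAV observes its own channel gain, battery level, and harvested
# energy (OBS_DIM = 3) and chooses one continuous transmit-power level (ACT_DIM = 1).
N_AGENTS, OBS_DIM, ACT_DIM = 3, 3, 1

class Actor(nn.Module):
    """Decentralized actor: maps one agent's observation to a normalized power level in [0, 1]."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(OBS_DIM, 64), nn.ReLU(),
                                 nn.Linear(64, ACT_DIM), nn.Sigmoid())

    def forward(self, obs):
        return self.net(obs)

class CentralCritic(nn.Module):
    """Centralized critic: scores the joint observations and actions of all agents."""
    def __init__(self):
        super().__init__()
        joint_dim = N_AGENTS * (OBS_DIM + ACT_DIM)
        self.net = nn.Sequential(nn.Linear(joint_dim, 64), nn.ReLU(), nn.Linear(64, 1))

    def forward(self, all_obs, all_act):
        return self.net(torch.cat([all_obs, all_act], dim=-1))

actors = [Actor() for _ in range(N_AGENTS)]
critic = CentralCritic()
actor_opts = [torch.optim.Adam(a.parameters(), lr=1e-3) for a in actors]

# One illustrative actor update on random data.
batch_obs = torch.rand(32, N_AGENTS, OBS_DIM)
for i, opt in enumerate(actor_opts):
    acts = []
    for j in range(N_AGENTS):
        a = actors[j](batch_obs[:, j])
        acts.append(a if j == i else a.detach())          # only agent i's action carries gradients
    q = critic(batch_obs.flatten(1), torch.cat(acts, dim=-1))
    loss = -q.mean()                                      # ascend the centralized Q-value
    opt.zero_grad()
    loss.backward()
    opt.step()
```

At execution time each UAV only needs its own actor and local observation; the centralized critic is used during training to cope with the non-stationarity introduced by the other learning agents.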

