Hierarchical Simulated Annealing-Reinforcement Learning Energy Management for Smart Grids

2013 ◽  
Vol 805-806 ◽  
pp. 1206-1209 ◽  
Author(s):  
Xin Li ◽  
Chuan Zhi Zang ◽  
Xiao Ning Qin ◽  
Yang Zhang ◽  
Dan Yu

For energy management problems in smart grids, a hybrid intelligent hierarchical controller based on simulated annealing (SA) and reinforcement learning (RL) is proposed. The SA algorithm is used to adjust the parameters of the controller. The RL algorithm has a particular advantage: it is independent of a mathematical model of the system and needs only simple fuzzy information obtained through trial-and-error interaction with the environment. Through learning, the proposed controller can take the best actions to regulate energy usage for equipment, achieving high comfort and low electricity cost at the same time. Simulation results show that the proposed load controller can improve energy-usage performance in smart grids.
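The hybrid scheme described above can be sketched in miniature: a tabular Q-learning load controller whose learning rate and exploration rate are tuned by a simulated annealing outer loop with Metropolis acceptance. The toy price levels, comfort value, and parameter ranges below are illustrative assumptions, not details from the paper.

```python
import math
import random

random.seed(0)

# Toy load-control setting (all numbers illustrative, not from the paper):
# state = price level (0 low, 1 high); action = 0 defer / 1 run appliance.
PRICES = [0.2, 1.0]   # electricity cost per unit at each price level
COMFORT = 0.6         # comfort value gained by running the appliance

def run_episode(alpha, epsilon, steps=200):
    """Total reward of a tabular Q-learning controller with the given
    learning rate (alpha) and exploration rate (epsilon)."""
    q = [[0.0, 0.0], [0.0, 0.0]]
    total = 0.0
    for _ in range(steps):
        s = random.randint(0, 1)                  # observed price level
        if random.random() < epsilon:
            a = random.randint(0, 1)              # explore
        else:
            a = 0 if q[s][0] >= q[s][1] else 1    # exploit
        r = COMFORT * a - PRICES[s] * a           # comfort minus cost
        q[s][a] += alpha * (r - q[s][a])          # stateless TD update
        total += r
    return total

def anneal(iters=50, temp=1.0, cooling=0.95):
    """Simulated annealing over the controller's (alpha, epsilon) pair."""
    params = [0.5, 0.5]
    current = run_episode(*params)
    best_params, best = params, current
    for _ in range(iters):
        cand = [min(0.99, max(0.01, p + random.uniform(-0.1, 0.1)))
                for p in params]
        score = run_episode(*cand)
        # Metropolis rule: always accept improvements, sometimes worse moves.
        if score > current or random.random() < math.exp((score - current) / temp):
            params, current = cand, score
            if score > best:
                best_params, best = cand, score
        temp *= cooling
    return best_params, best
```

Here SA plays the role the abstract assigns it: tuning the controller's parameters by scoring candidate settings on trial-and-error episodes, while the inner RL loop needs no model of the environment.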

2013 ◽  
Vol 860-863 ◽  
pp. 2423-2426
Author(s):  
Xin Li ◽  
Dan Yu ◽  
Chuan Zhi Zang

With the development of smart grids, customer participation has reinvigorated interest in demand-side features such as load control for domestic users. A genetic-algorithm-based reinforcement learning (RL) load controller is proposed. The genetic algorithm is used to adjust the parameters of the controller. The RL algorithm, which is independent of a mathematical model of the system, is particularly well suited to load control. Through learning, the proposed controller can take the best actions to regulate energy usage for equipment, achieving high comfort and low electricity cost at the same time. Simulation results show that the proposed load controller can improve energy-usage performance in smart grids.
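A minimal sketch of the genetic tuning idea: a population of (alpha, epsilon) parameter pairs for a tabular Q-learning load controller is evolved by truncation selection and Gaussian mutation, with episode reward as fitness. The toy prices, comfort value, and GA settings are assumptions for illustration only.

```python
import random

random.seed(1)

# Toy domestic-load setting (illustrative values, not from the paper):
# state = price level (0 low, 1 high); action = 0 defer / 1 run.
PRICES = [0.2, 1.0]
COMFORT = 0.6

def fitness(genome, steps=150):
    """Total reward of a Q-learning controller whose (alpha, epsilon)
    parameters are given by the genome."""
    alpha, epsilon = genome
    q = [[0.0, 0.0], [0.0, 0.0]]
    total = 0.0
    for _ in range(steps):
        s = random.randint(0, 1)
        if random.random() < epsilon:
            a = random.randint(0, 1)
        else:
            a = max((0, 1), key=lambda x: q[s][x])
        r = COMFORT * a - PRICES[s] * a
        q[s][a] += alpha * (r - q[s][a])
        total += r
    return total

def evolve(pop_size=10, generations=15):
    """Evolve the controller's parameters: keep the fitter half of the
    population each generation and refill it with mutated copies."""
    pop = [[random.uniform(0.01, 0.99), random.uniform(0.01, 0.99)]
           for _ in range(pop_size)]
    for _ in range(generations):
        elite = sorted(pop, key=fitness, reverse=True)[: pop_size // 2]
        children = [[min(0.99, max(0.01, g + random.gauss(0, 0.05)))
                     for g in parent]               # Gaussian mutation
                    for parent in elite]
        pop = elite + children
    return max(pop, key=fitness)
```

The structure mirrors the abstract: the genetic algorithm searches the controller's parameter space, while the RL inner loop evaluates each candidate purely by interaction, without a model of the load.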


2020 ◽  
Vol 71 (6) ◽  
pp. 368-378
Author(s):  
Selahattin Kosunalp ◽  
Kubilay Demir

The IoT environment includes an enormous number of atomic services with dynamic QoS compared with traditional web services. In such an environment, discovering a service that meets the required QoS during service composition is a difficult task. To address this issue, we propose a peer-to-peer service discovery model that looks for information about services meeting the requested QoS and functionality on an overlay constructed from service users and service nodes, which may have constrained resources. However, employing a plain discovery algorithm such as flooding or k-random walk on the overlay network could cause high message overhead or delay. This necessitates an intelligent, adaptive discovery algorithm that adapts itself based on users' previous queries and their results. To fill this gap, the proposed service discovery approach is equipped with a reinforcement learning-based algorithm, named SARL. The reinforcement learning algorithm enables SARL to significantly reduce delay and message overhead in the service discovery process by ranking neighboring nodes based on users' service request preferences and past query results. The proposed model is implemented on the OMNeT++ simulation platform. The simulation results demonstrate that SARL remarkably outperforms existing approaches in terms of message overhead, reliability, timeliness, and energy-usage efficiency.
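The neighbour-ranking idea at the heart of this approach can be sketched as follows: a node keeps a running estimate of each neighbour's hit rate for its queries and forwards greedily with occasional exploration. The `Node` class, service names, and update rule are assumptions for illustration; SARL's actual state and reward design is richer.

```python
import random

random.seed(2)

# Minimal sketch of RL-based neighbour ranking for query forwarding
# (all structures here are illustrative, not from the paper).
class Node:
    def __init__(self, name, services):
        self.name = name
        self.services = set(services)
        self.q = {}          # neighbour name -> estimated query hit rate

    def choose_neighbor(self, neighbors, epsilon=0.1):
        """Epsilon-greedy forwarding: usually pick the best-ranked
        neighbour, occasionally explore a random one."""
        if random.random() < epsilon or not self.q:
            return random.choice(neighbors)
        return max(neighbors, key=lambda n: self.q.get(n.name, 0.0))

    def update(self, neighbor, hit, alpha=0.3):
        """Move the neighbour's rank toward the observed query outcome."""
        old = self.q.get(neighbor.name, 0.0)
        self.q[neighbor.name] = old + alpha * (hit - old)

def simulate(queries=500):
    a = Node("A", [])
    b = Node("B", {"temp"})          # B actually hosts the queried service
    c = Node("C", [])
    neighbors = [b, c]
    hits = 0
    for _ in range(queries):
        n = a.choose_neighbor(neighbors)
        hit = 1.0 if "temp" in n.services else 0.0
        a.update(n, hit)
        hits += hit
    return a.q, hits
```

After a few queries the ranking concentrates traffic on the neighbour that actually resolves requests, which is how this kind of learned forwarding cuts message overhead relative to flooding or random walks.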


Author(s):  
Yaodong Yang ◽  
Jianye Hao ◽  
Yan Zheng ◽  
Chao Yu

Smart grids contribute to demand-side management by integrating electronic equipment, distributed energy generation and storage, and advanced meters and controllers. With the increasing adoption of electric vehicles and distributed energy generation and storage systems, residential energy management is drawing more and more attention and is regarded as critical to demand-supply balancing and peak load reduction. In this paper, we focus on a microgrid scenario in which modern homes interact in a large-scale setting to better optimize their electricity cost. We first group households together via an economic stimulus. We then formulate the energy expense optimization problem of the household community as a multi-agent coordination problem and present an Entropy-Based Collective Multiagent Deep Reinforcement Learning (EB-C-MADRL) framework to address it. Experiments with various real-world data demonstrate that EB-C-MADRL effectively reduces both long-term group power consumption cost and daily peak demand compared with existing approaches.
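The multi-agent formulation can be illustrated with a much simpler stand-in: independent tabular Q-learners choosing time slots for a deferrable household load, with a shared peak penalty coupling their rewards. The slot count, tariff, and penalty below are invented for illustration; the paper's EB-C-MADRL uses deep networks and an entropy-based collective objective rather than this toy scheme.

```python
import random

random.seed(3)

# Illustrative numbers only (not from the paper).
SLOTS = 4                      # candidate time slots for a deferrable load
PRICE = [0.5, 0.2, 0.2, 0.5]   # per-slot tariff
PEAK_PENALTY = 0.1             # extra cost per other household in same slot

def step(choices):
    """Per-agent reward: negative tariff minus a congestion penalty that
    grows with how many households picked the same slot (peak pressure)."""
    counts = [choices.count(s) for s in range(SLOTS)]
    return [-(PRICE[c] + PEAK_PENALTY * (counts[c] - 1)) for c in choices]

def train(agents=4, episodes=2000, alpha=0.1, epsilon=0.1):
    """Independent epsilon-greedy Q-learners; returns each agent's
    final greedy slot choice."""
    q = [[0.0] * SLOTS for _ in range(agents)]
    for _ in range(episodes):
        choices = [random.randrange(SLOTS) if random.random() < epsilon
                   else max(range(SLOTS), key=lambda s: q[i][s])
                   for i in range(agents)]
        for i, (c, r) in enumerate(zip(choices, step(choices))):
            q[i][c] += alpha * (r - q[i][c])
    return [max(range(SLOTS), key=lambda s: q[i][s]) for i in range(agents)]
```

With the shared peak term, the learners tend to spread across the cheap slots rather than all crowding into one; that coupling between individual cost and collective peak is the qualitative effect the community-level formulation captures.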


Author(s):  
Hongbo Zou ◽  
Juan Tao ◽  
Salah K. Elsayed ◽  
Ehab E. Elattar ◽  
Abdulaziz Almalaq ◽  
...  
