Modelling a sequence of decisions in deep Reinforcement Learning for complex environments, allowing devices to communicate in modern (smart) grids

2020 ◽  
Vol 14 (6)

2013 ◽  
Vol 860-863 ◽  
pp. 2423-2426
Author(s):  
Xin Li ◽  
Dan Yu ◽  
Chuan Zhi Zang

With the development of smart grids, customer participation has reinvigorated interest in demand-side features such as load control for domestic users. A genetic-algorithm-based reinforcement learning (RL) load controller is proposed, in which the genetic algorithm is used to adjust the parameters of the controller. The RL algorithm, which is independent of any mathematical model of the system, is particularly well suited to load control. Through its learning procedure, the proposed controller learns to take the best actions to regulate the energy usage of equipment, achieving both high comfort and low electricity cost. Simulation results show that the proposed load controller can improve energy-usage performance in smart grids.
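The abstract does not give the controller's internals, but the outer loop it describes (a genetic algorithm searching over controller parameters, with fitness measured by simulation) can be sketched as follows. The parameter names (`alpha`, `epsilon`) and the fitness function are placeholder assumptions standing in for the paper's load-control simulation of comfort versus electricity cost.

```python
import random

def fitness(alpha, epsilon):
    # Placeholder objective (assumed, not from the paper): in the real
    # system this would run the load-control simulation and score
    # comfort minus electricity cost. Here it peaks at (0.1, 0.05).
    return -((alpha - 0.1) ** 2 + (epsilon - 0.05) ** 2)

def evolve(pop_size=20, generations=50, seed=0):
    rng = random.Random(seed)
    # Each individual is a candidate (alpha, epsilon) parameter pair.
    pop = [(rng.uniform(0, 1), rng.uniform(0, 1)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda ind: fitness(*ind), reverse=True)
        parents = pop[: pop_size // 2]          # selection: keep best half
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            # Crossover: average the two parents' parameters.
            child = ((a[0] + b[0]) / 2, (a[1] + b[1]) / 2)
            if rng.random() < 0.2:              # mutation with 20% chance
                child = (child[0] + rng.gauss(0, 0.05),
                         child[1] + rng.gauss(0, 0.05))
            children.append(child)
        pop = parents + children
    return max(pop, key=lambda ind: fitness(*ind))

best = evolve()
```

The RL controller itself would then be trained and evaluated with the tuned parameters; only the tuning loop is shown here.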


2013 ◽  
Vol 805-806 ◽  
pp. 1206-1209 ◽  
Author(s):  
Xin Li ◽  
Chuan Zhi Zang ◽  
Xiao Ning Qin ◽  
Yang Zhang ◽  
Dan Yu

For energy management problems in smart grids, a hybrid intelligent hierarchical controller based on simulated annealing (SA) and reinforcement learning (RL) is proposed. SA is used to adjust the parameters of the controller. The RL algorithm is particularly advantageous because it is independent of any mathematical model and needs only simple fuzzy information obtained through trial and error and interaction with the environment. Through its learning procedure, the proposed controller learns to take the best actions to regulate the energy usage of equipment, achieving both high comfort and low electricity cost. Simulation results show that the proposed controller can improve energy-usage performance in smart grids.
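The SA parameter-tuning step described above can be sketched generically. The objective function and the single tuned parameter below are assumptions for illustration; in the paper's setting the objective would be the simulated energy-usage score of the hierarchical controller.

```python
import math
import random

def anneal(objective, x0, steps=2000, t0=1.0, seed=0):
    """Maximize `objective` over a scalar parameter by simulated annealing."""
    rng = random.Random(seed)
    x, fx = x0, objective(x0)
    best, fbest = x, fx
    for k in range(steps):
        t = t0 * (1 - k / steps) + 1e-9     # linear cooling schedule
        cand = x + rng.gauss(0, 0.1)        # perturb the current parameter
        fc = objective(cand)
        # Always accept improvements; accept worse moves with
        # Boltzmann probability exp((fc - fx) / t), so exploration
        # shrinks as the temperature cools.
        if fc > fx or rng.random() < math.exp((fc - fx) / t):
            x, fx = cand, fc
            if fx > fbest:
                best, fbest = x, fx
    return best

# Toy stand-in objective peaking at 0.3 (assumed, not from the paper).
tuned = anneal(lambda x: -(x - 0.3) ** 2, x0=0.0)
```

The acceptance rule is the standard Metropolis criterion; at high temperature the search behaves almost randomly, and at low temperature it becomes greedy hill-climbing on the controller's score.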


2020 ◽  
Author(s):  
Ao Chen ◽  
Taresh Dewan ◽  
Manva Trivedi ◽  
Danning Jiang ◽  
Aloukik Aditya ◽  
...  

This paper provides a comparative analysis of the Deep Q Network (DQN) and Double Deep Q Network (DDQN) algorithms based on their hit rate, with DDQN proving to be the better of the two for the Breakout game. DQN is chosen over basic Q-learning because it learns a policy with a neural network, which suits complex environments; DDQN is chosen because it mitigates the overestimation problem (the agent choosing a non-optimal action for a state simply because that action has the maximum estimated Q-value) that occurs in basic Q-learning.
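The difference between the two algorithms is entirely in how the bootstrap target is formed. A minimal sketch, using plain lists of next-state Q-values in place of real network outputs (the values below are illustrative, not from the paper):

```python
def dqn_target(reward, q_target_next, gamma=0.99):
    # DQN: the target network both selects and evaluates the next
    # action via max(), which biases the target upward (overestimation).
    return reward + gamma * max(q_target_next)

def ddqn_target(reward, q_online_next, q_target_next, gamma=0.99):
    # Double DQN: the online network selects the action, the target
    # network evaluates it, decoupling selection from evaluation.
    a = max(range(len(q_online_next)), key=lambda i: q_online_next[i])
    return reward + gamma * q_target_next[a]

# Example: the target net overestimates action 0, while the online
# net prefers action 1.
q_online = [1.0, 2.0]
q_target = [3.0, 0.5]
y_dqn = dqn_target(1.0, q_target)             # 1 + 0.99 * 3.0  = 3.97
y_ddqn = ddqn_target(1.0, q_online, q_target)  # 1 + 0.99 * 0.5 = 1.495
```

Here DQN's target inherits the target network's inflated estimate for action 0, while DDQN evaluates the action the online network would actually pick, giving a lower and typically less biased target.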


