Adaptive Optimal Control of a Grid-Independent Photovoltaic System

2003 ◽  
Vol 125 (1) ◽  
pp. 34-42 ◽  
Author(s):  
Gregor P. Henze ◽  
Robert H. Dodier

This paper investigates adaptive optimal control of a grid-independent photovoltaic system consisting of a collector, storage, and a load. The control algorithm is based on Q-Learning, a model-free reinforcement learning algorithm, which optimizes control performance through exploration. Q-Learning is used in a simulation study to find a policy that performs better than a conventional control strategy with respect to a cost function that places more weight on meeting a critical base load than on the non-critical loads exceeding the base load.
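As a rough sketch of the tabular Q-Learning machinery the abstract refers to (the state/action names, hyperparameter values, and function names below are illustrative assumptions, not taken from the paper):

```python
import random

# Illustrative sketch only: a tabular Q-Learning update plus an epsilon-greedy
# exploration rule. State/action labels and hyperparameters are hypothetical.
def q_learning_step(Q, state, action, reward, next_state, actions,
                    alpha=0.1, gamma=0.95):
    """Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
    best_next = max(Q.get((next_state, a), 0.0) for a in actions)
    old = Q.get((state, action), 0.0)
    Q[(state, action)] = old + alpha * (reward + gamma * best_next - old)
    return Q[(state, action)]

def epsilon_greedy(Q, state, actions, epsilon=0.1):
    """Explore with probability epsilon, otherwise act greedily on Q."""
    if random.random() < epsilon:
        return random.choice(actions)
    return max(actions, key=lambda a: Q.get((state, a), 0.0))
```

The weighted cost function described in the abstract would enter through the reward signal, e.g. by penalizing unmet critical base load more heavily than curtailed non-critical load.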


2019 ◽  
Vol 2 (2) ◽  
pp. 15 ◽  
Author(s):  
Bettoni ◽  
Soppelsa ◽  
Fedrizzi ◽  
del Toro Matamoros

This paper discusses the development of a coupled Q-learning/fuzzy control algorithm to be applied to the control of solar domestic hot water systems. The controller matches the performance of the best reference controllers without requiring time-consuming modelling and simulation to tune its parameters before deployment. The performance of the proposed control algorithm was analysed in detail with respect to the input membership functions defining the fuzzy controller. The algorithm was compared to four standard reference control cases using three performance figures: the seasonal performance factor of the solar collectors, the seasonal performance factor of the system, and the number of on/off cycles of the primary circulator. The work shows that the reinforcement learning controller can find the best-performing fuzzy controller within a family of controllers. It also shows how to speed up the learning process by loading the controller with partial pre-existing information. The new controller performed significantly better than the best reference case with regard to the collectors' performance factor (between 15% and 115% improvement) and, at the same time, to the number of on/off cycles of the primary circulator (1.2 per day, down from 30 per day). Regarding the domestic hot water performance factor, the new controller performed about 11% worse than the best reference controller but greatly improved its on/off cycle figure (425 cycles, down from 11,046). The decrease in performance was due to the choice of reward function, which was not selected for that purpose and was blind to some of the factors influencing the system performance factor.
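One way to picture the coupling described above is Q-learning used to rank a discrete family of candidate fuzzy-controller parameterizations; a minimal, bandit-style sketch follows. The controller labels, learning rate, and the idea of seeding the table to encode "partial pre-existing information" are illustrative assumptions, not the paper's actual formulation:

```python
# Hypothetical sketch: value estimates over a family of fuzzy-controller
# variants, updated from an observed performance reward (e.g. a seasonal
# performance factor). Names and values are illustrative.
def update_controller_value(Q, controller, reward, alpha=0.2):
    """Stateless value update: Q(c) <- Q(c) + alpha * (reward - Q(c))."""
    Q[controller] = Q.get(controller, 0.0) + alpha * (reward - Q.get(controller, 0.0))
    return Q[controller]

# Pre-loading partial prior knowledge simply means seeding the table with
# non-zero initial estimates instead of starting from scratch:
Q = {"narrow_mf": 0.5, "wide_mf": 0.5}   # hypothetical membership-function variants
update_controller_value(Q, "narrow_mf", reward=1.0)
```

The abstract's observed shortfall in the hot-water performance factor corresponds, in this picture, to a reward signal that omits some of the factors the system performance factor depends on.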


2020 ◽  
Vol 8 (6) ◽  
pp. 4333-4338 ◽  
Author(s):  
Ki Uhn Ahn ◽  
Jae Min Kim ◽  
Youngsub Kim ◽  
Cheol Soo Park ◽  
Kwang Woo Kim

This paper presents a thorough comparative analysis of reinforcement learning algorithms used by autonomous mobile robots for optimal path finding, and proposes a new algorithm, Iterative SARSA, for the same task. The main objective of the paper is to differentiate between Q-learning and SARSA, and to modify the latter. These algorithms use either the on-policy or the off-policy method of reinforcement learning: for the on-policy method, the SARSA algorithm is used, and for the off-policy method, the Q-learning algorithm. Both algorithms also directly affect the robot's ability to find the shortest possible path. Based on the results obtained, we conclude that our algorithm outperforms the current standard reinforcement learning algorithms.
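The on-policy/off-policy distinction the abstract draws comes down to which next-state value each update bootstraps from; a minimal sketch of the two standard update rules (state/action labels and hyperparameters here are illustrative, and this is not the paper's Iterative SARSA variant):

```python
# Off-policy Q-learning: bootstrap from the greedy (max-valued) action in s2,
# regardless of which action the behaviour policy will actually take.
def q_learning_update(Q, s, a, r, s2, actions, alpha=0.1, gamma=0.9):
    target = r + gamma * max(Q.get((s2, a2), 0.0) for a2 in actions)
    Q[(s, a)] = Q.get((s, a), 0.0) + alpha * (target - Q.get((s, a), 0.0))

# On-policy SARSA: bootstrap from a2, the action the policy actually took in s2
# (hence the name: State, Action, Reward, State, Action).
def sarsa_update(Q, s, a, r, s2, a2, alpha=0.1, gamma=0.9):
    target = r + gamma * Q.get((s2, a2), 0.0)
    Q[(s, a)] = Q.get((s, a), 0.0) + alpha * (target - Q.get((s, a), 0.0))
```

With an exploratory behaviour policy, the two rules generally learn different value functions, which is why they can yield different paths for the same grid.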

