Adaptive PID controller based on Q‐learning algorithm

2018
Vol 3 (4)
pp. 235-244
Author(s):  
Qian Shi ◽  
Hak‐Keung Lam ◽  
Bo Xiao ◽  
Shun‐Hung Tsai

Author(s):  
Reza Rouhi Ardeshiri ◽  
Nabi Nabiyev ◽  
Shahab S. Band ◽  
Amir Mosavi

Reinforcement learning (RL) is a widely applied method for designing intelligent control systems that achieve high accuracy and strong performance. In the present article, the PID controller is considered as the main control strategy for brushless DC (BLDC) motor speed control. For better performance, the fuzzy Q-learning (FQL) method, a reinforcement learning approach, is proposed to adjust the PID coefficients. A comparison with the adaptive PID (APID) controller is also performed to demonstrate the superiority of the proposed method; the findings show that the proposed method reduces the tracking error and eliminates overshoot in controlling the motor speed. MATLAB/Simulink has been used for modeling, simulation, and control design of the BLDC motor.
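
The core idea can be illustrated with a short Python sketch: a plain tabular Q-learning agent (used here in place of the paper's fuzzy Q-learning) nudges the P and I gains of the controller while regulating a first-order plant that stands in for the BLDC motor model. The discretization, action set, reward, and plant below are illustrative assumptions, not the authors' design; the paper itself works in MATLAB/Simulink.

```python
# Minimal sketch: tabular Q-learning adjusts PI gains online.
# The first-order plant and all parameters are illustrative stand-ins.
import numpy as np

rng = np.random.default_rng(0)

DT, SETPOINT = 0.01, 100.0          # step size [s], target speed (arbitrary units)
ACTIONS = [(dkp, dki) for dkp in (-0.01, 0, 0.01) for dki in (-0.1, 0, 0.1)]
N_STATES = 10                       # coarse bins over the tracking error

def error_bin(e):
    """Discretize the tracking error into one of N_STATES bins."""
    return int(np.clip((e + SETPOINT) / (2 * SETPOINT) * N_STATES, 0, N_STATES - 1))

Q = np.zeros((N_STATES, len(ACTIONS)))
alpha, gamma, eps = 0.1, 0.9, 0.2   # learning rate, discount, exploration rate

def run_episode(kp, ki):
    """Simulate the closed loop once; return accumulated |error| and final gains."""
    y, integ, cost = 0.0, 0.0, 0.0
    for _ in range(500):
        e = SETPOINT - y
        s = error_bin(e)
        a = rng.integers(len(ACTIONS)) if rng.random() < eps else int(np.argmax(Q[s]))
        dkp, dki = ACTIONS[a]
        kp, ki = max(kp + dkp, 0.0), max(ki + dki, 0.0)  # keep gains non-negative
        integ += e * DT
        u = kp * e + ki * integ                          # P and I terms only, for brevity
        y += DT * (-y + u)                               # first-order plant, tau = 1
        e2 = SETPOINT - y
        s2 = error_bin(e2)
        # reward: small tracking error is good
        Q[s, a] += alpha * (-abs(e2) + gamma * Q[s2].max() - Q[s, a])
        cost += abs(e2)
    return cost, kp, ki

kp, ki = 0.5, 1.0
for ep in range(50):
    cost, kp, ki = run_episode(kp, ki)
print(f"tuned gains: kp={kp:.3f}, ki={ki:.3f}, last-episode cost={cost:.1f}")
```

The fuzzy variant in the paper replaces the hard state bins with fuzzy membership grades, so each update spreads across neighboring states; the learning rule itself is the same temporal-difference update shown above.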


2021
Author(s):  
Seyed Ali Hosseini ◽  
Karim Salahshoor

Systems are continually subjected to faults or malfunctions, whether from aging or sudden events, which can degrade performance and even cause outright failure, a critical concern in safety-critical systems. This motivates fault-tolerant strategies that preserve system performance in the presence of faults. A desirable property of Fault-Tolerant Controllers (FTCs) is adaptability to system changes as they evolve throughout system operation. In this paper, a Q-learning algorithm with a greedy policy is used to realize this adaptability. Several fault scenarios are then introduced in a Continuous Stirred Tank Heater (CSTH) to compare the closed-loop performance of the developed Q-learning-based FTC against a conventional PID controller and an RL-based FTC. The obtained results show the effectiveness of the Q-learning-based FTC across the different fault scenarios.
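
The mechanism that gives such a controller its fault tolerance can be sketched in a few lines of Python: a tabular Q-learning agent with an epsilon-greedy policy regulates a toy first-order tank (a stand-in for the CSTH), and an actuator-gain fault injected mid-run forces the same update rule to re-learn the control action. The dynamics, fault model, and all parameters below are illustrative assumptions, not the paper's CSTH model.

```python
# Minimal sketch: Q-learning re-adapts after an actuator fault.
# Toy first-order tank dynamics stand in for the CSTH.
import numpy as np

rng = np.random.default_rng(1)
LEVELS = np.linspace(0, 10, 21)        # discretized tank-level states
ACTIONS = np.linspace(0, 5, 11)        # discretized valve commands
Q = np.zeros((len(LEVELS), len(ACTIONS)))
alpha, gamma, eps, setpoint = 0.2, 0.9, 0.1, 5.0

def nearest(x, grid):
    """Index of the grid point closest to x."""
    return int(np.argmin(np.abs(grid - x)))

level = 0.0
for t in range(20000):
    gain = 1.0 if t < 10000 else 0.5   # actuator fault halves the gain at t = 10000
    s = nearest(level, LEVELS)
    # mostly greedy, occasional exploration: this is what enables re-adaptation
    a = rng.integers(len(ACTIONS)) if rng.random() < eps else int(np.argmax(Q[s]))
    u = ACTIONS[a]
    level = max(level + 0.1 * (gain * u - 0.5 * level), 0.0)  # tank dynamics
    s2 = nearest(level, LEVELS)
    # reward: distance from the level setpoint
    Q[s, a] += alpha * (-abs(setpoint - level) + gamma * Q[s2].max() - Q[s, a])
    if t in (9999, 19999):
        print(f"t={t+1}: level={level:.2f} (target {setpoint})")
```

Because the value estimates are updated continuously from observed rewards, the post-fault plant simply looks like a new environment to the agent, and the policy drifts toward the larger valve command needed to hold the setpoint; no explicit fault detection is required in this simplified view.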


2009
Vol 28 (12)
pp. 3268-3270
Author(s):  
Chao WANG ◽  
Jing GUO ◽  
Zhen-qiang BAO

Aerospace
2021
Vol 8 (4)
pp. 113
Author(s):  
Pedro Andrade ◽  
Catarina Silva ◽  
Bernardete Ribeiro ◽  
Bruno F. Santos

This paper presents a Reinforcement Learning (RL) approach to optimize the long-term scheduling of maintenance for an aircraft fleet. The problem considers fleet status, maintenance capacity, and other maintenance constraints to schedule hangar checks for a specified time horizon. The checks are scheduled within an interval, and the goal is to schedule them as close as possible to their due dates. In doing so, the number of checks is reduced and fleet availability increases. A Deep Q-learning algorithm is used to optimize the scheduling policy. The model is validated in a real scenario using maintenance data from 45 aircraft. The maintenance plan generated with our approach is compared with a previous study, which presented a Dynamic Programming (DP) based approach, and with airline estimations for the same period. The results show a reduction in the number of checks scheduled, which indicates the potential of RL in solving this problem. The adaptability of RL is also tested by introducing small disturbances in the initial conditions. After training the model with these simulated scenarios, the results show the robustness of the RL approach and its ability to generate efficient maintenance plans in only a few seconds.
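
A minimal sketch of the Deep Q-learning mechanism follows, assuming PyTorch and a toy single-aircraft environment: the state is the number of days until a check's due date plus whether a hangar slot is free, and the two actions are "schedule the check now" or "wait one day". The network size, reward shaping, and transition model are illustrative assumptions, not the paper's fleet-level formulation.

```python
# Minimal sketch: a tiny DQN learns to schedule a check near its due date.
# Single-aircraft toy environment; the paper's problem is fleet-wide.
import random
import torch
import torch.nn as nn

torch.manual_seed(0); random.seed(0)

class QNet(nn.Module):
    """Maps (days_left, slot_free) to Q-values for {wait, schedule}."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 2))
    def forward(self, x):
        return self.net(x)

def step(days_left, slot_free, action):
    """Return (next_state, reward, done). Scheduling early wastes usable
    interval; missing the due date incurs a large penalty."""
    if action == 1 and slot_free:              # schedule the check
        return None, -float(days_left), True
    if days_left == 0:                         # overdue
        return None, -50.0, True
    return (days_left - 1, random.random() < 0.7), -0.1, False

qnet = QNet()
opt = torch.optim.Adam(qnet.parameters(), lr=1e-3)
gamma, eps = 0.99, 0.3

for episode in range(2000):
    state = (random.randint(5, 30), random.random() < 0.7)
    done = False
    while not done:
        s = torch.tensor([float(state[0]), float(state[1])])
        a = random.randint(0, 1) if random.random() < eps else int(qnet(s).argmax())
        nxt, r, done = step(state[0], state[1], a)
        with torch.no_grad():                  # one-step TD target
            target = r if done else r + gamma * qnet(
                torch.tensor([float(nxt[0]), float(nxt[1])])).max().item()
        loss = (qnet(s)[a] - target) ** 2
        opt.zero_grad(); loss.backward(); opt.step()
        state = nxt
```

A production-scale version would batch updates through a replay buffer and a target network, and the state would encode the whole fleet and hangar capacity, but the temporal-difference target shown above is the same learning signal.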

