Design and Simulation of Adaptive PID Controller Based on Fuzzy Q-Learning Algorithm for a BLDC Motor

Author(s):  
Reza Rouhi Ardeshiri ◽  
Nabi Nabiyev ◽  
Shahab S. Band ◽  
Amir Mosavi

Reinforcement learning (RL) is widely applied in the design of intelligent control systems to achieve high accuracy and improved performance. In the present article, the PID controller is considered as the main control strategy for brushless DC (BLDC) motor speed control. To improve performance, the fuzzy Q-learning (FQL) method, a reinforcement learning approach, is proposed to adjust the PID coefficients. A comparison with an adaptive PID (APID) controller is also performed to assess the proposed method, and the findings show that it reduces the tracking error and eliminates overshoot in motor speed control. MATLAB/SIMULINK has been used for modeling, simulation, and control design of the BLDC motor.
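
The abstract does not spell out how the FQL agent interacts with the PID loop; as a rough illustration of the general idea of learning PID gain adjustments by reinforcement, the sketch below uses plain tabular Q-learning, with an assumed error discretization, action set, and reward that are not taken from the paper.

```python
import numpy as np

# Hypothetical sketch: tabular Q-learning that nudges PID gains online.
# States are discretized speed-error bins; actions are small increments to
# (Kp, Ki, Kd). Bin count, action set, and reward are assumptions.
ACTIONS = [(dkp, dki, dkd) for dkp in (-0.1, 0.0, 0.1)
                           for dki in (-0.01, 0.0, 0.01)
                           for dkd in (-0.001, 0.0, 0.001)]
N_STATES = 21
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1
Q = np.zeros((N_STATES, len(ACTIONS)))
rng = np.random.default_rng(0)

def discretize(error, max_err=100.0):
    """Map a speed error (e.g. in rpm) to one of N_STATES bins."""
    e = float(np.clip(error / max_err, -1.0, 1.0))
    return int(round((e + 1.0) / 2.0 * (N_STATES - 1)))

def choose_action(state):
    """Epsilon-greedy choice of a gain-increment action."""
    if rng.random() < EPS:
        return int(rng.integers(len(ACTIONS)))
    return int(Q[state].argmax())

def apply_action(gains, action):
    """Return new (Kp, Ki, Kd) after applying the chosen increments."""
    return tuple(max(g + d, 0.0) for g, d in zip(gains, ACTIONS[action]))

def learn(state, action, new_error):
    """Q update after running the plant one step with the adjusted gains."""
    reward = -new_error ** 2            # assumed reward: penalize squared error
    nxt = discretize(new_error)
    Q[state, action] += ALPHA * (reward + GAMMA * Q[nxt].max() - Q[state, action])
    return nxt
```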

Author(s):  
Mohd Syakir Adli ◽  
Noor Hazrin Hany Mohamad Hanif ◽  
Siti Fauziah Toha Tohara

This paper presents a speed control scheme for a brushless DC (BLDC) motor to be used in an electric motorbike. While conventional motorbikes require an engine and fuel, electric motorbikes are powered by a DC motor and a battery pack. The limitation of the battery pack is that it must be recharged after a certain period and distance. As recharging is time consuming, a PID controller is designed to maintain the motor speed at its optimum state, thus ensuring a longer-lasting battery charge (until the next recharge). The controller is designed to track variations in the speed reference and stabilize the output speed accordingly. Simulation results in MATLAB/SIMULINK® show that the motor, equipped with the PID controller, was able to track the reference speed in 7.8×10⁻² milliseconds with no overshoot. The results suggest that the proposed controller can be used to maintain the motor at its optimum speed.
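
For reference, a discrete PID speed loop of the kind described could be sketched as follows; the gains and the first-order motor approximation are illustrative placeholders rather than values from the paper.

```python
# Minimal sketch of a discrete PID speed loop; gains and the first-order
# motor approximation are illustrative placeholders, not the paper's values.
class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_err = 0.0

    def update(self, setpoint, measured):
        err = setpoint - measured
        self.integral += err * self.dt
        deriv = (err - self.prev_err) / self.dt
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv

dt = 1e-4
pid = PID(kp=2.0, ki=5.0, kd=0.001, dt=dt)      # assumed gains
speed, tau, gain = 0.0, 0.05, 10.0              # crude first-order motor model
for _ in range(5000):
    u = pid.update(setpoint=3000.0, measured=speed)   # 3000 rpm reference
    speed += dt / tau * (gain * u - speed)            # first-order speed response
```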


In this project, a mathematical model of the brushless DC (BLDC) motor is developed and a closed-loop fuzzy PID controller is simulated in the MATLAB/Simulink environment. The three-phase BLDC motor is supplied with DC power through a six-step inverter whose switching states are controlled by the Hall signals. The Hall-effect sensors sense the rotor position of the motor and generate a binary code, which is decoded and fed to the six-step inverter. The mathematical model is developed using the back-EMF and torque equations of the BLDC motor. Because the PI controller does not perform well during dynamic states, the fuzzy PID controller is the better option to control and regulate the speed of the BLDC motor, offering higher performance than the PI controller and smoother speed-torque characteristics.
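
The Hall-signal decoding described above amounts to a lookup from a 3-bit sensor code to one of six inverter switching states. The sketch below shows a commonly used 120-degree commutation pattern as an illustration; the exact code-to-step mapping depends on the sensor placement of the actual machine.

```python
# Illustrative 120-degree commutation table: 3-bit Hall code -> energized phases.
# Each entry lists (high-side phase, low-side phase) to switch on. This is a
# commonly used pattern, not necessarily the mapping used in the project.
HALL_TO_STEP = {
    0b101: ("A", "B"),
    0b100: ("A", "C"),
    0b110: ("B", "C"),
    0b010: ("B", "A"),
    0b011: ("C", "A"),
    0b001: ("C", "B"),
}

def commutate(hall_a: int, hall_b: int, hall_c: int):
    """Return which inverter legs to switch for the current rotor position."""
    code = (hall_a << 2) | (hall_b << 1) | hall_c
    return HALL_TO_STEP.get(code)   # None for the invalid codes 0b000 / 0b111
```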


Author(s):  
Xiaoyuan Wang ◽  
Tao Fu ◽  
Xiaoguang Wang

Brushless DC (BLDC) motors are widely used in many industrial applications because of their high efficiency, high torque, and small volume. Because the conventional speed-regulation scheme for BLDC motors performs poorly with fixed PID parameters, an adaptive PID algorithm based on a quadratic single neuron (QSN) was designed. A quadratic performance index was introduced into the adjustment of the weight coefficients, and the desired optimization effect was obtained by computing the control law. The QSN adaptive PID controller can change its parameters online when operating conditions change and can adjust its control characteristics automatically. MATLAB simulations and experimental results showed that the proposed approach has less overshoot, faster response, and stronger disturbance rejection, and that it is more effective and efficient than the conventional PID controller in motor speed control.
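
The abstract does not reproduce the QSN update law; as a hedged illustration of the single-neuron adaptive PID idea, the sketch below uses the standard incremental-PID neuron inputs with weights adapted by a gradient-style rule driven by a quadratic (squared-error) index. Gains and learning rates are assumptions.

```python
import numpy as np

# Hedged sketch of a single-neuron adaptive PID: the three neuron inputs are
# the classic incremental-PID terms, and the weights (acting as Kp, Ki, Kd)
# are adapted online to reduce a quadratic error index. The exact QSN update
# rule of the paper is not reproduced here.
class SingleNeuronPID:
    def __init__(self, k=0.2, etas=(0.4, 0.3, 0.2)):
        self.k = k                      # overall neuron gain (assumed)
        self.etas = np.array(etas)      # per-weight learning rates (assumed)
        self.w = np.array([0.3, 0.3, 0.3])
        self.e = [0.0, 0.0, 0.0]        # e(k), e(k-1), e(k-2)
        self.u = 0.0

    def update(self, error):
        self.e = [error, self.e[0], self.e[1]]
        x = np.array([
            self.e[0] - self.e[1],                  # proportional term
            self.e[0],                              # integral term
            self.e[0] - 2 * self.e[1] + self.e[2],  # derivative term
        ])
        wn = self.w / np.abs(self.w).sum()          # normalized weights
        self.u += self.k * wn @ x                   # incremental control law
        # Gradient-style weight adaptation driven by the quadratic index e^2.
        self.w += self.etas * error * self.u * x
        return self.u
```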


Author(s):  
Run Ma

With the advancement of internet technologies, requirements for the quality of indoor wireless communication have increased. The femtocell, an effective approach to improving indoor communication quality, can provide highly efficient indoor network services for users. This study puts forward a power resource control method based on the Q-learning algorithm to improve solutions to the frequency-spectrum and power-resource allocation problems of a two-tier femtocell network. The algorithm was further improved and compared with the traditional algorithm in a simulation experiment. It was found that the improved Q-learning algorithm could enhance the message capacity and control power resources; this provides a reference for the application of the Q-learning algorithm in femtocell communication.
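
As a hedged illustration of Q-learning applied to transmit-power selection, the sketch below treats discrete power levels as actions and a simple capacity-minus-interference trade-off as the reward; the state definition, reward, and parameters are assumptions, not the model used in the study.

```python
import numpy as np

# Hedged illustration only: tabular Q-learning over discrete femtocell
# transmit-power levels. State = previously selected level (an assumption).
POWER_LEVELS = np.linspace(0.0, 20.0, 11)      # dBm grid, assumed
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1
Q = np.zeros((len(POWER_LEVELS), len(POWER_LEVELS)))
rng = np.random.default_rng(0)

def select_power(state):
    """Epsilon-greedy choice of the next transmit-power level."""
    if rng.random() < EPS:
        return int(rng.integers(len(POWER_LEVELS)))
    return int(Q[state].argmax())

def learn(state, action, capacity, interference, weight=0.5):
    """Q update with an assumed capacity-minus-interference reward."""
    reward = capacity - weight * interference
    next_state = action                           # next state = chosen level
    Q[state, action] += ALPHA * (reward + GAMMA * Q[next_state].max()
                                 - Q[state, action])
    return next_state
```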


Author(s):  
Meena Devi R. ◽  
L. Premalatha

A novel speed controller for a three-phase brushless DC (BLDC) motor drive is proposed using a closed-loop AC-DC bridgeless SEPIC converter in continuous conduction mode. The design proposes a single-stage AC-DC converter, with ON- and OFF-state equivalent circuits, for a 400 W, 48 V, 2450 rpm PMBLDC motor drive. A fuzzy-based voltage and current control method is proposed: the voltage control loop regulates the speed of the BLDC motor, and the current control loop improves the power factor on the AC supply. The speed of the BLDC motor is observed under voltage disturbance, and a constant motor speed is maintained. The proposed control method for the SEPIC-converter-fed PMBLDC motor drive is modeled in MATLAB/Simulink.
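
As an illustration of the fuzzy voltage-control idea, the sketch below implements a small Mamdani-style controller with speed error and its change as inputs; the membership functions and rule table are generic placeholders, not the controller reported in the paper.

```python
# Minimal sketch of a Mamdani-style fuzzy speed controller: two inputs
# (normalized speed error and its change), three triangular sets each, and a
# 3x3 rule table defuzzified by weighted average. Membership shapes and rule
# outputs are illustrative assumptions.
def tri(x, a, b, c):
    """Triangular membership function with corners a, b, c."""
    return max(min((x - a) / (b - a + 1e-9), (c - x) / (c - b + 1e-9)), 0.0)

SETS = {"N": (-2.0, -1.0, 0.0), "Z": (-1.0, 0.0, 1.0), "P": (0.0, 1.0, 2.0)}
# Rule table: (error set, delta-error set) -> crisp duty-cycle correction.
RULES = {("N", "N"): -1.0, ("N", "Z"): -0.5, ("N", "P"): 0.0,
         ("Z", "N"): -0.5, ("Z", "Z"): 0.0,  ("Z", "P"): 0.5,
         ("P", "N"): 0.0,  ("P", "Z"): 0.5,  ("P", "P"): 1.0}

def fuzzy_voltage_correction(err, derr):
    """Inputs clipped to [-1, 1]; returns a normalized duty-cycle correction."""
    err = max(-1.0, min(1.0, err))
    derr = max(-1.0, min(1.0, derr))
    num = den = 0.0
    for (es, ds), out in RULES.items():
        w = min(tri(err, *SETS[es]), tri(derr, *SETS[ds]))  # firing strength
        num += w * out
        den += w
    return num / den if den else 0.0
```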


Author(s):  
Adel Ahmed Obed ◽  
Ameer Lateef Saleh ◽  
Abbas Kareem Kadhim

In this paper, several methods are developed to control the speed of a brushless DC (BLDC) motor. Since it is difficult to obtain good performance with a classical PID controller, this paper proposes a Dynamic Wavelet Neural Network (DWNN) combined with a parallel PID controller, yielding a novel controller named the DWNN-PID controller. It combines the learning ability of artificial neural networks, applied to the BLDC motor and its drive system, with the identification capability of the wavelet decomposition for controlling the dynamic system, and it is furthermore able to adapt and self-learn. The suggested controller is used to control the speed of the BLDC motor and provides better performance than classical controllers over a wide control range. The proposed controller parameters are tuned continuously using the Particle Swarm Optimization (PSO) algorithm. The simulation results, carried out in MATLAB/Simulink, show that the proposed DWNN-PID controller is superior in stability and performance to the classical WNN-PID and conventional PID controllers.
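
The PSO tuning stage can be illustrated as follows: each particle encodes a candidate parameter vector and is scored by a user-supplied cost, for example the integral of absolute speed error from a simulation run. The swarm size, bounds, and coefficients below are placeholders, and simulate_iae is a hypothetical hook, not part of the paper.

```python
import numpy as np

# Hedged sketch of PSO tuning a controller parameter vector. The cost function
# is expected to wrap a simulation run (e.g. integral of absolute speed error).
def pso(cost, dim, bounds, n_particles=20, iters=50, w=0.7, c1=1.5, c2=1.5):
    rng = np.random.default_rng(0)
    lo, hi = bounds
    x = rng.uniform(lo, hi, size=(n_particles, dim))
    v = np.zeros_like(x)
    pbest, pbest_cost = x.copy(), np.array([cost(p) for p in x])
    g = pbest[pbest_cost.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)   # velocity update
        x = np.clip(x + v, lo, hi)
        costs = np.array([cost(p) for p in x])
        better = costs < pbest_cost
        pbest[better], pbest_cost[better] = x[better], costs[better]
        g = pbest[pbest_cost.argmin()].copy()                   # global best
    return g

# Usage idea (simulate_iae is a hypothetical simulation-based cost):
# best_params = pso(lambda p: simulate_iae(*p), dim=3, bounds=(0.0, 10.0))
```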


Author(s):  
Muhammed A. Ibrahim ◽  
Ausama Kh. Mahmood ◽  
Nashwan Saleh Sultan

The brushless DC (BLDC) motor is commonly employed in many industrial applications due to its high torque and efficiency. This article presents an optimally designed speed controller for the brushless DC motor based on the genetic algorithm (GA). The optimization method is used to search for the ideal proportional-integral-derivative (PID) gains. Three controller design methods for the brushless DC motor are considered: trial-and-error PID design, auto-tuned PID design, and genetic-algorithm-based controller design. The PID controller is evaluated using the integral absolute error (IAE) and integral squared error (ISE) criteria for the BLDC motor control system. A GA-PID controller is designed to enhance system performance: the PID coefficients are calculated by the GA, producing an optimal hybrid GA-PID controller. The closed-loop speed response of the PID controller is examined for the IAE and ISE error criteria. The suggested GA-PID controller is designed, modeled, and simulated in MATLAB. The output system performance is compared for every controller scheme. The results show that the GA-PID controller based on the ISE objective function gives the best time-domain performance (rise time, settling time, and percentage overshoot) compared with the other techniques.
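
As a hedged sketch of GA-based PID tuning with the IAE and ISE objectives, the code below evolves (Kp, Ki, Kd) triples against a user-supplied fitness; only the IAE and ISE definitions follow their standard form, while the GA operators and settings, and the simulate_speed_error hook, are illustrative assumptions.

```python
import numpy as np

# Standard error criteria evaluated on a sampled error signal.
def iae(errors, dt):
    return float(np.sum(np.abs(errors)) * dt)

def ise(errors, dt):
    return float(np.sum(np.asarray(errors) ** 2) * dt)

# Hedged sketch of a GA tuning (Kp, Ki, Kd); operators and settings assumed.
def ga_tune(fitness, bounds=(0.0, 10.0), pop=30, gens=40, pm=0.1):
    rng = np.random.default_rng(0)
    lo, hi = bounds
    x = rng.uniform(lo, hi, size=(pop, 3))          # individuals = (Kp, Ki, Kd)
    for _ in range(gens):
        f = np.array([fitness(ind) for ind in x])
        # Tournament selection of parents.
        idx = np.array([min(rng.integers(pop, size=2), key=lambda i: f[i])
                        for _ in range(pop)])
        parents = x[idx]
        # Blend crossover between consecutive parents.
        alpha = rng.random((pop, 1))
        children = alpha * parents + (1 - alpha) * np.roll(parents, 1, axis=0)
        # Gaussian mutation.
        mask = rng.random(children.shape) < pm
        children[mask] += rng.normal(0.0, 0.5, mask.sum())
        x = np.clip(children, lo, hi)
    f = np.array([fitness(ind) for ind in x])
    return x[f.argmin()]

# Usage idea (simulate_speed_error is a hypothetical simulation hook):
# best = ga_tune(lambda g: ise(simulate_speed_error(*g), dt=1e-4))
```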


Author(s):  
Daniel Christianto ◽  
Cuk Supriyadi Ali Nandar ◽  
Widi Setiawan

Greek yogurt production needs a straining process that takes 10 hours or more. This paper proposes an automation and control method for a centrifugation system to speed up the process and to improve the accuracy of the quantity of whey drained. Using system identification, an estimated mathematical model of the straining process was developed based on the traditional yogurt-straining process. Simulation and control design optimization were then carried out using this estimated model. Simulations were run with a whey-mass controller, a motor-speed controller, and a combination of both, using PID and fuzzy logic controllers. The fastest configuration, a PID controller for motor speed combined with a fuzzy logic controller for whey mass, can speed up the production time and improve the accuracy of the quantity of whey drained.
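
A structural sketch of the combined scheme described above, with an outer whey-mass loop that sets the drum-speed reference and an inner speed loop that tracks it, might look as follows; the simple piecewise gain stands in for the fuzzy controller, and all numbers are placeholders rather than identified model values.

```python
# Hedged structural sketch only: outer whey-mass loop feeds an inner speed loop.
def whey_mass_controller(target_mass, measured_mass):
    """Outer loop: turn the whey-mass error into a drum speed reference [rpm]."""
    err = target_mass - measured_mass
    gain = 50.0 if abs(err) > 1.0 else 20.0         # stand-in for fuzzy rules
    return max(0.0, min(3000.0, gain * err))

def speed_controller(speed_ref, speed, state, kp=0.5, ki=2.0, dt=1e-2):
    """Inner loop: PI control of drum speed; `state` holds the integral term."""
    err = speed_ref - speed
    state["i"] = state.get("i", 0.0) + err * dt
    return kp * err + ki * state["i"]               # motor voltage command
```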


Aerospace ◽  
2021 ◽  
Vol 8 (4) ◽  
pp. 113
Author(s):  
Pedro Andrade ◽  
Catarina Silva ◽  
Bernardete Ribeiro ◽  
Bruno F. Santos

This paper presents a Reinforcement Learning (RL) approach to optimize the long-term scheduling of maintenance for an aircraft fleet. The problem considers fleet status, maintenance capacity, and other maintenance constraints to schedule hangar checks for a specified time horizon. The checks are scheduled within an interval, and the goal is to schedule them as close as possible to their due date. In doing so, the number of checks is reduced, and the fleet availability increases. A Deep Q-learning algorithm is used to optimize the scheduling policy. The model is validated in a real scenario using maintenance data from 45 aircraft. The maintenance plan generated with our approach is compared with a previous study, which presented a Dynamic Programming (DP) based approach, and with airline estimations for the same period. The results show a reduction in the number of checks scheduled, which indicates the potential of RL in solving this problem. The adaptability of RL is also tested by introducing small disturbances in the initial conditions. After training the model with these simulated scenarios, the results show the robustness of the RL approach and its ability to generate efficient maintenance plans in only a few seconds.
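
The paper's reward design is not given here; as one plausible, purely illustrative shaping consistent with the stated goals (schedule checks close to, but not past, their due date and respect hangar capacity), a reward could look like the following sketch.

```python
# Hedged sketch only: an assumed reward for scheduling one hangar check.
# The weights and penalty values are illustrative, not taken from the paper.
def check_reward(scheduled_day, due_day, used_slots, capacity):
    if scheduled_day > due_day:          # overdue check: strong penalty
        return -100.0
    slack = due_day - scheduled_day      # unused days of the check interval
    reward = -0.1 * slack                # prefer scheduling close to the due date
    if used_slots > capacity:            # hangar over capacity on that day
        reward -= 50.0
    return reward
```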


Entropy ◽  
2021 ◽  
Vol 23 (6) ◽  
pp. 737
Author(s):  
Fengjie Sun ◽  
Xianchang Wang ◽  
Rui Zhang

An Unmanned Aerial Vehicle (UAV) can greatly reduce manpower in agricultural plant protection tasks such as watering, sowing, and pesticide spraying. It is essential to develop a Decision-making Support System (DSS) for UAVs to help them choose the correct action in each state according to the policy. In an unknown environment, formulating rules to help UAVs choose actions is not applicable, and obtaining the optimal policy through reinforcement learning is a feasible solution. However, experiments show that existing reinforcement learning algorithms cannot obtain the optimal policy for a UAV in the agricultural plant protection environment. In this work we propose an improved Q-learning algorithm based on similar state matching, and we prove theoretically that, in the agricultural plant protection environment, a UAV following the policy learned by the proposed algorithm has a greater probability of choosing the optimal action than one following the classic Q-learning algorithm. The proposed algorithm is implemented and tested on datasets that are evenly distributed based on real UAV parameters and real farm information. The performance of the algorithm is discussed in detail. Experimental results show that the proposed algorithm can efficiently learn the optimal policy for UAVs in the agricultural plant protection environment.
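
The abstract does not define the similar-state matching rule; the sketch below shows one plausible reading, in which a newly encountered state bootstraps its action values from the nearest previously visited state before the usual Q-learning update. The actual mechanism in the paper may differ.

```python
import numpy as np

# Hedged sketch of one possible "similar state matching" scheme: unseen states
# copy the action values of their nearest visited neighbour in feature space.
class SimilarStateQ:
    def __init__(self, n_actions, alpha=0.1, gamma=0.9):
        self.q = {}                      # state tuple -> action-value array
        self.n_actions, self.alpha, self.gamma = n_actions, alpha, gamma

    def _values(self, state):
        """Return Q-values for state, bootstrapping from the nearest known state."""
        if state not in self.q:
            if self.q:
                nearest = min(self.q, key=lambda s: np.linalg.norm(
                    np.subtract(s, state)))
                self.q[state] = self.q[nearest].copy()
            else:
                self.q[state] = np.zeros(self.n_actions)
        return self.q[state]

    def update(self, state, action, reward, next_state):
        """Standard Q-learning update on top of the similarity-based lookup."""
        q_sa = self._values(state)
        target = reward + self.gamma * self._values(next_state).max()
        q_sa[action] += self.alpha * (target - q_sa[action])
```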

