Q-Learning Induced Artificial Bee Colony for Noisy Optimization

Author(s):
Pratyusha Rakshit
Amit Konar
Atulya K. Nagar

Author(s):
Sima Saeed
Aliakbar Niknafs

This article presents a new method for designing reinforcement fuzzy controllers. The method uses an Artificial Bee Colony algorithm guided by Q-values to tune a reinforcement fuzzy system; the algorithm is called Artificial Bee Colony-Fuzzy Q-learning (ABC-FQ). In the fuzzy inference system, the precondition part of the rules is generated from prior knowledge, while the ABC-FQ algorithm is responsible for finding the best combination of actions for the consequent part of the rules. In ABC-FQ, each combination of consequent actions is treated as a food source, and the fitness of a food source is determined by its Q-value. The algorithm selects the best food source, i.e., the best combination of actions for the fuzzy system, according to this Q criterion, and thereby tries to construct the best reinforcement fuzzy controller for the agent. ABC-FQ is applied to the Truck Backer-Upper control problem, a reinforcement fuzzy control benchmark. The results indicate that the method reaches a solution faster and with fewer trials than previous methods.
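A minimal sketch of the idea described in the abstract: each ABC food source encodes one consequent action per fuzzy rule, and its fitness is read from a Q-value table. The rule and action counts, the Q-table contents, and the update scheme below are illustrative assumptions, not the authors' exact formulation.

```python
import random

NUM_RULES = 9          # rules with fixed precondition parts (assumed)
NUM_ACTIONS = 5        # candidate actions per rule consequent (assumed)
COLONY_SIZE = 10
MAX_CYCLES = 50
LIMIT = 20             # abandonment limit before the scout phase

# Q[rule][action]: assumed to be filled by reinforcement feedback elsewhere
Q = [[random.random() for _ in range(NUM_ACTIONS)] for _ in range(NUM_RULES)]

def fitness(source):
    """Q-value based fitness: sum of Q entries selected by the food source."""
    return sum(Q[r][a] for r, a in enumerate(source))

def random_source():
    """A food source = one consequent action index per rule."""
    return [random.randrange(NUM_ACTIONS) for _ in range(NUM_RULES)]

def neighbour(source):
    """Perturb one rule's consequent action (employed/onlooker move)."""
    new = source[:]
    new[random.randrange(NUM_RULES)] = random.randrange(NUM_ACTIONS)
    return new

sources = [random_source() for _ in range(COLONY_SIZE)]
trials = [0] * COLONY_SIZE

for _ in range(MAX_CYCLES):
    # employed-bee phase: greedy replacement using the Q-based fitness
    for i, src in enumerate(sources):
        cand = neighbour(src)
        if fitness(cand) > fitness(src):
            sources[i], trials[i] = cand, 0
        else:
            trials[i] += 1
    # onlooker phase: selection probability proportional to fitness
    weights = [fitness(s) for s in sources]
    for _ in range(COLONY_SIZE):
        i = random.choices(range(COLONY_SIZE), weights=weights)[0]
        cand = neighbour(sources[i])
        if fitness(cand) > fitness(sources[i]):
            sources[i], trials[i] = cand, 0
    # scout phase: abandon food sources that stopped improving
    for i in range(COLONY_SIZE):
        if trials[i] > LIMIT:
            sources[i], trials[i] = random_source(), 0

best = max(sources, key=fitness)
print("Best consequent action per rule:", best)
```

The best food source found this way gives the consequent action for each rule, which would then be plugged into the fuzzy controller.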


2019
Vol 6 (4)
pp. 43
Author(s):
HADIR ADEBIYI BUSAYO
TIJANI SALAWUDEEN AHMED
FOLASHADE O. ADEBIYI RISIKAT
...

2020
Vol 38 (9A)
pp. 1384-1395
Author(s):
Rakaa T. Kamil
Mohamed J. Mohamed
Bashra K. Oleiwi

A modified version of the Artificial Bee Colony (ABC) algorithm, named the Adaptive Dimension Limit Artificial Bee Colony (ADL-ABC) algorithm, is proposed to determine the globally optimal path for a mobile robot that satisfies the chosen criteria of shortest distance and collision-free motion among circular static obstacles in the robot's environment. A cubic polynomial connects the start point to the end point through three via points, so the generated paths are smooth and achievable by the robot. Two case studies (scenarios) are presented, and a comparative study of the two algorithms' results is carried out to evaluate the performance of the proposed algorithm. The simulation results show that the modified parameter (a dynamic control limit) avoids a fixed limit value and thereby excludes unnecessary iterations, so the algorithm finds a solution in fewer iterations and less computational time. The result tables show that when the two algorithms produce paths of nearly equal length, as in case A (14.490 versus 14.459 units), the computation time is reduced by approximately half.
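A minimal sketch of the path representation and cost that the abstract describes: a candidate solution holds three via points between the start and goal, the path is a smooth cubic curve fitted through the five waypoints, and the cost is path length plus a penalty for entering a circular obstacle. The start/goal coordinates, obstacle list, fitting method (least-squares cubic in y(x)), and penalty weight are assumptions for illustration, not the paper's exact construction.

```python
import numpy as np

START = np.array([0.0, 0.0])
GOAL = np.array([10.0, 10.0])
OBSTACLES = [(4.0, 5.0, 1.0), (7.0, 3.0, 1.5)]   # (cx, cy, radius), assumed

def path_cost(via_points, samples=100):
    """Cost of a path through three via points (a 3x2 array)."""
    pts = np.vstack([START, via_points, GOAL])
    # least-squares cubic y(x) through the five waypoints (smoothness assumption)
    coeffs = np.polyfit(pts[:, 0], pts[:, 1], 3)
    xs = np.linspace(START[0], GOAL[0], samples)
    ys = np.polyval(coeffs, xs)
    # path length from the sampled segments
    length = np.sum(np.hypot(np.diff(xs), np.diff(ys)))
    # collision penalty: large cost for every sample inside an obstacle
    penalty = 0.0
    for cx, cy, r in OBSTACLES:
        inside = np.hypot(xs - cx, ys - cy) < r
        penalty += 1000.0 * np.count_nonzero(inside)
    return length + penalty

# Example: evaluate one candidate set of via points (chosen arbitrarily)
candidate = np.array([[2.5, 2.0], [5.0, 6.5], [7.5, 8.0]])
print("cost:", path_cost(candidate))
```

In an ABC or ADL-ABC search, this cost function would play the role of the food-source fitness, with each food source holding the coordinates of the three via points.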

