Optimization of PID Controller to Stabilize Quadcopter Movements Using Meta-Heuristic Search Algorithms

2021, Vol 11 (14), pp. 6492
Author(s):  
Alaa Sheta ◽  
Malik Braik ◽  
Dheeraj Reddy Maddi ◽  
Ahmed Mahdy ◽  
Sultan Aljahdali ◽  
...  

Quadrotor UAVs are one of the most preferred types of small unmanned aerial vehicles, due to their modest mechanical structure and propulsion principle. However, the complex non-linear dynamics of these vehicles demand advanced stabilizing control of their movement, commonly provided by a Proportional Integral Derivative (PID) controller. Additionally, locating appropriate gains for a model-based controller is relatively complex and demands a significant amount of time, as it depends on external perturbations and on the dynamic modeling of the plant. Therefore, a method for automatically tuning quadcopter PID parameters can save effort and time and realize better control performance. Traditional tuning methods, such as Ziegler–Nichols (ZN), do not provide optimal control and might leave the system unstable, potentially causing significant damage. One approach that alleviates the tough task of nonlinear control design is the use of meta-heuristics that permit appropriate control actions. This study presents PID controller tuning using meta-heuristic algorithms, namely Genetic Algorithms (GAs), the Crow Search Algorithm (CSA) and Particle Swarm Optimization (PSO), to stabilize quadcopter movements. These meta-heuristics tune the gains of the PID controllers governing the quadcopter's position and orientation, using a fitness function designed to reduce overshoot by predicting future paths. The obtained results confirmed the efficacy of the proposed controller in reliably controlling the flight of a quadcopter with GA, CSA and PSO tuning. Finally, in the simulation results for quadcopter movement control, PSO produced impressive control results compared to GA and CSA.
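To make the tuning loop concrete, here is a minimal Python sketch of PSO searching over (Kp, Ki, Kd). The toy double-integrator plant, the ITAE-plus-overshoot fitness and all PSO constants are illustrative assumptions, not the paper's quadcopter model or fitness function.

```python
# Illustrative sketch: PSO tuning of PID gains on a toy altitude model.
# The plant, fitness function and PSO constants are assumptions for
# demonstration; the paper's quadcopter dynamics are far richer.
import random

def simulate_pid(kp, ki, kd, dt=0.01, steps=1000):
    """Step response of a crude double-integrator plant under PID control."""
    z = vz = integral = prev_err = 0.0
    cost = 0.0
    for i in range(steps):
        err = 1.0 - z                      # unit step setpoint
        integral += err * dt
        deriv = (err - prev_err) / dt
        u = kp * err + ki * integral + kd * deriv
        prev_err = err
        vz += u * dt                       # toy dynamics: u acts as acceleration
        z += vz * dt
        cost += (i * dt) * abs(err) * dt   # ITAE-style tracking cost
        cost += 10.0 * max(0.0, z - 1.0)   # extra penalty on overshoot
    return cost

def pso(n=20, iters=100, w=0.7, c1=1.5, c2=1.5, lo=0.0, hi=20.0):
    dim = 3                                # (Kp, Ki, Kd)
    pos = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n)]
    vel = [[0.0] * dim for _ in range(n)]
    pbest = [p[:] for p in pos]
    pcost = [simulate_pid(*p) for p in pos]
    g = min(range(n), key=lambda i: pcost[i])
    gbest, gcost = pbest[g][:], pcost[g]
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            c = simulate_pid(*pos[i])
            if c < pcost[i]:
                pbest[i], pcost[i] = pos[i][:], c
                if c < gcost:
                    gbest, gcost = pos[i][:], c
    return gbest, gcost

if __name__ == "__main__":
    gains, cost = pso()
    print("Kp, Ki, Kd =", gains, "cost =", cost)
```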

Author(s):  
Bryon Kucharski ◽  
Azad Deihim ◽  
Mehmet Ergezer

This research was conducted by an interdisciplinary team of two undergraduate students and a faculty member to explore solutions to the Birds of a Feather (BoF) Research Challenge. BoF is a newly designed perfect-information solitaire-type game. The focus of the study was to design and implement different algorithms and evaluate their effectiveness. The team compared the provided depth-first search (DFS) to heuristic algorithms such as Monte Carlo tree search (MCTS), as well as a novel heuristic search algorithm guided by machine learning. Since all of the studied algorithms converge to a solution from a solvable deal, the effectiveness of each approach was measured by how quickly a solution was reached and by how many nodes were traversed until a solution was reached. The employed methods have the potential to give artificial intelligence enthusiasts a better understanding of BoF and novel ways to solve perfect-information games and puzzles in general. The results indicate that the proposed heuristic search algorithms guided by machine learning provide a significant improvement in the number of nodes traversed over the provided DFS algorithm.
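The harness below sketches in Python how such a comparison can be instrumented: plain DFS and a heuristic best-first search share a node counter, while the BoF move generator (`successors`), goal test and learned scorer are stubbed-out assumptions rather than the study's actual implementation.

```python
# Sketch of the evaluation-harness idea: count nodes expanded by plain DFS
# versus a heuristic best-first search. The move generator, goal test and
# scoring model are placeholders supplied by the caller.
import heapq

def dfs(state, successors, is_goal, counter):
    counter[0] += 1                        # one more node traversed
    if is_goal(state):
        return [state]
    for nxt in successors(state):
        path = dfs(nxt, successors, is_goal, counter)
        if path:
            return [state] + path
    return None

def best_first(start, successors, is_goal, score, counter):
    """score(state): lower is more promising, e.g. an ML model's estimate."""
    frontier = [(score(start), 0, start, [start])]
    tie = 1                                # tiebreaker so states never compare
    while frontier:
        _, _, state, path = heapq.heappop(frontier)
        counter[0] += 1
        if is_goal(state):
            return path
        for nxt in successors(state):
            heapq.heappush(frontier, (score(nxt), tie, nxt, path + [nxt]))
            tie += 1
    return None
```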


2020, Vol 34 (03), pp. 2327-2334
Author(s):  
Vidal Alcázar ◽  
Pat Riddle ◽  
Mike Barley

In the past few years, new and very successful bidirectional heuristic search algorithms have been proposed. Their key novelty is a lower bound on the cost of a solution that includes information from the g values in both directions. Kaindl and Kainz (1997) proposed measuring how inaccurate a heuristic is while expanding nodes in the opposite direction, and using this information to raise the f values of the evaluated nodes. However, this comes with a set of disadvantages and has yet to be exploited to its full potential. Additionally, Sadhukhan (2013) presented BAE∗, a bidirectional best-first search algorithm based on the accumulated heuristic inaccuracy along a path. However, no complete comparison with other bidirectional algorithms, theoretical or empirical, has yet been done. In this paper we define individual bounds within the lower-bound framework and show how both Kaindl and Kainz's and Sadhukhan's methods can be generalized, thus creating new bounds. This overcomes previous shortcomings and allows newer algorithms to benefit from these techniques as well. Experimental results show a substantial improvement, up to an order of magnitude in the number of necessarily-expanded nodes, compared to state-of-the-art near-optimal algorithms on common benchmarks.
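As a rough illustration of the inaccuracy idea (not the paper's generalized bounds): a backward search that expands node n with cost-to-goal g_B(n) reveals the forward heuristic's error at n, and the smallest observed error can be added to forward f values. A hedged Python sketch with invented names:

```python
# Hedged sketch of the Add-style inaccuracy correction (function names
# invented): backward expansion of node n reveals its true cost-to-goal
# g_B(n), so the gap g_B(n) - h_F(n) measures the forward heuristic's
# inaccuracy; the smallest observed gap is a valid additive correction.
def min_forward_inaccuracy(expanded_backward, h_forward):
    """expanded_backward: iterable of (node, g_B) pairs for closed nodes."""
    return min(g_b - h_forward(n) for n, g_b in expanded_backward)

def corrected_f(node, g_forward, h_forward, eps_f):
    # f_F(n) = g_F(n) + h_F(n) + eps_F, with eps_F observed on the
    # opposite frontier via min_forward_inaccuracy.
    return g_forward[node] + h_forward(node) + eps_f
```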


2015, Vol 73 (3)
Author(s):  
Mohamad Saiful Islam Aziz ◽  
Sophan Wahyudi Nawawi ◽  
Shahdan Sudin ◽  
Norhaliza Abdul Wahab ◽  
Mahdi Faramarzi ◽  
...  

This paper presents a new optimization approach to controller parameter tuning for wastewater treatment process (WWTP) applications. In the WWTP case study, a PID controller is used to control the substrate (S) and dissolved oxygen (DO) concentration levels. The large number of parameters that must be controlled makes the system complicated. The Gravitational Search Algorithm (GSA) is used as the main method for the PID controller tuning process. GSA is based on Newton's law of gravity and mass interaction: the searcher agents are a collection of masses that interact with each other according to the law of gravity and the laws of motion. The WWTP system considered uses an activated sludge reactor and is a multi-input multi-output (MIMO) process. MATLAB is used as the simulation platform, and GSA is compared against an established optimization method, Particle Swarm Optimization (PSO), to determine which has the better features. In this case study, the results show that the transient response of the GSA-PID controller was 20-30% better than that of the PSO-PID controller.
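For readers unfamiliar with GSA, the following Python sketch shows the core mass-and-force update on a generic objective (which could be, say, an integral-of-error cost of the closed-loop DO response). The constants and decay schedule are common textbook choices, not the paper's settings.

```python
# Minimal GSA sketch on a generic minimization objective. Full GSA also
# restricts attraction to a shrinking Kbest set of agents; omitted here.
import math
import random

def gsa(objective, dim, n=20, iters=100, g0=100.0, alpha=20.0,
        lo=-10.0, hi=10.0):
    X = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n)]
    V = [[0.0] * dim for _ in range(n)]
    best, best_fit = None, float("inf")
    for t in range(iters):
        fits = [objective(x) for x in X]
        for x, f in zip(X, fits):
            if f < best_fit:
                best, best_fit = x[:], f
        worst, bmin = max(fits), min(fits)
        m = [(worst - f) / (worst - bmin + 1e-12) for f in fits]  # heavier = better
        s = sum(m) + 1e-12
        M = [mi / s for mi in m]
        G = g0 * math.exp(-alpha * t / iters)     # decaying gravitational constant
        for i in range(n):
            a = [0.0] * dim
            for j in range(n):
                if i == j:
                    continue
                dist = math.sqrt(sum((X[j][d] - X[i][d]) ** 2 for d in range(dim)))
                for d in range(dim):
                    # a_i = sum_j rand * G * M_j * (x_j - x_i) / (R + eps)
                    a[d] += random.random() * G * M[j] * (X[j][d] - X[i][d]) / (dist + 1e-12)
            for d in range(dim):
                V[i][d] = random.random() * V[i][d] + a[d]
                X[i][d] = min(hi, max(lo, X[i][d] + V[i][d]))
    return best, best_fit
```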


2019, Vol 34 (21), pp. 1950169
Author(s):  
Aihan Yin ◽  
Kemeng He ◽  
Ping Fan

Among the many classic heuristic search algorithms, the Grover quantum search algorithm (QSA) stands out for the quadratic speedup it offers. Based on the properties of the two-qubit Grover QSA, a quantum dialogue (QD) protocol is proposed. In addition, our protocol utilizes unitary operations and single-particle measurements. Each transmitted quantum state (except for the decoy states used for eavesdropping detection) can carry two bits of secret information simultaneously. Theoretical analysis shows that the proposed protocol has high security.
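The search step itself is easy to reproduce: for two qubits (N = 4), a single Grover iteration maps the uniform superposition exactly onto the marked basis state. The numpy sketch below shows only this step; the protocol's encoding of secret bits and its decoy-state checking are not reproduced.

```python
# Toy numpy simulation of one two-qubit Grover iteration: with N = 4
# basis states, a single iteration rotates the uniform superposition
# exactly onto the marked state.
import numpy as np

def grover_two_qubit(marked):            # marked in {0, 1, 2, 3}
    s = np.full(4, 0.5)                  # uniform superposition |s>
    oracle = np.eye(4)
    oracle[marked, marked] = -1.0        # phase-flip the marked state
    diffusion = 2.0 * np.outer(s, s) - np.eye(4)   # 2|s><s| - I
    return diffusion @ (oracle @ s)

state = grover_two_qubit(marked=2)
print(np.round(state, 6))   # amplitude 1 on index 2, i.e. basis state |10>
```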


2020, Vol 34 (06), pp. 9827-9834
Author(s):  
Maximilian Fickert ◽  
Tianyi Gu ◽  
Leonhard Staut ◽  
Wheeler Ruml ◽  
Joerg Hoffmann ◽  
...  

Suboptimal heuristic search algorithms can benefit from reasoning about heuristic error, especially in a real-time setting where there is not enough time to search all the way to a goal. However, current reasoning methods implicitly or explicitly incorporate assumptions about the cost-to-go function. We consider a recent real-time search algorithm, called Nancy, that manipulates explicit beliefs about the cost-to-go. The original presentation of Nancy assumed that these beliefs are Gaussian, with parameters following a certain form. In this paper, we explore how to replace these assumptions with actual data. We develop a data-driven variant of Nancy, DDNancy, that bases its beliefs on heuristic performance statistics from the same domain. We extend Nancy and DDNancy with the notion of persistence and prove their completeness. Experimental results show that DDNancy can perform well in domains in which the original assumption-based Nancy performs poorly.
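A hedged sketch of the data-driven belief idea: replace the Gaussian assumption with empirical statistics keyed on the heuristic value. The class and function names below are invented for illustration and do not mirror the authors' code.

```python
# Sketch: base cost-to-go beliefs on recorded heuristic-performance data
# from the same domain, falling back to the raw heuristic when no
# samples exist. Names and data layout are assumptions.
from collections import defaultdict

class ErrorModel:
    def __init__(self):
        self.samples = defaultdict(list)   # h value -> observed true costs

    def record(self, h_value, true_cost):
        self.samples[h_value].append(true_cost)

    def expected_cost(self, h_value):
        obs = self.samples.get(h_value)
        if not obs:                        # no data: trust the heuristic
            return float(h_value)
        return sum(obs) / len(obs)         # empirical mean cost-to-go

def pick_next(frontier, g, h, model):
    """Expand the node with least g + E[cost-to-go | h]."""
    return min(frontier, key=lambda n: g[n] + model.expected_cost(h(n)))
```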


2013, Vol 2013, pp. 1-11
Author(s):  
Zheng-Cai Lu ◽  
Zheng Qin ◽  
Qiao Jing ◽  
Lai-Xiang Shan

Attribute reduction is one of the challenging problems facing the effective application of computational intelligence in artificial intelligence. Its task is to eliminate dispensable attributes and search for a feature subset that possesses the same classification capacity as the original attribute set. Many heuristic search algorithms have been developed to accomplish efficient attribute reduction. Most of them are based on the model in which the approximation of all the target concepts associated with a decision system is divided into approximations of single target concepts, each represented by a pair of definable concepts known as the lower and upper approximations. This paper proposes a novel model called macroscopic approximation, which treats all the target concepts as an indivisible whole to be approximated by the rough-set boundary region derived from inconsistent tolerance blocks, as well as an efficient approximation framework called positive macroscopic approximation (PMA), which addresses macroscopic approximations with respect to a series of attribute subsets. Based on PMA, a fast heuristic search algorithm for attribute reduction in incomplete decision systems is designed; as the experimental results demonstrate, it achieves markedly better computational efficiency than other available algorithms.
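For orientation, the baseline idea that PMA accelerates can be sketched as a greedy forward search over attributes driven by the positive-region dependency measure. This simplified complete-data version is a stand-in, not the paper's incomplete-decision-system machinery.

```python
# Generic greedy attribute-reduction sketch using the positive-region
# dependency measure on complete data; rows are dicts mapping attribute
# names to values, and `decision` is the class attribute.
def partition(rows, attrs):
    blocks = {}
    for i, row in enumerate(rows):
        blocks.setdefault(tuple(row[a] for a in attrs), []).append(i)
    return blocks.values()

def dependency(rows, attrs, decision):
    """Fraction of objects whose block is pure w.r.t. the decision."""
    pos = 0
    for block in partition(rows, attrs):
        if len({rows[i][decision] for i in block}) == 1:
            pos += len(block)
    return pos / len(rows)

def greedy_reduct(rows, all_attrs, decision):
    reduct = []
    base = dependency(rows, all_attrs, decision)
    while dependency(rows, reduct, decision) < base:
        # add the attribute that raises the dependency degree the most
        best = max((a for a in all_attrs if a not in reduct),
                   key=lambda a: dependency(rows, reduct + [a], decision))
        reduct.append(best)
    return reduct
```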


2016, Vol 57, pp. 229-271
Author(s):  
Marcel Steinmetz ◽  
Jörg Hoffmann ◽  
Olivier Buffet

Unavoidable dead-ends are common in many probabilistic planning problems, e.g. when actions may fail or when operating under resource constraints. An important objective in such settings is MaxProb, determining the maximal probability with which the goal can be reached, and a policy achieving that probability. Yet algorithms for MaxProb probabilistic planning are severely underexplored, to the extent that there is scant evidence of what the empirical state of the art actually is. We close this gap with a comprehensive empirical analysis. We design and explore a large space of heuristic search algorithms, systematizing known algorithms and contributing several new algorithm variants. We consider MaxProb, as well as weaker objectives that we baptize AtLeastProb (requiring to achieve a given goal probability threshold) and ApproxProb (requiring to compute the maximum goal probability up to a given accuracy). We explore both the general case where there may be 0-reward cycles, and the practically relevant special case of acyclic planning, such as planning with a limited action-cost budget. We design suitable termination criteria, search algorithm variants, dead-end pruning methods using classical planning heuristics, and node selection strategies. We design a benchmark suite comprising more than 1000 instances adapted from the IPPC, resource-constrained planning, and simulated penetration testing. Our evaluation clarifies the state of the art, characterizes the behavior of a wide range of heuristic search algorithms, and demonstrates significant benefits of our new algorithm variants.
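In the acyclic special case, MaxProb reduces to backward induction: a state's maximal goal probability is the best action-wise expectation over successor probabilities. A minimal sketch, assuming a topologically sorted state space and an explicit transition table (general cyclic MaxProb needs iterative methods and the termination criteria studied in the paper):

```python
# Sketch of MaxProb backward induction for acyclic problems.
# transitions[s] -> list of actions; an action is [(prob, succ), ...].
def maxprob_acyclic(states_topo_reversed, transitions, goals):
    p = {}
    for s in states_topo_reversed:          # successors before predecessors
        if s in goals:
            p[s] = 1.0
        elif not transitions.get(s):
            p[s] = 0.0                      # dead-end: goal unreachable
        else:
            p[s] = max(sum(pr * p[t] for pr, t in action)
                       for action in transitions[s])
    return p                                # p[s] = maximal goal probability
```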


Author(s):  
Anuradha Chug ◽  
Sandhya Tarwani

Bad smells represent imperfections in the design of a software system and trigger the urge to refactor the source code. The quality of object-oriented software has always been a major concern for developer teams, and refactoring techniques help them focus on this aspect by transforming the code in a way that preserves the software's behavior. Rigorous research has been done in this field to improve the quality of software using various techniques. But one issue remains unsettled: the overhead effort of refactoring the code so as to yield the maximum maintainability value. In this paper, a quantitative evaluation method is proposed to improve the maintainability value by identifying the optimal refactoring dependencies in advance with the help of various heuristic search algorithms, including A*, AO*, Hill-Climbing and Greedy approaches. A comparison has been made between the maintainability values of the software studied before and after applying the proposed methodology. The results of this study show that the Greedy algorithm is the most promising of these algorithms for determining the optimal refactoring sequence, yielding 18.56% and 9.90% improvements in the maintainability values of the jTDS and ArtOfIllusion projects, respectively. Further, this study would benefit software maintenance teams, since refactoring sequences are available beforehand, helping them maintain the software with much greater ease and enhance its maintainability. The proposed methodology also helps the maintenance team focus on a limited portion of the software, owing to the prioritization of classes, in turn helping them complete their work within budget and time constraints.
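The greedy scheme can be pictured as repeatedly applying whichever candidate refactoring yields the largest immediate maintainability gain. In the sketch below, the maintainability metric and the refactoring operations are placeholders, not the paper's instruments.

```python
# Hedged sketch of greedy refactoring-sequence selection: `candidates`
# are objects with a non-destructive apply(), and `maintainability` is a
# stand-in scoring function, both assumptions for illustration.
def greedy_refactoring_sequence(code_model, candidates, maintainability):
    """Return refactorings in the order that greedily maximizes gain."""
    sequence = []
    while True:
        current = maintainability(code_model)
        best_gain, best_ref = 0.0, None
        for ref in candidates:
            trial = ref.apply(code_model)      # preview the refactoring
            gain = maintainability(trial) - current
            if gain > best_gain:
                best_gain, best_ref = gain, ref
        if best_ref is None:                   # nothing improves further
            return sequence
        code_model = best_ref.apply(code_model)
        sequence.append(best_ref)
        candidates = [r for r in candidates if r is not best_ref]
```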


Author(s):  
Nazmul Siddique ◽  
Hojjat Adeli

In the past three decades, nature-inspired and meta-heuristic algorithms have dominated the literature in the broad areas of search and optimization. The harmony search algorithm (HSA) is a music-inspired, population-based meta-heuristic search and optimization algorithm. The concept behind the algorithm is to find a perfect state of harmony as determined by aesthetic estimation. This paper starts with an overview of the phenomenon of harmony in music and of the improvisation practiced by musicians, and of how these ideas are applied to optimization problems. The concept of harmony memory and its mathematical implementation are introduced. A review of HSA and its variants is presented. Guidelines from the literature on the choice of the parameters used in HSA for the effective solution of optimization problems are summarized.
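A minimal sketch of the improvisation loop described above, using commonly cited default parameters (HMCR = 0.9, PAR = 0.3) rather than any recommendation from this survey:

```python
# Minimal HSA sketch: each new harmony is improvised from memory
# consideration (HMCR), pitch adjustment (PAR), and random selection,
# and replaces the worst member of the harmony memory if it is better.
import random

def harmony_search(objective, dim, lo, hi, hms=30, hmcr=0.9, par=0.3,
                   bw=0.05, iters=5000):
    memory = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(hms)]
    fits = [objective(h) for h in memory]
    for _ in range(iters):
        new = []
        for d in range(dim):
            if random.random() < hmcr:                 # memory consideration
                x = random.choice(memory)[d]
                if random.random() < par:              # pitch adjustment
                    x += random.uniform(-bw, bw) * (hi - lo)
            else:                                      # random selection
                x = random.uniform(lo, hi)
            new.append(min(hi, max(lo, x)))
        f = objective(new)
        worst = max(range(hms), key=lambda i: fits[i])
        if f < fits[worst]:                            # replace worst harmony
            memory[worst], fits[worst] = new, f
    best = min(range(hms), key=lambda i: fits[i])
    return memory[best], fits[best]
```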

