global optimum solution
Recently Published Documents

TOTAL DOCUMENTS: 21 (FIVE YEARS: 4)
H-INDEX: 3 (FIVE YEARS: 1)

Electronics ◽  
2022 ◽  
Vol 11 (2) ◽  
pp. 180
Author(s):  
Kashif Habib ◽  
Xinquan Lai ◽  
Abdul Wadood ◽  
Shahbaz Khan ◽  
Yuheng Wang ◽  
...  

In an electrical power system, the coordination of directional overcurrent protection relays (DOPR) plays a preeminent role in protecting the system: primary and backup protection keep the system robust and avoid unnecessary interruptions. The coordination between these relays should be pursued at the optimal value so as to minimize the total operating time of all main relays. Directional overcurrent relay coordination is a highly constrained optimization problem. Here, the DOPR problem is solved using a hybridized version of particle swarm optimization (HPSO). The hybridization is achieved by introducing simulated annealing (SA) into the original PSO to avoid being trapped in local optima and to search successfully for a global optimum solution. HPSO has been applied successfully to five case studies. Furthermore, the obtained results outperform other traditional and state-of-the-art techniques in terms of minimizing the total operating time of the DOPRs and in convergence characteristics, and require less computational time to reach the global optimum solution.
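The abstract does not reproduce the relay-coordination objective or the exact hybridization formulas, but the core idea of folding a simulated-annealing acceptance test into PSO can be sketched on a generic continuous objective. Everything below (parameter values, cooling schedule, the sphere test function) is an illustrative assumption, not the paper's implementation:

```python
import math
import random

def hybrid_pso_sa(f, dim=2, swarm=20, iters=200,
                  w=0.7, c1=1.5, c2=1.5, t0=1.0, cooling=0.95, seed=0):
    """Minimise f with PSO; an SA-style acceptance test occasionally lets a
    worse personal best survive, helping particles escape local optima."""
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(swarm)]
    vel = [[0.0] * dim for _ in range(swarm)]
    pbest = [p[:] for p in pos]
    pcost = [f(p) for p in pos]
    g = min(range(swarm), key=lambda i: pcost[i])
    gbest, gcost = pbest[g][:], pcost[g]
    temp = t0
    for _ in range(iters):
        for i in range(swarm):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            cost = f(pos[i])
            delta = cost - pcost[i]
            # SA acceptance: always keep improvements; keep a worse point
            # with probability exp(-delta / T), which shrinks as T cools.
            if delta < 0 or rng.random() < math.exp(-delta / max(temp, 1e-12)):
                pbest[i], pcost[i] = pos[i][:], cost
                if cost < gcost:
                    gbest, gcost = pos[i][:], cost
        temp *= cooling

    return gbest, gcost

best, cost = hybrid_pso_sa(lambda x: sum(v * v for v in x))
print(round(cost, 6))
```

Because the temperature decays geometrically, the algorithm behaves like exploratory SA early on and like plain PSO near the end.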


2021 ◽  
Vol 36 (1) ◽  
pp. 35-40
Author(s):  
Shanshan Tu ◽  
Obaid Rehman ◽  
Sadaqat Rehman ◽  
Shafi Khan ◽  
Muhammad Waqas ◽  
...  

The particle swarm optimizer is a search-based stochastic technique with a known weakness: being trapped in local optima. To trade off between local and global search and to avoid premature convergence in PSO, a new dynamic quantum-based particle swarm optimization (DQPSO) method is proposed in this work. In the proposed method, a beta probability distribution is used to mutate the particle holding the global best position of the swarm. This helps the particles escape from local optima and reach the global optimum solution more easily. Also, to enhance the global search capability of the proposed method, a dynamic update formula is proposed that keeps a good balance between local and global search. To evaluate the merit and efficiency of the proposed DQPSO method, it has been tested on several well-known mathematical test functions and on a standard benchmark problem known as Loney's solenoid design.
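The abstract gives neither the beta-distribution parameters nor the dynamic update formula, but the mutation idea itself is simple to illustrate. The shape parameters, the centring shift, and the `scale` factor below are all assumptions for the sketch:

```python
import random

def beta_mutate_gbest(gbest, scale=0.1, alpha=2.0, beta=2.0, rng=None):
    """Perturb the swarm's global best with beta-distributed noise.
    random.betavariate draws from (0, 1); subtracting 0.5 centres the
    noise at zero, and `scale` bounds its magnitude at scale / 2.
    All parameter names and values here are illustrative assumptions."""
    rng = rng or random.Random()
    return [g + scale * (rng.betavariate(alpha, beta) - 0.5) for g in gbest]

rng = random.Random(42)
mutant = beta_mutate_gbest([0.0, 0.0, 0.0], rng=rng)
print(mutant)  # three small perturbations, each bounded by scale / 2
```

A symmetric beta(2, 2) concentrates probability near zero offset, so the global best is usually nudged gently but can occasionally jump further, which is what lets trapped swarms move on.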


Author(s):  
Foo Fong Yeng ◽  
Soo Kum Yoke ◽  
Azrina Suhaimi

The Genetic Algorithm is an algorithm that imitates the process of natural evolution to solve optimization problems. All feasible (candidate) solutions are encoded as chromosomes, which undergo the genetic operators during evolution. The evolution itself is a search for the optimum solution; the search stops when a stopping criterion is met, and the fittest chromosome of the last generation is declared the optimum solution. However, this solution might be only a local optimum rather than a global optimum. Hence, an appropriate stopping criterion is important so that the search does not end before a global optimum solution is found. In this paper, saturation of the population fitness is proposed as the stopping criterion. The proposed criterion was compared with a conventional stopping criterion, repetition of the fittest chromosome, under various parameter settings. The results show that the proposed stopping criterion performs better than the conventional one.
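A fitness-saturation check can be expressed as a small predicate over the history of per-generation population fitness. The window length and tolerance below are illustrative assumptions; the paper's exact saturation test may differ:

```python
def fitness_saturated(history, window=10, tol=1e-6):
    """Stop when the population's mean fitness has changed by no more
    than `tol` over the last `window` generations. `history` holds one
    mean-fitness value per generation, newest last."""
    if len(history) < window:
        return False
    recent = history[-window:]
    return max(recent) - min(recent) <= tol

# Still improving -> keep searching; flat for 10+ generations -> stop.
print(fitness_saturated([1.0, 0.5, 0.25]))
print(fitness_saturated([0.5 - g * 1e-9 for g in range(20)]))
```

Checking the whole population's fitness, rather than only repetition of the single fittest chromosome, guards against stopping while the rest of the population is still moving.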


2019 ◽  
Vol 9 (1) ◽  
pp. 176 ◽  
Author(s):  
Catalina-Lucia Cocianu ◽  
Alexandru Stan

The work reported in this paper aims at the development of evolutionary algorithms to register images for signature recognition purposes. We propose and develop several registration methods in order to obtain accurate and fast algorithms. First, we introduce two variants of the firefly method that prove to have excellent accuracy and fair run times. To speed up the computation, we propose two variants of the Accelerated Particle Swarm Optimization (APSO) method. The resulting algorithms are significantly faster than the firefly-based ones, but their recognition rates are slightly lower. To find a trade-off between recognition rate and computational complexity, we developed a hybrid method that combines the ability of auto-adaptive Evolution Strategies (ES) to discover a global optimum solution with the strong, quick convergence of APSO. The accuracy and efficiency of the resulting algorithms have been demonstrated experimentally through a long series of tests on various pairs of signature images. A comparative analysis of the quality of the proposed methods, together with conclusions and suggestions for further development, is provided in the final part of the paper.
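The abstract does not specify which APSO variants are used; the sketch below shows the basic accelerated (velocity-free) PSO step that such variants build on, where each point moves toward the current global best plus a small random kick. Parameter values are illustrative assumptions:

```python
import random

def apso_step(positions, gbest, alpha=0.2, beta=0.5, rng=None):
    """One Accelerated PSO step: no velocity state is kept; each point
    moves a fraction `beta` toward the global best, plus a uniform
    perturbation of scale `alpha` that maintains exploration."""
    rng = rng or random.Random()
    return [[x + beta * (g - x) + alpha * (rng.random() - 0.5)
             for x, g in zip(p, gbest)]
            for p in positions]

# Repeatedly stepping toward a fixed best point pulls the swarm in.
rng = random.Random(0)
pts = [[4.0, 4.0], [-3.0, 2.0]]
for _ in range(50):
    pts = apso_step(pts, [0.0, 0.0], rng=rng)
print(pts)  # both points end up close to the origin
```

Dropping the velocity term is what makes APSO cheap per iteration, which matches the speed-versus-accuracy trade-off the paper reports against the firefly variants.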


Author(s):  
Craig M. Shakarji ◽  
Vijay Srinivasan

This paper addresses some important theoretical issues in constrained least-squares fitting of planes and parallel planes to a set of points. In particular, it addresses the convexity of the objective function and the combinatorial characterizations of the optimality conditions. These problems arise in establishing planar datums and systems of planar datums in digital manufacturing. It is shown that even when the input points are in general position: (1) a primary planar datum can contact 1, 2, or 3 input points; (2) a secondary planar datum can contact 1 or 2 input points; and (3) two parallel planes can each contact 1, 2, or 3 input points, subject to some constraints on these combinatorial counts. In addition, it is shown that the objective functions are convex over the domains of interest. The optimality conditions and the convexity of the objective functions proved in this paper enable one to verify whether a given solution is feasible, and to design efficient algorithms to find the global optimum solution.
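As background for the constrained problem the paper studies, the unconstrained orthogonal least-squares plane fit has a closed form, which the sketch below computes; the paper's datum planes add one-sided contact constraints on top of this baseline:

```python
import numpy as np

def fit_plane(points):
    """Unconstrained orthogonal least-squares plane through a point
    cloud: the plane passes through the centroid, and its normal is the
    right singular vector of the centred data with the smallest
    singular value (total least squares)."""
    pts = np.asarray(points, dtype=float)
    centroid = pts.mean(axis=0)
    _, _, vt = np.linalg.svd(pts - centroid)
    return centroid, vt[-1]  # last row of vt = smallest singular vector

# Four coplanar points in the z = 0 plane recover the z-axis normal.
c, n = fit_plane([[0, 0, 0], [1, 0, 0], [0, 1, 0], [1, 1, 0]])
print(c, n)
```

For the constrained datum problem, the fitted plane must in addition lie on one side of the points, which is what produces the 1-, 2-, or 3-point contact cases the paper characterizes.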


Author(s):  
Lakshmi K. ◽  
Karthikeyani Visalakshi N. ◽  
Shanthi S. ◽  
Parvathavarthini S.

Data mining techniques are useful for discovering interesting knowledge from large collections of data objects. Clustering is a data mining technique for knowledge discovery; it is an unsupervised learning method that analyses data objects without knowing their class labels. The k-prototype algorithm is the most widely used partitional clustering algorithm for data objects with mixed numeric and categorical attributes. Because it selects its initial prototypes randomly, it tends to return a local optimum solution. Recently, a number of optimization algorithms have been introduced to obtain the global optimum solution. The Crow Search algorithm is one of the recently developed population-based meta-heuristic optimization algorithms; it is based on the intelligent behaviour of crows. In this paper, the k-prototype clustering algorithm is integrated with the Crow Search optimization algorithm to produce the global optimum solution.
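The mixed numeric/categorical dissimilarity at the heart of k-prototypes combines squared Euclidean distance on numeric attributes with a weighted count of categorical mismatches. The sketch below shows that measure; the weight `gamma` and the example data are illustrative assumptions (the Crow Search integration itself is not reproduced here):

```python
def kproto_distance(x, prototype, categorical_idx, gamma=1.0):
    """k-prototypes dissimilarity: squared Euclidean distance over the
    numeric attributes plus `gamma` times the number of categorical
    mismatches. `categorical_idx` is the set of categorical positions."""
    num = sum((a - b) ** 2
              for i, (a, b) in enumerate(zip(x, prototype))
              if i not in categorical_idx)
    cat = sum(1 for i in categorical_idx if x[i] != prototype[i])
    return num + gamma * cat

d = kproto_distance([1.0, "red", 3.0], [2.0, "blue", 3.0], {1})
print(d)  # -> 2.0: squared numeric gap of 1.0 plus one mismatch
```

In the paper's setup, a metaheuristic such as Crow Search would search over prototype placements so that the total of these distances is minimised globally rather than settling for the local optimum a random initialisation yields.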


2016 ◽  
pp. 450-475
Author(s):  
Dipti Singh ◽  
Kusum Deep

Due to their wide applicability and easy implementation, Genetic Algorithms (GAs) are often preferred over other techniques for solving optimization problems. When a local search (LS) is included in a Genetic Algorithm, the result is known as a Memetic Algorithm. In this chapter, a new variant of a single-meme Memetic Algorithm is proposed to improve the efficiency of GA. Though GAs are efficient at finding the global optimum solution of nonlinear optimization problems, they usually converge slowly and sometimes suffer premature convergence. LS algorithms, on the other hand, are fast but are poor global searchers. To exploit the good qualities of both techniques, they are combined so that the maximum benefit of each approach is reaped: the population of individuals evolves using the GA, and LS is then applied to obtain the optimal solution. To validate these claims, the method is tested on five benchmark problems of dimensions 10, 30, and 50, and a comparison between GA and MA is made.
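The GA-then-LS structure described above can be sketched in a few lines. The operators below (tournament selection, blend crossover, Gaussian mutation, hill-climbing refinement) and all parameter values are illustrative assumptions, not the chapter's specific meme:

```python
import random

def local_search(f, x, step=0.1, iters=50, rng=None):
    """Simple hill climbing used as the meme: keep random perturbations
    of the current point whenever they improve the objective."""
    rng = rng or random.Random()
    best, bc = x[:], f(x)
    for _ in range(iters):
        cand = [v + rng.uniform(-step, step) for v in best]
        c = f(cand)
        if c < bc:
            best, bc = cand, c
    return best, bc

def memetic(f, dim=2, pop=20, gens=40, seed=1):
    """Single-meme memetic sketch: a plain GA evolves the population,
    then local search polishes the best individual it produced."""
    rng = random.Random(seed)
    P = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(pop)]
    for _ in range(gens):
        nxt = []
        for _ in range(pop):
            a = min(rng.sample(P, 3), key=f)   # tournament selection
            b = min(rng.sample(P, 3), key=f)
            child = [u + rng.random() * (v - u) for u, v in zip(a, b)]
            if rng.random() < 0.2:             # occasional mutation
                j = rng.randrange(dim)
                child[j] += rng.gauss(0, 0.5)
            nxt.append(child)
        P = nxt
    return local_search(f, min(P, key=f), rng=rng)

x, cost = memetic(lambda v: sum(t * t for t in v))
print(round(cost, 6))
```

The division of labour matches the chapter's argument: the GA supplies the global exploration that LS lacks, and LS supplies the fast final convergence that the GA lacks.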


2015 ◽  
Vol 1115 ◽  
pp. 573-577
Author(s):  
Azmi Hassan ◽  
Muhammad Ridwan Andi Purnomo ◽  
Putri Dwi Annisa

This paper presents a comparative study of clustering using Artificial Intelligence (AI) techniques. Three methods are compared: two pure methods, the Self-Organising Map (SOM), a branch of Artificial Neural Networks (ANN), and the Genetic Algorithm (GA), and one hybrid of the two, called GA-based SOM. SOM is one of the most popular methods for cluster analysis; it groups objects based on the nearest distance between each object and updateable cluster centres. However, SOM has disadvantages: solution quality depends on the randomly generated initial cluster centres, and the centre-update algorithm is based only on a delta value, without considering the search direction. Clustering can be modelled as an optimisation problem whose objective is to minimise the total distance of all data points to their cluster centres, so GA can also be applied to clustering. The advantage of GA is that it searches from multiple points and moves stochastically from one phase to the next, so its chance of finding the global optimum solution is higher; nevertheless, GA may still find only a near-optimum solution. The advantage of SOM is its smooth iterative procedure for improving existing cluster centres, so hybridising GA and SOM is believed to provide better solutions. In this study, two data sets are used to test the performance of the three techniques. The study shows that when the solution domain is very wide, SOM and GA-based SOM perform better than GA, while when the solution domain is narrower, GA performs better.
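The delta-value centre update the abstract refers to can be sketched as the standard winner-take-all step: move the nearest centre a fraction of the way toward the sample. The learning rate and example data are illustrative assumptions, and a full SOM's neighbourhood function is omitted for brevity:

```python
def som_cluster_step(centres, x, lr=0.5):
    """One SOM-style clustering update: find the centre nearest to
    sample x and move it a fraction `lr` of the way toward x."""
    def d2(c):
        return sum((a - b) ** 2 for a, b in zip(c, x))
    k = min(range(len(centres)), key=lambda i: d2(centres[i]))
    centres[k] = [c + lr * (a - c) for c, a in zip(centres[k], x)]
    return centres

centres = som_cluster_step([[0.0, 0.0], [10.0, 10.0]], [2.0, 0.0])
print(centres[0])  # -> [1.0, 0.0]: the nearest centre moved halfway to x
```

Each update only shrinks the winner's distance to the current sample, with no view of the overall objective, which is why the paper pairs it with GA's multi-point, stochastic search in the hybrid.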

