New Evolutionary-Based Techniques for Image Registration

2019 ◽  
Vol 9 (1) ◽  
pp. 176 ◽  
Author(s):  
Catalina-Lucia Cocianu ◽  
Alexandru Stan

The work reported in this paper aims to develop evolutionary algorithms that register images for signature recognition. We propose several registration methods designed to be both accurate and fast. First, we introduce two variants of the firefly method that prove to have excellent accuracy and fair run times. To speed up the computation, we propose two variants of the Accelerated Particle Swarm Optimization (APSO) method. The resulting algorithms are significantly faster than the firefly-based ones, but their recognition rates are slightly lower. To find a trade-off between recognition rate and computational complexity, we developed a hybrid method that combines the ability of auto-adaptive Evolution Strategies (ES) to discover a global optimum with the rapid convergence of APSO. The accuracy and efficiency of the resulting algorithms are demonstrated experimentally through a long series of tests on various pairs of signature images. A comparative analysis of the quality of the proposed methods, together with conclusions and suggestions for further development, is provided in the final part of the paper.
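
The APSO scheme the hybrid builds on can be sketched in a few lines. The following is a minimal, generic APSO minimizer applied to a toy sphere objective; all parameter values and names are illustrative, not the paper's settings:

```python
import random

def apso_minimize(f, dim, bounds, n_particles=20, iters=200,
                  alpha=0.5, beta=0.7, gamma=0.97, seed=0):
    """Accelerated PSO: each particle moves toward the global best
    plus a decaying random perturbation (no per-particle velocity)."""
    rng = random.Random(seed)
    lo, hi = bounds
    xs = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    best = min(xs, key=f)[:]
    for t in range(iters):
        a = alpha * gamma ** t           # shrink the random step over time
        for x in xs:
            for d in range(dim):
                x[d] = ((1 - beta) * x[d] + beta * best[d]
                        + a * rng.gauss(0, 1))
                x[d] = min(max(x[d], lo), hi)
            if f(x) < f(best):
                best = x[:]
    return best

# toy objective: sphere function, minimum at the origin
sphere = lambda x: sum(v * v for v in x)
sol = apso_minimize(sphere, dim=3, bounds=(-5, 5))
```

Dropping the velocity term is what makes APSO faster per iteration than standard PSO, at some cost in search diversity, which matches the speed/accuracy trade-off the abstract reports.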

2015 ◽  
Vol 713-715 ◽  
pp. 1491-1494 ◽  
Author(s):  
Zhi Qiang Gao ◽  
Li Xia Liu ◽  
Wei Wei Kong ◽  
Xiao Hong Wang

A novel composite framework of Cuckoo Search (CS) and Particle Swarm Optimization (PSO), called CS-PSO, is proposed in this paper. In CS-PSO, random initialization is replaced by a chaotic system, and the cuckoos share the optima in a global-best solution pool with the particles of PSO to improve parallel cooperation and social interaction. Furthermore, the Cloud Model, known for transforming qualitative concepts into sets of quantitative numerical values, is adopted to exploit the neighborhoods of the local solutions drawn from the global-best solution pool. Benchmark test results show that CS-PSO converges to the global optimum rapidly and accurately compared with other algorithms, especially on high-dimensional problems.
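
Chaotic initialization of the kind substituted here for uniform random initialization is commonly built from a logistic map. A minimal sketch (the map and its parameters are an assumption for illustration, not necessarily the authors' exact choice):

```python
def chaotic_init(n_agents, dim, lo, hi, z0=0.7, mu=4.0):
    """Logistic-map chaotic sequence mapped into the search box,
    replacing uniform random initialization."""
    z = z0
    pop = []
    for _ in range(n_agents):
        agent = []
        for _ in range(dim):
            z = mu * z * (1 - z)          # logistic map, chaotic for mu = 4
            agent.append(lo + z * (hi - lo))
        pop.append(agent)
    return pop

pop = chaotic_init(5, 2, -10.0, 10.0)
```

The chaotic sequence is deterministic but non-repeating and spreads points over the box more evenly than many pseudo-random draws, which is the usual motivation for chaotic initialization.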


2021 ◽  
Vol 36 (1) ◽  
pp. 35-40
Author(s):  
Shanshan Tu ◽  
Obaid Rehman ◽  
Sadaqat Rehman ◽  
Shafi Khan ◽  
Muhammad Waqas ◽  
...  

The particle swarm optimizer is a search-based stochastic technique whose weakness is being trapped in local optima. To trade off between local and global search and to avoid premature convergence in PSO, a new dynamic quantum-based particle swarm optimization (DQPSO) method is proposed in this work. In the proposed method, a beta probability distribution is used to mutate each particle with the global best position of the swarm. This ensures that particles can escape from local optima and reach the global optimum more easily. In addition, to enhance the global search capability of the proposed method, a dynamic update formula is proposed that keeps a good balance between local and global search. To evaluate the merit and efficiency of the proposed DQPSO method, it has been tested on well-known mathematical test functions and on a standard benchmark problem, Loney's solenoid design.
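
One plausible reading of "mutate the particle with the global best using a beta distribution" is a convex mix with Beta-distributed weights; the sketch below is illustrative, not the paper's exact operator, and the shape parameters are assumptions:

```python
import random

def beta_mutate(position, gbest, a=2.0, b=2.0, seed=None):
    """Mutate a particle by mixing it with the global best using
    Beta-distributed weights: a large draw pulls the coordinate
    strongly toward gbest, which helps escape local optima."""
    rng = random.Random(seed)
    mutated = []
    for x, g in zip(position, gbest):
        w = rng.betavariate(a, b)         # w in (0, 1)
        mutated.append((1 - w) * x + w * g)
    return mutated

child = beta_mutate([4.0, -3.0], [0.0, 0.0], seed=1)
```

Because each coordinate stays on the segment between the particle and the global best, the mutation is bounded yet stochastic, balancing local refinement against a jump toward the swarm's best knowledge.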


2014 ◽  
Vol 543-547 ◽  
pp. 1822-1826 ◽  
Author(s):  
Yi Ge Xue ◽  
Hui Wen Deng

The cuckoo search (CS) algorithm is a very efficient swarm optimization algorithm. Based on CS, a cuckoo search algorithm with dynamic grouping to adjust the flight scale (DGCS) is proposed: all cuckoos are divided into three groups according to each individual's fitness and the average fitness of the population, and a different flight scale is then adopted dynamically for each group. Simulation experiments show that DGCS quickly converges to the global optimum and has better optimization performance.
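
The grouping step can be sketched as below; the two cut-off values derived from the best and average fitness are assumptions for illustration, as the abstract does not specify the thresholds:

```python
def group_by_fitness(population, fitness):
    """Split cuckoos into three groups around the population's best
    and average fitness (minimization): good, middle, poor. Each
    group can then use a different Levy-flight scale."""
    avg = sum(fitness) / len(fitness)
    best = min(fitness)
    mid_cut = (best + avg) / 2
    good, middle, poor = [], [], []
    for ind, fit in zip(population, fitness):
        if fit <= mid_cut:
            good.append(ind)
        elif fit <= avg:
            middle.append(ind)
        else:
            poor.append(ind)
    return good, middle, poor

g, m, p = group_by_fitness([[0], [1], [2], [3]], [0.1, 0.4, 1.0, 3.0])
```

A typical design choice is small flight scales for the good group (exploitation) and large scales for the poor group (exploration), re-evaluated every generation as the fitness statistics shift.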


2016 ◽  
pp. 450-475
Author(s):  
Dipti Singh ◽  
Kusum Deep

Due to their wide applicability and easy implementation, genetic algorithms (GAs) are preferred over other techniques for solving many optimization problems. When a local search (LS) is included in a genetic algorithm, the result is known as a memetic algorithm. In this chapter, a new variant of a single-meme memetic algorithm is proposed to improve the efficiency of GA. Although GAs are efficient at finding the global optimum of nonlinear optimization problems, they usually converge slowly and sometimes suffer premature convergence. LS algorithms, on the other hand, are fast but poor global searchers. To exploit the good qualities of both techniques, they are combined so that the maximum benefits of both approaches are reaped: the population of individuals evolves using the GA, and LS is then applied to obtain the optimal solution. To validate our claims, the method is tested on five benchmark problems of dimension 10, 30 and 50, and a comparison between GA and MA is made.
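
A minimal memetic loop in this spirit, with a hill climber as the single meme, can be sketched as follows. The operators and parameters are illustrative, and LS is applied once to the final best individual rather than inside every generation as richer memetic schemes do:

```python
import random

def local_search(x, f, step=0.1, iters=50, rng=None):
    """Simple stochastic hill climber used as the meme."""
    rng = rng or random.Random(0)
    for _ in range(iters):
        cand = [v + rng.uniform(-step, step) for v in x]
        if f(cand) < f(x):
            x = cand
    return x

def memetic_ga(f, dim, lo, hi, pop_size=20, gens=40, seed=0):
    """GA evolves the population; local search refines the best individual."""
    rng = random.Random(seed)
    pop = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=f)                   # minimization
        elite = pop[:pop_size // 2]       # truncation selection
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = rng.sample(elite, 2)
            # arithmetic crossover plus small Gaussian mutation
            children.append([(u + v) / 2 + rng.gauss(0, 0.1)
                             for u, v in zip(a, b)])
        pop = elite + children
    best = min(pop, key=f)
    return local_search(best, f, rng=rng)  # apply the meme

sphere = lambda x: sum(v * v for v in x)
sol = memetic_ga(sphere, dim=2, lo=-5, hi=5)
```

The division of labor is exactly the one the chapter argues for: the GA supplies global coverage, and the fast but myopic LS polishes the solution the GA found.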


Author(s):  
Masao Arakawa ◽  
Tomoyuki Miyashita ◽  
Hiroshi Ishikawa

When developing a new product, the response surface of an objective function is not always single-peaked; it is often multi-peaked. In that case, designers would like to have not only the global optimum solution but also as many local optima and/or quasi-optimal solutions as possible, so that they can select one of them in light of conditions not taken into account prior to optimization. Although this information is quite useful, it is not easy to obtain in a single optimization run. In this study, we propose a screening of the fitness function in genetic algorithms (GAs) that changes the fitness function during the search; the GA therefore needs higher flexibility in searching. Genetic Range Genetic Algorithms maintain a number of search ranges within a single generation, just as there are a number of species in the wild. They can therefore cover both a global search range and local search ranges with different fitness functions. In this paper, we demonstrate the effectiveness of the proposed method on simple benchmark test problems.
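
As a toy illustration of why separate search ranges preserve multiple optima (this is only a sketch of the underlying idea, not the authors' Genetic Range GA):

```python
import random

def multi_range_search(f, ranges, iters=300, seed=0):
    """Run an independent stochastic hill climb inside each range and
    keep each range's best, so several local optima survive instead
    of only the global one."""
    rng = random.Random(seed)
    results = []
    for lo, hi in ranges:
        x = rng.uniform(lo, hi)
        for _ in range(iters):
            cand = min(max(x + rng.uniform(-0.1, 0.1), lo), hi)
            if f(cand) < f(x):
                x = cand
        results.append(x)
    return results

# two-basin objective: minima near x = -1 and x = +1
f = lambda x: (x * x - 1.0) ** 2
optima = multi_range_search(f, [(-2.0, 0.0), (0.0, 2.0)])
```

A single unrestricted search would report only one of the two minima; partitioning the domain returns both, which is the information a designer wants when other constraints may later rule one of them out.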


2013 ◽  
Vol 421 ◽  
pp. 507-511 ◽  
Author(s):  
Nurezayana Zainal ◽  
Azlan Mohd Zain ◽  
Nor Haizan Mohamed Radzi ◽  
Amirmudin Udin

The Glowworm Swarm Optimization (GSO) algorithm is a derivative-free meta-heuristic that mimics the glow behavior of glowworms and can efficiently capture all the maxima of a multimodal function. Nevertheless, it has several weaknesses in locating the global optimum, for instance low calculation accuracy, easily falling into local optima, a modest success rate, and slow convergence. This paper reviews this new swarm-intelligence method for solving optimization problems. Recently, the GSO algorithm has been used to find solutions of multimodal function optimization problems in various fields of industry, such as science, engineering, networking and robotics. From the review we conclude that the basic GSO algorithm, GSO with modifications or improvements, and GSO with hybridization have all been considered by previous researchers to solve optimization problems. Based on the literature, however, many researchers have applied the basic GSO algorithm rather than the variants.
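
The luciferin update at the heart of GSO is a decay-plus-reinforcement rule: each glowworm's glow fades at rate rho and is replenished in proportion to the fitness of its current position. The parameter values below are typical defaults from the GSO literature, not prescriptions:

```python
def update_luciferin(luciferin, fitness, rho=0.4, gamma=0.6):
    """GSO luciferin update: l_i <- (1 - rho) * l_i + gamma * J(x_i),
    so brighter positions attract neighbors in the movement phase."""
    return [(1 - rho) * l + gamma * f for l, f in zip(luciferin, fitness)]

lucif = update_luciferin([5.0, 5.0], [1.0, 3.0])
```

Two glowworms starting with equal glow diverge after one update according to their positions' fitness, which is what lets separate subgroups lock onto separate peaks of a multimodal function.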


2014 ◽  
Vol 989-994 ◽  
pp. 2301-2305 ◽  
Author(s):  
Zi Chao Yan ◽  
Yang Shen Luo

This paper aims at solving problems arising in the optimization process of Particle Swarm Optimization (PSO), which can reduce population diversity, cause the algorithm to converge too early, and so on. A new mutable simulated-annealing particle swarm optimization is proposed, based on the combination of a simulated annealing mechanism and mutation. The new algorithm substitutes the Metropolis criterion of the simulated annealing mechanism for the mutagenic factors in the mutation process, which both preserves the diversity of the particle swarm and improves the quality of the swarm, so that the algorithm converges to the global optimum. Simulation results show that this hybrid algorithm maintains the simplicity of particle swarm optimization, improves its global optimization capability, accelerates convergence, and enhances precision.
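
The Metropolis criterion used as the acceptance rule can be written directly; the temperature schedule is omitted here and the numbers are illustrative:

```python
import math
import random

def metropolis_accept(f_old, f_new, temperature, rng=None):
    """Metropolis criterion (minimization): always accept an improvement;
    accept a worse move with probability exp(-delta / T). High T keeps
    swarm diversity early; cooling tightens acceptance later."""
    rng = rng or random.Random(0)
    if f_new <= f_old:
        return True
    return rng.random() < math.exp(-(f_new - f_old) / temperature)

# at high temperature, an uphill move is usually accepted
accepted = [metropolis_accept(1.0, 1.5, 10.0, random.Random(s))
            for s in range(100)]
```

Gating mutations through this rule is what lets the hybrid accept occasional worse positions, preserving diversity without abandoning convergence as the temperature falls.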


2021 ◽  
Vol 2021 ◽  
pp. 1-16
Author(s):  
Xiang Yu ◽  
Yu Qiao

Comprehensive learning particle swarm optimization (CLPSO) and enhanced CLPSO (ECLPSO) are two metaheuristics from the literature for global optimization. ECLPSO significantly improves the exploitation and convergence performance of CLPSO through perturbation-based exploitation and adaptive learning probabilities. However, ECLPSO still cannot locate the global optimum or find a near-optimum solution for a number of problems. In this paper, we study further improving the exploration performance of ECLPSO. We propose to assign an independent inertia weight and an independent acceleration coefficient to each dimension of the search space, as well as an independent learning probability for each particle on each dimension. As in ECLPSO, a normative interval bounded by the minimum and maximum personal best positions is determined for each dimension in each generation. The dimension-independent maximum velocities, inertia weights, acceleration coefficients, and learning probabilities are adaptively updated based on the dimensional normative intervals in order to facilitate exploration, exploitation, and convergence, particularly exploration. Our proposed metaheuristic, called adaptive CLPSO (ACLPSO), is evaluated on various benchmark functions. Experimental results demonstrate that the dimension-independent, adaptive maximum velocities, inertia weights, acceleration coefficients, and learning probabilities significantly improve ECLPSO's exploration performance, and that ACLPSO derives the global optimum or a near-optimum solution on all the benchmark functions in all runs with appropriately set parameters.
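
The per-dimension normative interval is straightforward to compute; a minimal sketch follows (how the intervals then drive the adaptive inertia weights, acceleration coefficients, and maximum velocities is the paper's contribution and is not reproduced here):

```python
def normative_intervals(pbests):
    """Per-dimension normative interval: the [min, max] span of all
    personal best positions on each dimension, recomputed each
    generation in ACLPSO-style schemes."""
    dims = len(pbests[0])
    return [(min(p[d] for p in pbests), max(p[d] for p in pbests))
            for d in range(dims)]

intervals = normative_intervals([[1.0, -2.0], [3.0, 0.5], [2.0, -1.0]])
```

A wide interval on a dimension signals that the swarm has not yet agreed there, so that dimension can be given more exploratory parameter settings than an already-converged one.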


2006 ◽  
Vol 12 (1) ◽  
pp. 89-115 ◽  
Author(s):  
Hideaki Suzuki ◽  
Hidefumi Sawai ◽  
Wojciech Piaseczny

A chemical genetic algorithm (CGA) in which several types of molecules (information units) react with each other in a cell is proposed. Not only the information in DNA but also the smaller molecules responsible for the transcription and translation of DNA into amino acids are adaptively changed during evolution, which optimizes the fundamental mapping from binary substrings in DNA (genotype) to real values for a parameter set (phenotype). Through the struggle between cells containing a DNA unit and small molecular units, the code (DNA) and the interpreter (the small molecular units) coevolve, and a specific output function, from which a cell's fitness is evaluated, is optimized. To demonstrate the effectiveness of the CGA, it is applied to a set of variable-separable and variable-inseparable problems, and it is shown that the CGA can robustly solve a wide range of optimization problems regardless of their fitness characteristics. To confirm the optimization of the genotype-to-phenotype mapping by the CGA, we also conduct analytical experiments on some problems while observing the basin size of the global optimum solution in the binary genotype space. The results show that the CGA effectively enlarges this basin, makes it easier for evolution to find a path to the global optimum, and enhances the GA's evolvability during evolution.
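
For contrast with the CGA's evolvable mapping, a conventional fixed genotype-to-phenotype decode looks like this; the interval endpoints stand in for the part a CGA would adapt during evolution:

```python
def decode(bits, lo, hi):
    """Map a binary substring (genotype) to a real value (phenotype)
    in [lo, hi] by standard fixed-point binary decoding."""
    value = int("".join(map(str, bits)), 2)
    return lo + value * (hi - lo) / (2 ** len(bits) - 1)

x = decode([1, 0, 1, 1], -1.0, 1.0)   # 11 of 15 steps across [-1, 1]
```

In an ordinary GA this decode is frozen, so the basin of attraction of the global optimum in genotype space is fixed; the CGA's point is that coevolving the interpreter can reshape and enlarge that basin.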

