A Chaotic Disturbance Wolf Pack Algorithm for Solving Ultrahigh-Dimensional Complex Functions

Complexity ◽  
2021 ◽  
Vol 2021 ◽  
pp. 1-15
Author(s):  
Qiming Zhu ◽  
Husheng Wu ◽  
Na Li ◽  
Jinqiang Hu

The optimization of high-dimensional functions is an important problem in both science and engineering. The wolf pack algorithm is a technique often used for computing the global optimum of a multivariable function. In this paper, we develop a new wolf pack algorithm that can accurately compute the optimal value of a high-dimensional function. First, chaotic opposite initialization is designed to improve the quality of the initial solutions. Second, a disturbance factor is added to the scouting process to enhance the searching ability of the wolves, and an adaptive step length is designed to strengthen the global search and effectively prevent the wolves from falling into local optima. A set of standard test functions is selected to test the performance of the proposed algorithm, and the results are compared with those of other algorithms, including on high-dimensional and ultrahigh-dimensional functions (500 and 1000 dimensions). The experimental results show that the proposed algorithm offers good global convergence, high calculation accuracy, strong robustness, and excellent performance on high-dimensional functions.
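The chaotic opposite initialization step can be sketched as follows; the logistic map, the 0.1-0.9 seeding range, and the keep-best-half selection are illustrative assumptions, not necessarily the paper's exact design:

```python
import random

def chaotic_opposite_init(n_wolves, dim, lb, ub, fitness):
    """Chaotic opposition-based initialization: generate wolves with a
    logistic chaotic map, add each wolf's opposite point, keep the best."""
    pool = []
    z = random.uniform(0.1, 0.9)  # chaotic seed away from the map's fixed points
    for _ in range(n_wolves):
        wolf = []
        for _ in range(dim):
            z = 4.0 * z * (1.0 - z)          # logistic map at r = 4 (fully chaotic)
            wolf.append(lb + z * (ub - lb))  # map the chaos value into [lb, ub]
        pool.append(wolf)
        pool.append([lb + ub - x for x in wolf])  # opposition-based point
    pool.sort(key=fitness)  # minimization: best candidates first
    return pool[:n_wolves]

# usage: initialize 10 wolves for a 5-dimensional sphere function
random.seed(0)
sphere = lambda w: sum(x * x for x in w)
wolves = chaotic_opposite_init(10, 5, -10.0, 10.0, sphere)
```

Evaluating both each chaotic wolf and its opposite doubles the candidate pool at no extra sampling cost, which is what raises the quality of the surviving initial pack.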

2019 ◽  
Vol 2019 ◽  
pp. 1-11
Author(s):  
Jun Qin ◽  
ChuTing Wang ◽  
GuiHe Qin

Multilevel thresholding finds the thresholds that segment an image by grey level; usually, the thresholds are chosen so that some indicator function of the segmented image is optimized. To improve computational efficiency, we present an optimization method for multilevel thresholding. First, the solution space is divided into subspaces. Second, the subspaces are searched to obtain their current local optimal values. Third, the subspaces with worse current optimal values are eliminated, and the next round of elimination is applied to the remainder. Elimination is repeated until only one subspace is left, and its optimal value is taken as the global optimum. In principle, any random search algorithm can be used to find the local optimum within a subspace, because the method is a strategy for improving search efficiency by eliminating hopeless regions as early as possible, rather than an improvement of the search algorithm itself. To verify its performance, the presented method, with particle swarm optimization (PSO) as the basic subspace search algorithm, is applied to Otsu's and Kapur's multilevel thresholding of four different kinds of digital images. Compared with plain PSO, the presented method is more efficient.
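The elimination strategy described above can be sketched as follows; the drop-the-worse-half schedule and the grid-scan stand-in for the subspace search are illustrative assumptions (the paper runs PSO inside each subspace):

```python
def subspace_elimination(subspaces, local_search, rounds_per_elim=5):
    """Eliminate-the-worst search: refine each surviving subspace's best
    value, drop the worse half, repeat until one subspace remains."""
    best = {i: float("inf") for i in range(len(subspaces))}
    alive = set(best)
    while len(alive) > 1:
        for i in alive:
            for _ in range(rounds_per_elim):
                best[i] = min(best[i], local_search(subspaces[i]))
        ranked = sorted(alive, key=lambda i: best[i])    # best value first
        alive = set(ranked[: max(1, len(ranked) // 2)])  # keep the better half
    winner = alive.pop()
    return subspaces[winner], best[winner]

# usage: locate the minimum of f(x) = (x - 3.2)^2 among four interval blocks,
# with a coarse deterministic grid scan standing in for the subspace search
f = lambda x: (x - 3.2) ** 2
blocks = [(0.0, 2.0), (2.0, 4.0), (4.0, 6.0), (6.0, 8.0)]
grid = lambda iv: min(f(iv[0] + i * (iv[1] - iv[0]) / 10) for i in range(11))
iv, val = subspace_elimination(blocks, grid)  # the block containing x = 3.2 survives
```

Because hopeless subspaces stop receiving search effort early, the total number of fitness evaluations drops without changing the inner search algorithm.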


Author(s):  
Korawit Orkphol ◽  
Wu Yang

Microblogging is a type of blog in which people express their opinions, attitudes, and feelings toward entities in short messages that are easily shared through a network of connected people. Knowing these sentiments is beneficial for decision-making, planning, visualization, and so on. Grouping similar microblogging messages can convey meaningful sentiment toward an entity, a task that can be accomplished with a simple and fast clustering algorithm, k-means. Because microblogging messages are short and noisy, they produce a sparse, high-dimensional dataset. To overcome this problem, the term frequency-inverse document frequency (tf-idf) technique is employed to select the relevant features, and singular value decomposition (SVD) is employed to reduce the dimensionality of the dataset while retaining the most relevant features. These two techniques prepare the dataset so that k-means works efficiently. Another problem comes from k-means itself: its result depends on the initial state of the centroids, and a random initial state usually causes convergence to a local optimum. To find a global optimum, artificial bee colony (ABC), a swarm intelligence algorithm, is employed to find the best initial centroids, and silhouette analysis is used to find the optimal k. After clustering into k groups, each group is scored with SentiWordNet and we analyze the sentiment polarity of each group. Our approach shows that combining these techniques (i.e., tf-idf, SVD, and ABC) can significantly improve the k-means result (by 41% over standard k-means).
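The tf-idf weighting step of the pipeline can be sketched as below; the particular smoothing variant is an assumption, and the SVD reduction and ABC seeding stages are not shown:

```python
import math
from collections import Counter

def tfidf(docs):
    """Weight each tokenized document as a tf-idf vector over the corpus
    vocabulary: term frequency within the document times (smoothed) inverse
    document frequency across the corpus."""
    vocab = sorted({t for d in docs for t in d})
    n = len(docs)
    df = {t: sum(1 for d in docs if t in d) for t in vocab}  # document frequency
    vectors = []
    for d in docs:
        counts = Counter(d)
        vectors.append([(counts[t] / len(d)) * (math.log(n / df[t]) + 1.0)
                        for t in vocab])
    return vocab, vectors

# usage: three tiny tokenized "microblog" messages
docs = [["good", "happy", "good"], ["sad", "bad"], ["good", "day"]]
vocab, vecs = tfidf(docs)
```

Terms that occur in every message get low idf weight, so the vectors emphasize exactly the words that discriminate between groups of messages.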


Algorithms ◽  
2021 ◽  
Vol 14 (2) ◽  
pp. 53
Author(s):  
Qibing Jin ◽  
Nan Lin ◽  
Yuming Zhang

K-means clustering is a popular technique in data analysis and data mining. To remedy the K-means clustering (KMC) algorithm's dependence on initialization and its tendency to converge to local minima, a chaotic adaptive artificial bee colony (CAABC) clustering algorithm is presented to optimally partition objects into K clusters. The algorithm adopts the max-min distance product method for initialization, and a new fitness function is adapted to the KMC algorithm. The iteration follows an adaptive search strategy, Fuch chaotic disturbance is added to avoid convergence to local optima, and the step length decreases linearly during the iteration. To overcome the shortcomings of the classic ABC algorithm, a simulated annealing criterion is introduced into the CAABC. Finally, the combined algorithm is compared with other stochastic heuristic algorithms on 20 standard test functions and 11 datasets. The results demonstrate that CAABC-K-means improves on some conventional clustering algorithms in both convergence speed and accuracy.
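One common reading of max-min distance seeding is sketched below; the details (using the data mean to pick the first centre, squared distances) are assumptions for illustration and not necessarily the paper's exact formula:

```python
def maxmin_init(points, k):
    """Max-min distance seeding: the first centre is the point farthest from
    the data mean; each later centre maximizes its minimum distance to the
    centres already chosen, spreading the K seeds apart."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    mean = tuple(sum(c) / len(points) for c in zip(*points))
    centres = [max(points, key=lambda p: dist2(p, mean))]
    while len(centres) < k:
        centres.append(max(points, key=lambda p: min(dist2(p, c) for c in centres)))
    return centres

# usage: two well-separated 2-D blobs, k = 2
pts = [(0, 0), (0, 1), (1, 0), (10, 10), (10, 11), (11, 10)]
seeds = maxmin_init(pts, 2)
```

Deterministic, well-spread seeds remove the run-to-run variance that random initialization introduces into KMC.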


Author(s):  
Peng Qiong ◽  
Yifan Liao ◽  
Peng Hao ◽  
Xiaonia He ◽  
Chen Hui

When the basic glowworm swarm optimization (GSO) algorithm optimizes multi-peak functions, its solution accuracy is low and its late-stage convergence is slow. To solve these problems, a fluorescent factor is introduced to adaptively adjust the step length of each glowworm, yielding an improved self-adaptive step glowworm swarm optimization (ASGSO) algorithm. In ASGSO, the behaviour of the glowworms is refined: the step size is dynamically adjusted by the fluorescent factor, so the algorithm avoids falling into local optima and improves optimization speed and accuracy. Simulation results show that the improved ASGSO searches for the global optimum more quickly and precisely.
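A minimal sketch of a fluorescence-driven adaptive step; the linear formula and the bounds `s_min` and `s_max` are assumptions, since the abstract does not give the exact rule:

```python
def adaptive_step(luciferin, luciferin_best, s_min=0.01, s_max=1.0):
    """Fluorescence-driven step control: a glowworm whose luciferin level is
    close to the current best takes a small refining step, while a dim one
    takes a large exploratory step."""
    ratio = luciferin / luciferin_best if luciferin_best > 0 else 0.0
    return s_max - (s_max - s_min) * ratio

# usage: the brightest glowworm refines, the dimmest explores
fine = adaptive_step(10.0, 10.0)   # ratio 1 -> minimum step
coarse = adaptive_step(0.0, 10.0)  # ratio 0 -> maximum step
```

Tying the step to relative brightness gives exactly the trade-off the abstract describes: broad movement early or far from peaks, fine movement near them.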


2012 ◽  
Vol 538-541 ◽  
pp. 2594-2597
Author(s):  
Ying Xu ◽  
Hon Gan Chen

The artificial fish swarm algorithm (AFSA) may be trapped in a local optimum in the later stages of evolution, and its search accuracy depends on a step length that makes it hard to balance speed against accuracy. Aimed at these defects, a novel global artificial fish swarm algorithm is proposed in this paper, in which the chaotic search of the earlier stage is modified and a differential evolution with improved chaos search is introduced to lead the artificial fish toward the global optimum. The experimental results show that the proposed algorithm is not only superior to the traditional one but also yields better results.


2016 ◽  
Vol 25 (4) ◽  
pp. 567-593 ◽  
Author(s):  
Kang Huang ◽  
Yongquan Zhou ◽  
Xiuli Wu ◽  
Qifang Luo

In this paper, a cuckoo search (CS) algorithm using an elite opposition-based strategy is proposed. The opposite solutions of the elite individuals in the population are generated by an opposition-based strategy, forming an opposite population that lies inside dynamic search boundaries; the search is then guided toward the region containing the global optimum by evaluating the current population and the opposite one simultaneously. This approach helps to balance the exploration and exploitation abilities of CS. To enhance local search, a local neighborhood search strategy is also applied. Experiments were conducted on 14 classic benchmark functions and 28 more complex functions from the IEEE CEC 2013 competition, and the results, compared with five other meta-heuristic algorithms and four improved cuckoo search algorithms, show that the proposed algorithm is better than the compared ones in both solution accuracy and convergence speed.
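The elite opposition step can be sketched with a standard opposition-based-learning formulation; the random reflection coefficient and the clamping repair are assumptions, not necessarily the paper's exact rule:

```python
import random

def elite_opposite(elites):
    """Elite opposition: the dynamic bounds are the per-dimension min/max of
    the current elite set; each elite is reflected through those bounds with
    a random coefficient, then clamped back inside them."""
    dims = list(zip(*elites))
    lo = [min(d) for d in dims]
    hi = [max(d) for d in dims]
    opposites = []
    for e in elites:
        k = random.random()  # random reflection coefficient
        o = [k * (l + h) - x for x, l, h in zip(e, lo, hi)]
        opposites.append([min(max(x, l), h) for x, l, h in zip(o, lo, hi)])
    return opposites

# usage: three elite cuckoos in 2-D
random.seed(4)
elite = [[1.0, 2.0], [3.0, 4.0], [2.0, 6.0]]
opp = elite_opposite(elite)
```

Because the bounds shrink as the elites converge, the opposite population always probes the current promising region rather than the full original search space.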


2021 ◽  
Vol 16 (2) ◽  
pp. 1-34
Author(s):  
Rediet Abebe ◽  
T.-H. Hubert Chan ◽  
Jon Kleinberg ◽  
Zhibin Liang ◽  
David Parkes ◽  
...  

A long line of work in social psychology has studied variation in people's susceptibility to persuasion, the extent to which they are willing to modify their opinions on a topic. This body of literature suggests an interesting perspective on theoretical models of opinion formation by interacting parties in a network: in addition to considering interventions that directly modify people's intrinsic opinions, it is also natural to consider interventions that modify people's susceptibility to persuasion. Motivated by this, we propose an influence optimization problem. Specifically, we adopt a popular model for social opinion dynamics, in which each agent has a fixed innate opinion and a resistance that measures the importance it places on that innate opinion; agents influence one another's opinions through an iterative process. Under certain conditions, this iterative process converges to an equilibrium opinion vector. In the unbudgeted variant of the problem, the goal is to modify the resistance of any number of agents (within some given range) so that the sum of the equilibrium opinions is minimized; in the budgeted variant, the algorithm is additionally given up front a restriction on the number of agents whose resistance may be modified. We prove that the objective function is in general non-convex, so formulating the problem as a convex program, as in an early version of this work (Abebe et al., KDD'18), might have correctness issues. We instead analyze the structure of the objective function and show that any local optimum is also a global optimum, which is somewhat surprising given that the objective function might not be convex. Furthermore, we combine the iterative process with the local search paradigm to design very efficient algorithms that can solve the unbudgeted variant of the problem optimally on large-scale graphs containing millions of nodes. Finally, we propose and evaluate experimentally a family of heuristics for the budgeted variant of the problem.
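The iterative process the abstract refers to can be sketched with a Friedkin-Johnsen-style update; the uniform averaging over neighbours is an assumption, and the paper's exact weighting may differ:

```python
def equilibrium_opinions(innate, resistance, neighbors, iters=200):
    """Iterate the opinion dynamics: each agent mixes its innate opinion,
    weighted by its resistance, with the average current opinion of its
    neighbors. With resistances in (0, 1] the update is a contraction, so
    the iteration converges to the equilibrium opinion vector."""
    z = list(innate)
    for _ in range(iters):
        z = [resistance[i] * innate[i]
             + (1.0 - resistance[i]) * sum(z[j] for j in neighbors[i]) / len(neighbors[i])
             for i in range(len(z))]
    return z

# usage: two mutually connected agents with innate opinions 0 and 1
z = equilibrium_opinions([0.0, 1.0], [0.5, 0.5], [[1], [0]])
# analytically the equilibrium here is (1/3, 2/3)
```

Running this fixed-point iteration is also the natural subroutine for a local search over the resistance values, since each candidate modification can be re-evaluated by iterating to the new equilibrium.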


2013 ◽  
Vol 11 (1) ◽  
pp. 293-308 ◽  
Author(s):  
Somayeh Karimi ◽  
Navid Mostoufi ◽  
Rahmat Sotudeh-Gharebagh

Modeling and optimization of the continuous catalytic reforming (CCR) process for naphtha was investigated. The process model is based on a network of four main reactions, which has proved quite effective in industrial applications. The inlet temperatures of the four reactors were selected as the decision variables. The honey-bee mating optimization (HBMO) algorithm and the genetic algorithm (GA) were applied to solve the optimization problem, with profit as the objective function to be maximized, and the results of the two methods were compared. Optimization of the CCR moving-bed reactors by the HBMO algorithm showed that an increase of 3.01% in profit can be reached. Comparison of the two optimizers on the naphtha reforming model showed that the HBMO is an effective, rapidly converging technique that reaches better optima than the GA with fewer objective function evaluations, and that it is less likely to get stuck in a local optimum.


2020 ◽  
Vol 6 (8) ◽  
pp. 1411-1427 ◽  
Author(s):  
Yan-Cang Li ◽  
Pei-Dong Xu

In order to find a more effective method for structural optimization, an improved wolf pack algorithm was proposed. The traditional wolf pack algorithm often falls into local optima and has low precision; therefore, an adaptive step-size search and the Lévy flight strategy were employed to overcome the premature convergence of the basic wolf pack algorithm. Firstly, a reasonable variation of the adaptive step size improved the fineness of the search and effectively accelerated convergence. Secondly, the Lévy flight search strategy was adopted to expand the search scope and improve the global search ability of the algorithm. Finally, to verify its performance, the improved wolf pack algorithm was tested in simulation experiments and on actual cases and compared with other algorithms. Experiments show that the improved wolf pack algorithm has better global optimization ability, providing a more effective solution to structural optimization problems.
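Lévy flight step lengths are commonly drawn with Mantegna's algorithm, sketched below; how the step couples to the wolves' position update is the paper's design and is not shown here:

```python
import math
import random

def levy_step(beta=1.5):
    """One Levy-distributed step via Mantegna's algorithm: the ratio of a
    scaled normal variate to a fractional power of another normal variate
    yields the heavy-tailed jump lengths characteristic of Levy flights."""
    num = math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
    den = math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2)
    sigma = (num / den) ** (1 / beta)  # scale for the numerator variate
    u = random.gauss(0.0, sigma)
    v = random.gauss(0.0, 1.0)
    return u / abs(v) ** (1 / beta)

# usage: mostly small moves with occasional long jumps
random.seed(5)
steps = [levy_step() for _ in range(2000)]
```

The occasional long jumps are what let a wolf escape a local basin, while the many short steps preserve fine local search.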


Author(s):  
K. Kamil ◽  
K.H. Chong ◽  
H. Hashim ◽  
S.A. Shaaya

Genetic algorithm is a well-known metaheuristic method for solving optimization problems that mimics the natural process of cell reproduction. Its great advantages for solving optimization problems make it popular among researchers, who improve the performance of the simple genetic algorithm and apply it in many areas. However, the genetic algorithm suffers from low diversity, which causes premature convergence in which a potential answer is trapped in a local optimum. This paper proposes a Multiple Mitosis Genetic Algorithm that improves the performance of the simple genetic algorithm by promoting high diversity of high-quality individuals through three steps: setting a multiplying factor before the crossover process, conducting multiple mitosis crossover, and introducing a mini loop in each generation. Results show that the percentage of high-quality individuals grows to 90 percent of the total population in finding the global optimum.

