A Novel Bio-Inspired Algorithm Based on Social Spiders for Improving Performance and Efficiency of Data Clustering

2018 ◽  
Vol 29 (1) ◽  
pp. 311-326 ◽  
Author(s):  
Ravi Chandran Thalamala ◽  
A. Venkata Swamy Reddy ◽  
B. Janet

Abstract Over the last decade, the collective intelligent behavior of groups of animals, birds and insects has attracted the attention of researchers. Swarm intelligence is the branch of artificial intelligence that implements intelligent systems by taking inspiration from the collective behavior of social insects and other animal societies. Many meta-heuristic algorithms based on the aggregative conduct of swarms, which emerges through complex unsupervised interactions, have been used to solve complex optimization problems. Data clustering organizes data into groups called clusters, such that each cluster contains similar data; the clusters it produces may be disjoint. Accuracy and efficiency are the important measures in data clustering. Several recent studies describe bio-inspired systems as information processing systems capable of some cognitive ability. However, existing popular bio-inspired algorithms for data clustering ignore the balance between exploration and exploitation needed to produce better clustering results. In this article, we propose a bio-inspired algorithm, namely social spider optimization (SSO), for clustering that maintains a good balance between exploration and exploitation using female and male spiders, respectively. We compare the results of the proposed SSO algorithm with K-means and with other nature-inspired algorithms such as particle swarm optimization (PSO), ant colony optimization (ACO) and improved bee colony optimization (IBCO), and find SSO to be more robust, as it produces better clustering results. Although SSO avoids getting stuck in local optima, it needs to be modified to locate the best solution in the proximity of the generated global solution. Hence, we hybridize SSO with K-means, which performs well in local searches.
We compare the proposed hybrid algorithms SSO+K-means (SSOKC), integrated SSOKC (ISSOKC) and interleaved SSOKC (ILSSOKC) with K-means+PSO (KPSO), K-means+genetic algorithm (KGA), K-means+artificial bee colony (KABC) and interleaved K-means+IBCO (IKIBCO), and obtain better clustering results. We use the sum of intra-cluster distances (SICD), average cosine similarity, accuracy and inter-cluster distance to measure and validate the performance and efficiency of the proposed clustering techniques.
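The SSOKC hybrids described above follow a common pattern: a population-based global search supplies candidate centroids, and K-means polishes the winner. Below is a minimal one-dimensional sketch of that pattern under stated assumptions: the spider update is replaced by a simple random perturbation (not the actual SSO female/male moves), and SICD is the objective named in the abstract.

```python
import random

def sicd(centroids, data):
    # Sum of intra-cluster distances: each point to its nearest centroid.
    return sum(min(abs(x - c) for c in centroids) for x in data)

def kmeans_refine(centroids, data, steps=5):
    # Classic K-means local search: assign points, then recompute means.
    cents = list(centroids)
    for _ in range(steps):
        clusters = [[] for _ in cents]
        for x in data:
            nearest = min(range(len(cents)), key=lambda j: abs(x - cents[j]))
            clusters[nearest].append(x)
        cents = [sum(c) / len(c) if c else cents[i]
                 for i, c in enumerate(clusters)]
    return cents

def hybrid_cluster(data, k=2, swarm=10, iters=50, seed=0):
    # Global phase: a swarm of candidate centroid sets improved by random
    # perturbation (a stand-in for the SSO spider moves), followed by a
    # K-means polish of the best candidate -- the SSOKC idea.
    rng = random.Random(seed)
    lo, hi = min(data), max(data)
    pop = [[rng.uniform(lo, hi) for _ in range(k)] for _ in range(swarm)]
    best = list(min(pop, key=lambda s: sicd(s, data)))
    for _ in range(iters):
        for s in pop:
            trial = [c + rng.gauss(0, 0.1 * (hi - lo)) for c in s]
            if sicd(trial, data) < sicd(s, data):
                s[:] = trial
        best = list(min(pop + [best], key=lambda s: sicd(s, data)))
    return kmeans_refine(best, data)
```

The global phase is what keeps the search out of poor local minima; the K-means pass then exploits the neighborhood of the best global solution, which is exactly the division of labor the abstract motivates.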

2019 ◽  
Vol 2 (3) ◽  
pp. 508-517
Author(s):  
Ferda Nur Arıcı ◽  
Ersin Kaya

Optimization is the process of searching for the most suitable solution to a problem within an acceptable time interval. The algorithms that solve optimization problems are called optimization algorithms. There are many optimization algorithms in the literature with different characteristics, and they can exhibit different behaviors depending on the size, characteristics and complexity of the optimization problem. In this study, six well-known population-based optimization algorithms (artificial algae algorithm - AAA, artificial bee colony algorithm - ABC, differential evolution algorithm - DE, genetic algorithm - GA, gravitational search algorithm - GSA and particle swarm optimization - PSO) were used. These six algorithms were run on the CEC’17 test functions. Based on the experimental results, the algorithms were compared and their performances were evaluated.
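A comparison of this kind can be reproduced in miniature. The sketch below is not the study's setup: a simple sphere function stands in for the CEC’17 suite, and pure random search plus a (1+1) evolution strategy stand in for the six algorithms. Only the harness structure (equal evaluation budget, averaged independent runs, per-algorithm scores) mirrors the methodology.

```python
import random

def sphere(x):
    # Simplest benchmark function: global minimum 0 at the origin.
    return sum(v * v for v in x)

def random_search(f, dim, evals, rng):
    best = float("inf")
    for _ in range(evals):
        best = min(best, f([rng.uniform(-5, 5) for _ in range(dim)]))
    return best

def one_plus_one_es(f, dim, evals, rng):
    # (1+1)-ES with a 1/5th-success-rule style step-size adaptation:
    # grow sigma on success, shrink it on failure.
    x = [rng.uniform(-5, 5) for _ in range(dim)]
    fx = f(x)
    sigma = 0.5
    for _ in range(evals - 1):
        y = [v + rng.gauss(0, sigma) for v in x]
        fy = f(y)
        if fy < fx:
            x, fx = y, fy
            sigma *= 1.1
        else:
            sigma *= 0.97
    return fx

def compare(algorithms, f, dim=5, evals=2000, runs=5, seed=1):
    # Average best-found value over independent runs, per algorithm.
    results = {}
    for name, algo in algorithms.items():
        rng = random.Random(seed)
        results[name] = sum(algo(f, dim, evals, rng) for _ in range(runs)) / runs
    return results
```

A real study would replace `sphere` with the full benchmark suite and add statistical testing over many more runs, but the same harness shape applies.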


2016 ◽  
Vol 2016 ◽  
pp. 1-10 ◽  
Author(s):  
Li Mao ◽  
Yu Mao ◽  
Changxi Zhou ◽  
Chaofeng Li ◽  
Xiao Wei ◽  
...  

The artificial bee colony (ABC) algorithm performs well at discovering optimal solutions to difficult optimization problems, but it has weak local search ability and easily plunges into local optima. In this paper, we introduce the chemotactic behavior of bacterial foraging optimization into the employed bees and adopt the particle swarm optimization principle of moving particles toward the best solutions to improve the global search ability of the onlooker bees, obtaining a hybrid artificial bee colony (HABC) algorithm. To reach a global optimal solution efficiently, we make the HABC algorithm converge rapidly in the early stages of the search and contract the search range dynamically during the late stages. Our experimental results on 16 benchmark functions of CEC 2014 show that HABC achieves significant improvements in accuracy and convergence rate compared with the standard ABC, best-so-far ABC, directed ABC, Gaussian ABC, improved ABC and memetic ABC algorithms.
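A hedged sketch of the two ingredients the abstract names: a bacterial-foraging-style chemotactic tumble for the employed bees, a PSO-style pull toward the best-so-far for the onlookers, and a search step that contracts during the late stages. Function names and parameter values are illustrative, not the paper's.

```python
import random

def chemotaxis_move(x, rng, step):
    # Bacterial-foraging tumble: a unit random direction times a step size.
    d = [rng.gauss(0, 1) for _ in x]
    norm = sum(v * v for v in d) ** 0.5 or 1.0
    return [xi + step * v / norm for xi, v in zip(x, d)]

def onlooker_move(x, best, rng, phi=0.5):
    # PSO-style pull: step each dimension toward the best-so-far solution.
    return [xi + rng.uniform(0, phi) * (bi - xi) for xi, bi in zip(x, best)]

def habc_sketch(f, dim=3, colony=10, iters=200, seed=2):
    rng = random.Random(seed)
    pop = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(colony)]
    best = list(min(pop, key=f))
    for t in range(iters):
        # Employed phase: chemotactic tumble whose step size contracts
        # over the run, as the abstract describes.
        step = 0.5 * (1 - t / iters) + 0.01
        for x in pop:
            y = chemotaxis_move(x, rng, step)
            if f(y) < f(x):
                x[:] = y
        # Onlooker phase: greedy moves biased toward the best-so-far.
        for x in pop:
            y = onlooker_move(x, best, rng)
            if f(y) < f(x):
                x[:] = y
        best = list(min(pop + [best], key=f))
    return best
```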


2018 ◽  
Vol 2018 ◽  
pp. 1-9 ◽  
Author(s):  
Amnat Panniem ◽  
Pikul Puphasuk

The Artificial Bee Colony (ABC) algorithm is one of the efficient nature-inspired optimization algorithms for solving continuous problems. It has no sensitive control parameters and has been shown to be competitive with other well-known algorithms. However, slow convergence, premature convergence and entrapment in local solutions may occur during the search. In this paper, we propose a new Modified Artificial Bee Colony (MABC) algorithm to overcome these problems. All phases of ABC are revised to improve the exploration and exploitation processes. We use a new search equation in the employed bee phase and, in the onlooker bee phase, increase the probability that onlooker bees find better positions and replace some of the worst positions with new ones. Moreover, in the scout bee phase we use the Firefly algorithm strategy to generate a new position that replaces a position which has not been updated. The algorithm's performance is tested on selected benchmark functions. Experimental results show that MABC is more effective than ABC and some other modifications of ABC.
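The scout-phase idea, replacing a stagnant position via a firefly-style move, can be sketched as below. The parameters `beta0`, `gamma` and `alpha` follow common Firefly-algorithm conventions and are assumptions for illustration, not values from the paper.

```python
import math
import random

def firefly_move(x, brighter, rng, beta0=1.0, gamma=1.0, alpha=0.2):
    # Pull a solution toward a brighter (better) one: the attraction
    # decays with squared distance, and a small random term keeps
    # diversity in the population.
    r2 = sum((a - b) ** 2 for a, b in zip(x, brighter))
    beta = beta0 * math.exp(-gamma * r2)
    return [xi + beta * (bi - xi) + alpha * rng.uniform(-0.5, 0.5)
            for xi, bi in zip(x, brighter)]
```

In an MABC-style scout phase, a position that has exhausted its trial counter would be moved this way toward the current best rather than re-initialized at random, preserving some of the information gathered so far.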


Entropy ◽  
2021 ◽  
Vol 23 (12) ◽  
pp. 1637
Author(s):  
Mohammad H. Nadimi-Shahraki ◽  
Ali Fatahi ◽  
Hoda Zamani ◽  
Seyedali Mirjalili ◽  
Laith Abualigah

The moth-flame optimization (MFO) algorithm, inspired by the transverse orientation of moths toward a light source, is an effective approach for solving global optimization problems. However, MFO suffers from premature convergence, low population diversity, entrapment in local optima, and an imbalance between exploration and exploitation. In this study, therefore, an improved moth-flame optimization (I-MFO) algorithm is proposed to cope with the canonical MFO's issues by locating moths trapped in local optima by means of a memory defined for each moth. The trapped moths tend to escape from the local optima by taking advantage of the adapted wandering around search (AWAS) strategy. The efficiency of the proposed I-MFO is evaluated on the CEC 2018 benchmark functions and compared against other well-known metaheuristic algorithms. Moreover, the obtained results are statistically analyzed by the Friedman test on 30, 50 and 100 dimensions. Finally, the ability of I-MFO to find the best optimal solutions for mechanical engineering problems is evaluated on three problems from the latest test suite, CEC 2020. The experimental and statistical results demonstrate that the proposed I-MFO is significantly superior to the contender algorithms and successfully remedies the shortcomings of the canonical MFO.
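I-MFO builds on the canonical MFO flame spiral, which can be sketched as follows. This shows the baseline update only; the per-moth memory and the AWAS escape strategy from the abstract are not reproduced here.

```python
import math
import random

def spiral_update(moth, flame, rng, b=1.0):
    # Logarithmic spiral around the flame: the random parameter t
    # controls how close to the flame the moth lands.
    t = rng.uniform(-1, 1)
    return [abs(fi - mi) * math.exp(b * t) * math.cos(2 * math.pi * t) + fi
            for mi, fi in zip(moth, flame)]

def mfo_sketch(f, dim=2, moths=15, iters=150, seed=3):
    rng = random.Random(seed)
    pop = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(moths)]
    flames = sorted(pop, key=f)          # best positions found so far
    for t in range(iters):
        # The number of flames shrinks over the run, focusing the search.
        n_flames = max(1, round(moths - t * (moths - 1) / iters))
        for i, m in enumerate(pop):
            pop[i] = spiral_update(m, flames[min(i, n_flames - 1)], rng)
        flames = sorted(flames + pop, key=f)[:moths]
    return flames[0]
```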


2019 ◽  
Vol 2019 ◽  
pp. 1-20 ◽  
Author(s):  
Alkin Yurtkuran

Electromagnetic field optimization (EFO) is a relatively new physics-inspired population-based metaheuristic algorithm that simulates the behavior of electromagnets with different polarities and takes advantage of a nature-inspired ratio known as the golden ratio. In EFO, the population consists of electromagnetic particles made of electromagnets corresponding to the variables of an optimization problem and is divided into three fields: positive, negative and neutral. In each iteration, a new electromagnetic particle is generated based on the attraction-repulsion forces among these fields, where the repulsion force helps particles avoid local optima and the attraction force guides them toward the global optimum. This paper introduces an improved version of EFO called improved electromagnetic field optimization (iEFO). Distinct from EFO, iEFO has two novel modifications: a new solution generation function for the electromagnets and adaptive control of the algorithmic parameters. In addition to these major improvements, the boundary control and randomization procedures for newly generated electromagnets are modified. In the computational studies, the performance of the proposed iEFO is tested against the original EFO, existing physics-inspired algorithms, and state-of-the-art metaheuristic algorithms such as the artificial bee colony algorithm, particle swarm optimization and differential evolution. The obtained results are verified with statistical testing and reveal that iEFO outperforms EFO and the other competitor algorithms.
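The field-based generation step can be illustrated with a simplified stand-in. This is our reading of the attraction/repulsion idea with a golden-ratio weight, not the published EFO update rule: the population is ranked and split into positive, neutral and negative fields, and each coordinate of the new particle is pulled toward the positive field and pushed away from the negative one.

```python
import random

PHI = (1 + 5 ** 0.5) / 2   # the golden ratio, weighting the attraction

def new_particle(pop, fitness, rng):
    # Rank the population and split it into three fields: positive
    # (best third), neutral (middle third) and negative (worst third).
    ranked = sorted(pop, key=fitness)
    n = len(ranked)
    pos = ranked[: n // 3]
    neu = ranked[n // 3: 2 * n // 3]
    neg = ranked[2 * n // 3:]
    child = []
    for j in range(len(pop[0])):
        p = rng.choice(pos)[j]
        u = rng.choice(neu)[j]
        g = rng.choice(neg)[j]
        r = rng.random()
        # Attraction toward the positive field (golden-ratio weighted)
        # minus repulsion from the negative field -- a simplified
        # stand-in for EFO's generation function.
        child.append(u + PHI * r * (p - u) - r * (g - u))
    return child
```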


2020 ◽  
Vol 8 (1) ◽  
pp. 86-101 ◽  
Author(s):  
Vivi Nur Wijayaningrum ◽  
Novi Nur Putriwijaya

Metaheuristic algorithms are often trapped in local optimum solutions when searching for solutions. This problem often occurs in optimization cases involving high dimensions, such as data clustering. An imbalance between the exploration and exploitation processes is the cause of this condition, because the search agents cannot reach the best solution in the search space. In this study, the problem is overcome by modifying the solution update mechanism so that a search agent not only follows another randomly chosen search agent but also has the opportunity to follow the best search agent. In addition, the balance of exploration and exploitation is enhanced by updating the awareness probability of each search agent in accordance with its ability to search for solutions. The improved mechanism enables the proposed algorithm to obtain good solutions in less computational time than the Genetic Algorithm and Particle Swarm Optimization. On large datasets, the proposed algorithm is shown to provide the best solution among the compared algorithms.
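The abstract's two modifications, a chance to follow the best agent and a per-agent adaptive awareness probability, can be sketched on a crow-search-style update. All constants here (follow probability, awareness bounds, adaptation factors) are illustrative assumptions, not values from the paper.

```python
import random

def csa_sketch(f, dim=2, crows=20, iters=300, p_best=0.5, seed=4):
    rng = random.Random(seed)
    pop = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(crows)]
    mem = [list(x) for x in pop]          # each agent's best-known position
    ap = [0.1] * crows                    # per-agent awareness probability
    best = min(mem, key=f)
    for _ in range(iters):
        for i, x in enumerate(pop):
            # Modified update: follow the best agent with probability
            # p_best, otherwise a randomly chosen agent's memory.
            target = best if rng.random() < p_best else rng.choice(mem)
            if rng.random() >= ap[i]:     # unaware target: follow it
                step = rng.random() * 2.0
                pop[i] = [xi + step * (ti - xi) for xi, ti in zip(x, target)]
            else:                         # aware target: random relocation
                pop[i] = [rng.uniform(-5, 5) for _ in range(dim)]
            if f(pop[i]) < f(mem[i]):
                mem[i] = list(pop[i])
                # Success lowers awareness (more exploitation);
                # failure raises it (more exploration).
                ap[i] = max(0.05, ap[i] * 0.9)
            else:
                ap[i] = min(0.5, ap[i] * 1.1)
        best = min(mem, key=f)
    return best
```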


2021 ◽  
Vol 20 (Number 2) ◽  
pp. 213-248
Author(s):  
Narender Kumar ◽  
Dharmender Kumar

Grey wolf optimization (GWO) is a recent and popular swarm-based metaheuristic approach. It has been used in numerous fields such as numerical optimization, engineering problems and machine learning, and different variants of GWO have been developed over the last five years for solving optimization problems in diverse fields. Like other metaheuristic algorithms, GWO suffers from local optima and slow convergence, resulting in degraded performance. An adequate equilibrium between exploration and exploitation is a key factor in the success of metaheuristic algorithms, especially for optimization tasks. In this paper, a new variant of GWO, called inertia-motivated GWO (IMGWO), is proposed. The aim of IMGWO is to establish a better balance between exploration and exploitation. Traditionally, an artificial neural network (ANN) trained with backpropagation (BP) depends on its initial values and, in turn, attains poor convergence; metaheuristic approaches are a better alternative to BP. The proposed IMGWO is used to train an ANN to prove its competency in terms of prediction, and the resulting IMGWO-ANN is applied to medical diagnosis tasks. Benchmark medical datasets, including heart disease, breast cancer, hepatitis and Parkinson's disease, are used to assess the performance of IMGWO-ANN. Performance is reported in terms of mean squared error (MSE), classification accuracy, sensitivity, specificity, the area under the curve (AUC) and the receiver operating characteristic (ROC) curve. IMGWO is found to outperform three popular metaheuristic approaches: GWO, the genetic algorithm (GA) and particle swarm optimization (PSO). The results confirm the potency of IMGWO as a viable learning technique for an ANN.
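The GWO position update with an inertia term can be sketched as below. Blending a wolf's previous position into the leader-guided estimate is an illustrative reading of "inertia motivated", not the paper's exact formula; the leader-guided part follows the canonical GWO update.

```python
import random

def gwo_sketch(f, dim=2, wolves=15, iters=200, w=0.3, seed=5):
    rng = random.Random(seed)
    pop = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(wolves)]
    for t in range(iters):
        # The three best wolves (alpha, beta, delta) guide the pack.
        alpha, beta, delta = sorted(pop, key=f)[:3]
        a = 2 * (1 - t / iters)           # exploration coefficient: 2 -> 0
        for i, x in enumerate(pop):
            new = []
            for j in range(dim):
                est = 0.0
                for leader in (alpha, beta, delta):
                    A = a * (2 * rng.random() - 1)
                    C = 2 * rng.random()
                    est += leader[j] - A * abs(C * leader[j] - x[j])
                # Inertia blend: keep a fraction w of the old position
                # (the illustrative "inertia motivated" tweak).
                new.append(w * x[j] + (1 - w) * est / 3)
            pop[i] = new
    return min(pop, key=f)
```

Training an ANN with this would mean encoding the network's weights as the wolf position vector and using the training-set MSE as `f`.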


2021 ◽  
Vol 3 (1) ◽  
pp. 36-58
Author(s):  
Mustafa Danaci ◽  
Fehim Koylu ◽  
Zaid Ali Al-Sumaidaee

Modified versions of metaheuristic algorithms are presented and their performance in identifying structural dynamic systems is compared. The genetic algorithm, the biogeography-based optimization algorithm, the ant colony optimization algorithm and the artificial bee colony algorithm are heuristic algorithms that are robust and easy to implement, with simple structures. The algorithms were selected from both evolutionary and swarm algorithms to balance global and local searches. To compare their performance and investigate the applicability of the proposed algorithms to system identification, three cases are studied under different conditions concerning data availability, noise rate, and prior knowledge of the parameters. Simulation results show that the proposed algorithms produce excellent parameter estimates, even with few measurements and a high noise rate.


2021 ◽  
Vol 18 (6) ◽  
pp. 7076-7109
Author(s):  
Shuang Wang ◽  
Heming Jia ◽  
Qingxin Liu ◽  
Rong Zheng ◽  
...  

<abstract> <p>This paper introduces an improved hybrid of the Aquila Optimizer (AO) and Harris Hawks Optimization (HHO), namely IHAOHHO, to enhance searching performance on global optimization problems. In IHAOHHO, the valuable exploration and exploitation capabilities of AO and HHO are first retained, and then representative-based hunting (RH) and opposition-based learning (OBL) strategies are added to the exploration and exploitation phases to effectively improve the diversity of the search and the algorithm's ability to avoid local optima, respectively. To verify its optimization performance and practicability, the proposed algorithm is comprehensively analyzed on standard and CEC2017 benchmark functions and three engineering design problems. The experimental results show that the proposed IHAOHHO has superior global search performance and faster convergence compared to the basic AO and HHO and selected state-of-the-art meta-heuristic algorithms.</p> </abstract>
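Of the two added strategies, opposition-based learning is the simplest to illustrate: a candidate is reflected across the midpoint of the search interval and the better of the pair is kept. A minimal sketch (the RH strategy and the AO/HHO operators themselves are not reproduced):

```python
def opposite(x, lo, hi):
    # Opposition-based learning: reflect each coordinate across the
    # midpoint of the search interval [lo, hi].
    return [lo + hi - v for v in x]

def obl_improve(pop, f, lo=-5.0, hi=5.0):
    # Keep whichever of (solution, opposite) evaluates better.
    return [min(x, opposite(x, lo, hi), key=f) for x in pop]
```

Applied once per iteration, this roughly doubles the chance that at least one candidate lands near the optimum, which is why OBL is a popular diversity booster in hybrids like the one above.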


2010 ◽  
Vol 4 (1) ◽  
pp. 37-48
Author(s):  
Mozammel H.A. Khan

The Quantum-Inspired Evolutionary Algorithm (QEA) has been shown to outperform classical genetic-algorithm-based evolutionary techniques on combinatorial optimization problems such as the 0/1 knapsack problem. QEA uses a quantum-computing-inspired representation of solutions called a Q-bit individual, which consists of Q-bits. The probability amplitudes of the Q-bits are changed by application of the Q-gate operator, the classical analogue of the quantum rotation operator. The Q-gate operator is the only variation operator used in QEA; along with some problem-specific heuristics, it provides exploitation of the properties of the best solutions. In this paper, we analyze the characteristics of QEA on the 0/1 knapsack problem and show that a probability in the range 0.3 to 0.4 for the application of the Q-gate variation operator has the greatest likelihood of striking a good balance between exploration and exploitation. Experimental results agree with the analytical finding.
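The Q-bit representation and Q-gate rotation can be sketched on a toy problem. Here OneMax (maximize the number of 1 bits) replaces the knapsack problem, the rotation angle of 0.05 rad is an illustrative assumption, and `p_apply` is set inside the 0.3-0.4 range the article identifies as balancing exploration and exploitation.

```python
import math
import random

def rotate(qbit, theta):
    # Q-gate: rotate the amplitude pair (alpha, beta) by theta;
    # alpha^2 + beta^2 stays 1, so the pair remains a valid Q-bit.
    a, b = qbit
    return (a * math.cos(theta) - b * math.sin(theta),
            a * math.sin(theta) + b * math.cos(theta))

def observe(qbit, rng):
    # Measuring a Q-bit yields 1 with probability beta^2.
    return 1 if rng.random() < qbit[1] ** 2 else 0

def qea_sketch(n_bits=8, generations=60, p_apply=0.35, seed=6):
    rng = random.Random(seed)
    s2 = 1 / math.sqrt(2)
    q = [(s2, s2)] * n_bits              # uniform superposition
    best = [observe(qb, rng) for qb in q]
    for _ in range(generations):
        x = [observe(qb, rng) for qb in q]
        if sum(x) > sum(best):
            best = x
        # With probability p_apply per Q-bit, rotate its amplitudes
        # toward the corresponding bit of the best solution so far.
        q = [rotate(qb, 0.05 if bi else -0.05)
             if rng.random() < p_apply else qb
             for qb, bi in zip(q, best)]
    return best
```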

