A new hybrid algorithm based on black hole optimization and bisecting k-means for cluster analysis

Author(s):
Mohammad Eskandarzadehalamdary
Behrooz Masoumi
Omid Sojodishijani

2022, Vol. 13 (2), pp. 237-254

Author(s):
Ömer Yılmaz
Adem Alpaslan Altun
Murat Köklü

Hybrid algorithms are widely used today to increase the performance of existing algorithms. In this paper, a new hybrid algorithm called IMVOSA, based on the multi-verse optimizer (MVO) and simulated annealing (SA), is proposed. Within this model, a new method called black hole selection (BHS) is introduced to increase both exploration and exploitation. The BHS method uses the acceptance-probability feature of the SA algorithm to increase exploitation by searching the best regions found by the MVO algorithm. The proposed IMVOSA algorithm has been tested on 50 benchmark functions, and its performance has been compared with other recent and well-known metaheuristic algorithms. The results show that IMVOSA produces highly successful and competitive results.
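As a rough illustration of the exploitation step described in this abstract, the sketch below applies an SA-style acceptance rule to small perturbations around a candidate solution supplied by a global search. This is a minimal Python sketch, not the authors' implementation; the function names (`sa_accept`, `local_refine`), the cooling schedule, and the parameter values are illustrative assumptions.

```python
import math
import random

def sa_accept(candidate_cost, current_cost, temperature):
    """SA acceptance rule: always accept improvements, otherwise accept
    worse candidates with probability exp(-delta / T)."""
    if candidate_cost <= current_cost:
        return True
    delta = candidate_cost - current_cost
    return random.random() < math.exp(-delta / temperature)

def local_refine(best_solution, objective, step=0.1, iters=100, t0=1.0, cooling=0.95):
    """Illustrative exploitation phase: perturb the best region found by a
    global search (e.g., MVO) and keep moves via the SA acceptance rule."""
    current = list(best_solution)
    current_cost = objective(current)
    temperature = t0
    for _ in range(iters):
        candidate = [x + random.uniform(-step, step) for x in current]
        candidate_cost = objective(candidate)
        if sa_accept(candidate_cost, current_cost, temperature):
            current, current_cost = candidate, candidate_cost
        temperature *= cooling  # geometric cooling schedule (assumed)
    return current, current_cost

# Example: refine a point near the optimum of the sphere function.
sphere = lambda x: sum(v * v for v in x)
solution, cost = local_refine([0.8, -0.5, 0.3], sphere)
print(solution, cost)
```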


Author(s):  
Ahmed I. Taloba et al.

Clustering is the process of randomly selecting k cluster centers and grouping the data around those centers. Data clustering problems have recently received considerable research attention, and a nature-inspired optimization algorithm called Black Hole (BH) has been proposed as a solution to them. BH is a population-based metaheuristic that mimics the black hole phenomenon in the universe, where each candidate solution circling in the search space represents a single star. Although the original BH has shown improved performance on standard datasets, it lacks exploration capabilities and performs only a good local search. In this paper, a new hybrid metaheuristic based on the combination of the BH algorithm and the genetic algorithm is proposed. The genetic algorithm forms the first stage: it explores the search space and provides the initial positions for the stars. The BH algorithm then exploits the search space and refines the best solution until the termination condition is reached. The proposed hybrid approach was evaluated on nine popular benchmark functions, and the results indicate that it produces more robust outcomes than BH and the other benchmark algorithms in the study. It also showed a high convergence rate on six real datasets taken from the UCI machine learning repository, indicating good behavior of the hybrid algorithm on data clustering problems. Overall, the investigation demonstrates the suitability of the proposed hybrid algorithm for solving data clustering problems.
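To make the division of labor between the two stages concrete, the following Python sketch shows a BH-style refinement of cluster centers starting from an externally supplied population (here, random seeds standing in for the GA output). It illustrates the general BH clustering scheme only, not the paper's code; the function names, the event-horizon rule, and all parameter values are assumptions.

```python
import numpy as np

def sse(centers, data):
    """Clustering objective: sum of squared distances to the nearest center."""
    d = np.linalg.norm(data[:, None, :] - centers[None, :, :], axis=2)
    return np.sum(np.min(d, axis=1) ** 2)

def black_hole_clustering(data, k, stars, iters=100, rng=None):
    """Illustrative BH refinement. `stars` is a population of candidate
    solutions (each of shape (k, n_features)), e.g. produced by a GA."""
    rng = np.random.default_rng(rng)
    stars = np.array(stars, dtype=float)
    costs = np.array([sse(s, data) for s in stars])
    for _ in range(iters):
        bh = int(np.argmin(costs))  # best star becomes the black hole
        for i in range(len(stars)):
            if i == bh:
                continue
            # Stars drift toward the black hole.
            stars[i] += rng.random() * (stars[bh] - stars[i])
            costs[i] = sse(stars[i], data)
            # A star that improves on the black hole swaps roles with it.
            if costs[i] < costs[bh]:
                bh = i
        # Event horizon: stars that come too close are replaced by random ones.
        radius = costs[bh] / np.sum(costs)
        for i in range(len(stars)):
            if i != bh and np.linalg.norm(stars[i] - stars[bh]) < radius:
                idx = rng.choice(len(data), size=k, replace=False)
                stars[i] = data[idx]
                costs[i] = sse(stars[i], data)
    return stars[np.argmin(costs)]

# Example: refine random seeds (a stand-in for GA output) on toy 2-D data, k = 3.
rng = np.random.default_rng(0)
data = np.vstack([rng.normal(c, 0.3, size=(50, 2)) for c in ((0, 0), (3, 3), (0, 3))])
seeds = [data[rng.choice(len(data), 3, replace=False)] for _ in range(10)]
centers = black_hole_clustering(data, k=3, stars=seeds, iters=50, rng=1)
print(centers)
```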


Author(s):  
Thomas W. Shattuck ◽  
James R. Anderson ◽  
Neil W. Tindale ◽  
Peter R. Buseck

Individual particle analysis involves the study of tens of thousands of particles using automated scanning electron microscopy and elemental analysis by energy-dispersive X-ray emission spectroscopy (EDS). EDS produces large data sets that must be analyzed using multivariate statistical techniques. A complete study uses cluster analysis, discriminant analysis, and factor or principal components analysis (PCA). The three techniques are used in the study of particles sampled during the FeLine cruise to the mid-Pacific Ocean in the summer of 1990. The mid-Pacific aerosol provides information on long-range particle transport, iron deposition, sea salt ageing, and halogen chemistry.

Aerosol particle data sets pose a number of difficulties for pattern recognition using cluster analysis. There is a great disparity in the number of observations per cluster and in the range of the variables in each cluster. The variables are not normally distributed, they are subject to considerable experimental error, and many values are zero because of finite detection limits. Many of the clusters show considerable overlap because of natural variability, agglomeration, and chemical reactivity.
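As a hedged illustration of how such skewed, zero-heavy data might be preprocessed before clustering, the sketch below (assuming scikit-learn and NumPy; the synthetic data, transforms, and parameter choices are illustrative, not those used in the cruise study) applies a log transform and standardization before PCA and k-means.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Toy stand-in for per-particle elemental intensities (rows = particles,
# columns = elements); real EDS tables contain many zeros from detection limits.
rng = np.random.default_rng(0)
X = np.abs(rng.normal(size=(500, 8)))
X[rng.random(X.shape) < 0.3] = 0.0          # simulate below-detection-limit zeros

# log1p damps the heavy right tails; standardization puts elements on a
# common scale so no single variable dominates the distance metric.
X_scaled = StandardScaler().fit_transform(np.log1p(X))

# PCA for a low-dimensional view, k-means for an initial partition.
scores = PCA(n_components=3).fit_transform(X_scaled)
labels = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(X_scaled)
print(np.bincount(labels))                  # cluster sizes are typically uneven
```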


Author(s):  
Matthew L. Hall ◽  
Stephanie De Anda

Purpose: The purposes of this study were (a) to introduce “language access profiles” as a viable alternative construct to “communication mode” for describing experience with language input during early childhood for deaf and hard-of-hearing (DHH) children; (b) to describe the development of a new tool for measuring DHH children's language access profiles during infancy and toddlerhood; and (c) to evaluate the novelty, reliability, and validity of this tool.

Method: We adapted an existing retrospective parent report measure of early language experience (the Language Exposure Assessment Tool) to make it suitable for use with DHH populations. We administered the adapted instrument (DHH Language Exposure Assessment Tool [D-LEAT]) to the caregivers of 105 DHH children aged 12 years and younger. To measure convergent validity, we also administered another novel instrument: the Language Access Profile Tool. To measure test–retest reliability, half of the participants were interviewed again after 1 month. We identified groups of children with similar language access profiles by using hierarchical cluster analysis.

Results: The D-LEAT revealed DHH children's diverse experiences with access to language during infancy and toddlerhood. Cluster analysis groupings were markedly different from those derived from more traditional grouping rules (e.g., communication modes). Test–retest reliability was good, especially for the same-interviewer condition. Content, convergent, and face validity were strong.

Conclusions: To optimize DHH children's developmental potential, stakeholders who work at the individual and population levels would benefit from replacing communication mode with language access profiles. The D-LEAT is the first tool that aims to measure this novel construct. Despite limitations that future work aims to address, the present results demonstrate that the D-LEAT represents progress over the status quo.
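For readers unfamiliar with the grouping step mentioned in the Method, the following sketch (assuming SciPy and NumPy; the profile variables, their number, and the cut into four groups are hypothetical, not the study's actual measures) shows hierarchical cluster analysis of the kind used to identify language access profiles.

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

# Toy stand-in for per-child language-access measures (rows = children,
# columns = hypothetical proportions of accessible input per modality).
rng = np.random.default_rng(0)
profiles = rng.random((105, 4))

# Ward linkage on the profile vectors, then cut the tree into a small
# number of groups with similar access profiles.
tree = linkage(profiles, method="ward")
groups = fcluster(tree, t=4, criterion="maxclust")
print(np.bincount(groups)[1:])   # children per profile group
```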

