Research on Improved K-Means Clustering Algorithm

2011 ◽  
Vol 403-408 ◽  
pp. 1977-1980
Author(s):  
Yin Sheng Zhang ◽  
Hui Lin Shan ◽  
Jia Qiang Li ◽  
Jie Zhou

The traditional K-means clustering algorithm is prone to premature convergence to a local optimum because it is sensitive to the selection of the initial cluster centers. A hierarchical clustering algorithm can be used to generate the initial cluster centers for K-means. Through preprocessing, feature extraction, and feature selection, the geometric features of the input data achieve a good distribution. The algorithm's source code is written in Java for the learning of a fuzzy neural network. The experimental results show that the new algorithm effectively improves clustering quality.
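
As an illustration of this seeding idea, the sketch below (a minimal, assumption-laden example, not the paper's implementation) derives initial centers from Ward hierarchical clustering on the data and passes them to K-means; the placeholder dataset, the Ward linkage choice, and the scipy/scikit-learn usage are all illustrative:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from sklearn.cluster import KMeans

def hierarchical_seeds(X, k):
    """Derive k initial cluster centers from Ward hierarchical clustering."""
    Z = linkage(X, method="ward")
    labels = fcluster(Z, t=k, criterion="maxclust")  # labels in 1..k
    return np.array([X[labels == c].mean(axis=0) for c in range(1, k + 1)])

X = np.random.rand(300, 4)                 # placeholder data
centers = hierarchical_seeds(X, k=3)
km = KMeans(n_clusters=3, init=centers, n_init=1).fit(X)
print(km.inertia_)
```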

2020 ◽  
Vol 2020 ◽  
pp. 1-13
Author(s):  
Ziqi Jia ◽  
Ling Song

The k-prototypes algorithm is a hybrid clustering algorithm that can process both categorical and numerical data. In this study, the method of initial cluster center selection was improved and a new hybrid dissimilarity coefficient was proposed, on which a weighted k-prototypes clustering algorithm (WKPCA) was built. The proposed WKPCA algorithm not only improves the selection of initial cluster centers, but also provides a new method to calculate the dissimilarity between data objects and cluster centers. Real datasets from the UCI repository were used to test the WKPCA algorithm. Experimental results show that WKPCA is more efficient and robust than other k-prototypes algorithms.
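
The paper's exact hybrid dissimilarity coefficient is not reproduced here; the sketch below only shows the general shape such a measure can take in k-prototypes-style clustering, combining a weighted squared Euclidean term on numeric features with a weighted simple-matching term on categorical features. The weight vectors and the gamma balance factor are assumptions:

```python
import numpy as np

def hybrid_dissimilarity(x_num, x_cat, c_num, c_cat, w_num, w_cat, gamma=1.0):
    """Distance between an object (x_num, x_cat) and a prototype (c_num, c_cat)."""
    numeric = np.sum(w_num * (x_num - c_num) ** 2)   # weighted squared Euclidean part
    categorical = np.sum(w_cat * (x_cat != c_cat))   # weighted simple-matching part
    return numeric + gamma * categorical

x_num, c_num = np.array([1.0, 2.0]), np.array([0.5, 2.5])
x_cat, c_cat = np.array(["a", "b"]), np.array(["a", "c"])
w_num, w_cat = np.array([0.6, 0.4]), np.array([0.7, 0.3])
print(hybrid_dissimilarity(x_num, x_cat, c_num, c_cat, w_num, w_cat))
```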


2014 ◽  
Vol 701-702 ◽  
pp. 88-93
Author(s):  
Gang Tao ◽  
Yong Gang Yan ◽  
Jiao Zou ◽  
Jun Liu

In order to solve the problem of continuous attribute discretization, a new improved SOM clustering algorithm is proposed. The algorithm first uses an SOM to obtain an initial clustering and an upper limit on the number of clusters; it then treats the initial cluster centers as samples and applies the BIRCH hierarchical clustering algorithm for a secondary clustering, which resolves inflated clusters and identifies the set of discretization breakpoints. Finally, for each cluster center, the nearest neighbor is found among the breakpoint-set samples belonging to the same attribute and used as the basis for discretization trimming. The experimental results show that the proposed algorithm outperforms the conventional discrete SOM clustering algorithm on both the quality of the breakpoint set (silhouette coefficient improved by 75%) and discretization accuracy (incompatibility degree closer to 0).
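
A minimal sketch of the secondary-clustering step only, under the assumption that trained SOM codebook vectors for one continuous attribute are already available: BIRCH re-clusters the prototypes, and breakpoints are taken as midpoints between adjacent cluster centers. SOM training and the trimming step are omitted, and the threshold value is an assumption:

```python
import numpy as np
from sklearn.cluster import Birch

# Stand-in for a trained SOM codebook over one continuous attribute.
som_prototypes = np.sort(np.random.rand(40, 1), axis=0)

# Secondary clustering of the prototypes; a small threshold keeps enough
# CF-subclusters for the global clustering step.
birch = Birch(n_clusters=5, threshold=0.02).fit(som_prototypes)
labels = birch.labels_

# One center per secondary cluster, then breakpoints as midpoints
# between adjacent sorted centers.
centers = np.sort([som_prototypes[labels == c].mean() for c in np.unique(labels)])
breakpoints = (centers[:-1] + centers[1:]) / 2.0
print(breakpoints)
```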


2016 ◽  
Vol 2016 ◽  
pp. 1-10
Author(s):  
Ning Li ◽  
Yunxia Gu ◽  
Zhongliang Deng

Limited prior knowledge and randomly chosen initial cluster centers directly affect the accuracy of iterative clustering algorithms. In this paper we propose a new algorithm that computes initial cluster centers for k-means clustering, determines the best number of clusters with little prior knowledge, and optimizes the clustering result. It constructs a Euclidean distance control factor based on the aggregation density sparsity degree to select the initial cluster centers of nonuniform sparse data, and obtains initial data clusters via a multidimensional diffusion density distribution. A multiobjective clustering approach based on dynamic cumulative entropy is then adopted to optimize the initial data clusters and the number of clusters. The experimental results show that the newly proposed algorithm performs well at obtaining initial cluster centers for the k-means algorithm and effectively improves the clustering accuracy on nonuniform sparse data by about 5%.
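
The paper's distance control factor and diffusion density distribution are specific to its formulation; the sketch below illustrates only the general spirit — a generic heuristic that seeds clusters at points that are locally dense yet far from already-chosen seeds. The radius r and the density-times-distance scoring rule are assumptions:

```python
import numpy as np
from scipy.spatial.distance import cdist

def density_distance_seeds(X, k, r):
    """Pick k seeds that are locally dense and mutually distant (generic heuristic)."""
    D = cdist(X, X)
    density = (D < r).sum(axis=1)              # neighbours within radius r
    seeds = [np.argmax(density)]               # densest point first
    for _ in range(k - 1):
        # score each point by density scaled by distance to its nearest chosen seed
        score = density * D[:, seeds].min(axis=1)
        seeds.append(np.argmax(score))
    return X[seeds]

X = np.random.rand(500, 2)                     # placeholder nonuniform data
print(density_distance_seeds(X, k=4, r=0.1))
```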


2010 ◽  
Vol 29-32 ◽  
pp. 802-808
Author(s):  
Min Min

After analyzing the common problems in fuzzy clustering algorithms, we put forward a combined fuzzy clustering algorithm that automatically generates a reasonable number of clusters and the initial cluster centers. The algorithm has been tested on real evaluation data of teaching designs. The result proves that the combined fuzzy clustering based on the F-statistic is more effective.
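
As a hedged illustration of how an F-statistic can pick the number of clusters, the sketch below runs a minimal fuzzy C-means for several candidate cluster counts and keeps the count with the largest between-to-within variance ratio; this standard F-statistic form may differ from the paper's exact formulation:

```python
import numpy as np

def fcm(X, c, m=2.0, iters=100, eps=1e-5, seed=0):
    """Minimal fuzzy C-means; returns centers V (c x d) and memberships U (c x n)."""
    rng = np.random.default_rng(seed)
    U = rng.random((c, len(X)))
    U /= U.sum(axis=0)
    for _ in range(iters):
        Um = U ** m
        V = Um @ X / Um.sum(axis=1, keepdims=True)             # membership-weighted centers
        D = np.linalg.norm(X[None, :, :] - V[:, None, :], axis=2) + 1e-10
        U_new = 1.0 / D ** (2.0 / (m - 1.0))
        U_new /= U_new.sum(axis=0)                             # normalize memberships
        if np.abs(U_new - U).max() < eps:
            return V, U_new
        U = U_new
    return V, U

def f_statistic(X, V, U):
    """Between-cluster vs. within-cluster variance ratio for a fuzzy partition."""
    c, n = U.shape
    xbar = X.mean(axis=0)
    between = sum(U[i].sum() * np.sum((V[i] - xbar) ** 2) for i in range(c)) / (c - 1)
    within = sum((U[i] * ((X - V[i]) ** 2).sum(axis=1)).sum() for i in range(c)) / (n - c)
    return between / within

X = np.vstack([np.random.randn(50, 2) + off for off in ([0, 0], [5, 5], [0, 5])])
scores = {c: f_statistic(X, *fcm(X, c)) for c in range(2, 7)}
print(max(scores, key=scores.get))   # cluster count with the largest F value
```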


2019 ◽  
Vol 13 (4) ◽  
pp. 403-409
Author(s):  
Hui Qi ◽  
Jinqing Li ◽  
Xiaoqiang Di ◽  
Weiwu Ren ◽  
Fengrong Zhang

Background: The K-means algorithm is implemented in two steps: initialization and subsequent iterations. Initialization selects the initial cluster centers, while the subsequent iterations repeatedly update the cluster centers until they no longer change or the number of iterations reaches its maximum. K-means is so sensitive to the initial cluster centers that a different initial selection can substantially influence the algorithm's performance. Improving the initialization process has therefore become an important means of improving K-means.

Methods: This paper uses a new strategy to select the initial cluster centers. It first calculates the minimum and maximum values of the data on a certain index (for lower-dimensional data, such as two-dimensional data, a feature with large variance or the distance to the origin can be selected; for higher-dimensional data, PCA can be used to select the principal component with the largest variance), and then divides the range into equally sized sub-ranges. Next, the sub-ranges are adjusted based on the data distribution so that each sub-range contains as much data as possible. Finally, the mean value of the data in each sub-range is calculated and used as an initial cluster center.

Results: The theoretical analysis shows that although the time complexity of the initialization process is linear, the algorithm has the characteristics of superlinear initialization methods. The algorithm is applied to two-dimensional GPS data analysis and high-dimensional network attack detection. Experimental results show that it achieves high clustering performance and clustering speed.

Conclusion: This paper reduces the subsequent iterations of the K-means algorithm without compromising clustering performance, which makes it suitable for large-scale data clustering. The algorithm is applicable not only to low-dimensional data clustering but also to high-dimensional data.
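
A minimal sketch of the Methods step above, with the paper's distribution-based adjustment of sub-ranges omitted: project the data onto the largest-variance PCA component, split the projected range into k equal sub-ranges, and use the mean of the points in each sub-range as an initial center:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

def range_partition_seeds(X, k):
    """Equal sub-ranges of a 1-D projection; sub-range means become seeds.
    (The distribution-based adjustment of sub-ranges is omitted here.)"""
    proj = PCA(n_components=1).fit_transform(X).ravel()   # largest-variance direction
    edges = np.linspace(proj.min(), proj.max(), k + 1)
    seeds = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (proj >= lo) & (proj <= hi)
        if mask.any():                                    # skip empty sub-ranges
            seeds.append(X[mask].mean(axis=0))
    return np.array(seeds)

X = np.random.rand(400, 5)                                # placeholder data
seeds = range_partition_seeds(X, k=4)
km = KMeans(n_clusters=len(seeds), init=seeds, n_init=1).fit(X)
print(km.n_iter_)                                         # fewer iterations expected
```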


2010 ◽  
Vol 108-111 ◽  
pp. 106-111
Author(s):  
Tian Zhen Wang ◽  
Yang Liu ◽  
Tian Hao Tang

In the k-means algorithm, an inappropriate selection of initial cluster centers often traps the clustering in a local optimum, and the time complexity becomes too high when handling large amounts of data. To address these problems, a geometry-based fusion clustering algorithm is proposed in this paper. Experimental results show that this algorithm outperforms both the traditional k-means and the k-means++ algorithms, with higher quality and faster speed. Finally, we apply it in marine engineering.
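
The abstract does not detail the geometric fusion step itself; for context only, the two baselines it compares against can be reproduced in scikit-learn simply by switching the initialization scheme (data and parameters here are placeholders):

```python
import numpy as np
from sklearn.cluster import KMeans

X = np.random.rand(1000, 3)                    # placeholder data
for init in ("random", "k-means++"):           # traditional vs. k-means++ seeding
    km = KMeans(n_clusters=5, init=init, n_init=10, random_state=0).fit(X)
    print(init, km.inertia_, km.n_iter_)
```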


2014 ◽  
Vol 998-999 ◽  
pp. 873-877
Author(s):  
Zhen Bo Wang ◽  
Bao Zhi Qiu

To reduce the impact of irrelevant attributes on clustering results and to increase the importance of relevant attributes, this paper proposes a fuzzy C-means clustering algorithm based on the coefficient of variation (CV-FCM). In the algorithm, the coefficient of variation is used to weight attributes, assigning a different weight to each attribute in the data set; the magnitude of a weight expresses the importance of that attribute to the clusters. In addition, because the fuzzy C-means algorithm is susceptible to the initial cluster center values, a method for selecting initial cluster centers based on maximum distance is introduced on the basis of the weighted coefficient of variation. Experiments on real data sets show that this algorithm selects cluster centers effectively, with clustering results superior to those of general fuzzy C-means clustering algorithms.
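
A sketch of the two ingredients named above, under the assumption that "maximum distance" means farthest-first selection in the weighted metric: attribute weights come from the coefficient of variation (std/mean), and seeds are chosen greedily to be mutually far apart. The paper's exact variant may differ:

```python
import numpy as np
from scipy.spatial.distance import cdist

def cv_weights(X):
    """Attribute weights from the coefficient of variation (std / mean)."""
    cv = X.std(axis=0) / (np.abs(X.mean(axis=0)) + 1e-12)
    return cv / cv.sum()

def max_distance_seeds(X, k, w):
    """Farthest-first seed selection under the attribute-weighted distance."""
    Xw = X * np.sqrt(w)                        # fold weights into the metric
    D = cdist(Xw, Xw)
    seeds = [0]                                # start from an arbitrary point
    for _ in range(k - 1):
        seeds.append(np.argmax(D[:, seeds].min(axis=1)))
    return X[seeds]

X = np.random.rand(200, 6)                     # placeholder data
w = cv_weights(X)
print(max_distance_seeds(X, k=3, w=w))
```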


2014 ◽  
Vol 31 (8) ◽  
pp. 1661-1667
Author(s):  
Minchen Zhu ◽  
Weizhi Wang ◽  
Jingshan Huang

Purpose – It is well known that the selection of initial cluster centers can significantly affect K-means clustering results. The purpose of this paper is to propose an improved, efficient methodology to handle this challenge.

Design/methodology/approach – Based on the fact that the inner-class distance among samples within the same cluster should be smaller than the inter-class distance among clusters, the algorithm dynamically adjusts initial cluster centers that are randomly selected. The adjusted initial cluster centers are highly representative in the sense that they are distributed among as many samples as possible, so the local optima common in K-means clustering can be effectively reduced. In addition, the algorithm obtains all initial cluster centers simultaneously (instead of one center at a time) during the dynamic adjustment.

Findings – Experimental results demonstrate that the proposed algorithm greatly improves the accuracy of traditional K-means clustering results, and does so in a more efficient manner.

Originality/value – The authors present an efficient algorithm that dynamically adjusts randomly selected initial cluster centers. The adjusted centers are highly representative, i.e., they are distributed among as many samples as possible, so the local optima common in K-means clustering are effectively reduced and clustering accuracy improves. The algorithm is also cost-efficient: the enhanced accuracy is obtained more efficiently than with the traditional K-means algorithm.
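
The following is only a loose illustration of the stated separation principle, not the authors' adjustment procedure: randomly drawn centers are accepted only when every pair of centers is farther apart than the average distance from a sample to its nearest center:

```python
import numpy as np
from scipy.spatial.distance import cdist

def spread_random_seeds(X, k, seed=0, max_tries=100):
    """Re-draw random centers until inter-center distance exceeds the
    average sample-to-nearest-center distance (illustrative heuristic)."""
    rng = np.random.default_rng(seed)
    for _ in range(max_tries):
        centers = X[rng.choice(len(X), size=k, replace=False)]
        within = cdist(X, centers).min(axis=1).mean()   # avg sample-to-center distance
        between = cdist(centers, centers)
        np.fill_diagonal(between, np.inf)
        if between.min() > within:                      # centers well separated
            return centers
    return centers                                      # fall back to the last draw

X = np.random.rand(300, 2)                              # placeholder data
print(spread_random_seeds(X, k=4))
```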


2020 ◽  
Vol 39 (5) ◽  
pp. 7259-7279
Author(s):  
Xingguang Pan ◽  
Shitong Wang

The feature reduction fuzzy c-means (FRFCM) algorithm has been proven effective for clustering data with redundant or unimportant features. However, the FRFCM algorithm still has the following disadvantages:
1) FRFCM uses the mean-to-variance-ratio (MVR) index to measure the feature importance of a dataset, but this index is affected by data normalization, i.e., a large MVR value of an original feature may become small after the data are normalized, and vice versa. Moreover, the MVR values of a dataset's important features are not necessarily large.
2) The feature weights obtained by FRFCM are sensitive to the initial cluster centers and initial feature weights.
3) FRFCM may fail to assign proper weights to a dataset's features, so in the feature reduction learning process, important features may be discarded while unimportant features are retained.
These disadvantages can cause the FRFCM algorithm to discard important feature components. In addition, the threshold for selecting the important features in FRFCM may not be easy to determine. To mitigate these disadvantages, we first devise a new index, named the marginal kurtosis measure (MKM), to measure the importance of each feature in a dataset. Then a novel and robust feature reduction fuzzy c-means clustering algorithm, FRFCM-MKM, which incorporates the marginal kurtosis measure into FRFCM, is proposed. Furthermore, an accurate threshold is introduced to select important features and discard unimportant ones. Experiments on synthetic and real-world datasets demonstrate that FRFCM-MKM is effective and efficient.
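
Since the paper's exact MKM definition is not given in the abstract, the sketch below uses plain marginal (excess) kurtosis as an illustrative stand-in for the feature-importance index, with a threshold that discards low-weight features; the threshold value and weighting scheme are assumptions:

```python
import numpy as np
from scipy.stats import kurtosis

def kurtosis_feature_weights(X):
    """Per-feature weights from marginal excess kurtosis (stand-in for MKM)."""
    k = np.abs(kurtosis(X, axis=0, fisher=True))
    return k / k.sum()

def select_features(X, threshold):
    """Keep features whose kurtosis-based weight meets the threshold."""
    w = kurtosis_feature_weights(X)
    keep = w >= threshold
    return X[:, keep], w

X = np.random.randn(500, 8)                       # mostly Gaussian features
X[:, 0] = np.random.standard_t(df=3, size=500)    # one heavy-tailed feature
Xr, w = select_features(X, threshold=1.0 / X.shape[1])
print(w.round(3), Xr.shape)
```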

