Parallel Implementation of Improved K-Means Based on a Cloud Platform
Keyword(s): Data Set
To address the limitations of the traditional K-Means clustering algorithm on large-scale data sets, a Hadoop K-Means (HKM) clustering algorithm is proposed. First, the algorithm eliminates the influence of noise points in the data set according to sample density. Second, it optimizes the selection of the initial cluster centers using the max-min distance principle. Finally, it parallelizes the computation with the MapReduce programming model. Experimental results show that the proposed algorithm not only achieves high accuracy and stability in its clustering results, but also resolves the scalability problems that traditional clustering algorithms encounter on large-scale data.
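The abstract does not give implementation details, so the two serial preprocessing steps can only be sketched. Below is a minimal NumPy sketch of what density-based noise filtering and max-min distance center seeding might look like; the function names and the `radius`/`min_neighbors` parameters are assumptions, not the paper's actual interface.

```python
import numpy as np

def density_filter(points, radius, min_neighbors):
    # Assumed noise criterion: a point is treated as noise when fewer than
    # `min_neighbors` other points lie within `radius` of it, and is dropped.
    dists = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=2)
    neighbor_counts = (dists <= radius).sum(axis=1) - 1  # exclude the point itself
    return points[neighbor_counts >= min_neighbors]

def maxmin_centers(points, k):
    # Max-min distance seeding: start from an arbitrary point, then repeatedly
    # pick the point whose distance to its nearest already-chosen center is
    # largest, so the k initial centers are spread far apart.
    centers = [points[0]]
    for _ in range(k - 1):
        d_to_nearest = np.min(
            np.linalg.norm(points[:, None, :] - np.asarray(centers)[None, :, :], axis=2),
            axis=1,
        )
        centers.append(points[np.argmax(d_to_nearest)])
    return np.asarray(centers)
```

In the MapReduce phase, one plausible arrangement (again an assumption, not the paper's stated design) is a mapper that assigns each point to its nearest current center and a reducer that averages the points in each cluster to produce the updated centers.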
2021, Vol 15 (4), pp. 1-23