kernel clustering
Recently Published Documents


TOTAL DOCUMENTS: 168 (five years: 43)

H-INDEX: 16 (five years: 4)

2021 ◽  
Vol 2082 (1) ◽  
pp. 012021
Author(s):  
Bingsen Guo

Data classification is one of the most critical tasks in data mining, with a large number of real-life applications. In many practical classification problems, the real dataset contains various forms of anomalies; for example, the training set may contain outliers, often enough to confuse the classifier and reduce its ability to learn from the data. In this paper, we propose a new approach to improving data classification based on kernel clustering. The proposed method improves classification performance by optimizing the training set: we first cluster the training set with an existing kernel clustering method and optimize it based on the similarity between the training samples in each class and the corresponding class center. The optimized, reliable training set is then used to train a standard classifier in the kernel space, which classifies each query sample. Extensive performance analysis shows that the proposed method achieves high performance, thus improving the classifier's effectiveness.
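The training-set optimization step described above can be sketched in NumPy: measure each sample's distance to its class center in the kernel-induced feature space and keep only the closest fraction. This is an illustrative sketch, not the authors' implementation; the RBF kernel and the `gamma` and `keep_ratio` parameters are assumed choices.

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    # Gram matrix of the RBF kernel k(x, y) = exp(-gamma * ||x - y||^2)
    sq = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

def filter_training_set(X, y, keep_ratio=0.8, gamma=1.0):
    """Keep, per class, the samples closest to their class center in the
    kernel feature space; drop the rest as presumed outliers."""
    keep = np.zeros(len(y), dtype=bool)
    for c in np.unique(y):
        idx = np.where(y == c)[0]
        K = rbf_kernel(X[idx], X[idx], gamma)
        # squared distance of phi(x_i) to the class mean in feature space:
        # K_ii - (2/n) * sum_j K_ij + (1/n^2) * sum_jl K_jl
        d2 = np.diag(K) - 2.0 * K.mean(axis=1) + K.mean()
        n_keep = max(1, int(round(keep_ratio * len(idx))))
        keep[idx[np.argsort(d2)[:n_keep]]] = True
    return keep
```

A classifier trained on `X[keep], y[keep]` then sees a cleaner training set than one trained on all of `X`.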


Author(s):  
Zhe Xue ◽  
Junping Du ◽  
Changwei Zheng ◽  
Jie Song ◽  
Wenqi Ren ◽  
...  

Incomplete multi-view clustering aims to cluster samples with missing views and has drawn increasing research interest. Although several methods have been developed for incomplete multi-view clustering, they fail to extract and exploit the comprehensive global and local structure of multi-view data, so their clustering performance is limited. This paper proposes a Clustering-induced Adaptive Structure Enhancing Network (CASEN) for incomplete multi-view clustering: an end-to-end trainable framework that jointly conducts multi-view structure enhancing and data clustering. Our method adopts a multi-view autoencoder to infer the missing features of the incomplete samples. Then, we perform adaptive graph learning and graph convolution on the reconstructed complete multi-view data to effectively extract the data structure. Moreover, we use multiple kernel clustering to integrate the global and local structure for clustering, and the clustering results are in turn used to enhance the data structure. Extensive experiments on several benchmark datasets demonstrate that our method comprehensively captures the structure of incomplete multi-view data and achieves superior performance compared to other methods.
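The multiple-kernel-clustering step can be illustrated with a minimal stand-in: fuse one RBF kernel per view into a weighted average and spectrally partition the fused affinity. This is not CASEN itself — the per-view kernels, fixed weights, and two-way split are all simplifying assumptions for the sketch.

```python
import numpy as np

def combined_kernel(views, gammas, weights):
    """Weighted average of per-view RBF kernels (a minimal stand-in for
    the kernel-fusion step of multiple kernel clustering)."""
    n = views[0].shape[0]
    K = np.zeros((n, n))
    for X, g, w in zip(views, gammas, weights):
        sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
        K += w * np.exp(-g * sq)
    return K / sum(weights)

def spectral_2way(K):
    """Two-way spectral partition of the fused kernel: cluster by the
    sign of the second eigenvector of the normalized affinity."""
    d_inv_sqrt = np.diag(1.0 / np.sqrt(K.sum(axis=1)))
    L = d_inv_sqrt @ K @ d_inv_sqrt
    vals, vecs = np.linalg.eigh(L)      # eigenvalues in ascending order
    return (vecs[:, -2] > 0).astype(int)
```

For more than two clusters one would run k-means on the top-k eigenvectors instead of thresholding a single one.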


Mathematics ◽  
2021 ◽  
Vol 9 (14) ◽  
pp. 1680
Author(s):  
Hui Chen ◽  
Kunpeng Xu ◽  
Lifei Chen ◽  
Qingshan Jiang

Kernel clustering of categorical data is a useful tool for processing separable datasets and has been employed in many disciplines. Despite recent efforts, existing kernel clustering methods remain limited by the assumptions of feature independence and equal feature weights. In this study, we propose a self-expressive kernel subspace clustering algorithm for categorical data (SKSCC) based on the self-expressive kernel density estimation (SKDE) scheme, together with a new feature-weighted non-linear similarity measure. In the SKSCC algorithm, we propose an effective non-linear optimization method to solve the clustering objective function, which not only considers the relationships between attributes in a non-linear space but also assigns each attribute a weight that measures its degree of correlation. A series of experiments on widely used synthetic and real-world datasets demonstrated that the proposed algorithm is more effective and efficient than other state-of-the-art methods at exploring non-linear relationships among attributes.
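The idea of a feature-weighted similarity over categorical attributes can be sketched as follows. This is a hypothetical stand-in for the paper's SKDE-based measure, not the actual SKSCC formula: a matching attribute scores 1, a mismatch scores a smoothing constant `lam`, and each attribute's contribution is scaled by a normalized feature weight.

```python
import numpy as np

def weighted_categorical_similarity(x, y, weights, lam=0.5):
    """Feature-weighted similarity between two categorical records.
    `weights` plays the role of the learned per-attribute weights;
    here they are simply given rather than optimized."""
    # per-attribute score: exact match -> 1, mismatch -> lam (0 <= lam < 1)
    match = np.array([1.0 if a == b else lam for a, b in zip(x, y)])
    w = np.asarray(weights, dtype=float)
    return float((w / w.sum() * match).sum())   # normalize weights to sum to 1
```

In the actual algorithm the weights would be updated jointly with the clustering objective; here they only illustrate how correlated attributes can count more than independent ones.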


2021 ◽  
Vol 27 (3) ◽  
pp. 57-70
Author(s):  
Damjan M. Rakanovic ◽  
Vuk Vranjkovic ◽  
Rastislav J. R. Struharik

This paper proposes a two-step Convolutional Neural Network (CNN) pruning algorithm and a resource-efficient Field-Programmable Gate Array (FPGA) CNN accelerator named "Argus". The proposed pruning algorithm first combines similar kernels into clusters, which are then pruned using the same regular pruning pattern. The algorithm is carefully tailored to FPGAs and their resource characteristics: regular sparsity yields high Multiply-Accumulate (MAC) efficiency, reducing the amount of logic required to balance workloads among different MAC units. As a result, the Argus accelerator requires about 170 Look-Up Tables (LUTs) per Digital Signal Processor (DSP) block. This number is close to the average LUT/DSP ratio of various FPGA families, enabling balanced resource utilization when implementing Argus. Benchmarks conducted on a Xilinx Zynq UltraScale+ Multi-Processor System-on-Chip (MPSoC) indicate that Argus achieves up to 25 times more frames per second than NullHop, 2 and 2.5 times more than NEURAghe and Snowflake, respectively, and 2 times more than NVDLA. Argus shows performance comparable to MIT's Eyeriss v2 and Caffeine while requiring up to 3 times less memory bandwidth and 4 times fewer DSP blocks, respectively. Beyond absolute performance, Argus has at least 1.3 and 2 times better GOP/s/DSP and GOP/s/Block-RAM (BRAM) ratios, while remaining competitive in terms of GOP/s/LUT, compared to some state-of-the-art solutions.
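The cluster-then-prune idea can be illustrated with a toy NumPy sketch: group flattened convolution kernels with plain k-means, then zero the same low-magnitude positions in every kernel of a cluster, so each cluster shares one regular sparsity pattern. This is an illustration of regular, cluster-shared pruning under assumed choices (naive k-means initialization, mean-magnitude importance), not Argus's actual algorithm.

```python
import numpy as np

def cluster_and_prune(kernels, n_clusters=2, keep=4, iters=20):
    """Step 1: k-means-cluster the flattened kernels.
    Step 2: per cluster, keep only the `keep` weight positions with the
    largest mean magnitude and zero the rest in every member kernel."""
    K = kernels.reshape(len(kernels), -1)        # one row per kernel
    centers = K[:n_clusters].copy()              # naive init: first k kernels
    for _ in range(iters):                       # plain k-means
        d = ((K[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        assign = d.argmin(axis=1)                # nearest-center assignment
        for c in range(n_clusters):
            if (assign == c).any():
                centers[c] = K[assign == c].mean(axis=0)
    pruned = K.copy()
    for c in range(n_clusters):
        members = assign == c
        if not members.any():
            continue
        score = np.abs(K[members]).mean(axis=0)  # shared importance map
        mask = np.zeros(K.shape[1])
        mask[np.argsort(score)[-keep:]] = 1.0    # one pattern per cluster
        pruned[members] = K[members] * mask
    return pruned.reshape(kernels.shape), assign
```

Because every kernel in a cluster ends up with the same nonzero positions, the hardware can schedule MAC work regularly instead of handling per-kernel irregular sparsity.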


2021 ◽  
Vol 547 ◽  
pp. 289-306
Author(s):  
Zhenwen Ren ◽  
Haoyun Lei ◽  
Quansen Sun ◽  
Chao Yang
