Hypergraph-Supervised Deep Subspace Clustering

Mathematics ◽  
2021 ◽  
Vol 9 (24) ◽  
pp. 3259
Author(s):  
Yu Hu ◽  
Hongmin Cai

Auto-encoder (AE)-based deep subspace clustering (DSC) methods aim to partition high-dimensional data into underlying clusters, where each cluster corresponds to a subspace. As a standard module in current AE-based DSC, the self-reconstruction cost plays an essential role in regularizing the feature learning. However, the self-reconstruction adversely affects the discriminative feature learning of the AE, thereby hampering the downstream subspace clustering. To address this issue, we propose a hypergraph-supervised reconstruction to replace the self-reconstruction. Specifically, instead of enforcing the decoder in the AE to merely reconstruct samples themselves, the hypergraph-supervised reconstruction encourages reconstructing samples according to their high-order neighborhood relations. Through back-propagation training, the hypergraph-supervised reconstruction cost enables the deep AE to capture high-order structure information among samples, facilitating discriminative feature learning and thus alleviating the adverse effect of the self-reconstruction cost. Compared with current DSC methods that rely on self-reconstruction, our method achieves consistent performance improvements on benchmark high-dimensional datasets.
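The neighborhood-based reconstruction target described above can be sketched as follows. This is a minimal illustration, assuming a binary sample-by-hyperedge incidence matrix and a simple shared-hyperedge averaging rule; the function names and the averaging are illustrative, not the paper's exact formulation:

```python
import numpy as np

def hypergraph_reconstruction_targets(X, H):
    """Build, for each sample, a reconstruction target from its high-order
    neighbours, i.e. samples sharing at least one hyperedge.
    X: (n, d) data matrix; H: (n, m) binary sample-by-hyperedge incidence."""
    A = (H @ H.T) > 0                    # samples connected via a shared hyperedge
    np.fill_diagonal(A, False)           # exclude the sample itself (no self-reconstruction)
    A = A.astype(float)
    deg = A.sum(axis=1, keepdims=True)
    deg[deg == 0] = 1.0                  # avoid division by zero for isolated samples
    targets = (A / deg) @ X              # average of high-order neighbours
    isolated = A.sum(axis=1) == 0
    targets[isolated] = X[isolated]      # isolated samples fall back to themselves
    return targets

def hypergraph_reconstruction_loss(X_hat, X, H):
    """MSE between decoder output X_hat and the neighbourhood targets."""
    T = hypergraph_reconstruction_targets(X, H)
    return float(np.mean((X_hat - T) ** 2))
```

In a training loop, this loss would replace the usual `mean((X_hat - X) ** 2)` self-reconstruction term, so gradients push the decoder toward neighborhood structure rather than identity reconstruction.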

Author(s):  
Changsheng Li ◽  
Lin Yang ◽  
Qingshan Liu ◽  
Fanjing Meng ◽  
Weishan Dong ◽  
...  

2017 ◽  
Vol 2017 ◽  
pp. 1-9 ◽  
Author(s):  
Binbin Zhang ◽  
Weiwei Wang ◽  
Xiangchu Feng

Subspace clustering aims to group data points drawn from a union of subspaces according to the subspaces from which they were drawn. It has become a popular method for recovering the low-dimensional structure underlying a high-dimensional dataset. The state-of-the-art methods construct an affinity matrix based on the self-representation of the dataset and then use a spectral clustering method to obtain the final clustering result. These methods show that the sparsity and grouping effect of the affinity matrix are important in recovering the low-dimensional structure. In this work, we propose a weighted sparse penalty and a weighted grouping-effect penalty for modeling the self-representation of data points. The experimental results on the Extended Yale B, USPS, and Berkeley 500 image segmentation datasets show that the proposed model is more effective than state-of-the-art methods in revealing the subspace structure underlying a high-dimensional dataset.
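The self-representation-plus-affinity pipeline described above can be sketched as follows. The ridge-regularized solver here is a hypothetical stand-in for the paper's weighted sparse and grouping-effect penalties; the resulting affinity matrix would then be passed to any spectral clustering routine:

```python
import numpy as np

def self_representation(X, lam=0.1):
    """Express each column of X as a combination of the other columns.
    Ridge regularization is a simplified stand-in for the weighted
    sparse/grouping penalties (illustrative choice, not the paper's model).
    X: (d, n) data matrix with one sample per column."""
    n = X.shape[1]
    G = X.T @ X
    C = np.linalg.solve(G + lam * np.eye(n), G)
    np.fill_diagonal(C, 0.0)             # a sample must not represent itself
    return C

def affinity(C):
    """Symmetric nonnegative affinity matrix fed to spectral clustering."""
    return 0.5 * (np.abs(C) + np.abs(C).T)
```

For samples lying in well-separated subspaces, within-subspace affinities dominate cross-subspace ones, which is what lets spectral clustering recover the segmentation.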


Author(s):  
Jianan Zhao ◽  
Xiao Wang ◽  
Chuan Shi ◽  
Zekuan Liu ◽  
Yanfang Ye

As heterogeneous networks have become increasingly ubiquitous, Heterogeneous Information Network (HIN) embedding, which aims to project nodes into a low-dimensional space while preserving the heterogeneous structure, has drawn increasing attention in recent years. Many of the existing HIN embedding methods adopt meta-path guided random walks to retain both the semantics and the structural correlations between different types of nodes. However, the selection of meta-paths is still an open problem, which either depends on domain knowledge or is learned from label information. As a uniform blueprint of a HIN, the network schema comprehensively embraces the high-order structure and contains rich semantics. In this paper, we make the first attempt to study network schema preserving HIN embedding, and propose a novel model named NSHE. In NSHE, a network schema sampling method is first proposed to generate sub-graphs (i.e., schema instances), and a multi-task learning framework is then built to preserve the heterogeneous structure of each schema instance. Besides preserving pairwise structure information, NSHE is able to retain high-order structure (i.e., the network schema). Extensive experiments on three real-world datasets demonstrate that our proposed model NSHE significantly outperforms the state-of-the-art methods.
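Schema-instance sampling can be illustrated with a toy sampler that, starting from an anchor node, picks one neighbor of each node type the schema requires. The function and the typed edge-list representation are illustrative assumptions, not NSHE's actual implementation:

```python
import random

def sample_schema_instance(anchor, edges, schema, rng=None):
    """Sample one network-schema instance around `anchor`.
    edges: dict mapping node -> list of (neighbour, neighbour_type) pairs.
    schema: tuple of node types; schema[0] is the anchor's type.
    Returns a dict type -> node, or None if the anchor cannot form a
    complete instance (illustrative sketch only)."""
    rng = rng or random.Random(0)
    instance = {schema[0]: anchor}
    for ntype in schema[1:]:
        candidates = [v for v, t in edges.get(anchor, []) if t == ntype]
        if not candidates:
            return None                  # missing a required node type
        instance[ntype] = rng.choice(candidates)
    return instance
```

In NSHE, many such instances would be sampled per anchor and fed to the multi-task objective that scores whether an instance respects the schema.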


Author(s):  
Chun Cheng ◽  
Wei Zou ◽  
Weiping Wang ◽  
Michael Pecht

Deep neural networks (DNNs) have shown potential in intelligent fault diagnosis of rotating machinery. However, traditional DNNs such as the back-propagation neural network are highly sensitive to the initial weights and easily fall into local optima, which restricts their feature learning capability and diagnostic performance. To overcome these problems, a deep sparse filtering network (DSFN) constructed by stacked sparse filtering is developed in this paper and applied to fault diagnosis. The developed DSFN is pre-trained by sparse filtering in an unsupervised way. The back-propagation algorithm is employed to optimize the DSFN after pre-training. Then, the DSFN-based intelligent fault diagnosis method is validated using two experiments. The results show that pre-training with sparse filtering and fine-tuning help the DSFN search for the optimal network parameters, and that the DSFN can learn discriminative features adaptively from rotating machinery datasets. Compared with classical methods, the developed diagnostic method can diagnose rotating machinery faults with higher accuracy using fewer training samples.
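The sparse filtering objective used for the unsupervised pre-training can be sketched as follows, assuming the standard formulation (Ngiam et al.): a soft absolute-value activation, row then column L2 normalization, and an L1 sparsity cost. This shows the objective only, not the optimizer or the stacking into a deep network:

```python
import numpy as np

def sparse_filtering_objective(W, X, eps=1e-8):
    """Sparse filtering cost for weights W: (k, d) on data X: (d, n).
    Minimal sketch of the standard objective; the DSFN in the paper
    stacks several such layers and fine-tunes with back-propagation."""
    F = np.sqrt((W @ X) ** 2 + eps)                      # soft absolute value
    F = F / np.linalg.norm(F, axis=1, keepdims=True)     # normalise each feature (row)
    F = F / np.linalg.norm(F, axis=0, keepdims=True)     # normalise each sample (column)
    return float(np.abs(F).sum())                        # L1 sparsity on normalised features
```

Minimizing this cost over W (e.g. with L-BFGS) drives each sample's normalized feature vector toward sparsity, which is the only hyperparameter-free criterion sparse filtering needs.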


2020 ◽  
Vol 130 ◽  
pp. 253-268
Author(s):  
Aluizio F.R. Araújo ◽  
Victor O. Antonino ◽  
Karina L. Ponce-Guevara

2021 ◽  
Vol 40 (3) ◽  
Author(s):  
Bo Hou ◽  
Yongbin Ge

In this paper, by using the local one-dimensional (LOD) method, Taylor series expansion, and a correction for the third derivatives in the truncation error remainder, two high-order compact LOD schemes are established for solving the two- and three-dimensional advection equations, respectively. They have fourth-order accuracy in both time and space. Von Neumann analysis shows that the two schemes are unconditionally stable. Moreover, their consistency and convergence are also proved. Finally, numerical experiments are given to confirm the accuracy and efficiency of the present schemes.
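For reference, the advection equations being discretized presumably take the constant-coefficient form below (a sketch; the paper's coefficients and notation may differ), and LOD splitting advances one spatial direction per sub-step:

```latex
% Two- and three-dimensional advection equations (assumed form):
\partial_t u + a\,\partial_x u + b\,\partial_y u = 0, \qquad
\partial_t u + a\,\partial_x u + b\,\partial_y u + c\,\partial_z u = 0.

% LOD splitting in 2D: each time step is split into one-dimensional sub-steps,
% each solved with a high-order compact scheme:
\text{sub-step 1: } \partial_t u + a\,\partial_x u = 0, \qquad
\text{sub-step 2: } \partial_t u + b\,\partial_y u = 0.
```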


Author(s):  
Fumiya Akasaka ◽  
Kazuki Fujita ◽  
Yoshiki Shimomura

This paper proposes the PSS Business Case Map as a tool to support designers' idea generation in PSS design. The map visualizes the similarities among PSS business cases in a two-dimensional diagram. To make the map, PSS business cases are first collected by conducting, for example, a literature survey. The collected business cases are then classified from multiple aspects that characterize each case, such as its product type, service type, and target customer. Based on the results of this classification, the similarities among the cases are calculated and visualized by using the Self-Organizing Map (SOM) technique. A SOM is a type of artificial neural network that is trained using unsupervised learning to produce a low-dimensional (typically two-dimensional) view of high-dimensional data. The visualization result is offered to designers in the form of a two-dimensional map, which is called the PSS Business Case Map. By using the map, designers can identify the position of their current business and can acquire ideas for the servitization of their business.
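A Self-Organizing Map of the kind used for the Business Case Map can be sketched in a few lines. The grid size, learning-rate and radius schedules, and seed below are illustrative assumptions, not the paper's settings:

```python
import numpy as np

def train_som(data, grid=(5, 5), iters=200, lr0=0.5, sigma0=1.5, seed=0):
    """Minimal Self-Organizing Map: learns a 2-D grid of prototype vectors
    onto which high-dimensional rows of `data` can be projected."""
    rng = np.random.default_rng(seed)
    h, w = grid
    weights = rng.standard_normal((h, w, data.shape[1]))
    coords = np.stack(np.meshgrid(np.arange(h), np.arange(w), indexing="ij"), axis=-1)
    for t in range(iters):
        x = data[rng.integers(len(data))]
        # best-matching unit (BMU): grid cell whose prototype is closest to x
        dist = np.linalg.norm(weights - x, axis=-1)
        bmu = np.unravel_index(np.argmin(dist), dist.shape)
        # decay learning rate and neighbourhood radius over time
        lr = lr0 * np.exp(-t / iters)
        sigma = sigma0 * np.exp(-t / iters)
        g = np.exp(-np.sum((coords - np.array(bmu)) ** 2, axis=-1) / (2 * sigma ** 2))
        weights += lr * g[..., None] * (x - weights)
    return weights

def project(weights, x):
    """Map one sample to its BMU coordinates on the 2-D map."""
    dist = np.linalg.norm(weights - x, axis=-1)
    return np.unravel_index(np.argmin(dist), dist.shape)
```

Plotting each business case at its `project(...)` coordinates gives exactly the kind of two-dimensional similarity map the paper describes.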

