spectral embedding
Recently Published Documents


TOTAL DOCUMENTS: 118 (five years: 43)

H-INDEX: 16 (five years: 3)

2022 ◽  
Vol 96 ◽  
pp. 102179
Author(s):  
P.-R. Wagner ◽  
S. Marelli ◽  
I. Papaioannou ◽  
D. Straub ◽  
B. Sudret

2021 ◽  
Vol 2021 ◽  
pp. 1-12
Author(s):  
Kaihong Zheng ◽  
Honghao Liang ◽  
Lukun Zeng ◽  
Xiaowei Chen ◽  
Sheng Li ◽  
...  

Clustering multivariate electricity consumption series reveals trends in power consumption over past time periods, which can provide reliable guidance for electricity production. However, historical multivariate electricity consumption data contain abnormal series, and these outliers hinder the discovery of consumption trends across time periods. To address this problem, we propose a robust graph factorization model for multivariate electricity consumption clustering (RGF-MEC), which performs graph factorization and outlier discovery simultaneously. RGF-MEC first obtains a similarity graph by computing distances among the multivariate electricity consumption series and then performs robust matrix factorization on this graph. The similarity graph is thereby decomposed into a class-related embedding and a spectral embedding, where the class-related embedding directly reveals the final clustering result. Experimental results on real multivariate time-series datasets and multivariate electricity consumption datasets demonstrate the effectiveness of the proposed RGF-MEC model.
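As a rough illustration of the pipeline's first steps, the similarity-graph construction and a plain spectral embedding can be sketched as follows. This is a generic sketch, not the authors' robust factorization: the Gaussian kernel, the kernel width `sigma`, and the flattening of each series into one vector are all assumptions made for illustration.

```python
import numpy as np

def similarity_graph(series, sigma=1.0):
    """Gaussian-kernel similarity graph over n multivariate series.

    series: array of shape (n, t, d) -- n series, t time steps, d variables.
    """
    n = series.shape[0]
    flat = series.reshape(n, -1)
    sq_dists = ((flat[:, None, :] - flat[None, :, :]) ** 2).sum(axis=-1)
    W = np.exp(-sq_dists / (2.0 * sigma ** 2))
    np.fill_diagonal(W, 0.0)  # no self-similarity
    return W

def spectral_embedding(W, k):
    """First k eigenvectors of the symmetric normalized Laplacian of W."""
    deg = W.sum(axis=1)
    d_inv_sqrt = 1.0 / np.sqrt(np.maximum(deg, 1e-12))
    L = np.eye(len(W)) - d_inv_sqrt[:, None] * W * d_inv_sqrt[None, :]
    eigvals, eigvecs = np.linalg.eigh(L)  # eigenvalues in ascending order
    return eigvecs[:, :k]
```

A robust variant, as the abstract describes, would additionally model outlier series during the factorization of `W` rather than embedding all series uniformly.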


2021 ◽  
Vol 436 ◽  
pp. 110141
Author(s):  
Paul-Remo Wagner ◽  
Stefano Marelli ◽  
Bruno Sudret

2021 ◽  
Author(s):  
Xiaotong Zhang ◽  
Han Liu ◽  
Xiao-Ming Wu ◽  
Xianchao Zhang ◽  
Xinyue Liu

2021 ◽  
pp. 1-35
Author(s):  
Ketan Mehta ◽  
Rebecca F. Goldin ◽  
David Marchette ◽  
Joshua T. Vogelstein ◽  
Carey E. Priebe ◽  
...  

Abstract This work presents a novel strategy for classifying neurons, represented by nodes of a directed graph, based on their circuitry (edge connectivity). We assume a stochastic block model (SBM) in which neurons belong to the same class if they connect to neurons of other groups according to the same probability distributions. Following adjacency spectral embedding of the SBM graph, we derive the number of classes and assign each neuron to a class with a Gaussian mixture model-based expectation-maximization (EM) clustering algorithm. To improve accuracy, we introduce a simple variation that uses random hierarchical agglomerative clustering to initialize the EM algorithm and picks the best solution over multiple EM restarts. We test this procedure on a large (≈2¹²–2¹⁵ neurons), sparse, biologically inspired connectome with eight neuron classes. The simulation results demonstrate that the proposed approach is broadly stable to the choice of embedding dimension and scales extremely well as the number of neurons in the network increases. Clustering accuracy is robust to variations in model parameters and highly tolerant to simulated experimental noise, achieving perfect classification with up to 40% of edges swapped. Thus, this approach may be useful for analyzing and interpreting large-scale brain connectomics data in terms of underlying cellular components.
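For a directed graph, adjacency spectral embedding is commonly computed from a truncated SVD of the adjacency matrix, concatenating the scaled left (out-connectivity) and right (in-connectivity) singular vectors. The sketch below shows this standard construction only; it is not the authors' full pipeline, and the embedding dimension `d` is a free parameter (the clustering step that follows could use, e.g., `sklearn.mixture.GaussianMixture` with several `n_init` restarts, as the abstract's EM variation suggests).

```python
import numpy as np

def adjacency_spectral_embedding(A, d):
    """Directed adjacency spectral embedding via truncated SVD.

    Returns each node's concatenated out/in latent positions, shape (n, 2d),
    scaling singular vectors by the square roots of the singular values.
    """
    U, s, Vt = np.linalg.svd(A)
    scale = np.sqrt(s[:d])
    X_out = U[:, :d] * scale  # out-connectivity latent positions
    X_in = Vt[:d].T * scale   # in-connectivity latent positions
    return np.hstack([X_out, X_in])
```

On a graph drawn from a block model with distinct connection probabilities per group, nodes of the same block land close together in this embedding, which is what makes the subsequent mixture-model clustering work.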


2021 ◽  
Vol 7 ◽  
pp. e450
Author(s):  
Wenna Huang ◽  
Yong Peng ◽  
Yuan Ge ◽  
Wanzeng Kong

Kmeans clustering and spectral clustering are two popular methods for grouping similar data points according to their similarities. However, the performance of Kmeans clustering can be quite unstable due to the random initialization of the cluster centroids. Spectral clustering methods generally employ a two-step strategy of spectral embedding followed by a discretization postprocessing step to obtain the cluster assignment, and the result can deviate far from the true discrete solution during postprocessing. In this paper, based on the connection between Kmeans clustering and spectral clustering, we propose a new Kmeans formulation, termed KMSR, that jointly performs spectral embedding and spectral rotation, an effective postprocessing approach to discretization. Further, instead of directly using the dot-product data similarity measure, we generalize KMSR by incorporating more advanced data similarity measures and call the generalized model KMSR-G. An efficient optimization method is derived to solve the KMSR (KMSR-G) objective, and its complexity and convergence are analyzed. We conduct experiments on extensive benchmark datasets to validate the proposed models, and the results demonstrate that they outperform the related methods in most cases.
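The spectral-rotation discretization mentioned above can be sketched as an alternating scheme: assign each embedded point to the largest entry of its rotated row, then update the orthogonal rotation by orthogonal Procrustes. This is a minimal generic sketch, not the authors' exact joint KMSR objective; the identity initialization and the early-stopping check are simplifications.

```python
import numpy as np

def spectral_rotation(F, n_iter=50):
    """Discretize a spectral embedding F (n x k) into cluster labels.

    Alternates between (1) labeling rows by the argmax of F @ R and
    (2) updating the orthogonal rotation R via orthogonal Procrustes,
    which minimizes ||Y - F @ R||_F for the current indicator matrix Y.
    """
    n, k = F.shape
    R = np.eye(k)  # identity init keeps the sketch deterministic
    labels = (F @ R).argmax(axis=1)
    for _ in range(n_iter):
        Y = np.eye(k)[labels]              # discrete cluster-indicator matrix
        U, _, Vt = np.linalg.svd(Y.T @ F)  # Procrustes solution for R
        R = (U @ Vt).T
        new_labels = (F @ R).argmax(axis=1)
        if np.array_equal(new_labels, labels):
            break                          # assignments stabilized
        labels = new_labels
    return labels
```

Joint formulations like the one the abstract describes optimize the embedding and the rotation together rather than running this discretization as a separate postprocessing pass, which is precisely what avoids the drift away from the true discrete solution.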

