Autoencoder, low rank approximation and pseudoinverse learning algorithm

Author(s):  
Ke Wang ◽  
Ping Guo ◽  
Xin Xin ◽  
Zebin Ye


Author(s):  
Tingting Ren ◽  
Xiuyi Jia ◽  
Weiwei Li ◽  
Shu Zhao

Label distribution learning (LDL) can be viewed as a generalization of multi-label learning. This paradigm focuses on the relative importance of different labels to a particular instance. Most previous LDL methods either ignore label correlations or exploit them only in a global way. In this paper, we utilize both the global and local relevance among labels to provide more information for model training, and propose a novel label distribution learning algorithm. In particular, a label correlation matrix based on low-rank approximation is applied to capture the global label correlations. In addition, the label correlations among local samples are adopted to modify the label correlation matrix. Experimental results on real-world data sets show that the proposed algorithm outperforms state-of-the-art LDL methods.
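The global step described in this abstract, capturing label correlations with a low-rank matrix, can be illustrated with a truncated SVD. The following is a minimal Python/NumPy sketch, not the paper's actual algorithm; the label distribution matrix and the chosen rank are made up for illustration.

```python
import numpy as np

# Hypothetical label distribution matrix: 6 instances x 4 labels,
# each row summing to 1 (the degree to which each label describes the instance).
rng = np.random.default_rng(0)
D = rng.random((6, 4))
D /= D.sum(axis=1, keepdims=True)

# Global label correlation matrix, then its best rank-2 approximation via SVD.
C = np.corrcoef(D.T)                 # 4 x 4 correlations between labels
U, s, Vt = np.linalg.svd(C)
k = 2
C_low = (U[:, :k] * s[:k]) @ Vt[:k, :]

# By the Eckart-Young theorem, the truncation error in Frobenius norm
# equals the norm of the discarded singular values.
err = np.linalg.norm(C - C_low, "fro")
```

A local step, as in the abstract, would then adjust entries of `C_low` using correlations estimated within neighborhoods of samples.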


Author(s):  
Jin Zhou

Abstract The acquisition of channel state information (CSI) is essential in millimeter-wave (mmWave) multiple-input multiple-output (MIMO) systems. The mmWave channel exhibits sparse scattering characteristics and a meaningful low-rank structure, which can be employed simultaneously to reduce the complexity of channel estimation. Most existing works recover the low-rank structure of channels using nuclear norm theory. However, solving the nuclear-norm-based convex problem often leads to a suboptimal solution of the rank minimization problem, thus degrading the accuracy of channel estimation. Previous contributions recover the channel using an over-complete dictionary, under the assumption that the mmWave channel can be sparsely represented in some dictionary; however, an over-complete dictionary may increase the computational complexity. To address these problems, we propose a channel estimation framework based on non-convex low-rank approximation and dictionary learning that exploits the joint low-rank and sparse representations of wireless channels. We replace the widely used nuclear norm with a non-convex low-rank approximation method and design a dictionary learning algorithm based on channel feature classification using a deep neural network (DNN). Our simulation results reveal that the proposed scheme outperforms the conventional dictionary learning algorithm, the Bayesian framework algorithm, and compressed-sensing-based algorithms.
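The contrast between nuclear-norm shrinkage and a non-convex surrogate can be sketched directly on singular values. The reweighted threshold below (weights inversely proportional to the singular values, in the spirit of reweighted nuclear-norm minimization) is only one illustrative choice of non-convex surrogate, and the synthetic rank-2 matrix stands in for a real mmWave channel; none of this is the paper's specific method.

```python
import numpy as np

rng = np.random.default_rng(1)
H = rng.standard_normal((8, 2)) @ rng.standard_normal((2, 8))  # rank-2 "channel"
Y = H + 0.1 * rng.standard_normal((8, 8))                      # noisy observation

U, s, Vt = np.linalg.svd(Y, full_matrices=False)
tau = 0.5

# Nuclear-norm proximal step: uniform soft-thresholding of all singular values,
# which biases the large (signal) components by the full threshold tau.
s_nuc = np.maximum(s - tau, 0.0)

# Non-convex alternative: reweighted shrinkage with per-value threshold
# tau / s_i, so the dominant components are shrunk much less.
eps = 1e-6
s_ncvx = np.maximum(s - tau / (s + eps), 0.0)

H_nuc = U @ np.diag(s_nuc) @ Vt
H_ncvx = U @ np.diag(s_ncvx) @ Vt
err_nuc = np.linalg.norm(H - H_nuc)
err_ncvx = np.linalg.norm(H - H_ncvx)
```

On this toy instance the reweighted rule recovers the low-rank matrix more accurately because it removes small (noise) singular values while leaving the large ones nearly unbiased, which is the intuition behind replacing the nuclear norm.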


2008 ◽  
Vol 20 (11) ◽  
pp. 2839-2861 ◽  
Author(s):  
Dit-Yan Yeung ◽  
Hong Chang ◽  
Guang Dai

In recent years, metric learning in the semisupervised setting has attracted considerable research interest. One type of semisupervised metric learning utilizes supervisory information in the form of pairwise similarity or dissimilarity constraints. However, most methods proposed so far are either limited to linear metric learning or unable to scale well with the data set size. In this letter, we propose a nonlinear metric learning method based on the kernel approach. By applying low-rank approximation to the kernel matrix, our method can handle significantly larger data sets. Moreover, our low-rank approximation scheme naturally leads to out-of-sample generalization. Experiments performed on both artificial and real-world data show very promising results.
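Low-rank approximation of a kernel matrix is commonly done with a Nyström-style scheme: evaluate the kernel only against a small set of landmark points. The sketch below (RBF kernel, random landmarks, made-up sizes) illustrates the scalability idea in general; it is not necessarily the letter's specific construction.

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.standard_normal((200, 2))

def rbf(A, B, gamma=0.1):
    # Squared Euclidean distances, then the Gaussian kernel.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

m = 20                                    # number of landmark points
idx = rng.choice(len(X), size=m, replace=False)
L = X[idx]

C = rbf(X, L)                             # n x m cross-kernel
W = rbf(L, L)                             # m x m landmark kernel
K_approx = C @ np.linalg.pinv(W) @ C.T    # rank-<=m approximation of the kernel

K_full = rbf(X, X)
rel_err = np.linalg.norm(K_full - K_approx) / np.linalg.norm(K_full)
```

Out-of-sample generalization falls out naturally: a new point `x` only needs `rbf(x[None], L)` against the landmarks to be embedded consistently with the training set, without recomputing the full kernel matrix.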


2020 ◽  
Vol 14 (12) ◽  
pp. 2791-2798
Author(s):  
Xiaoqun Qiu ◽  
Zhen Chen ◽  
Saifullah Adnan ◽  
Hongwei He

2020 ◽  
Vol 6 ◽  
pp. 922-933
Author(s):  
M. Amine Hadj-Youcef ◽  
Francois Orieux ◽  
Alain Abergel ◽  
Aurelia Fraysse

2021 ◽  
Vol 11 (10) ◽  
pp. 4582
Author(s):  
Kensuke Tanioka ◽  
Satoru Hiwa

In the domain of functional magnetic resonance imaging (fMRI) data analysis, given two correlation matrices between regions of interest (ROIs) for the same subject, it is important to reveal relatively large differences to ensure accurate interpretation. However, clustering results based only on the differences tend to be unsatisfactory, and interpreting the features tends to be difficult, because the differences likely suffer from noise. To overcome these problems, we propose a new approach for dimensional reduction clustering. Methods: Our proposed dimensional reduction clustering approach consists of low-rank approximation and a clustering algorithm. The low-rank matrix, which reflects the difference, is estimated from the inner product of the difference matrix, not only from the difference itself. In addition, the low-rank matrix is calculated based on the majorize–minimization (MM) algorithm such that the estimated difference is bounded within the range −1 to 1. For the clustering process, ordinary k-means is applied to the estimated low-rank matrix, which emphasizes the clustering structure. Results: Numerical simulations show that, compared with other approaches based only on the differences, the proposed method provides superior performance in recovering the true clustering structure. Moreover, as demonstrated through a real-data example of brain activity measured via fMRI during a working memory task, the proposed method can visually provide interpretable community structures consisting of well-known brain functional networks, which can be associated with the human working memory system. Conclusions: The proposed dimensional reduction clustering approach is a very useful tool for revealing and interpreting the differences between correlation matrices, even when the true differences tend to be relatively small.
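The pipeline in the Methods paragraph, a low-rank embedding derived from the inner product of the difference matrix followed by k-means, can be sketched as follows. The MM-based bounded estimation is replaced here by a plain eigendecomposition, and the block-structured "difference" matrix is synthetic; both are illustrative simplifications, not the authors' estimator.

```python
import numpy as np

rng = np.random.default_rng(3)
p = 12                                       # number of ROIs

# Synthetic difference matrix with a 2-block structure plus noise,
# standing in for the difference of two ROI correlation matrices.
block = np.kron(np.eye(2), np.ones((p // 2, p // 2)))
Dm = 0.4 * (block - 0.5) + 0.05 * rng.standard_normal((p, p))
Dm = (Dm + Dm.T) / 2                         # symmetrize

# The inner product of the difference emphasizes its low-rank structure.
G = Dm @ Dm.T
w, V = np.linalg.eigh(G)                     # eigenvalues in ascending order
k = 2
Z = V[:, -k:] * np.sqrt(np.maximum(w[-k:], 0.0))  # rank-2 ROI embedding

# Minimal k-means on the embedding (enough iterations for this small example).
centers = Z[rng.choice(p, size=k, replace=False)]
for _ in range(20):
    labels = np.argmin(((Z[:, None] - centers[None]) ** 2).sum(-1), axis=1)
    centers = np.array([Z[labels == c].mean(axis=0) if np.any(labels == c)
                        else centers[c] for c in range(k)])
```

Because the embedding is built from `Dm @ Dm.T` rather than `Dm` alone, ROIs with similar difference profiles land close together, which is what makes the subsequent clustering stable against entrywise noise.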

