Maximum Discriminant Difference Criterion for Dimensionality Reduction of Tensor Data

IEEE Access ◽  
2020 ◽  
Vol 8 ◽  
pp. 193593-193607
Author(s):  
Xinya Peng ◽  
Zhengming Ma ◽  
Haowei Xu


2021 ◽  
pp. 1-18
Author(s):  
Ting Gao ◽  
Zhengming Ma ◽  
Wenxu Gao ◽  
Shuyu Liu

There are three contributions in this paper. (1) A tensor version of LLE (Locally Linear Embedding) is derived and presented. LLE is among the best-known manifold learning algorithms, and improvements to it have continued to emerge since its proposal. However, all of these improvements are suitable only for vector data, not tensor data. The proposed tensor LLE can also serve as a bridge for carrying such improvements over from vector data to tensor data. (2) A framework for tensor dimensionality reduction based on the tensor mode product is proposed, in which the mode matrices can be determined according to specific criteria. (3) A novel dimensionality reduction algorithm for tensor data based on LLE and the mode product (LLEMP-TDR) is proposed, in which LLE serves as the criterion for determining the mode matrices. Benefiting from local LLE and the global mode product, LLEMP-TDR preserves both local and global features of high-dimensional tensor data during dimensionality reduction. Experimental results on clustering and classification tasks demonstrate that our method outperforms 5 related algorithms recently published in top academic journals.
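The local step the abstract builds on can be illustrated with classic vector LLE: each point is reconstructed as an affine combination of its nearest neighbours, and the weights are what the embedding later preserves. The sketch below is a minimal numpy version of that weight computation, not the authors' tensor algorithm (which generalises this step via mode products); the function name and regularisation constant are our own assumptions.

```python
import numpy as np

def lle_weights(X, k):
    """Classic LLE reconstruction weights: express each point as an
    affine combination of its k nearest neighbours (vector case)."""
    n = X.shape[0]
    W = np.zeros((n, n))
    for i in range(n):
        # k nearest neighbours of point i, excluding the point itself
        d = np.linalg.norm(X - X[i], axis=1)
        nbrs = np.argsort(d)[1:k + 1]
        # local Gram matrix of neighbour offsets
        Z = X[nbrs] - X[i]
        G = Z @ Z.T
        G += np.eye(k) * 1e-3 * np.trace(G)  # regularise for stability
        w = np.linalg.solve(G, np.ones(k))
        W[i, nbrs] = w / w.sum()             # affine: weights sum to 1
    return W
```

The embedding step then finds low-dimensional points that minimise the same reconstruction error under these fixed weights; the paper's contribution is to pose both steps for tensors without vectorising them.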


2020 ◽  
Vol 16 (4) ◽  
pp. 155014772091640
Author(s):  
Chenquan Gan ◽  
Junwei Mao ◽  
Zufan Zhang ◽  
Qingyi Zhu

Tensor compression algorithms play an important role in the processing of multidimensional signals. In previous work, tensor structure is usually destroyed by vectorization, resulting in information loss and added noise. To address this, this article proposes a tensor compression algorithm using Tucker decomposition and dictionary dimensionality reduction, which comprises three parts: tensor dictionary representation, dictionary preprocessing, and dictionary update. Specifically, the tensor undergoes sparse representation and Tucker decomposition, from which one obtains the dictionary, sparse coefficients, and core tensor. The sparse representation can then be recovered through the relationship between the sparse coefficients and the core tensor. In addition, the dimensionality of the input tensor is reduced by concentrated dictionary learning. Finally, experiments show that, compared with other algorithms, the proposed algorithm has clear advantages in preserving the original data information and in denoising ability.
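The Tucker decomposition underlying this compression scheme can be obtained, in its simplest form, by truncated HOSVD: an SVD of each mode unfolding gives the factor matrices, and contracting the tensor with their transposes gives the core. The sketch below is an illustrative numpy implementation of that standard construction only; it does not include the paper's dictionary learning, and the function names are our own.

```python
import numpy as np

def unfold(T, mode):
    """Mode-n unfolding of a tensor into a matrix."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def hosvd(T, ranks):
    """Truncated HOSVD: Tucker factor matrices from per-mode SVDs,
    then the core tensor via mode products with their transposes."""
    factors = []
    for mode, r in enumerate(ranks):
        U, _, _ = np.linalg.svd(unfold(T, mode), full_matrices=False)
        factors.append(U[:, :r])
    core = T
    for mode, U in enumerate(factors):
        # mode product: contract this mode with U^T
        core = np.moveaxis(
            np.tensordot(U.T, np.moveaxis(core, mode, 0), axes=1), 0, mode)
    return core, factors
```

With full ranks the decomposition is exact; truncating the ranks gives the compressed representation (core plus small factor matrices) that the dictionary stage then operates on.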


2021 ◽  
Vol 40 (5) ◽  
pp. 10307-10322
Author(s):  
Weichao Gan ◽  
Zhengming Ma ◽  
Shuyu Liu

Tensor data are becoming increasingly common in machine learning, and compared with vector data they suffer more severely from the curse of dimensionality. The motivation of this paper is to combine the Hilbert-Schmidt Independence Criterion (HSIC) with tensor algebra to create a new dimensionality reduction algorithm for tensor data. There are three contributions. (1) An HSIC-based algorithm is proposed in which the dimension-reduced tensor is determined by maximizing HSIC between the dimension-reduced and high-dimensional tensors. (2) A tensor algebra-based algorithm is proposed in which the high-dimensional tensors are projected onto a subspace and the projection coordinates serve as the dimension-reduced tensors. The subspace is determined by minimizing the distance between the high-dimensional tensor data and their projections in the subspace. (3) By combining the two, a new dimensionality reduction algorithm, called PDMHSIC, is proposed, in which the dimensionality reduction must satisfy two criteria simultaneously: HSIC maximization and subspace projection distance minimization. The proposed algorithm is a new attempt at combining HSIC with other algorithms, and it achieves better experimental results on 8 commonly used datasets than 7 other well-known algorithms.
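The quantity maximized in contribution (1) is the empirical HSIC between two paired samples. A minimal sketch of that standard estimator with Gaussian kernels is below; it is the textbook biased estimator on flattened samples, not the authors' tensor-valued formulation, and the bandwidth choice is an assumption.

```python
import numpy as np

def hsic(X, Y, sigma=1.0):
    """Empirical HSIC between paired samples X and Y (rows = samples),
    using Gaussian kernels and the standard biased estimator
    HSIC = trace(K H L H) / (n - 1)^2."""
    n = X.shape[0]

    def gram(A):
        sq = np.sum(A ** 2, axis=1)
        D = sq[:, None] + sq[None, :] - 2 * A @ A.T
        return np.exp(-D / (2 * sigma ** 2))

    H = np.eye(n) - np.ones((n, n)) / n     # centering matrix
    K, L = gram(X), gram(Y)
    return np.trace(K @ H @ L @ H) / (n - 1) ** 2
```

A dependent pair (e.g. a sample paired with itself) scores higher than an independent pair, which is why maximizing HSIC between the original and reduced tensors encourages the reduction to retain the information in the data.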


Author(s):  
Htay Htay Win ◽  
Aye Thida Myint ◽  
Mi Cho Cho

For years, achievements and discoveries made by researchers have been disseminated through papers published in appropriate journals or conferences. Established researchers, and especially new authors, are often caught up in the predicament of choosing an appropriate conference for their work. Every scientific conference and journal is inclined towards a particular field of research, and there is an extensive group of them for any given field. Choosing an appropriate venue matters, as it helps in reaching the right audience and furthers one's chance of getting the paper published. In this work, we address the problem of recommending appropriate conferences to authors to increase their chances of acceptance. We present three different approaches, involving the use of the authors' social network and the content of the paper in the settings of dimensionality reduction and topic modelling. In all three approaches, we apply Correspondence Analysis (CA) to obtain relationships between the entities in question, such as conferences and papers. Our models show promising results when compared with existing methods such as content-based filtering, collaborative filtering, and hybrid filtering.
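The CA step described above embeds the rows and columns of a co-occurrence table (e.g. authors vs. conferences) into a shared low-dimensional space via an SVD of standardized residuals. The sketch below is a generic textbook CA in numpy, offered as an illustration of that machinery rather than the authors' pipeline; the example table and function name are our own.

```python
import numpy as np

def correspondence_analysis(N, n_dims=2):
    """Basic correspondence analysis of a contingency table N
    (e.g. author x conference co-occurrence counts). Returns row
    and column principal coordinates in a shared space."""
    P = N / N.sum()
    r = P.sum(axis=1)                        # row masses
    c = P.sum(axis=0)                        # column masses
    # standardized residuals from the independence model
    S = (P - np.outer(r, c)) / np.sqrt(np.outer(r, c))
    U, s, Vt = np.linalg.svd(S, full_matrices=False)
    row_coords = (U[:, :n_dims] * s[:n_dims]) / np.sqrt(r)[:, None]
    col_coords = (Vt.T[:, :n_dims] * s[:n_dims]) / np.sqrt(c)[:, None]
    return row_coords, col_coords
```

In a recommendation setting, proximity between an author's row coordinate and a conference's column coordinate in this space can then be used to rank candidate venues.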


2013 ◽  
Vol 38 (4) ◽  
pp. 465-470 ◽  
Author(s):  
Jingjie Yan ◽  
Xiaolan Wang ◽  
Weiyi Gu ◽  
LiLi Ma

Abstract Speech emotion recognition is a meaningful and challenging problem spanning a number of domains, including sentiment analysis, computer science, and pedagogy. In this study, we investigate speech emotion recognition based on the sparse partial least squares regression (SPLSR) approach in depth. We use sparse partial least squares regression to perform feature selection and dimensionality reduction on the full set of acquired speech emotion features. By exploiting SPLSR, the coefficients of redundant and uninformative speech emotion features are driven to zero, while useful and informative features are retained and passed to the subsequent classification step. Tests on the Berlin database reveal that the recognition rate of the SPLSR method reaches 79.23% and is superior to the other dimensionality reduction methods compared.
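The zeroing behaviour the abstract describes comes from sparsity-inducing penalties on the PLS loading weights. As a hedged illustration (a one-component simplification, not the SPLSR algorithm used in the paper), the sketch below computes the ordinary PLS weight direction X^T y and soft-thresholds it, which sets the loadings of uninformative features to exactly zero; the threshold parameterisation is our own assumption.

```python
import numpy as np

def soft_threshold(v, lam):
    """Shrink toward zero; entries with |v| <= lam become exactly 0."""
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

def sparse_pls_weights(X, y, lam=0.1):
    """One-component sparse-PLS-style weight direction: the PLS
    weight X^T y, soft-thresholded at a fraction lam of its largest
    magnitude, then normalized. Zeroed entries correspond to
    features excluded from the component."""
    w = X.T @ y
    w = soft_threshold(w, lam * np.max(np.abs(w)))
    norm = np.linalg.norm(w)
    return w / norm if norm > 0 else w
```

On data where only a few features carry signal, most loadings are driven to zero, which is the simultaneous feature selection and dimensionality reduction the abstract attributes to SPLSR.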


2009 ◽  
Vol 19 (11) ◽  
pp. 2908-2920
Author(s):  
De-Yu Meng ◽  
Nan-Nan Gu ◽  
Zong-Ben Xu ◽  
Yee Leung
