Optimization Study of Personalized Information Recommendation Model Based on Tensor Decomposition

2014 ◽  
Vol 1042 ◽  
pp. 228-231
Author(s):  
Hong Fei Sun ◽  
Xiao Dang Liu

Building on tensor decomposition (TD) models of the tagging data, and motivated by the high computational complexity of the TD model, the authors apply the pairwise interaction tensor factorization (PITF) method as an optimization. Taking tag recommendation as an example, the article models the interactions among all the tags a user assigns to items. The results show that, at the same target prediction quality, PITF has a clear runtime advantage over the TD and CD models.
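As a rough illustration of why PITF is cheaper than a full TD model, the sketch below (my own toy example, not the paper's code; all sizes and names are invented) scores a tag for a (user, item) post as the sum of two pairwise inner products instead of contracting a dense core tensor:

```python
import numpy as np

rng = np.random.default_rng(0)
n_users, n_items, n_tags, k = 5, 4, 6, 3

# PITF keeps only pairwise user-tag and item-tag interactions,
# dropping the dense core tensor of the full TD model.
user_f = rng.normal(scale=0.1, size=(n_users, k))
item_f = rng.normal(scale=0.1, size=(n_items, k))
tag_u = rng.normal(scale=0.1, size=(n_tags, k))   # tag factors, user side
tag_i = rng.normal(scale=0.1, size=(n_tags, k))   # tag factors, item side

def pitf_score(u, i, t):
    """Score of tag t for the (user u, item i) post."""
    return user_f[u] @ tag_u[t] + item_f[i] @ tag_i[t]

def rank_tags(u, i):
    """All tags ranked for (u, i), best first; O(n_tags * k) per post."""
    scores = user_f[u] @ tag_u.T + item_f[i] @ tag_i.T
    return np.argsort(-scores)

top = rank_tags(0, 0)
```

Scoring is linear in the factor dimension `k`, whereas a Tucker-style TD model must contract a `k × k × k` core for every prediction, which is the complexity gap the abstract refers to.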

2021 ◽  
pp. 2150027
Author(s):  
Junlan Nie ◽  
Ruibo Gao ◽  
Ye Kang

Prediction of urban noise is becoming more significant for tackling noise pollution and protecting human mental health. However, existing noise prediction algorithms neglect not only the correlation between noise regions but also the nonlinearity and sparsity of the data, which results in low accuracy when filling in the missing entries of the data. In this paper, we propose a model based on multiple views and kernel-matrix tensor decomposition to predict the noise situation at different times of day in each region. We first construct a kernel tensor decomposition model using kernel mapping, in order to speed up the decomposition and obtain stable estimates in the prediction system. Then, we analyze the causes of noise from multiple views, computing the similarity between regions and the correlation between noise categories by kernel distance, which improves the credibility of inferring the noise situation and the categories of regions. Finally, we devise a prediction algorithm based on the kernel-matrix tensor factorization model. We evaluate our method on a real dataset, and the experiments verify the advantages of our method over existing baselines.
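The core idea of filling missing noise entries by tensor factorization can be sketched without the kernel machinery. Below is a minimal toy example (my own illustration, not the paper's algorithm: plain CP factorization fitted by gradient descent on observed entries only, with an invented region × time-of-day × category tensor):

```python
import numpy as np

rng = np.random.default_rng(1)
# Toy noise tensor: (region, time-of-day, noise-category) with rank-2 structure.
R, T, C, rank = 6, 4, 3, 2
A0 = rng.normal(size=(R, rank))
B0 = rng.normal(size=(T, rank))
C0 = rng.normal(size=(C, rank))
X = np.einsum('ir,jr,kr->ijk', A0, B0, C0)

mask = rng.random(X.shape) < 0.6          # ~60% of entries observed

def cp(A, B, C_):
    """Reassemble the tensor from CP factor matrices."""
    return np.einsum('ir,jr,kr->ijk', A, B, C_)

A = rng.normal(scale=0.1, size=(R, rank))
B = rng.normal(scale=0.1, size=(T, rank))
C_ = rng.normal(scale=0.1, size=(C, rank))

loss0 = float((mask * (cp(A, B, C_) - X) ** 2).sum())
lr = 0.01
for _ in range(1500):
    E = mask * (cp(A, B, C_) - X)          # error on observed entries only
    A, B, C_ = (A - lr * np.einsum('ijk,jr,kr->ir', E, B, C_),
                B - lr * np.einsum('ijk,ir,kr->jr', E, A, C_),
                C_ - lr * np.einsum('ijk,ir,jr->kr', E, A, B))

loss1 = float((mask * (cp(A, B, C_) - X) ** 2).sum())
```

The fitted factors then predict the unobserved entries. The paper's kernel mapping replaces the linear factor interaction with a kernel-induced one to handle nonlinearity; that extension is not shown here.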


Author(s):  
Yunpeng Chen ◽  
Xiaojie Jin ◽  
Bingyi Kang ◽  
Jiashi Feng ◽  
Shuicheng Yan

The residual unit and its variations are widely used in building very deep neural networks to alleviate optimization difficulty. In this work, we revisit the standard residual function as well as several of its successful variants and propose a unified framework based on tensor Block Term Decomposition (BTD) to explain these apparently different residual functions from the tensor decomposition view. With the BTD framework, we further propose a novel basic network architecture, named the Collective Residual Unit (CRU). CRU further enhances the parameter efficiency of deep residual neural networks by sharing core factors, derived from collective tensor factorization, across the involved residual units. It enables efficient knowledge sharing across multiple residual units, reduces the number of model parameters, lowers the risk of over-fitting, and provides better generalization ability. Extensive experimental results show that our proposed CRU network brings outstanding parameter efficiency -- it achieves classification performance comparable with ResNet-200 while using a model size as small as ResNet-50 on the ImageNet-1k and Places365-Standard benchmark datasets.
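To make the parameter-sharing idea concrete, here is a deliberately simplified toy sketch (my own illustration with invented sizes, not the CRU architecture itself, which operates on convolutions): a stack of residual units where each unit has its own input/output projections but all units share one middle "core" factor, mirroring how CRU shares core factors obtained from collective factorization:

```python
import numpy as np

rng = np.random.default_rng(2)
d, d_mid, n_units = 16, 8, 4

def relu(x):
    return np.maximum(x, 0.0)

# Per-unit input/output projections; one core factor shared by all units.
W_in = [rng.normal(scale=0.1, size=(d_mid, d)) for _ in range(n_units)]
W_core = rng.normal(scale=0.1, size=(d_mid, d_mid))   # shared core factor
W_out = [rng.normal(scale=0.1, size=(d, d_mid)) for _ in range(n_units)]

def cru_stack(x):
    """Run x through n_units residual units with a shared core."""
    for u in range(n_units):
        h = relu(W_in[u] @ x)
        h = relu(W_core @ h)       # same core factor in every unit
        x = x + W_out[u] @ h       # residual (identity) connection
    return x

y = cru_stack(rng.normal(size=d))

# Sharing the core removes (n_units - 1) copies of its parameters.
params_shared = n_units * 2 * d * d_mid + d_mid * d_mid
params_unshared = n_units * (2 * d * d_mid + d_mid * d_mid)
```

The parameter count shows where the savings come from: only the shared block shrinks, while each unit keeps unit-specific factors, which is why CRU can match a much larger unshared network.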


2020 ◽  
Vol 36 (Supplement_1) ◽  
pp. i417-i426
Author(s):  
Assya Trofimov ◽  
Joseph Paul Cohen ◽  
Yoshua Bengio ◽  
Claude Perreault ◽  
Sébastien Lemieux

Abstract

Motivation: The recent development of sequencing technologies revolutionized our understanding of the inner workings of the cell as well as the way disease is treated. A single RNA sequencing (RNA-Seq) experiment, however, measures tens of thousands of parameters simultaneously. While the results are information rich, data analysis provides a challenge. Dimensionality reduction methods help with this task by extracting patterns from the data by compressing it into compact vector representations.

Results: We present the factorized embeddings (FE) model, a self-supervised deep learning algorithm that learns simultaneously, by tensor factorization, gene and sample representation spaces. We ran the model on RNA-Seq data from two large-scale cohorts and observed that the sample representation captures information on single gene and global gene expression patterns. Moreover, we found that the gene representation space was organized such that tissue-specific genes, highly correlated genes as well as genes participating in the same GO terms were grouped. Finally, we compared the vector representation of samples learned by the FE model to other similar models on 49 regression tasks. We report that the representations trained with FE rank first or second in all of the tasks, surpassing, sometimes by a considerable margin, other representations.

Availability and implementation: A toy example in the form of a Jupyter Notebook as well as the code and trained embeddings for this project can be found at: https://github.com/TrofimovAssya/FactorizedEmbeddings.

Supplementary information: Supplementary data are available at Bioinformatics online.
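The joint learning of gene and sample representations can be sketched in a few lines. The toy example below (my own simplification with synthetic data, not the FE model itself: the actual model feeds the concatenated embeddings through an MLP, whereas here the interaction is reduced to a dot product) fits both embedding tables by gradient descent on reconstruction error:

```python
import numpy as np

rng = np.random.default_rng(3)
n_samples, n_genes, k = 8, 20, 4

# Synthetic expression matrix with low-rank structure standing in for RNA-Seq.
S0 = rng.normal(size=(n_samples, k))
G0 = rng.normal(size=(n_genes, k))
expr = S0 @ G0.T

# Jointly learned sample and gene embedding tables.
S = rng.normal(scale=0.1, size=(n_samples, k))
G = rng.normal(scale=0.1, size=(n_genes, k))

lr = 0.01
for _ in range(3000):
    err = S @ G.T - expr               # reconstruction error, all (sample, gene) pairs
    S, G = S - lr * err @ G, G - lr * err.T @ S

loss = float(np.mean((S @ G.T - expr) ** 2))
```

After training, rows of `S` are compact sample representations and rows of `G` are gene representations; in the paper it is these learned spaces, not the reconstruction itself, that are evaluated on downstream tasks.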

