Incremental Multi-Domain Learning with Network Latent Tensor Factorization

2020 ◽ Vol 34 (07) ◽ pp. 10470-10477
Author(s): Adrian Bulat ◽ Jean Kossaifi ◽ Georgios Tzimiropoulos ◽ Maja Pantic

The prominence of deep learning, large amounts of annotated data, and increasingly powerful hardware have made it possible to reach remarkable performance on supervised classification tasks, in many cases saturating the training sets. However, the resulting models are specialized to a single, very specific task and domain. Adapting the learned classifier to new domains is a hard problem for at least three reasons: (1) the new domains and tasks might be drastically different; (2) there might be a very limited amount of annotated data in the new domain; and (3) fully training a new model for each new task is prohibitive in terms of computation and memory, due to the sheer number of parameters of deep CNNs. In this paper, we present a method to learn new domains and tasks incrementally, building on prior knowledge from already-learned tasks and without catastrophic forgetting. We do so by jointly parametrizing weights across layers using a low-rank Tucker structure. The core tensor is task-agnostic, while a set of task-specific factors is learnt for each new domain. We show that leveraging tensor structure enables better performance than simply using matrix operations. Joint tensor modelling also naturally leverages correlations across different layers. Compared with previous methods, which have focused on adapting each layer separately, our approach results in more compact representations for each new task/domain. We apply the proposed method to the 10 datasets of the Visual Decathlon Challenge and show that it offers, on average, about a 7.5× reduction in the number of parameters, with competitive performance in terms of both classification accuracy and Decathlon score.
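To make the mechanism concrete, here is a minimal sketch (not the authors' implementation): kernels from several layers are stacked into a single weight tensor and factorized with a low-rank Tucker decomposition, so that a shared core can be kept fixed while only the small factor matrices are relearned per domain. The shapes, ranks, and the use of TensorLy are illustrative assumptions.

```python
# Sketch: shared Tucker core + small per-domain factors for a stack of
# conv kernels. Assumes TensorLy; all shapes and ranks are illustrative.
import numpy as np
import tensorly as tl
from tensorly.decomposition import tucker

# Stack L conv layers with identical kernel shape into one 5-way tensor:
# (layers, out_channels, in_channels, kernel_h, kernel_w).
L, C_out, C_in, kh, kw = 8, 64, 64, 3, 3
weights = np.random.randn(L, C_out, C_in, kh, kw).astype(np.float32)

# Low-rank Tucker decomposition: one core (to be kept task-agnostic)
# plus one factor matrix per mode (the task-specific part).
ranks = (4, 32, 32, 3, 3)
core, factors = tucker(tl.tensor(weights), rank=ranks)

# For a new domain, freeze the core and relearn only the factors.
n_core = core.size
n_factors = sum(f.size for f in factors)
print(f"full weights: {weights.size} params, "
      f"core: {n_core}, per-domain factors: {n_factors}")
```

Because the factor matrices are small relative to the full kernel stack, each added domain costs only the factor parameters, which is the source of the parameter reduction the abstract reports.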

Author(s): Clément Luneau ◽ Jean Barbier ◽ Nicolas Macris

We consider a statistical model for finite-rank symmetric tensor factorization and prove a single-letter variational expression for its asymptotic mutual information when the tensor is of even order. The proof applies the adaptive interpolation method originally invented for rank-one factorization. Here we show how to extend the adaptive interpolation to finite-rank and even-order tensors. This requires new, non-trivial ideas beyond the existing analyses in the literature. We also underline where the proof falls short when dealing with odd-order tensors.
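For context, a sketch of the kind of observation model studied here, written in one common normalization of the finite-rank spiked symmetric tensor setting; the symbols λ (signal-to-noise ratio), K (rank), p (order), and the exact scaling are assumptions, and the paper's conventions may differ.

```latex
% Rank-K symmetric spike of (even) order p observed through Gaussian noise:
\[
  Y_{i_1 \dots i_p}
  \;=\; \sqrt{\frac{\lambda}{N^{\,p-1}}}
  \sum_{k=1}^{K} X_{i_1 k}\, X_{i_2 k} \cdots X_{i_p k}
  \;+\; Z_{i_1 \dots i_p},
  \qquad
  Z_{i_1 \dots i_p} \overset{\mathrm{i.i.d.}}{\sim} \mathcal{N}(0,1),
\]
% with X having N i.i.d. rows drawn from a known prior on R^K. The result
% is a single-letter (finite-dimensional) variational formula for the
% asymptotic mutual information per variable, \lim_{N \to \infty} I(X;Y)/N,
% when the order p is even.
```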


2018 ◽ Vol 2018 ◽ pp. 1-11
Author(s): Jinzhi Liao ◽ Jiuyang Tang ◽ Xiang Zhao ◽ Haichuan Shang

POI recommendation is of significant importance in various real-life applications, especially in location-based services such as check-in social networks. In this paper, we propose to solve POI recommendation with a novel dynamic tensor model, among the first of its kind. To make timely recommendations, we predict POIs using a fast low-rank tensor completion algorithm. In particular, the dynamic tensor structure is complemented by the fast low-rank tensor completion algorithm so as to achieve better predictive performance, where parameter optimization is carried out by a pigeon-inspired heuristic algorithm. In short, our dynamic tensor method can exploit the intrinsic characteristics of check-in data, since multimode features such as current categories, subsequent categories, temporal information, and seasonal variations are all integrated into the model. Extensive experimental results not only validate the superiority of the proposed method but also indicate its applicability to large-scale, real-time POI recommendation environments.
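This is not the paper's algorithm (which uses a specialized fast completion routine with pigeon-inspired parameter tuning), but a minimal sketch of the underlying idea: fill the unobserved entries of a (user × POI category × time slot) check-in tensor by fitting a low-rank CP decomposition on the observed entries only. The tensor shape, rank, mask construction, and the use of TensorLy's mask option are illustrative assumptions.

```python
# Sketch: masked low-rank (CP) completion of a check-in tensor.
# Assumes TensorLy; shapes, rank, and synthetic data are illustrative.
import numpy as np
import tensorly as tl
from tensorly.decomposition import parafac
from tensorly.cp_tensor import cp_to_tensor

rng = np.random.default_rng(0)
n_users, n_categories, n_slots = 50, 20, 24

# Synthetic ground-truth low-rank check-in intensities.
U = rng.random((n_users, 4))
C = rng.random((n_categories, 4))
T = rng.random((n_slots, 4))
truth = np.einsum('ur,cr,tr->uct', U, C, T)

# Observe only 20% of the entries (mask = 1 where observed).
mask = rng.random(truth.shape) < 0.2
observed = truth * mask

# Fit CP factors on the observed entries only, then reconstruct.
cp = parafac(tl.tensor(observed), rank=4,
             mask=tl.tensor(mask.astype(float)),
             n_iter_max=200, tol=1e-7)
completed = tl.to_numpy(cp_to_tensor(cp))

# High predicted intensity in an unobserved cell suggests recommending
# that category to that user at that time slot.
err = np.abs(completed - truth)[~mask].mean()
print(f"mean abs error on held-out entries: {err:.3f}")
```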


Symmetry ◽ 2020 ◽ Vol 12 (7) ◽ pp. 1187
Author(s): Peitao Wang ◽ Zhaoshui He ◽ Jun Lu ◽ Beihai Tan ◽ YuLei Bai ◽ ...

Symmetric nonnegative matrix factorization (SNMF) approximates a symmetric nonnegative matrix by the product of a nonnegative low-rank matrix and its transpose. SNMF has been successfully used in many real-world applications such as clustering. In this paper, we propose an accelerated variant of the multiplicative update (MU) algorithm of He et al., designed to solve the SNMF problem. The accelerated algorithm is derived by using the extrapolation scheme of Nesterov and a restart strategy. The extrapolation scheme plays the leading role in accelerating the MU algorithm of He et al., while the restart strategy ensures that the objective function of SNMF is monotonically decreasing. We apply the accelerated algorithm to clustering problems and symmetric nonnegative tensor factorization (SNTF). The experimental results on both synthetic and real-world data show that it is more than four times faster than the MU algorithm of He et al. and performs favorably compared to recent state-of-the-art algorithms.
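A minimal sketch of the general recipe the abstract describes, not the authors' exact algorithm: a multiplicative update for min ‖A − HHᵀ‖²_F over H ≥ 0 (here a β-weighted rule in the style attributed to He et al., with β = 1/2), wrapped in Nesterov-style extrapolation and restarted whenever the objective increases. The specific update rule, momentum schedule, and restart condition below are assumptions.

```python
# Sketch: Nesterov-extrapolated multiplicative updates for SNMF,
#   minimize ||A - H H^T||_F^2  subject to H >= 0.
# The beta-weighted MU rule and restart schedule are assumptions.
import numpy as np

def snmf_accel(A, r, beta=0.5, n_iter=500, eps=1e-10, seed=0):
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    H = rng.random((n, r))
    H_prev = H.copy()
    obj_prev = np.linalg.norm(A - H @ H.T, 'fro') ** 2
    t = 1.0  # extrapolation momentum parameter

    for _ in range(n_iter):
        # Nesterov-style extrapolation point.
        t_next = (1 + np.sqrt(1 + 4 * t * t)) / 2
        gamma = (t - 1) / t_next
        Y = np.maximum(H + gamma * (H - H_prev), eps)

        # Multiplicative update from the extrapolated point:
        # H <- Y * (1 - beta + beta * (A Y) / (Y Y^T Y)).
        H_new = Y * (1 - beta + beta * (A @ Y) / (Y @ (Y.T @ Y) + eps))

        obj = np.linalg.norm(A - H_new @ H_new.T, 'fro') ** 2
        if obj > obj_prev:
            # Restart: drop the momentum and update from H itself, so the
            # objective keeps decreasing (the role of the restart strategy).
            t_next = 1.0
            H_new = H * (1 - beta + beta * (A @ H) / (H @ (H.T @ H) + eps))
            obj = np.linalg.norm(A - H_new @ H_new.T, 'fro') ** 2

        H_prev, H, t, obj_prev = H, H_new, t_next, obj
    return H

# Tiny usage example on a random symmetric nonnegative matrix.
X = np.random.default_rng(1).random((30, 5))
A = X @ X.T
H = snmf_accel(A, r=5)
print(np.linalg.norm(A - H @ H.T, 'fro'))
```

The extrapolated point Y is where the speed-up comes from; the restart branch falls back to the plain (monotone) update whenever extrapolation overshoots, mirroring the division of labor described in the abstract.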


2016 ◽ Vol 54 (6) ◽ pp. 3410-3420
Author(s): Frank de Morsier ◽ Maurice Borgeaud ◽ Volker Gass ◽ Jean-Philippe Thiran ◽ Devis Tuia
