Low-Rank and Sparse Multi-task Learning

Author(s):  
Jianhui Chen ◽  
Jiayu Zhou ◽  
Jieping Ye
Keyword(s):  
Low Rank

2018 ◽  
Vol 35 (10) ◽  
pp. 1797-1798 ◽  
Author(s):  
Han Cao ◽  
Jiayu Zhou ◽  
Emanuel Schwarz

Abstract
Motivation: Multi-task learning (MTL) is a machine learning technique for simultaneous learning of multiple related classification or regression tasks. Despite its increasing popularity, MTL algorithms are currently not available in the widely used software environment R, creating a bottleneck for their application in biomedical research.
Results: We developed an efficient, easy-to-use R library for MTL (www.r-project.org) comprising 10 algorithms applicable to regression, classification, joint predictor selection, task clustering, low-rank learning and incorporation of biological networks. We demonstrate the utility of the algorithms using simulated data.
Availability and implementation: The RMTL package is an open-source R package freely available at https://github.com/transbioZI/RMTL. RMTL will also be available on cran.r-project.org.
Supplementary information: Supplementary data are available at Bioinformatics online.
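The low-rank learning that RMTL offers is commonly formulated as trace-norm-regularized joint regression over tasks. A minimal NumPy sketch of that formulation, solved by proximal gradient descent with singular value thresholding, is below; the function names and parameters are ours for illustration and do not reflect RMTL's API.

```python
import numpy as np

def svt(W, tau):
    """Singular value thresholding: the proximal operator of the trace norm."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def low_rank_mtl(Xs, ys, lam=0.1, step=1e-3, iters=500):
    """Trace-norm-regularized multi-task regression by proximal gradient.

    Xs, ys: per-task design matrices and targets (hypothetical helper,
    not RMTL's API). Minimizes sum_t ||X_t w_t - y_t||^2 + lam * ||W||_*,
    where column t of W is the weight vector of task t. The step size
    must stay below 1/L, with L the Lipschitz constant of the gradient.
    """
    d, T = Xs[0].shape[1], len(Xs)
    W = np.zeros((d, T))
    for _ in range(iters):
        G = np.empty_like(W)
        for t in range(T):
            G[:, t] = 2.0 * Xs[t].T @ (Xs[t] @ W[:, t] - ys[t])
        W = svt(W - step * G, step * lam)  # gradient step, then shrink spectrum
    return W
```

On tasks whose true weight vectors share one latent direction, the recovered weight matrix is close to rank one, which is the behavior the trace-norm penalty is designed to induce.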


Author(s):  
Chi Su ◽  
Fan Yang ◽  
Shiliang Zhang ◽  
Qi Tian ◽  
Larry S. Davis ◽  
...  
Keyword(s):  
Low Rank

2019 ◽  
Vol 11 (2) ◽  
pp. 150 ◽  
Author(s):  
Xing Wu ◽  
Xia Zhang ◽  
Nan Wang ◽  
Yi Cen

Target detection is an active area in hyperspectral imagery (HSI) processing, and many algorithms have been proposed over the past decades. However, conventional detectors mainly rely on spectral information without fully exploiting the spatial structure of HSI. They also typically use information from all bands, ignoring inter-band redundancy, and they do not make full use of the difference between background and target samples. To alleviate these problems, we propose a novel joint sparse and low-rank multi-task learning (MTL) algorithm with extended multi-attribute profiles (EMAP), termed MTJSLR-EMAP. Briefly, the spatial features of HSI are first extracted by morphological attribute filters. MTL is then exploited to reduce band redundancy while retaining discriminative information. Considering the distribution difference between background and target samples, background and target pixels are modeled separately with different regularization terms: in each task, a background pixel is represented by a low-rank combination of the background samples, while a target pixel is sparsely represented by the target samples. Finally, the proposed algorithm was compared on three datasets with six detectors: constrained energy minimization (CEM), the adaptive coherence estimator (ACE), hierarchical CEM (hCEM), the sparsity-based detector (STD), the joint sparse representation and MTL detector (JSR-MTL), and independent-encoding JSR-MTL (IEJSR-MTL). Against these competitors it achieves average detection performance improvements of about 19.94%, 22.53%, 16.92%, 14.87%, 14.73% and 4.21%, respectively. Extensive experimental results demonstrate that MTJSLR-EMAP outperforms several state-of-the-art algorithms.
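Representation-based detectors of this kind score a pixel by how well competing sample dictionaries explain it. The toy NumPy sketch below uses ridge-regularized least squares as a simplified stand-in for the paper's sparse (target) and low-rank (background) models; all function names are hypothetical and the score is only the residual gap between the two dictionaries.

```python
import numpy as np

def repr_residual(D, pixel, lam=1e-3):
    """Residual of representing a pixel over a sample dictionary D
    (bands x samples). Ridge least squares stands in for sparse coding."""
    alpha = np.linalg.solve(D.T @ D + lam * np.eye(D.shape[1]), D.T @ pixel)
    return np.linalg.norm(pixel - D @ alpha)

def detect(pixel, target_samples, background_samples):
    """Toy detection score: positive when the target dictionary explains
    the pixel better than the background dictionary (illustrative only;
    MTJSLR-EMAP uses per-task sparse/low-rank models, not plain ridge)."""
    return (repr_residual(background_samples, pixel)
            - repr_residual(target_samples, pixel))
```

A pixel drawn from the span of the target samples yields a positive score, and one from the background span a negative score, which is the decision logic the residual comparison encodes.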


2018 ◽  
Vol 40 (5) ◽  
pp. 1167-1181 ◽  
Author(s):  
Chi Su ◽  
Fan Yang ◽  
Shiliang Zhang ◽  
Qi Tian ◽  
Larry Steven Davis ◽  
...  
Keyword(s):  
Low Rank

Author(s):  
Lu Sun ◽  
Canh Hao Nguyen ◽  
Hiroshi Mamitsuka

Multi-view multi-task learning deals with dual-heterogeneous data, where each sample has multi-view features and multiple tasks are correlated via common views. Existing methods do not sufficiently address three key challenges: (a) capturing task correlation efficiently, (b) building a sparse model, and (c) learning view-wise weights. In this paper, we propose a new method that directly handles these challenges via multiplicative sparse feature decomposition. For (a), the weight matrix is decomposed into two components via low-rank-constrained matrix factorization, which captures task correlation with a reduced number of model parameters. For (b) and (c), the first component is further decomposed into two sub-components, to select topic-specific features and to learn view-wise importance, respectively. Theoretical analysis reveals the equivalence of this decomposition with a general form of joint regularization, and motivates a fast optimization algorithm with linear complexity in the data size. Extensive experiments on both simulated and real-world datasets validate its efficiency.
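The factorization used for challenge (a) — writing the task weight matrix as a product of two thin components so that correlated tasks share a small set of latent directions — can be sketched with alternating least squares. This NumPy sketch covers only that factorization step, not the paper's further sparse and view-wise sub-decomposition; the names are ours for illustration.

```python
import numpy as np

def factored_mtl(X, Y, k=2, iters=10):
    """Sketch: fit Y ≈ X @ A @ B with a shared d x k factor A and
    k x T per-task coefficients B (illustrative, not the paper's method).

    Alternating least squares: each update is the exact minimizer of the
    squared error in one factor with the other held fixed.
    """
    rng = np.random.default_rng(0)
    B = rng.normal(size=(k, Y.shape[1]))      # random init of task coefficients
    for _ in range(iters):
        A = np.linalg.pinv(X) @ Y @ np.linalg.pinv(B)  # solve for shared factor
        B = np.linalg.pinv(X @ A) @ Y                  # solve per-task coefficients
    return A, B
```

Because A has only k columns, the learned weight matrix A @ B has rank at most k, which is how the factorization caps the number of effective model parameters across tasks.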

