block coordinate descent
Recently Published Documents


TOTAL DOCUMENTS: 106 (FIVE YEARS 42)

H-INDEX: 14 (FIVE YEARS 3)

Symmetry ◽ 2022 ◽ Vol 14 (1) ◽ pp. 113
Author(s):  
Rafał Zdunek ◽  
Krzysztof Fonał

Nonnegative Tucker decomposition (NTD) is a robust method for nonnegative multilinear feature extraction from nonnegative multi-way arrays. The standard version of NTD assumes that all of the observed data are accessible for batch processing. However, the data in many real-world applications are not static, or are represented by a large number of multi-way samples that cannot be processed in one batch. To tackle this problem, a dynamic approach to NTD can be explored. In this study, we extend the standard NTD model to an incremental or online version, assuming volatility of the observed multi-way data along one mode. We propose two computational approaches for updating the factors in the incremental model: one is based on a recursive update model, and the other uses the block Kaczmarz method, which belongs to the family of coordinate descent methods. Experimental results on various datasets and streaming data demonstrate the high efficiency of both algorithmic approaches relative to baseline NTD methods.
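The block Kaczmarz method mentioned in the abstract cycles over blocks of rows of a linear system, projecting the current iterate onto the solution set of each block in turn. A minimal sketch of this idea for a consistent system (not the authors' incremental NTD updates, which operate on tensor factors):

```python
import numpy as np

def block_kaczmarz(A, b, block_size=2, sweeps=200):
    """Cyclic block Kaczmarz: project the iterate onto the solution
    set of one block of rows A_I x = b_I per step."""
    m, n = A.shape
    x = np.zeros(n)
    blocks = [list(range(i, min(i + block_size, m)))
              for i in range(0, m, block_size)]
    for _ in range(sweeps):
        for idx in blocks:
            Ai, bi = A[idx], b[idx]
            # least-norm correction: x <- x + Ai^+ (bi - Ai x)
            x = x + np.linalg.pinv(Ai) @ (bi - Ai @ x)
    return x

# consistent system: the iterates converge to the true solution
rng = np.random.default_rng(0)
A = rng.standard_normal((6, 3))
x_true = np.array([1.0, -2.0, 0.5])
b = A @ x_true
x = block_kaczmarz(A, b)
```

Each block projection is cheap, which is what makes the method attractive for streaming updates where new slices of data arrive one block at a time.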


Author(s):  
Guanhua Ye ◽  
Hongzhi Yin ◽  
Tong Chen ◽  
Miao Xu ◽  
Quoc Viet Hung Nguyen ◽  
...  

2021 ◽ Vol 2078 (1) ◽ pp. 012012
Author(s):  
Song Yao ◽  
Lipeng Cui ◽  
Sining Ma

Abstract In recent years, sparse models have become a research hotspot in the field of artificial intelligence. The Lasso model ignores the group structure among variables and can only select scattered individual variables, while Group Lasso can only select whole groups of variables. To address this problem, the Sparse Group Log Ridge model is proposed, which can select both groups of variables and individual variables within a group. The model can then be solved by the MM algorithm combined with the block coordinate descent algorithm. Finally, experiments demonstrate the advantages of the model in terms of variable selection and prediction.
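The Sparse Group Log Ridge model itself is the paper's contribution, but the core mechanism behind selecting both groups and within-group variables is the standard sparse-group proximal operator: elementwise soft-thresholding (the lasso part) followed by group shrinkage (the group-lasso part). A sketch of that operator for one coefficient group, under the usual sparse group lasso penalty rather than the authors' log-ridge variant:

```python
import numpy as np

def soft_threshold(v, t):
    """Elementwise prox of the l1 penalty: shrink toward zero by t."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def sparse_group_prox(v, lam1, lam2):
    """Prox of lam1*||v||_1 + lam2*||v||_2 for one group of coefficients.
    Step 1 zeros individual variables; step 2 can zero the whole group."""
    u = soft_threshold(v, lam1)
    norm = np.linalg.norm(u)
    if norm <= lam2:
        return np.zeros_like(u)   # entire group deselected
    return (1.0 - lam2 / norm) * u

# one large and one small coefficient in the same group:
# the small one is zeroed individually, the group survives shrunken
r = sparse_group_prox(np.array([3.0, -0.5]), 1.0, 1.0)
```

In a block coordinate descent loop, each group is updated in turn by a gradient step on the smooth loss followed by this prox, which is why the combined penalty yields sparsity both between and within groups.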


Author(s):  
Deqing Wang ◽  
Zheng Chang ◽  
Fengyu Cong

Abstract Nonnegative tensor decomposition is a versatile tool for multiway data analysis, by which the extracted components are nonnegative and usually sparse. Nevertheless, the sparsity is only a side effect and cannot be explicitly controlled without additional regularization. In this paper, we investigated the nonnegative CANDECOMP/PARAFAC (NCP) decomposition with a sparse regularization term using the $l_1$-norm (sparse NCP). When high sparsity is imposed, the factor matrices will contain more zero components and will not be of full column rank. Thus, the sparse NCP is prone to rank deficiency, and algorithms for sparse NCP may not converge. In this paper, we proposed a novel model of sparse NCP with the proximal algorithm. The subproblems in the new model are strongly convex in the block coordinate descent (BCD) framework. Therefore, the new sparse NCP preserves the full column rank condition and is guaranteed to converge to a stationary point. In addition, we proposed an inexact BCD scheme for sparse NCP, where each subproblem is updated multiple times to speed up the computation. In order to prove the effectiveness and efficiency of the sparse NCP with the proximal algorithm, we employed two optimization algorithms to solve the model, including inexact alternating nonnegative quadratic programming and inexact hierarchical alternating least squares. We evaluated the proposed sparse NCP methods by experiments on synthetic, real-world, small-scale, and large-scale tensor data. The experimental results demonstrate that our proposed algorithms can efficiently impose sparsity on factor matrices, extract meaningful sparse components, and outperform state-of-the-art methods.
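The abstract's key point is that adding a proximal term makes each BCD subproblem strongly convex even when $l_1$ sparsity drives factor matrices toward rank deficiency. A matrix-factorization analogue of this scheme (a simplified sketch, not the authors' tensor algorithm; the function name and defaults are illustrative), where each block is updated by a projected proximal-gradient step:

```python
import numpy as np

def sparse_nmf_bcd(Y, rank, lam=0.1, mu=1e-3, iters=200, seed=0):
    """Sparse nonnegative factorization Y ~ W H by proximal BCD.
    Each block minimizes 0.5||Y - W H||_F^2 + lam*||H||_1 plus a
    proximal term (mu/2)||block - block_old||_F^2, so every
    subproblem is strongly convex with modulus at least mu."""
    rng = np.random.default_rng(seed)
    m, n = Y.shape
    W = rng.random((m, rank)) + 0.1
    H = rng.random((rank, n)) + 0.1
    for _ in range(iters):
        # H-block: gradient step, l1 shrinkage, nonnegativity projection
        L = np.linalg.norm(W.T @ W, 2) + mu   # Lipschitz + proximal modulus
        G = W.T @ (W @ H - Y)                 # prox anchor is current H
        H = np.maximum(H - (G + lam) / L, 0.0)
        # W-block: gradient step and nonnegativity projection (no l1 here)
        L = np.linalg.norm(H @ H.T, 2) + mu
        G = (W @ H - Y) @ H.T
        W = np.maximum(W - G / L, 0.0)
    return W, H

# exact rank-3 nonnegative data: the fit improves over iterations
rng = np.random.default_rng(1)
Y = rng.random((20, 3)) @ rng.random((3, 15))
W, H = sparse_nmf_bcd(Y, rank=3)
```

The `mu` term is what the paper exploits: it keeps the step size well defined and the subproblem strongly convex even if `W` or `H` loses full column rank under heavy sparsity.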


2021 ◽  
Author(s):  
Paarth Shah ◽  
Avadesh Meduri ◽  
Wolfgang Merkt ◽  
Majid Khadiv ◽  
Ioannis Havoutis ◽  
...  
