coordinate descent
Recently Published Documents


TOTAL DOCUMENTS: 472 (five years: 153)
H-INDEX: 39 (five years: 5)

Symmetry ◽  
2022 ◽  
Vol 14 (1) ◽  
pp. 113
Author(s):  
Rafał Zdunek ◽  
Krzysztof Fonał

Nonnegative Tucker decomposition (NTD) is a robust method used for nonnegative multilinear feature extraction from nonnegative multi-way arrays. The standard version of NTD assumes that all of the observed data are accessible for batch processing. However, the data in many real-world applications are not static or are represented by a large number of multi-way samples that cannot be processed in one batch. To tackle this problem, a dynamic approach to NTD can be explored. In this study, we extend the standard model of NTD to an incremental or online version, assuming volatility of the observed multi-way data along one mode. We propose two computational approaches for updating the factors in the incremental model: one is based on the recursive update model, and the other uses the concept of the block Kaczmarz method, which belongs to the family of coordinate descent methods. The experimental results on various datasets and streaming data demonstrate the high efficiency of both algorithmic approaches with respect to the baseline NTD methods.
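The block Kaczmarz method mentioned in this abstract sweeps through (blocks of) rows of a linear system, projecting the current iterate onto each row's hyperplane. A minimal single-row cyclic Kaczmarz sketch for a consistent system Ax = b (a generic illustration of the underlying iteration, not the authors' NTD factor update) could look like:

```python
import numpy as np

def kaczmarz(A, b, sweeps=100):
    """Cyclic Kaczmarz iteration for a consistent linear system A x = b.

    Each step projects the iterate onto the hyperplane defined by one
    row of A; the block variant uses groups of rows per projection.
    """
    m, n = A.shape
    x = np.zeros(n)
    for _ in range(sweeps):
        for i in range(m):
            a = A[i]
            x += (b[i] - a @ x) / (a @ a) * a  # project onto {y : a.y = b_i}
    return x

# Small consistent system: the iterates converge to the unique solution.
A = np.array([[2.0, 1.0], [1.0, 3.0]])
x_true = np.array([1.0, -1.0])
b = A @ x_true
x = kaczmarz(A, b)
```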


Author(s):  
Guanhua Ye ◽  
Hongzhi Yin ◽  
Tong Chen ◽  
Miao Xu ◽  
Quoc Viet Hung Nguyen ◽  
...  

Electronics ◽  
2021 ◽  
Vol 10 (23) ◽  
pp. 2966
Author(s):  
Zhi Quan ◽  
Yingying Zhang ◽  
Jie Liu ◽  
Yao Wang

In this paper, we devise an efficient approach for estimating the direction of arrival (DoA). The proposed DoA estimation approach is based on the minimum variance distortionless response (MVDR) criterion within a recursive least squares (RLS) framework. The dichotomous coordinate descent algorithm is used to modify the calculation of the output power spectrum, and a diagonal loading term is applied to improve the robustness of the DoA estimator. These modifications both reduce the computational complexity of the RLS DoA estimator and improve its estimation performance. A numerical comparison confirms that the proposed DoA estimator outperforms the conventional RLS DoA estimator in terms of computational complexity and DoA estimation error. Finally, the proposed DoA estimator is implemented on a field-programmable gate array (FPGA) board to verify the feasibility of the method. The numerical results of a fixed-point implementation demonstrate that its performance is very close to that of its floating-point counterpart.
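The MVDR (Capon) spectrum scanned over candidate angles, with the diagonal loading the abstract mentions, can be sketched for a uniform linear array as follows (a textbook batch version for illustration; the paper's contribution is the recursive DCD-based computation, not shown here):

```python
import numpy as np

def mvdr_spectrum(X, angles_deg, d=0.5):
    """MVDR (Capon) pseudo-spectrum for a uniform linear array.

    X: array snapshots, shape (n_sensors, n_snapshots); d is the
    element spacing in wavelengths. A small diagonal loading term
    stabilises the covariance inverse, as described in the abstract.
    """
    n = X.shape[0]
    R = X @ X.conj().T / X.shape[1]                  # sample covariance
    R += 1e-3 * np.trace(R).real / n * np.eye(n)     # diagonal loading
    Rinv = np.linalg.inv(R)
    spec = []
    for th in np.deg2rad(angles_deg):
        a = np.exp(-2j * np.pi * d * np.arange(n) * np.sin(th))
        spec.append(1.0 / (a.conj() @ Rinv @ a).real)  # P = 1/(a^H R^-1 a)
    return np.array(spec)

# One narrowband source at 20 degrees, 8-sensor half-wavelength ULA.
rng = np.random.default_rng(0)
n, T = 8, 200
steer = np.exp(-2j * np.pi * 0.5 * np.arange(n) * np.sin(np.deg2rad(20)))
s = rng.standard_normal(T) + 1j * rng.standard_normal(T)
noise = 0.1 * (rng.standard_normal((n, T)) + 1j * rng.standard_normal((n, T)))
X = np.outer(steer, s) + noise
angles = np.arange(-90, 91)
est = angles[np.argmax(mvdr_spectrum(X, angles))]
```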


Sensors ◽  
2021 ◽  
Vol 21 (22) ◽  
pp. 7732
Author(s):  
Azam Khalili ◽  
Vahid Vahidpour ◽  
Amir Rastegarnia ◽  
Ali Farzamnia ◽  
Kenneth Teo Tze Kin ◽  
...  

The incremental least-mean-square (ILMS) algorithm is a useful method for performing distributed adaptation and learning in Hamiltonian networks. To implement the ILMS algorithm, each node needs to receive the local estimate of the previous node on the cycle path to update its own local estimate. However, in some practical situations, perfect data exchange may not be possible among the nodes. In this paper, we develop a new version of the ILMS algorithm in which, at each adaptation step, only a random subset of the coordinates of the update vector is available. We compare the proposed coordinate-descent incremental LMS (CD-ILMS) algorithm with the ILMS algorithm in terms of convergence rate and computational complexity. Employing the energy conservation relation approach, we derive closed-form expressions that describe the learning curves in terms of excess mean-square error (EMSE) and mean-square deviation (MSD). We show that the CD-ILMS algorithm has the same steady-state error performance as the ILMS algorithm but a faster convergence rate. Numerical examples are given to verify the efficiency of the CD-ILMS algorithm and the accuracy of the theoretical analysis.
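The core idea of updating only a random subset of the coordinates of the LMS update vector can be sketched for a single node (a simplified illustration under noiseless data, not the full distributed CD-ILMS scheme or its cycle-path exchange):

```python
import numpy as np

def cd_lms(U, d, mu=0.05, frac=0.5, seed=0):
    """LMS with coordinate-descent-style partial updates.

    At each step only a random subset of the weight coordinates is
    updated, mimicking the partial-update adaptation in the abstract.
    U: regressors, shape (T, n); d: desired signal, length T.
    """
    rng = np.random.default_rng(seed)
    n = U.shape[1]
    w = np.zeros(n)
    k = max(1, int(frac * n))
    for u, dv in zip(U, d):
        e = dv - u @ w                               # a priori error
        idx = rng.choice(n, size=k, replace=False)   # random coordinate subset
        w[idx] += mu * e * u[idx]                    # update only those coords
    return w

# Noiseless identification of a 4-tap system.
rng = np.random.default_rng(1)
w_true = np.array([0.5, -1.0, 2.0, 0.0])
U = rng.standard_normal((5000, 4))
d = U @ w_true
w_hat = cd_lms(U, d)
```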


2021 ◽  
Vol 2078 (1) ◽  
pp. 012012
Author(s):  
Song Yao ◽  
Lipeng Cui ◽  
Sining Ma

Abstract In recent years, sparse models have become a research hotspot in the field of artificial intelligence. The Lasso model ignores the group structure among variables and can only select scattered variables, while Group Lasso can only select whole groups of variables. To address this problem, the Sparse Group Log Ridge model is proposed, which can select both groups of variables and individual variables within a group. The model can then be solved by the MM algorithm combined with the block coordinate descent algorithm. Finally, the advantages of the model in terms of variable selection and prediction are shown through experiments.
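The combined group-level and within-group selection described here is classically achieved by the sparse group lasso proximal operator, which a block coordinate descent sweep applies to one group at a time. A minimal sketch of that operator (the standard sparse group lasso prox, shown as a related baseline rather than the authors' Sparse Group Log Ridge penalty):

```python
import numpy as np

def soft(v, t):
    """Elementwise soft-thresholding: the prox of the l1 penalty."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def sparse_group_prox(v, lam1, lam2):
    """Prox of lam1*||v||_1 + lam2*||v||_2 for one group of coefficients.

    First soft-threshold individual coordinates (within-group sparsity),
    then shrink the whole group toward zero, which can zero out entire
    groups -- exactly the two levels of selection described above.
    """
    u = soft(v, lam1)
    norm = np.linalg.norm(u)
    if norm <= lam2:
        return np.zeros_like(u)      # whole group eliminated
    return (1.0 - lam2 / norm) * u   # group shrunk, survivors kept

g1 = sparse_group_prox(np.array([0.1, -0.2, 0.15]), 0.3, 0.5)  # weak group: all zero
g2 = sparse_group_prox(np.array([3.0, 0.1, -2.0]), 0.3, 0.5)   # strong group: middle coord zeroed
```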


2021 ◽  
Vol 2021 ◽  
pp. 1-16
Author(s):  
Melisew Tefera Belachew

Determining the number of clusters in high-dimensional real-life datasets and interpreting the final outcome are among the challenging problems in data science. Discovering the number of classes in cancer and microarray data plays a vital role in the treatment and diagnosis of cancers and other related diseases. Nonnegative matrix factorization (NMF) plays a paramount role as an efficient data exploratory tool for extracting basis features inherent in massive data. Several algorithms based on incorporating sparsity constraints into the nonconvex NMF optimization problem have been applied in the past for analyzing microarray datasets. However, to the best of our knowledge, none of these algorithms uses the block coordinate descent method, which is known for providing closed-form solutions. In this paper, we apply an algorithm developed based on columnwise partitioning and rank-one matrix approximation. We test this algorithm on two well-known cancer datasets: leukemia and multiple myeloma. The numerical results indicate that the proposed algorithm performs significantly better than related state-of-the-art methods. In particular, it is shown that this method is capable of robust clustering and of discovering larger cancer classes in which the cluster splits are stable.
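Block coordinate descent with rank-one approximations for NMF is usually realised as HALS-style updates, where each column of W and row of H has a closed-form nonnegative update against the residual. A minimal sketch (a generic HALS illustration, not the paper's exact columnwise-partitioning algorithm):

```python
import numpy as np

def nmf_hals(V, r, iters=200, seed=0):
    """NMF via rank-one block coordinate descent (HALS-style updates).

    Each component k is updated in closed form against the residual of
    the other components, illustrating the closed-form block updates
    mentioned above. Small positive clipping avoids zero-locked factors.
    """
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, r))
    H = rng.random((r, n))
    for _ in range(iters):
        for k in range(r):
            R = V - W @ H + np.outer(W[:, k], H[k])   # residual without component k
            W[:, k] = np.maximum(R @ H[k], 1e-12) / max(H[k] @ H[k], 1e-12)
            H[k] = np.maximum(W[:, k] @ R, 1e-12) / max(W[:, k] @ W[:, k], 1e-12)
    return W, H

# Exact rank-3 nonnegative data should be reconstructed closely.
rng = np.random.default_rng(1)
V = rng.random((20, 3)) @ rng.random((3, 15))
W, H = nmf_hals(V, 3)
err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
```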


Author(s):  
Deqing Wang ◽  
Zheng Chang ◽  
Fengyu Cong

Abstract Nonnegative tensor decomposition is a versatile tool for multiway data analysis, by which the extracted components are nonnegative and usually sparse. Nevertheless, the sparsity is only a side effect and cannot be explicitly controlled without additional regularization. In this paper, we investigate the nonnegative CANDECOMP/PARAFAC (NCP) decomposition with a sparse regularization term using the ℓ1-norm (sparse NCP). When high sparsity is imposed, the factor matrices will contain more zero components and will not be of full column rank. Thus, the sparse NCP is prone to rank deficiency, and algorithms for sparse NCP may not converge. In this paper, we propose a novel model of sparse NCP with a proximal algorithm. The subproblems in the new model are strongly convex in the block coordinate descent (BCD) framework. Therefore, the new sparse NCP provides a full-column-rank condition and is guaranteed to converge to a stationary point. In addition, we propose an inexact BCD scheme for sparse NCP, in which each subproblem is updated multiple times to speed up the computation. To prove the effectiveness and efficiency of the sparse NCP with the proximal algorithm, we employ two optimization algorithms to solve the model: inexact alternating nonnegative quadratic programming and inexact hierarchical alternating least squares. We evaluate the proposed sparse NCP methods through experiments on synthetic, real-world, small-scale, and large-scale tensor data. The experimental results demonstrate that our proposed algorithms can efficiently impose sparsity on factor matrices, extract meaningful sparse components, and outperform state-of-the-art methods.
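Each BCD subproblem in an ℓ1-regularized nonnegative factorization is a sparse nonnegative least-squares problem, solvable by proximal gradient with a one-sided soft-threshold. A minimal sketch of such a subproblem solver (a generic ISTA-style illustration of the subproblem type, not the authors' proximal NCP scheme):

```python
import numpy as np

def sparse_nn_ls(A, b, lam=0.1, iters=500):
    """Solve min_x 0.5*||Ax - b||^2 + lam*||x||_1  s.t. x >= 0
    by proximal gradient. The prox of lam*||.||_1 plus the nonnegativity
    constraint is a one-sided soft-threshold: max(v - lam/L, 0)."""
    L = np.linalg.norm(A, 2) ** 2        # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        g = A.T @ (A @ x - b)            # gradient of the smooth part
        x = np.maximum(x - (g + lam) / L, 0.0)
    return x

# Recover a sparse nonnegative vector from noiseless measurements.
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 10))
x_true = np.zeros(10)
x_true[1], x_true[4] = 2.0, 1.5
b = A @ x_true
x_hat = sparse_nn_ls(A, b)
```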


2021 ◽  
Author(s):  
Smaglichenko Tatyana ◽  
Smaglichenko Alexander ◽  
Sayankina Maria ◽  
Chigarev Boris
