A Fast Active Set Block Coordinate Descent Algorithm for $\ell_1$-Regularized Least Squares

2016 ◽  
Vol 26 (1) ◽  
pp. 781-809 ◽  
Author(s):  
Marianna De Santis ◽  
Stefano Lucidi ◽  
Francesco Rinaldi
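
No abstract is shown for this entry, but the title names the problem it addresses: $\ell_1$-regularized least squares, $\min_x \tfrac{1}{2}\|Ax-b\|_2^2 + \lambda\|x\|_1$. As a minimal point of reference only (not the paper's active set block method), plain cyclic coordinate descent with soft-thresholding can be sketched as follows; all function names are illustrative.

```python
import numpy as np

def soft_threshold(z, t):
    """Soft-thresholding operator: sign(z) * max(|z| - t, 0)."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def lasso_cd(A, b, lam, n_iter=100):
    """Cyclic coordinate descent for min_x 0.5*||Ax - b||^2 + lam*||x||_1."""
    n = A.shape[1]
    x = np.zeros(n)
    col_sq = (A ** 2).sum(axis=0)          # precomputed ||a_j||^2
    r = b.astype(float).copy()             # residual b - A@x, kept incrementally
    for _ in range(n_iter):
        for j in range(n):
            if col_sq[j] == 0.0:
                continue
            # exact 1-D minimization over x_j with all other coordinates fixed
            rho = A[:, j] @ r + col_sq[j] * x[j]
            x_new = soft_threshold(rho, lam) / col_sq[j]
            r += A[:, j] * (x[j] - x_new)  # keep residual consistent
            x[j] = x_new
    return x
```

Judging from the title, the paper accelerates this scheme by combining an active set estimate with block (rather than single-coordinate) updates, both of which the sketch above omits.
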
2021 ◽  
Vol 2078 (1) ◽  
pp. 012012
Author(s):  
Song Yao ◽  
Lipeng Cui ◽  
Sining Ma

Abstract In recent years, sparse models have become a research hotspot in artificial intelligence. The Lasso model ignores the group structure among variables and can only select scattered individual variables, while Group Lasso can only select whole groups of variables. To address this problem, the Sparse Group Log Ridge model is proposed, which can select both groups of variables and individual variables within a group. The resulting problem can be solved by the MM algorithm combined with the block coordinate descent algorithm. Finally, experiments demonstrate the advantages of the model in terms of variable selection and prediction.
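
The abstract names the recipe (an MM outer loop around block coordinate descent) but not the exact Sparse Group Log Ridge penalty, so the following is only a hedged sketch under an assumed objective $\tfrac{1}{2}\|Ax-b\|_2^2 + \lambda_g \sum_g \log(\varepsilon + \|x_g\|_2) + \lambda_1\|x\|_1$: the concave log term is majorized by its tangent, turning each MM step into a re-weighted sparse group lasso solved by block coordinate descent. The penalty and all function names are assumptions for illustration.

```python
import numpy as np

def soft_threshold(z, t):
    """Elementwise soft-thresholding (selects variables within a group)."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def group_soft_threshold(v, t):
    """Blockwise l2 shrinkage (keeps or discards a whole group)."""
    nv = np.linalg.norm(v)
    return np.zeros_like(v) if nv <= t else (1.0 - t / nv) * v

def mm_block_cd(A, b, groups, lam_group, lam_l1, eps=1e-3,
                mm_iters=10, cd_iters=20):
    """Sketch: MM outer loop + block coordinate descent inner loop for
    0.5*||Ax-b||^2 + lam_group * sum_g log(eps + ||x_g||_2) + lam_l1 * ||x||_1.
    The concave log term is majorized by its tangent at the current iterate,
    so each MM step is a re-weighted sparse group lasso subproblem."""
    x = np.zeros(A.shape[1])
    for _ in range(mm_iters):
        # MM weights: derivative of log(eps + t) at t = ||x_g||_2
        w = [lam_group / (eps + np.linalg.norm(x[g])) for g in groups]
        for _ in range(cd_iters):
            for g, wg in zip(groups, w):
                Ag = A[:, g]
                r = b - A @ x + Ag @ x[g]               # residual without block g
                L = np.linalg.norm(Ag, 2) ** 2 + 1e-12  # block Lipschitz constant
                z = x[g] - Ag.T @ (Ag @ x[g] - r) / L   # gradient step on block g
                # sparse-group prox: elementwise, then groupwise shrinkage
                x[g] = group_soft_threshold(soft_threshold(z, lam_l1 / L), wg / L)
    return x
```

Here `groups` would be index arrays partitioning the columns of `A`, e.g. `[np.arange(0, 5), np.arange(5, 12)]`; the composed elementwise-then-groupwise shrinkage is what allows the model to zero out entire groups as well as single variables inside a surviving group.
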


Author(s):  
Bin Gu ◽  
Yingying Shan ◽  
Xiang Geng ◽  
Guansheng Zheng

Support vector machines (SVMs) have played an important role in machine learning over the last two decades. Traditional SVM solvers (e.g., LIBSVM) do not scale to the current big data era. Recently, a state-of-the-art solver was proposed based on the asynchronous greedy coordinate descent (AsyGCD) algorithm. However, AsyGCD is still not scalable enough, and it is limited to binary classification. To address these issues, in this paper we propose an asynchronous accelerated greedy coordinate descent algorithm (AsyAGCD) for SVMs. Compared with AsyGCD, AsyAGCD has two advantages: 1) it is an accelerated version of AsyGCD because an active set strategy is used; specifically, AsyAGCD converges much faster than AsyGCD over the second half of the iterations. 2) It handles more SVM formulations (including binary classification and regression SVMs) than AsyGCD. We also compare the computational complexity of AsyGCD and AsyAGCD. Experimental results on a variety of datasets and learning applications confirm that AsyAGCD is much faster than existing SVM solvers (including AsyGCD).
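
AsyAGCD itself is asynchronous, accelerated, and maintains an active set; none of that is reproduced here. As a minimal serial baseline for the greedy rule it builds on, the sketch below runs greedy coordinate descent on the hinge-loss linear SVM dual, picking at each step the coordinate with the largest projected-gradient violation. The function name and defaults are illustrative assumptions, and labels `y` are assumed to be ±1.

```python
import numpy as np

def greedy_cd_svm(X, y, C=1.0, n_iter=1000, tol=1e-6):
    """Serial greedy coordinate descent on the hinge-loss linear SVM dual:
        min_a 0.5 * ||sum_i a_i y_i x_i||^2 - sum_i a_i,  s.t. 0 <= a_i <= C.
    Each step picks the coordinate with the largest projected-gradient
    violation, then performs an exact one-dimensional update."""
    n, d = X.shape
    alpha = np.zeros(n)
    w = np.zeros(d)                       # w = sum_i alpha_i * y_i * x_i
    Qii = (X ** 2).sum(axis=1) + 1e-12    # diagonal of the Gram matrix
    for _ in range(n_iter):
        grad = y * (X @ w) - 1.0          # dual gradient
        pg = grad.copy()                  # projected gradient under the box [0, C]
        pg[(alpha <= 0.0) & (grad > 0.0)] = 0.0
        pg[(alpha >= C) & (grad < 0.0)] = 0.0
        i = int(np.argmax(np.abs(pg)))    # greedy coordinate choice
        if abs(pg[i]) < tol:              # KKT conditions approximately satisfied
            break
        new_ai = np.clip(alpha[i] - grad[i] / Qii[i], 0.0, C)
        w += (new_ai - alpha[i]) * y[i] * X[i]
        alpha[i] = new_ai
    return w, alpha
```

Computing the full gradient to make the greedy choice is the expensive part of each step; asynchronous variants such as AsyGCD and AsyAGCD parallelize this selection-and-update loop across threads.
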

