active set strategy
Recently Published Documents


TOTAL DOCUMENTS: 60 (five years: 3)

H-INDEX: 15 (five years: 0)

Author(s): Liping Zhang, Shouqiang Du

A new exchange method is presented for semi-infinite optimization problems with polyhedron constraints. The basic idea is to use an active set strategy as the exchange rule to construct an approximate problem with finitely many constraints at each iteration. Under mild conditions, we prove that the proposed algorithm terminates in a finite number of iterations, and that the solution of the approximate problem at the final iteration converges to the solution of the original problem within an arbitrarily given tolerance. Numerical results indicate that the proposed algorithm is efficient and promising.
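To make the exchange idea concrete, the sketch below applies an active-set exchange rule to a small linear semi-infinite program, max x1 + x2 subject to cos(t) x1 + sin(t) x2 <= 1 for all t in [0, pi/2] and x >= 0: at each iteration a finite subproblem is solved, indices whose constraints are inactive at the current solution are dropped, and the most violated index is added. The problem data, the grid standing in for the infinite index set, and the tolerances are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch of an exchange method with an active-set exchange rule for a
# linear semi-infinite program (illustrative data, not the paper's algorithm).
import numpy as np
from scipy.optimize import linprog

c = np.array([-1.0, -1.0])                       # minimize -x1 - x2, i.e. maximize x1 + x2
a = lambda t: np.array([np.cos(t), np.sin(t)])   # constraint normal for index t
b = 1.0                                          # right-hand side, the same for every t
grid = np.linspace(0.0, np.pi / 2, 2001)         # finite surrogate for the infinite index set
tol = 1e-6

T_k = [0.0, np.pi / 2]                           # initial finite index set
x = np.zeros(2)
for it in range(100):
    A = np.array([a(t) for t in T_k])
    ub = np.full(len(T_k), b)
    res = linprog(c, A_ub=A, b_ub=ub, bounds=[(0, None)] * 2, method="highs")
    x = res.x

    # Exchange rule: keep only indices whose constraints are (nearly) active at x ...
    T_k = [t for t in T_k if a(t) @ x >= b - 1e-6]
    # ... then add the most violated index found on the grid, if any.
    viol = np.array([a(t) @ x - b for t in grid])
    if viol.max() <= tol:
        break                                    # no constraint of the infinite family is violated
    T_k.append(grid[np.argmax(viol)])

print("approximate solution:", x)
```

On this toy example the iterates approach (1/sqrt(2), 1/sqrt(2)), the maximizer of x1 + x2 over the quarter disk, while the finite index set stays small because inactive constraints are discarded at every iteration.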


Author(s): Bin Gu, Yingying Shan, Xiang Geng, Guansheng Zheng

Support vector machines have played an important role in machine learning over the last two decades. Traditional SVM solvers (e.g. LIBSVM) are not scalable in the current big data era. Recently, a state-of-the-art solver was proposed based on the asynchronous greedy coordinate descent (AsyGCD) algorithm. However, AsyGCD is still not scalable enough, and it is limited to binary classification. To address these issues, in this paper we propose an asynchronous accelerated greedy coordinate descent algorithm (AsyAGCD) for SVMs. Compared with AsyGCD, our AsyAGCD has the following two advantages: 1) AsyAGCD is an accelerated version of AsyGCD because an active set strategy is used; in particular, AsyAGCD converges much faster than AsyGCD over the second half of the iterations. 2) AsyAGCD can handle more SVM formulations (including binary classification and regression SVMs) than AsyGCD. We provide a comparison of the computational complexity of AsyGCD and AsyAGCD. Experimental results on a variety of datasets and learning applications confirm that AsyAGCD is much faster than existing SVM solvers (including AsyGCD).
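For context, below is a minimal serial sketch of greedy coordinate descent on the dual of a linear hinge-loss SVM, min over alpha of 0.5 * alpha'Q alpha - e'alpha with 0 <= alpha <= C: each step updates the coordinate with the largest projected-gradient violation, and the greedy search is restricted to an active set that is refreshed between sweeps. This is only the serial core of the idea, not the paper's asynchronous AsyAGCD; the synthetic data, C, thresholds, and iteration limits are illustrative assumptions.

```python
# Serial sketch: greedy coordinate descent on the SVM dual with an active set
# restricting the greedy search (illustrative only; AsyAGCD is asynchronous).
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))                      # toy data: 200 samples, 5 features
y = np.sign(X @ rng.normal(size=5))                # linearly separable labels in {-1, +1}
C = 1.0                                            # box constraint of the dual

n = X.shape[0]
Q = (y[:, None] * X) @ (y[:, None] * X).T          # Q_ij = y_i y_j x_i . x_j
alpha = np.zeros(n)
grad = -np.ones(n)                                 # gradient of 0.5*a'Qa - e'a at a = 0
active = np.arange(n)                              # active set: all coordinates at start

for epoch in range(50):
    for _ in range(active.size):
        g_act, a_act = grad[active], alpha[active]
        # projected-gradient violation of each active coordinate
        pg = np.where(a_act <= 0, np.minimum(g_act, 0),
                      np.where(a_act >= C, np.maximum(g_act, 0), g_act))
        i = active[np.argmax(np.abs(pg))]          # greedy rule: pick the worst violator
        new_ai = np.clip(alpha[i] - grad[i] / Q[i, i], 0.0, C)
        delta = new_ai - alpha[i]
        if abs(delta) < 1e-12:
            break
        alpha[i] = new_ai
        grad += delta * Q[:, i]                    # keep the full gradient up to date
    # active-set refresh: keep free coordinates and bound coordinates that still violate
    pg_all = np.where(alpha <= 0, np.minimum(grad, 0),
                      np.where(alpha >= C, np.maximum(grad, 0), grad))
    active = np.flatnonzero((np.abs(pg_all) > 1e-6) | ((alpha > 0) & (alpha < C)))
    if active.size == 0:
        break                                      # optimality conditions hold: stop

w = (alpha * y) @ X                                # primal weights of the linear SVM
print("train accuracy:", np.mean(np.sign(X @ w) == y))
```

Restricting the greedy search to the active set is what makes the later sweeps cheap: once most dual variables are pinned at 0 or C and no longer violate optimality, they are skipped entirely, which mirrors the speed-up over the second half of the iterations described in the abstract.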


2017, Vol. 87 (311), pp. 1283-1305
Author(s): Wanyou Cheng, Yu-Hong Dai

2014, Vol. 59 (31), pp. 4152-4160
Author(s): Xiao-Jian Ding, Bao-Fang Chang

2014, Vol. 989-994, pp. 2398-2401
Author(s): Xiao Wei Jiang, Yue Ting Yang, Yun Long Lu

A method of multipliers is presented for solving optimization problems. For large-scale constrained problems, we combine an active set strategy with the aggregate function to approximate the max-value function. Only a few of the constraint functions are involved at each iteration, so the cost of computing the gradient is significantly reduced. The numerical results show that the method is effective.
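The aggregate function is commonly taken to be the log-sum-exp (maximum-entropy) smoothing F_p(x) = (1/p) ln sum_i exp(p g_i(x)) of the max-value function max_i g_i(x); assuming that form here, the sketch below shows how an active set of near-maximal components keeps the sum small. The smoothing parameter p, the drop threshold, and the example values are illustrative assumptions, not taken from the paper.

```python
# Sketch of the aggregate (log-sum-exp) smoothing of max_i g_i, evaluated only
# on an active set of near-maximal components (illustrative parameters).
import numpy as np

def aggregate_max(g_vals, p=100.0, threshold=None):
    """Smooth approximation of max_i g_i using only near-maximal components.

    F_p = (1/p) * log(sum_{i in A} exp(p * g_i)),
    with A = {i : g_i >= max_j g_j - threshold} the active set.
    """
    g_vals = np.asarray(g_vals, dtype=float)
    g_max = g_vals.max()
    if threshold is None:
        threshold = 10.0 / p                     # dropped terms contribute at most ~exp(-10)
    active = g_vals >= g_max - threshold         # active set of near-maximal components
    # shift by g_max before exponentiating for numerical stability
    value = g_max + np.log(np.exp(p * (g_vals[active] - g_max)).sum()) / p
    return value, np.flatnonzero(active)

# Example: only the two near-maximal constraint values enter the aggregate.
value, active = aggregate_max([0.30, 1.00, 0.99])
print(value, active)                             # value slightly above max = 1.0; active = [1 2]
```

Since the gradient of F_p is a convex combination of the gradients of the g_i with weights exp(p g_i) / sum_j exp(p g_j), only the gradients of the active components need to be evaluated, which is the source of the savings described in the abstract.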

