A clipping dual coordinate descent algorithm for solving support vector machines

2014 ◽ Vol 71 ◽ pp. 266-278
Author(s): Xinjun Peng, Dongjing Chen, Lingyan Kong


Author(s): Bin Gu, Yingying Shan, Xiang Geng, Guansheng Zheng

Support vector machines (SVMs) have played an important role in machine learning over the last two decades. Traditional SVM solvers (e.g., LIBSVM) are not scalable in the current big-data era. Recently, a state-of-the-art solver was proposed based on the asynchronous greedy coordinate descent (AsyGCD) algorithm. However, AsyGCD is still not scalable enough, and it is limited to binary classification. To address these issues, in this paper we propose an asynchronous accelerated greedy coordinate descent algorithm (AsyAGCD) for SVMs. Compared with AsyGCD, our AsyAGCD has two advantages: 1) AsyAGCD is an accelerated version of AsyGCD because it uses an active-set strategy; specifically, AsyAGCD converges much faster than AsyGCD over the second half of the iterations. 2) AsyAGCD handles more SVM formulations (binary classification as well as regression SVMs) than AsyGCD. We also compare the computational complexity of AsyGCD and AsyAGCD. Experimental results on a variety of datasets and learning applications confirm that AsyAGCD is much faster than existing SVM solvers (including AsyGCD).
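
The abstract combines greedy coordinate descent on the SVM dual with an active-set (shrinking) strategy. As a rough illustration of those two ingredients only, below is a minimal single-threaded sketch of greedy dual coordinate descent for a linear hinge-loss SVM; the asynchronous, multi-threaded machinery that defines AsyGCD/AsyAGCD is omitted, and all names and parameter choices here are illustrative, not the authors' implementation.

```python
import numpy as np

def greedy_cd_svm(X, y, C=1.0, max_iter=2000, tol=1e-4):
    """Minimize 0.5*a'Qa - e'a s.t. 0 <= a_i <= C, with Q_ij = y_i*y_j*x_i.x_j."""
    n, d = X.shape
    alpha = np.zeros(n)
    w = np.zeros(d)                            # maintained as sum_i alpha_i*y_i*x_i
    Qii = np.einsum("ij,ij->i", X, X)          # diagonal of Q (y_i**2 == 1)
    active = np.arange(n)                      # active set of coordinates
    for it in range(max_iter):
        G = y[active] * (X[active] @ w) - 1.0  # dual gradient on the active set
        a = alpha[active]
        PG = np.where((a <= 0.0) & (G > 0.0), 0.0,    # projected gradient:
             np.where((a >= C) & (G < 0.0), 0.0, G))  # zero at a binding bound
        k = int(np.argmax(np.abs(PG)))                # greedy coordinate choice
        if abs(PG[k]) < tol:
            if active.size == n:
                break                          # optimal over all coordinates
            active = np.arange(n)              # re-expand the active set, recheck
            continue
        i = active[k]
        old = alpha[i]
        alpha[i] = min(max(old - G[k] / Qii[i], 0.0), C)  # exact 1-D minimization
        w += (alpha[i] - old) * y[i] * X[i]
        if it % 100 == 99:                     # periodic shrinking: drop bound
            keep = ~(((a <= 0.0) & (G > tol)) |        # variables whose gradient
                     ((a >= C) & (G < -tol)))          # says they will not move
            if keep.any():
                active = active[keep]
    return w, alpha

# Tiny usage example on synthetic data
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = np.sign(X[:, 0] + 0.1 * rng.normal(size=200))
w, alpha = greedy_cd_svm(X, y)
print("train accuracy:", np.mean(np.sign(X @ w) == y))
```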


2011 ◽ Vol 22 (08) ◽ pp. 1761-1779
Author(s): Cyril Allauzen, Corinna Cortes, Mehryar Mohri

This paper presents a novel application of automata algorithms to machine learning. It introduces the first optimization solution for support vector machines used with sequence kernels that is purely based on weighted automata and transducer algorithms, without requiring any specific solver. The algorithms presented apply to a family of kernels covering all those commonly used in text and speech processing or computational biology. We show that these algorithms have significantly better computational complexity than previous ones and report the results of large-scale experiments demonstrating a dramatic reduction of the training time, typically by several orders of magnitude.
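
For context, the automata approach applies to rational kernels, a family that includes the n-gram count kernels widely used in text processing. As a toy illustration of that kernel family only (the paper evaluates such kernels via weighted transducer composition and shortest-distance algorithms, not via the direct counting below), a bigram count kernel can be sketched as:

```python
from collections import Counter

def ngram_kernel(x, y, n=2):
    """K(x, y) = sum over n-grams u of count_x(u) * count_y(u).
    A direct-counting stand-in for one member of the rational-kernel
    family; it is not the automata-based computation from the paper."""
    cx = Counter(x[i:i + n] for i in range(len(x) - n + 1))
    cy = Counter(y[i:i + n] for i in range(len(y) - n + 1))
    return sum(c * cy[u] for u, c in cx.items())

# 'ab' occurs 2x in "abab" and 2x in "babab"; 'ba' occurs 1x and 2x -> 2*2 + 1*2
print(ngram_kernel("abab", "babab", n=2))  # 6
```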

