A BOUNDARY METHOD TO SPEED UP TRAINING SUPPORT VECTOR MACHINES

2007 ◽  
pp. 1209-1213 ◽  
Author(s):  
Y. Wang ◽  
C.G. Zhou ◽  
Y.X. Huang ◽  
Y.C. Liang ◽  
X.W. Yang

2012 ◽  
Vol 86 ◽  
pp. 193-198 ◽  
Author(s):  
Yun Yang ◽  
Qiaochu He ◽  
Xiaolin Hu

2020 ◽  
Vol 26 (3) ◽  
pp. 42-53
Author(s):  
Vuk Vranjkovic ◽  
Rastislav Struharik

In this paper, a hardware accelerator for sparse support vector machines (SVMs) is proposed. We believe that the proposed accelerator is the first of its kind. The accelerator is designed for use in field-programmable gate array (FPGA) systems. Additionally, a novel algorithm for pruning SVM models is developed. The pruned SVM model has a smaller memory footprint and can be processed faster than dense SVM models, which is a significant advantage in systems with memory-throughput, compute, or power constraints, such as edge computing. Experiments on several standard datasets are conducted, whose aim is to compare the efficiency of the proposed architecture and the developed algorithm with existing solutions. The results reveal that the proposed hardware architecture and SVM pruning algorithm have superior characteristics in comparison to previous work in the field: a memory reduction of 3% to 85% is achieved, with a speed-up ranging from 1.17 to 7.92.
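The abstract does not describe the pruning algorithm itself, but the general idea of pruning an SVM model can be illustrated with a generic magnitude-based heuristic: drop the support vectors whose dual coefficients contribute least to the decision function, shrinking the model's memory footprint and per-inference kernel work. The sketch below is an assumption-laden illustration of that generic idea, not the paper's algorithm; all function and parameter names (`prune_svm`, `keep_fraction`, etc.) are hypothetical.

```python
import numpy as np

def rbf_kernel(X, Y, gamma=0.5):
    # Pairwise RBF kernel between the rows of X and the rows of Y.
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * d2)

def prune_svm(support_vectors, dual_coef, bias, keep_fraction=0.5):
    # Generic magnitude-based pruning: keep only the support vectors with
    # the largest |alpha_i|. This is NOT the paper's algorithm (which the
    # abstract does not describe); it is a common baseline heuristic.
    k = max(1, int(len(dual_coef) * keep_fraction))
    idx = np.argsort(-np.abs(dual_coef))[:k]
    return support_vectors[idx], dual_coef[idx], bias

def decision(x, support_vectors, dual_coef, bias, gamma=0.5):
    # SVM decision function: sum_i alpha_i * K(x, sv_i) + b.
    # After pruning, this sum runs over fewer support vectors,
    # reducing both memory traffic and compute.
    return rbf_kernel(x, support_vectors, gamma) @ dual_coef + bias
```

Pruning to half the support vectors roughly halves the kernel evaluations per classification, which is where the memory-footprint and speed-up gains reported in the abstract would come from on a throughput-limited FPGA system.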

