A geometric approach to Support Vector Machine (SVM) classification

2006 ◽  
Vol 17 (3) ◽  
pp. 671-682 ◽  
Author(s):  
M.E. Mavroforakis ◽  
S. Theodoridis


2017 ◽
Vol 2017 ◽  
pp. 1-11 ◽  
Author(s):  
Andronicus A. Akinyelu ◽  
Aderemi O. Adewumi

Support vector machine (SVM) is one of the top picks for pattern recognition and classification tasks. It has been used successfully to classify both linearly separable and nonlinearly separable data with high accuracy. However, in terms of classification speed, SVMs are outperformed by many machine learning algorithms, especially when massive datasets are involved. SVM classification time scales linearly with the number of support vectors, and the number of support vectors grows with the size of the dataset. Hence, SVM classification can be greatly accelerated if the model is trained on a reduced dataset. Instance selection is one of the most effective techniques for minimizing SVM training time. In this study, two instance selection techniques for identifying relevant training instances are proposed. The techniques are evaluated on a dataset containing 4000 emails, and the results obtained are compared with other existing techniques. The results reveal an excellent improvement in SVM classification speed.
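
The two proposed techniques are not reproduced below. As a rough illustration of the general idea only, the following Python sketch uses a hypothetical k-nearest-neighbour boundary filter (not the authors' method) to shrink a synthetic 4000-sample training set before fitting scikit-learn's SVC, and then compares support-vector counts and accuracy against training on the full set.

```python
# Hypothetical sketch: k-NN-based boundary instance selection before SVM training.
# This is NOT the paper's technique; it only illustrates shrinking the training
# set to cut the number of support vectors and hence the classification time.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neighbors import NearestNeighbors
from sklearn.svm import SVC

def select_boundary_instances(X, y, k=10):
    """Keep only instances whose k nearest neighbours contain at least one
    point of a different class (i.e. points close to the decision boundary)."""
    nn = NearestNeighbors(n_neighbors=k + 1).fit(X)
    _, idx = nn.kneighbors(X)
    mask = np.array([np.any(y[idx[i, 1:]] != y[i]) for i in range(len(y))])
    return X[mask], y[mask]

X, y = make_classification(n_samples=4000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

X_red, y_red = select_boundary_instances(X_tr, y_tr, k=10)

full = SVC(kernel="rbf").fit(X_tr, y_tr)
reduced = SVC(kernel="rbf").fit(X_red, y_red)

print("support vectors (full):   ", full.n_support_.sum())
print("support vectors (reduced):", reduced.n_support_.sum())
print("accuracy full / reduced:  ", full.score(X_te, y_te), reduced.score(X_te, y_te))
```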


2014 ◽  
Vol 615 ◽  
pp. 194-197
Author(s):  
Zhen Yuan Tu ◽  
Fang Hua Ning ◽  
Wu Jia Yu

In practice, it is difficult for a Support Vector Machine (SVM) to achieve both a high recognition rate and a fast recognition speed. To address this shortcoming, in this paper we build an SVM classification model that incorporates numerical characteristics. We use readings of rotary natural meters as the test samples and apply positioning, preprocessing, feature-point extraction, classification, and a series of other operations to the numeric region of the dial. Then, using cross-validation, we iteratively optimize the SVM parameters. Finally, after a comprehensive comparison of the effects that various performance factors have on the experimental outputs, we offer explanations of the results from different perspectives.
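
The paper's exact pipeline is not available here; the sketch below only illustrates the cross-validated parameter optimization step, using scikit-learn's GridSearchCV over the RBF-kernel parameters C and gamma, with the built-in digits dataset standing in for the extracted dial feature points.

```python
# Hedged sketch of cross-validated SVM parameter search (not the paper's code).
# load_digits is a placeholder for the preprocessed dial-digit features.
from sklearn.datasets import load_digits
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = load_digits(return_X_y=True)   # stand-in for extracted dial digits

pipe = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
param_grid = {
    "svc__C": [0.1, 1, 10, 100],
    "svc__gamma": ["scale", 0.001, 0.01, 0.1],
}
search = GridSearchCV(pipe, param_grid, cv=5, n_jobs=-1)
search.fit(X, y)

print("best parameters:", search.best_params_)
print("cross-validated accuracy: %.3f" % search.best_score_)
```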


2019 ◽  
Vol 18 (1) ◽  
Author(s):  
Muhammad Fathurrohman ◽  
R. Lulus Lambang G. H ◽  
Didik Djoko Susilo

Bearings are a critical part of any rotating machine. The catastrophic failure of a bearing can be fatal and harmful to the operation of the machine. Therefore, predictive maintenance based on condition monitoring of bearings is very important. The objective of this research is to apply the Support Vector Machine (SVM) method to fault diagnosis of ball bearings. The research was carried out on a bearing test rig. Vibration signals were measured for four ball bearing conditions (normal, inner race defect, ball defect, and outer race defect) using a data acquisition system with a sampling frequency of 20 kHz at a constant speed of 1400 RPM. Various time-domain features were extracted from the vibration signals, such as RMS, variance, standard deviation, crest factor, shape factor, skewness, kurtosis, log energy entropy, and sure entropy. A PCA transformation was employed to reduce the dimension of the extracted feature data. The SVM classification problems were solved using MATLAB 2016a. The results showed that the RBF kernel function with the parameter C = 1 was the best configuration. The training accuracy was 98.93% and the testing accuracy of the SVM was 97.5%. Finally, the research results show that the SVM classification method can be used to diagnose fault conditions of ball bearings.
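
As a rough sketch of the described pipeline (time-domain feature extraction, PCA, RBF-kernel SVM with C = 1), the Python example below uses synthetic vibration segments in place of the 20 kHz bearing recordings. The feature formulas and class construction are illustrative assumptions, and sure entropy is omitted.

```python
# Minimal sketch of the pipeline described above: time-domain features -> PCA ->
# RBF SVM with C = 1. Synthetic segments replace the real bearing vibration data.
import numpy as np
from scipy.stats import skew, kurtosis
from sklearn.decomposition import PCA
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def time_domain_features(seg):
    rms = np.sqrt(np.mean(seg ** 2))
    return np.array([
        rms,                               # RMS
        np.var(seg),                       # variance
        np.std(seg),                       # standard deviation
        np.max(np.abs(seg)) / rms,         # crest factor
        rms / np.mean(np.abs(seg)),        # shape factor
        skew(seg),                         # skewness
        kurtosis(seg),                     # kurtosis
        np.sum(np.log(seg ** 2 + 1e-12)),  # log-energy entropy
    ])

# Synthetic stand-in: 4 bearing conditions, 50 segments each, 2048 samples long.
rng = np.random.default_rng(0)
X = np.array([time_domain_features(rng.normal(scale=1 + c, size=2048))
              for c in range(4) for _ in range(50)])
y = np.repeat(np.arange(4), 50)

model = make_pipeline(StandardScaler(), PCA(n_components=4),
                      SVC(kernel="rbf", C=1))
print("5-fold CV accuracy:", cross_val_score(model, X, y, cv=5).mean())
```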


Author(s):  
Manju Bala ◽  
R. K. Agrawal

The choice of kernel function and its parameters is very important for good performance of a support vector machine. In this chapter, the authors propose a few new kernel functions that satisfy Mercer’s conditions, along with a robust algorithm based on AdaBoost that automatically determines a suitable kernel function and its parameters to improve the performance of the support vector machine. The performance of the proposed algorithm is evaluated on several benchmark datasets from the UCI repository. The experimental results for different datasets show that the Gaussian kernel is not always the best choice for achieving high generalization of the support vector machine classifier. However, with the proper choice of kernel function and parameters using the proposed algorithm, it is possible to achieve maximum classification accuracy for all datasets.
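
The authors' new kernels and the AdaBoost-based selection algorithm are not reproduced here; the sketch below only shows how a custom Mercer kernel (the Cauchy kernel, chosen for illustration) can be passed to scikit-learn's SVC as a callable and compared against the Gaussian and polynomial kernels by cross-validation.

```python
# Illustrative sketch only: plugging a custom Mercer kernel into SVC and
# comparing candidate kernels by cross-validation (not the authors' algorithm).
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def cauchy_kernel(X, Y, gamma=0.1):
    """Cauchy kernel k(x, y) = 1 / (1 + gamma * ||x - y||^2), a known Mercer kernel."""
    d2 = (np.sum(X ** 2, axis=1)[:, None] + np.sum(Y ** 2, axis=1)[None, :]
          - 2.0 * X @ Y.T)
    return 1.0 / (1.0 + gamma * np.maximum(d2, 0.0))

X, y = load_breast_cancer(return_X_y=True)   # example UCI-style benchmark

candidates = {
    "rbf (Gaussian)": SVC(kernel="rbf", gamma="scale"),
    "polynomial":     SVC(kernel="poly", degree=3),
    "cauchy":         SVC(kernel=cauchy_kernel),
}
for name, svc in candidates.items():
    model = make_pipeline(StandardScaler(), svc)
    score = cross_val_score(model, X, y, cv=5).mean()
    print(f"{name:15s} CV accuracy = {score:.3f}")
```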


RSC Advances ◽  
2015 ◽  
Vol 5 (61) ◽  
pp. 49195-49203 ◽  
Author(s):  
Ting-Ting Yao ◽  
Jing-Li Cheng ◽  
Bing-Rong Xu ◽  
Min-Zhe Zhang ◽  
Yong-Zhou Hu ◽  
...  

A novel SVM classification model was constructed and applied in the development of novel tetronic acid derivatives as potent insecticidal and acaricidal agents.


2012 ◽  
Vol 241-244 ◽  
pp. 1629-1632 ◽  
Author(s):  
Yan Yue

This study proposes to combine standard SVM classification with information entropy to increase the SVM classification rate and reduce the computational load of SVM testing. The algorithm uses information entropy theory to pre-process the samples' attributes: by introducing a reduction coefficient, it eliminates attributes that have little impact on the data classification and thereby reduces the number of support vectors. The results show that this algorithm reduces the number of support vectors during support vector machine classification and improves the recognition rate on larger sample sets compared with standard SVM and DAGSVM.
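
The paper's reduction coefficient is not specified here; as a hedged illustration of the idea, the sketch below ranks attributes by mutual information (a stand-in for the entropy-based score), drops low-impact ones with a hypothetical threshold, and compares support-vector counts against a standard SVM trained on all attributes.

```python
# Hedged sketch: entropy-style attribute reduction before SVM training.
# Mutual information and the 0.5 * mean threshold are illustrative assumptions,
# not the paper's reduction coefficient.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import mutual_info_classif
from sklearn.svm import SVC

X, y = make_classification(n_samples=1500, n_features=30, n_informative=8,
                           random_state=0)

mi = mutual_info_classif(X, y, random_state=0)
keep = mi > 0.5 * mi.mean()          # hypothetical reduction threshold

full = SVC(kernel="rbf").fit(X, y)
reduced = SVC(kernel="rbf").fit(X[:, keep], y)

print("attributes kept:        ", keep.sum(), "of", X.shape[1])
print("support vectors full:   ", full.n_support_.sum())
print("support vectors reduced:", reduced.n_support_.sum())
```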

