Support Vector Machines I: The Support Vector Classifier

NIR news ◽  
2004 ◽  
Vol 15 (5) ◽  
pp. 14-15 ◽  
Author(s):  
Tom Fearn

Author(s):  
Osval Antonio Montesinos López ◽  
Abelardo Montesinos López ◽  
Jose Crossa

Abstract: In this chapter, support vector machine (SVM) methods are studied. We first point out the origin and popularity of these methods, and then define the concept of a hyperplane, which is the key to building them. We derive two methods related to the SVM: the maximum margin classifier and the support vector classifier. We describe the derivation of the SVM along with some kernel functions that are fundamental for building the different kernel methods allowed in SVMs. We explain how the SVM for binary response variables can be extended to categorical response variables, and give examples of SVMs for binary and categorical response variables with plant breeding data for genomic selection. Finally, general issues in adopting the SVM methodology for continuous response variables are discussed, and some examples of SVMs for continuous response variables for genomic prediction are described.
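The progression the abstract describes, from a soft-margin support vector classifier to nonlinear SVMs via kernel functions, can be sketched with scikit-learn. The dataset, `C` value, and kernel choices below are illustrative assumptions, not taken from the chapter:

```python
# Hedged sketch: support vector classifiers on a toy binary problem.
# The data, C, and kernels are illustrative assumptions.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Two Gaussian clusters as a toy binary classification problem.
X = np.vstack([rng.normal(-1.0, 1.0, (50, 2)),
               rng.normal(1.0, 1.0, (50, 2))])
y = np.array([0] * 50 + [1] * 50)

# Linear kernel: the plain (soft-margin) support vector classifier,
# whose decision boundary is a separating hyperplane.
linear_clf = SVC(kernel="linear", C=1.0).fit(X, y)

# RBF kernel: a nonlinear boundary obtained through the kernel trick.
rbf_clf = SVC(kernel="rbf", gamma="scale", C=1.0).fit(X, y)

# Only the support vectors determine the fitted hyperplane.
print(len(linear_clf.support_), linear_clf.score(X, y))
```

For categorical (multi-class) responses, the same `SVC` estimator extends the binary classifier internally via a one-vs-one scheme, which mirrors the binary-to-categorical expansion the abstract mentions.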


2019 ◽  
Vol 8 (4) ◽  
pp. 5160-5165

Feature selection is a powerful tool for identifying the important characteristics of data for prediction. It can therefore help avoid overfitting, improve prediction accuracy, and reduce execution time. Feature selection procedures are particularly important for support vector machines (SVMs), which are used for prediction on large datasets. The larger the dataset, the more computationally expensive and challenging it is to build a predictive model using the support vector classifier. This paper investigates how a feature selection approach based on the analysis of variance (ANOVA) can be optimized for SVMs to improve their execution time and accuracy. We introduce new conditions on the SVMs prior to running the ANOVA to optimize the performance of the support vector classifier. We also establish the bootstrap procedure as an alternative to cross-validation for model selection. We run our experiments on popular datasets and compare our results to existing modifications of SVMs with feature selection. We propose a number of ANOVA-SVM modifications that are simple to perform while significantly improving the accuracy and computing time of SVMs compared with existing methods such as the Mixed Integer Linear Feature Selection approach.
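The general idea of placing an ANOVA filter in front of an SVM can be sketched with scikit-learn's `SelectKBest` and the ANOVA F-statistic. This is generic ANOVA filtering, not the paper's specific conditions or modifications, and the synthetic dataset and choice of `k` are assumptions for illustration:

```python
# Hedged sketch: ANOVA F-test feature selection feeding an SVM.
# Generic ANOVA filtering only; the paper's added conditions are not shown.
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

# Synthetic data: 20 features, of which only 5 are informative (illustrative).
X, y = make_classification(n_samples=200, n_features=20, n_informative=5,
                           random_state=0)

# Keep the k features with the largest ANOVA F-statistic, then fit the SVM.
pipe = make_pipeline(SelectKBest(f_classif, k=5), SVC(kernel="linear", C=1.0))

# Cross-validation for model assessment; the paper instead proposes a
# bootstrap procedure as an alternative to cross-validation.
scores = cross_val_score(pipe, X, y, cv=5)
print(scores.mean())
```

Filtering before fitting shrinks the design matrix the SVM solver sees, which is the source of the execution-time savings the abstract refers to.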


2018 ◽  
Author(s):  
Nelson Marcelo Romero Aquino ◽  
Matheus Gutoski ◽  
Leandro Takeshi Hattori ◽  
Heitor Silvério Lopes
