Support Vector Machines
Recently Published Documents

Total documents: 8077 (five years: 943)
H-index: 164 (five years: 16)

2022, Vol 308, pp. 118338
Author(s): Navid Bayati, Ebrahim Balouji, Hamid Reza Baghaee, Amin Hajizadeh, Mohsen Soltani, ...

2022
Author(s): Raj Bridgelall

Abstract: The aim of this tutorial is to help students grasp the theory and applicability of support vector machines (SVMs). The contribution is an intuitive-style tutorial that helps students gain insights into SVMs from a unique perspective. An internet search reveals many videos and articles on SVMs, but free peer-reviewed tutorials are generally unavailable or incomplete. Instructional materials that offer simplified explanations of SVMs leave gaps in the derivations that beginning students cannot fill, and most free tutorials also lack guidance on practical applications and considerations. The software wrappers in many modern Python and R libraries hide the operational complexities. Such tools often use default parameters that ignore domain knowledge, or they leave knowledge gaps about the important effects of SVM hyperparameters, resulting in misuse and subpar outcomes. The author uses this tutorial as a course reference for students studying artificial intelligence and machine learning. The tutorial derives the classic SVM classifier from first principles and then derives the practical form that a computer uses to train a classification model. An intuitive explanation of confusion matrices, the F1 score, and the AUC metric extends insights into the inherent tradeoff between sensitivity and specificity. A discussion of cross-validation provides a basic understanding of how to select and tune hyperparameters to maximize generalization by balancing underfitting and overfitting. Even seasoned self-learners with advanced statistical backgrounds have gained insights from this style of intuitive explanation, with all related considerations for tuning and performance evaluation in one place.
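To make the abstract's "practical form that a computer uses to train a classification model" concrete, here is a minimal sketch (not code from the tutorial itself): a linear soft-margin SVM trained by subgradient descent on the regularized hinge loss, followed by an F1 computation of the kind the tutorial discusses. The regularization strength `lam`, learning rate `lr`, and the toy data are illustrative assumptions.

```python
def train_linear_svm(X, y, lam=0.01, lr=0.1, epochs=200):
    """Minimize lam/2 * ||w||^2 + mean(max(0, 1 - y*(w.x + b))) by subgradient descent."""
    w = [0.0] * len(X[0])
    b = 0.0
    n = len(X)
    for _ in range(epochs):
        gw = [lam * wi for wi in w]  # gradient of the regularizer
        gb = 0.0
        for xi, yi in zip(X, y):
            margin = yi * (sum(wj * xj for wj, xj in zip(w, xi)) + b)
            if margin < 1:  # point inside the margin: hinge term is active
                for j in range(len(w)):
                    gw[j] -= yi * xi[j] / n
                gb -= yi / n
        w = [wj - lr * gj for wj, gj in zip(w, gw)]
        b -= lr * gb
    return w, b

def predict(w, b, X):
    return [1 if sum(wj * xj for wj, xj in zip(w, xi)) + b >= 0 else -1 for xi in X]

def f1_score(y_true, y_pred):
    # F1 from the confusion-matrix counts: true/false positives, false negatives.
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == -1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == -1)
    return 2 * tp / (2 * tp + fp + fn) if tp else 0.0

# Two well-separated 2-D clusters (toy data).
X = [[2.0, 2.5], [2.5, 2.0], [3.0, 3.0], [-2.0, -2.5], [-2.5, -2.0], [-3.0, -3.0]]
y = [1, 1, 1, -1, -1, -1]
w, b = train_linear_svm(X, y)
print(f1_score(y, predict(w, b, X)))  # separable toy data -> 1.0
```

Lowering `lam` widens the effective margin penalty's influence (risking overfitting on noisy data), which is exactly the hyperparameter tradeoff the tutorial's cross-validation discussion addresses.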


Author(s):  
Osval Antonio Montesinos López ◽  
Abelardo Montesinos López ◽  
Jose Crossa

Abstract: In this chapter, support vector machine (SVM) methods are studied. We first point out the origin and popularity of these methods and then define the hyperplane concept, which is the key to building them. We derive the methods related to the SVM: the maximum margin classifier and the support vector classifier. We describe the derivation of the SVM along with some kernel functions that are fundamental for building the different kernel methods available in SVMs. We explain how the SVM for binary response variables can be extended to categorical response variables, and we give examples of SVMs for binary and categorical response variables with plant breeding data for genomic selection. Finally, general issues in adopting the SVM methodology for continuous response variables are discussed, and some examples of SVMs for continuous response variables in genomic prediction are described.
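As a small illustration of the kernel functions the chapter refers to, here are the standard linear, polynomial, and RBF (Gaussian) forms (assumed textbook parameterizations; the chapter's exact notation may differ). Each replaces the inner product so the SVM can fit nonlinear boundaries without computing an explicit feature map.

```python
import math

def linear_kernel(x, z):
    # Plain inner product <x, z>.
    return sum(a * b for a, b in zip(x, z))

def polynomial_kernel(x, z, degree=2, coef0=1.0):
    # (<x, z> + c)^d: implicit feature map of all monomials up to degree d.
    return (linear_kernel(x, z) + coef0) ** degree

def rbf_kernel(x, z, gamma=0.5):
    # exp(-gamma * ||x - z||^2): similarity decays with squared distance.
    sq_dist = sum((a - b) ** 2 for a, b in zip(x, z))
    return math.exp(-gamma * sq_dist)

x, z = [1.0, 2.0], [2.0, 0.0]
print(linear_kernel(x, z))      # 2.0
print(polynomial_kernel(x, z))  # (2 + 1)^2 = 9.0
print(rbf_kernel(x, x))         # 1.0: every point has unit self-similarity
```

The `gamma` and `degree` hyperparameters control how local or flexible the resulting decision boundary is, which is why kernel choice matters when moving from binary to categorical or continuous responses.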


Mathematics, 2022, Vol 10 (1), pp. 128
Author(s): Güvenç Arslan, Uğur Madran, Duygu Soyoğlu

In this note, we propose a novel classification approach built on a new clustering method, which serves as an intermediate step to discover the structure of a data set. The proposed clustering algorithm uses similarities and the concept of a clique to obtain clusters, which can then be combined with different classification strategies; this approach also reduces the size of the training data set. In this study, we apply support vector machines (SVMs) after obtaining clusters with the proposed algorithm, evaluating several strategies for combining the clustering step with SVM training. Results on several real data sets show that performance is comparable to a standard SVM while reducing both the size of the training data set and the number of support vectors.
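The abstract does not give the clique construction in detail, so the following is only a hypothetical sketch of the general idea: build a similarity (distance-threshold) graph, greedily extract cliques in which every pair of points is mutually similar, and keep one representative per clique, yielding a smaller training set for a downstream classifier such as an SVM. The `radius` threshold and greedy strategy are illustrative assumptions, not the paper's algorithm.

```python
import math

def greedy_cliques(points, radius=1.0):
    """Group points into cliques: every pair in a clique is within `radius`."""
    def close(a, b):
        return math.dist(a, b) <= radius
    cliques, remaining = [], list(points)
    while remaining:
        seed = remaining.pop(0)
        clique = [seed]
        for p in remaining[:]:
            # Clique condition: p must be close to ALL current members.
            if all(close(p, q) for q in clique):
                clique.append(p)
                remaining.remove(p)
        cliques.append(clique)
    return cliques

def representatives(points, radius=1.0):
    """One centroid per clique: the reduced training set."""
    reps = []
    for clique in greedy_cliques(points, radius):
        dim = len(clique[0])
        reps.append(tuple(sum(p[d] for p in clique) / len(clique) for d in range(dim)))
    return reps

pts = [(0.0, 0.0), (0.2, 0.1), (0.1, 0.2), (5.0, 5.0), (5.1, 4.9)]
reps = representatives(pts, radius=1.0)
print(len(reps))  # two tight groups -> 2 representatives
```

Running the clustering per class before SVM training preserves labels while shrinking the set of candidate support vectors, which is consistent with the reductions the note reports.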

