Multi-class classification for large datasets with optimized SVM by non-linear kernel function

2021 ◽  
Vol 2089 (1) ◽  
pp. 012015
Author(s):  
Lingam Sunitha ◽  
M Bal Raju

Abstract The kernel is the most important component of a Support Vector Machine (SVM). Although several kernel functions are widely used, a carefully designed kernel can further improve SVM accuracy. The proposed work develops a new kernel function for a multi-class support vector machine, performs experiments on various data sets, and compares the results with other classification methods. SVM cannot perform multi-class classification directly, so the proposed work first designs a binary-class model and then extends it with the one-versus-all approach. Experimental results demonstrate the efficiency of the new kernel function: the proposed kernel reduces both misclassification and time. For some data sets collected from the UCI repository, other classification methods achieved better results.
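
The abstract does not give the proposed kernel's formula, so the sketch below only illustrates the pipeline it describes: a custom kernel (here a hypothetical Gaussian-style function standing in for the proposed one) plugged into a one-versus-all multi-class SVM on a UCI data set, using scikit-learn.

```python
# Minimal sketch: one-vs-rest multi-class SVM with a callable custom kernel.
# The kernel below is an illustrative stand-in, not the paper's proposed kernel.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.multiclass import OneVsRestClassifier
from sklearn.svm import SVC

def custom_kernel(X, Y):
    # Gram matrix K(x, y) = exp(-0.5 * ||x - y||^2); swap in any PSD kernel.
    sq_dists = (np.sum(X**2, axis=1)[:, None]
                + np.sum(Y**2, axis=1)[None, :]
                - 2 * X @ Y.T)
    return np.exp(-0.5 * sq_dists)

X, y = load_iris(return_X_y=True)  # a UCI-style multi-class data set
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# SVC is inherently binary; OneVsRestClassifier trains one binary SVM per class.
clf = OneVsRestClassifier(SVC(kernel=custom_kernel))
clf.fit(X_train, y_train)
print("accuracy:", clf.score(X_test, y_test))
```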

2019 ◽  
Vol 5 (2) ◽  
pp. 90-99
Author(s):  
Putroue Keumala Intan

The maternal mortality rate during childbirth can be reduced if the medical team can quickly determine the delivery process that must be undertaken. Machine learning applied to classifying childbirth can support the medical team in making that determination. One suitable classification method is the Support Vector Machine (SVM), which determines a hyperplane that forms a good decision boundary and thereby classifies data accurately. In SVM, a kernel function handles non-linear classification cases by transforming the data into a higher dimension. In this study, four kernel functions are used in the childbirth classification process: Linear, Radial Basis Function (RBF), Polynomial, and Sigmoid, in order to determine which kernel function produces the highest accuracy. The experiments show that the accuracy achieved by SVM with the linear kernel function is higher than with the other kernel functions.
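
The childbirth records are not public, so as a minimal sketch of the study's four-kernel comparison, the snippet below scores the same kernels on synthetic stand-in data with scikit-learn.

```python
# Minimal sketch: compare linear, RBF, polynomial, and sigmoid kernels by
# held-out accuracy. Synthetic data stands in for the childbirth records.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Feature scaling matters for the RBF, polynomial, and sigmoid kernels.
scaler = StandardScaler().fit(X_train)
X_train, X_test = scaler.transform(X_train), scaler.transform(X_test)

for kernel in ("linear", "rbf", "poly", "sigmoid"):
    acc = SVC(kernel=kernel).fit(X_train, y_train).score(X_test, y_test)
    print(f"{kernel:8s} accuracy = {acc:.3f}")
```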


Author(s):  
Edison A. Roxas ◽  
Ryan Rhay P. Vicerra ◽  
Laurence A. Gan Lim ◽  
Elmer P. Dadios ◽  
...  

The focus of this paper is to explore kernel combinations of support vector machines (SVMs) for vehicle classification. As the primary component of the SVM, the kernel function is responsible for the pattern analysis of the vehicle dataset and for bridging its linear and non-linear features. However, each type of kernel function has characteristics and limitations that are highly dependent on its parameters. To overcome these limitations, a method of compounding kernel functions for vehicle classification is introduced and discussed. The vehicle classification accuracy of the presented compound kernel function is then compared to the accuracies obtained from the four commonly used individual kernel functions (linear, quadratic, cubic, and Gaussian). This study provides the following contributions: (1) the classification method ranks the four individual kernel functions by accuracy; (2) the method combines the top three individual kernel functions; and (3) the best combination of the compound kernel functions can be determined.
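
The paper's weighting scheme is not given in the abstract; the sketch below only shows the general mechanics of a compound kernel, a convex combination of three base kernels (linear, quadratic, Gaussian) passed to scikit-learn's SVC as a precomputed Gram matrix. The weights are illustrative hand-picked values, not the paper's fitted combination.

```python
# Minimal sketch: compound kernel as a convex combination of base kernels.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.metrics.pairwise import linear_kernel, polynomial_kernel, rbf_kernel
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

def compound_gram(X, Y, weights=(0.5, 0.3, 0.2)):
    # K = w1*linear + w2*quadratic + w3*Gaussian; weights sum to 1, so the
    # combination remains a valid positive semidefinite kernel.
    w1, w2, w3 = weights
    return (w1 * linear_kernel(X, Y)
            + w2 * polynomial_kernel(X, Y, degree=2)
            + w3 * rbf_kernel(X, Y))

X, y = make_classification(n_samples=400, n_features=8, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

clf = SVC(kernel="precomputed").fit(compound_gram(X_tr, X_tr), y_tr)
# At test time the Gram matrix is computed against the training points.
print("accuracy:", clf.score(compound_gram(X_te, X_tr), y_te))
```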


2008 ◽  
pp. 1269-1279
Author(s):  
Xiuju Fu ◽  
Lipo Wang ◽  
GihGuang Hung ◽  
Liping Goh

Classification decisions expressed as linguistic rules are more desirable than the complex mathematical formulas of support vector machine (SVM) classifiers, because linguistic rules have an explicit explanation capability. Linguistic rule extraction has therefore attracted much attention as a way of explaining the knowledge hidden in data. In this chapter, we show that the decisions of an SVM classifier can be decoded into linguistic rules based on the information provided by the support vectors and the decision function. Given a support vector of a certain class, we first search for the cross points between the SVM decision hyper-curve and each line extended from the support vector along each axis. A hyper-rectangular rule is derived from these cross points. The hyper-rectangle is then adjusted in a tuning phase to exclude out-of-class data points. Finally, redundant rules are merged to produce a compact rule set. At the same time, important attributes are highlighted in the extracted rules. The rules extracted by our proposed method follow the SVM classifier's decisions very well. We compare the rules extracted from SVMs with RBF and linear kernel functions; experimental results show that rules extracted from the SVM with the non-linear RBF kernel are more accurate than those extracted from the SVM with the linear kernel. Comparisons between our method and other rule extraction methods on several benchmark data sets show that our method achieves higher rule accuracy with fewer premises per rule.
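
As a toy illustration of the hyper-rectangle construction (without the tuning and merging phases the chapter describes), the sketch below walks along each axis from a support vector until the SVM decision function changes sign and records the crossing points as rule bounds. The data set, step size, and kernel are all assumptions.

```python
# Minimal sketch: per-axis search for decision-boundary crossings around a
# support vector, yielding one hyper-rectangular rule.
import numpy as np
from sklearn.datasets import make_moons
from sklearn.svm import SVC

X, y = make_moons(n_samples=200, noise=0.2, random_state=0)
clf = SVC(kernel="rbf").fit(X, y)

def axis_bounds(sv, axis, step=0.05, max_steps=200):
    """Walk from the support vector in both directions along one axis until
    the decision function changes sign; return the two crossing coordinates."""
    base_sign = np.sign(clf.decision_function([sv])[0])
    bounds = []
    for direction in (-1.0, 1.0):
        point = sv.copy()
        for _ in range(max_steps):
            point[axis] += direction * step
            if np.sign(clf.decision_function([point])[0]) != base_sign:
                break
        bounds.append(point[axis])
    return (min(bounds), max(bounds))

# One rule per support vector, read as:
# IF x0 in [lo0, hi0] AND x1 in [lo1, hi1] THEN class of that support vector.
sv = clf.support_vectors_[0]
print([axis_bounds(sv, axis) for axis in range(X.shape[1])])
```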


2020 ◽  
Vol 1 (1) ◽  
pp. 37-41
Author(s):  
Noramalina Mohd Hatta ◽  
Zuraini Ali Shah ◽  
Shahreen Kasim

Multiclass cancer classification is one of the challenging fields in machine learning, a fast-growing technology that learns from examples. Supervised classifiers such as the Support Vector Machine (SVM) classify a dataset by means of a kernel function. Selecting the best kernel for a specific dataset and task is a known problem, and a kernel often cannot separate the data with a straight line. Here, three basic kernel functions are tested on selected datasets: the linear kernel, the polynomial kernel, and the Radial Basis Function (RBF) kernel. The three kernels are evaluated on different datasets to measure their accuracy. For comparison, this study runs the SVM kernel-function tests both with and without feature selection, since the two settings give different results and their contrast is central to the study.
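
A minimal sketch of the with/without-feature-selection comparison across the three kernels follows; SelectKBest with an ANOVA score is an assumed stand-in for whichever selector the study used, and synthetic data stands in for the cancer datasets.

```python
# Minimal sketch: cross-validated accuracy for three kernels, with and
# without univariate feature selection.
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

X, y = make_classification(n_samples=200, n_features=100, n_informative=10,
                           random_state=0)

for kernel in ("linear", "poly", "rbf"):
    plain = cross_val_score(SVC(kernel=kernel), X, y, cv=5).mean()
    selected = cross_val_score(
        make_pipeline(SelectKBest(f_classif, k=10), SVC(kernel=kernel)),
        X, y, cv=5).mean()
    print(f"{kernel:6s}  no selection = {plain:.3f}  with selection = {selected:.3f}")
```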


2017 ◽  
Vol 28 (02) ◽  
pp. 1750015 ◽  
Author(s):  
M. Andrecut

The least-squares support vector machine (LS-SVM) is a frequently used kernel method for non-linear regression and classification tasks. Here we discuss several approximation algorithms for the LS-SVM classifier. The proposed methods are based on randomized block kernel matrices, and we show that they provide good accuracy and reliable scaling for multi-class classification problems with relatively large data sets. We also present several numerical experiments that illustrate the practical applicability of the proposed methods.
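
The paper's exact randomized block construction is not reproduced here; the sketch below only conveys the flavor with a Nyström-style low-rank approximation built from one random block of landmark points, solving a simplified bias-free LS-SVM system via the Woodbury identity. The block size, regularization gamma, and kernel are assumptions.

```python
# Minimal sketch: LS-SVM with a rank-m kernel approximation from a random
# block of m landmarks, so only m x m systems are ever solved.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.metrics.pairwise import rbf_kernel

X, y01 = make_classification(n_samples=1000, n_features=20, random_state=0)
y = 2.0 * y01 - 1.0                      # labels in {-1, +1}
m, gamma = 100, 10.0                     # block size and regularization

# Random block of m landmarks gives the Nystrom factorization
# K ~ K_nm @ K_mm^{-1} @ K_nm.T of the full n x n kernel matrix.
idx = np.random.default_rng(0).choice(len(X), size=m, replace=False)
K_nm = rbf_kernel(X, X[idx])
K_mm = rbf_kernel(X[idx], X[idx]) + 1e-8 * np.eye(m)  # jitter for stability

# Solve the bias-free LS-SVM system (K + I/gamma) alpha = y using the
# Woodbury identity: only the m x m matrix `inner` is factorized.
inner = K_mm + gamma * K_nm.T @ K_nm
alpha = gamma * y - gamma**2 * K_nm @ np.linalg.solve(inner, K_nm.T @ y)

# Predictions under the same low-rank kernel approximation.
preds = np.sign(K_nm @ np.linalg.solve(K_mm, K_nm.T @ alpha))
print("train accuracy:", np.mean(preds == y))
```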


2012 ◽  
Vol 24 (4) ◽  
pp. 1047-1084 ◽  
Author(s):  
Xiao-Tong Yuan ◽  
Shuicheng Yan

We investigate Newton-type optimization methods for solving piecewise linear systems (PLSs) with a nondegenerate coefficient matrix. Such systems arise, for example, from the numerical solution of the linear complementarity problem, which is useful for modeling several learning and optimization problems. In this letter, we propose an effective damped Newton method, PLS-DN, to find the exact (up to machine precision) solution of nondegenerate PLSs. PLS-DN exhibits a provable semi-iterative property: the algorithm converges globally to the exact solution in a finite number of iterations, and the rate of convergence is at least linear before termination. We emphasize the applications of our method in modeling, from the novel perspective of PLSs, statistical learning problems such as box-constrained least squares, elitist Lasso (Kowalski & Torrésani, 2008), and support vector machines (Cortes & Vapnik, 1995). Numerical results on synthetic and benchmark data sets demonstrate the effectiveness and efficiency of PLS-DN on these problems.
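
As a toy illustration of the PLS setting (not the authors' PLS-DN), the sketch below runs a damped semismooth Newton iteration on the piecewise linear system F(x) = min(x, Ax + b) = 0, the standard PLS form of a linear complementarity problem; the diagonally dominant test matrix is an assumption standing in for the nondegenerate case.

```python
# Minimal sketch: damped Newton for F(x) = min(x, Ax + b) = 0.
import numpy as np

def pls_damped_newton(A, b, tol=1e-12, max_iter=50):
    n = len(b)
    x = np.zeros(n)
    for _ in range(max_iter):
        Ax_b = A @ x + b
        F = np.minimum(x, Ax_b)
        if np.linalg.norm(F) < tol:
            break
        # Generalized Jacobian of min(x, Ax + b): identity rows where x is
        # the active branch, rows of A where Ax + b is the active branch.
        active = x <= Ax_b
        J = np.where(active[:, None], np.eye(n), A)
        d = np.linalg.solve(J, -F)
        # Damping: backtrack until the residual norm decreases.
        t = 1.0
        while (np.linalg.norm(np.minimum(x + t * d, A @ (x + t * d) + b))
               >= np.linalg.norm(F)) and t > 1e-10:
            t *= 0.5
        x = x + t * d
    return x

# Small strictly diagonally dominant system as a nondegenerate example.
rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5)) + 5 * np.eye(5)
b = rng.standard_normal(5)
x = pls_damped_newton(A, b)
print("residual:", np.linalg.norm(np.minimum(x, A @ x + b)))
```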


2018 ◽  
Vol 2018 ◽  
pp. 1-7 ◽  
Author(s):  
Jinshan Qi ◽  
Xun Liang ◽  
Rui Xu

By utilizing kernel functions, support vector machines (SVMs) successfully solve linearly inseparable problems, which has greatly extended their applicable areas. Using multiple kernels (MKs) to improve SVM classification accuracy has been a hot topic in the SVM research community for several years. However, most MK learning (MKL) methods impose an L1-norm constraint on the kernel combination weights, which yields a sparse yet nonsmooth solution for the kernel weights. Alternatively, an Lp-norm constraint on the kernel weights keeps all the information in the base kernels, but its solution is nonsparse and sensitive to noise. Recently, some scholars presented an efficient sparse generalized MKL method (GMKL, based on the L1- and L2-norms), in which the combination of the L1- and L2-norms establishes an elastic constraint on the kernel weights. In this paper, we further extend GMKL to a more general MKL method by joining the L1- and Lp-norms; the L1- and L2-norms based GMKL is then the special case of our method with p = 2. Experiments demonstrate that our L1- and Lp-norms based MKL achieves higher classification accuracy than the L1- and L2-norms based GMKL, while keeping the properties of the L1- and L2-norms based GMKL.
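
The learning of the kernel weights is the heart of the paper and is not reproduced here; the sketch below only shows the multiple-kernel mechanics, with hand-fixed weights rescaled so that ||w||_1 + ||w||_p <= 1 holds, in the spirit of the elastic constraint. The base kernels, p, and the weights are assumptions.

```python
# Minimal sketch: SVM on a weighted sum of base kernels, with the weights
# rescaled to satisfy the elastic constraint ||w||_1 + ||w||_p <= 1.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.metrics.pairwise import linear_kernel, polynomial_kernel, rbf_kernel
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

def combined_gram(X, Y, w, p=1.5):
    kernels = (linear_kernel, polynomial_kernel, rbf_kernel)
    w = np.asarray(w, dtype=float)
    # Divide by ||w||_1 + ||w||_p so the rescaled weights meet the constraint
    # with equality (both norms are homogeneous of degree 1).
    w = w / (np.sum(np.abs(w)) + np.sum(np.abs(w)**p)**(1.0 / p))
    return sum(wi * k(X, Y) for wi, k in zip(w, kernels))

X, y = make_classification(n_samples=300, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

w = [1.0, 0.5, 2.0]  # illustrative weights; the paper learns these jointly
clf = SVC(kernel="precomputed").fit(combined_gram(X_tr, X_tr, w), y_tr)
print("accuracy:", clf.score(combined_gram(X_te, X_tr, w), y_te))
```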


2013 ◽  
Vol 2013 ◽  
pp. 1-7 ◽  
Author(s):  
Rakesh Patra ◽  
Sujan Kumar Saha

The support vector machine (SVM) is one of the popular machine learning techniques used in various text processing tasks, including named entity recognition (NER). The performance of an SVM classifier largely depends on the appropriateness of the kernel function. In the last few years a number of task-specific kernel functions have been proposed and used in various text processing tasks, for example, the string kernel, graph kernel, and tree kernel. So far very few efforts have been devoted to developing an NER-task-specific kernel. In the literature, the tree kernel has been used in NER only for entity boundary detection or reannotation; the conventional tree kernel is unable to execute the complete NER task on its own. In this paper we propose a kernel function, motivated by the tree kernel, that is able to perform the complete NER task. To examine the effectiveness of the proposed kernel, we apply it to the openly available JNLPBA 2004 data. Our kernel executes the complete NER task and achieves reasonable accuracy.
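
The proposed NER kernel is not specified in the abstract; as background for the tree kernel it builds on, the sketch below implements a generic Collins-and-Duffy-style convolution tree kernel over toy nested-tuple parse trees, counting shared subtree fragments. The tree encoding and decay factor are assumptions.

```python
# Minimal sketch: convolution tree kernel counting common subtree fragments.
def nodes(tree):
    # Yield every internal node; a node is (label, child, child, ...).
    if isinstance(tree, tuple):
        yield tree
        for child in tree[1:]:
            yield from nodes(child)

def production(node):
    # (label, child labels) identifies the grammar rule at this node.
    return (node[0], tuple(c[0] if isinstance(c, tuple) else c for c in node[1:]))

def C(n1, n2, lam=0.5):
    # Decayed count of common subtree fragments rooted at n1 and n2.
    if production(n1) != production(n2):
        return 0.0
    score = lam
    for c1, c2 in zip(n1[1:], n2[1:]):
        if isinstance(c1, tuple) and isinstance(c2, tuple):
            score *= 1.0 + C(c1, c2, lam)
    return score

def tree_kernel(t1, t2, lam=0.5):
    # Sum C over all node pairs, as in Collins & Duffy's formulation.
    return sum(C(n1, n2, lam) for n1 in nodes(t1) for n2 in nodes(t2))

# Toy parse trees sharing an NP subtree.
t1 = ("S", ("NP", "protein"), ("VP", "binds"))
t2 = ("S", ("NP", "protein"), ("VP", "inhibits"))
print(tree_kernel(t1, t2))
```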

