A MULTI-CLASS SUPPORT VECTOR MACHINE: THEORY AND MODEL

2013 ◽  
Vol 12 (06) ◽  
pp. 1175-1199 ◽  
Author(s):  
MINGHE SUN

A multi-class support vector machine (M-SVM) is developed, its dual is derived and mapped to high-dimensional feature spaces using inner product kernels, and its performance is tested. The M-SVM is formulated as a quadratic programming model. Its dual, also a quadratic programming model, is elegant and easier to solve than the primal. The discriminant functions can be constructed directly from the dual solution. By using inner product kernels, the M-SVM can be built, and nonlinear discriminant functions constructed, in high-dimensional feature spaces without carrying out the mappings from the input space to the feature spaces. The size of the dual, measured by the number of variables and constraints, is independent of the dimension of the input space and stays the same whether the M-SVM is built in the input space or in a feature space. Compared with other models published in the literature, this M-SVM is at least as effective. An example demonstrates the dual formulation and solution in feature spaces, and very good results were obtained on benchmark test problems from the literature.
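The dual's key property, that its size depends only on the number of training examples, can be illustrated with a short sketch: the n x n Gram matrix the dual works with has the same shape whether the inputs have 2 or 1000 dimensions. The Gaussian RBF kernel and random data below are illustrative assumptions, not the paper's setup.

```python
import numpy as np

def rbf_gram(X, gamma=0.5):
    """Gram matrix of the Gaussian inner product kernel exp(-gamma * ||x - z||^2).

    The dual works entirely with this n x n matrix, so its size is
    independent of the dimension of the input space.
    """
    sq = np.sum(X ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2 * X @ X.T   # pairwise squared distances
    return np.exp(-gamma * d2)

rng = np.random.default_rng(0)
n = 8
K_low = rbf_gram(rng.standard_normal((n, 2)))      # 2-dimensional inputs
K_high = rbf_gram(rng.standard_normal((n, 1000)))  # 1000-dimensional inputs
```

Both Gram matrices are 8 x 8: moving to a far higher-dimensional input space leaves the dual's size untouched.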

Author(s):  
Minghe Sun

As machine learning techniques, support vector machines are quadratic programming models and represent a recent, revolutionary development in classification analysis. Primal and dual formulations of support vector machine models for both two-class and multi-class classification are discussed. The dual formulations in high-dimensional feature spaces using inner product kernels are emphasized. Nonlinear classification or discriminant functions in high-dimensional feature spaces can be constructed through inner product kernels without actually mapping the data from the input space to those feature spaces. Furthermore, the size of the dual formulation is independent of the dimension of the input space and of the kernels used. Two illustrative examples, one for two-class and the other for multi-class classification, demonstrate the formulations of these SVM models.
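As a concrete illustration of the two-class dual with an inner product kernel, the sketch below solves the standard soft-margin dual with a simplified SMO-style update loop on toy data. The RBF kernel, the deterministic pair selection, and the toy points are assumptions for illustration, not the formulations or examples from the text.

```python
import numpy as np

def rbf(a, b, gamma=0.5):
    """Gaussian (RBF) inner product kernel exp(-gamma * ||a - b||^2)."""
    return np.exp(-gamma * np.sum((a - b) ** 2))

def train_dual_svm(X, y, C=10.0, gamma=0.5, passes=100):
    """Solve the two-class soft-margin SVM dual with a simplified SMO loop:

        max_a  sum(a) - 0.5 * sum_ij a_i a_j y_i y_j K(x_i, x_j)
        s.t.   0 <= a_i <= C  and  sum_i a_i y_i = 0
    """
    n = len(y)
    K = np.array([[rbf(X[i], X[j], gamma) for j in range(n)] for i in range(n)])
    a = np.zeros(n)
    b = 0.0
    for _ in range(passes):
        for i in range(n):
            j = (i + 1) % n  # simple deterministic partner choice
            E_i = (a * y) @ K[:, i] + b - y[i]
            E_j = (a * y) @ K[:, j] + b - y[j]
            eta = K[i, i] + K[j, j] - 2 * K[i, j]
            if eta <= 1e-12:
                continue
            # box bounds that keep a_i y_i + a_j y_j constant for the pair
            if y[i] == y[j]:
                L, H = max(0.0, a[i] + a[j] - C), min(C, a[i] + a[j])
            else:
                L, H = max(0.0, a[j] - a[i]), min(C, C + a[j] - a[i])
            if L >= H:
                continue
            a_j_new = np.clip(a[j] + y[j] * (E_i - E_j) / eta, L, H)
            a[i], a[j] = a[i] + y[i] * y[j] * (a[j] - a_j_new), a_j_new
            # recompute the bias from an unbounded support vector
            sv = (a > 1e-8) & (a < C - 1e-8)
            if sv.any():
                k = np.argmax(sv)
                b = y[k] - (a * y) @ K[:, k]
    return a, b

def predict(X_train, y_train, a, b, x, gamma=0.5):
    """Nonlinear discriminant built directly from the dual solution."""
    k = np.array([rbf(xi, x, gamma) for xi in X_train])
    return np.sign((a * y_train) @ k + b)

# toy data: two well-separated clusters (illustrative, not from the text)
X = np.array([[0., 0.], [0., 1.], [1., 0.], [3., 3.], [3., 4.], [4., 3.]])
y = np.array([1., 1., 1., -1., -1., -1.])
alphas, bias = train_dual_svm(X, y)
preds = np.array([predict(X, y, alphas, bias, x) for x in X])
```

Note that the discriminant is built entirely from the dual variables and kernel evaluations: the feature-space mapping is never computed explicitly.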


2021 ◽  
Vol 8 (1) ◽  
pp. 27-36
Author(s):  
Raquel Serna-Diaz ◽  
Raimundo Santos Leite ◽  
Paulo J. S. Silva

2016 ◽  
Vol 25 (3) ◽  
pp. 417-429
Author(s):  
Chong Wu ◽  
Lu Wang ◽  
Zhe Shi

For financial distress prediction models based on support vector machines, there is no theory for choosing a proper kernel function in a data-dependent way. This paper proposes a modified kernel function that can effectively enhance classification accuracy. We apply an information-geometric method to modify the kernel, based on the structure of the Riemannian geometry that the kernel induces in the input space. A conformal transformation of the kernel from the input space to the higher-dimensional feature space enlarges volume elements locally near the support vectors situated around the classification boundary and thereby reduces the number of support vectors. The Gaussian radial basis function serves as the internal kernel. Additionally, this method is combined with standard regularization and non-dimensionalization to construct the new model. The empirical analysis uses financial data of Chinese listed companies, with five groups of experiments under different parameters to compare classification accuracy. We conclude that the modified kernel function effectively reduces the number of support vectors and improves classification accuracy.
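A minimal sketch of the conformal kernel modification described above, in the style of information-geometric kernel scaling: the modified kernel is K~(x, z) = c(x) c(z) K(x, z), where the factor c concentrates weight near the support vectors, magnifying the volume element along the class boundary. The support-vector locations, tau, and gamma below are illustrative assumptions, not values from the paper.

```python
import numpy as np

def rbf(a, b, gamma=0.5):
    """Gaussian RBF internal kernel exp(-gamma * ||a - b||^2)."""
    return np.exp(-gamma * np.sum((a - b) ** 2))

def conformal_factor(x, support_vectors, tau=1.0):
    """c(x): large near the support vectors, which sit along the
    classification boundary, small far from them."""
    return sum(np.exp(-np.sum((x - s) ** 2) / (2 * tau ** 2))
               for s in support_vectors)

def modified_kernel(x, z, support_vectors, gamma=0.5, tau=1.0):
    """Conformal transformation K~(x, z) = c(x) c(z) K(x, z), which
    locally enlarges volume elements near the boundary."""
    return (conformal_factor(x, support_vectors, tau)
            * conformal_factor(z, support_vectors, tau)
            * rbf(x, z, gamma))

# hypothetical support vectors found by a first-pass SVM
svs = [np.array([0.0, 0.0]), np.array([1.0, 0.0])]
x_near = np.array([0.1, 0.0])   # close to the boundary region
x_far = np.array([10.0, 10.0])  # far from the boundary region
```

Since the internal RBF kernel satisfies K(x, x) = 1, the diagonal of the modified kernel is c(x)^2, so the metric is magnified exactly where c is large, near the boundary.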


2020 ◽  
Vol 16 (10) ◽  
pp. 155014772096383
Author(s):  
Yan Qiao ◽  
Xinhong Cui ◽  
Peng Jin ◽  
Wu Zhang

This article addresses the problem of outlier detection for wireless sensor networks. As observational data become increasingly high-dimensional and large-scale, it is increasingly difficult for existing techniques to perform outlier detection accurately and efficiently. Although dimensionality reduction tools (such as deep belief networks) have been used to compress high-dimensional data to support outlier detection, these methods may not achieve the desired performance because of the special distribution of the compressed data. Furthermore, because most existing classification methods must solve a quadratic optimization problem in their training stage, they do not perform well on large-scale datasets. In this article, we develop a new classification model called "deep belief network online quarter-sphere support vector machine," which combines a deep belief network with an online quarter-sphere one-class support vector machine. Based on this model, we first propose a training method that learns the radius of the quarter sphere by sorting. Then, an online testing method is proposed to perform online outlier detection without supervision. Finally, we compare the proposed method with the state of the art in extensive experiments. The results show that our method not only reduces the computational cost by three orders of magnitude but also improves detection accuracy by 3%–5%.
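A minimal sketch of the sorting-based radius computation for a quarter-sphere one-class SVM, assuming data preprocessed into the positive orthant so that anomalies have large norms. Because the quarter-sphere dual is linear rather than quadratic, the radius can be read off the sorted squared norms instead of being found by a QP solver. The nu value and the toy "sensor readings" are hypothetical, and this omits the deep-belief-network compression stage entirely.

```python
import numpy as np

def quarter_sphere_radius(X, nu=0.1):
    """Learn the quarter-sphere radius by sorting squared norms.

    Roughly a nu fraction of the training points is allowed to fall
    outside the sphere, so R is the (1 - nu) quantile of the norms.
    """
    norms_sq = np.sort(np.sum(np.asarray(X) ** 2, axis=1))
    k = int(np.ceil((1 - nu) * len(norms_sq))) - 1
    return float(np.sqrt(norms_sq[k]))

def detect_outliers(X, radius):
    """Flag points whose distance from the origin exceeds the radius."""
    return np.sqrt(np.sum(np.asarray(X) ** 2, axis=1)) > radius

# hypothetical 2-D readings: nine typical points plus one anomaly
X = np.array([[0.1, 0.2], [0.25, 0.1], [0.3, 0.3], [0.1, 0.4],
              [0.2, 0.5], [0.4, 0.2], [0.5, 0.1], [0.3, 0.5],
              [0.6, 0.4], [10.0, 10.0]])
radius = quarter_sphere_radius(X, nu=0.1)
flags = detect_outliers(X, radius)
```

Training here is a single sort, O(n log n), which is the source of the cost advantage over QP-based one-class models; new readings are then tested online by a single norm comparison.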

