Properties of Support Vector Machines

1998 ◽  
Vol 10 (4) ◽  
pp. 955-974 ◽  
Author(s):  
Massimiliano Pontil ◽  
Alessandro Verri

Support vector machines (SVMs) perform pattern recognition between two point classes by finding a decision surface determined by certain points of the training set, termed support vectors (SVs). This surface, which in some feature space of possibly infinite dimension can be regarded as a hyperplane, is obtained from the solution of a quadratic programming problem that depends on a regularization parameter. In this article, we study some mathematical properties of support vectors and show that the decision surface can be written as the sum of two orthogonal terms, the first depending only on the margin vectors (SVs lying on the margin), the second proportional to the regularization parameter. For almost all values of the parameter, this enables us to predict how the decision surface varies for small parameter changes. In the special but important case of a feature space of finite dimension m, we also show that there are at most m + 1 margin vectors and observe that m + 1 SVs are usually sufficient to fully determine the decision surface. For relatively small m, this latter result leads to a substantial reduction in the number of SVs.
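The distinction between margin vectors and other support vectors can be inspected directly in a standard solver. The following is an illustrative sketch (not taken from the paper), using scikit-learn's SVC on synthetic two-class data: margin vectors are the support vectors whose dual coefficient magnitude stays strictly below the regularization bound C.

```python
# Illustrative sketch only: identify margin vectors (0 < alpha_i < C) among
# the support vectors of a linear SVM on synthetic, well-separated data.
import numpy as np
from sklearn.svm import SVC

rng = np.random.RandomState(0)
X = np.vstack([rng.randn(40, 2) - 3, rng.randn(40, 2) + 3])
y = np.array([0] * 40 + [1] * 40)

C = 1.0
clf = SVC(kernel="linear", C=C).fit(X, y)

# dual_coef_ stores y_i * alpha_i for each support vector, so the absolute
# value recovers alpha_i; margin vectors satisfy alpha_i strictly below C.
alphas = np.abs(clf.dual_coef_[0])
on_margin = alphas < C - 1e-8
print("support vectors:", len(alphas))
print("margin vectors:", int(on_margin.sum()))
```

With a linear kernel the feature space here has dimension m = 2, so the article's bound predicts at most m + 1 = 3 margin vectors for data in generic position.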

2015 ◽  
Vol 2015 ◽  
pp. 1-8 ◽  
Author(s):  
Oliver Kramer

Cascade support vector machines have been introduced as an extension of classic support vector machines that allows fast training on large data sets. In this work, we combine cascade support vector machines with dimensionality-reduction-based preprocessing. The cascade principle allows fast learning by dividing the training set into subsets and merging the learning results via the support vectors found at each cascade level. Combined with dimensionality reduction as preprocessing, this results in a significant speedup, often without loss of classifier accuracy, while the high-dimensional counterparts of the low-dimensional support vectors are considered at each new cascade level. We analyze and compare various instantiations of dimensionality-reduction preprocessing and cascade SVMs with principal component analysis, locally linear embedding, and isometric mapping. The experimental analysis on various artificial and real-world benchmark problems covers cascade-specific parameters such as intermediate training set sizes and dimensionalities.
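A minimal sketch of the cascade idea, assuming a hypothetical two-level cascade with PCA as the preprocessing step (the authors' exact pipeline and the LLE/Isomap variants are not reproduced here): SVMs are trained on subsets, only their support vectors are kept, and a final SVM is trained on the union.

```python
# Hypothetical two-level cascade SVM with PCA preprocessing (a sketch of the
# principle, not the paper's implementation).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.svm import SVC

rng = np.random.RandomState(1)
X = np.vstack([rng.randn(200, 10) - 1, rng.randn(200, 10) + 1])
y = np.array([0] * 200 + [1] * 200)

# Dimensionality reduction as preprocessing.
X_low = PCA(n_components=2).fit_transform(X)

# Level 1: split the (shuffled) training set; collect support vectors per subset.
idx = rng.permutation(len(y))
sv_idx = []
for part in np.array_split(idx, 4):
    clf = SVC(kernel="rbf", C=1.0).fit(X_low[part], y[part])
    sv_idx.extend(part[clf.support_].tolist())  # map back to global indices

# Level 2: train the final classifier on the union of support vectors only.
sv_idx = np.array(sorted(set(sv_idx)))
final = SVC(kernel="rbf", C=1.0).fit(X_low[sv_idx], y[sv_idx])
print("points used at top level:", len(sv_idx), "of", len(y))
```

The speedup comes from the top-level SVM seeing only the union of lower-level support vectors rather than the full training set.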


Author(s):  
SAEID SANEI

Segmentation of natural textures has been investigated by developing a novel semi-supervised support vector machine (S3VM) algorithm with multiple constraints. Unlike conventional segmentation algorithms, the proposed method does not classify the textures themselves but instead classifies uniform-texture regions and boundary regions. Moreover, the overall algorithm requires no predefined training set, unlike other learning algorithms such as conventional SVMs. During the process, the images are first restored from high-spatial-frequency noise. Statistics of various orders are then measured for the textures within a sliding two-dimensional window. The K-means algorithm is used to initialise the clustering procedure by labelling part of the class members and setting the classifier parameters; at this stage, therefore, both a training set and a working set are available. A non-linear S3VM is then developed to exploit both sets and classify all the regions. The convex algorithm maximises a defined cost function subject to a number of constraints. The algorithm has been applied to combinations of a number of natural textures. It is demonstrated that the algorithm is robust, with negligible misclassification error; however, for complex textures there may be a minor misplacement of the edges.
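The initialisation stage can be sketched roughly as follows; the paper's convex S3VM with multiple constraints is not reproduced, the sliding-window texture statistics are replaced by synthetic features, and all thresholds are illustrative assumptions. K-means labels a confident subset (the "training set"), and an SVM trained on it classifies the remainder (the "working set").

```python
# Rough sketch of the K-means-initialised labelling stage only, on synthetic
# stand-ins for per-window texture statistics (not the paper's S3VM).
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

rng = np.random.RandomState(2)
F = np.vstack([rng.randn(150, 4) - 1.5, rng.randn(150, 4) + 1.5])

km = KMeans(n_clusters=2, n_init=10, random_state=2).fit(F)
dist = km.transform(F).min(axis=1)           # distance to the assigned centre
confident = dist < np.percentile(dist, 40)   # nearest 40% form the training set

svm = SVC(kernel="rbf").fit(F[confident], km.labels_[confident])
labels = svm.predict(F)                      # classify the working set as well
print("confident points:", int(confident.sum()), "of", len(F))
```

The 40% confidence cut-off is an assumption for illustration; the paper instead exploits both sets jointly inside a single constrained optimisation.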


Author(s):  
ZHAOWEI SHANG ◽  
YUAN YAN TANG ◽  
BIN FANG ◽  
JING WEN ◽  
YAT ZHOU ONG

The fusion of wavelet techniques and support vector machines (SVMs) has been studied intensively in recent years. Since the wavelet technique is the theoretical foundation of multiresolution analysis (MRA), it is worth investigating whether good performance can be obtained by combining MRA with SVMs for signal approximation. Based on the fact that the feature space of the SVM and the scale subspace in MRA can be viewed as the same reproducing kernel Hilbert space (RKHS), a new algorithm for multiresolution signal decomposition and approximation based on SVMs is proposed. The proposed algorithm, which approximates the signal hierarchically at different resolutions, yields smoother signal approximations than conventional MRA because it uses the approximation criterion of the SVM. Experiments illustrate that our algorithm achieves better approximation performance than MRA when applied to stationary and non-stationary signals.
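One hedged way to illustrate hierarchical SVM-based approximation, assuming a simple two-resolution scheme with RBF kernels of different widths (this is not the authors' RKHS construction, and all kernel parameters are illustrative): a wide-kernel SVR captures the coarse trend, and a narrow-kernel SVR fits the residual at the finer resolution.

```python
# Two-resolution signal approximation with support vector regression:
# coarse fit first, then a finer-scale fit of the residual (a sketch of the
# hierarchical idea, not the paper's algorithm).
import numpy as np
from sklearn.svm import SVR

t = np.linspace(0, 1, 200)[:, None]
signal = np.sin(2 * np.pi * t.ravel()) + 0.3 * np.sin(16 * np.pi * t.ravel())

coarse = SVR(kernel="rbf", gamma=5.0, C=10.0, epsilon=0.01).fit(t, signal)
resid = signal - coarse.predict(t)                     # detail left over
fine = SVR(kernel="rbf", gamma=500.0, C=10.0, epsilon=0.01).fit(t, resid)

approx = coarse.predict(t) + fine.predict(t)           # sum of the two levels
print("rmse coarse:", float(np.sqrt(np.mean(resid ** 2))))
print("rmse two-level:", float(np.sqrt(np.mean((signal - approx) ** 2))))
```

Each level plays the role of a scale subspace: adding the fine-scale term refines the coarse approximation, mirroring how MRA accumulates detail across resolutions.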


2016 ◽  
Vol 13 (10) ◽  
pp. 6524-6530 ◽  
Author(s):  
Wenbo Zhang ◽  
Hongbing Ji ◽  
Guisheng Liao ◽  
Zhenzhen Su

To obtain a more comprehensive and accurate result, a method of probabilistic outputs for multiclass support vector machines based on M-ary classification (POSVM) is proposed in this paper. Compared with conventional hard-decision multiclass support vector machines (SVMs), the proposed method retains almost all of the information contained in the samples, which is beneficial for post-processing. Furthermore, the conventional one-against-one method requires K(K − 1)/2 SVMs to partition K classes, whereas the proposed method requires only ⌈log2 K⌉ SVMs for the same problem. In addition, for cases where the classes are of different importance, a weighted SVM (WSVM) based on POSVM is proposed, which obtains more reasonable results in pattern recognition applications. The experimental results show that the proposed method provides a more comprehensive and accurate result than conventional multiclass SVMs, and that it can be implemented easily by organizing a set of SVMs with a simple structure, especially for problems with a large number of classes. Moreover, the results of the WSVM method are more reasonable and efficient than those of the POSVM, and different classification rules can be designed by changing the weights according to the case at hand. The complexity of implementing weighted classifiers is therefore greatly reduced, and the design process is more flexible.
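The ⌈log2 K⌉ saving can be illustrated with a plain, non-probabilistic M-ary coding sketch (the paper's probabilistic-output machinery and weighting scheme are not reproduced): each class index is written in binary, one binary SVM is trained per bit, and predicted bits are decoded back into a class index.

```python
# M-ary coding sketch: ceil(log2 K) binary SVMs for a K-class problem,
# on synthetic well-separated clusters (illustrative parameters throughout).
import math
import numpy as np
from sklearn.svm import SVC

rng = np.random.RandomState(3)
K = 5
centers = np.array([[0, 0], [6, 0], [0, 6], [6, 6], [3, 3]])
X = np.vstack([c + 0.5 * rng.randn(60, 2) for c in centers])
y = np.repeat(np.arange(K), 60)

n_bits = math.ceil(math.log2(K))      # 3 SVMs instead of K(K-1)/2 = 10
bit_clfs = []
for b in range(n_bits):
    bits = (y >> b) & 1               # b-th bit of each class index
    bit_clfs.append(SVC(kernel="rbf", C=10.0, gamma=0.5).fit(X, bits))

# Decode: reassemble the predicted bits into class indices.
pred = sum(clf.predict(X) << b for b, clf in enumerate(bit_clfs))
print("n_bits:", n_bits, "train accuracy:", float(np.mean(pred == y)))
```

For K = 5 this trains 3 machines instead of the 10 required by one-against-one; the gap widens quickly as K grows.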

