Incremental Subclass Support Vector Machine

2019 ◽  
Vol 28 (07) ◽  
pp. 1950020
Author(s):  
Amine Besrour ◽  
Riadh Ksantini

Support Vector Machine (SVM) is a very competitive linear classifier based on a convex optimization problem, where support vectors fully describe the decision boundary. Hence, SVM is sensitive to data spread, does not take into account the existence of class subclasses, and does not minimize data dispersion to improve classification performance. The Kernel Subclass SVM (KSSVM) was therefore proposed to handle multimodal data and to minimize data dispersion. Nevertheless, KSSVM has difficulties classifying sequentially obtained data and handling large-scale datasets, since it is based on batch learning. For this reason, we propose a novel incremental KSSVM (iKSSVM) which handles dynamic and large data properly. The iKSSVM is still based on a convex optimization problem and incrementally minimizes data dispersion within and between data subclasses, in order to improve discriminative power and classification performance. An extensive comparative evaluation of the iKSSVM against the batch KSSVM, as well as other contemporary incremental classifiers, on real-world datasets clearly shows its superiority in terms of classification accuracy.
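The subclass-dispersion idea can be illustrated with a minimal sketch (not the authors' incremental formulation): each class is split into subclasses by k-means, a within-subclass scatter matrix is pooled, and the data are whitened by that scatter before a standard linear SVM is trained. The number of subclasses and the regularizer are hypothetical choices for illustration only.

```python
# Minimal sketch of the subclass-scatter idea behind (i)KSSVM -- not the
# authors' exact incremental formulation.  n_subclasses and reg are
# illustrative parameters.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import LinearSVC

def within_subclass_scatter(X, y, n_subclasses=2):
    """Pool the scatter of each k-means subclass of each class."""
    d = X.shape[1]
    Sw = np.zeros((d, d))
    for label in np.unique(y):
        Xc = X[y == label]
        clusters = KMeans(n_clusters=n_subclasses, n_init=10).fit_predict(Xc)
        for c in np.unique(clusters):
            Xs = Xc[clusters == c]
            Xs = Xs - Xs.mean(axis=0)
            Sw += Xs.T @ Xs
    return Sw / len(X)

def fit_subclass_svm(X, y, n_subclasses=2, reg=1e-3):
    """Whiten by the within-subclass scatter, then fit a linear SVM."""
    Sw = within_subclass_scatter(X, y, n_subclasses)
    vals, vecs = np.linalg.eigh(Sw + reg * np.eye(Sw.shape[0]))
    W = vecs @ np.diag(vals ** -0.5) @ vecs.T     # Sw^{-1/2}
    clf = LinearSVC(C=1.0).fit(X @ W, y)
    return clf, W
```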

2021 ◽  
Vol 40 (1) ◽  
pp. 1481-1494
Author(s):  
Geng Deng ◽  
Yaoguo Xie ◽  
Xindong Wang ◽  
Qiang Fu

Many classification problems carry shape information about input features, such as monotonicity, convexity, and concavity. In this research, we propose a new classifier, called the Shape-Restricted Support Vector Machine (SR-SVM), which uses this component-wise shape information to enhance classification accuracy. A vast research literature exists on monotonic classification, covering monotonic or ordinal shapes. Our proposed classifier extends this to convex and concave feature types, and to combinations of these types. While the standard SVM uses linear separating hyperplanes, our novel SR-SVM essentially constructs non-parametric and nonlinear separating surfaces subject to component-wise shape restrictions. We formulate the SR-SVM classifier as a convex optimization problem and solve it using an active-set algorithm. The approach applies basis function expansions to the input and effectively reuses the standard SVM solver. We illustrate our methodology using simulated and real-world examples, and show that SR-SVM improves classification performance with the additional shape information of the input.
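A sketch of the shape-restriction idea for a single monotone-increasing feature: a non-decreasing step basis with non-negative weights yields a monotone decision function. The knots, the squared hinge loss, and the generic bound-constrained optimizer below are stand-ins for the paper's basis expansion and active-set algorithm, chosen only to keep the example self-contained.

```python
# Sketch of a shape-restricted SVM for one monotone-increasing feature.
# Non-negative weights on a step basis make the decision function
# non-decreasing; solved with a generic bound-constrained optimiser.
import numpy as np
from scipy.optimize import minimize

def monotone_basis(x, knots):
    """B[i, j] = 1 if x[i] >= knots[j]; beta >= 0 => non-decreasing f."""
    return (x[:, None] >= knots[None, :]).astype(float)

def fit_monotone_svm(x, y, n_knots=10, C=1.0):
    knots = np.quantile(x, np.linspace(0.05, 0.95, n_knots))
    B = monotone_basis(x, knots)

    def objective(params):
        beta, b = params[:-1], params[-1]
        margin = y * (B @ beta + b)
        loss = np.maximum(0.0, 1.0 - margin) ** 2      # squared hinge
        return 0.5 * beta @ beta + C * loss.sum()

    bounds = [(0.0, None)] * n_knots + [(None, None)]  # beta >= 0, b free
    res = minimize(objective, np.zeros(n_knots + 1),
                   bounds=bounds, method="L-BFGS-B")
    return res.x[:-1], res.x[-1], knots
```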


2013 ◽  
Vol 2013 ◽  
pp. 1-7 ◽  
Author(s):  
Qing Wu ◽  
Wenqing Wang

The support vector machine (SVM) has been applied very successfully in a variety of classification systems. We attempt to solve the primal programming problems of SVM by converting them into smooth unconstrained minimization problems. In this paper, a new twice continuously differentiable piecewise-smooth function is proposed to approximate the plus function, yielding a piecewise-smooth support vector machine (PWSSVM). The novel method can efficiently handle large-scale and high-dimensional problems. Theoretical analysis demonstrates its advantages in efficiency and precision over other smoothing functions. PWSSVM is solved using the fast Newton-Armijo algorithm. Experimental results are given to show the training speed and classification performance of our approach.
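The general smoothing strategy can be sketched as follows: the non-smooth plus function max(0, x) in the SVM primal is replaced by a differentiable surrogate and the resulting unconstrained problem is minimized directly. The softplus surrogate and the BFGS solver below are stand-ins for the paper's piecewise-smooth function and Newton-Armijo algorithm, used only to keep the sketch short and runnable.

```python
# Sketch of smoothing the SVM primal: replace max(0, x) by a smooth
# surrogate and minimise the unconstrained objective.  Softplus and BFGS
# are illustrative substitutes for the paper's function and solver.
import numpy as np
from scipy.optimize import minimize

def softplus(x, eps=0.1):
    """Smooth approximation of max(0, x); approaches the plus function as eps -> 0."""
    return eps * np.logaddexp(0.0, x / eps)

def fit_smooth_svm(X, y, C=1.0, eps=0.1):
    n, d = X.shape

    def objective(params):
        w, b = params[:d], params[d]
        margins = 1.0 - y * (X @ w + b)
        return 0.5 * w @ w + C * np.sum(softplus(margins, eps) ** 2)

    res = minimize(objective, np.zeros(d + 1), method="BFGS")
    return res.x[:d], res.x[d]
```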


2007 ◽  
Vol 19 (5) ◽  
pp. 1155-1178 ◽  
Author(s):  
Olivier Chapelle

Most literature on support vector machines (SVMs) concentrates on the dual optimization problem. In this letter, we point out that the primal problem can also be solved efficiently for both linear and nonlinear SVMs and that there is no reason to ignore this possibility. On the contrary, from the primal point of view, new families of algorithms for large-scale SVM training can be investigated.
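For the linear case, the primal point of view amounts to minimizing the regularized loss directly in the weight vector. A minimal sketch, assuming the squared hinge loss and dense data small enough to solve the Newton system exactly (the bias term is omitted for brevity):

```python
# Minimal sketch of primal Newton training for a linear SVM with the
# squared hinge loss: each step solves the Newton system restricted to the
# current set of margin violators.
import numpy as np

def primal_newton_svm(X, y, lam=1.0, n_iter=20):
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(n_iter):
        margins = y * (X @ w)
        sv = margins < 1                      # current margin violators
        # Gradient and Hessian of lam*||w||^2 + sum_sv (1 - y_i w.x_i)^2
        grad = 2 * lam * w - 2 * X[sv].T @ (y[sv] * (1 - margins[sv]))
        hess = 2 * lam * np.eye(d) + 2 * X[sv].T @ X[sv]
        step = np.linalg.solve(hess, grad)
        w -= step
        if np.linalg.norm(step) < 1e-8:
            break
    return w
```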


2020 ◽  
Vol 34 (04) ◽  
pp. 6981-6988
Author(s):  
Zhou Zhai ◽  
Bin Gu ◽  
Xiang Li ◽  
Heng Huang

The robust support vector machine (RSVM) has been shown to markedly improve the generalization performance of the support vector machine in noisy environments. Unfortunately, in order to handle the non-convexity induced by the ramp loss in RSVM, existing RSVM solvers often adopt the DC programming framework, which is computationally inefficient because it runs multiple outer loops. This hinders the application of RSVM to large-scale problems. Safe sample screening, which allows training samples to be excluded before or early in the training process, is an effective way to greatly reduce computational time. However, existing safe sample screening algorithms are limited to convex optimization problems, while RSVM is a non-convex problem. To address this challenge, in this paper we propose two safe sample screening rules for RSVM based on the framework of the concave-convex procedure (CCCP). Specifically, we provide a screening rule for the inner solver of CCCP and another rule for propagating screened samples between two successive solvers of CCCP. To the best of our knowledge, this is the first work to apply safe sample screening to a non-convex optimization problem. More importantly, we provide a safety guarantee for our sample screening rules for RSVM. Experimental results on a variety of benchmark datasets verify that our safe sample screening rules can significantly reduce computational time.
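The CCCP structure that such screening rules attach to can be sketched as follows: the ramp loss max(0, 1-z) - max(0, s-z) is split into convex and concave parts, the concave part is linearized at the current iterate, and each resulting convex subproblem is solved in turn. The subgradient inner solver and the step sizes below are illustrative; the screening rules themselves are not reproduced.

```python
# Sketch of the CCCP outer loop for a ramp-loss linear SVM.  Each convex
# subproblem (hinge loss plus the linearised concave term) is solved here
# by plain subgradient descent for simplicity.
import numpy as np

def cccp_ramp_svm(X, y, C=1.0, s=-1.0, outer_iters=5, inner_iters=1000, lr=1e-3):
    n, d = X.shape
    w, b = np.zeros(d), 0.0
    for _ in range(outer_iters):
        # Linearise the concave part: beta_i = C if y_i f(x_i) < s else 0.
        beta = C * ((y * (X @ w + b)) < s)
        for _ in range(inner_iters):
            margins = y * (X @ w + b)
            viol = margins < 1
            # Subgradient of 0.5||w||^2 + C*sum(hinge) + sum(beta_i*y_i*f(x_i))
            gw = w - C * (X[viol].T @ y[viol]) + X.T @ (beta * y)
            gb = -C * y[viol].sum() + (beta * y).sum()
            w -= lr * gw
            b -= lr * gb
    return w, b
```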


2020 ◽  
Vol 39 (3) ◽  
pp. 3749-3767
Author(s):  
Ting Ke ◽  
Min Li ◽  
Lidong Zhang ◽  
Hui Lv ◽  
Xuechun Ge

In some real applications, only a limited number of labeled positive examples and many unlabeled examples are available, but there are no negative examples. Such learning is termed positive and unlabeled (PU) learning. PU learning algorithms have been studied extensively in recent years. However, the classical ones based on Support Vector Machines (SVMs) assume that the labeled positive data are independent and identically distributed (i.i.d.) and that the sample size is large enough. This leads to two obvious shortcomings. On the one hand, performance is not satisfactory, especially when the number of labeled positive examples is small. On the other hand, classification results are not optimistic when the datasets are non-i.i.d. For this reason, this paper proposes a novel SVM classifier that uses the Chebyshev distance to measure the empirical risk, together with an efficient iterative algorithm, named L∞-BSVM for short. L∞-BSVM has the following merits: (1) it allows all sample points to participate in learning, which improves classification performance, especially when the amount of labeled data is small; (2) it minimizes the distance of the sample points farthest from the hyperplane (the outliers in non-i.i.d. data), so that outliers are sufficiently taken into consideration; (3) its iterative algorithm can solve large-scale optimization problems with low time complexity and ensures convergence to the optimal solution. Finally, extensive experiments on three types of datasets (artificial non-i.i.d. datasets, fault diagnosis of railway turnouts with few labeled abnormal-turnout examples, and six benchmark real-world datasets) verify the above claims and demonstrate that our classifier is much better than state-of-the-art competitors such as B-SVM, LUHC, Pulce, B-LSSVM, and NB.
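The Chebyshev-distance (L∞) empirical risk can be sketched as an SVM in which a single slack bound replaces the usual sum of slacks, so the worst margin violation is what gets minimized. The small constrained problem below illustrates that risk measure only; the authors' full PU formulation, biased treatment of unlabeled data, and iterative solver are not reproduced, and the SLSQP solver is an illustrative choice.

```python
# Sketch of an L_inf (Chebyshev) empirical-risk SVM: minimise
# 0.5||w||^2 + C*t subject to y_i(w.x_i + b) >= 1 - t and t >= 0,
# i.e. a single slack bound t caps the largest violation.
import numpy as np
from scipy.optimize import minimize

def fit_linf_svm(X, y, C=1.0):
    n, d = X.shape

    def objective(params):                    # params = (w, b, t)
        w, t = params[:d], params[-1]
        return 0.5 * w @ w + C * t

    constraints = [
        # y_i (w.x_i + b) >= 1 - t for every sample
        {"type": "ineq",
         "fun": lambda p: y * (X @ p[:d] + p[d]) - 1.0 + p[-1]},
        # t >= 0
        {"type": "ineq", "fun": lambda p: p[-1]},
    ]
    res = minimize(objective, np.zeros(d + 2),
                   method="SLSQP", constraints=constraints)
    return res.x[:d], res.x[d], res.x[-1]     # w, b, t
```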


2011 ◽  
Vol 181-182 ◽  
pp. 830-835
Author(s):  
Min Song Li

Latent Semantic Indexing (LSI) is an effective feature extraction method that can capture the underlying latent semantic structure between words in documents. However, it is probably not the most appropriate method for selecting a feature subspace for text categorization, since it orders the extracted features by their variance, not by their classification power. We propose a method based on the support vector machine to extract features and select a latent semantic subspace suited for classification. Experimental results indicate that the method improves classification performance with a more compact representation.
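The idea can be sketched with standard tools: compute an LSI decomposition of the term-document matrix, then rank the latent dimensions by the weight a linear SVM assigns to them rather than by their singular values, and keep only the top-ranked ones. The component counts below are illustrative, and this ranking heuristic is a simplification of the paper's selection procedure.

```python
# Sketch of SVM-guided LSI dimension selection: extract latent dimensions
# with truncated SVD (LSI), then keep the dimensions the linear SVM weights
# most heavily, rather than the highest-variance ones.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.svm import LinearSVC

def svm_selected_lsi(docs, labels, n_lsi=200, n_keep=50):
    X = TfidfVectorizer().fit_transform(docs)
    svd = TruncatedSVD(n_components=n_lsi)
    Z = svd.fit_transform(X)                      # LSI representation
    clf = LinearSVC(C=1.0).fit(Z, labels)
    # Rank latent dimensions by SVM weight magnitude, not by variance.
    importance = np.abs(clf.coef_).sum(axis=0)
    keep = np.argsort(importance)[::-1][:n_keep]
    return Z[:, keep], keep
```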


2016 ◽  
Vol 79 (1) ◽  
Author(s):  
Suhail Khokhar ◽  
A. A. Mohd Zin ◽  
M. A. Bhayo ◽  
A. S. Mokhtar

Monitoring power quality (PQ) disturbances in a systematic and automated way is an important issue for preventing detrimental effects on the power system. The development of new methods for the automatic recognition of single and hybrid PQ disturbances is at present a major concern. This paper presents a combined wavelet transform and support vector machine (WT-SVM) approach for the automatic classification of single and hybrid PQ disturbances. The proposed approach is applied to synthetic models of various single and hybrid PQ signals. Suitable features of the PQ waveforms are first extracted using the discrete wavelet transform, and the SVM then classifies the type of PQ disturbance based on these features. The classification performance of the proposed algorithm is also compared with wavelet-based radial basis function, probabilistic, and feed-forward neural networks. The experimental results show that the recognition rate of the proposed WT-SVM classification system is more accurate and much better than that of the other classifiers.
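A minimal sketch of such a WT-SVM pipeline, assuming PyWavelets for the decomposition: each waveform is decomposed with a discrete wavelet transform, sub-band energies are used as features, and an RBF-kernel SVM performs the classification. The wavelet ('db4'), decomposition level, and energy features are assumptions, not necessarily the exact choices made in the paper.

```python
# Sketch of a WT-SVM pipeline: DWT sub-band energies as features, RBF SVM
# as the classifier.  Wavelet, level, and features are illustrative.
import numpy as np
import pywt
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def wavelet_energy_features(signal, wavelet="db4", level=5):
    """Energy of each DWT sub-band (approximation + details)."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    return np.array([np.sum(c ** 2) for c in coeffs])

def fit_wt_svm(signals, labels):
    X = np.array([wavelet_energy_features(s) for s in signals])
    clf = make_pipeline(StandardScaler(),
                        SVC(kernel="rbf", C=10.0, gamma="scale"))
    return clf.fit(X, labels)
```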

