Fuzzy Rough Support Vector Machine for Data Classification

2016 ◽  
Vol 5 (2) ◽  
pp. 26-53 ◽  
Author(s):  
Arindam Chaudhuri

In this paper, the classification task is performed by the fuzzy rough support vector machine (FRSVM), a variant of FSVM and MFSVM. The fuzzy rough set takes care of sensitivity to noisy samples and handles imprecision. The membership function is developed as a function of the center and radius of each class in feature space, and it plays an important role in shaping the decision surface. The training samples may be either linearly or nonlinearly separable. For nonlinearly separable samples, the input space is mapped into a high-dimensional feature space to compute the separating surface. Different input points make unique contributions to the decision surface. The performance of the classifier is assessed in terms of the number of support vectors, and the effect of variability in prediction and generalization of FRSVM is examined with respect to values of the regularization parameter C. FRSVM effectively resolves class-imbalance and overlapping-class problems, generalizes to unseen data, and relaxes the dependency between features and labels. Experimental results on both synthetic and real datasets confirm that FRSVM is superior to existing SVMs in reducing the effect of outliers.
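The abstract does not give the exact membership formula; a common choice in the FSVM literature, which matches the description "a function of the center and radius of each class in feature space", is a linear decay of membership with distance from the class center. A minimal sketch under that assumption (the small constant `delta` avoids zero membership at the boundary):

```python
import numpy as np

def fuzzy_membership(X, center, radius, delta=1e-6):
    """Membership of each sample as a function of its distance to the
    class center, relative to the class radius. Points near the center
    get membership close to 1; points near the boundary approach 0,
    down-weighting likely outliers in the SVM objective."""
    d = np.linalg.norm(X - center, axis=1)
    return 1.0 - d / (radius + delta)

# Toy class: membership shrinks as samples drift from the center.
X = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0]])
center = X.mean(axis=0)                                 # (1, 0)
radius = np.max(np.linalg.norm(X - center, axis=1))     # 1.0
m = fuzzy_membership(X, center, radius)                 # central point gets ~1
```

In an FSVM these memberships scale each sample's slack penalty, so a noisy point far from its class center contributes less to the decision surface.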

2006 ◽  
Vol 18 (6) ◽  
pp. 744-750
Author(s):  
Ryouta Nakano ◽  
Kazuhiro Hotta ◽  
Haruhisa Takahashi

This paper presents an object detection method using independent local feature extractors. Since objects are composed of combinations of characteristic parts, a good object detector can be developed if local parts specialized for a detection target are derived automatically from training samples. To do this, we use Independent Component Analysis (ICA), which decomposes a signal into independent elementary signals, and use the basis vectors derived by ICA as independent local feature extractors specialized for the detection target. These feature extractors are applied to a candidate area, and their outputs are used in classification. However, the dimensionality of the extracted independent local features is very high. To reduce it efficiently, we use Higher-order Local AutoCorrelation (HLAC) features, which capture the relations between neighboring features and may be more effective for object detection than the raw independent local features. To classify detection targets and non-targets, we use a Support Vector Machine (SVM). The proposed method is applied to a car detection problem, and superior performance is obtained in comparison with Principal Component Analysis (PCA).
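The HLAC idea can be illustrated independently of the ICA front end. Up to first order, each HLAC feature correlates a feature map with a shifted copy of itself, summed over all positions, so the output dimension is fixed by the displacement set rather than by the map size. A minimal sketch (the particular displacement set is an illustrative assumption, not the paper's mask set):

```python
import numpy as np

def hlac_first_order(fmap, displacements=((0, 1), (1, 0), (1, 1))):
    """Zeroth- and first-order higher-order local autocorrelation:
    for each displacement a, accumulate sum_r f(r) * f(r + a).
    The result has one entry per displacement plus the plain sum,
    regardless of the spatial size of the input map."""
    H, W = fmap.shape
    feats = [fmap.sum()]                  # zeroth-order term
    for dy, dx in displacements:
        a = fmap[:H - dy, :W - dx]        # f(r)
        b = fmap[dy:, dx:]                # f(r + a), same shape as a
        feats.append(np.sum(a * b))
    return np.array(feats)

fmap = np.arange(9, dtype=float).reshape(3, 3)
v = hlac_first_order(fmap)    # 4-dim feature vector from a 3x3 map
```

Because the features are sums over all positions, they are shift-invariant, which is what makes them a compact summary of how neighboring local-filter responses co-occur.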


2021 ◽  
Author(s):  
Takumi Sonoda ◽  
Masaya Nakata

Surrogate-assisted multi-objective evolutionary algorithms have advanced the field of computationally expensive optimization, but their progress is often restricted to low-dimensional problems. This manuscript presents a multiple-classifiers-assisted evolutionary algorithm based on decomposition, adapted to high-dimensional expensive problems on the basis of two insights. First, compared with approximation-based surrogates, the accuracy of classification-based surrogates is robust even with few high-dimensional training samples. Second, multiple local classifiers can hedge against the risk of over-fitting. Accordingly, the proposed algorithm builds multiple support vector machine classifiers on top of a decomposition-based multi-objective algorithm, wherein each local classifier is trained for a corresponding scalarization function. Experimental results statistically confirm that the proposed algorithm is competitive with state-of-the-art algorithms and computationally efficient as well.
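The core mechanism, one local classifier per scalarization function, can be sketched as follows. This is a stand-in illustration, not the authors' implementation: the weight vectors, the Tchebycheff scalarization, the median-split labeling, and the RBF kernel are all our assumptions about one plausible instantiation.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

def tchebycheff(F, w, z):
    """Tchebycheff scalarization g(x | w, z) = max_i w_i * |f_i(x) - z_i|."""
    return np.max(w * np.abs(F - z), axis=1)

# Already-evaluated population: 40 solutions, 10-D decisions, 2 objectives.
X = rng.random((40, 10))
F = rng.random((40, 2))
z = F.min(axis=0)                     # ideal point estimate

# One local classifier per weight vector (sub-problem): label "promising"
# the solutions whose scalarized value is below the sub-problem's median.
weights = [np.array([0.2, 0.8]), np.array([0.5, 0.5]), np.array([0.8, 0.2])]
classifiers = []
for w in weights:
    g = tchebycheff(F, w, z)
    y = (g < np.median(g)).astype(int)
    classifiers.append(SVC(kernel="rbf").fit(X, y))

# A new candidate is pre-screened by its sub-problem's classifier, so a
# true (expensive) evaluation is only spent on predicted improvements.
candidate = rng.random((1, 10))
votes = [clf.predict(candidate)[0] for clf in classifiers]
```

The point of the multiple classifiers is the hedge noted in the abstract: even if one local classifier over-fits its few training samples, candidates are judged per sub-problem, so a single bad surrogate does not misdirect the whole search.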


Author(s):  
Minghe Sun

As machine learning techniques, support vector machines are quadratic programming models and represent a recent revolutionary development in classification analysis. Primal and dual formulations of support vector machine models for both two-class and multi-class classification are discussed, with emphasis on the dual formulations in high-dimensional feature spaces using inner-product kernels. Nonlinear classification or discriminant functions in high-dimensional feature spaces can be constructed through inner-product kernels without actually mapping the data from the input space to those feature spaces. Furthermore, the size of the dual formulation is independent of both the dimension of the input space and the kernel used. Two illustrative examples, one for two-class and the other for multi-class classification, demonstrate the formulations of these SVM models.
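The size claim is easy to see concretely: the dual works only with the n-by-n Gram matrix of kernel values, so its size is fixed by the number of samples. A small numpy sketch with the standard RBF kernel (the sample counts and gamma value are arbitrary choices for illustration):

```python
import numpy as np

def rbf_gram(X, gamma=0.5):
    """Inner-product kernel K(x_i, x_j) = exp(-gamma * ||x_i - x_j||^2).
    The dual formulation touches only this n x n matrix; the implicit
    high-dimensional feature map is never computed explicitly."""
    sq = np.sum(X**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    return np.exp(-gamma * d2)

rng = np.random.default_rng(1)
K_low  = rbf_gram(rng.random((5, 2)))     # 5 samples in a 2-D input space
K_high = rbf_gram(rng.random((5, 500)))   # 5 samples in a 500-D input space
# Both Gram matrices are 5 x 5: the dual's size depends on n, not on the
# input dimension (and not on which kernel produced the entries).
```

This is the "kernel trick" the text describes: the nonlinear discriminant is a kernel expansion over the support vectors, built entirely from entries of such a matrix.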


2019 ◽  
Vol 10 (4) ◽  
pp. 25-37
Author(s):  
Ayodele Lasisi ◽  
Nasser Tairan ◽  
Rozaida Ghazali ◽  
Wali Khan Mashwani ◽  
Sultan Noman Qasem ◽  
...  

The need to accurately predict and make sound decisions regarding crude oil prices motivates an alternative algorithmic method based on real-valued negative selection with variable-sized detectors (V-Detectors), incorporating fuzzy-rough set feature selection (FRFS). The objective of this study is to enhance the performance of V-Detectors on crude oil prices using FRFS. Applying FRFS prunes the number of features by retaining only the most informative and critical ones; the V-Detectors algorithm is then trained and tested on the selected features, with different radius values applied. Experimental outcomes, in comparison with established algorithms such as support vector machines, naïve Bayes, multi-layer perceptron, J48, non-nested generalized exemplars, IBk, fuzzy-rough NN, and vaguely quantified nearest neighbor, demonstrate that FRFS-V-Detectors is proficient and valuable for insightful knowledge of crude oil prices. Thus, it can assist in establishing oil price market policies on an international scale.
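The V-Detectors idea itself is compact: random candidate points that fall outside the "self" (normal) region become detectors, and each detector's radius grows to the distance of its nearest self sample. A minimal sketch, where the self-radius threshold, data dimensions, and detector count are illustrative assumptions rather than the paper's settings:

```python
import numpy as np

rng = np.random.default_rng(2)
SELF_RADIUS = 0.05                      # assumed threshold around self samples

# "Self" set: normalized feature vectors of normal observations in [0, 1]^3.
self_set = rng.random((50, 3))

def generate_v_detectors(self_set, n_detectors=20, max_tries=2000):
    """Real-valued negative selection with variable-sized detectors:
    a random point is kept as a detector only if it lies outside the
    self region; its radius is the distance to the nearest self sample
    minus SELF_RADIUS, so each detector expands to fill the local gap."""
    detectors = []
    for _ in range(max_tries):
        c = rng.random(self_set.shape[1])
        d = np.min(np.linalg.norm(self_set - c, axis=1))
        if d > SELF_RADIUS:                       # outside self: keep it
            detectors.append((c, d - SELF_RADIUS))
        if len(detectors) == n_detectors:
            break
    return detectors

detectors = generate_v_detectors(self_set)

def is_anomalous(x, detectors):
    """A sample is flagged when any detector's ball covers it."""
    return any(np.linalg.norm(x - c) <= r for c, r in detectors)
```

In the paper's pipeline, FRFS would run first, so the detectors are generated in the reduced feature space rather than over all raw inputs.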


Sensors ◽  
2018 ◽  
Vol 18 (9) ◽  
pp. 3153 ◽  
Author(s):  
Fei Deng ◽  
Shengliang Pu ◽  
Xuehong Chen ◽  
Yusheng Shi ◽  
Ting Yuan ◽  
...  

Deep learning techniques have boosted the performance of hyperspectral image (HSI) classification. In particular, convolutional neural networks (CNNs) have shown performance superior to that of conventional machine learning algorithms. Recently, a novel type of neural network called the capsule network (CapsNet) was presented to improve on the most advanced CNNs. In this paper, we present a modified two-layer CapsNet with limited training samples for HSI classification, inspired by the comparability and simplicity of shallower deep learning models. The presented CapsNet is trained using two real HSI datasets, i.e., the PaviaU (PU) and SalinasA datasets, representing complex and simple datasets, respectively, which are used to investigate the robustness and representation of every model or classifier. In addition, a comparable paradigm of network architecture design is proposed for the comparison of CNN and CapsNet. Experiments demonstrate that CapsNet shows better accuracy and convergence behavior on the complex data than the state-of-the-art CNN. For CapsNet using the PU dataset, the Kappa coefficient, overall accuracy, and average accuracy are 0.9456, 95.90%, and 96.27%, respectively, compared with the corresponding values of 0.9345, 95.11%, and 95.63% yielded by the CNN. Moreover, we observed that CapsNet has much higher confidence in its predicted probabilities; this finding is analyzed and discussed with probability maps and uncertainty analysis. Relative to the existing literature, CapsNet provides promising results and explicit merits in comparison with CNN and two baseline classifiers, i.e., random forests (RFs) and support vector machines (SVMs).
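The "higher confidence" observation has a structural explanation in the standard CapsNet design: a capsule's output vector is passed through the squashing nonlinearity, whose length lies in [0, 1) and is read directly as the probability that the entity the capsule represents is present. A minimal numpy sketch of that nonlinearity (from the original CapsNet formulation, not this paper's modified architecture):

```python
import numpy as np

def squash(s, eps=1e-9):
    """CapsNet squashing: v = (||s||^2 / (1 + ||s||^2)) * (s / ||s||).
    Direction is preserved; the length is compressed into [0, 1) so it
    can serve as an existence probability for the capsule's entity."""
    norm2 = np.sum(s**2, axis=-1, keepdims=True)
    return (norm2 / (1.0 + norm2)) * s / np.sqrt(norm2 + eps)

short = squash(np.array([0.1, 0.0]))   # weak evidence  -> length near 0
long_ = squash(np.array([10.0, 0.0]))  # strong evidence -> length near 1
```

Because class scores are vector lengths rather than softmax logits, the probability maps and uncertainty analysis discussed above read these lengths off directly per class.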

