Imbalanced data classification using complementary fuzzy support vector machine techniques and SMOTE

Author(s): Ratchakoon Pruengkarn, Kok Wai Wong, Chun Che Fung
2018, Vol. 12 (3), pp. 341-347

Author(s): Feng Wang, Shaojiang Liu, Weichuan Ni, Zhiming Xu, Zemin Qiu, ...
2014, Vol. 47 (9), pp. 3158-3167

Author(s): Yuan-Hai Shao, Wei-Jie Chen, Jing-Jing Zhang, Zhen Wang, Nai-Yang Deng
2017, Vol. 14 (3), pp. 579-595

Author(s): Lu Cao, Hong Shen

Imbalanced datasets exist widely in real life, and identifying the minority class in such datasets tends to be the focus of classification. As a variant of the support vector machine (SVM), the twin support vector machine (TWSVM) provides an effective technique for data classification. However, TWSVM assumes a relatively balanced training set and class distribution in order to improve classification accuracy over the whole dataset, and it is therefore not effective for imbalanced data classification problems. In this paper, we propose to combine a re-sampling technique, which uses oversampling and under-sampling to balance the training data, with TWSVM to deal with imbalanced data classification. Experimental results show that the proposed approach outperforms other state-of-the-art methods.
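
The abstract describes the approach only at a high level, so the sketch below illustrates the general re-sampling idea with the imbalanced-learn library: SMOTE oversampling of the minority class followed by random under-sampling of the majority class to balance the training set, with a standard RBF-kernel SVC standing in for TWSVM (which has no scikit-learn implementation). The dataset sizes and sampling ratios are illustrative assumptions, not values from the paper.

```python
# Minimal sketch: balance the training data with combined over-/under-sampling,
# then train an SVM. A standard SVC stands in for TWSVM here; all parameter
# values are illustrative assumptions.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from imblearn.over_sampling import SMOTE
from imblearn.under_sampling import RandomUnderSampler
from imblearn.metrics import geometric_mean_score

# Synthetic imbalanced data (roughly 10% minority class).
X, y = make_classification(n_samples=2000, n_features=20,
                           weights=[0.9, 0.1], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, stratify=y, random_state=0)

# Oversample the minority class part of the way, then under-sample the
# majority class so the training set ends up balanced.
X_res, y_res = SMOTE(sampling_strategy=0.5,
                     random_state=0).fit_resample(X_train, y_train)
X_res, y_res = RandomUnderSampler(sampling_strategy=1.0,
                                  random_state=0).fit_resample(X_res, y_res)

clf = SVC(kernel="rbf", gamma="scale").fit(X_res, y_res)
print("g-mean:", geometric_mean_score(y_test, clf.predict(X_test)))
```

Evaluating with the geometric mean rather than plain accuracy reflects the paper's focus on minority-class performance; accuracy alone can look high even when every minority sample is misclassified.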


2021, Vol. 2021, pp. 1-12
Author(s): Chunye Wu, Nan Wang, Yu Wang

Imbalanced data classification is gaining importance in data mining and machine learning. The minority class recall rate requires special treatment in fields such as medical diagnosis, information security, industry, and computer vision. This paper proposes a new strategy and algorithm based on a cost-sensitive support vector machine to raise the minority class recall rate to 1, because the misclassification of even a few samples can cause serious losses in some practical problems. The proposed method employs a margin compensation that makes the margin lopsided, allowing the decision boundary to drift; when the boundary reaches a certain position, the minority class samples are generalized enough to meet the requirement of a recall rate of 1. In the experiments, the effects of different parameters on the performance of the algorithm were analyzed, and the optimal parameters for a recall rate of 1 were determined. The experimental results reveal that, for the imbalanced data classification problem, the traditional definite-cost classification scheme and models selected by the area under the receiver operating characteristic curve criterion rarely achieve a recall rate of 1. The new strategy can yield a minority recall of 1 on imbalanced data as long as the loss on the majority class is acceptable; moreover, it improves the g-means index. The proposed algorithm provides superior performance in minority recall compared with conventional methods and has important practical significance in areas such as credit card fraud detection and medical diagnosis.
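
The margin-compensation mechanism itself is not spelled out in the abstract, so the following sketch shows only the generic cost-sensitive SVM idea it builds on: raising the misclassification penalty on the minority class shifts the decision boundary into the majority class and pushes minority recall toward 1, while the g-mean records the trade-off. The class weights and synthetic data are illustrative assumptions, not the authors' formulation.

```python
# Minimal sketch of the generic cost-sensitive SVM route: an asymmetric
# misclassification penalty shifts the decision boundary so that minority
# recall rises (toward 1) at the cost of majority-class errors. This uses
# scikit-learn's class_weight mechanism, not the paper's margin-compensation
# formulation; the weights below are illustrative assumptions.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.metrics import recall_score
from imblearn.metrics import geometric_mean_score

# Synthetic imbalanced data (roughly 5% minority class, label 1).
X, y = make_classification(n_samples=2000, n_features=20,
                           weights=[0.95, 0.05], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, stratify=y, random_state=0)

# Increase the penalty on minority-class errors and observe minority recall
# climbing while the g-mean tracks the loss incurred on the majority class.
for w in (1, 5, 20, 100):
    clf = SVC(kernel="rbf", class_weight={0: 1, 1: w}).fit(X_train, y_train)
    y_pred = clf.predict(X_test)
    print(f"minority weight {w:>3}: "
          f"recall={recall_score(y_test, y_pred):.3f}, "
          f"g-mean={geometric_mean_score(y_test, y_pred):.3f}")
```

As the minority weight grows, recall approaches 1 while some majority samples are sacrificed, mirroring the "acceptable loss of the majority class" trade-off the abstract describes; the paper's contribution lies in how its margin compensation controls that drift, which this sketch does not reproduce.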

