Embedding Undersampling Rotation Forest for Imbalanced Problem

2018 ◽  
Vol 2018 ◽  
pp. 1-15 ◽  
Author(s):  
Huaping Guo ◽  
Xiaoyu Diao ◽  
Hongbing Liu

Rotation Forest is an ensemble learning approach that achieves better performance than Bagging and Boosting by building accurate and diverse classifiers in rotated feature spaces. However, like other conventional classifiers, Rotation Forest does not work well on imbalanced data, which is characterized by having far fewer examples of one class (the minority class) than of the other (the majority class), where misclassifying a minority-class example is often far more costly than the reverse. This paper proposes a novel method called Embedding Undersampling Rotation Forest (EURF) to handle this problem by (1) sampling subsets from the majority class and learning a projection matrix from each subset and (2) obtaining training sets by projecting re-undersampled subsets of the original data set into the new spaces defined by the matrices and constructing an individual classifier from each training set. In the first step, undersampling forces the rotation matrix to better capture the features of the minority class without harming the diversity between individual classifiers. In the second step, the undersampling technique aims to improve the performance of individual classifiers on the minority class. The experimental results show that EURF achieves significantly better performance than other state-of-the-art methods.
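
As a rough illustration of the two steps above, here is a minimal Python sketch of the EURF idea (not the authors' implementation): PCA stands in for the rotation, a decision tree for the base learner, and whether the minority class enters the rotation fit is our reading of the abstract, not a confirmed detail.

import numpy as np
from sklearn.decomposition import PCA
from sklearn.tree import DecisionTreeClassifier

def fit_eurf(X, y, n_members=10, seed=0):
    """EURF-style ensemble; assumes y uses 0 = majority, 1 = minority."""
    rng = np.random.default_rng(seed)
    maj, mino = np.flatnonzero(y == 0), np.flatnonzero(y == 1)
    members = []
    for _ in range(n_members):
        # (1) learn a rotation from an undersampled majority subset plus
        # the minority class
        sub = rng.choice(maj, size=len(mino), replace=False)
        rotation = PCA().fit(X[np.concatenate([sub, mino])])
        # (2) re-undersample, project into the rotated space, train a member
        sub2 = rng.choice(maj, size=len(mino), replace=False)
        idx = np.concatenate([sub2, mino])
        clf = DecisionTreeClassifier().fit(rotation.transform(X[idx]), y[idx])
        members.append((rotation, clf))
    return members

def predict_eurf(members, X):
    votes = np.mean([clf.predict(rot.transform(X)) for rot, clf in members], axis=0)
    return (votes >= 0.5).astype(int)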

2011 ◽  
Vol 219-220 ◽  
pp. 151-155 ◽  
Author(s):  
Hua Ji ◽  
Hua Xiang Zhang

In many real-world domains, learning from imbalanced data sets is a recurring challenge. Because the skewed class distribution challenges traditional classifiers, whose classification accuracy on rare classes is much lower, we propose a novel classification method based on local clustering that follows the data distribution of the imbalanced data set. First, we divide the whole data set into several data groups based on the data distribution. Then we perform local clustering within each group, on both the normal class and the disjoint rare class. For the rare class, subsequent over-sampling is applied at different rates. Finally, we apply support vector machines (SVMs) for classification, using the traditional cost-matrix tactic to enhance classification accuracy. Experimental results on several UCI data sets show that this method produces much higher prediction accuracy on the rare class than state-of-the-art methods.
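
A hedged Python sketch of this pipeline (not the authors' code): KMeans stands in for the local clustering, the per-cluster over-sampling rates are an illustrative assumption, and the cost-matrix tactic is approximated with class weights.

import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

def fit_local_cluster_svm(X, y, rare=1, n_clusters=3, seed=0):
    rng = np.random.default_rng(seed)
    X_rare = X[y == rare]
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=seed).fit(X_rare)
    synthetic = []
    for c in range(n_clusters):
        members = X_rare[km.labels_ == c]
        if len(members) == 0:
            continue
        # smaller local clusters get proportionally higher over-sampling rates
        rate = int(np.ceil(len(X_rare) / (n_clusters * len(members))))
        synthetic.append(members[rng.integers(0, len(members), rate * len(members))])
    X_aug = np.vstack([X] + synthetic)
    y_aug = np.concatenate([y, np.full(sum(len(s) for s in synthetic), rare)])
    # the cost-matrix tactic is approximated here with class weights
    return SVC(class_weight="balanced").fit(X_aug, y_aug)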


2021 ◽  
Vol 2021 ◽  
pp. 1-18
Author(s):  
Wan-Wei Fan ◽  
Ching-Hung Lee

This paper proposes a method for classifying imbalanced data by adding noise to the feature space of a convolutional neural network (CNN) without changing the data set (the ratio of majority to minority data). In addition, a hybrid loss function combining cross-entropy and KL divergence is proposed. The proposed approach improves the accuracy of the minority class on the testing data. A simple design method for selecting the CNN structure is first introduced; noise is then added to the CNN feature space to obtain proper features through training and to improve the classification results. Comparison results show that the proposed method extracts suitable features that improve minority-class accuracy. Illustrative examples of multiclass classification problems and a corresponding discussion of the balance ratio are presented. Our approach performs well with a smaller network structure than other deep models, the noise-adding approach improves defect-class accuracy by over 40%, and accuracy remains above 96% even when the imbalance ratio (IR) is one hundred.
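
A minimal PyTorch sketch of the two ingredients (the exact form of the authors' hybrid loss and noise schedule is not given in the abstract, so the weighting, label smoothing, and sigma below are assumptions):

import torch
import torch.nn.functional as F

def hybrid_loss(logits, targets, alpha=0.5, smooth=0.1):
    """Weighted sum of cross-entropy and a KL term against smoothed targets."""
    ce = F.cross_entropy(logits, targets)
    n_classes = logits.size(1)
    soft = F.one_hot(targets, n_classes).float() * (1 - smooth) + smooth / n_classes
    kl = F.kl_div(F.log_softmax(logits, dim=1), soft, reduction="batchmean")
    return alpha * ce + (1 - alpha) * kl

def forward_with_feature_noise(backbone, head, x, sigma=0.1, training=True):
    feats = backbone(x)
    if training:  # Gaussian noise injected into the feature space
        feats = feats + sigma * torch.randn_like(feats)
    return head(feats)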


2021 ◽  
Vol 25 (5) ◽  
pp. 1169-1185
Author(s):  
Deniu He ◽  
Hong Yu ◽  
Guoyin Wang ◽  
Jie Li

This paper considers the problem of initializing active learning, in particular in an imbalanced-data scenario, which we call the class-imbalance active-learning cold-start problem. The proposed method is a two-stage clustering-based active learning cold-start (ALCS) approach. In the first stage, to separate minority-class instances from majority-class instances, a multi-center clustering is constructed based on a new inter-cluster tightness measure, grouping the data into multiple clusters. In the second stage, the initial training instances are selected from each cluster using an adaptive mechanism for determining candidate representative instances and a cluster-cyclic instance query mechanism. Comprehensive experiments demonstrate the effectiveness of the proposed method in terms of class coverage, classification performance, and impact on active learning.
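
A rough Python sketch of the cluster-then-cyclic-query idea (heavily simplified: plain KMeans replaces the multi-center clustering and its tightness measure, and closeness to a center replaces the adaptive representativeness mechanism):

import numpy as np
from sklearn.cluster import KMeans

def cold_start_queries(X, n_clusters=10, budget=20, seed=0):
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=seed).fit(X)
    dist = km.transform(X)  # distance of every point to every center
    ranked = {}
    for c in range(n_clusters):
        order = np.argsort(dist[:, c])             # closest to center c first
        ranked[c] = order[km.labels_[order] == c]  # cluster-c members only
    queries, rnd = [], 0
    max_depth = max(len(r) for r in ranked.values())
    while len(queries) < budget and rnd < max_depth:
        for c in range(n_clusters):                # cyclic query over clusters
            if rnd < len(ranked[c]) and len(queries) < budget:
                queries.append(int(ranked[c][rnd]))
        rnd += 1
    return queries  # indices to hand to the labeling oracle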


Author(s):  
Huaping GUO ◽  
Xiaoyu DIAO ◽  
Hongbing LIU

As one of the most challenging and attractive issues in pattern recognition and machine learning, the imbalanced problem has attracted increasing attention. For two-class data, imbalanced data are characterized by the size of one class (the majority class) being much larger than that of the other (the minority class), which makes the constructed models focus more on the majority class and ignore or even misclassify the examples of the minority class. The undersampling-based ensemble, which learns individual classifiers from undersampled balanced data, is an effective method to cope with class-imbalanced data. The problem with this method is that the size of the dataset used to train each classifier is notably small; thus, how to generate individual classifiers with high performance from the limited data is key to the method's success. In this paper, rotation forest (an ensemble method) is used to improve the performance of the undersampling-based ensemble on the imbalanced problem, because rotation forest outperforms other ensemble methods such as bagging, boosting, and random forest, particularly on small-sized data. In addition, rotation forest is more sensitive to the sampling technique than robust methods such as SVMs and neural networks, so it is easier to create diverse individual classifiers with it. Two versions of the improved undersampling-based ensemble are implemented: (1) undersampling subsets from the majority class and learning each classifier using rotation forest on the data obtained by combining each subset with the minority class, and (2) the same as the first version, except that majority-class examples that are correctly classified with high confidence after each classifier is learned are removed from further consideration. The experimental results show that the proposed methods achieve significantly better recall, g-mean, f-measure, and AUC than other state-of-the-art methods on 30 datasets with various data distributions and different imbalance ratios.
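
A hedged Python sketch of the second version's distinctive step, dropping confidently classified majority examples after each member is trained (a random forest stands in for rotation forest, which scikit-learn does not provide, and the confidence threshold is an assumption):

import numpy as np
from sklearn.ensemble import RandomForestClassifier  # stand-in for rotation forest

def fit_progressive_ensemble(X, y, n_members=10, conf=0.9, seed=0):
    rng = np.random.default_rng(seed)
    maj, mino = np.flatnonzero(y == 0), np.flatnonzero(y == 1)
    members = []
    for _ in range(n_members):
        if len(maj) < len(mino):
            break  # too few hard majority examples left to rebalance
        sub = rng.choice(maj, size=len(mino), replace=False)
        idx = np.concatenate([sub, mino])
        clf = RandomForestClassifier(n_estimators=50, random_state=seed).fit(X[idx], y[idx])
        members.append(clf)
        # drop majority examples classified as majority with high confidence
        p_majority = clf.predict_proba(X[maj])[:, 0]
        maj = maj[p_majority < conf]
    return members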


2011 ◽  
Vol 63 (2) ◽  
pp. 300-346 ◽  
Author(s):  
Philip Roessler

Why do rulers employ ethnic exclusion at the risk of civil war? Focusing on the region of sub-Saharan Africa, the author attributes this costly strategy to the commitment problem that arises in personalist regimes between elites who hold joint control of the state's coercive apparatus. As no faction can be sure that others will not exploit their violent capabilities to usurp power, elites maneuver to protect their privileged position and safeguard against others' first-strike capabilities. Faced with a rising internal threat, rulers move to eliminate their rivals to guarantee their personal and political survival. But the cost of such a strategy, especially when carried out along ethnic lines, is that it increases the risk of a future civil war. To test this argument, the author employs the Ethnic Power Relations data set combined with original data on the ethnicity of conspirators of coups and rebellions in Africa. He finds that in Africa ethnic exclusion substitutes civil war risk for coup risk. Rulers are significantly more likely to exclude their coconspirators—the very friends and allies who helped them come to power—than other included groups, but at the cost of increasing the risk of a future civil war with their former allies. In the first three years after being purged from the central government, coconspirators and their coethnics are sixteen times more likely to rebel than when they were represented at the apex of the regime.


Author(s):  
M. Aldiki Febriantono ◽  
Sholeh Hadi Pramono ◽  
Rahmadwati Rahmadwati ◽  
Golshah Naghdy

Multiclass imbalanced data problems are currently an interesting topic of study in data mining. These problems affect the classification process in machine learning. In some cases, the minority class in a dataset carries more important information than the majority class, and when the minority class is misclassified, accuracy and classifier performance suffer. In this research, a cost-sensitive C5.0 decision tree is used to solve multiclass imbalanced data problems. In the first stage, the decision tree model is built with the C5.0 algorithm; cost-sensitive learning then uses the MetaCost method to obtain the minimum-cost model. In testing, the C5.0 algorithm performed better than the C4.5 and ID3 algorithms, with performance percentages of 40.91%, 40.24%, and 19.23% for C5.0, C4.5, and ID3, respectively.
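
MetaCost itself is well defined: relabel each training example with the class of minimum expected cost under bagged probability estimates, then retrain. A compact Python sketch (DecisionTreeClassifier stands in for C5.0, which has no scikit-learn implementation; the cost matrix is illustrative):

import numpy as np
from sklearn.ensemble import BaggingClassifier
from sklearn.tree import DecisionTreeClassifier

def metacost(X, y, cost, n_bags=10):
    """cost[i, j] = cost of predicting class j when the true class is i;
    assumes integer labels 0..k-1."""
    bag = BaggingClassifier(DecisionTreeClassifier(), n_estimators=n_bags,
                            random_state=0).fit(X, y)
    proba = bag.predict_proba(X)          # class probabilities from bagged votes
    y_relabeled = np.argmin(proba @ cost, axis=1)  # minimum expected-cost class
    return DecisionTreeClassifier(random_state=0).fit(X, y_relabeled)

# e.g. misclassifying the minority class (row 1) costs five times more
cost = np.array([[0.0, 1.0],
                 [5.0, 0.0]])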


Author(s):  
VEYIS GUNES ◽  
MICHEL MENARD ◽  
PIERRE LOONIS

This article aims to show that supervised and unsupervised learning are not competing but complementary methods. We propose using a fuzzy clustering method with ambiguity rejection to guide the supervised learning performed by Bayesian classifiers. This method detects ambiguous or mixed areas of a learning set. The problem is viewed from a multi-decision point of view (i.e., several classification modules). Each classification module is specialized on a particular region of the feature space. These regions are obtained by fuzzy clustering, and their union constitutes the original data set. A classifier is associated with each cluster, and the training set for each classifier is defined on the cluster and its associated ambiguous clusters. The overall system is parallel, since the different classifiers work with their own training-data clusters. The algorithm enables adaptive classifier selection, in the sense that fuzzy clustering with ambiguity rejection yields training-data regions adapted to the feature space. Decision making is then the fusion of outputs from the best-adapted classifiers.
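
A hedged Python sketch of the scheme (a Gaussian mixture stands in for fuzzy clustering, naive Bayes for the Bayesian classifiers, and the ambiguity threshold is an assumption):

import numpy as np
from sklearn.mixture import GaussianMixture  # soft stand-in for fuzzy c-means
from sklearn.naive_bayes import GaussianNB

def fit_cluster_experts(X, y, n_clusters=3, ambig=0.6):
    gm = GaussianMixture(n_components=n_clusters, random_state=0).fit(X)
    resp = gm.predict_proba(X)              # soft cluster memberships
    ambiguous = resp.max(axis=1) < ambig    # points rejected as ambiguous
    experts = []
    for c in range(n_clusters):
        mask = (resp.argmax(axis=1) == c) | ambiguous  # cluster + ambiguous pts
        experts.append(GaussianNB().fit(X[mask], y[mask]))
    return gm, experts

def predict_fused(gm, experts, X, n_classes):
    resp = gm.predict_proba(X)
    fused = np.zeros((len(X), n_classes))
    for c, expert in enumerate(experts):
        p = np.zeros((len(X), n_classes))
        p[:, expert.classes_.astype(int)] = expert.predict_proba(X)
        fused += resp[:, [c]] * p           # weight each expert by membership
    return fused.argmax(axis=1)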


2011 ◽  
Vol 271-273 ◽  
pp. 1291-1296
Author(s):  
Jin Wei Zhang ◽  
Hui Juan Lu ◽  
Wu Tao Chen ◽  
Yi Lu

A classifier built from a data set with a highly skewed class distribution generally predicts an unknown sample as the majority class much more frequently than as the minority class, because the classifier is designed to maximize overall classification accuracy. We compare three methods for classifying data sets with imbalanced class distributions and non-uniform misclassification costs, to determine which produces the best overall classification: a cost-sensitive learning method whose misclassification cost is embedded in the algorithm, an over-sampling method, and an under-sampling method. We draw the following conclusions: (1) cost-sensitive learning is suitable for classifying imbalanced datasets; it outperforms the sampling methods overall and is more stable than they are, except when the data set is quite small; and (2) if the dataset is highly skewed or quite small, over-sampling methods may be better.
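
The three strategies being compared can be sketched in a few lines of Python (class weights approximate cost-sensitive learning; the 9:1 imbalance and SVM base learner are illustrative choices, not the paper's setup):

import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score
from sklearn.svm import SVC

X, y = make_classification(n_samples=2000, weights=[0.9], random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X, y, stratify=y, random_state=0)
rng = np.random.default_rng(0)
maj, mino = np.flatnonzero(ytr == 0), np.flatnonzero(ytr == 1)

# 1) cost-sensitive: misclassification cost folded into class weights
cs = SVC(class_weight={0: 1, 1: len(maj) / len(mino)}).fit(Xtr, ytr)
# 2) over-sampling the minority class up to the majority size
over = np.concatenate([maj, rng.choice(mino, size=len(maj), replace=True)])
os_ = SVC().fit(Xtr[over], ytr[over])
# 3) under-sampling the majority class down to the minority size
under = np.concatenate([rng.choice(maj, size=len(mino), replace=False), mino])
us = SVC().fit(Xtr[under], ytr[under])

for name, model in [("cost-sensitive", cs), ("over-sampling", os_), ("under-sampling", us)]:
    print(name, f1_score(yte, model.predict(Xte)))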


2018 ◽  
Vol 27 (06) ◽  
pp. 1850025 ◽  
Author(s):  
Huaping Guo ◽  
Jun Zhou ◽  
Chang-an Wu ◽  
Wei She

Class imbalance is very common in the real world, and conventional methods do not work well on imbalanced data because of the skewed class distribution. This paper proposes a simple but effective Hybrid-based Ensemble (HE) to deal with the two-class imbalanced problem. HE learns a hybrid ensemble in two stages: (1) learning several projection matrices from the rebalanced data obtained by under-sampling the original training set, and constructing new training sets by projecting the original training set into the different spaces defined by these matrices; and (2) under-sampling several subsets from each new training set and training a model on each subset. Here, feature projection aims to improve the diversity between ensemble members, and the under-sampling technique improves the generalization ability of individual members on the minority class. Experimental results show that, compared with other state-of-the-art methods, HE performs significantly better on AUC, G-mean, F-measure, and recall.
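
A minimal Python sketch of HE's two stages (PCA stands in for the unspecified projection, decision trees for the base models, and the view/subset counts are illustrative):

import numpy as np
from sklearn.decomposition import PCA
from sklearn.tree import DecisionTreeClassifier

def fit_he(X, y, n_views=5, n_per_view=4, seed=0):
    rng = np.random.default_rng(seed)
    maj, mino = np.flatnonzero(y == 0), np.flatnonzero(y == 1)
    ensemble = []
    for _ in range(n_views):
        # stage 1: projection learned from an undersampled, rebalanced sample
        sub = rng.choice(maj, size=len(mino), replace=False)
        proj = PCA().fit(X[np.concatenate([sub, mino])])
        Xp = proj.transform(X)  # new training set in the projected space
        for _ in range(n_per_view):
            # stage 2: undersample a balanced subset and train one member
            sub2 = rng.choice(maj, size=len(mino), replace=False)
            idx = np.concatenate([sub2, mino])
            ensemble.append((proj, DecisionTreeClassifier().fit(Xp[idx], y[idx])))
    return ensemble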

