class noise
Recently Published Documents


TOTAL DOCUMENTS: 79 (FIVE YEARS: 24)
H-INDEX: 14 (FIVE YEARS: 2)

2021 ◽ Vol 210 ◽ pp. 112310
Author(s): Arnulf Jentzen ◽ Felix Lindner ◽ Primož Pušnik

2021 ◽ pp. 1-16
Author(s): Deepika Singh ◽ Anju Saha ◽ Anjana Gosain

Imbalanced dataset classification is challenging because of the severely skewed class distribution, and traditional machine learning algorithms show degraded performance on such datasets. Beyond the skew itself, other intrinsic characteristics of a dataset further increase the difficulty of constructing a model for imbalanced data; data complexity metrics identify these characteristics, which cause substantial deterioration in the learning algorithms’ performance. Although many research efforts have addressed class noise, none has focused on imbalanced datasets coupled with these other intrinsic factors. This paper presents a novel hybrid pre-processing algorithm for treating class-label noise in imbalanced datasets that also suffer from intrinsic difficulties such as class overlapping, non-linear class boundaries, small disjuncts, and borderline examples. The algorithm uses the wCM complexity metric (proposed for imbalanced datasets) to identify noisy, borderline, and other difficult instances of the dataset and then handles these instances accordingly. Experiments on synthetic and real-world datasets with different levels of imbalance, noise, small disjuncts, class overlapping, and borderline examples are conducted to check the effectiveness of the proposed algorithm. The results show that it is a competitive alternative to popular state-of-the-art pre-processing algorithms for handling imbalanced datasets affected by noise and other difficulties.
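
Note: the abstract does not define the wCM metric, so the minimal sketch below uses a simple k-disagreeing-neighbours (kDN) score as a stand-in difficulty measure; the threshold, the removal-only policy, and the function names are illustrative assumptions, not the paper's algorithm. It shows the general shape of a complexity-guided noise filter for imbalanced data: score each instance by local label disagreement, then treat high-scoring majority-class instances as likely label noise while sparing rare minority examples. X and y are assumed to be NumPy arrays.

import numpy as np
from sklearn.neighbors import NearestNeighbors

def kdn_score(X, y, k=5):
    # Fraction of each instance's k nearest neighbours carrying a different label.
    nn = NearestNeighbors(n_neighbors=k + 1).fit(X)
    _, idx = nn.kneighbors(X)  # column 0 is the point itself
    return (y[idx[:, 1:]] != y[:, None]).mean(axis=1)

def filter_class_noise(X, y, minority_label, noise_thr=0.8, k=5):
    # Treat high-disagreement majority instances as likely label noise;
    # spare minority instances, which are too rare to discard safely.
    score = kdn_score(X, y, k)
    noisy = (score >= noise_thr) & (y != minority_label)
    return X[~noisy], y[~noisy]

The paper's algorithm "intelligently handles" difficult instances (distinguishing noisy from borderline ones rather than simply deleting them); this sketch only shows the scoring-and-filtering skeleton.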


2021 ◽ Vol 177 ◽ pp. 75-88
Author(s): Lorena A. Santos ◽ Karine R. Ferreira ◽ Gilberto Camara ◽ Michelle C.A. Picoli ◽ Rolf E. Simoes

2021
Author(s): Benjamin Denham ◽ Russel Pears ◽ M. Asif Naeem

Datasets containing class noise present significant challenges to accurate classification and thus require classifiers that can refuse to classify noisy instances. We demonstrate that the popular confidence-thresholding rejection method cannot learn from relationships between input features and not-at-random class noise. To take advantage of these relationships, we propose a novel null-labelling scheme that iteratively re-trains a classifier on relabelled datasets so that it learns to reject instances likely to be misclassified. We demonstrate that null-labelling achieves a significantly better tradeoff between classification error and coverage than the confidence-thresholding method. Models generated by the null-labelling scheme have the added advantage of interpretability: they can identify the features correlated with class noise. We also unify prior theories for combining and evaluating sets of rejecting classifiers.
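
Note: only the high-level idea appears in the abstract, so the sketch below contrasts the confidence-thresholding baseline with a single relabel-and-retrain round in the spirit of null-labelling. The NULL label value, the cross-validated relabelling rule, the single iteration, and the random-forest model are all illustrative assumptions, not the authors' exact procedure.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_predict

NULL = -1  # synthetic "reject" class; assumes -1 is unused by the real labels

def confidence_threshold_reject(model, X, thr=0.8):
    # Baseline: abstain whenever the top predicted probability falls below thr.
    proba = model.predict_proba(X)
    pred = model.classes_[proba.argmax(axis=1)]
    return np.where(proba.max(axis=1) >= thr, pred, NULL)

def null_labelling_round(X, y):
    # One round: points misclassified under cross-validation are relabelled
    # NULL, and a fresh classifier is trained to predict NULL, i.e. to reject.
    cv_pred = cross_val_predict(RandomForestClassifier(random_state=0), X, y, cv=5)
    y_relab = np.where(cv_pred == y, y, NULL)
    return RandomForestClassifier(random_state=0).fit(X, y_relab)

X, y = make_classification(n_samples=600, flip_y=0.1, random_state=0)  # ~10% label noise
model = null_labelling_round(X, y)
rejected = model.predict(X) == NULL  # instances the model declines to classify

The paper iterates the relabel-and-retrain step; a single round is shown here only to make the mechanism concrete.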


2020
Author(s): Benjamin Denham ◽ Russel Pears ◽ M. Asif Naeem


2020 ◽ Vol 553 ◽ pp. 124219
Author(s): Maryam Samami ◽ Ebrahim Akbari ◽ Moloud Abdar ◽ Pawel Plawiak ◽ Hossein Nematzadeh ◽ ...

2020 ◽ Vol 94 ◽ pp. 106428
Author(s): Zahra Nematzadeh ◽ Roliana Ibrahim ◽ Ali Selamat
