Improved Majority Filtering Algorithm for Cleaning Class Label Noise in Supervised Learning

Author(s):
Muhammad Ammar Malik
Jae Young Choi
Moonsoo Kang
Bumshik Lee

2018
Vol 275
pp. 2374-2383
Author(s):
Maryam Sabzevari
Gonzalo Martínez-Muñoz
Alberto Suárez

Text classification and clustering are essential in big data environments, and many classification algorithms have been proposed for supervised learning applications. In the era of big data, large volumes of training data are available for many machine learning tasks. However, some of this data may be mislabeled or improperly labeled; such incorrect labels constitute label noise, which in turn degrades the learning performance of a classifier. A common way to address label noise is to apply noise filtering techniques that identify and remove noisy instances before learning, and a range of such filtering approaches have been developed to improve classifier performance. This paper proposes a noise filtering approach for text data applied during the training phase. Many supervised learning algorithms produce high error rates when the training dataset is noisy; the proposed method eliminates such noise and yields a more accurate classification system.
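The majority-filtering idea named in the title can be sketched as follows: train several classifiers via cross-validation and discard any training instance that a majority of them misclassify. This is a hedged illustration of the classic majority filter of Brodley and Friedl, not the paper's exact improved algorithm; the particular classifiers, fold count, and threshold below are assumptions.

```python
import numpy as np
from sklearn.model_selection import KFold
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier

def majority_filter(X, y, n_folds=5, random_state=0):
    """Return indices of instances kept after majority filtering.

    For each cross-validation fold, every base classifier is trained on
    the remaining folds and votes on the held-out instances; an instance
    is flagged as label noise if a majority of classifiers misclassify it.
    """
    # assumed ensemble: three heterogeneous base classifiers
    classifiers = [DecisionTreeClassifier(random_state=random_state),
                   GaussianNB(),
                   KNeighborsClassifier(n_neighbors=3)]
    votes_wrong = np.zeros(len(y), dtype=int)
    kf = KFold(n_splits=n_folds, shuffle=True, random_state=random_state)
    for train_idx, test_idx in kf.split(X):
        for clf in classifiers:
            clf.fit(X[train_idx], y[train_idx])
            pred = clf.predict(X[test_idx])
            votes_wrong[test_idx] += (pred != y[test_idx]).astype(int)
    # keep an instance unless a strict majority of classifiers got it wrong
    keep = votes_wrong <= len(classifiers) // 2
    return np.where(keep)[0]
```

A final classifier would then be trained only on the returned indices, which is what removes the high error rates the abstract attributes to noisy training data.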


2012
pp. 660-667
Author(s):
Óscar Pérez
Manuel Sánchez-Montañés

Machine learning has provided powerful algorithms that automatically generate predictive models from experience. One specific technique is supervised learning, where the machine is trained to predict a desired output for each input pattern x. This chapter focuses on classification, that is, supervised learning when the output to predict is a class label: for instance, predicting whether a patient in a hospital will develop cancer or not. In this example, the class label c is a variable with two possible values, "cancer" or "no cancer", and the input pattern x is a vector containing patient data (e.g. age, gender, diet, smoking habits, etc.). In order to construct a proper predictive model, supervised learning methods require a set of examples x_i together with their respective labels c_i. This dataset is called the "training set". The constructed model is then used to predict the labels of a set of new cases x_j, called the "test set". In the cancer prediction example, this is the phase in which the model is used to predict cancer in new patients.

One common assumption in supervised learning algorithms is that the statistical structure of the training and test datasets is the same (Hastie, Tibshirani & Friedman, 2001). That is, the test set is assumed to have the same attribute distribution p(x) and the same class distribution p(c|x) as the training set. However, this is not usually the case in real applications, for several reasons. For instance, in many problems the training dataset is obtained in a specific manner that differs from the way the test dataset will be generated later. Moreover, the nature of the problem may evolve over time. These phenomena cause p^Tr(x, c) ≠ p^Test(x, c), which can degrade the performance of the model constructed during training.

Here we present a new algorithm that makes it possible to re-estimate a model constructed in training using the unlabelled test patterns. We show the convergence properties of the algorithm and illustrate its performance on an artificial problem. Finally, we demonstrate its strengths on a heart disease diagnosis problem where the training set is taken from a different hospital than the test set.
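The abstract does not specify how the model is re-estimated from unlabeled test patterns. As one concrete possibility, the sketch below illustrates the general idea with EM-based re-estimation of the class priors on the test set, in the style of Saerens et al. (2002); this corrects shifts in p(c) between training and test. The function name and the choice of logistic regression in the usage example are assumptions, not the chapter's algorithm.

```python
import numpy as np

def em_prior_adjust(probs_test, train_priors, n_iter=50):
    """Re-weight a trained model's posteriors to match the unknown test priors.

    probs_test:   (n, k) class posteriors on the unlabeled test set, produced
                  by a model trained on the training set.
    train_priors: (k,) class frequencies observed in the training set.
    Returns the adjusted posteriors and the estimated test-set priors.
    """
    priors = np.asarray(train_priors, dtype=float).copy()
    w = probs_test
    for _ in range(n_iter):
        # E-step: re-weight posteriors by the ratio of current to training priors
        w = probs_test * (priors / train_priors)
        w /= w.sum(axis=1, keepdims=True)
        # M-step: the new prior estimate is the mean adjusted posterior
        priors = w.mean(axis=0)
    return w, priors
```

In practice one would fit any probabilistic classifier on the training set, call `predict_proba` on the unlabeled test patterns, and pass the result through this loop; the adjusted posteriors account for the mismatch between p^Tr(c) and p^Test(c).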


2017
Vol 9 (2)
pp. 173
Author(s):
Charlotte Pelletier
Silvia Valero
Jordi Inglada
Nicolas Champion
Claire Marais Sicre
...
