Robust Training of Deep Neural Networks with Noisy Labels by Graph Label Propagation

Author(s):  
Yuichiro Nomura ◽  
Takio Kurita


Author(s):  
Yun-Peng Liu ◽  
Ning Xu ◽  
Yu Zhang ◽  
Xin Geng

The performance of deep neural networks (DNNs) relies crucially on the quality of labeling. In some situations labels are easily corrupted and thus become noisy. Designing algorithms that deal with noisy labels is therefore of great importance for learning robust DNNs. However, distinguishing clean labels from noisy labels is difficult, and this difficulty is the bottleneck of many methods. To address the problem, this paper proposes a novel method named Label Distribution based Confidence Estimation (LDCE). LDCE estimates the confidence of each observed label based on its label distribution, so that the boundary between clean labels and noisy labels becomes clear according to the confidence scores. To verify the effectiveness of the method, LDCE is combined with an existing learning algorithm to train robust DNNs. Experiments on both synthetic and real-world datasets substantiate the superiority of the proposed algorithm over state-of-the-art methods.
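For readers unfamiliar with confidence-based label filtering, a minimal sketch of the idea follows. It assumes the per-sample label distribution is approximated by a network's softmax output, the confidence of an observed label is its probability under that distribution, and a fixed threshold separates clean from noisy labels. The softmax approximation and the 0.5 threshold are illustrative assumptions, not the exact LDCE estimator.

```python
import numpy as np

def estimate_confidence(label_dist: np.ndarray, observed: np.ndarray) -> np.ndarray:
    """Confidence of each observed label under a per-sample label distribution.

    label_dist: (N, C) array, rows summing to 1 (e.g., softmax outputs --
                an assumption; the paper's estimator may differ).
    observed:   (N,) integer class labels as collected (possibly noisy).
    """
    return label_dist[np.arange(len(observed)), observed]

def split_clean_noisy(label_dist, observed, threshold=0.5):
    """Treat a label as clean when its confidence exceeds the
    (hypothetical) threshold; everything else is treated as noisy."""
    conf = estimate_confidence(label_dist, observed)
    return conf > threshold, conf

# Toy usage: 3 samples, 3 classes; the second label disagrees with its distribution.
dist = np.array([[0.8, 0.1, 0.1],
                 [0.2, 0.7, 0.1],
                 [0.3, 0.3, 0.4]])
labels = np.array([0, 2, 2])
clean, conf = split_clean_noisy(dist, labels)
print(conf)   # [0.8 0.1 0.4]
print(clean)  # [ True False False]
```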


Land ◽  
2020 ◽  
Vol 9 (8) ◽  
pp. 271
Author(s):  
Chuanpeng Zhao ◽  
Yaohuan Huang

Land cover is one of the key indicators for modeling ecological, environmental, and climatic processes, and it changes frequently due to natural factors and anthropogenic activities. These changes demand abundant samples for updating land cover maps, yet in practice the number of available samples is always insufficient. Sample augment methods can fill this gap, but they still face difficulties, especially with high-resolution remote sensing data: (1) excessive human involvement, mostly caused by manual interpretation, even in active learning-based methods; and (2) large variation among segmented land cover objects, which hampers generalization to unseen areas, especially for methods validated only in small study areas. To solve these problems, we propose a sample augment method incorporating deep neural networks, using a Gaofen-2 image. To avoid error accumulation, the neural network-based sample augment (NNSA) framework employs a non-iterative procedure and augments 184 labeled image objects to 75,112 samples. In reference to expert-interpreted results, the overall accuracy (OA) of NNSA is 20% higher than that of label propagation (LP), which achieves an OA of 61.16%. Accuracy decreases by approximately 10% in the coastal validation area, whose characteristics differ from those of the inland samples. We also compared iterative and non-iterative strategies without adding external information; results on the validation area containing the original samples show that non-iterative methods achieve a higher OA and lower sample imbalance. By augmenting sample size with higher accuracy, the NNSA method can benefit the updating of land cover information.
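To make the non-iterative idea concrete, here is a minimal sketch: a classifier is fit once on the seed objects, then unlabeled objects whose top-class probability clears a confidence cutoff receive predicted labels, and no predicted label is ever fed back into training. The object-level feature vectors, scikit-learn's MLPClassifier, and the 0.9 cutoff are illustrative assumptions standing in for the paper's Gaofen-2 segmentation features and network.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

def augment_samples_non_iterative(X_seed, y_seed, X_unlabeled, conf_cutoff=0.9):
    """Non-iterative sample augmentation: fit once on the seed objects,
    then label unlabeled objects whose top-class probability clears the
    (hypothetical) cutoff. Predictions are never fed back into training,
    which avoids the error accumulation of iterative schemes."""
    clf = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=500, random_state=0)
    clf.fit(X_seed, y_seed)
    proba = clf.predict_proba(X_unlabeled)
    keep = proba.max(axis=1) >= conf_cutoff
    y_new = clf.classes_[proba.argmax(axis=1)[keep]]
    return X_unlabeled[keep], y_new

# Toy usage: 184 seed objects (as in the paper) with 8 hypothetical features.
rng = np.random.default_rng(0)
X_seed = rng.normal(size=(184, 8))
y_seed = rng.integers(0, 5, size=184)   # 5 hypothetical land cover classes
X_pool = rng.normal(size=(1000, 8))
X_aug, y_aug = augment_samples_non_iterative(X_seed, y_seed, X_pool)
print(len(y_aug), "objects augmented")
```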


2021 ◽  
pp. 78-89
Author(s):  
Panle Li ◽  
Xiaohui He ◽  
Dingjun Song ◽  
Zihao Ding ◽  
Mengjia Qiao ◽  
...  

IEEE Access ◽  
2019 ◽  
Vol 7 ◽  
pp. 66998-67005 ◽  
Author(s):  
Kyeongbo Kong ◽  
Junggi Lee ◽  
Youngchul Kwak ◽  
Minsung Kang ◽  
Seong Gyun Kim ◽  
...  

Author(s):  
Xian-Jin Gui ◽  
Wei Wang ◽  
Zhang-Hao Tian

Deep neural networks need large amounts of labeled data to achieve good performance. In real-world applications, labels are usually collected from non-experts, e.g., via crowdsourcing, to save cost, and are thus noisy. In the past few years, deep learning methods for dealing with noisy labels have been developed, many of which build on the small-loss criterion: samples with small training loss are more likely to be correctly labeled. However, there are few theoretical analyses explaining why these methods learn well from noisy labels. In this paper, we theoretically explain why the widely used small-loss criterion works. Based on this explanation, we reformalize the vanilla small-loss criterion to better tackle noisy labels. The experimental results verify our theoretical explanation and also demonstrate the effectiveness of the reformalization.
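As background, the vanilla small-loss criterion keeps, in each mini-batch, the fraction of samples with the smallest loss and updates the network only on them, on the premise that noisy labels incur larger loss early in training. Below is a minimal PyTorch sketch of one such training step; the fixed keep ratio of 0.7 is an illustrative choice, and the paper's reformalized criterion is not reproduced here.

```python
import torch
import torch.nn.functional as F

def small_loss_update(model, optimizer, x, y, keep_ratio=0.7):
    """One training step under the small-loss criterion: compute per-sample
    losses, keep the keep_ratio fraction with the smallest loss (presumed
    clean), and backpropagate only through those samples."""
    losses = F.cross_entropy(model(x), y, reduction="none")
    n_keep = max(1, int(keep_ratio * len(losses)))
    keep_idx = torch.argsort(losses)[:n_keep]   # smallest-loss samples first
    loss = losses[keep_idx].mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Toy usage on random data with a linear model.
model = torch.nn.Linear(10, 3)
opt = torch.optim.SGD(model.parameters(), lr=0.1)
x = torch.randn(32, 10)
y = torch.randint(0, 3, (32,))
print(small_loss_update(model, opt, x, y))
```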


Author(s):  
Alex Hernández-García ◽  
Johannes Mehrer ◽  
Nikolaus Kriegeskorte ◽  
Peter König ◽  
Tim C. Kietzmann
