CJC-Net: A Cyclical Training Method with Joint Loss and Co-teaching Strategy Net for Deep Learning under Noisy Labels

Author(s): Qian Zhang, Feifei Lee, Ya-gang Wang, Damin Ding, Shuai Yang, ...
2021 · Vol 14 (6) · pp. 863-863
Author(s): Supun Nakandala, Yuhao Zhang, Arun Kumar

We discovered an inconsistency in the communication cost formulation for the decentralized fine-grained training method in Table 2 of our paper [1]. We used Horovod as the archetype for decentralized fine-grained approaches, and its correct communication cost is higher than what we had reported. We therefore amend the communication cost of the decentralized fine-grained approach to [EQUATION]


Author(s): JaeGu Lee, Yeo Min Yoon, Seon Geol Kim, Chang Woo Ha, Seong Baek Yoon, ...
2021
Author(s): Zhida Chen, Chuan Lin, ChangLei Cao, Guang Gao, Liangzhong Ying
2015
Author(s): Xiaohui Zhang, Daniel Povey, Sanjeev Khudanpur

Author(s): Eun Young Seo, Yeon Joon Choi, Jong-Hwan Kim, Sang-Hyo Kim
Sensors · 2020 · Vol 20 (8) · pp. 2360
Author(s): Tao Feng, Jiange Liu, Xia Fang, Jie Wang, Libin Zhou

In this paper, a complete system based on computer vision and deep learning is proposed for surface inspection of the armatures in a miniature vibration motor. A device for imaging and positioning was designed to obtain images of the armature surfaces, and the images it captured were divided into a training set and a test set. Through iterative experimentation and refinement, an efficient deep-network model was designed; the results show that the model achieves high accuracy on both the training set and the test set. In addition, a training method was proposed to further improve the network's performance. To guarantee the quality of the motor, a double-branch discrimination mechanism was also proposed. To verify the reliability of the system, experimental validation was conducted on the production line, where it reached satisfactory discrimination performance. The results indicate that the proposed armature detection system based on computer vision and deep learning is stable and reliable for armature production lines.
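The abstract does not specify how the double-branch discrimination mechanism combines its two outputs; a common conservative choice is to accept a part only when both branches agree. The sketch below illustrates that assumed AND-style rule with hypothetical branch scores and thresholds (none of these names come from the paper):

```python
# Illustrative sketch only: the exact decision rule is not given in the
# abstract, so this assumes a conservative AND of two branch scores.
def double_branch_pass(score_a: float, score_b: float,
                       thresh_a: float = 0.5, thresh_b: float = 0.5) -> bool:
    """Accept an armature only if both discrimination branches agree."""
    return score_a >= thresh_a and score_b >= thresh_b

print(double_branch_pass(0.9, 0.8))  # both branches confident -> True
print(double_branch_pass(0.9, 0.2))  # one branch rejects -> False
```

Requiring agreement from both branches trades some recall for precision, which fits a quality-guarantee setting where false accepts are costlier than re-inspection.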


2020 ◽  
Author(s):  
Li Ding ◽  
Ajay E. Kuriyan ◽  
Rajeev S. Ramchandran ◽  
Charles C. Wykoff ◽  
Gaurav Sharma

We propose a deep-learning-based, annotation-efficient framework for vessel detection in ultra-widefield (UWF) fundus photography (FP) that does not require de novo labeled UWF FP vessel maps. Our approach utilizes concurrently captured UWF fluorescein angiography (FA) images, for which effective deep learning approaches have recently become available, and iterates between a multi-modal registration step and a weakly-supervised learning step. In the registration step, the UWF FA vessel maps detected with a pre-trained deep neural network (DNN) are registered with the UWF FP via parametric chamfer alignment. The warped vessel maps can be used as tentative training data but inevitably contain incorrect (noisy) labels due to the differences between the FA and FP modalities and errors in the registration. In the learning step, a robust learning method is proposed to train DNNs with noisy labels. The detected FP vessel maps are then used for registration in the following iteration, so the registration and the vessel detection benefit from each other and are progressively improved. Once trained, the UWF FP vessel detection DNN from the proposed approach allows FP vessel detection without requiring concurrently captured UWF FA images. We validate the proposed framework on a new UWF FP dataset, PRIMEFP20, and on existing narrow-field FP datasets. Experimental evaluation, using both pixel-wise metrics and the CAL metrics designed to provide better agreement with human assessment, shows that the proposed approach provides accurate vessel detection without requiring manually labeled UWF FP training data.
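The alternation the abstract describes (register FA vessel maps onto FP, train on the resulting noisy labels, re-register with the improved detections) can be sketched as a toy loop. Everything below is an illustrative stand-in, not the authors' code: `register` replaces parametric chamfer alignment with a fixed shift, and `train_with_noisy_labels` replaces the robust noisy-label DNN training with simple averaging toward a toy image:

```python
import numpy as np

def register(vessel_map: np.ndarray, fp_image: np.ndarray) -> np.ndarray:
    """Stand-in for parametric chamfer alignment: a fixed one-row shift."""
    return np.roll(vessel_map, shift=1, axis=0)

def train_with_noisy_labels(fp_image: np.ndarray,
                            noisy_labels: np.ndarray) -> np.ndarray:
    """Stand-in for the robust learning step: pull the noisy label map
    toward the (toy) FP image intensities."""
    return 0.5 * noisy_labels + 0.5 * fp_image

# Toy data: FA vessels lie on row 1; the true FP vessels lie on row 2.
fa_vessel_map = np.zeros((4, 4)); fa_vessel_map[1, :] = 1.0
fp_image = np.zeros((4, 4)); fp_image[2, :] = 1.0

vessel_map = fa_vessel_map
for _ in range(3):  # iterate: registration <-> weakly-supervised learning
    warped = register(vessel_map, fp_image)          # tentative noisy labels
    vessel_map = train_with_noisy_labels(fp_image, warped)
```

Even in this toy version, mass in `vessel_map` concentrates on the true vessel row over iterations, which is the point of the alternation: each step supplies the other with better inputs.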

