Classifier learning and modality in a polyglot savant

Lingua ◽  
2007 ◽  
Vol 117 (7) ◽  
pp. 1339-1353 ◽  
Author(s):  
Gary Morgan ◽  
Neil Smith ◽  
Ianthi Tsimpli ◽  
Bencie Woll
2020 ◽  
Vol 34 (04) ◽  
pp. 4667-4674 ◽  
Author(s):  
Shikun Li ◽  
Shiming Ge ◽  
Yingying Hua ◽  
Chunhui Zhang ◽  
Hao Wen ◽  
...  

Typically, learning a deep classifier from a massive set of cleanly annotated instances is effective but impractical in many real-world scenarios. An alternative is to collect and aggregate multiple noisy annotations for each instance and train the classifier on those. Inspired by this, the paper proposes to learn a deep classifier from multiple noisy annotators via a coupled-view learning approach, in which the learning view from data is represented by deep neural networks for data classification and the learning view from labels is described by a Naive Bayes classifier for label aggregation. Such coupled-view learning is converted into a supervised learning problem under the mutual supervision of the aggregated and predicted labels, and can be solved by alternating optimization to update the labels and refine the classifiers. To alleviate the propagation of incorrect labels, a small-loss metric is proposed to select reliable instances in both views. A co-teaching strategy with a class-weighted loss is further leveraged in the deep classifier learning: two networks with different learning abilities teach each other, so that the diverse errors introduced by noisy labels can be filtered out by the peer networks. With these strategies, the approach learns a robust data classifier that is less prone to overfitting to label noise. Experimental results on synthetic and real data demonstrate the effectiveness and robustness of the proposed approach.
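
The co-teaching step summarized in this abstract (two networks that each select their small-loss, presumably reliable, instances and use them to update the peer network under a class-weighted loss) can be sketched roughly as follows. This is a minimal illustration under assumed names (net_a, net_b, opt_a, opt_b, keep_ratio, class_weights, and y_agg for the aggregated labels), not the authors' implementation; the alternating Naive Bayes label-aggregation view is omitted.

```python
# Rough sketch (not the paper's code) of one co-teaching update with
# small-loss instance selection and a class-weighted loss, in PyTorch.
# net_a, net_b, opt_a, opt_b, keep_ratio, class_weights are assumed names.
import torch
import torch.nn.functional as F


def small_loss_indices(logits, labels, keep_ratio):
    """Indices of the keep_ratio fraction of instances with the smallest
    cross-entropy loss, treated here as the most reliable ones."""
    losses = F.cross_entropy(logits, labels, reduction="none")
    num_keep = max(1, int(keep_ratio * labels.size(0)))
    return torch.argsort(losses)[:num_keep]


def co_teaching_step(net_a, net_b, opt_a, opt_b,
                     x, y_agg, keep_ratio, class_weights):
    """One mini-batch update: each network selects its small-loss instances
    under the aggregated labels y_agg, and those instances are used to
    update the *peer* network, so diverse errors induced by noisy labels
    can be filtered out."""
    with torch.no_grad():  # selection pass only, no gradients needed
        idx_a = small_loss_indices(net_a(x), y_agg, keep_ratio)
        idx_b = small_loss_indices(net_b(x), y_agg, keep_ratio)

    # Cross-update each network on its peer's selection,
    # with a class-weighted cross-entropy loss.
    loss_a = F.cross_entropy(net_a(x[idx_b]), y_agg[idx_b], weight=class_weights)
    loss_b = F.cross_entropy(net_b(x[idx_a]), y_agg[idx_a], weight=class_weights)

    opt_a.zero_grad(); loss_a.backward(); opt_a.step()
    opt_b.zero_grad(); loss_b.backward(); opt_b.step()
    return loss_a.item(), loss_b.item()
```

In practice, the keep ratio would be scheduled (starting near 1 and decreasing toward the estimated clean fraction) and the aggregated labels y_agg would be refreshed by the label-aggregation view between epochs, as the abstract's alternating optimization describes.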


IEEE Access ◽  
2020 ◽  
Vol 8 ◽  
pp. 54494-54505
Author(s):  
Chunyu Yang ◽  
Weiwei Wang ◽  
Xiangchu Feng ◽  
Shuisheng Zhou
