Unsupervised cross-domain named entity recognition using entity-aware adversarial training

2020 ◽
Author(s): Qi Peng ◽ Changmeng Zheng ◽ Yi Cai ◽ Tao Wang ◽ Haoran Xie ◽ ...


2021 ◽ pp. 1-10
Author(s): Zhucong Li ◽ Zhen Gan ◽ Baoli Zhang ◽ Yubo Chen ◽ Jing Wan ◽ ...

Abstract This paper describes our approach to the Chinese medical named entity recognition (MER) task organized as part of the 2020 China Conference on Knowledge Graph and Semantic Computing (CCKS) competition. The task requires identifying the entity boundaries and category labels of six entity types in Chinese electronic medical records (EMRs). We construct a hybrid system composed of a semi-supervised noisy-label learning model based on adversarial training and a rule-based post-processing module. The core idea of the hybrid system is to reduce the impact of data noise by refining the model's outputs. In addition, the post-processing rules correct three kinds of errors in the model's predictions: redundant labels, missing labels, and wrong labels. Our method achieved 0.9156 under the strict criterion and 0.9660 under the relaxed criterion on the final test set, ranking first.
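The paper does not publish its rule set, so the following is a minimal, hypothetical sketch of what such a post-processing module can look like: a small lexicon (`KNOWN_ENTITIES`) drives three fix-ups mirroring the three error cases named above. All names, rules, and the toy lexicon are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical rule post-processing pass over NER span predictions.
# KNOWN_ENTITIES and all three rules are illustrative assumptions; the
# paper does not publish its actual rule set.
from typing import List, Tuple

Span = Tuple[int, int, str]  # (start, end, label), end-exclusive char offsets

KNOWN_ENTITIES = {"糖尿病": "DISEASE", "阿司匹林": "DRUG"}  # toy lexicon

def postprocess(text: str, spans: List[Span]) -> List[Span]:
    fixed = []
    for start, end, label in spans:
        surface = text[start:end]
        # Wrong labeling: trust the lexicon's label over the model's.
        if surface in KNOWN_ENTITIES and KNOWN_ENTITIES[surface] != label:
            label = KNOWN_ENTITIES[surface]
        # Redundant labeling: drop spans that are only whitespace/punctuation.
        if not surface.strip(" \t，。；、"):
            continue
        fixed.append((start, end, label))
    # Missing labeling: add lexicon entities the model failed to predict.
    covered = {(s, e) for s, e, _ in fixed}
    for surface, label in KNOWN_ENTITIES.items():
        idx = text.find(surface)
        if idx != -1 and (idx, idx + len(surface)) not in covered:
            fixed.append((idx, idx + len(surface), label))
    return sorted(fixed)

# Example: the model mislabels "阿司匹林" and misses "糖尿病".
text = "患者有糖尿病史，服用阿司匹林。"
print(postprocess(text, [(10, 14, "DISEASE")]))
# -> [(3, 6, 'DISEASE'), (10, 14, 'DRUG')]
```

In a real system the lexicon would come from curated medical terminology, and the rules would be tuned against development-set error analysis rather than hard-coded as here.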


2021 ◽ Vol 9 ◽ pp. 586-604
Author(s): Abbas Ghaddar ◽ Philippe Langlais ◽ Ahmad Rashid ◽ Mehdi Rezagholizadeh

Abstract In this work, we examine the ability of NER models to use contextual information when predicting the type of an ambiguous entity. We introduce NRB, a new testbed carefully designed to diagnose the name regularity bias of NER models. Our results indicate that all of the state-of-the-art models we tested exhibit this bias, with fine-tuned BERT models significantly outperforming feature-based (LSTM-CRF) ones on NRB despite comparable (and sometimes lower) performance on standard benchmarks. To mitigate the bias, we propose a novel model-agnostic training method that adds learnable adversarial noise to some entity mentions, forcing models to rely more heavily on the contextual signal and leading to significant gains on NRB. Combining it with two other training strategies, data augmentation and parameter freezing, yields further gains.
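As a rough illustration of the idea (not the authors' exact recipe, which uses learnable noise), the sketch below injects a single-step, gradient-sign perturbation only at entity-mention positions of a Hugging Face-style token-classification model; `mention_mask`, `epsilon`, and the FGSM-style update are simplifying assumptions introduced here.

```python
# Sketch of adversarially perturbing entity-mention embeddings so the model
# must lean on context. A single FGSM-style step stands in for the paper's
# learnable noise; mention_mask and epsilon are assumptions for illustration.
import torch

def mention_adversarial_loss(model, embeddings, mention_mask, labels, epsilon=1e-2):
    """embeddings:   (batch, seq, dim) input token embeddings
    mention_mask: (batch, seq) bool, True at entity-mention tokens
    labels:       gold tag ids; model is any HF-style token classifier that
                  accepts inputs_embeds and returns an output with .loss"""
    embeddings = embeddings.detach().requires_grad_(True)
    clean_loss = model(inputs_embeds=embeddings, labels=labels).loss
    grad = torch.autograd.grad(clean_loss, embeddings)[0]
    # Ascend the loss only at mention positions; context tokens stay clean.
    noise = epsilon * grad.sign() * mention_mask.unsqueeze(-1)
    adv_loss = model(inputs_embeds=(embeddings + noise).detach(),
                     labels=labels).loss
    return adv_loss  # backprop this, optionally summed with clean_loss
```

During training, this adversarial loss would be combined with the standard loss; the effect is that mention surface forms become unreliable, pushing the encoder toward contextual cues, which is the behavior NRB is designed to measure.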

