Rethinking Generalization of Neural Models: A Named Entity Recognition Case Study

2020 ◽  
Vol 34 (05) ◽  
pp. 7732-7739
Author(s):  
Jinlan Fu ◽  
Pengfei Liu ◽  
Qi Zhang

While neural network-based models have achieved impressive performance on a large body of NLP tasks, the generalization behavior of different models remains poorly understood: does this excellent performance imply a perfectly generalizing model, or are there still limitations? In this paper, we take the NER task as a testbed to analyze the generalization behavior of existing models from different perspectives and characterize the differences in their generalization abilities through the lens of our proposed measures, which guides us toward better design of models and training methods. Experiments with in-depth analyses diagnose the bottlenecks of existing neural NER models in terms of breakdown performance analysis, annotation errors, dataset bias, and category relationships, suggesting directions for improvement. We have released two datasets (ReCoNLL, PLONER) for future research at our project page: http://pfliu.com/InterpretNER/.
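As a hedged illustration of one such generalization probe (not the authors' released code), the sketch below buckets test entities by whether their (surface, type) pair was seen in training and compares per-bucket accuracy; the function name and data format are hypothetical:

```python
from collections import defaultdict

def bucket_accuracy(train_entities, test_entities, predictions):
    """Compare recognition accuracy on entities seen vs. unseen in training.

    train_entities: set of (surface, type) pairs from the training data.
    test_entities / predictions: parallel lists of (surface, type) pairs,
    where predictions[i] is the model output for test_entities[i]
    (a hypothetical, simplified format).
    """
    hits, totals = defaultdict(int), defaultdict(int)
    for gold, pred in zip(test_entities, predictions):
        bucket = "seen" if gold in train_entities else "unseen"
        totals[bucket] += 1
        hits[bucket] += int(pred == gold)
    return {b: hits[b] / totals[b] for b in totals}

# A large seen/unseen gap suggests memorization of entity surface forms
# rather than generalizable contextual cues.
```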

2021 ◽  
pp. 1-12
Author(s):  
Yingwen Fu ◽  
Nankai Lin ◽  
Xiaotian Lin ◽  
Shengyi Jiang

Named entity recognition (NER) is fundamental to natural language processing (NLP). Most state-of-the-art research on NER is based on pre-trained language models (PLMs) or classic neural models. However, this research is mainly oriented toward high-resource languages such as English, while for Indonesian, related resources (both datasets and technology) are not yet well developed. Moreover, affixation is an important word-formation process in Indonesian, indicating the importance of character and token features for token-wise Indonesian NLP tasks; the features extracted by current top-performing models are insufficient. Targeting the Indonesian NER task, in this paper we build an Indonesian NER dataset (IDNER) comprising over 50 thousand sentences (over 670 thousand tokens) to alleviate the shortage of labeled resources in Indonesian. Furthermore, we construct a hierarchical structured-attention-based model (HSA) for Indonesian NER that extracts sequence features from different perspectives. Specifically, we use an enhanced convolutional structure as well as an enhanced attention structure to extract deeper features from characters and tokens. Experimental results show that HSA achieves competitive performance on IDNER and three benchmark datasets.
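A minimal PyTorch sketch of the general character-plus-token idea described above, assuming illustrative dimensions and a single plain self-attention layer; the paper's enhanced convolution and attention structures are not reproduced here:

```python
import torch
import torch.nn as nn

class CharTokenEncoder(nn.Module):
    def __init__(self, n_chars, n_tokens, char_dim=32, tok_dim=128, n_heads=4):
        super().__init__()
        self.char_emb = nn.Embedding(n_chars, char_dim, padding_idx=0)
        self.char_cnn = nn.Conv1d(char_dim, char_dim, kernel_size=3, padding=1)
        self.tok_emb = nn.Embedding(n_tokens, tok_dim, padding_idx=0)
        self.attn = nn.MultiheadAttention(tok_dim + char_dim, n_heads,
                                          batch_first=True)

    def forward(self, char_ids, tok_ids):
        # char_ids: (batch, seq_len, word_len); tok_ids: (batch, seq_len)
        b, s, w = char_ids.shape
        c = self.char_emb(char_ids).view(b * s, w, -1).transpose(1, 2)
        c = self.char_cnn(c).max(dim=2).values.view(b, s, -1)  # pool over chars
        x = torch.cat([self.tok_emb(tok_ids), c], dim=-1)      # char + token view
        out, _ = self.attn(x, x, x)   # sequence features for a downstream tagger
        return out
```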


2021 ◽  
Author(s):  
Xin Zhang ◽  
Guangwei Xu ◽  
Yueheng Sun ◽  
Meishan Zhang ◽  
Pengjun Xie

2020 ◽  
Author(s):  
Usman Naseem ◽  
Matloob Khushi ◽  
Vinay Reddy ◽  
Sakthivel Rajendran ◽  
Imran Razzak ◽  
...  

Abstract Background: In recent years, with the growing number of biomedical documents and advances in natural language processing algorithms, research on biomedical named entity recognition (BioNER) has increased exponentially. However, BioNER is challenging because, in the biomedical domain: (i) training data is often limited, (ii) an entity can refer to multiple types and concepts depending on its context, and (iii) texts rely heavily on sub-domain-specific acronyms. Existing BioNER approaches often neglect these issues and directly adopt state-of-the-art (SOTA) models trained on general corpora, which often yields unsatisfactory results. Results: We propose biomedical ALBERT (A Lite Bidirectional Encoder Representations from Transformers for Biomedical Text Mining), bioALBERT, an effective domain-specific pre-trained language model trained on a large biomedical corpus and designed to capture biomedical context-dependent NER. We adopted the self-supervised loss function used in ALBERT, which targets modelling inter-sentence coherence, to better learn context-dependent representations, and incorporated parameter-reduction strategies to minimise memory usage and reduce training time for BioNER. In our experiments, bioALBERT outperformed comparative SOTA BioNER models on eight biomedical NER benchmark datasets covering four entity types. Performance increased for (i) disease-type corpora by 7.47% (NCBI-disease) and 10.63% (BC5CDR-disease); (ii) drug/chemical-type corpora by 4.61% (BC5CDR-Chem) and 3.89% (BC4CHEMD); (iii) gene/protein-type corpora by 12.25% (BC2GM) and 6.42% (JNLPBA); and (iv) species-type corpora by 6.19% (LINNAEUS) and 23.71% (Species-800), leading to state-of-the-art results. Conclusions: The performance of the proposed model on four different biomedical entity types shows that it is robust and generalizes well in recognizing biomedical entities in text. We trained four different variants of bioALBERT, which are available to the research community for use in future research.
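A hedged sketch of how an ALBERT-style encoder is applied to token-level BioNER with the HuggingFace transformers library. "albert-base-v2" is a generic stand-in, since the abstract does not name the released checkpoints, and the tag set and sentence are illustrative:

```python
import torch
from transformers import AlbertTokenizerFast, AlbertForTokenClassification

labels = ["O", "B-Disease", "I-Disease"]  # illustrative tag set only
tokenizer = AlbertTokenizerFast.from_pretrained("albert-base-v2")
model = AlbertForTokenClassification.from_pretrained(
    "albert-base-v2", num_labels=len(labels)  # stand-in for a bioALBERT checkpoint
)

words = "Mutations in BRCA1 are linked to breast cancer .".split()
enc = tokenizer(words, is_split_into_words=True, return_tensors="pt")
with torch.no_grad():
    logits = model(**enc).logits              # (1, seq_len, num_labels)
pred_ids = logits.argmax(dim=-1)[0].tolist()
print([labels[i] for i in pred_ids])  # meaningless until fine-tuned on BioNER data
```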


2021 ◽  
Vol 9 ◽  
pp. 1116-1131
Author(s):  
David Ifeoluwa Adelani ◽  
Jade Abbott ◽  
Graham Neubig ◽  
Daniel D’souza ◽  
Julia Kreutzer ◽  
...  

Abstract We take a step towards addressing the under-representation of the African continent in NLP research by bringing together different stakeholders to create the first large, publicly available, high-quality dataset for named entity recognition (NER) in ten African languages. We detail the characteristics of these languages to help researchers and practitioners better understand the challenges they pose for NER tasks. We analyze our datasets and conduct an extensive empirical evaluation of state-of-the-art methods across both supervised and transfer learning settings. Finally, we release the data, code, and models to inspire future research on African NLP.
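NER benchmarks like this one are typically scored with strict entity-level F1; a toy sketch using the seqeval library (not the paper's evaluation script; the label sequences below are made up):

```python
from seqeval.metrics import classification_report, f1_score

gold = [["B-PER", "I-PER", "O", "B-LOC"]]  # made-up gold tags
pred = [["B-PER", "I-PER", "O", "O"]]      # made-up predictions

print(f1_score(gold, pred))                # strict span-level F1
print(classification_report(gold, pred))   # per-type precision/recall/F1
```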


2020 ◽  
Author(s):  
Hiroki Ouchi ◽  
Jun Suzuki ◽  
Sosuke Kobayashi ◽  
Sho Yokoi ◽  
Tatsuki Kuribayashi ◽  
...  

2021 ◽  
pp. 1-13
Author(s):  
Ankur Priyadarshi ◽  
Sujan Kumar Saha

In this paper, we present our effort in developing a Maithili Named Entity Recognition (NER) system. Maithili is one of the official languages of India, with around 50 million native speakers. Although various NER systems have been developed for several Indian languages, we did not find any openly available NER resource or system for Maithili. For development, we manually annotated a Maithili NER corpus containing around 200K words. We prepared a baseline classifier using Conditional Random Fields (CRF) and then ran many experiments using various recurrent neural networks (RNNs). We collected a larger raw corpus to obtain better word and character embeddings. In our experiments, we found that neural models outperform CRF, that a CRF layer is effective for predicting the final output in the RNN models, and that character embeddings are effective for Maithili. We also investigated the effectiveness of gazetteer lists in neural models: we prepared a few gazetteer lists from various web resources and used them in the neural models, and incorporating the gazetteer layer improved performance. The final system achieved an f-measure of 91.6%, with 94.9% precision and 88.53% recall.
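A minimal sklearn-crfsuite sketch of a CRF baseline with a gazetteer-membership feature, in the spirit of the baseline and gazetteer use described above; the toy sentence, tags, gazetteer entries, and feature set are invented for illustration:

```python
import sklearn_crfsuite

GAZETTEER = {"Patna", "Bihar"}  # invented place-name gazetteer

def word_features(sent, i):
    w = sent[i]
    return {
        "word": w,
        "suffix3": w[-3:],
        "is_title": w.istitle(),
        "in_gazetteer": w in GAZETTEER,  # gazetteer membership as a feature
        "prev": sent[i - 1] if i > 0 else "<BOS>",
        "next": sent[i + 1] if i < len(sent) - 1 else "<EOS>",
    }

# Toy training data; a real system would use the annotated 200K-word corpus.
train_sents = [["Patna", "is", "in", "Bihar"]]
train_labels = [["B-LOC", "O", "O", "B-LOC"]]

X = [[word_features(s, i) for i in range(len(s))] for s in train_sents]
y = train_labels

crf = sklearn_crfsuite.CRF(algorithm="lbfgs", c1=0.1, c2=0.1, max_iterations=100)
crf.fit(X, y)
print(crf.predict(X))  # BIO tag sequences for the (toy) input
```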

