Data Augmentation Techniques on Arabic Data for Named Entity Recognition

2021 · Vol 189 · pp. 292-299
Author(s): Caroline Sabty, Islam Omar, Fady Wasfalla, Mohamed Islam, Slim Abdennadher
2020
Author(s): Penghui Wang (蓬辉 王)

BACKGROUND: Chinese clinical named entity recognition, a fundamental task in Chinese medical information extraction, plays an important role in recognizing the medical entities contained in Chinese electronic medical records. Constrained by the lack of large annotated datasets, existing methods concentrate on employing external resources to improve performance, which requires considerable time and well-designed rules to incorporate those resources.

OBJECTIVE: To address the shortage of annotated data, we employ data augmentation without external resources to automatically generate additional medical data from the entities and non-entities in the training set, enlarging the training dataset to improve named entity recognition performance.

METHODS: We propose a data augmentation method based on a sequence generative adversarial network to enlarge the training set. Unlike other sequence generative adversarial networks, whose basic element is a character or word, the basic element of our generated sequences is an entity or non-entity segment. The generator produces new sentences composed of entities and non-entities based on the hidden relationships between them learned from the training set, and the discriminator judges whether the generated sentences are realistic and provides rewards that guide the generator's training. The generated data are then used to enlarge the training set and improve named entity recognition performance on medical records.

RESULTS: Without any external resources, we applied our data augmentation method to three datasets covering both general domains and the medical domain. Experiments show that when the generated data are used to expand the training set, the named entity recognition system achieves performance competitive with existing methods, demonstrating the effectiveness of our augmentation method. In the general domains, our method achieves an overall F1-score of 59.42% on the Weibo NER dataset and an F1-score of 95.28% on Resume. In the medical domain, our method achieves 83.40%.

CONCLUSIONS: Our data augmentation method expands the training set based on the hidden relationships between entities and non-entities in the data, alleviating the shortage of labeled data while avoiding external resources. It improves named entity recognition performance in both general domains and the medical domain.
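Stripped of the GAN machinery, the core idea is that new labeled sentences can be assembled from entity and non-entity segments already present in the training set. The following is a minimal Python sketch of that idea, with a random same-type entity swap standing in for the learned generator; the function names and toy data are illustrative, not the authors' code.

```python
# Entity-level augmentation sketch: recombine entity and non-entity segments
# from the training set into new labeled sentences.
import random
from collections import defaultdict

def split_segments(tokens, tags):
    """Split a BIO-tagged sentence into (segment_tokens, entity_type or None) chunks."""
    segments, buf, buf_type = [], [], None
    for tok, tag in zip(tokens, tags):
        tag_type = None if tag == "O" else tag[2:]
        new_chunk = tag.startswith("B-") or tag_type != buf_type
        if buf and new_chunk:
            segments.append((buf, buf_type))
            buf = []
        buf_type = tag_type
        buf.append(tok)
    if buf:
        segments.append((buf, buf_type))
    return segments

def augment(dataset, n_new=100, seed=0):
    """Generate new labeled sentences by swapping entities with same-type entities."""
    rng = random.Random(seed)
    entity_pool = defaultdict(list)           # entity type -> list of mention token lists
    templates = []                            # each sentence as a list of segments
    for tokens, tags in dataset:
        segs = split_segments(tokens, tags)
        templates.append(segs)
        for seg_tokens, seg_type in segs:
            if seg_type is not None:
                entity_pool[seg_type].append(seg_tokens)

    new_data = []
    for _ in range(n_new):
        segs = rng.choice(templates)
        tokens, tags = [], []
        for seg_tokens, seg_type in segs:
            if seg_type is not None:
                seg_tokens = rng.choice(entity_pool[seg_type])   # swap in another mention
                tags += ["B-" + seg_type] + ["I-" + seg_type] * (len(seg_tokens) - 1)
            else:
                tags += ["O"] * len(seg_tokens)
            tokens += seg_tokens
        new_data.append((tokens, tags))
    return new_data

# toy usage
train = [(["张", "三", "住", "在", "北", "京"],
          ["B-PER", "I-PER", "O", "O", "B-LOC", "I-LOC"])]
print(augment(train, n_new=2))
```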


2021 · Vol 9 · pp. 586-604
Author(s): Abbas Ghaddar, Philippe Langlais, Ahmad Rashid, Mehdi Rezagholizadeh

Abstract In this work, we examine the ability of NER models to use contextual information when predicting the type of an ambiguous entity. We introduce NRB, a new testbed carefully designed to diagnose the name regularity bias of NER models. Our results indicate that all state-of-the-art models we tested exhibit such a bias, with fine-tuned BERT models significantly outperforming feature-based (LSTM-CRF) ones on NRB despite comparable (sometimes lower) performance on standard benchmarks. To mitigate this bias, we propose a novel model-agnostic training method that adds learnable adversarial noise to some entity mentions, thereby forcing models to focus more strongly on the contextual signal and leading to significant gains on NRB. Combining it with two other training strategies, data augmentation and parameter freezing, leads to further gains.
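As a rough illustration of the mitigation idea (not the authors' exact formulation), the sketch below adds a learnable, bounded noise vector to the encoder states of tokens inside entity mentions, so the classifier is pushed to rely more on surrounding context. The module and parameter names are invented for the example.

```python
# Toy sketch: perturb entity-mention representations with learnable noise so the
# tagger cannot rely solely on the mention surface form.
import torch
import torch.nn as nn

class NoisyMentionTagger(nn.Module):
    def __init__(self, encoder, hidden_size, num_labels, noise_scale=0.1):
        super().__init__()
        self.encoder = encoder                                 # any module: ids -> (B, T, H)
        self.noise = nn.Parameter(torch.zeros(hidden_size))    # learnable perturbation vector
        self.noise_scale = noise_scale
        self.classifier = nn.Linear(hidden_size, num_labels)

    def forward(self, input_ids, mention_mask):
        # mention_mask: (B, T), 1.0 on tokens inside known entity mentions
        hidden = self.encoder(input_ids)                       # (B, T, H)
        perturb = self.noise_scale * torch.tanh(self.noise)    # bounded noise
        hidden = hidden + mention_mask.unsqueeze(-1) * perturb # applied only at mentions
        return self.classifier(hidden)                         # (B, T, num_labels) logits

# toy usage with an embedding layer standing in for a pre-trained encoder
enc = nn.Embedding(1000, 64)
model = NoisyMentionTagger(enc, hidden_size=64, num_labels=9)
ids = torch.randint(0, 1000, (2, 5))
mask = torch.tensor([[0, 1, 1, 0, 0], [0, 0, 1, 0, 0]], dtype=torch.float)
print(model(ids, mask).shape)  # torch.Size([2, 5, 9])
```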


2021
Author(s): Arslan Erdengasileng, Keqiao Li, Qing Han, Shubo Tian, Jian Wang, ...

Identification and indexing of chemical compounds in full-text articles are essential steps in biomedical article categorization, information extraction, and biological text mining. The BioCreative Challenge was established to evaluate methods for biological text mining and information extraction. Track 2 of BioCreative VII (summer 2021) consists of two subtasks: chemical identification and chemical indexing in full-text PubMed articles. The chemical identification subtask in turn has two parts: chemical named entity recognition (NER) and chemical normalization. In this paper, we present our work on developing a hybrid pipeline for chemical NER, chemical normalization, and chemical indexing in full-text PubMed articles. Specifically, we applied BERT-based methods for chemical NER and chemical indexing, and a sieve-based dictionary matching method for chemical normalization. For subtask 1, we used PubMedBERT with data augmentation for the chemical NER task. Several chemical-to-MeSH dictionaries, including MeSH.XML, SUPP.XML, MRCONSO.RRF, and PubTator chemical annotations, are consulted in a specific order to obtain the best normalization performance. We achieved F1 scores of 0.86 and 0.7668 on chemical NER and chemical normalization, respectively. We formulated subtask 2 as a binary prediction problem for each individual chemical compound name and used a BERT-based model with engineered features, achieving a strict F1 score of 0.4825 on the test set, substantially higher than the median F1 score (0.3971) of all submissions.
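The normalization step can be pictured as a sieve: dictionaries are consulted in a fixed priority order and the first hit wins. Below is a simplified, self-contained sketch of that pattern; the dictionary contents are placeholders and the ordering simply mirrors the resources named above.

```python
# Sieve-based dictionary matching sketch: try lexicons in priority order,
# return the MeSH ID from the first lexicon that contains the mention.
import re

def normalize(text):
    """Light surface normalization: lowercase, collapse whitespace and hyphens."""
    return re.sub(r"[\s\-]+", " ", text.lower()).strip()

def build_sieve(*dicts):
    """Each dict maps a surface form to a MeSH ID; argument order defines the sieve."""
    return [{normalize(k): v for k, v in d.items()} for d in dicts]

def link(mention, sieve):
    key = normalize(mention)
    for layer in sieve:
        if key in layer:
            return layer[key]
    return None                      # unresolved mention

# placeholder dictionaries, ordered as in the abstract
mesh_xml = {"aspirin": "D001241"}
supp_xml = {}
mrconso  = {"acetylsalicylic acid": "D001241"}
pubtator = {}
sieve = build_sieve(mesh_xml, supp_xml, mrconso, pubtator)
print(link("Acetylsalicylic-Acid", sieve))   # D001241
```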


2020 · Vol 10 (12) · pp. 4234
Author(s): Hung-Kai Kung, Chun-Mo Hsieh, Cheng-Yu Ho, Yun-Cheng Tsai, Hao-Yung Chan, ...

This research aims to build a Mandarin named entity recognition (NER) module using transfer learning to facilitate damage information gathering and analysis in disaster management. The hybrid NER approach proposed in this research includes three modules: (1) data augmentation, which constructs a concise data set for disaster management; (2) a reference model, which uses the bidirectional long short-term memory–conditional random field (BiLSTM-CRF) framework to implement NER; and (3) an augmented model built by integrating the first two modules via cross-domain transfer with disparate label sets. Through the combination of established rules and learned sentence patterns, the hybrid approach performs well on NER tasks for disaster management and successfully recognizes unfamiliar words. We applied the proposed NER module to disaster management, where it handled the NER tasks from our related work well and achieved the desired outcomes. With proper transfer, the results of this work can be extended to other fields and bring valuable advantages to diverse applications.
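A minimal sketch of the transfer step under stated assumptions: the embedding and BiLSTM layers of a general-domain tagger are copied into a new model whose output layer matches the disaster-management label set. The CRF layer is omitted for brevity, and all sizes and vocabulary are illustrative, not the authors' configuration.

```python
# Cross-domain transfer with disparate label sets: reuse shared layers,
# re-initialize only the label-specific output projection.
import torch
import torch.nn as nn

class BiLSTMTagger(nn.Module):
    def __init__(self, vocab_size, emb_dim, hidden, num_labels):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.lstm = nn.LSTM(emb_dim, hidden, batch_first=True, bidirectional=True)
        self.out = nn.Linear(2 * hidden, num_labels)   # label set is domain-specific

    def forward(self, x):
        h, _ = self.lstm(self.emb(x))
        return self.out(h)

source = BiLSTMTagger(vocab_size=5000, emb_dim=100, hidden=128, num_labels=9)
# ... train `source` on the general-domain corpus ...

target = BiLSTMTagger(vocab_size=5000, emb_dim=100, hidden=128, num_labels=13)
target.emb.load_state_dict(source.emb.state_dict())    # transfer shared layers
target.lstm.load_state_dict(source.lstm.state_dict())
# `target.out` stays freshly initialized for the disaster-management label set
```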


Symmetry
2021 · Vol 13 (5) · pp. 786
Author(s): Siqi Chen, Yijie Pei, Zunwang Ke, Wushour Silamu

Named entity recognition (NER) is an important task in natural language processing that requires determining entity boundaries and classifying them into pre-defined categories. For low-resource languages, most state-of-the-art systems require tens of thousands of annotated sentences to achieve high performance. However, minimal annotated data are available for Uyghur and Hungarian (UH languages) NER tasks. Each task also has its own specificities: differences in words and word order across languages make this a challenging problem. In this paper, we present an effective way to provide a meaningful and easy-to-use feature extractor for named entity recognition tasks: fine-tuning a pre-trained language model. We propose a fine-tuning method for a low-resource language model that constructs a fine-tuning dataset through data augmentation, adds the dataset of a high-resource language, and finally fine-tunes the cross-lingual pre-trained model on this combined dataset. In addition, we propose an attention-based fine-tuning strategy that uses symmetry to better select relevant semantic and syntactic information from pre-trained language models and applies these symmetry features to named entity recognition tasks. We evaluated our approach on Uyghur and Hungarian datasets, where it showed strong performance compared with several strong baselines. We close with an overview of the available resources for named entity recognition and some of the open research questions.
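A rough sketch of the dataset construction described above: the small Uyghur or Hungarian training set is expanded by augmentation and mixed with examples from a high-resource language before fine-tuning a cross-lingual model. The `augment_low_resource` callable, the mixing ratio, and the counts below are illustrative assumptions, not values from the paper.

```python
# Build a mixed fine-tuning corpus: low-resource data + augmented copies + high-resource data.
import random

def mix_corpora(low_resource, high_resource, augment_low_resource,
                n_aug=500, high_ratio=1.0, seed=0):
    rng = random.Random(seed)
    corpus = list(low_resource)
    # augmented copies of randomly chosen low-resource sentences
    corpus += [augment_low_resource(rng.choice(low_resource), rng) for _ in range(n_aug)]
    # cap the high-resource contribution relative to the low-resource portion
    n_high = min(int(high_ratio * len(corpus)), len(high_resource))
    corpus += rng.sample(list(high_resource), n_high)
    rng.shuffle(corpus)
    return corpus   # fine-tune a cross-lingual model (e.g. XLM-R) on this mixed set

# toy usage: identity "augmentation" and dummy sentences
uh = [(["Ürümqi", "is", "a", "city"], ["B-LOC", "O", "O", "O"])]
en = [(["Paris", "is", "a", "city"], ["B-LOC", "O", "O", "O"])]
print(len(mix_corpora(uh, en, lambda s, rng: s, n_aug=3, high_ratio=0.5)))
```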


Author(s): M. M. Tikhomirov, N. V. Loukachevitch, A. Yu. Sirotina, B. V. Dobrov, ...

The paper presents the results of applying the BERT representation model to the named entity recognition task for the cybersecurity domain in Russian. Several variants of the model were investigated. The best results were obtained with a BERT model additionally trained on a target collection of information-security texts. This model outperformed CRF, the best method in previous experiments on the same task and data, by 15 percentage points of F1-macro. We also explored a new form of data augmentation for the named entity recognition task.
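A minimal sketch of the domain-adaptation step suggested by the abstract, using the Hugging Face transformers library: continue masked-language-model pretraining of a Russian BERT checkpoint on an unlabeled collection of information-security texts before NER fine-tuning. The checkpoint name, file path, and hyperparameters are assumptions for illustration, not the authors' setup.

```python
# Domain-adaptive MLM pretraining sketch for a Russian BERT on in-domain texts.
from transformers import (AutoTokenizer, AutoModelForMaskedLM,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)
from datasets import load_dataset

model_name = "DeepPavlov/rubert-base-cased"          # assumed Russian BERT checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForMaskedLM.from_pretrained(model_name)

# security_corpus.txt: one in-domain sentence per line (hypothetical file)
dataset = load_dataset("text", data_files={"train": "security_corpus.txt"})["train"]
dataset = dataset.map(lambda ex: tokenizer(ex["text"], truncation=True, max_length=128),
                      batched=True, remove_columns=["text"])

collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm_probability=0.15)
args = TrainingArguments(output_dir="rubert-security", num_train_epochs=3,
                         per_device_train_batch_size=16)
Trainer(model=model, args=args, train_dataset=dataset,
        data_collator=collator).train()
# The adapted checkpoint is then fine-tuned for NER as usual.
```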

