BE-BLC: BERT-ELMO-Based Deep Neural Network Architecture for English Named Entity Recognition Task

2021
Vol 192
pp. 168-181
Author(s):
Manel Affi
Chiraz Latiri

Author(s):
M. Bevza

We analyze neural network architectures that yield state-of-the-art results on the named entity recognition task and propose a new architecture for improving results even further. We have analyzed a number of ideas and approaches that researchers have used to achieve state-of-the-art results in a variety of NLP tasks, and in this work we present those we consider most likely to improve existing state-of-the-art solutions for named entity recognition. The architecture is inspired by recent developments in the language modeling task. The suggested solution is based on a multi-task learning approach: we incorporate part-of-speech tags, produced by a state-of-the-art tagger, as input for the network, and we also ask the network to produce those tags in addition to the main named entity recognition tags. In this way, knowledge distillation from a strong part-of-speech tagger to our smaller network takes place. We hypothesize that designing the neural network architecture in this way improves the generalizability of the system, and we provide arguments to support this statement.
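A minimal sketch of the multi-task idea described above, written in PyTorch. The BiLSTM encoder, layer sizes, and tag counts are illustrative assumptions, not the paper's exact architecture: the key point is one shared encoder feeding two heads, with the POS head trained against an external tagger's output so its knowledge is distilled into the network.

```python
import torch
import torch.nn as nn

class MultiTaskTagger(nn.Module):
    """Shared encoder with two output heads: NER tags (main task)
    and POS tags (auxiliary task distilled from a strong tagger).
    All sizes below are illustrative, not taken from the paper."""

    def __init__(self, vocab_size, pos_tagset_size, ner_tagset_size,
                 embed_dim=100, pos_dim=25, hidden_dim=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        # POS tags from the external tagger are embedded and
        # concatenated with word embeddings as extra input features.
        self.pos_embed = nn.Embedding(pos_tagset_size, pos_dim)
        self.encoder = nn.LSTM(embed_dim + pos_dim, hidden_dim,
                               batch_first=True, bidirectional=True)
        self.ner_head = nn.Linear(2 * hidden_dim, ner_tagset_size)
        self.pos_head = nn.Linear(2 * hidden_dim, pos_tagset_size)

    def forward(self, tokens, pos_tags):
        x = torch.cat([self.embed(tokens), self.pos_embed(pos_tags)], dim=-1)
        h, _ = self.encoder(x)
        return self.ner_head(h), self.pos_head(h)

def multitask_loss(ner_logits, pos_logits, ner_gold, pos_teacher, alpha=0.5):
    """Joint loss: the POS head is trained against the external tagger's
    output, distilling its knowledge into the shared encoder."""
    ce = nn.CrossEntropyLoss()
    ner_loss = ce(ner_logits.flatten(0, 1), ner_gold.flatten())
    pos_loss = ce(pos_logits.flatten(0, 1), pos_teacher.flatten())
    return ner_loss + alpha * pos_loss
```

The weight alpha balances the auxiliary POS objective against the main NER objective; it is a tuning knob, not a value from the paper.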


2018
Vol 10 (12)
pp. 123
Author(s):
Mohammed Ali
Guanzheng Tan
Aamir Hussain

Recurrent neural networks (RNNs) have achieved remarkable success in sequence labeling tasks that require memory. An RNN can remember previous information in a sequence and can thus be used to solve natural language processing (NLP) tasks. Named entity recognition (NER) is a common NLP task and can be considered a classification problem. We propose a bidirectional long short-term memory (LSTM) model for the entity recognition task on Arabic text. The LSTM network can process a sequence while relating each part of it to the rest, which makes it useful for the NER task. Moreover, we use pre-trained word embeddings to represent the inputs that are fed into the LSTM network. The proposed model is evaluated on a popular dataset called “ANERcorp.” Experimental results show that the model with word embeddings achieves a high F-score of 88.01%.
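A short sketch, again in PyTorch, of the step this abstract relies on: initializing the embedding layer of a BiLSTM tagger from pre-trained word vectors. The file format (one word plus its vector per line, as in word2vec/fastText text files), dimensions, and layer sizes are assumptions for illustration, not the paper's settings.

```python
import numpy as np
import torch
import torch.nn as nn

def load_pretrained(path, vocab, dim=300):
    """Build an embedding matrix from a word-vector text file;
    words missing from the file keep small random vectors."""
    matrix = np.random.uniform(-0.25, 0.25, (len(vocab), dim)).astype("float32")
    with open(path, encoding="utf-8") as f:
        for line in f:
            parts = line.rstrip().split(" ")
            if parts[0] in vocab and len(parts) == dim + 1:
                matrix[vocab[parts[0]]] = np.asarray(parts[1:], dtype="float32")
    return torch.from_numpy(matrix)

class BiLSTMTagger(nn.Module):
    def __init__(self, embed_matrix, tagset_size, hidden_dim=128):
        super().__init__()
        # freeze=False lets the pre-trained embeddings be fine-tuned
        self.embed = nn.Embedding.from_pretrained(embed_matrix, freeze=False)
        self.lstm = nn.LSTM(embed_matrix.size(1), hidden_dim,
                            batch_first=True, bidirectional=True)
        self.out = nn.Linear(2 * hidden_dim, tagset_size)

    def forward(self, tokens):
        h, _ = self.lstm(self.embed(tokens))
        return self.out(h)  # per-token tag logits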


Author(s):
Bodhvi Gaur
Gurpreet Singh Saluja
Hamsa Bharathi Sivakumar
Sanjay Singh

A job seeker’s resume contains several sections, including educational qualifications. Educational qualifications capture the knowledge and skills relevant to the job. Machine processing of the education sections of resumes has been a difficult task. In this paper, we attempt to identify educational institutions’ names and degrees from a resume’s education section. Neural network-based named entity recognition techniques usually require a significant amount of annotated data; we use a semi-supervised approach to overcome the lack of a large annotated corpus. We trained a deep neural network model on an initial (seed) set of resume education sections. This model is used to predict the entities of unlabeled education sections, and its predictions are rectified by a correction module. The education sections containing the rectified entities are added to the seed set, and the updated seed set is used for retraining, leading to better accuracy than the previously trained model. In this way, the approach achieves high overall accuracy without the need for a large annotated dataset. Our model achieves an accuracy of 92.06% on the named entity recognition task.
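The self-training loop the abstract describes can be summarized in a few lines. This is a structural sketch only: the callables (train_model, predict_entities, correct) are hypothetical placeholders supplied by the caller, not the paper's API, and the number of rounds is an assumption.

```python
def self_train(train_model, predict_entities, correct,
               seed_set, unlabeled_sections, rounds=3):
    """Semi-supervised loop: train on the seed set, label unlabeled
    education sections, rectify the predictions, grow the seed set,
    and retrain. All three callables are caller-supplied placeholders."""
    model = train_model(seed_set)                        # train on seed data
    for _ in range(rounds):
        newly_labeled = []
        for section in unlabeled_sections:
            predicted = predict_entities(model, section)  # label unlabeled data
            rectified = correct(predicted)                # correction module
            newly_labeled.append((section, rectified))
        seed_set = seed_set + newly_labeled              # augment the seed set
        model = train_model(seed_set)                    # retrain on larger set
    return model
```

Each round trades a little label noise (model predictions) for more training data; the correction module is what keeps that noise from compounding across rounds.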


2020
Vol 32 (20)
pp. 16191-16203
Author(s):
Richa Sharma
Sudha Morwal
Basant Agarwal
Ramesh Chandra
Mohammad S. Khan
