Overview of the NLPCC 2015 Shared Task: Entity Recognition and Linking in Search Queries

Author(s):  
Yansong Feng ◽  
Zhe Han ◽  
Kun Zhang
Author(s):  
Rodrigo Agerri ◽  
German Rigau

We present a multilingual Named Entity Recognition approach based on a robust and general set of features across languages and datasets. Our system combines shallow local information with clustering-based semi-supervised features induced on large amounts of unlabeled text. Understanding via empirical experimentation how to effectively combine various types of clustering features allows us to seamlessly export our system to other datasets and languages. The result is a simple but highly competitive system which obtains state-of-the-art results across five languages and twelve datasets. The results are reported on standard shared task evaluation data such as CoNLL for English, Spanish and Dutch. Furthermore, and despite the lack of linguistically motivated features, we also report best results for languages such as Basque and German. In addition, we demonstrate that our method also obtains very competitive results even when the amount of supervised data is cut in half, alleviating the dependency on manually annotated data. Finally, the results show that our emphasis on clustering features is crucial for developing robust out-of-domain models. The system and models are freely available to facilitate their use and guarantee the reproducibility of results.
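The combination of shallow local features with word-cluster features described above can be illustrated with a minimal sketch. The function names and the toy Brown-cluster lexicon below are hypothetical, not the authors' implementation; real cluster paths are induced from large unlabeled corpora:

```python
def local_features(token):
    # Shallow, language-independent surface features.
    return {
        "lower": token.lower(),
        "is_title": token.istitle(),
        "is_upper": token.isupper(),
        "suffix3": token[-3:],
    }

def cluster_features(token, brown_clusters, prefixes=(4, 8, 12)):
    # Brown clusters map a word to a bit-string path in a binary hierarchy;
    # using several prefix lengths yields features at multiple granularities.
    feats = {}
    path = brown_clusters.get(token.lower())
    if path is not None:
        for p in prefixes:
            feats[f"bc{p}"] = path[:p]
    return feats

def token_features(token, brown_clusters):
    feats = local_features(token)
    feats.update(cluster_features(token, brown_clusters))
    return feats

# Toy cluster lexicon (invented bit strings for illustration).
clusters = {"london": "0110101110", "paris": "0110101111"}
print(token_features("London", clusters))
```

Feature dictionaries like these would typically feed a linear sequence model such as a perceptron or CRF.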


Author(s):  
Jinwei Yuan ◽  
Yan Yang ◽  
Zhen Jia ◽  
Hongfeng Yin ◽  
Junfu Huang ◽  
...  

2020 ◽  
Author(s):  
Christian Druckenbrodt ◽  
Saber Akhondi ◽  
Karin Verspoor ◽  
Zenan Zhai ◽  
Camilo Thorne ◽  
...  

Author(s):  
V. A. Ivanin ◽  
E. L. Artemova ◽  
T. V. Batura ◽  
V. V. Ivanov ◽  
...  

In this paper, we present a shared task on two core information extraction problems: named entity recognition and relation extraction. In contrast to popular shared tasks on related problems, we move away from strictly academic rigor and instead model a business case. As a source of textual data we chose a corpus of Russian strategic documents, which we annotated according to our own annotation scheme. To speed up the annotation process, we exploited various active learning techniques, and in total ended up with more than two hundred annotated documents, allowing us to create a high-quality data set in a short time. The shared task consisted of three tracks, devoted to 1) named entity recognition, 2) relation extraction and 3) joint named entity recognition and relation extraction. We provided the annotated texts as well as a set of unannotated texts, which could have been used in any way to improve solutions. In the paper we overview and compare the solutions submitted by the shared task participants. We release both raw and annotated corpora along with annotation guidelines, evaluation scripts and results at https://github.com/dialogue-evaluation/RuREBus.
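The active learning used to speed up annotation can be sketched as uncertainty sampling: the documents the current model is least sure about are sent to annotators first. This is a generic illustration, not necessarily the technique used in the shared task, and the toy probabilities are invented:

```python
import math

def entropy(probs):
    # Shannon entropy of a predicted label distribution (in nats).
    return -sum(p * math.log(p) for p in probs if p > 0)

def select_for_annotation(pool, predict_proba, batch_size=2):
    # Rank unlabeled documents by model uncertainty (prediction entropy)
    # and return the most uncertain ones for manual annotation.
    ranked = sorted(pool, key=lambda doc: entropy(predict_proba(doc)), reverse=True)
    return ranked[:batch_size]

# Toy model: pretend per-document label probabilities.
fake_probs = {"doc_a": [0.98, 0.02], "doc_b": [0.5, 0.5], "doc_c": [0.7, 0.3]}
picked = select_for_annotation(["doc_a", "doc_b", "doc_c"], lambda d: fake_probs[d])
print(picked)  # ['doc_b', 'doc_c']
```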


2020 ◽  
Vol 34 (05) ◽  
pp. 8480-8487
Author(s):  
Stephen Mayhew ◽  
Gupta Nitish ◽  
Dan Roth

Although modern named entity recognition (NER) systems show impressive performance on standard datasets, they perform poorly when presented with noisy data. In particular, capitalization is a strong signal for entities in many languages, and even state-of-the-art models overfit to this feature, with drastically lower performance on uncapitalized text. In this work, we address the robustness of NER systems on data with noisy or uncertain casing, using a pretraining objective that predicts casing in text (a truecaser), leveraging unlabeled data. The pretrained truecaser is combined with a standard BiLSTM-CRF model for NER by appending its output distributions to the character embeddings. In experiments over several datasets of varying domain and casing quality, we show that our new model improves performance on uncased text, even adding value to uncased BERT embeddings. Our method achieves a new state of the art on the WNUT17 shared task dataset.
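Appending the truecaser's output distributions to character embeddings amounts to a per-character concatenation. Below is a minimal sketch with made-up numbers; a real system would use learned embeddings and softmax outputs from the pretrained truecaser:

```python
def append_case_distribution(char_embeddings, case_probs):
    # For each character, concatenate its embedding with the truecaser's
    # predicted distribution over {lowercase, uppercase}, so the NER model
    # sees the likely casing even when the input is uncased.
    assert len(char_embeddings) == len(case_probs)
    return [emb + dist for emb, dist in zip(char_embeddings, case_probs)]

# Toy 3-dim character embeddings for "ner", plus [P(lower), P(upper)].
embs = [[0.1, 0.2, 0.3], [0.0, 0.5, 0.1], [0.4, 0.4, 0.2]]
probs = [[0.2, 0.8], [0.9, 0.1], [0.95, 0.05]]  # truecaser favors 'N' for 'n'
enriched = append_case_distribution(embs, probs)
print(enriched[0])  # [0.1, 0.2, 0.3, 0.2, 0.8]
```

The enriched vectors then feed the BiLSTM-CRF in place of plain character embeddings.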


Author(s):  
V. A. Korzun ◽  

This paper reports the results of our participation in the Russian Relation Extraction for Business shared task (RuREBus) within Dialogue Evaluation 2020. Our team took first place, ahead of five other teams, in the Relation Extraction with Named Entities track. Our experiments showed that the best-performing model is based on R-BERT, which had previously achieved significantly better results than models based on convolutional or recurrent neural networks on the SemEval-2010 Task 8 relation extraction dataset. To adapt this model to the RuREBus task, we added modifications such as negative sampling. In addition, we tested other models for the Relation Extraction and Named Entity Recognition tasks.
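The negative sampling modification mentioned above typically works by treating entity pairs with no gold relation as NO_RELATION examples and subsampling them to keep training balanced. This is a generic sketch, not the team's actual code; the pair names and labels are invented:

```python
import random

def sample_negatives(entity_pairs, gold_relations, ratio=1, seed=0):
    # Keep all positive (gold-labeled) pairs; subsample the unrelated pairs
    # to at most `ratio` negatives per positive, so NO_RELATION does not
    # overwhelm the training set.
    rng = random.Random(seed)
    positives = [(p, gold_relations[p]) for p in entity_pairs if p in gold_relations]
    negative_pool = [p for p in entity_pairs if p not in gold_relations]
    k = min(len(negative_pool), ratio * len(positives))
    negatives = [(p, "NO_RELATION") for p in rng.sample(negative_pool, k)]
    return positives + negatives

pairs = [("e1", "e2"), ("e1", "e3"), ("e2", "e3"), ("e3", "e4")]
gold = {("e1", "e2"): "FOUNDED_BY"}
examples = sample_negatives(pairs, gold, ratio=2)
print(examples)
```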


Author(s):  
Kecheng Zhan ◽  
Weihua Peng ◽  
Ying Xiong ◽  
Huhao Fu ◽  
Qingcai Chen ◽  
...  

BACKGROUND Family history (FH) information, including family members, their side of the family, their living status, and observations about them, plays a significant role in disease diagnosis and treatment. Family member information extraction aims to extract FH information from semi-structured and unstructured text in electronic health records (EHRs). This is a challenging task involving both named entity recognition (NER) and relation extraction (RE), where the entities are family members, living status and observations, and the relations hold between family members and living status and between family members and observations. OBJECTIVE This study aims to explore ways to effectively extract family history information from clinical text. METHODS Inspired by dependency parsing, we design a novel graph-based schema to represent FH information and introduce deep biaffine attention to extract FH information from clinical text. In the deep biaffine attention model, we use a CNN-BiLSTM (Convolutional Neural Network-Bidirectional Long Short-Term Memory network) and BERT (Bidirectional Encoder Representations from Transformers) to encode input sentences, and deploy a biaffine classifier to extract FH information. In addition, we develop a post-processing module to adjust the results. A system based on the proposed method was developed for the 2019 n2c2/OHNLP shared task track on FH information extraction, which includes two subtasks on entity recognition and relation extraction, respectively. RESULTS We conducted experiments on the corpus provided by the 2019 n2c2/OHNLP shared task track on FH information extraction. Our system achieved the highest F1-scores of 0.8823 on subtask 1 and 0.7048 on subtask 2, setting new benchmark results on the 2019 n2c2/OHNLP corpus. CONCLUSIONS This study designed a novel graph-based schema to represent FH information and applied deep biaffine attention to extract it.
Experimental results show the effectiveness of deep biaffine attention on FH information extraction.
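A biaffine classifier, the core of deep biaffine attention, scores a pair of token representations with a bilinear term plus a bias. Below is a simplified pure-Python sketch; the vector sizes and weights are toy values, not the paper's model, and the full deep biaffine formulation also adds linear terms for each vector:

```python
def biaffine_score(head, dep, U, b):
    # Simplified biaffine scoring: score = head^T U dep + b.
    # A high score suggests a relation (e.g. family member -> observation)
    # holds between the two tokens.
    s = b
    for i, h in enumerate(head):
        for j, d in enumerate(dep):
            s += h * U[i][j] * d
    return s

# Toy 2-dim representations for a family-member token and an observation token.
U = [[0.2, 0.5],
     [0.1, 0.3]]
score = biaffine_score([1.0, 0.0], [0.0, 1.0], U, b=0.0)
print(score)  # 0.5
```

In practice such scores are computed for every token pair and label, and the post-processing step would then prune inconsistent edges from the resulting graph.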

