Disease Named Entity Recognition in Medical Records: Analysis of Transfer Learning between Different Hospital Departments (Preprint)

2021 ◽  
Author(s):  
Jong-Kang Lee ◽  
Jue-Ni Huang ◽  
Kun-Ju Lin ◽  
Richard Tzong-Han Tsai

BACKGROUND Electronic records provide rich clinical information for biomedical text mining. However, a system developed for one hospital department may not generalize to other departments. Here, we use hospital medical records as a research data source and explore the heterogeneity across different hospital departments. OBJECTIVE We use MIMIC-III hospital medical records as the research data source. We collaborate with medical experts to annotate the data, with 328 records included in the analyses. Disease named entity recognition (NER), which helps medical experts consolidate diagnoses, is undertaken as a case study. METHODS To compare the heterogeneity of medical records across departments, we collect text from multiple departments and apply similarity metrics. We apply transfer learning to NER on different departments' records and test the correlation between performance and similarity. We use the TF-IDF cosine similarity of the named entities as our similarity metric. We use three pretrained models on the disease NER task to validate the consistency of the results. RESULTS The disease NER dataset we release consists of 328 medical records from MIMIC-III, with 95,629 sentences and 8,884 disease mentions in total. The inter-annotator agreement (Cohen's kappa) is 0.86. The similarity metrics support the view that medical records from different departments are heterogeneous: similarity to the Medical department ranges from 0.1004 to 0.3541. In the transfer learning task using the Medical department as the training set, the average F1 scores of the three pretrained models range from 0.847 to 0.863. F1 scores correlate with the similarity metric with a Spearman's coefficient of 0.4285. CONCLUSIONS We propose a disease NER dataset based on medical records from MIMIC-III and demonstrate the effectiveness of transfer learning using BERT. Similarity metrics reveal noticeable heterogeneity between department records. The deep learning-based transfer learning method generalizes well across departments and achieves decent NER performance, alleviating the concern that training material from one department might compromise model performance when applied to another. However, model performance does not correlate strongly with inter-department similarity.
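
As a rough illustration of the similarity computation described above, the sketch below builds one TF-IDF "document" per department from its disease mentions and compares each department against the Medical department with cosine similarity. The department contents here are invented placeholders, not the MIMIC-III data.

# Minimal sketch: TF-IDF cosine similarity between departments'
# disease mentions (illustrative, not the authors' exact code).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical input: one "document" per department, formed by joining
# all annotated disease mentions from that department's records.
dept_docs = {
    "Medical": "pneumonia sepsis heart failure copd",
    "Surgery": "appendicitis hernia cholecystitis wound infection",
    "Neurology": "stroke seizure epilepsy migraine",
}

names = list(dept_docs)
tfidf = TfidfVectorizer().fit_transform(dept_docs[n] for n in names)
sims = cosine_similarity(tfidf)

# Similarity of each department to the Medical department.
ref = names.index("Medical")
for i, name in enumerate(names):
    if i != ref:
        print(f"{name} vs Medical: {sims[ref, i]:.4f}")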

PLoS ONE ◽  
2021 ◽  
Vol 16 (2) ◽  
pp. e0246310
Author(s):  
Shang Gao ◽  
Olivera Kotevska ◽  
Alexandre Sorokine ◽  
J. Blair Christian

Named entity recognition (NER) is a key component of many scientific literature mining tasks, such as information retrieval, information extraction, and question answering; however, many modern approaches require large amounts of labeled training data to be effective. This severely limits the effectiveness of NER models in applications where expert annotations are difficult and expensive to obtain. In this work, we explore the effectiveness of transfer learning and semi-supervised self-training in improving the performance of NER models in biomedical settings with very limited labeled data (250-2000 labeled samples). We first pre-train a BiLSTM-CRF and a BERT model on a very large general biomedical NER corpus such as MedMentions or Semantic Medline; we then fine-tune the model on a more specific target NER task with very limited training data; finally, we apply semi-supervised self-training using unlabeled data to further boost model performance. We show that in NER tasks focused on common biomedical entity types, such as those in the Unified Medical Language System (UMLS), combining transfer learning with self-training enables an NER model such as a BiLSTM-CRF or BERT to match the performance of the same model trained on 3x-8x the amount of labeled data. We further show that our approach can also boost performance in a low-resource application where entity types are rarer and not specifically covered in UMLS.
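
The pre-train, fine-tune, then self-train recipe described above can be summarized in a short schematic. The model interface below (train, predict_with_confidence) and the confidence threshold are hypothetical placeholders for exposition, not an actual library API.

# Schematic of the transfer-learning + self-training recipe.
# `model.train` and `model.predict_with_confidence` are assumed,
# hypothetical methods; the threshold and round count are arbitrary.
def transfer_and_self_train(model, source_corpus, target_labeled,
                            target_unlabeled, confidence=0.99, rounds=3):
    # 1) Pre-train on a large general biomedical NER corpus
    #    (e.g., MedMentions or Semantic Medline).
    model.train(source_corpus)

    # 2) Fine-tune on the small labeled target set (250-2000 samples).
    model.train(target_labeled)

    # 3) Self-training: pseudo-label unlabeled sentences, keep only
    #    high-confidence predictions, and retrain on the union.
    for _ in range(rounds):
        pseudo = []
        for sentence in target_unlabeled:
            tags, conf = model.predict_with_confidence(sentence)
            if conf >= confidence:
                pseudo.append((sentence, tags))
        model.train(target_labeled + pseudo)
    return model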


2021 ◽  
pp. 1-13
Author(s):  
Xia Li ◽  
Qinghua Wen ◽  
Zengtao Jiao ◽  
Jiangtao Zhang

Abstract The China Conference on Knowledge Graph and Semantic Computing (CCKS) 2020 Evaluation Task 3 presented clinical named entity recognition and event extraction for Chinese electronic medical records. Two annotated datasets and some additional resources for these two subtasks were provided to participants. The evaluation competition attracted 354 teams, 46 of which successfully submitted valid results. Pre-trained language models were widely applied in this evaluation task; data augmentation and external resources also proved helpful.


2020 ◽  
Vol 21 (1) ◽  
Author(s):  
Yuanhe Tian ◽  
Wang Shen ◽  
Yan Song ◽  
Fei Xia ◽  
Min He ◽  
...  

Abstract Background Biomedical named entity recognition (BioNER) is an important task for understanding biomedical texts, and it can be challenging due to the lack of large-scale labeled training data and domain knowledge. To address the challenge, in addition to using powerful encoders (e.g., biLSTM and BioBERT), one possible method is to leverage extra knowledge that is easy to obtain. Previous studies have shown that auto-processed syntactic information can be a useful resource for improving model performance, but their approaches are limited to directly concatenating the embeddings of syntactic information to the input word embeddings. Such syntactic information is therefore leveraged in an inflexible way, where inaccurate information may hurt model performance. Results In this paper, we propose BioKMNER, a BioNER model for biomedical texts that uses key-value memory networks (KVMN) to incorporate auto-processed syntactic information. We evaluate BioKMNER on six English biomedical datasets, where our method with KVMN outperforms the strong baseline, BioBERT, from the previous study on all datasets. Specifically, the F1 scores of our best performing model are 85.29% on BC2GM, 77.83% on JNLPBA, 94.22% on BC5CDR-chemical, 90.08% on NCBI-disease, 89.24% on LINNAEUS, and 76.33% on Species-800, with state-of-the-art performance obtained on four of them (BC2GM, BC5CDR-chemical, NCBI-disease, and Species-800). Conclusion The experimental results on six English benchmark datasets demonstrate that auto-processed syntactic information can be a useful resource for BioNER and that our method with KVMN can appropriately leverage such information to improve model performance.
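
The core of the KVMN idea, attending from a word's encoder representation over syntactic keys and reading out their values, can be sketched in a few lines. This is a generic illustration of key-value memory attention, not the authors' BioKMNER implementation; the shapes and feature choices are assumptions.

# Minimal sketch of one key-value memory attention step for injecting
# auto-processed syntactic context into a word representation.
import torch
import torch.nn.functional as F

def kvmn_attend(hidden, keys, values):
    # hidden: (d,) word representation from the encoder (e.g., BioBERT).
    # keys:   (m, d) embeddings of syntactic features attached to this
    #         word (e.g., POS tags, dependency relations).
    # values: (m, d) embeddings carrying the associated syntactic context.
    # Soft attention lets unreliable syntactic features receive small
    # weights instead of being concatenated in wholesale.
    weights = F.softmax(keys @ hidden, dim=0)   # (m,) attention weights
    memory = weights @ values                   # (d,) weighted readout
    return hidden + memory                      # enriched representation

# Toy usage with random tensors.
d, m = 8, 4
h = torch.randn(d)
K, V = torch.randn(m, d), torch.randn(m, d)
print(kvmn_attend(h, K, V).shape)  # torch.Size([8])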


2019 ◽  
Vol 9 (18) ◽  
pp. 3658 ◽  
Author(s):  
Jianliang Yang ◽  
Yuenan Liu ◽  
Minghui Qian ◽  
Chenghua Guan ◽  
Xiangfei Yuan

Clinical named entity recognition is an essential task for analyzing large-scale electronic medical records efficiently. Traditional rule-based solutions need considerable human effort to build rules and dictionaries; machine learning-based solutions need laborious feature engineering. Recently, deep learning solutions such as Long Short-Term Memory with Conditional Random Fields (LSTM-CRF) have achieved considerable performance on many datasets. In this paper, we developed a multitask attention-based bidirectional LSTM-CRF (Att-biLSTM-CRF) model with pretrained Embeddings from Language Models (ELMo) to achieve better performance. In the multitask system, an auxiliary entity discovery task was designed to enhance the model's perception of unknown entities. Experiments were conducted on the 2010 Informatics for Integrating Biology & the Bedside/Veterans Affairs (i2b2/VA) dataset. Experimental results show that our model outperforms the state-of-the-art solution in both the single-model and ensemble settings. Our work proposes a multitask-based approach to improving recall in clinical named entity recognition.
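
A minimal sketch of the multitask setup, a shared BiLSTM encoder feeding both a full NER tagging head and an auxiliary binary entity discovery head, is shown below. The ELMo embeddings, attention layer, and CRF of the actual model are simplified away, and the layer sizes are arbitrary assumptions.

# Sketch of a shared encoder with two task heads: full NER tagging
# plus binary entity discovery (entity vs. non-entity).
import torch
import torch.nn as nn

class MultitaskNER(nn.Module):
    def __init__(self, vocab_size, embed_dim=100, hidden_dim=128, n_tags=9):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)   # ELMo in the paper
        self.bilstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True,
                              bidirectional=True)
        self.ner_head = nn.Linear(2 * hidden_dim, n_tags)  # CRF omitted here
        self.discovery_head = nn.Linear(2 * hidden_dim, 2) # entity vs. other

    def forward(self, token_ids):
        h, _ = self.bilstm(self.embed(token_ids))
        return self.ner_head(h), self.discovery_head(h)

# Toy forward pass; a joint loss would sum both heads' losses so the
# discovery task nudges the encoder toward recalling unseen entities.
model = MultitaskNER(vocab_size=1000)
ner_logits, disc_logits = model(torch.randint(0, 1000, (2, 12)))
print(ner_logits.shape, disc_logits.shape)  # (2, 12, 9) (2, 12, 2)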


2020 ◽  
Vol 2020 ◽  
pp. 1-8
Author(s):  
Lejun Gong ◽  
Zhifei Zhang ◽  
Shiqi Chen

Background. Clinical named entity recognition is the basic task in mining electronic medical record text, but Chinese electronic medical records pose several challenges arising from their language features: many compound entities, frequently missing sentence components, and unclear entity boundaries. Moreover, corpora of Chinese electronic medical records are difficult to obtain. Methods. To address these characteristics of Chinese electronic medical records, this study proposes a Chinese clinical entity recognition model based on deep learning pretraining. The model uses word embeddings trained on a domain corpus and fine-tunes an entity recognition model pretrained on a related corpus. BiLSTM and Transformer are then used, respectively, as feature extractors to identify four types of clinical entities (diseases, symptoms, drugs, and operations) from the text of Chinese electronic medical records. Results. On the test dataset, the model achieves 75.06% Macro-P, 76.40% Macro-R, and 75.72% Macro-F1. Conclusions. These experiments show that the proposed Chinese clinical entity recognition model based on deep learning pretraining can effectively improve recognition performance.
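
For reference, the reported Macro-P/R/F1 are per-class scores averaged over the four entity types. A toy computation with scikit-learn is sketched below; the labels are invented, and token-level classification is assumed for simplicity (entity-level matching, as typically used for NER, would require span comparison first).

# Toy macro-averaged precision/recall/F1 over the four entity types.
from sklearn.metrics import precision_recall_fscore_support

y_true = ["disease", "symptom", "drug", "operation", "disease", "drug"]
y_pred = ["disease", "drug",    "drug", "operation", "symptom", "drug"]

p, r, f1, _ = precision_recall_fscore_support(
    y_true, y_pred, average="macro", zero_division=0)
print(f"Macro-P={p:.4f}  Macro-R={r:.4f}  Macro-F1={f1:.4f}")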

