The Algorithms for Word Segmentation and Named Entity Recognition of Chinese Medical Records

Author(s):  
Yuan-Nong Ye ◽  
Liu-Feng Zheng ◽  
Meng-Ya Huang ◽  
Tao Liu ◽  
Zhu Zeng
2021 ◽  
pp. 1-13
Author(s):  
Xia Li ◽  
Qinghua Wen ◽  
Zengtao Jiao ◽  
Jiangtao Zhang

Abstract The China Conference on Knowledge Graph and Semantic Computing (CCKS) 2020 Evaluation Task 3 presented clinical named entity recognition and event extraction for Chinese electronic medical records. Two annotated datasets and some additional resources for the two subtasks were provided to participants. The evaluation competition attracted 354 teams, 46 of which successfully submitted valid results. Pre-trained language models were widely applied in this evaluation task; data augmentation and external resources also proved helpful.
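Systems for clinical NER tasks like this one typically emit per-character BIO tags, from which entity spans are then recovered. A minimal sketch of that decoding step (tag names and the example are illustrative, not from the shared task data):

```python
def extract_entities(tokens, tags):
    """Recover (entity_type, text) spans from a BIO tag sequence."""
    entities, start, etype = [], None, None
    for i, tag in enumerate(tags):
        if tag.startswith("B-"):               # a new entity begins
            if start is not None:
                entities.append((etype, "".join(tokens[start:i])))
            start, etype = i, tag[2:]
        elif tag.startswith("I-") and etype == tag[2:]:
            continue                            # current entity continues
        else:                                   # "O" or an inconsistent I- tag
            if start is not None:
                entities.append((etype, "".join(tokens[start:i])))
            start, etype = None, None
    if start is not None:                       # entity running to the end
        entities.append((etype, "".join(tokens[start:])))
    return entities
```

Evaluation then compares the recovered spans against gold spans, so tagging errors at entity boundaries directly cost precision and recall.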


2019 ◽  
Vol 9 (18) ◽  
pp. 3658 ◽  
Author(s):  
Jianliang Yang ◽  
Yuenan Liu ◽  
Minghui Qian ◽  
Chenghua Guan ◽  
Xiangfei Yuan

Clinical named entity recognition is an essential task for analyzing large-scale electronic medical records efficiently. Traditional rule-based solutions require considerable human effort to build rules and dictionaries, and machine learning-based solutions require laborious feature engineering. More recently, deep learning solutions such as Long Short-Term Memory with a Conditional Random Field layer (LSTM-CRF) have achieved considerable performance on many datasets. In this paper, we developed a multitask attention-based bidirectional LSTM-CRF (Att-biLSTM-CRF) model with pretrained Embeddings from Language Models (ELMo) to achieve better performance. In the multitask system, an additional task, named entity discovery, was designed to enhance the model's perception of unknown entities. Experiments were conducted on the 2010 Informatics for Integrating Biology & the Bedside/Veterans Affairs (I2B2/VA) dataset. The experimental results show that our model outperforms the state-of-the-art solution in both single-model and ensemble settings. Our work proposes an approach based on the multitask mechanism to improve recall in the clinical named entity recognition task.
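The CRF layer in an LSTM-CRF picks the globally best tag sequence by combining the network's per-token emission scores with learned tag-transition scores, rather than choosing each tag independently. A pure-Python sketch of the Viterbi decoding step (labels and scores are illustrative):

```python
import math

def viterbi_decode(emissions, transitions, labels):
    """Find the highest-scoring label sequence under a linear-chain CRF.

    emissions:   list of {label: score} dicts, one per token
    transitions: {(prev_label, cur_label): score}
    """
    # First token: emission score only.
    best = {lab: (emissions[0][lab], [lab]) for lab in labels}
    for emit in emissions[1:]:
        nxt = {}
        for cur in labels:
            # Pick the best previous label to transition from.
            score, path = max(
                (best[prev][0] + transitions.get((prev, cur), -math.inf) + emit[cur],
                 best[prev][1])
                for prev in labels
            )
            nxt[cur] = (score, path + [cur])
        best = nxt
    return max(best.values())[1]
```

Transitions absent from the table score negative infinity, which is how a CRF forbids invalid sequences such as an I- tag following O.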


2020 ◽  
Vol 2020 ◽  
pp. 1-8
Author(s):  
Lejun Gong ◽  
Zhifei Zhang ◽  
Shiqi Chen

Background. Clinical named entity recognition is a basic task in mining electronic medical record text. Chinese electronic medical records pose several challenges: the texts contain many compound entities, sentence components are frequently omitted, and entity boundaries are unclear. Moreover, corpora of Chinese electronic medical records are difficult to obtain. Methods. To address these characteristics, this study proposed a Chinese clinical entity recognition model based on deep learning pretraining. The model used word embeddings learned from a domain corpus and fine-tuned an entity recognition model pretrained on a related corpus. BiLSTM and Transformer were then used, respectively, as feature extractors to identify four types of clinical entities (diseases, symptoms, drugs, and operations) in the text of Chinese electronic medical records. Results. The model achieved 75.06% Macro-P, 76.40% Macro-R, and 75.72% Macro-F1 on the test dataset. Conclusions. These experiments show that the proposed Chinese clinical entity recognition model based on deep learning pretraining effectively improves recognition performance.
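The Macro-P/R/F1 scores reported here average per-class metrics so that each of the four entity types counts equally, regardless of frequency. A minimal sketch of one common macro-averaging convention (the counts are illustrative; the paper may combine per-class scores slightly differently):

```python
def macro_prf(counts):
    """counts: {entity_type: (tp, fp, fn)} -> (macro_P, macro_R, macro_F1).

    Each class contributes equally to the average, so rare entity
    types (e.g. operations) weigh as much as frequent ones.
    """
    ps, rs, fs = [], [], []
    for tp, fp, fn in counts.values():
        p = tp / (tp + fp) if tp + fp else 0.0
        r = tp / (tp + fn) if tp + fn else 0.0
        f = 2 * p * r / (p + r) if p + r else 0.0
        ps.append(p); rs.append(r); fs.append(f)
    n = len(counts)
    return sum(ps) / n, sum(rs) / n, sum(fs) / n
```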


2021 ◽  
Author(s):  
Jong-Kang Lee ◽  
Jue-Ni Huang ◽  
Kun-Ju Lin ◽  
Richard Tzong-Han Tsai

BACKGROUND Electronic records provide rich clinical information for biomedical text mining, but a system developed on one hospital department may not generalize to other departments. Here, we use hospital medical records as a research data source and explore the heterogeneity across hospital departments. OBJECTIVE We use MIMIC-III hospital medical records as the research data source and collaborate with medical experts to annotate the data, with 328 records included in the analyses. Disease named entity recognition (NER), which helps medical experts consolidate diagnoses, is undertaken as a case study. METHODS To compare the heterogeneity of medical records across departments, we collect text from multiple departments and compute similarity metrics between them, using TF-IDF cosine similarity over the named entities as our metric. We apply transfer learning to NER on different departments' records and test the correlation between performance and the similarity metrics. We use three pretrained models on the disease NER task to validate the consistency of the results. RESULTS The disease NER dataset we release consists of 328 medical records from MIMIC-III, with 95,629 sentences and 8,884 disease mentions in total. The inter-annotator agreement (Cohen's kappa) is 0.86. The similarity metrics confirm that medical records from different departments are heterogeneous, with similarity to the Medical department ranging from 0.1004 to 0.3541. In the transfer learning task using the Medical department as the training set, average F1 scores across the three pretrained models range from 0.847 to 0.863. F1 scores correlate with the similarity metrics with a Spearman's coefficient of 0.4285. CONCLUSIONS We propose a disease NER dataset based on medical records from MIMIC-III and demonstrate the effectiveness of transfer learning using BERT. The similarity metrics reveal noticeable heterogeneity between department records.
The deep learning-based transfer learning method generalizes well across departments and achieves decent NER performance, easing the concern that training material from one hospital department might compromise model performance when applied to another. However, model performance does not correlate strongly with department similarity.
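A minimal pure-Python sketch of the TF-IDF cosine similarity metric used here to compare department records (the paper's exact term-weighting scheme is an assumption; a smoothed idf over entity mentions is shown):

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """docs: list of token lists -> list of {term: tf-idf weight} dicts."""
    n = len(docs)
    df = Counter(t for doc in docs for t in set(doc))
    # Smoothed idf so terms appearing in every document keep a small weight.
    idf = {t: math.log(n / df[t]) + 1.0 for t in df}
    return [{t: c * idf[t] for t, c in Counter(doc).items()} for doc in docs]

def cosine(u, v):
    """Cosine similarity between two sparse {term: weight} vectors."""
    dot = sum(u[t] * v.get(t, 0.0) for t in u)
    norm = (math.sqrt(sum(x * x for x in u.values()))
            * math.sqrt(sum(x * x for x in v.values())))
    return dot / norm if norm else 0.0
```

Treating each department's pooled entity mentions as one "document" yields a pairwise department-similarity matrix of the kind reported above.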


2005 ◽  
Vol 31 (4) ◽  
pp. 531-574 ◽  
Author(s):  
Jianfeng Gao ◽  
Mu Li ◽  
Chang-Ning Huang ◽  
Andi Wu

This article presents a pragmatic approach to Chinese word segmentation. It differs from most previous approaches mainly in three respects. First, while theoretical linguists have defined Chinese words using various linguistic criteria, Chinese words in this study are defined pragmatically as segmentation units whose definition depends on how they are used and processed in realistic computer applications. Second, we propose a pragmatic mathematical framework in which segmenting known words and detecting unknown words of different types (i.e., morphologically derived words, factoids, named entities, and other unlisted words) can be performed simultaneously in a unified way. These tasks are usually conducted separately in other systems. Finally, we do not assume the existence of a universal word segmentation standard that is application-independent. Instead, we argue for the necessity of multiple segmentation standards due to the pragmatic fact that different natural language processing applications might require different granularities of Chinese words. These pragmatic approaches have been implemented in an adaptive Chinese word segmenter, called MSRSeg, which will be described in detail. It consists of two components: (1) a generic segmenter that is based on the framework of linear mixture models and provides a unified approach to the five fundamental features of word-level Chinese language processing: lexicon word processing, morphological analysis, factoid detection, named entity recognition, and new word identification; and (2) a set of output adaptors for adapting the output of (1) to different application-specific standards. Evaluation on five test sets with different standards shows that the adaptive system achieves state-of-the-art performance on all the test sets.
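The article's linear mixture model is too involved for a short sketch, but greedy forward maximum matching, a classic dictionary-based baseline (not the authors' method), illustrates the basic idea of segmenting text into lexicon units; the sample lexicon is illustrative:

```python
def forward_max_match(text, lexicon, max_len=4):
    """Greedy left-to-right segmentation: at each position take the
    longest dictionary word that matches; fall back to one character."""
    words, i = [], 0
    while i < len(text):
        for j in range(min(max_len, len(text) - i), 0, -1):
            if text[i:i + j] in lexicon or j == 1:
                words.append(text[i:i + j])
                i += j
                break
    return words
```

Such greedy matching handles only listed lexicon words, which is precisely the gap the article's unified framework addresses by detecting morphologically derived words, factoids, and named entities alongside lexicon lookup.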

