Span Classification Based Model for Clinical Concept Extraction

Author(s):  
Yongtao Tang ◽  
Jie Yu ◽  
Shasha Li ◽  
Bin Ji ◽  
Yusong Tan ◽  
...  
2019 ◽  
Author(s):  
Sarah Wiegreffe ◽  
Edward Choi ◽  
Sherry Yan ◽  
Jimeng Sun ◽  
Jacob Eisenstein

2012 ◽  
Vol 45 (1) ◽  
pp. 129-140 ◽  
Author(s):  
Siddhartha Jonnalagadda ◽  
Trevor Cohen ◽  
Stephen Wu ◽  
Graciela Gonzalez

2020 ◽  
Author(s):  
Yongtao Tang ◽  
Shasha Li ◽  
Bin Ji ◽  
Jie Yu ◽  
Yusong Tan ◽  
...  

Abstract Background Recently, how to structure electronic medical records (EMRs) has attracted considerable attention from researchers. Extracting clinical concepts from EMRs is a critical part of EMR structuring, and the performance of clinical concept extraction directly affects the performance of the downstream tasks that depend on it. We propose a new modeling method based on candidate-window classification, which differs from mainstream sequence-labeling models: it improves the performance of clinical concept extraction under strict evaluation standards by considering the overall semantics of a token sequence instead of the semantics of each token. We call this the slide window model. Method In this paper, we comprehensively study the performance of the slide window model on clinical concept extraction tasks. We model clinical concept extraction as the task of classifying each candidate window extracted by a sliding window. The proposed model consists of four parts. First, a pre-trained language model generates context-sensitive token representations. Second, a convolutional neural network (CNN) generates the representation vectors of all candidate windows in a sentence. Third, a softmax classifier assigns every candidate window a concept-type probability distribution. Finally, a knapsack algorithm is applied as a post-processing step to select disjoint clinical concepts that maximize the sum of concept scores. Results Experiments show that the slide window model achieves the best micro-average F1 score (81.22%) on the corpus of the 2012 i2b2 NLP challenge and an 89.25% F1 score on the 2010 i2b2 NLP challenge under the strict standard. Furthermore, our approach consistently outperforms a BiLSTM-CRF model and a softmax classifier built on the same pre-trained language model.
Conclusions The slide window model offers a new modeling method for clinical concept extraction. It casts clinical concept extraction as the problem of classifying candidate windows and extracts clinical concepts by considering the semantics of each entire candidate window. Experiments show that considering the overall semantics of a candidate window improves the performance of clinical concept extraction under strict standards.
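The window-enumeration and post-processing steps the abstract describes can be sketched as follows. This is a minimal illustration, not the authors' implementation: the maximum window width, the span format, and the scores are assumptions, and the "knapsack" step is realized here as a weighted-interval-scheduling dynamic program, which is one common way to select disjoint spans with maximum total score.

```python
from bisect import bisect_right

def enumerate_windows(tokens, max_width):
    """All candidate spans (start, end), end exclusive, of width <= max_width."""
    return [(i, j) for i in range(len(tokens))
            for j in range(i + 1, min(i + max_width, len(tokens)) + 1)]

def select_disjoint(scored):
    """Select non-overlapping spans maximizing the total score.

    `scored` is a list of ((start, end), score) pairs with positive scores;
    classic weighted interval scheduling by dynamic programming.
    """
    spans = sorted(scored, key=lambda x: x[0][1])   # order by span end
    ends = [s[0][1] for s in spans]
    n = len(spans)
    best = [0.0] * (n + 1)        # best[k]: best total using the first k spans
    take = [False] * n
    for k in range(1, n + 1):
        (start, end), score = spans[k - 1]
        # last earlier span that ends at or before this span's start
        p = bisect_right(ends, start, 0, k - 1)
        if best[p] + score > best[k - 1]:
            best[k] = best[p] + score
            take[k - 1] = True
        else:
            best[k] = best[k - 1]
    # backtrack to recover the chosen spans
    chosen, k = [], n
    while k > 0:
        if take[k - 1]:
            (start, _), _ = spans[k - 1]
            chosen.append(spans[k - 1][0])
            k = bisect_right(ends, start, 0, k - 1)
        else:
            k -= 1
    return sorted(chosen)

# Toy example: overlapping candidates over 6 tokens; the selection keeps
# the high-scoring "chest pain" span (2, 4) and drops its overlapping rivals.
candidates = [((2, 4), 0.9), ((3, 4), 0.5), ((5, 6), 0.8), ((2, 3), 0.4)]
print(select_disjoint(candidates))   # [(2, 4), (5, 6)]
```

In the paper's pipeline the scores would come from the softmax over window representations; here they are hard-coded to keep the selection step self-contained.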


2017 ◽  
Vol 106 ◽  
pp. 25-31 ◽  
Author(s):  
Mahnoosh Kholghi ◽  
Laurianne Sitbon ◽  
Guido Zuccon ◽  
Anthony Nguyen

2013 ◽  
Vol 4 (1) ◽  
pp. 3 ◽  
Author(s):  
Kavishwar B Wagholikar ◽  
Manabu Torii ◽  
Siddhartha R Jonnalagadda ◽  
Hongfang Liu

2020 ◽  
Vol 109 ◽  
pp. 103526 ◽  
Author(s):  
Sunyang Fu ◽  
David Chen ◽  
Huan He ◽  
Sijia Liu ◽  
Sungrim Moon ◽  
...  

2019 ◽  
Vol 26 (11) ◽  
pp. 1297-1304 ◽  
Author(s):  
Yuqi Si ◽  
Jingqi Wang ◽  
Hua Xu ◽  
Kirk Roberts

Abstract Objective Neural network–based representations (“embeddings”) have dramatically advanced natural language processing (NLP) tasks, including clinical NLP tasks such as concept extraction. Recently, however, more advanced embedding methods and representations (eg, ELMo, BERT) have further pushed the state of the art in NLP, yet there are no common best practices for how to integrate these representations into clinical tasks. The purpose of this study, then, is to explore the space of possible options in utilizing these new models for clinical concept extraction, including comparing these to traditional word embedding methods (word2vec, GloVe, fastText). Materials and Methods Both off-the-shelf, open-domain embeddings and pretrained clinical embeddings from MIMIC-III (Medical Information Mart for Intensive Care III) are evaluated. We explore a battery of embedding methods consisting of traditional word embeddings and contextual embeddings and compare these on 4 concept extraction corpora: i2b2 2010, i2b2 2012, SemEval 2014, and SemEval 2015. We also analyze the impact of the pretraining time of a large language model like ELMo or BERT on the extraction performance. Last, we present an intuitive way to understand the semantic information encoded by contextual embeddings. Results Contextual embeddings pretrained on a large clinical corpus achieve new state-of-the-art performance across all concept extraction tasks. The best-performing model outperforms all state-of-the-art methods with respective F1-measures of 90.25, 93.18 (partial), 80.74, and 81.65. Conclusions We demonstrate the potential of contextual embeddings through the state-of-the-art performance these methods achieve on clinical concept extraction. Additionally, we demonstrate that contextual embeddings encode valuable semantic information not accounted for in traditional word representations.
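The contrast the abstract draws between traditional and contextual embeddings can be shown with a toy sketch: a static lookup table (word2vec/GloVe-style) returns the same vector for a word in every sentence, while a contextual encoder produces a vector that depends on the surrounding words. The "contextual" function below is a deliberately simple stand-in (it just reads the left neighbor), not ELMo or BERT; all names and vectors are illustrative assumptions.

```python
# Static embeddings: one fixed vector per word type, context ignored.
STATIC = {"cold": [0.1, 0.9], "discharge": [0.7, 0.2]}

def static_embed(word, sentence):
    """Same vector for every occurrence of `word`, whatever the sentence."""
    return STATIC[word]

def toy_contextual_embed(word, sentence):
    """Stand-in for a contextual encoder: the vector depends on the
    surrounding tokens, so the same word gets different vectors in
    different sentences. Real encoders use deep networks, not this trick."""
    i = sentence.index(word)
    left = len(sentence[i - 1]) if i > 0 else 0
    return [len(word) / 10.0, left / 10.0]

s1 = ["patient", "reports", "a", "cold"]   # "cold" as an illness
s2 = ["the", "room", "felt", "cold"]       # "cold" as a temperature

assert static_embed("cold", s1) == static_embed("cold", s2)          # no disambiguation
assert toy_contextual_embed("cold", s1) != toy_contextual_embed("cold", s2)
```

This ambiguity is exactly the situation in clinical text (e.g. "cold" the symptom vs "cold" the sensation, "discharge" from hospital vs wound discharge) where the study finds contextual embeddings pay off.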



