Faculty Opinions recommendation of Detecting Evidence of Intra-abdominal Surgical Site Infections from Radiology Reports Using Natural Language Processing.

Author(s):  
Martin Krallinger
CHEST Journal ◽  
2021 ◽  
Author(s):  
Chengyi Zheng ◽  
Brian Z. Huang ◽  
Andranik A. Agazaryan ◽  
Beth Creekmur ◽  
Thearis Osuj ◽  
...  

2021 ◽  
Vol 16 (1) ◽  
Author(s):  
Nithin Kolanu ◽  
A Shane Brown ◽  
Amanda Beech ◽  
Jacqueline R. Center ◽  
Christopher P. White

2020 ◽  
Vol 33 (5) ◽  
pp. 1194-1201
Author(s):  
Andrew L. Callen ◽  
Sara M. Dupont ◽  
Adi Price ◽  
Ben Laguna ◽  
David McCoy ◽  
...  

2008 ◽  
Vol 5 (3) ◽  
pp. 197-204 ◽  
Author(s):  
Pragya A. Dang ◽  
Mannudeep K. Kalra ◽  
Michael A. Blake ◽  
Thomas J. Schultz ◽  
Markus Stout ◽  
...  

2021 ◽  
Author(s):  
Babak Afshin-Pour ◽  
Michael Qiu ◽  
Shahrzad Hosseini ◽  
Molly Stewart ◽  
Jan Horsky ◽  
...  

ABSTRACT Despite the high morbidity and mortality associated with Acute Respiratory Distress Syndrome (ARDS), discrimination of ARDS from other causes of acute respiratory failure remains challenging, particularly in the first 24 hours of mechanical ventilation. Delayed ARDS identification prevents lung-protective strategies from being initiated and delays clinical trial enrollment and quality improvement interventions. Medical records from 1,263 ICU-admitted, mechanically ventilated patients at Northwell Health were retrospectively examined by a clinical team who assigned each patient a diagnosis of "ARDS" or "non-ARDS" (e.g., pulmonary edema). We then applied an iterative pre-processing and machine learning framework to construct a model that would discriminate ARDS versus non-ARDS, and examined the features informative in the patient classification process. Data made available to the model included patient demographics, laboratory test results from before the initiation of mechanical ventilation, and features extracted by natural language processing of radiology reports. The resulting model discriminated well between ARDS and non-ARDS causes of respiratory failure (AUC=0.85, 89% precision at 20% recall) and highlighted features unique to ARDS patients and to the subset of ARDS patients who would not recover. Importantly, models built using both clinical notes and laboratory test results outperformed models built using either data source alone, akin to the retrospective clinician-based diagnostic process. This work demonstrates the feasibility of using readily available EHR data to discriminate ARDS patients prospectively in a real-world setting at a critical time in their care and highlights novel patient characteristics indicative of ARDS.
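The abstract's key finding is that combining text-derived features from radiology reports with structured laboratory data outperforms either source alone. A minimal sketch of that combined-features pattern is shown below, assuming scikit-learn; the report snippets, lab values, and labels are synthetic placeholders, not the study's actual variables or model.

```python
# Hypothetical sketch: bag-of-words features from radiology reports
# concatenated with structured lab values, feeding one classifier.
import numpy as np
from scipy.sparse import hstack, csr_matrix
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Synthetic radiology report snippets and labs (e.g., pH, PaO2/FiO2).
reports = [
    "bilateral diffuse opacities consistent with ARDS",
    "cardiomegaly with interstitial edema, likely cardiogenic",
    "patchy bilateral infiltrates, no pleural effusion",
    "pulmonary vascular congestion and bilateral effusions",
]
labs = np.array([[7.28, 120.0], [7.35, 300.0], [7.30, 150.0], [7.38, 280.0]])
labels = np.array([1, 0, 1, 0])  # 1 = ARDS, 0 = non-ARDS

# Text features from NLP of reports, stacked with structured lab data.
text_features = TfidfVectorizer().fit_transform(reports)
X = hstack([text_features, csr_matrix(labs)])

clf = LogisticRegression(max_iter=1000).fit(X, labels)
probs = clf.predict_proba(X)[:, 1]  # per-patient ARDS probability
```

In a real pipeline the two feature blocks would be scaled separately and evaluated on held-out data; this sketch only illustrates the joint text-plus-labs design the abstract credits for the improved discrimination.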


2020 ◽  
Author(s):  
Amy Y X Yu ◽  
Zhongyu A Liu ◽  
Chloe Pou-Prom ◽  
Kaitlyn Lopes ◽  
Moira K Kapral ◽  
...  

BACKGROUND Diagnostic neurovascular imaging data are important in stroke research, but obtaining these data typically requires laborious manual chart reviews. OBJECTIVE We aimed to determine the accuracy of a natural language processing (NLP) approach to extract information on the presence and location of vascular occlusions as well as other stroke-related attributes based on free-text reports. METHODS From the full reports of 1320 consecutive computed tomography (CT), CT angiography, and CT perfusion scans of the head and neck performed at a tertiary stroke center between October 2017 and January 2019, we manually extracted data on the presence of proximal large vessel occlusion (primary outcome), as well as distal vessel occlusion, ischemia, hemorrhage, Alberta stroke program early CT score (ASPECTS), and collateral status (secondary outcomes). Reports were randomly split into training (n=921) and validation (n=399) sets, and attributes were extracted using rule-based NLP. We reported the sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV), and the overall accuracy of the NLP approach relative to the manually extracted data. RESULTS The overall prevalence of large vessel occlusion was 12.2%. In the training sample, the NLP approach identified this attribute with an overall accuracy of 97.3% (95.5% sensitivity, 98.1% specificity, 84.1% PPV, and 99.4% NPV). In the validation set, the overall accuracy was 95.2% (90.0% sensitivity, 97.4% specificity, 76.3% PPV, and 98.5% NPV). The accuracy of identifying distal or basilar occlusion as well as hemorrhage was also high, but there were limitations in identifying cerebral ischemia, ASPECTS, and collateral status. CONCLUSIONS NLP may improve the efficiency of large-scale imaging data collection for stroke surveillance and research.
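The validation metrics reported above (sensitivity, specificity, PPV, NPV, and overall accuracy) all derive from a 2x2 confusion matrix comparing NLP output against the manually extracted gold standard. A small illustrative computation, with made-up counts rather than the study's data:

```python
# Compute diagnostic accuracy metrics from binary predictions
# against a manually extracted reference standard.
def diagnostic_metrics(y_true, y_pred):
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    return {
        "sensitivity": tp / (tp + fn),   # recall among true positives
        "specificity": tn / (tn + fp),   # recall among true negatives
        "ppv": tp / (tp + fp),           # positive predictive value
        "npv": tn / (tn + fn),           # negative predictive value
        "accuracy": (tp + tn) / len(y_true),
    }

# Illustrative labels: 1 = large vessel occlusion present, 0 = absent.
y_true = [1, 1, 1, 0, 0, 0, 0, 0]
y_pred = [1, 1, 0, 0, 0, 0, 1, 0]
m = diagnostic_metrics(y_true, y_pred)
```

Note that with a low-prevalence outcome like the 12.2% reported here, PPV is pulled down even when sensitivity and specificity are high, which is consistent with the pattern in the results (PPV of 76-84% against specificity above 97%).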


2021 ◽  
Author(s):  
Jacob Johnson ◽  
Kaneel Senevirathne ◽  
Lawrence Ngo

Here, we developed and validated a highly generalizable natural language processing algorithm based on deep learning. The algorithm was trained and tested on a highly diverse dataset drawn from over 2,000 hospital sites and 500 radiologists. It achieved an AUROC of 0.96 for the presence or absence of liver lesions, with a specificity of 0.99 and a sensitivity of 0.6 at the chosen operating point.
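An operating point like the one above, very high specificity paired with moderate sensitivity, comes from where the decision threshold is placed on the classifier's scores. A sketch with synthetic scores (not the study's model or data) showing the trade-off:

```python
# Demonstrate the sensitivity/specificity trade-off of thresholding
# a classifier's continuous scores. Scores here are synthetic.
import numpy as np

def sensitivity_specificity(y_true, scores, threshold):
    preds = scores >= threshold
    pos = y_true == 1
    neg = y_true == 0
    sens = (preds & pos).sum() / pos.sum()
    spec = (~preds & neg).sum() / neg.sum()
    return sens, spec

rng = np.random.default_rng(0)
y = np.array([1] * 50 + [0] * 50)
# Positives score higher on average, with overlap between classes.
scores = np.concatenate([rng.normal(0.7, 0.2, 50), rng.normal(0.3, 0.2, 50)])

# Raising the threshold trades sensitivity for specificity.
lo = sensitivity_specificity(y, scores, 0.5)   # balanced threshold
hi = sensitivity_specificity(y, scores, 0.9)   # strict threshold
```

A strict threshold (like the 0.99-specificity point reported) suits screening workflows where false positives are costly, at the price of missing some true lesions.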

