Predicting the Mortality and Readmission of In-Hospital Cardiac Arrest Patients With Electronic Health Records: A Machine Learning Approach (Preprint)

2021
Author(s):
Chien-Yu Chi
Shuang Ao
Adrian Winkler
Kuan-Chun Fu
Jie Xu
...  

BACKGROUND In-hospital cardiac arrest (IHCA) is associated with high mortality and high health care costs in the recovery phase. Predicting adverse outcome events, including readmission, improves the chance for appropriate interventions and reduces health care costs. However, studies on the early prediction of adverse events in IHCA survivors are rare; we therefore used a deep learning model for prediction in this study.

OBJECTIVE This study aimed to demonstrate that, with the proper data set and learning strategies, the 30-day mortality and readmission of IHCA survivors can be predicted from their historical claims.

METHODS National Health Insurance Research Database claims data, covering 168,693 patients who had experienced IHCA at least once and 1,569,478 clinical records, were used to generate a data set for outcome prediction. We predicted the 30-day mortality/readmission after each record (ALL-mortality/ALL-readmission) and the 30-day mortality/readmission after IHCA (cardiac arrest [CA]-mortality/CA-readmission). We developed a hierarchical vectorizer (HVec) deep learning model to extract patient information and predict mortality and readmission. To embed the textual medical concepts of the clinical records into the model, we used Text2Node to compute distributed representations of all medical concept codes as 128-dimensional vectors. Along with each patient's demographic information, the HVec model generated embedding vectors that hierarchically describe health status at the record level and the patient level. Multitask learning involving two main tasks and auxiliary tasks was proposed. As CA-mortality and CA-readmission events were rare, person-level upsampling of patients with CA and weighting of CA records were used to improve prediction performance.

RESULTS With multitask learning in the model training process, we achieved an area under the receiver operating characteristic curve (AUROC) of 0.752 for CA-mortality, 0.711 for ALL-mortality, 0.852 for CA-readmission, and 0.889 for ALL-readmission. The AUROC improved to 0.808 for CA-mortality and 0.862 for CA-readmission after the extreme class imbalance of CA-mortality/CA-readmission was addressed by upsampling and weighting.

CONCLUSIONS This study demonstrated the potential of machine learning to predict future outcomes for IHCA survivors. The results showed that the proposed approach can effectively alleviate data imbalance and train a better model for outcome prediction.
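A minimal sketch (assuming PyTorch) of the two imbalance remedies named above, person-level upsampling of cardiac-arrest patients and extra weight on cardiac-arrest records; the function names, upsampling factor, and record weight are illustrative placeholders, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def upsample_ca_patients(num_patients, is_ca_patient, factor=3):
    """Return training indices in which cardiac-arrest patients appear `factor` times."""
    idx = torch.arange(num_patients)
    ca_idx = idx[is_ca_patient]              # boolean mask selecting CA patients
    return torch.cat([idx] + [ca_idx] * (factor - 1))

def weighted_record_loss(logits, labels, is_ca_record, ca_weight=5.0):
    """Binary cross-entropy in which CA records carry extra weight in the loss."""
    weights = torch.where(is_ca_record,
                          torch.full_like(labels, ca_weight),
                          torch.ones_like(labels))
    return F.binary_cross_entropy_with_logits(logits, labels, weight=weights)
```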

Cancers
2021
Vol 14 (1)
pp. 12
Author(s):
Jose M. Castillo T.
Muhammad Arif
Martijn P. A. Starmans
Wiro J. Niessen
Chris H. Bangma
...  

The computer-aided analysis of prostate multiparametric MRI (mpMRI) could improve the detection of significant prostate cancer (PCa). Various deep-learning- and radiomics-based methods for significant-PCa segmentation or classification have been reported in the literature. To assess how well the performance of these methods generalizes, using various external data sets is crucial. While deep-learning and radiomics approaches have been compared on the same data set from one center, a comparison of the performance of both approaches on data sets from different centers and different scanners is lacking. The goal of this study was to compare the performance of a deep-learning model with that of a radiomics model for significant-PCa diagnosis across patient cohorts. We included data from two consecutive patient cohorts from our own center (n = 371 patients) and two external sets, of which one was a publicly available patient cohort (n = 195 patients) and the other contained data from patients from two hospitals (n = 79 patients). For all patients, mpMRI scans, radiologist tumor delineations, and pathology reports were collected. During training, one of our patient cohorts (n = 271 patients) was used for both the deep-learning and radiomics model development, and the three remaining cohorts (n = 374 patients) were kept as unseen test sets. The performance of the models was assessed in terms of the area under the receiver-operating-characteristic curve (AUC). Whereas internal cross-validation showed a higher AUC for the deep-learning approach, the radiomics model obtained AUCs of 0.88, 0.91, and 0.65 on the independent test sets, compared to AUCs of 0.70, 0.73, and 0.44 for the deep-learning model. Our radiomics model, which was based on delineated regions, was thus a more accurate tool for significant-PCa classification in the three unseen test sets than a fully automated deep-learning model.
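The evaluation protocol above, train on one cohort and report the AUC of each model on the unseen external cohorts, can be sketched as follows with scikit-learn; the cohort dictionary and model objects are hypothetical placeholders.

```python
from sklearn.metrics import roc_auc_score

def external_aucs(model, external_sets):
    """external_sets: dict mapping cohort name -> (features X, labels y)."""
    return {name: roc_auc_score(y, model.predict_proba(X)[:, 1])
            for name, (X, y) in external_sets.items()}

# Usage (hypothetical objects): compare the two trained models on the same cohorts.
# external_aucs(radiomics_model, cohorts) vs external_aucs(deep_learning_model, cohorts)
```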


2020
Vol 39 (10)
pp. 734-741
Author(s):
Sébastien Guillon
Frédéric Joncour
Pierre-Emmanuel Barrallon
Laurent Castanié

We propose new metrics to measure the performance of a deep learning model applied to seismic interpretation tasks such as fault and horizon extraction. Faults and horizons are thin geologic boundaries (1 pixel thick on the image) for which a small prediction error could lead to inappropriately large variations in common metrics (precision, recall, and intersection over union). Through two examples, we show how classical metrics could fail to indicate the true quality of fault or horizon extraction. Measuring the accuracy of reconstruction of thin objects or boundaries requires introducing a tolerance distance between ground truth and prediction images to manage the uncertainties inherent in their delineation. We therefore adapt our metrics by introducing a tolerance function and illustrate their ability to manage uncertainties in seismic interpretation. We compare classical and new metrics through different examples and demonstrate the robustness of our metrics. Finally, we show on a 3D West African data set how our metrics are used to tune an optimal deep learning model.
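A tolerance distance between ground truth and prediction can be folded into precision and recall roughly as follows; this is a minimal sketch using a Euclidean distance transform, and the paper's actual tolerance function may differ.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def tolerant_precision_recall(pred, gt, tol=2.0):
    """pred, gt: binary 2D masks of the extracted fault or horizon (1 pixel thick)."""
    # Distance from every pixel to the nearest ground-truth / predicted pixel.
    dist_to_gt = distance_transform_edt(~gt.astype(bool))
    dist_to_pred = distance_transform_edt(~pred.astype(bool))
    tp_pred = np.logical_and(pred, dist_to_gt <= tol).sum()   # predicted pixels close to GT
    tp_gt = np.logical_and(gt, dist_to_pred <= tol).sum()     # GT pixels that were recovered
    precision = tp_pred / max(pred.sum(), 1)
    recall = tp_gt / max(gt.sum(), 1)
    return precision, recall
```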


Cancers
2019
Vol 11 (10)
pp. 1579
Author(s):
Muyi Sun
Wei Zhou
Xingqun Qi
Guanhong Zhang
Leonard Girnita
...  

Uveal melanoma is the most common primary intraocular malignancy in adults; nearly half of all patients eventually develop metastases, which are invariably fatal. Manual assessment of the level of expression of the tumor suppressor BRCA1-associated protein 1 (BAP1) in tumor cell nuclei can identify patients at high risk of developing metastases, but may suffer from poor reproducibility. In this study, we verified whether artificial intelligence could predict manual assessments of BAP1 expression in 47 enucleated eyes with uveal melanoma, collected from one European and one American referral center. Digitally scanned pathology slides were divided into 8176 patches, each with a size of 256 × 256 pixels. These were in turn divided into a training cohort of 6800 patches and a validation cohort of 1376 patches. A densely connected classification network based on deep learning was then applied to each patch. This achieved a sensitivity of 97.1%, a specificity of 98.1%, an overall diagnostic accuracy of 97.1%, and an F1-score of 97.8% for the prediction of BAP1 expression in individual high-resolution patches, with slightly lower performance at lower resolutions. The area under the receiver operating characteristic (ROC) curve of the deep learning model averaged 0.99. At the full-tumor level, our network classified all 47 tumors identically to an ophthalmic pathologist. We conclude that this deep learning model provides an accurate and reproducible method for the prediction of BAP1 expression in uveal melanoma.
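The patch-to-tumor step implied above, per-patch probabilities aggregated into a single call per tumor and scored against the pathologist, might look roughly like this; the majority-vote rule and helper names are assumptions for illustration, not the authors' exact aggregation.

```python
import numpy as np

def tumor_level_call(patch_probs, threshold=0.5):
    """patch_probs: 1D array of per-patch BAP1-positive probabilities for one tumor."""
    patch_calls = np.asarray(patch_probs) >= threshold
    return patch_calls.mean() >= 0.5          # positive if most patches are positive

def sensitivity_specificity(y_true, y_pred):
    """y_true, y_pred: tumor-level boolean labels and calls."""
    y_true, y_pred = np.asarray(y_true, bool), np.asarray(y_pred, bool)
    tp = np.sum(y_true & y_pred); tn = np.sum(~y_true & ~y_pred)
    fn = np.sum(y_true & ~y_pred); fp = np.sum(~y_true & y_pred)
    return tp / (tp + fn), tn / (tn + fp)
```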


2020
Author(s):
Sebastian Bomberg
Neha Goel

The presented work focuses on disaster risk management for cities that are prone to natural hazards. Based on aerial imagery captured by drones over regions of Caribbean islands, we show how to process the images and automatically identify the roof material of individual structures using a deep learning model. Deep learning refers to a machine learning technique using deep artificial neural networks; unlike other techniques, it does not necessarily require feature engineering and can process raw data directly. The outcome of this assessment can be used for steering risk mitigation measures, creating hazard maps, or advising municipal bodies and aid organizations on investing their resources in rebuilding reinforcements. The data at hand consist of images in BigTIFF format and GeoJSON files including the building footprint, a unique building ID, and roof material labels. We demonstrate how to use MATLAB and its toolboxes for processing large image files that do not fit in computer memory. On this basis, we train a deep learning model to classify the roof material present in the images by subjecting a pretrained ResNet-18 neural network to transfer learning. Training is further accelerated by means of GPU computing. This baseline model achieves an accuracy of 74% on a validation data set; further tuning of hyperparameters is expected to improve accuracy significantly.
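The transfer-learning step described above follows a standard pattern: load a pretrained ResNet-18, replace its final layer with one sized to the roof-material classes, and fine-tune. The sketch below shows that pattern in Python with torchvision rather than MATLAB; the class count and the frozen backbone are illustrative assumptions.

```python
import torch.nn as nn
from torchvision import models

num_roof_materials = 5  # placeholder: depends on the label set in the GeoJSON files

# Load an ImageNet-pretrained ResNet-18 and replace its classification head.
model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, num_roof_materials)

# Optionally freeze the pretrained backbone and train only the new head.
for name, param in model.named_parameters():
    if not name.startswith("fc."):
        param.requires_grad = False
```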


2020
Vol 10 (1)
Author(s):
Jonathan Stubblefield
Mitchell Hervert
Jason L. Causey
Jake A. Qualls
Wei Dong
...  

One of the challenges in the urgent evaluation of patients with acute respiratory distress syndrome (ARDS) in the emergency room (ER) is distinguishing between cardiac and infectious etiologies for their pulmonary findings. We conducted a retrospective study with data collected from 171 ER patients. Classification of ER patients into cardiac and infectious causes was evaluated using clinical data and chest X-ray images. We show that a deep-learning model trained on an external image data set can be used to extract image features and improve the classification accuracy of a data set that does not contain enough image data to train a deep-learning model. An analysis of clinical feature importance was performed to identify the most important clinical features for ER patient classification. The current model is publicly available with a web interface at http://nbttranslationalresearch.org/.
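A minimal sketch of the idea described above, using a network trained on external images purely as a fixed feature extractor and combining its features with clinical variables in a conventional classifier; the backbone, classifier, and function names are assumptions, not the authors' exact pipeline.

```python
import numpy as np
import torch
from torchvision import models
from sklearn.ensemble import RandomForestClassifier

# Pretrained backbone used only to extract features; the classification head is dropped.
backbone = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
backbone.fc = torch.nn.Identity()
backbone.eval()

@torch.no_grad()
def image_features(batch):            # batch: (N, 3, 224, 224) float tensor of X-rays
    return backbone(batch).numpy()    # (N, 512) feature vectors

def fit_classifier(img_batch, clinical, labels):
    """Concatenate image features with clinical variables and fit a classical model."""
    X = np.concatenate([image_features(img_batch), clinical], axis=1)
    return RandomForestClassifier(n_estimators=200).fit(X, labels)
```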


Sensors
2021
Vol 21 (4)
pp. 1227
Author(s):
Seung-Taek Oh
Deog-Hyeon Ga
Jae-Hyun Lim

Ultraviolet (UV) rays are closely related to human health, and optimum exposure to UV has recently been recommended, placing growing importance on correct UV information. However, many countries provide UV information services only at a local-area level, which makes it impossible for individuals to acquire accurate, user-specific UV information unless they operate UV measurement devices themselves and have the expertise to interpret the measurements. There is thus a limit to how well users can measure UV information at their own locations, and research on using mobile devices such as smartphones to overcome this limitation is also lacking. This paper proposes a mobile deep learning system that calculates the UV index (UVI) from illuminance values obtained at the user's location with a mobile device. The proposed method analyzed the correlation between illuminance and UVI based on a natural-light database collected through actual measurements, from which the deep learning model's data set was extracted. After selecting the input variables needed to calculate the correct UVI, a TensorFlow-based deep learning model with an optimized number of layers and nodes was designed, implemented, and trained on the data set. The trained model was then converted to a mobile deep learning model so that it could operate in the mobile environment and was loaded onto the mobile device. The proposed method can therefore provide UV information at the user's location through a mobile device equipped with illuminance sensors, even in environments without UVI measuring equipment. A comparison of the experimental results with a reference device (a spectrometer) showed that the proposed method can provide UV information with an accuracy of 90–95% in summer as well as in winter.
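A minimal sketch of the pipeline described above: a small TensorFlow regressor maps illuminance-derived inputs to a UV index and is then converted with TensorFlow Lite for on-device use. The input features, layer sizes, and training call are illustrative assumptions rather than the authors' final configuration.

```python
import tensorflow as tf

def build_uvi_model(n_inputs):
    """Small fully connected regressor from illuminance-derived inputs to UVI."""
    return tf.keras.Sequential([
        tf.keras.layers.Input(shape=(n_inputs,)),   # e.g. illuminance, time of day, season
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(1),                   # predicted UV index
    ])

model = build_uvi_model(n_inputs=3)
model.compile(optimizer="adam", loss="mse")
# model.fit(X_train, y_train, ...)   # trained on the natural-light data set

# Convert the trained model for use on the mobile device.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()
open("uvi_model.tflite", "wb").write(tflite_model)
```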


2022
Vol 13 (2)
pp. 1-20
Author(s):
Luo He
Hongyan Liu
Yinghui Yang
Bei Wang

We develop a deep learning model based on long short-term memory (LSTM) to predict blood pressure from a unique data set collected from physical examination centers, capturing comprehensive multi-year physical examination and lab results. In the Multi-attention Collaborative Deep Learning model (MAC-LSTM) we developed for this type of data, we incorporate three types of attention to generate more explainable and accurate results. In addition, we leverage information from similar users to enhance the predictive power of the model, addressing the challenge of short examination histories. Our model significantly reduces predictive errors compared to several state-of-the-art baseline models. Experimental results not only demonstrate our model's superiority but also provide new insights about factors influencing blood pressure. Our data are collected in a natural setting rather than a setting designed specifically to study blood pressure, and the physical examination items used to predict blood pressure are common items included in regular physical examinations for all users. Therefore, our blood pressure predictions can readily be used in an alert system that helps patients and doctors plan prevention or intervention. The same approach can be used to predict other health-related indexes such as BMI.
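A minimal sketch of an LSTM-with-attention regressor of the kind described above; the full MAC-LSTM uses three attention types and similar-user information, which are omitted here, and all dimensions are illustrative.

```python
import torch
import torch.nn as nn

class AttentionLSTMRegressor(nn.Module):
    def __init__(self, n_features, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.attn = nn.Linear(hidden, 1)   # scores each examination visit
        self.head = nn.Linear(hidden, 2)   # e.g. systolic and diastolic pressure

    def forward(self, x):                  # x: (batch, visits, n_features)
        h, _ = self.lstm(x)                # (batch, visits, hidden)
        w = torch.softmax(self.attn(h), dim=1)
        context = (w * h).sum(dim=1)       # attention-weighted summary of the history
        return self.head(context)
```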


Author(s):  
Hidir Selcuk Nogay
Tahir Cetin Akinci
Musa Yilmaz

Ceramic materials are an indispensable part of our lives; today, they are mainly used in construction and kitchenware production. Because some deformations cannot be seen with the naked eye, detecting them in ceramic products costs the industry time. Delays in eliminating deformations and in planning the production process lead to an excessive number of deformed products, which adversely affects quality. In this study, a deep learning model based on acoustic noise data and transfer learning techniques was designed to detect cracks in ceramic plates. To create a data set, noise curves were obtained by applying impacts of the same magnitude to the ceramic test plates with an impact pendulum. For the experimental application, ceramic plates with three invisible cracks and one undamaged ceramic plate were used. The deep learning model was trained and tested for crack detection in ceramic plates on the data set obtained from the noise graphs. As a result, 99.50% accuracy was achieved with the deep learning model based on acoustic noise.
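The model above is trained on graphs of the impact noise; one way to prepare such inputs is to render each recorded noise curve as a fixed-size image for a pretrained CNN, sketched below. The figure size and function name are illustrative assumptions.

```python
import matplotlib
matplotlib.use("Agg")                    # render off-screen, no display needed
import matplotlib.pyplot as plt

def noise_curve_to_image(signal, out_path):
    """signal: 1D sequence of acoustic noise samples from one impact test."""
    fig = plt.figure(figsize=(2.24, 2.24), dpi=100)   # 224 x 224-pixel image
    ax = fig.add_axes([0, 0, 1, 1])                   # axes fill the whole figure
    ax.plot(signal, linewidth=0.5)
    ax.axis("off")                                    # keep only the curve itself
    fig.savefig(out_path)
    plt.close(fig)
```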


2020
Author(s):
Hyung Jun Park
Dae Yon Jung
Wonjun Ji
Chang-Min Choi

BACKGROUND Detecting bacteremia among surgical in-patients is more difficult than among other patients because of the inflammatory condition caused by surgery. Existing criteria such as the systemic inflammatory response syndrome or Sepsis-3 are not available for use in general wards, so many clinicians usually rely on their practical sense to diagnose postoperative infection.

OBJECTIVE This study aims to evaluate the performance of continuous monitoring with a deep learning model for the early detection of bacteremia in surgical in-patients in the general ward and the intensive care unit (ICU).

METHODS In this retrospective cohort study, we included 36,023 consecutive patients who underwent general surgery between October and December 2017 at a tertiary referral hospital in South Korea. The primary outcomes were the area under the receiver operating characteristic curve (AUROC) and the area under the precision-recall curve (AUPRC) for detecting bacteremia with the deep learning model, and the secondary outcome was the feature explainability of the model assessed by occlusion analysis.

RESULTS Of the 36,023 patients in the data set, 720 cases of bacteremia were included. Our deep learning-based model showed an AUROC of 0.97 (95% CI 0.974-0.981) and an AUPRC of 0.17 (95% CI 0.147-0.203) for detecting bacteremia in surgical in-patients. For predicting bacteremia within the previous 24-hour period, the AUROC and AUPRC values were 0.93 and 0.15, respectively. Occlusion analysis showed that vital signs and laboratory measurements (eg, kidney function tests and the white blood cell group) were the most important variables for detecting bacteremia.

CONCLUSIONS A deep learning model based on time series electronic health record data had high detection ability for bacteremia in surgical in-patients in the general ward and the ICU. The model may be able to assist clinicians in evaluating infection among in-patients, ordering blood cultures, and prescribing antibiotics with real-time monitoring.
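A minimal sketch of occlusion analysis for a time-series model, the explainability technique named above: each input variable is masked in turn (replaced here by its mean) and the resulting drop in AUROC indicates its importance. The masking value and scoring choice are assumptions for illustration.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def occlusion_importance(predict_proba, X, y):
    """X: (n_patients, n_timesteps, n_features); predict_proba returns P(bacteremia)."""
    base_auc = roc_auc_score(y, predict_proba(X))
    importance = {}
    for j in range(X.shape[2]):
        X_occ = X.copy()
        X_occ[:, :, j] = X[:, :, j].mean()   # occlude feature j at every timestep
        importance[j] = base_auc - roc_auc_score(y, predict_proba(X_occ))
    return importance                        # larger AUROC drop -> more important feature
```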


2020
Author(s):
Rui Cao
Fan Yang
Si-Cong Ma
Li Liu
Yan Li
...  

Background: Microsatellite instability (MSI) is a negative prognostic factor for colorectal cancer (CRC) and can be used as a predictor of success for immunotherapy in pan-cancer. However, current MSI identification methods are not available for all patients. We propose an ensemble multiple instance learning (MIL)-based deep learning model to predict MSI status directly from histopathology images.

Design: Two cohorts of patients were collected, including 429 from The Cancer Genome Atlas (TCGA-COAD) and 785 from a self-collected Asian data set (Asian-CRC). The initial model was developed and validated in TCGA-COAD and then generalized to Asian-CRC through transfer learning. The pathological signatures extracted from the model were associated with genotypes for model interpretation.

Results: A model called Ensembled Patch Likelihood Aggregation (EPLA) was developed in the TCGA-COAD training set based on two consecutive stages: patch-level prediction and whole-slide image (WSI)-level prediction. The EPLA model achieved an area under the curve (AUC) of 0.8848 in the TCGA-COAD test set, outperforming the state-of-the-art approach, and an AUC of 0.8504 in Asian-CRC after transfer learning. Furthermore, the five pathological imaging signatures identified with the model are associated with genomic and transcriptomic profiles, which makes the MIL model interpretable. The results show that our model recognizes pathological signatures related to mutation burden, DNA repair pathways, and immunity.

Conclusion: Our MIL-based deep learning model can effectively predict MSI from histopathology images and is transferable to a new patient cohort. The interpretability of our model through association with genomic and transcriptomic biomarkers lays the foundation for prospective clinical research.
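A minimal sketch of the two-stage structure named above, patch-level prediction followed by whole-slide-level aggregation; the simple mean aggregation and thresholding below are illustrative stand-ins for the learned, ensembled aggregation of the published EPLA model.

```python
import numpy as np

def slide_level_msi_score(patch_likelihoods):
    """patch_likelihoods: per-patch P(MSI) values for one whole-slide image."""
    return float(np.mean(patch_likelihoods))

def predict_msi(patch_model, patches, threshold=0.5):
    """patch_model: callable returning P(MSI) for a single patch (hypothetical)."""
    probs = [patch_model(p) for p in patches]          # stage 1: patch-level prediction
    return slide_level_msi_score(probs) >= threshold   # stage 2: WSI-level call
```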

