Deep Learning for Identification of Acute Illness and Facial Cues of Illness

2021 ◽  
Vol 8 ◽  
Author(s):  
Castela Forte ◽  
Andrei Voinea ◽  
Malina Chichirau ◽  
Galiya Yeshmagambetova ◽  
Lea M. Albrecht ◽  
...  

Background: The inclusion of facial and bodily cues (clinical gestalt) in machine learning (ML) models improves the assessment of patients' health status, as shown in genetic syndromes and acute coronary syndrome. It is unknown whether the inclusion of clinical gestalt improves ML-based classification of acutely ill patients. As in previous research on ML analysis of medical images, simulated or augmented data may be used to assess the usability of clinical gestalt. Objective: To assess whether a deep learning algorithm trained on a dataset of simulated and augmented facial photographs reflecting acute illness can distinguish between healthy and LPS-infused, acutely ill individuals. Methods: Photographs from twenty-six volunteers whose facial features were manipulated to resemble a state of acute illness were used to extract features of illness and to generate a synthetic dataset of acutely ill photographs, using a neural transfer convolutional neural network (NT-CNN) for data augmentation. Four distinct CNNs were then trained on different parts of the facial photographs and concatenated into one final, stacked CNN that classified individuals as healthy or acutely ill. Finally, the stacked CNN was validated on an external dataset of volunteers injected with lipopolysaccharide (LPS). Results: In the external validation set, the four individual feature models distinguished acutely ill patients with sensitivities ranging from 10.5% (95% CI, 1.3–33.1%, for the skin model) to 89.4% (66.9–98.7%, for the nose model). Specificity ranged from 42.1% (20.3–66.5%) for the nose model to 94.7% (73.9–99.9%) for the skin model. 
The stacked model combining all four facial features achieved an area under the receiver operating characteristic curve (AUROC) of 0.67 (0.62–0.71) and distinguished acutely ill patients with a sensitivity of 100% (82.35–100.00%) and a specificity of 42.11% (20.25–66.50%). Conclusion: A deep learning algorithm trained on a synthetic, augmented dataset of facial photographs distinguished between healthy and simulated acutely ill individuals, demonstrating that synthetically generated data can be used to develop algorithms for health conditions in which large datasets are difficult to obtain. These results support the potential of facial feature analysis algorithms to support the diagnosis of acute illness.
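The sensitivities, specificities, and 95% confidence intervals quoted above all follow from a 2×2 confusion matrix. A minimal sketch in Python, using the Wilson score interval for the confidence bounds (an assumption for illustration; the paper does not state which interval method was used):

```python
from math import sqrt

def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP)."""
    return tp / (tp + fn), tn / (tn + fp)

def wilson_ci(successes, n, z=1.96):
    """Wilson score 95% CI for a binomial proportion (z=1.96)."""
    p = successes / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return centre - half, centre + half
```

With, say, 9 true positives out of 10 ill subjects, `sensitivity_specificity(9, 1, 8, 2)` gives a sensitivity of 0.9, and `wilson_ci(9, 10)` returns its roughly (0.60, 0.98) interval.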

Diagnostics ◽  
2021 ◽  
Vol 11 (6) ◽  
pp. 1127
Author(s):  
Ji Hyung Nam ◽  
Dong Jun Oh ◽  
Sumin Lee ◽  
Hyun Joo Song ◽  
Yun Jeong Lim

Capsule endoscopy (CE) quality control requires an objective scoring system to evaluate the preparation of the small bowel (SB). We propose a deep learning algorithm to calculate SB cleansing scores and verify the algorithm’s performance. A 5-point scoring system based on clarity of mucosal visualization was used to develop the deep learning algorithm (400,000 frames; 280,000 for training and 120,000 for testing). External validation was performed using additional CE cases (n = 50), and average cleansing scores (1.0 to 5.0) calculated using the algorithm were compared to clinical grades (A to C) assigned by clinicians. Test results obtained using the 120,000 frames exhibited 93% accuracy. The separate CE cases exhibited substantial agreement between the deep learning algorithm scores and the clinicians’ assessments (Cohen’s kappa: 0.672). In the external validation, the cleansing score decreased with worsening clinical grade (scores of 3.9, 3.2, and 2.5 for grades A, B, and C, respectively; p < 0.001). Receiver operating characteristic curve analysis revealed that a cleansing score cut-off of 2.95 indicated clinically adequate preparation. This algorithm provides an objective and automated cleansing score for evaluating SB preparation for CE. The results of this study will serve as clinical evidence supporting the practical use of deep learning algorithms for evaluating SB preparation quality.
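The agreement statistic quoted above, Cohen's kappa, measures how much two raters (here, the algorithm's grades and the clinicians') agree beyond what chance alone would produce. A minimal, framework-free sketch (a hypothetical helper, not the authors' code):

```python
def cohens_kappa(labels_a, labels_b):
    """Cohen's kappa = (p_o - p_e) / (1 - p_e), where p_o is
    observed agreement and p_e is chance-expected agreement."""
    assert len(labels_a) == len(labels_b)
    n = len(labels_a)
    categories = set(labels_a) | set(labels_b)
    # Observed agreement: fraction of items rated identically.
    po = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # Chance agreement: product of each rater's marginal frequencies.
    pe = sum((labels_a.count(c) / n) * (labels_b.count(c) / n)
             for c in categories)
    return (po - pe) / (1 - pe)
```

Perfect agreement yields 1.0; agreement no better than chance yields 0.0; the 0.672 reported above falls in the conventional "substantial" band.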


2020 ◽  
pp. 2003061
Author(s):  
Ju Gang Nam ◽  
Minchul Kim ◽  
Jongchan Park ◽  
Eui Jin Hwang ◽  
Jong Hyuk Lee ◽  
...  

We aimed to develop a deep learning algorithm detecting 10 common abnormalities (DLAD-10) on chest radiographs and to evaluate its impact on diagnostic accuracy, timeliness of reporting, and workflow efficacy. DLAD-10 was trained with 146,717 radiographs from 108,053 patients using a ResNet34-based neural network with lesion-specific channels for 10 common radiologic abnormalities (pneumothorax, mediastinal widening, pneumoperitoneum, nodule/mass, consolidation, pleural effusion, linear atelectasis, fibrosis, calcification, and cardiomegaly). For external validation, the performance of DLAD-10 on a same-day CT-confirmed dataset (normal:abnormal, 53:147) and an open-source dataset (PadChest; normal:abnormal, 339:334) was compared to that of three radiologists. Separate simulated reading tests were conducted on another dataset adjusted to real-world disease prevalence in the emergency department, consisting of four critical, 52 urgent, and 146 non-urgent cases. Six radiologists participated in the simulated reading sessions with and without DLAD-10. DLAD-10 exhibited areas under the receiver operating characteristic curve (AUROCs) of 0.895–1.00 in the CT-confirmed dataset and 0.913–0.997 in the PadChest dataset. DLAD-10 correctly classified significantly more critical abnormalities (95.0% [57/60]) than the pooled radiologists (84.4% [152/180]; p=0.01). In simulated reading tests for emergency department patients, the pooled readers detected significantly more critical (70.8% [17/24] versus 29.2% [7/24]; p=0.006) and urgent (82.7% [258/312] versus 78.2% [244/312]; p=0.04) abnormalities when aided by DLAD-10. 
DLAD-10 assistance shortened the mean time-to-report for critical and urgent radiographs (640.5±466.3 versus 3371.0±1352.5 s and 1840.3±1141.1 versus 2127.1±1468.2 s, respectively; p-values<0.01) and reduced the mean interpretation time (20.5±22.8 versus 23.5±23.7 s; p<0.001). DLAD-10 showed excellent performance, improving radiologists' performance and shortening the reporting time for critical and urgent cases.
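The AUROC figures above can be read as the probability that a randomly chosen abnormal radiograph receives a higher model score than a randomly chosen normal one. A minimal sketch of that rank-based (Mann-Whitney) formulation, for illustration only:

```python
def auroc(scores_pos, scores_neg):
    """AUROC as the fraction of (positive, negative) pairs in which
    the positive is scored higher; ties count as half a win.
    Equivalent to the Mann-Whitney U statistic divided by n_pos*n_neg."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))
```

A perfectly separating model yields 1.0; a model indistinguishable from chance yields 0.5, framing values such as 0.895–1.00 above.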


2019 ◽  
Vol 10 (1) ◽  
Author(s):  
Yoichiro Yamamoto ◽  
Toyonori Tsuzuki ◽  
Jun Akatsuka ◽  
Masao Ueki ◽  
Hiromu Morikawa ◽  
...  

Abstract: Deep learning algorithms have been used successfully in medical image classification. As a next step, technology for acquiring explainable knowledge from medical images is highly desired. Here we show that a deep learning algorithm enables the automated acquisition of explainable features from diagnostic annotation-free histopathology images. We compare the prediction accuracy of prostate cancer recurrence using our algorithm-generated features with that of diagnosis by expert pathologists using established criteria on 13,188 whole-mount pathology images comprising over 86 billion image patches. Our method not only reveals findings established by humans but also features that had not been recognized, showing higher accuracy than humans in prognostic prediction. Combining our algorithm-generated features with human-established criteria predicts recurrence more accurately than either method alone. We confirm the robustness of our method using external validation datasets including 2,276 pathology images. This study opens up the field of machine learning analysis for discovering uncharted knowledge.
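Whole-mount pathology images are far too large to feed to a network directly, which is why the study above works with billions of small image patches. An illustrative, pure-Python sketch of non-overlapping patch tiling (a hypothetical helper, not the authors' pipeline):

```python
def extract_patches(image, patch_size):
    """Tile a 2-D grid (list of rows) into non-overlapping square
    patches, dropping partial patches at the right/bottom edges."""
    h, w = len(image), len(image[0])
    patches = []
    for y in range(0, h - patch_size + 1, patch_size):
        for x in range(0, w - patch_size + 1, patch_size):
            patches.append([row[x:x + patch_size]
                            for row in image[y:y + patch_size]])
    return patches
```

A 10×10 grid tiled with `patch_size=5` yields four 5×5 patches; each patch can then be scored independently and the scores aggregated per slide.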


2021 ◽  
Author(s):  
Dong Chuang Guo ◽  
Jun Gu ◽  
Jian He ◽  
Hai Rui Chu ◽  
Na Dong ◽  
...  

Abstract Background: Hematoma expansion is an independent predictor of patient outcome and mortality. The early diagnosis of hematoma expansion is crucial for selecting clinical treatment options. This study aims to explore the value of a deep learning algorithm for the prediction of hematoma expansion from noncontrast computed tomography (NCCT) scans through external validation. Methods: 102 NCCT images of hypertensive intracerebral hemorrhage (HICH) patients diagnosed in our hospital were retrospectively reviewed. The initial computed tomography (CT) scan images were evaluated by commercial artificial intelligence (AI) software using a deep learning algorithm and by radiologists, respectively, to predict hematoma expansion, and the corresponding sensitivity and specificity of the two groups were calculated and compared. Pair-wise comparisons were conducted among the gold-standard hematoma expansion diagnosis time, the AI software diagnosis time, and the doctors’ reading time. Results: Among the 102 HICH patients, the sensitivity, specificity, and accuracy of predicting hematoma expansion in the AI group were higher than those in the doctor group (80.0% vs 66.7%, 73.6% vs 58.3%, 75.5% vs 60.8%), with statistically significant differences (p<0.05). The AI diagnosis time (2.8 ± 0.3 s) and the doctors’ diagnosis time (11.7 ± 0.3 s) were both significantly shorter than the gold-standard diagnosis time (14.5 ± 8.8 h) (p<0.05), and the AI diagnosis time was significantly shorter than that of the doctors (p<0.05). Conclusions: The deep learning algorithm could effectively predict hematoma expansion at an early stage from the initial CT scan images of HICH patients after onset, with high sensitivity and specificity and a greatly shortened diagnosis time, providing a new, accurate, easy-to-use, and fast method for the early prediction of hematoma expansion.


Author(s):  
Supreeth P. Shashikumar ◽  
Gabriel Wardi ◽  
Paulina Paul ◽  
Morgan Carlile ◽  
...  

ABSTRACT IMPORTANCE: Objective and early identification of hospitalized patients, particularly those with novel coronavirus disease 2019 (COVID-19), who may require mechanical ventilation is of great importance and may aid in delivering timely treatment. OBJECTIVE: To develop, externally validate, and prospectively test a transparent deep learning algorithm for predicting, 24 hours in advance, the need for mechanical ventilation in hospitalized patients and those with COVID-19. DESIGN: Observational cohort study. SETTING: Two academic medical centers from January 01, 2016 to December 31, 2019 (retrospective cohorts) and February 10, 2020 to May 4, 2020 (prospective cohorts). PARTICIPANTS: Over 31,000 admissions to the intensive care units (ICUs) at two hospitals. Additionally, 777 patients with COVID-19 were used for prospective validation. Patients who were placed on mechanical ventilation within four hours of admission were excluded. MAIN OUTCOME(S) AND MEASURE(S): Electronic health record (EHR) data were extracted on an hourly basis, and a set of 40 features was calculated and passed to an interpretable deep learning algorithm to predict the future need for mechanical ventilation 24 hours in advance. Additionally, commonly used clinical criteria (based on heart rate, oxygen saturation, respiratory rate, FiO2, and pH) were used to assess the future need for mechanical ventilation. Performance of the algorithms was evaluated using the area under the receiver operating characteristic curve (AUC), sensitivity, specificity, and positive predictive value. RESULTS: After applying the exclusion criteria, the external validation cohort included 3,888 general ICU patients and 402 COVID-19 patients. The performance of the model (AUC) with a 24-hour prediction horizon at the validation site was 0.882 for the general ICU population and 0.918 for patients with COVID-19. 
In comparison, commonly used clinical criteria and the ROX score achieved AUCs in the range of 0.773–0.782 and 0.768–0.810 for the general ICU population and patients with COVID-19, respectively. CONCLUSIONS AND RELEVANCE: A generalizable and transparent deep learning algorithm improves on traditional clinical criteria in predicting the need for mechanical ventilation in hospitalized patients, including those with COVID-19. Such an algorithm may help clinicians optimize the timing of tracheal intubation, better allocate mechanical ventilation resources and staff, and improve patient care.
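The ROX score used as a comparator above is commonly defined (per Roca et al.) as the SpO2/FiO2 ratio divided by the respiratory rate; a minimal sketch under that assumption:

```python
def rox_index(spo2_pct, fio2_fraction, resp_rate):
    """ROX index = (SpO2 [%] / FiO2 [fraction]) / respiratory rate
    [breaths/min]. Lower values indicate higher risk of needing
    escalation of respiratory support."""
    return (spo2_pct / fio2_fraction) / resp_rate
```

For example, a patient on room air (FiO2 0.21) with SpO2 95% and a respiratory rate of 20 has a ROX index of about 22.6.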


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Ryoungwoo Jang ◽  
Jae Ho Choi ◽  
Namkug Kim ◽  
Jae Suk Chang ◽  
Pil Whan Yoon ◽  
...  

Abstract: Despite being the gold standard for the diagnosis of osteoporosis, dual-energy X-ray absorptiometry (DXA) cannot be widely used as a screening tool for osteoporosis. This study aimed to predict osteoporosis from simple hip radiographs using a deep learning algorithm. A total of 1,001 datasets of proximal femur DXA with matched same-side cropped simple hip bone radiographic images of female patients aged ≥ 55 years were collected. Of these, 504 patients had osteoporosis (T-score ≤ −2.5), and 497 patients did not. The 1,001 images were randomly divided into three sets: 800 images for training, 100 for validation, and 101 for testing. Based on VGG16 equipped with a nonlocal neural network, we developed a deep neural network (DNN) model. We calculated the confusion matrix; evaluated the accuracy, sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV); and drew the receiver operating characteristic (ROC) curve. A gradient-based class activation map (Grad-CAM) overlapping the original image was also used to visualize model performance. Additionally, we performed external validation using 117 datasets. Our final DNN model showed an overall accuracy of 81.2%, a sensitivity of 91.1%, and a specificity of 68.9%. The PPV was 78.5%, and the NPV was 86.1%. The area under the ROC curve was 0.867, indicating reasonable performance for screening osteoporosis by simple hip radiography. The external validation set confirmed the model performance, with an overall accuracy of 71.8% and an AUC of 0.700. All Grad-CAM results from both the internal and external validation sets appropriately matched the proximal femur cortex and trabecular patterns of the radiographs. The DNN model could be considered a useful screening tool for the easy prediction of osteoporosis in real-world clinical settings.
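The accuracy, PPV, and NPV reported above all follow directly from the confusion matrix; a minimal sketch with illustrative counts (not the study's data):

```python
def screening_metrics(tp, fp, tn, fn):
    """Accuracy, PPV and NPV from a binary confusion matrix."""
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    ppv = tp / (tp + fp)   # how trustworthy a positive call is
    npv = tn / (tn + fn)   # how trustworthy a negative call is
    return accuracy, ppv, npv
```

For example, with 8 true positives, 2 false positives, 7 true negatives, and 3 false negatives, accuracy is 0.75, PPV 0.80, and NPV 0.70; NPV matters most when the tool is used to rule out disease in screening.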


2021 ◽  
Author(s):  
Yanjun Ma ◽  
Jianhao Xiong ◽  
Yidan Zhu ◽  
Zongyuan Ge ◽  
Rong Hua ◽  
...  

Background: Ischemic cardiovascular disease (ICVD) risk prediction models are valuable but limited by their requirement for multidimensional medical information, including data from blood draws. A convenient and affordable alternative is in demand. Objectives: To develop and validate a deep learning algorithm to predict 10-year ICVD risk using retinal fundus photographs in a Chinese population. Methods: We first labeled fundus photographs with the natural logarithms of the ICVD risk estimated by a previously validated 10-year Chinese ICVD risk prediction model for 390,947 adults randomly selected (95%) from a health checkup dataset. An algorithm using a convolutional neural network was then developed to predict the estimated 10-year ICVD risk from fundus images. The algorithm was validated using both an internal dataset (the remaining 5%) and an external dataset from an independent source (sample size = 1,309). Adjusted R2 and the area under the receiver operating characteristic curve (AUC) were used to evaluate goodness of fit. Results: The adjusted R2 between the natural logarithms of the predicted and calculated ICVD risks was 0.876 and 0.638 in the internal and external validations, respectively. For detecting ICVD risk ≥ 5% and ≥ 7.5%, the algorithm achieved AUCs of 0.971 (95% CI: 0.967 to 0.975) and 0.976 (95% CI: 0.973 to 0.980) in internal validation, and 0.859 (95% CI: 0.822 to 0.895) and 0.876 (95% CI: 0.816 to 0.837) in external validation. Conclusions: The deep learning algorithm developed in this study, which uses fundus photographs to predict 10-year ICVD risk in a Chinese population, predicted the risk fairly well and may be worth promoting widely given its ease of use and lower cost. Further studies with long-term follow-up are warranted. Keywords: Deep learning, Ischemic cardiovascular diseases, risk prediction.
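The adjusted R² used above to assess goodness of fit penalizes the plain R² for the number of predictors in the model; a minimal sketch of the standard formula:

```python
def adjusted_r2(r2, n_samples, n_predictors):
    """Adjusted R^2 = 1 - (1 - R^2) * (n - 1) / (n - p - 1),
    where n is the sample size and p the number of predictors."""
    return 1 - (1 - r2) * (n_samples - 1) / (n_samples - n_predictors - 1)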


2021 ◽  
Author(s):  
Rong Hua ◽  
Jianhao Xiong ◽  
Gail Li ◽  
Yidan Zhu ◽  
Zongyuan Ge ◽  
...  

Abstract Importance: The Cardiovascular Risk Factors, Aging, and Incidence of Dementia (CAIDE) dementia risk score is a recognized tool for dementia risk stratification. However, its application is limited by its requirements for multidimensional information and a fasting blood draw. Consequently, an effective, convenient, and noninvasive tool for screening individuals at high dementia risk in large population-based settings is urgently needed. Objective: To develop and validate a deep learning algorithm using retinal fundus photographs for estimating the CAIDE dementia risk score and identifying individuals at high dementia risk. Design: A deep learning algorithm trained on fundus photographs was developed and validated internally and externally with a cross-sectional design. Setting: Population-based. Participants: A health check-up population of 271,864 adults was randomized into a development dataset (95%) and an internal validation dataset (5%). The external validation used data from the Beijing Research on Ageing and Vessel (BRAVE) study with 1,512 individuals. Exposures: The estimated CAIDE dementia risk score generated by the algorithm. Main Outcome and Measure: The algorithm’s performance in identifying individuals at high dementia risk was evaluated by the area under the receiver operating characteristic curve (AUC) with 95% confidence interval (CI). Results: The study involved 258,305 participants (mean age 42.1 ± 13.4 years; men: 52.7%) in development, 13,559 (mean age 41.2 ± 13.3 years; men: 52.5%) in internal validation, and 1,512 (mean age 59.8 ± 7.3 years; men: 37.1%) in external validation. The adjusted coefficient of determination (R2) between the estimated and actual CAIDE dementia risk scores was 0.822 in the internal and 0.300 in the external validation, respectively. The algorithm achieved an AUC of 0.931 (95% CI, 0.922–0.939) in the internal validation group and 0.782 (95% CI, 0.749–0.815) in the external group. 
In addition, the estimated CAIDE dementia risk score was significantly associated with both comprehensive cognitive function and specific cognitive domains. Conclusions and Relevance: The present study demonstrated that a deep learning algorithm trained on fundus photographs could identify individuals at high dementia risk in a population-based setting. Our findings suggest that fundus photography may be utilized as a noninvasive and more expedient method for dementia risk stratification. Key Points: Question: Can a deep learning algorithm based on fundus images estimate the CAIDE dementia risk score and identify individuals at high dementia risk? Findings: The algorithm, developed with fundus photographs from 258,305 check-up participants, identified individuals at high dementia risk with an area under the receiver operating characteristic curve of 0.931 in the internal validation dataset and 0.782 in the external validation dataset, respectively. In addition, the estimated CAIDE dementia risk score generated by the algorithm exhibited a significant association with cognitive function. Meaning: A deep learning algorithm based on fundus photographs has the potential to screen individuals at high dementia risk in population-based settings.
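The 95%/5% development/validation split described above can be sketched as a seeded random partition (illustrative only; the study's exact randomization procedure is not described):

```python
import random

def split_dataset(ids, dev_fraction=0.95, seed=42):
    """Randomly partition participant IDs into a development set and
    an internal validation set (95/5 by default). Seeding the RNG
    makes the split reproducible across runs."""
    rng = random.Random(seed)
    shuffled = ids[:]          # copy so the caller's list is untouched
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * dev_fraction)
    return shuffled[:cut], shuffled[cut:]
```

For 1,000 participant IDs this yields a 950-ID development set and a 50-ID validation set, with every ID appearing exactly once across the two sets.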

