Identifying diabetes from conjunctival images using a novel hierarchical multi-task network

2022, Vol 12 (1)
Author(s): Xinyue Li, Chenjie Xia, Xin Li, Shuangqing Wei, Sujun Zhou, ...

Abstract: Diabetes can cause microvascular impairment, but the resulting conjunctival pathological changes are not easily recognized, limiting their potential as independent diagnostic indicators. We therefore designed a deep learning model to explore the relationship between conjunctival features and diabetes and to advance automated identification of diabetes from conjunctival images. Images were collected from patients with type 2 diabetes and from healthy volunteers. A hierarchical multi-task network model (HMT-Net) was developed on these conjunctival images and systematically evaluated against other algorithms. The sensitivity, specificity, and accuracy of the HMT-Net model in identifying diabetes were 78.70%, 69.08%, and 75.15%, respectively, significantly better than the performance of ophthalmologists. The model enables sensitive and rapid discrimination from conjunctival images and may be useful for identifying diabetes.
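The sensitivity, specificity, and accuracy reported above are standard confusion-matrix metrics. A minimal sketch of how they are derived, using illustrative counts only (the study's actual confusion matrix is not reported here):

```python
def classification_metrics(tp, fp, tn, fn):
    """Compute sensitivity, specificity, and accuracy from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)                # true-positive rate among diabetic cases
    specificity = tn / (tn + fp)                # true-negative rate among healthy controls
    accuracy = (tp + tn) / (tp + fp + tn + fn)  # overall fraction correct
    return sensitivity, specificity, accuracy

# Hypothetical counts for illustration, not the HMT-Net study's data.
sens, spec, acc = classification_metrics(tp=85, fp=30, tn=70, fn=15)
print(sens, spec, acc)  # 0.85 0.7 0.775
```

Note that accuracy sits between sensitivity and specificity, weighted by the class balance, which is why all three are reported separately.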

2021
Author(s): Jae-Seung Yun, Jaesik Kim, Sang-Hyuk Jung, Seon-Ah Cha, Seung-Hyun Ko, ...

Objective: We aimed to develop and evaluate a non-invasive deep learning algorithm for screening type 2 diabetes in UK Biobank participants using retinal images. Research Design and Methods: The deep learning model for prediction of type 2 diabetes was trained on retinal images from 50,077 UK Biobank participants and tested on 12,185 participants. We evaluated its performance in terms of predicting traditional risk factors (TRFs) and genetic risk for diabetes. Next, we compared the performance of three models in predicting type 2 diabetes: (1) an image-only deep learning algorithm, (2) TRFs, and (3) the combination of the algorithm and TRFs. Assessing net reclassification improvement (NRI) allowed quantification of the improvement afforded by adding the algorithm to the TRF model. Results: When predicting TRFs with the deep learning algorithm, the areas under the curve (AUCs) obtained with the validation set for age, sex, and HbA1c status were 0.931 (0.928-0.934), 0.933 (0.929-0.936), and 0.734 (0.715-0.752), respectively. When predicting type 2 diabetes, the AUC of the composite logistic model using non-invasive TRFs was 0.810 (0.790-0.830), and that of the deep learning model using only fundus images was 0.731 (0.707-0.756). Upon addition of TRFs to the deep learning algorithm, discriminative performance improved to 0.844 (0.826-0.861). The addition of the algorithm to the TRF model improved risk stratification with an overall NRI of 50.8%. Conclusions: Our results demonstrate that this deep learning algorithm can be a useful tool for stratifying individuals at high risk of type 2 diabetes in the general population.
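The AUCs above have a rank interpretation: the probability that a randomly chosen case receives a higher model score than a randomly chosen control. A minimal sketch via the Mann-Whitney U statistic, on illustrative scores rather than UK Biobank data:

```python
def auc(scores, labels):
    """AUC as the fraction of case/control pairs ranked correctly (ties count half)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]  # scores of cases
    neg = [s for s, y in zip(scores, labels) if y == 0]  # scores of controls
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy example: one mis-ranked case/control pair out of four -> AUC = 0.75.
print(auc([0.9, 0.3, 0.8, 0.2], [1, 1, 0, 0]))  # 0.75
```

This pairwise view also clarifies why adding TRFs to the image score can raise the AUC: the combined model fixes pairs the image-only model mis-ranks.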


Diabetes, 2019, Vol 68 (Supplement 1), pp. 309-OR
Author(s): Agata Wesolowska-Andersen, Matthias Thurner, Anubha Mahajan, Fernando Abaitua, Jason Torres, ...


2022
Author(s): Marcus D.R. Klarqvist, Saaket Agrawal, Nathaniel Diamant, Patrick T. Ellinor, Anthony Philippakis, ...

Background: Inter-individual variation in fat distribution is increasingly recognized as clinically important but is not routinely assessed in clinical practice because quantification requires medical imaging. Objectives: We hypothesized that a deep learning model trained on an individual's body shape outline - or silhouette - would enable accurate estimation of specific fat depots, including visceral (VAT), abdominal subcutaneous (ASAT), and gluteofemoral (GFAT) adipose tissue volumes, and VAT/ASAT ratio. We additionally set out to study whether silhouette-estimated VAT/ASAT ratio may stratify risk of cardiometabolic diseases independent of body mass index (BMI) and waist circumference. Methods: Two-dimensional coronal and sagittal silhouettes were constructed from whole-body magnetic resonance images in 40,032 participants of the UK Biobank and used to train a convolutional neural network to predict VAT, ASAT, and GFAT volumes, and VAT/ASAT ratio. Logistic and Cox regressions were used to determine the independent association of silhouette-predicted VAT/ASAT ratio with type 2 diabetes and coronary artery disease. Results: Mean age of the study participants was 65 years and 51% were female. A deep learning model trained on silhouettes enabled accurate estimation of VAT, ASAT, and GFAT volumes (R2: 0.88, 0.93, and 0.93, respectively), outperforming a comparator model combining anthropometric and bioimpedance measures (ΔR2 = 0.05-0.13). Next, we studied VAT/ASAT ratio, a nearly BMI- and waist circumference-independent marker of unhealthy fat distribution. While the comparator model poorly predicted VAT/ASAT ratio (R2: 0.17-0.26), a silhouette-based model enabled significant improvement (R2: 0.50-0.55). Silhouette-predicted VAT/ASAT ratio was associated with increased prevalence of type 2 diabetes and coronary artery disease. 
Conclusions: Body silhouette images can estimate important measures of fat distribution, laying the scientific foundation for population-based assessment.
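The R2 values reported for the silhouette model are the standard coefficient of determination between predicted and MRI-measured fat-depot volumes. A minimal sketch on toy values, not study data:

```python
def r_squared(y_true, y_pred):
    """Coefficient of determination: 1 - (residual sum of squares / total sum of squares)."""
    mean = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean) ** 2 for t in y_true)
    return 1.0 - ss_res / ss_tot

# Toy depot volumes (litres): predictions close to truth give R^2 near 1.
truth = [2.0, 4.0, 6.0, 8.0]
pred = [2.2, 3.8, 6.1, 7.9]
print(round(r_squared(truth, pred), 3))  # 0.995
```

An R2 of 0.88-0.93, as reported for VAT, ASAT, and GFAT, means the silhouette model explains all but 7-12% of the between-person variance in those volumes.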

