Deep Learning for Automatic Calcium Scoring in Population-Based Cardiovascular Screening

Author(s):  
Marleen Vonder ◽  
Sunyi Zheng ◽  
Monique D. Dorrius ◽  
Carlijn M. van der Aalst ◽  
Harry J. de Koning ◽  
...  
2021 ◽  
Vol 42 (Supplement_1) ◽  

Abstract
Background: High volumes of standardized coronary artery calcium (CAC) scans are generated in screening and need to be scored accurately and efficiently to risk-stratify individuals.
Purpose: To evaluate the performance of deep-learning-based software for automatic coronary calcium scoring in a screening setting.
Methods: Participants from the Robinsca trial who underwent low-dose, ECG-triggered cardiac CT for calcium scoring were included. CAC was measured with a fully automated deep learning prototype and compared with the original manual assessment of the Robinsca trial. Detection rate, positive Agatston score, and risk categorization (0–99, 100–399, ≥400) were compared using the McNemar test, ICC, and Cohen's kappa. False negative (FN) rate, false positive (FP) rate, and diagnostic accuracy were determined for preventive treatment initiation (cut-off ≥100 AU).
Results: In total, 997 participants were included between December 2015 and June 2016. Median age was 61.0 y (IQR: 11.0) and 54.4% were male. High agreement for detection was found between deep-learning-based and manual scoring, κ=0.87 (95% CI: 0.85–0.89). Median Agatston score was 58.4 (IQR: 12.3–200.2) for deep-learning-based and 61.2 (IQR: 13.9–212.9) for manual assessment; the ICC was 0.958 (95% CI: 0.951–0.964). The reclassification rate was 2.0%, with very high agreement, κ=0.960 (95% CI: 0.943–0.997), p<0.001. The FN rate was 0.7%, the FP rate was 0.1%, and diagnostic accuracy was 99.2% for initiation of preventive treatment.
Conclusion: Deep-learning-based software for automatic CAC scoring can be used in a cardiovascular CT screening setting with high accuracy for risk categorization and initiation of preventive treatment.
Funding Acknowledgement: Type of funding sources: Public grant(s) – EU funding. Main funding source(s): The Robinsca trial was supported by an advanced grant of the European Research Council.
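The risk categorization, treatment cut-off, and chance-corrected agreement statistic reported in this abstract can be sketched in a few lines of plain Python. The thresholds follow the abstract; the function names and the paired scores below are illustrative only, not the study's data or code.

```python
def risk_category(agatston: float) -> str:
    """Map an Agatston score to the risk strata used in the abstract."""
    if agatston < 100:
        return "0-99"
    if agatston < 400:
        return "100-399"
    return ">=400"

def initiate_treatment(agatston: float) -> bool:
    """Preventive-treatment cut-off of >=100 AU, as in the abstract."""
    return agatston >= 100

def cohens_kappa(a, b):
    """Unweighted Cohen's kappa between two raters' categorical labels."""
    n = len(a)
    labels = set(a) | set(b)
    p_obs = sum(x == y for x, y in zip(a, b)) / n          # observed agreement
    p_exp = sum((a.count(l) / n) * (b.count(l) / n) for l in labels)  # chance agreement
    return (p_obs - p_exp) / (1 - p_exp)

# Hypothetical paired scores (automatic vs. manual) for illustration.
auto = [12.0, 150.0, 420.0, 80.0, 500.0]
manual = [15.0, 160.0, 390.0, 95.0, 480.0]
kappa = cohens_kappa([risk_category(s) for s in auto],
                     [risk_category(s) for s in manual])
```

One disagreement out of five pairs (420 vs. 390 straddles the 400 threshold) yields κ ≈ 0.71 here; the study's κ=0.960 reflects far closer agreement on nearly a thousand scans.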


2020 ◽  
Vol 13 (2) ◽  
pp. 524-526 ◽  
Author(s):  
Simon S. Martin ◽  
Marly van Assen ◽  
Saikiran Rapaka ◽  
H. Todd Hudson ◽  
Andreas M. Fischer ◽  
...  

Radiology ◽  
2021 ◽  
Author(s):  
Dan Mu ◽  
Junjie Bai ◽  
Wenping Chen ◽  
Hongming Yu ◽  
Jing Liang ◽  
...  

Author(s):  
Li Dong ◽  
Xin Yue Hu ◽  
Yan Ni Yan ◽  
Qi Zhang ◽  
Nan Zhou ◽  
...  

This study aimed to develop an automated computer-based algorithm to estimate axial length and subfoveal choroidal thickness (SFCT) from color fundus photographs. In the population-based Beijing Eye Study 2011, we took fundus photographs and measured SFCT by optical coherence tomography (OCT) and axial length by optical low-coherence reflectometry. Using 6394 color fundus images taken from 3468 participants, we trained and evaluated a deep-learning-based algorithm for estimation of axial length and SFCT. The algorithm had a mean absolute error (MAE) for estimating axial length and SFCT of 0.56 mm [95% confidence interval (CI): 0.53, 0.61] and 49.20 μm (95% CI: 45.83, 52.54), respectively. Estimated values and measured data showed coefficients of determination of r2 = 0.59 (95% CI: 0.50, 0.65) for axial length and r2 = 0.62 (95% CI: 0.57, 0.67) for SFCT. Bland–Altman plots revealed mean differences in axial length and SFCT of −0.16 mm (95% CI: −1.60, 1.27 mm) and −4.40 μm (95% CI: −131.8, 122.9 μm), respectively. For the estimation of axial length, heat map analysis showed that signals predominantly from the whole macular region, the foveal region, and the extrafoveal region were used in eyes with an axial length of <22 mm, 22–26 mm, and >26 mm, respectively. For the estimation of SFCT, the convolutional neural network (CNN) used mostly the central part of the macular region, the fovea or perifovea, independent of the SFCT. Our study shows that deep-learning-based algorithms may be helpful in estimating axial length and SFCT from conventional color fundus images and may be a further step toward semiautomatic assessment of the eye.
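The two error metrics this abstract reports, MAE and the Bland–Altman mean difference with 95% limits of agreement, are straightforward to compute. A minimal plain-Python sketch, with made-up axial-length values (in mm) standing in for the study's measurements:

```python
import statistics

def mae(pred, true):
    """Mean absolute error between predictions and reference measurements."""
    return sum(abs(p - t) for p, t in zip(pred, true)) / len(pred)

def bland_altman(pred, true):
    """Mean difference and 95% limits of agreement (mean +/- 1.96 SD of the
    pairwise differences), as plotted in a Bland-Altman analysis."""
    diffs = [p - t for p, t in zip(pred, true)]
    m = statistics.mean(diffs)
    s = statistics.stdev(diffs)
    return m, (m - 1.96 * s, m + 1.96 * s)

# Hypothetical axial lengths: algorithm estimate vs. optical biometry.
est = [23.0, 24.5, 26.0]
ref = [23.4, 24.0, 26.5]
```

Note that MAE averages magnitudes while the Bland–Altman mean difference keeps signs, so errors of opposite direction can cancel in the latter; that is why the study can report an MAE of 0.56 mm alongside a near-zero bias of −0.16 mm.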


2019 ◽  
Vol 40 (Supplement_1) ◽  
Author(s):  
F Commandeur ◽  
M Goeller ◽  
A Razipour ◽  
S Cadet ◽  
M M Hell ◽  
...  

Abstract
Background: Epicardial adipose tissue (EAT), a metabolically active visceral fat depot surrounding the coronary arteries, has been shown to promote the development of atherosclerosis in the underlying coronary vasculature.
Purpose: To evaluate the performance of deep learning (DL), a subgroup of machine learning algorithms, for robust and fully automated quantification of EAT on multi-center cardiac CT data.
Methods: In this study, 850 non-contrast calcium scoring CT scans from multiple cohorts, scanners, and protocols, with manual measurements of EAT from 3 different readers, were considered. The DL method was based on a convolutional neural network trained to reproduce the expert measurement. Global DL performance was first assessed using all scans and then compared with inter-observer variability on a subset of 141 scans. Finally, automated EAT progression was compared with manual measurement using baseline and follow-up serial scans available for 70 subjects. The proposed model was validated using 10-fold cross-validation.
Results: Automated quantification was performed in 1.57±0.49 seconds, compared with 15 minutes for manual measurement. DL showed high agreement with expert manual quantification for all scans (R=0.974, p<0.001) with no significant bias (0.53 cm3, p=0.13). EAT volume was higher in patients with hypertension (+18.02 cm3, p<0.001, N=442), diabetes (+18.33 cm3, p<0.001, N=75), and hypercholesterolemia (+7.33 cm3, p=0.039, N=508). Manual EAT volumes measured by two experienced readers on 141 scans were highly correlated (R=0.984, p<0.001) but showed a significant difference of 4.35 cm3 (p<0.001). On these 141 scans, DL quantifications were highly correlated with both experts' measurements (R=0.973, p<0.001; R=0.979, p<0.001), with a significant bias for reader 1 (5.19 cm3, p<0.001) and a non-significant bias for reader 2 (0.84 cm3, p=0.26). In 70 subjects, EAT progression quantified by DL correlated strongly with EAT progression measured by the expert reader (R=0.905, p<0.001) with no significant bias (0.64 cm3, p=0.43), and was related to increased non-calcified plaque burden quantified from coronary CT angiography (5.7% vs 1.8%, p=0.026).
[Figure: Automated vs. manual EAT volume]
Conclusion: Deep learning allows rapid, robust, and fully automated quantification of EAT from calcium scoring CT. It performs on par with an expert reader and can be implemented for routine cardiovascular risk assessment.
Acknowledgement/Funding: 1R01HL133616/01EX1012B/Adelson Medical Research Foundation
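The agreement statistics this abstract leans on, the Pearson correlation R and the automated-minus-manual bias, can be sketched in plain Python. The paired EAT volumes below are invented for illustration; they are not the study's data.

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two paired samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def bias(auto, manual):
    """Mean automated-minus-manual difference (the 'bias' the abstract reports)."""
    return sum(a - m for a, m in zip(auto, manual)) / len(auto)

# Hypothetical EAT volumes in cm^3: DL output vs. expert reader.
dl = [85.0, 120.0, 60.0, 150.0]
expert = [83.0, 123.0, 61.0, 148.0]
```

The two numbers answer different questions: R measures how well the methods track each other across patients, while the bias measures whether one method systematically over- or under-estimates, which is why the abstract reports both.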


2020 ◽  
Vol 22 (Supplement_2) ◽  
pp. ii148-ii148
Author(s):  
Jie Hao ◽  
Jose Agraz ◽  
Caleb Grenko ◽  
Ji Won Park ◽  
Angela Viaene ◽  
...  

Abstract
MOTIVATION: Glioblastoma is the most common and aggressive adult brain tumor. Clinical histopathologic evaluation is essential for tumor classification, which, according to the World Health Organization, is associated with prognostic information. Accurate prediction of patient overall survival (OS) from routine baseline histopathology whole slide images (WSI) using advanced computational methods, while accounting for variations in the staining process, could contribute to clinical decision-making and optimization of patient management.
METHODS: We utilize The Cancer Genome Atlas glioblastoma (TCGA-GBM) collection, comprising multi-institutional hematoxylin and eosin (H&E) stained frozen top-section WSI, genomic, and clinical data from 121 subjects. Data are randomly split into training (80%), validation (10%), and testing (10%) sets, while proportionally keeping the ratio of censored patients. We propose a novel deep learning algorithm to identify survival-discriminative histopathological patterns in a WSI through feature maps, and quantitatively integrate them with gene expression and clinical data to predict patient OS. The concordance index (C-index) is used to quantify predictive OS performance. Variations in slide staining are assessed through a novel population-based stain normalization approach, informed by glioblastoma's distinct histologic sub-regions and their appearance in 509 H&E stained slides with corresponding anatomical annotations from the Ivy Glioblastoma Atlas Project (IvyGAP).
RESULTS: The C-index was equal to 0.797, 0.713, and 0.703 for the training, validation, and testing data, respectively, prior to stain normalization. Following normalization, staining variations in H&E and ‘E’ showed significant improvements in the IvyGAP (p_Wilcoxon < 0.01) and TCGA-GBM (p_Wilcoxon < 0.0001) data, respectively. These improvements further optimized the C-index to 0.871, 0.777, and 0.780 for the training, validation, and testing data, respectively.
CONCLUSIONS: Appropriate normalization and integrative deep learning yield accurate OS prediction for glioblastoma patients from H&E slides, generalizable across multi-institutional data, and could potentially contribute to patient stratification in clinical trials. Our computationally identified survival-discriminative histopathological patterns can contribute to a further understanding of glioblastoma.
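The concordance index used above to score survival prediction can be computed directly from pairwise comparisons of patients. A minimal plain-Python sketch of Harrell's C-index with right censoring; the function name and the toy data are illustrative, not the authors' implementation:

```python
def c_index(times, events, risks):
    """Harrell's concordance index for right-censored survival data.

    times:  observed follow-up times
    events: 1 if the event (death) was observed, 0 if censored
    risks:  model-predicted risk scores (higher = shorter predicted survival)
    """
    concordant, permissible = 0.0, 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            # A pair is usable only when the patient with the earlier
            # time actually had the event (otherwise order is unknown).
            if times[i] < times[j] and events[i] == 1:
                permissible += 1
                if risks[i] > risks[j]:
                    concordant += 1          # model ranks the pair correctly
                elif risks[i] == risks[j]:
                    concordant += 0.5        # ties get half credit
    return concordant / permissible
```

A C-index of 0.5 corresponds to random ranking and 1.0 to perfect ranking, which puts the reported test-set values of 0.703 before and 0.780 after stain normalization in context.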


Author(s):  
Yi Liu ◽  
Nicolas Basty ◽  
Brandon Whitcher ◽  
Jimmy D Bell ◽  
Elena Sorokin ◽  
...  

Abstract
Cardiometabolic diseases are an increasing global health burden. While well-established socioeconomic, environmental, behavioural, and genetic risk factors have been identified, our understanding of the drivers and mechanisms underlying these complex diseases remains incomplete, and a better understanding is required to develop more effective therapeutic interventions. Magnetic resonance imaging (MRI) has been used to assess organ health in a number of studies, but large-scale population-based studies are still in their infancy. Using 38,683 abdominal MRI scans in the UK Biobank, we used deep learning to systematically quantify parameters from individual organs (liver, pancreas, spleen, kidneys, lungs, and adipose depots) and demonstrate that image-derived phenotypes (volume, fat, and iron content) reflect organ health and disease. We show that these traits have a substantial heritable component (8%–44%) and identify 93 independent genome-wide significant associations, including 3 associations with liver fat and one with liver iron that have not previously been reported, and 73 in traits that have not previously been studied. Overall, our work demonstrates the utility of deep learning to systematically quantify health parameters from high-throughput MRI across a range of organs and tissues of the abdomen, and to generate new insights into the genetic architecture of complex traits.


2018 ◽  
Vol 39 (suppl_1) ◽  
Author(s):  
A C P Diederichsen ◽  
L M Rasmussen ◽  
R Sogaard ◽  
J Lambrechtsen ◽  
F H Steffensen ◽  
...  
