A deep learning approach to automate high-resolution blood vessel reconstruction on computerised tomography images with or without the use of contrast agents

2020 ◽  
Vol 41 (Supplement_2) ◽  
Author(s):  
A.C Chandrashekar ◽  
A.H Handa ◽  
N.S Shivakumar ◽  
P.L Lapolla ◽  
V.G Grau ◽  
...  

Abstract

Background: Existing methods to reconstruct vascular structures from a computed tomography (CT) angiogram rely on the injection of intravenous contrast to enhance the radio-density within the vessel lumen. Pathological changes within the blood lumen, the vessel wall, or a combination of both prevent accurate 3D reconstruction. In aortic aneurysmal (AAA) disease, for example, a blood clot or thrombus adherent to the aortic wall within the expanding aneurysmal sac is present in 95% of cases. These deformations prevent current methods from automatically extracting vital, clinically relevant information.

Objectives: In this study, we utilised deep learning segmentation methods to establish a high-throughput, automated segmentation pipeline for pathological blood vessels (e.g., aortic aneurysm) in CT images acquired with or without the use of a contrast agent.

Methods: Twenty-six patients with paired non-contrast and contrast-enhanced CT images were randomly selected from an ethically approved ongoing study (Ethics Ref 13/SC/0250), manually annotated, and used for model training and evaluation (13/13). Data augmentation methods were implemented to diversify the training data set in a ratio of 10:1. We utilised a 3D U-Net with attention gating for both the aortic region-of-interest (ROI) detection and segmentation tasks. Trained architectures were evaluated using the DICE similarity score.

Results: Inter- and intra-observer analysis supports the accuracy of the manual segmentations used for model training (intra-class correlation coefficient, "ICC" = 0.995 and 1.00, respectively; p<0.001 for both). The performance of our attention-based U-Net (DICE score: 94.8±0.5%) in extracting both the inner lumen and the outer wall of the aortic aneurysm from CT angiograms (CTA) was compared against a generic 3D U-Net (DICE score: 89.5±0.6%) and displayed superior results (p<0.01). Fig 1A depicts the implementation of this network architecture within the aortic segmentation pipeline (automated ROI detection and aortic segmentation). This pipeline has allowed accurate and efficient extraction of the entire aortic volume from both contrast-enhanced CTA (DICE score: 95.3±0.6%) and non-contrast CT (DICE score: 93.2±0.7%) images. Fig 1B illustrates the model output alongside the labelled ground-truth segmentation for the pathological aneurysmal region; only minor differences are visually discernible (coloured boxes).

Conclusion: We developed a novel automated pipeline for high-resolution reconstruction of blood vessels using deep learning approaches. This pipeline enables automatic extraction of morphologic features of blood vessels and can be applied for research and potentially for clinical use.

Figure: Automated Segmentation of Blood Vessels

Funding Acknowledgement: Type of funding source: Foundation. Main funding source(s): University of Oxford Medical Research Fund, John Fell Fund
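The abstract evaluates segmentations with the DICE similarity score. As a minimal illustration (not the authors' code), the metric can be computed on binary masks as twice the overlap divided by the total foreground of both masks:

```python
import numpy as np

def dice_score(pred, target):
    """DICE similarity coefficient between two binary segmentation masks."""
    pred = np.asarray(pred).astype(bool)
    target = np.asarray(target).astype(bool)
    intersection = np.logical_and(pred, target).sum()
    total = pred.sum() + target.sum()
    if total == 0:
        return 1.0  # both masks empty: treat as perfect agreement by convention
    return 2.0 * intersection / total
```

The same formula applies voxel-wise to 3D volumes; a score of 1.0 means perfect overlap between prediction and ground truth.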

Author(s):  
Yunchao Yin ◽  
Derya Yakar ◽  
Rudi A. J. O. Dierckx ◽  
Kim B. Mouridsen ◽  
Thomas C. Kwee ◽  
...  

Abstract

Objectives: Deep learning has been proven able to stage liver fibrosis based on contrast-enhanced CT images. However, until now, the algorithm has been used as a black box and lacks transparency. This study aimed to provide a visual explanation of the diagnostic decisions made by deep learning.

Methods: The liver fibrosis staging network (LFS network) was developed on contrast-enhanced CT images in the portal venous phase in 252 patients with a histologically proven liver fibrosis stage. To give a visual explanation of the diagnostic decisions made by the LFS network, Gradient-weighted Class Activation Mapping (Grad-CAM) was used to produce location maps indicating where the LFS network focuses when predicting liver fibrosis stage.

Results: The LFS network had areas under the receiver operating characteristic curve of 0.92, 0.89, and 0.88 for staging significant fibrosis (F2–F4), advanced fibrosis (F3–F4), and cirrhosis (F4), respectively, on the test set. The location maps indicated that the LFS network focused more on the liver surface in patients without liver fibrosis (F0), while it focused more on the parenchyma of the liver and spleen in cases of cirrhosis (F4).

Conclusions: Deep learning methods are able to exploit CT-based information from the liver surface, liver parenchyma, and extrahepatic regions to predict liver fibrosis stage. Therefore, we suggest using the entire upper abdomen on CT images when developing deep learning–based liver fibrosis staging algorithms.

Key Points
• Deep learning algorithms can stage liver fibrosis using contrast-enhanced CT images, but the algorithm is still used as a black box and lacks transparency.
• Location maps produced by Gradient-weighted Class Activation Mapping can indicate the focus of the liver fibrosis staging network.
• Deep learning methods use CT-based information from the liver surface, liver parenchyma, and extrahepatic regions to predict liver fibrosis stage.
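Grad-CAM, the visualization technique used above, weights the final convolutional feature maps by the globally averaged gradients of the class score and keeps only positive contributions. A minimal NumPy sketch of that combination step (assuming the activations and gradients have already been extracted from a network; this is not the authors' implementation):

```python
import numpy as np

def grad_cam(activations, gradients):
    """Combine conv feature maps and class-score gradients into a location map.

    activations: (C, H, W) feature maps from the last convolutional layer
    gradients:   (C, H, W) gradients of the class score w.r.t. those maps
    """
    weights = gradients.mean(axis=(1, 2))             # global-average-pool the gradients
    cam = np.tensordot(weights, activations, axes=1)  # weighted sum over channels -> (H, W)
    cam = np.maximum(cam, 0)                          # ReLU: keep positive influence only
    if cam.max() > 0:
        cam = cam / cam.max()                         # normalize to [0, 1] for display
    return cam
```

The resulting (H, W) map is upsampled to the input image size and overlaid as a heatmap, which is how the location maps in the abstract highlight the liver surface or parenchyma.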


2021 ◽  
Vol 11 ◽  
Author(s):  
He Sui ◽  
Ruhang Ma ◽  
Lin Liu ◽  
Yaozong Gao ◽  
Wenhai Zhang ◽  
...  

Objective: To develop a deep learning-based model using esophageal thickness to detect esophageal cancer from unenhanced chest CT images.

Methods: We retrospectively identified 141 patients with esophageal cancer and 273 patients negative for esophageal cancer (at the time of imaging) for model training. Unenhanced chest CT images were collected and used to build a convolutional neural network (CNN) model for diagnosing esophageal cancer. The CNN is a VB-Net segmentation network that segments the esophagus, automatically quantifies the thickness of the esophageal wall, and detects the positions of esophageal lesions. To validate this model, 52 false-negative and 48 normal cases were further collected as a second dataset. The average performance of three radiologists and that of the same radiologists aided by the model were compared.

Results: The sensitivity and specificity of the esophageal cancer detection model were 88.8% and 90.9%, respectively, on the validation dataset. On the 52 missed esophageal cancer cases and the 48 normal cases, the sensitivity, specificity, and accuracy of the deep learning esophageal cancer detection model were 69%, 61%, and 65%, respectively. The independent results of the radiologists had sensitivities of 25%, 31%, and 27%; specificities of 78%, 75%, and 75%; and accuracies of 53%, 54%, and 53%. With the aid of the model, the results of the radiologists improved to sensitivities of 77%, 81%, and 75%; specificities of 75%, 74%, and 74%; and accuracies of 76%, 77%, and 75%, respectively.

Conclusions: A deep learning-based model can effectively detect esophageal cancer in unenhanced chest CT scans and improve the incidental detection of esophageal cancer.
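The sensitivity, specificity, and accuracy figures reported above follow the standard confusion-matrix definitions. A small sketch (generic helper, not tied to this study's data) makes the relationships explicit:

```python
def detection_metrics(tp, fn, tn, fp):
    """Standard detection metrics from confusion-matrix counts.

    tp: true positives (cancers flagged), fn: false negatives (cancers missed),
    tn: true negatives (normals cleared), fp: false positives (normals flagged).
    """
    sensitivity = tp / (tp + fn)                  # fraction of cancer cases detected
    specificity = tn / (tn + fp)                  # fraction of normal cases cleared
    accuracy = (tp + tn) / (tp + fn + tn + fp)    # fraction of all cases correct
    return sensitivity, specificity, accuracy
```

For example, on a set of 10 cancer and 10 normal cases, detecting 8 cancers and clearing 9 normals gives sensitivity 0.80, specificity 0.90, and accuracy 0.85.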


2020 ◽  
Vol 41 (6) ◽  
pp. 1061-1069
Author(s):  
L. Umapathy ◽  
B. Winegar ◽  
L. MacKinnon ◽  
M. Hill ◽  
M.I. Altbach ◽  
...  

2020 ◽  
Vol 75 (6) ◽  
pp. 481.e1-481.e8
Author(s):  
S. Agarwala ◽  
M. Kale ◽  
D. Kumar ◽  
R. Swaroop ◽  
A. Kumar ◽  
...  

2020 ◽  
Vol 191 ◽  
pp. 105387
Author(s):  
Floris Heutink ◽  
Valentin Koch ◽  
Berit Verbist ◽  
Willem Jan van der Woude ◽  
Emmanuel Mylanus ◽  
...  

2021 ◽  
Author(s):  
Evropi Toulkeridou ◽  
Carlos Enrique Gutierrez ◽  
Daniel Baum ◽  
Kenji Doya ◽  
Evan P Economo

Three-dimensional (3D) imaging, such as micro-computed tomography (micro-CT), is increasingly being used by organismal biologists for precise and comprehensive anatomical characterization. However, the segmentation of anatomical structures remains a bottleneck in research, often requiring tedious manual work. Here, we propose a pipeline for the fully automated segmentation of anatomical structures in micro-CT images utilizing state-of-the-art deep learning methods, selecting the ant brain as a test case. We implemented the U-Net architecture for 2D image segmentation as our convolutional neural network (CNN), combined with pixel-island detection. For training and validation of the network, we assembled a dataset of semi-manually segmented brain images of 94 ant species. The trained network predicted the brain area in ant images quickly and accurately; its performance on validation sets showed good agreement between prediction and target, scoring 80% Intersection over Union (IoU) and 90% Dice coefficient (F1) accuracy. While manual segmentation usually takes many hours for each brain, the trained network takes only a few minutes. Furthermore, our network generalizes to segmenting the whole neural system in full-body scans, and works in tests on distantly related and morphologically divergent insects (e.g., fruit flies). The latter suggests that methods like the one presented here apply generally across diverse taxa. Our method makes the construction of segmented maps and the morphological quantification of different species more efficient and scalable to large datasets, a step toward a big-data approach to organismal anatomy.
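The abstract reports both IoU and Dice (F1) for the same predictions; the two overlap metrics are deterministically related (Dice = 2·IoU / (1 + IoU), so 80% IoU corresponds to roughly 89% Dice, consistent with the figures above). A minimal sketch computing both on binary masks (generic helper, not the authors' code):

```python
import numpy as np

def iou_and_dice(pred, target):
    """Intersection over Union and Dice coefficient for two binary masks."""
    pred = np.asarray(pred).astype(bool)
    target = np.asarray(target).astype(bool)
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    iou = inter / union
    dice = 2 * inter / (pred.sum() + target.sum())
    return iou, dice
```

Because Dice weights the intersection twice, it is always at least as large as IoU for the same pair of masks, which is why a Dice score reads higher than the corresponding IoU.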


2021 ◽  
Author(s):  
Haesung Yoon ◽  
Jisoo Kim ◽  
Hyun Ji Lim ◽  
Mi-Jung Lee

Abstract

Background: Efforts to reduce the radiation dose have continued steadily, with new reconstruction techniques. Recently, image denoising algorithms using artificial neural networks, termed deep learning reconstruction (DLR), have been applied to CT image reconstruction to overcome the drawbacks of iterative reconstruction (IR). The purpose of our study was to compare the objective and subjective image quality of DLR and IR on pediatric abdomen and chest CT images.

Methods: This retrospective study included pediatric body CT images from February 2020 to October 2020, performed on 51 patients (34 boys and 17 girls; age 1–18 years). Non-contrast chest CT (n = 16), contrast-enhanced chest CT (n = 12), and contrast-enhanced abdomen CT (n = 23) images were included. Standard 50% adaptive statistical iterative reconstruction V (ASIR-V) images were compared to images with 100% ASIR-V and DLR at medium and high strengths. Attenuation, noise, contrast-to-noise ratio (CNR), and signal-to-noise ratio (SNR) measurements were performed. Overall image quality, artifacts, and noise were subjectively assessed by two radiologists using a four-point scale (superior, average, suboptimal, and unacceptable). Quantitative and qualitative parameters were compared using repeated-measures analysis of variance (ANOVA) with Bonferroni correction and Wilcoxon signed-rank tests.

Results: DLR had better CNR and SNR than 50% ASIR-V in both pediatric chest and abdomen CT images. Compared with 50% ASIR-V, high-strength DLR was associated with noise reduction in non-contrast chest CT (33.0%), contrast-enhanced chest CT (39.6%), and contrast-enhanced abdomen CT (38.7%), with increases in CNR of 149.1%, 105.8%, and 53.1%, respectively. The subjective assessment of overall image quality and noise was also better on DLR images (p < 0.001). However, there was no significant difference in artifacts between reconstruction methods.

Conclusion: Compared with 50% ASIR-V, DLR improved pediatric body CT images with significant noise reduction. However, artifacts were not improved by DLR, regardless of strength.
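SNR and CNR, the objective measures compared above, are typically computed from region-of-interest (ROI) attenuation statistics; exact formulations vary between studies, so the helpers below use one common definition (mean over standard deviation, and attenuation difference over background noise) purely as an illustration:

```python
import numpy as np

def snr(roi):
    """Signal-to-noise ratio: mean ROI attenuation over its standard deviation."""
    roi = np.asarray(roi, dtype=float)
    return roi.mean() / roi.std()

def cnr(roi, background):
    """Contrast-to-noise ratio: attenuation difference between a tissue ROI and
    a background ROI, divided by the background noise (standard deviation)."""
    roi = np.asarray(roi, dtype=float)
    background = np.asarray(background, dtype=float)
    return abs(roi.mean() - background.mean()) / background.std()
```

Because both metrics divide by a noise term, a denoising reconstruction such as DLR raises SNR and CNR even when the mean attenuation values are unchanged.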

