A 3D deep learning approach to epicardial fat segmentation in non-contrast and post-contrast cardiac CT images

2021 ◽ Vol 7 ◽ pp. e806
Author(s): Thanongchai Siriapisith, Worapan Kusakunniran, Peter Haddawy

Epicardial fat (ECF) is the localized fat that surrounds the heart muscle (myocardium) and is enclosed by the thin pericardium membrane. Segmenting the ECF is one of the most difficult medical image segmentation tasks. Because epicardial fat infiltrates the grooves between the cardiac chambers and is contiguous with the cardiac muscle, segmentation must rely on both location and voxel intensity. Recently, deep learning methods have been used effectively to solve medical image segmentation problems in several domains with state-of-the-art performance. This paper presents a novel approach to 3D segmentation of ECF that integrates attention gates and deep supervision into the 3D U-Net deep learning architecture. The proposed method significantly improves segmentation performance compared with the standard 3D U-Net, achieving an average Dice score of 90.06% on non-contrast CT datasets. Transfer learning from a model pre-trained on non-contrast CT to a contrast-enhanced CT dataset was also performed, achieving a Dice score of 88.16% on the contrast-enhanced dataset.
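The attention-gated skip connection described above can be made concrete with a short sketch. The following is a minimal additive 3D attention gate in PyTorch; it is not the authors' code, and the module and parameter names (`AttentionGate3D`, `gate_ch`, `skip_ch`, `inter_ch`) are illustrative assumptions following the common additive attention formulation used with attention U-Nets.

```python
import torch.nn as nn

class AttentionGate3D(nn.Module):
    """Additive attention gate for one 3D U-Net skip connection (sketch)."""
    def __init__(self, gate_ch, skip_ch, inter_ch):
        super().__init__()
        self.w_g = nn.Conv3d(gate_ch, inter_ch, kernel_size=1)   # gating signal
        self.w_x = nn.Conv3d(skip_ch, inter_ch, kernel_size=1)   # encoder features
        self.psi = nn.Conv3d(inter_ch, 1, kernel_size=1)         # attention coefficients
        self.relu, self.sigmoid = nn.ReLU(inplace=True), nn.Sigmoid()

    def forward(self, g, x):
        # g: decoder features upsampled to x's spatial size; x: skip features
        alpha = self.sigmoid(self.psi(self.relu(self.w_g(g) + self.w_x(x))))
        return x * alpha  # suppress irrelevant voxels before concatenation
```

In a full model, one such gate would sit on each skip connection, and deep supervision would add auxiliary segmentation losses at intermediate decoder resolutions so that gradients reach the early layers directly.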

2020 ◽ Vol 41 (Supplement_2)
Author(s): A Chandrashekar, N Shivakumar, P Lapolla, A Handa, V Grau, et al.

Abstract Introduction Contrast-enhanced computerised tomographic (CT) angiograms are widely used in cardiovascular imaging to obtain a non-invasive view of arterial structures. In aortic aneurysmal disease (AAA), CT angiograms are required prior to surgical intervention to differentiate between blood and the intra-luminal thrombus, which is present in 95% of cases. However, contrast agents are associated with complications at the injection site as well as renal toxicity, leading to contrast-induced nephropathy (CIN) and renal failure. Purpose We hypothesised that the raw data acquired from a non-contrast CT contain sufficient information to differentiate blood from other soft tissue components. We therefore utilised deep learning methods to learn the subtle differences between the various soft tissue components in order to simulate contrast-enhanced CT images without the need for contrast agents. Methods Twenty-six AAA patients with paired non-contrast and contrast-enhanced CT images were randomly selected from an ethically approved ongoing study (Ethics Ref 13/SC/0250) and used for model training and evaluation (13/13 split). Non-contrast axial slices within the aneurysmal region from 10 patients (n=100) were sampled for the underlying Hounsfield unit (HU) distribution at the lumen, intra-luminal thrombus, and interface locations, identified from their paired contrast axial slices. Subsequently, paired axial slices within the training cohort were augmented at a ratio of 10:1 to produce a total of 23,551 2-D images. We trained a 2-D cycle generative adversarial network (cycleGAN) for this non-contrast-to-contrast transformation task. Model output was assessed by comparison to the contrast image, which serves as a gold standard, using image similarity metrics (e.g., the SSIM index). Results Sampling HUs within the non-contrast CT scans across multiple axial slices (Figure 1A) revealed significant differences between the blood-flow lumen (yellow), blood/thrombus interface (red), and thrombus (blue) regions (p<0.001 for all comparisons). This highlighted the intrinsic differences between the regions and established the foundation for the subsequent deep learning methods. The Non-Contrast-to-Contrast (NC2C)-cycleGAN was trained with a learning rate of 0.0002 for 200 epochs on 256 x 256 images centred around the aorta. Figure 1B depicts "contrast-enhanced" images generated from non-contrast CT images across the aortic length from the testing cohort. This preliminary model is able to differentiate between the lumen and intra-luminal thrombus of aneurysmal sections with reasonable resemblance to the ground truth. Conclusion This study describes, for the first time, the ability to differentiate between visually incoherent soft tissue regions in non-contrast CT images using deep learning methods. Ultimately, refinement of this methodology may negate the use of intravenous contrast and prevent related complications. Figure: CTA generation from non-contrast CTs. Funding Acknowledgement Type of funding source: Foundation. Main funding source(s): Clarendon
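As a rough illustration of the cycleGAN objective behind this NC2C translation, here is a minimal PyTorch sketch of the generator-side loss. It is not the authors' implementation; the networks `G_nc2c`, `G_c2nc`, `D_c`, and `D_nc` and the cycle weight `lam` are assumed placeholders, and the least-squares adversarial term follows the original cycleGAN formulation.

```python
import torch
import torch.nn as nn

l1, mse = nn.L1Loss(), nn.MSELoss()  # cycle-consistency / least-squares GAN losses

def generator_loss(G_nc2c, G_c2nc, D_c, D_nc, real_nc, real_c, lam=10.0):
    fake_c = G_nc2c(real_nc)               # simulated contrast-enhanced image
    fake_nc = G_c2nc(real_c)               # simulated non-contrast image
    pred_c, pred_nc = D_c(fake_c), D_nc(fake_nc)
    # adversarial terms: each generator tries to fool its discriminator
    adv = mse(pred_c, torch.ones_like(pred_c)) + mse(pred_nc, torch.ones_like(pred_nc))
    # cycle consistency: translating back should recover the original image
    cyc = l1(G_c2nc(fake_c), real_nc) + l1(G_nc2c(fake_nc), real_c)
    return adv + lam * cyc
```

Training would alternate this with discriminator updates, e.g. via `torch.optim.Adam(..., lr=2e-4)` to match the reported learning rate of 0.0002.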


2021 ◽ Vol 17 (1)
Author(s): Ninlawan Thammasiri, Chutimon Thanaboonnipat, Nan Choisunirachon, Damri Darawiroj

Abstract Background It is difficult to examine mild to moderate feline intra-thoracic lymphadenopathy via thoracic radiography. Despite previous information from computed tomographic (CT) images of intra-thoracic lymph nodes, some factors related to the animals and the CT settings remained poorly elucidated. Therefore, this study aimed to investigate the effect of internal factors from the animals and external factors from the CT procedure on the feasibility of detecting intra-thoracic lymph nodes. Twenty-four client-owned, clinically healthy cats were categorized into three groups according to age. They underwent pre- and post-contrast-enhanced CT of the whole thorax, followed by inter-group evaluation and comparison of the sternal, cranial mediastinal, and tracheobronchial lymph nodes. Results Post-contrast-enhanced CT appearances revealed that the intra-thoracic lymph nodes of kittens were invisible, whereas the sternal, cranial mediastinal, and tracheobronchial nodes of cats older than 7 months were detected (6/24, 9/24, and 7/24, respectively). The maximum widths of these lymph nodes were 3.93 ± 0.74 mm, 4.02 ± 0.65 mm, and 3.51 ± 0.62 mm, respectively. Lymph node sizes did not differ significantly by age. The transverse lymph node width of males was larger than that of females (P = 0.0425). In addition, the detection score of the lymph nodes was affected by slice thickness (P < 0.01) and lymph node width (P = 0.0049). Furthermore, an irregular soft tissue structure, possibly the thymus, was detected in all juvenile cats and three mature cats. Conclusions Despite the additional information on intra-thoracic lymph nodes in CT images, which can be used to investigate lymphatic-related abnormalities, age, sex, and the slice thickness of the CT images must also be considered.


Author(s): Yunchao Yin, Derya Yakar, Rudi A. J. O. Dierckx, Kim B. Mouridsen, Thomas C. Kwee, et al.

Abstract Objectives Deep learning has been shown to be able to stage liver fibrosis based on contrast-enhanced CT images. However, until now, the algorithm has been used as a black box and lacks transparency. This study aimed to provide a visual-based explanation of the diagnostic decisions made by deep learning. Methods The liver fibrosis staging network (LFS network) was developed on contrast-enhanced CT images in the portal venous phase in 252 patients with a histologically proven liver fibrosis stage. To give a visual explanation of the diagnostic decisions made by the LFS network, Gradient-weighted Class Activation Mapping (Grad-CAM) was used to produce location maps indicating where the LFS network focuses when predicting liver fibrosis stage. Results The LFS network had areas under the receiver operating characteristic curve of 0.92, 0.89, and 0.88 for staging significant fibrosis (F2–F4), advanced fibrosis (F3–F4), and cirrhosis (F4), respectively, on the test set. The location maps indicated that the LFS network focused more on the liver surface in patients without liver fibrosis (F0), while it focused more on the parenchyma of the liver and spleen in cases of cirrhosis (F4). Conclusions Deep learning methods are able to exploit CT-based information from the liver surface, the liver parenchyma, and extrahepatic structures to predict liver fibrosis stage. Therefore, we suggest using the entire upper abdomen on CT images when developing deep learning–based liver fibrosis staging algorithms. Key Points
• Deep learning algorithms can stage liver fibrosis using contrast-enhanced CT images, but the algorithm has so far been used as a black box and lacks transparency.
• Location maps produced by Gradient-weighted Class Activation Mapping can indicate the focus of the liver fibrosis staging network.
• Deep learning methods use CT-based information from the liver surface, liver parenchyma, and extrahepatic structures to predict liver fibrosis stage.
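For readers unfamiliar with how such location maps are produced, the following is a minimal Grad-CAM sketch in PyTorch. It is not the LFS network code; `model`, `target_layer`, `ct_input`, and `class_idx` are hypothetical placeholders for the trained classifier, its last convolutional layer, a preprocessed CT input of shape (1, C, H, W), and the fibrosis-stage class of interest.

```python
import torch

def grad_cam(model, target_layer, ct_input, class_idx):
    """Return a normalized class activation map for one input (sketch)."""
    feats, grads = [], []
    fh = target_layer.register_forward_hook(lambda m, i, o: feats.append(o))
    bh = target_layer.register_full_backward_hook(lambda m, gi, go: grads.append(go[0]))
    score = model(ct_input)[0, class_idx]   # logit of the chosen fibrosis stage
    model.zero_grad()
    score.backward()                        # gradients w.r.t. the feature maps
    fh.remove(); bh.remove()
    weights = grads[0].mean(dim=(2, 3), keepdim=True)  # global-average-pool gradients
    cam = torch.relu((weights * feats[0]).sum(dim=1))  # weighted feature-map sum
    return cam / (cam.max() + 1e-8)         # location map scaled to [0, 1]
```

Upsampled to the input resolution and overlaid on the CT slice, such a map highlights the regions (liver surface, parenchyma, spleen) that drive the prediction.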


Symmetry ◽ 2021 ◽ Vol 13 (11) ◽ pp. 2107
Author(s): Xin Wei, Huan Wan, Fanghua Ye, Weidong Min

In recent years, medical image segmentation (MIS) has made major breakthroughs due to the success of deep learning. However, existing MIS algorithms still suffer from two types of uncertainty: (1) uncertainty over the plausible segmentation hypotheses and (2) uncertainty over segmentation performance. Both types of uncertainty reduce the effectiveness of MIS algorithms and, in turn, the reliability of medical diagnoses. Many studies have addressed the former but ignore the latter. We therefore propose the hierarchical predictable segmentation network (HPS-Net), which consists of a new network structure, a new loss function, and a cooperative training mode. To our knowledge, HPS-Net is the first network in the MIS area that can generate both diverse segmentation hypotheses, addressing the first type of uncertainty, and performance predictions for those hypotheses, addressing the second. Extensive experiments were conducted on the LIDC-IDRI and ISIC2018 datasets. The results show that HPS-Net achieves the highest Dice score among the benchmark methods, i.e., the best segmentation performance. The results also confirm that HPS-Net can effectively predict the true negative rate (TNR) and true positive rate (TPR) of its hypotheses.
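To make the reported quantities concrete, here is a small PyTorch sketch of the Dice score, TPR, and TNR for binary masks. These are the standard metric definitions, not the HPS-Net code; the tensor names are illustrative.

```python
import torch

def dice_score(pred, target, eps=1e-8):
    # pred, target: binary {0, 1} tensors of identical shape
    inter = (pred * target).sum()
    return (2 * inter + eps) / (pred.sum() + target.sum() + eps)

def tpr_tnr(pred, target, eps=1e-8):
    tp = ((pred == 1) & (target == 1)).sum().float()  # true positives
    tn = ((pred == 0) & (target == 0)).sum().float()  # true negatives
    fn = ((pred == 0) & (target == 1)).sum().float()  # false negatives
    fp = ((pred == 1) & (target == 0)).sum().float()  # false positives
    return tp / (tp + fn + eps), tn / (tn + fp + eps)
```

In HPS-Net's setting, the measure head would predict such per-hypothesis values, so a hypothesis whose predicted TPR or TNR is low can be flagged before it informs a diagnosis.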


2021 ◽ pp. 161-174
Author(s): Pashupati Bhatt, Ashok Kumar Sahoo, Saumitra Chattopadhyay, Chandradeep Bhatt

2020 ◽ Vol 214 (3) ◽ pp. 605-612
Author(s): Takashi Tanaka, Yong Huang, Yohei Marukawa, Yuka Tsuboi, Yoshihisa Masaoka, et al.
