Deep segmentation of the liver and the hepatic tumors from abdomen tomography images

Author(s):  
Nermeen Elmenabawy ◽  
Mervat El-Seddek ◽  
Hossam El-Din Moustafa ◽  
Ahmed Elnakib

A pipelined framework is proposed for accurate, automated, simultaneous segmentation of the liver as well as the hepatic tumors from computed tomography (CT) images. The introduced framework is composed of three pipelined levels. First, two different transfer deep convolutional neural networks (CNNs) are applied to obtain high-level compact features of the CT images. Second, a pixel-wise classifier is used to obtain two output classified maps, one for each CNN model. Finally, a fusion neural network (FNN) is used to integrate the two maps. Experiments performed on the MICCAI 2017 liver tumor segmentation (LiTS) challenge database result in a Dice similarity coefficient (DSC) of 93.5% for segmentation of the liver and of 74.40% for segmentation of the lesions, using a 5-fold cross-validation scheme. Comparison with state-of-the-art techniques on the same data shows the competitive performance of the proposed framework for simultaneous liver and tumor segmentation.
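
A minimal sketch of the fusion stage described above, assuming each backbone already yields a per-pixel class-probability map; the small fusion network (two 1×1 convolutions) and all layer sizes are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class FusionNN(nn.Module):
    """Toy fusion network: merges two per-pixel class-probability maps.

    Assumption: each backbone (e.g., two different transfer CNNs) outputs a
    (B, C, H, W) probability map with C classes (background/liver/tumor).
    """
    def __init__(self, num_classes: int = 3):
        super().__init__()
        self.fuse = nn.Sequential(
            nn.Conv2d(2 * num_classes, 16, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(16, num_classes, kernel_size=1),
        )

    def forward(self, map_a: torch.Tensor, map_b: torch.Tensor) -> torch.Tensor:
        # Concatenate the two classified maps along the channel axis and
        # let the fusion layers learn how to combine them.
        return self.fuse(torch.cat([map_a, map_b], dim=1))

# Usage with random stand-ins for the two CNN output maps.
a = torch.softmax(torch.randn(1, 3, 256, 256), dim=1)
b = torch.softmax(torch.randn(1, 3, 256, 256), dim=1)
fused_logits = FusionNN()(a, b)          # (1, 3, 256, 256)
labels = fused_logits.argmax(dim=1)      # final per-pixel segmentation
```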

2021 ◽  
Author(s):  
Maria Kawula ◽  
Dinu Purice ◽  
Minglun Li ◽  
Gerome Vivar ◽  
Seyed-Ahmad Ahmadi ◽  
...  

Abstract

Background: The evaluation of automatic segmentation algorithms is commonly performed using geometric metrics, yet an evaluation based on dosimetric parameters might be more relevant in clinical practice but is still lacking in the literature. The aim of this study was to investigate, for the first time, the impact of state-of-the-art 3D U-Net-generated organ delineations on dose optimization in intensity-modulated radiation therapy (IMRT) for prostate patients.

Methods: A database of 69 computed tomography (CT) images with prostate, bladder, and rectum delineations was used for single-label 3D U-Net training with a Dice similarity coefficient (DSC)-based loss. Volumetric modulated arc therapy (VMAT) plans were generated for both manual and automatic segmentations with the same optimization settings, chosen to give consistent plans when applying perturbations to the manual segmentations. Contours were evaluated in terms of DSC and average and 95% Hausdorff distance (HD). Dose distributions were evaluated with the manual segmentation as reference, using dose-volume histogram (DVH) parameters and a 3%/3 mm gamma criterion with 10% dose cut-off. Pearson correlation coefficients between DSC and the dosimetric metrics (gamma index and DVH parameters) were calculated.

Results: The 3D U-Net based segmentation achieved a DSC of 0.87(0.03) for the prostate, 0.97(0.01) for the bladder, and 0.89(0.04) for the rectum. The mean and 95% HD were below 1.6(0.4) mm and 5(4) mm, respectively. The DVH parameters V60/65/70Gy for the bladder and V50/65/70Gy for the rectum showed agreement between dose distributions within ±5% and ±2%, respectively. The DVH parameters for the prostate and the prostate+3 mm margin (surrogate clinical target volume) showed good target coverage for the 3D U-Net segmentation, with the exception of one case. The average gamma pass rate was 85%. A comparison between geometric and dosimetric metrics showed no strong statistically significant correlation.

Conclusions: The 3D U-Net developed for this work achieved state-of-the-art geometric performance. The study highlights the importance of dosimetric evaluation on top of standard geometric parameters and concludes that the automatic segmentation is sufficiently accurate to assist physicians in contouring organs in CT images of the male pelvic region, an important step towards a fully automated workflow in IMRT.
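
The two metric families compared in this study can be computed directly from voxel arrays: the DSC from two binary masks, and a DVH point Vx as the percentage of an organ's volume receiving at least x Gy. The sketch below uses illustrative random arrays and thresholds, not the study's data.

```python
import numpy as np

def dice_coefficient(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks."""
    a, b = mask_a.astype(bool), mask_b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

def dvh_v(dose: np.ndarray, organ_mask: np.ndarray, threshold_gy: float) -> float:
    """Vx: percentage of the organ volume receiving at least `threshold_gy` Gy."""
    organ_dose = dose[organ_mask.astype(bool)]
    return 100.0 * (organ_dose >= threshold_gy).mean()

# Illustrative arrays: a 3D dose grid (Gy) and a bladder mask.
dose = np.random.uniform(0, 78, size=(64, 64, 64))
bladder = np.zeros(dose.shape, dtype=bool)
bladder[20:40, 20:40, 20:40] = True

print(dvh_v(dose, bladder, 65.0))   # e.g. V65Gy of the bladder, in percent
```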


2017 ◽  
Vol 2017 ◽  
pp. 1-11 ◽  
Author(s):  
Weiwei Wu ◽  
Shuicai Wu ◽  
Zhuhuang Zhou ◽  
Rui Zhang ◽  
Yanhua Zhang

Three-dimensional (3D) liver tumor segmentation from computed tomography (CT) images is a prerequisite for computer-aided diagnosis, treatment planning, and monitoring of liver cancer. Despite many years of research, 3D liver tumor segmentation remains a challenging task. In this paper, an efficient semiautomatic method is proposed for liver tumor segmentation in CT volumes based on improved fuzzy C-means (FCM) and graph cuts. With a single seed point, the tumor volume of interest (VOI) is extracted using a confidence connected region growing algorithm to reduce computational cost. Then, initial foreground/background regions are labeled automatically, and a kernelized FCM with spatial information is incorporated into graph cuts segmentation to increase segmentation accuracy. The proposed method was evaluated on a public clinical dataset (3Dircadb), which includes 15 CT volumes containing liver tumors of various sizes. We achieved an average volumetric overlap error (VOE) of 29.04% and Dice similarity coefficient (DICE) of 0.83, with an average processing time of 45 s per tumor. The experimental results show that the proposed method is accurate for 3D liver tumor segmentation while reducing processing time.
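
For illustration, the sketch below computes the VOE metric reported above and extracts a seed-based VOI; note that a plain tolerance-based flood fill (scikit-image) stands in for the confidence connected region growing described in the abstract, and the toy volume, seed, and tolerance are assumptions.

```python
import numpy as np
from skimage.segmentation import flood

def voe(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    """Volumetric overlap error in percent: 100 * (1 - |A ∩ B| / |A ∪ B|)."""
    a, b = mask_a.astype(bool), mask_b.astype(bool)
    union = np.logical_or(a, b).sum()
    return 100.0 * (1.0 - np.logical_and(a, b).sum() / union) if union else 0.0

# Seed-based VOI extraction (simplified stand-in for confidence connected growing).
ct = np.random.normal(60, 15, size=(64, 64, 64))   # toy HU-like volume
seed = (32, 32, 32)                                 # single user-provided seed point
voi_mask = flood(ct, seed, tolerance=20.0)

reference = np.zeros_like(voi_mask)                 # stand-in manual delineation
reference[24:40, 24:40, 24:40] = True
print(voe(voi_mask, reference))
```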


2020 ◽  
Vol 2020 ◽  
pp. 1-13
Author(s):  
Zhiqiang Tian ◽  
Jingyi Song ◽  
Chenyang Zhang ◽  
Xiaohui Tian ◽  
Zhong Shi ◽  
...  

Accurate segmentation of organs-at-risk (OARs) in computed tomography (CT) is key to treatment planning in radiation therapy (RT). Manually delineating OARs over the hundreds of images of a typical CT scan can be time-consuming and error-prone. Deep convolutional neural networks with specific structures, such as U-Net, have proven effective for medical image segmentation. In this work, we propose an end-to-end deep neural network for multiorgan segmentation with higher accuracy and lower complexity. Compared with several state-of-the-art methods, the proposed accuracy-complexity adjustment module (ACAM) can increase segmentation accuracy while simultaneously reducing model complexity and memory usage. An attention-based multiscale aggregation module (MAM) is also proposed for further improvement. Experimental results on chest CT datasets show that the proposed network achieves competitive Dice similarity coefficient results with fewer floating-point operations (FLOPs) for multiple organs, outperforming several state-of-the-art methods.
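
One possible reading of attention-based multiscale aggregation is that feature maps from several scales are upsampled to a common resolution and combined with learned per-scale attention weights. The module below is an illustrative assumption under that reading, not the authors' MAM or ACAM definition.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiScaleAggregation(nn.Module):
    """Illustrative attention-weighted fusion of multi-scale feature maps."""
    def __init__(self, channels: int):
        super().__init__()
        # A shared 1x1 conv produces a per-scale attention logit at every pixel.
        self.score = nn.Conv2d(channels, 1, kernel_size=1)

    def forward(self, feats):
        # feats: list of (B, C, Hi, Wi) tensors from different decoder scales.
        target = feats[0].shape[-2:]
        up = [F.interpolate(f, size=target, mode="bilinear", align_corners=False)
              for f in feats]
        logits = torch.stack([self.score(f) for f in up], dim=0)   # (S, B, 1, H, W)
        weights = torch.softmax(logits, dim=0)                     # attention over scales
        return (weights * torch.stack(up, dim=0)).sum(dim=0)       # (B, C, H, W)

feats = [torch.randn(1, 32, 64, 64), torch.randn(1, 32, 32, 32), torch.randn(1, 32, 16, 16)]
out = MultiScaleAggregation(32)(feats)
print(out.shape)   # torch.Size([1, 32, 64, 64])
```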


2014 ◽  
Vol 898 ◽  
pp. 684-687
Author(s):  
Yun Tao Wei ◽  
Yi Bing Zhou

The segmentation of the liver from computed tomography (CT) data has gained considerable importance in the medical image processing field. In this paper, we present a survey of liver segmentation methods and techniques using CT images. An adaptive initialization method was developed to produce fully automatic processing frameworks based on graph-cut and gradient flow active contour algorithms. This method was applied to abdominal CT images for segmentation of liver tissue and hepatic tumors. Twenty-five anonymized datasets were randomly collected from several radiology centres, without specific requirements on acquisition parameter settings or patient clinical situation as inclusion criteria. The resulting automatic segmentations of liver tissue and tumors were compared to reference standard delineations performed manually by a specialist. Segmentation accuracy was assessed through the following evaluation framework: Dice similarity coefficient, false negative ratio, false positive ratio, and processing time. The implemented initialization method allows fully automatic segmentation, leading to superior overall performance of the graph-cut algorithm in terms of accuracy and processing time. The initialization method presented here proved suitable and reliable for two different segmentation techniques and could be further extended.
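
A compact sketch of a generic graph-cut segmentation on a 2D slice using the PyMaxflow library; the intensity-based unary and smoothness terms and all constants are illustrative assumptions and do not reproduce the adaptive initialization described above.

```python
import numpy as np
import maxflow  # PyMaxflow

# Toy 2D slice: a bright object on a darker background.
img = np.zeros((64, 64))
img[20:44, 20:44] = 1.0
img += np.random.normal(0, 0.1, img.shape)

g = maxflow.Graph[float]()
nodeids = g.add_grid_nodes(img.shape)

# Pairwise smoothness term between 4-connected neighbours.
g.add_grid_edges(nodeids, weights=2.0)

# Unary data terms: squared distance to assumed foreground/background intensities.
fg_cost = (img - 1.0) ** 2
bg_cost = (img - 0.0) ** 2
# A pixel pays fg_cost if it lands on the sink (foreground) side of the cut
# and bg_cost if it stays on the source (background) side.
g.add_grid_tedges(nodeids, fg_cost, bg_cost)

g.maxflow()
foreground = g.get_grid_segments(nodeids)   # True = sink side = foreground mask
print(foreground.sum())
```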


2021 ◽  
Author(s):  
Tahereh Mahmoudi ◽  
Zahra Mousavi Kouzahkanan ◽  
Amir Reza Radmard ◽  
Raheleh Kafieh ◽  
Aneseh Salehnia ◽  
...  

Abstract Fully automated and volumetric segmentation of critical tumors may play a crucial role in diagnosis and surgical planning. One of the most challenging tumor segmentation tasks is the localization of pancreatic ductal adenocarcinoma (PDAC). Exclusive application of conventional methods does not appear promising. Deep learning approaches have achieved great success in computer-aided diagnosis, especially in biomedical image segmentation. This paper introduces a framework based on convolutional neural networks (CNNs) for segmentation of the PDAC mass and surrounding vessels in CT images that also incorporates powerful classic features. First, a 3D CNN architecture is used to localize the pancreas region from the whole CT volume using a 3D local binary pattern (LBP) map of the original image. Segmentation of the PDAC mass is subsequently performed using a 2D attention U-Net and a Texture Attention U-Net (TAU-Net). TAU-Net is introduced by fusing dense scale-invariant feature transform (SIFT) and LBP descriptors into the attention U-Net. An ensemble model is then used to combine the advantages of both networks using a 3D CNN. In addition, to reduce the effects of imbalanced data, a new loss function is proposed as a weighted combination of three classic losses: generalized Dice loss (GDL), weighted pixel-wise cross-entropy loss (WPCE), and boundary loss. Due to an insufficient sample size for vessel segmentation, the above-mentioned pre-trained networks were fine-tuned for that task. Experimental results show that the proposed method improves the Dice score for PDAC mass segmentation in the portal-venous phase by 7.52% compared to state-of-the-art methods. Moreover, three-dimensional visualization of the tumor and surrounding vessels can facilitate the evaluation of PDAC treatment response.
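
The weighted combination above can be written as L = λ1·GDL + λ2·WPCE + λ3·L_boundary. The sketch below wires simple versions of the three terms together; the weights, the distance-map-based boundary term, and the binary setting are illustrative assumptions, since the abstract does not give the exact formulation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def generalized_dice_loss(logits, target, eps=1e-6):
    """Generalized Dice loss with inverse squared class-volume weights."""
    probs = torch.softmax(logits, dim=1)                              # (B, C, H, W)
    onehot = F.one_hot(target, probs.shape[1]).permute(0, 3, 1, 2).float()
    w = 1.0 / (onehot.sum(dim=(0, 2, 3)) ** 2 + eps)
    inter = (w * (probs * onehot).sum(dim=(0, 2, 3))).sum()
    union = (w * (probs + onehot).sum(dim=(0, 2, 3))).sum()
    return 1.0 - 2.0 * inter / (union + eps)

class CombinedLoss(nn.Module):
    """Weighted sum of GDL, weighted pixel-wise cross-entropy, and a boundary term.

    The lambda weights and the boundary term are illustrative placeholders.
    """
    def __init__(self, class_weights, lambdas=(1.0, 1.0, 0.01)):
        super().__init__()
        self.wpce = nn.CrossEntropyLoss(weight=class_weights)
        self.lambdas = lambdas

    def forward(self, logits, target, dist_map):
        # dist_map: precomputed signed distance to the ground-truth boundary,
        # as used in distance-map based boundary losses (assumed formulation).
        fg_probs = torch.softmax(logits, dim=1)[:, 1]                 # foreground channel
        boundary = (fg_probs * dist_map).mean()
        l1, l2, l3 = self.lambdas
        return (l1 * generalized_dice_loss(logits, target)
                + l2 * self.wpce(logits, target)
                + l3 * boundary)

logits = torch.randn(2, 2, 64, 64, requires_grad=True)
target = torch.randint(0, 2, (2, 64, 64))
dist = torch.randn(2, 64, 64)                                         # stand-in distance map
loss = CombinedLoss(class_weights=torch.tensor([0.2, 0.8]))(logits, target, dist)
loss.backward()
```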


2021 ◽  
Vol 12 ◽  
Author(s):  
Sarahi Rosas-Gonzalez ◽  
Taibou Birgui-Sekou ◽  
Moncef Hidane ◽  
Ilyess Zemmoura ◽  
Clovis Tauber

Accurate brain tumor segmentation is crucial for clinical assessment, follow-up, and subsequent treatment of gliomas. While convolutional neural networks (CNNs) have become the state of the art for this task, most proposed models either use 2D architectures that ignore 3D contextual information or 3D models that require large memory capacity and extensive training databases. In this study, an ensemble of two kinds of U-Net-like models based on 3D and 2.5D convolutions is proposed to segment multimodal magnetic resonance images (MRI). The 3D model uses concatenated data in a modified U-Net architecture. In contrast, the 2.5D model is based on a multi-input strategy to extract low-level features from each modality independently, and on a new 2.5D Multi-View Inception block that merges features from different views of a 3D image while aggregating multi-scale features. The Asymmetric Ensemble of Asymmetric U-Net (AE AU-Net) built from both is designed to balance increased multi-scale and 3D contextual information extraction against low memory consumption. Experiments on the BraTS 2019 dataset show that our model improves enhancing-tumor sub-region segmentation. Overall performance is comparable with state-of-the-art results, although with lower training-data and memory requirements. In addition, we provide voxel-wise and structure-wise uncertainties of the segmentation results, and we establish qualitative and quantitative relationships between uncertainty and prediction errors. Dice similarity coefficients for the whole tumor, tumor core, and enhancing tumor regions on the BraTS 2019 validation dataset were 0.902, 0.815, and 0.773, respectively. We also applied our method to BraTS 2018, with corresponding Dice scores of 0.908, 0.838, and 0.800.
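
One common way to realize a 2.5D multi-view input is to take the three orthogonal slices through each voxel; the helper below extracts them (volume size and voxel indices are illustrative), and the last lines show simple softmax averaging of two models as a stand-in for the ensemble step.

```python
import numpy as np

def orthogonal_slices(volume: np.ndarray, z: int, y: int, x: int):
    """Return the axial, coronal, and sagittal slices through voxel (z, y, x)."""
    axial    = volume[z, :, :]
    coronal  = volume[:, y, :]
    sagittal = volume[:, :, x]
    return axial, coronal, sagittal

vol = np.random.randn(128, 160, 160).astype(np.float32)       # toy MRI volume
views = orthogonal_slices(vol, 64, 80, 80)
print([v.shape for v in views])   # [(160, 160), (128, 160), (128, 160)]

# Ensemble stand-in: average the per-voxel class probabilities of two models.
probs_3d  = np.random.dirichlet(np.ones(4), size=vol.shape)   # (D, H, W, 4)
probs_25d = np.random.dirichlet(np.ones(4), size=vol.shape)
labels = np.argmax(0.5 * probs_3d + 0.5 * probs_25d, axis=-1)
```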


Author(s):  
Jianpeng Zhang ◽  
Yutong Xie ◽  
Pingping Zhang ◽  
Hao Chen ◽  
Yong Xia ◽  
...  

Automated segmentation of liver tumors in contrast-enhanced abdominal computed tomography (CT) scans is essential to help medical professionals evaluate tumor development and quickly devise therapeutic schedules. Although deep convolutional neural networks (DCNNs) have contributed many breakthroughs in image segmentation, this task remains challenging, since 2D DCNNs are incapable of exploring inter-slice information and 3D DCNNs are too complex to be trained on the available small datasets. In this paper, we propose the light-weight hybrid convolutional network (LW-HCN) to segment the liver and its tumors in CT volumes. Instead of combining a 2D and a 3D network for coarse-to-fine segmentation, LW-HCN has an encoder-decoder structure in which 2D convolutions used at the bottom of the encoder decrease the complexity and 3D convolutions used in the other layers explore both spatial and temporal information. To further reduce the complexity, we design the depthwise and spatiotemporal separate (DSTS) factorization for 3D convolutions, which not only reduces parameters dramatically but also improves performance. We evaluated the proposed LW-HCN model against several recent methods on the LiTS and 3D-IRCADb datasets and achieved Dice per case of 73.0% and 94.1%, respectively, for tumor segmentation, setting a new state of the art.
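
A rough sketch of how a depthwise, spatiotemporally separated 3D convolution can be factorized in PyTorch: a depthwise in-plane k×k convolution, a depthwise through-plane convolution, and a pointwise 1×1×1 channel-mixing convolution. This is an assumed reading of the DSTS idea for illustration, not the authors' exact operator.

```python
import torch
import torch.nn as nn

class DSTSConv3d(nn.Module):
    """Illustrative factorization of a 3D conv into depthwise spatial,
    depthwise through-plane, and pointwise parts."""
    def __init__(self, in_ch: int, out_ch: int, k: int = 3):
        super().__init__()
        p = k // 2
        # Depthwise in-plane (1 x k x k) convolution.
        self.spatial = nn.Conv3d(in_ch, in_ch, (1, k, k), padding=(0, p, p), groups=in_ch)
        # Depthwise through-plane (k x 1 x 1) convolution.
        self.through = nn.Conv3d(in_ch, in_ch, (k, 1, 1), padding=(p, 0, 0), groups=in_ch)
        # Pointwise convolution mixes channels and sets the output width.
        self.pointwise = nn.Conv3d(in_ch, out_ch, kernel_size=1)

    def forward(self, x):
        return self.pointwise(self.through(self.spatial(x)))

x = torch.randn(1, 32, 16, 64, 64)                # (B, C, D, H, W) CT sub-volume
dsts = DSTSConv3d(32, 64)
full = nn.Conv3d(32, 64, 3, padding=1)
print(dsts(x).shape)                               # torch.Size([1, 64, 16, 64, 64])
print(sum(p.numel() for p in dsts.parameters()),   # ~2.5k parameters vs.
      sum(p.numel() for p in full.parameters()))   # ~55k for the full 3D conv
```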


Author(s):  
Qiangguo Jin ◽  
Zhaopeng Meng ◽  
Changming Sun ◽  
Hui Cui ◽  
Ran Su

Automatic extraction of the liver and tumors from CT volumes is a challenging task due to their heterogeneous and diffusive shapes. Recently, 2D deep convolutional neural networks have become popular in medical image segmentation tasks because large labeled datasets can be leveraged to learn hierarchical features. However, few studies have investigated 3D networks for liver tumor segmentation. In this paper, we propose a 3D hybrid residual attention-aware segmentation method, RA-UNet, to precisely extract the liver region and segment tumors from the liver. The proposed network uses U-Net as its basic architecture, extracting contextual information by combining low-level feature maps with high-level ones. Attention residual modules are integrated so that the attention-aware features change adaptively. This is the first work in which an attention residual mechanism is used to segment tumors from 3D medical volumetric images. We evaluated our framework on the public MICCAI 2017 Liver Tumor Segmentation (LiTS) dataset and tested its generalization on the 3DIRCADb dataset. The experiments show that our architecture obtains competitive results.
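
Residual attention is often written as out = (1 + M(x)) · F(x), where M is a soft mask branch and F the trunk features. The block below is a minimal 3D sketch under that common formulation, with assumed layer choices, and is not the RA-UNet definition itself.

```python
import torch
import torch.nn as nn

class AttentionResidualBlock(nn.Module):
    """Minimal residual attention block: out = (1 + mask) * trunk_features."""
    def __init__(self, channels: int):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Conv3d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv3d(channels, channels, 3, padding=1),
        )
        # Soft mask branch squashed to [0, 1] with a sigmoid.
        self.mask = nn.Sequential(
            nn.Conv3d(channels, channels, 1), nn.Sigmoid(),
        )

    def forward(self, x):
        t = self.trunk(x)
        m = self.mask(x)
        # (1 + m) keeps the identity signal so the attention never suppresses
        # trunk features to zero, which stabilizes training of deep stacks.
        return (1.0 + m) * t

x = torch.randn(1, 16, 32, 64, 64)
y = AttentionResidualBlock(16)(x)
print(y.shape)   # torch.Size([1, 16, 32, 64, 64])
```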

