Automatic Vertebral Body Segmentation using Semantic Segmentation

2019 ◽  
Vol 8 (4) ◽  
pp. 12163-12167

Segmentation of vertebral bodies (VB) is a preliminary and useful step in diagnosing spine pathologies, deformities, and fractures of various causes. We present a method that addresses this challenging VB segmentation problem using semantic segmentation (SS). A semantic segmentation network, usually built from three main components (convolution, downsampling, and upsampling layers), aims to label every pixel of an image with the class it depicts. In this study, we developed a unique automatic semantic segmentation architecture to segment the VB from Computed Tomography (CT) images, and we compared our segmentation results with reference segmentations obtained by experts. We evaluated the proposed method on a publicly available dataset and achieved an average accuracy of 94.16% and an average Dice Similarity Coefficient (DSC) of 93.51% for VB segmentation.
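For reference, the two metrics reported above can be computed from binary masks as follows. This is a minimal NumPy sketch of the standard definitions, not the authors' implementation:

```python
import numpy as np

def pixel_accuracy(pred, gt):
    """Fraction of pixels whose predicted label matches the reference."""
    return np.mean(pred == gt)

def dice_coefficient(pred, gt, eps=1e-7):
    """Dice similarity coefficient 2|A∩B| / (|A| + |B|) for binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    intersection = np.logical_and(pred, gt).sum()
    return (2.0 * intersection + eps) / (pred.sum() + gt.sum() + eps)
```

The small epsilon keeps the ratio defined when both masks are empty.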

Diagnostics ◽  
2021 ◽  
Vol 11 (5) ◽  
pp. 893
Author(s):  
Yazan Qiblawey ◽  
Anas Tahir ◽  
Muhammad E. H. Chowdhury ◽  
Amith Khandakar ◽  
Serkan Kiranyaz ◽  
...  

Detecting COVID-19 at an early stage is essential to reduce patients' mortality risk. In this study, a cascaded system is proposed to segment the lung and to detect, localize, and quantify COVID-19 infections from computed tomography images. An extensive set of experiments was performed using encoder–decoder convolutional neural networks (ED-CNNs), U-Net, and the Feature Pyramid Network (FPN), with different backbone (encoder) structures drawn from variants of DenseNet and ResNet. The experiments on lung region segmentation showed a Dice Similarity Coefficient (DSC) of 97.19% and an Intersection over Union (IoU) of 95.10% using the U-Net model with the DenseNet-161 encoder. Furthermore, the proposed system achieved strong performance for COVID-19 infection segmentation, with a DSC of 94.13% and an IoU of 91.85% using the FPN with a DenseNet-201 encoder. The proposed system can reliably localize infections of various shapes and sizes, especially small infection regions, which are rarely considered in recent studies. Moreover, the proposed system achieved high COVID-19 detection performance, with 99.64% sensitivity and 98.72% specificity. Finally, the system was able to discriminate between different severity levels of COVID-19 infection over a dataset of 1110 subjects, with sensitivity values of 98.3%, 71.2%, 77.8%, and 100% for mild, moderate, severe, and critical cases, respectively.
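The additional metrics reported here, IoU for segmentation and sensitivity/specificity for detection, follow the standard definitions; a minimal NumPy sketch:

```python
import numpy as np

def iou(pred, gt, eps=1e-7):
    """Intersection over Union (Jaccard index) for binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return (inter + eps) / (union + eps)

def sensitivity_specificity(pred, gt):
    """Detection metrics from binary labels: TP/(TP+FN) and TN/(TN+FP)."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    tp = np.logical_and(pred, gt).sum()
    tn = np.logical_and(~pred, ~gt).sum()
    fp = np.logical_and(pred, ~gt).sum()
    fn = np.logical_and(~pred, gt).sum()
    return tp / (tp + fn), tn / (tn + fp)
```

On a per-image basis IoU and Dice are related by IoU = DSC / (2 - DSC); averaged scores over a dataset do not obey this identity exactly.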


PLoS ONE ◽  
2021 ◽  
Vol 16 (2) ◽  
pp. e0246071
Author(s):  
Yen-Fen Ko ◽  
Kuo-Sheng Cheng

Electrical impedance tomography (EIT) is widely used for bedside monitoring of lung ventilation status. Its goal is to reflect internal conductivity changes and estimate the electrical properties of the tissues in the thorax. However, poor spatial resolution affects EIT image reconstruction to the extent that the heart- and lung-related impedance images are barely distinguishable. Several studies have attempted to tackle this problem: approaches based on decomposing EIT images with linear transformations have been developed, and recently, U-Net has become a prominent architecture for semantic segmentation. In this paper, we propose a novel semi-Siamese U-Net specifically tailored to the EIT application. It is based on the state-of-the-art U-Net, whose structure is modified and extended to form a shared encoder with parallel decoders, with multi-task weighted losses added to adapt to the individual separation tasks. The trained semi-Siamese U-Net model was evaluated on a test dataset, and the results were compared with those of the classical U-Net in terms of Dice similarity coefficient and mean absolute error. Compared with the classical U-Net, the semi-Siamese U-Net exhibited performance improvements of 11.37% and 3.2% in Dice similarity coefficient, and of 3.16% and 5.54% in mean absolute error, for heart- and lung-impedance image separation, respectively.


2005 ◽  
Author(s):  
Aleksandra Popovic ◽  
Martin Engelhardt ◽  
Klaus Radermacher

Methods for the segmentation of skull-infiltrated tumors in Computed Tomography (CT) images using the Insight Segmentation and Registration Toolkit (ITK, www.itk.org) are presented. Pipelines of ITK filters and algorithms are validated against several criteria: sensitivity, specificity, Dice similarity coefficient, Chi-squared, and the Hausdorff distance measure. A method to rate segmentation results against these validation metrics is presented, together with an analysis of the importance of the different goodness measures. Results for one simulated dataset and three patients are presented.


2020 ◽  
Vol 25 (1) ◽  
pp. 43-50
Author(s):  
Pavlo Radiuk

The achievement of high-precision segmentation in medical image analysis has been an active direction of research over the past decade. Significant success in medical imaging tasks has been feasible due to the employment of deep learning methods, including convolutional neural networks (CNNs). Convolutional architectures have mostly been applied to homogeneous medical datasets with separate organs. Nevertheless, the segmentation of volumetric medical images of several organs remains an open question. In this paper, we investigate fully convolutional neural networks (FCNs) and propose a modified 3D U-Net architecture devoted to processing computed tomography (CT) volumetric images in automatic semantic segmentation tasks. To benchmark the architecture, we utilised the differentiable Sørensen-Dice similarity coefficient (SDSC) as a validation metric and optimised it on the training data by minimising the loss function. Our hand-crafted architecture was trained and tested on a manually compiled dataset of CT scans. The improved 3D U-Net architecture achieved an average SDSC score of 84.8% on the testing subset across multiple abdominal organs. We also compared our architecture with recognised state-of-the-art results and demonstrated that 3D U-Net-based architectures can achieve competitive performance and efficiency in the multi-organ segmentation task.
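Optimising a differentiable Sørensen-Dice coefficient typically means minimising a "soft Dice" loss over predicted probabilities rather than thresholded masks. A minimal NumPy sketch of the usual formulation (a deep learning framework would differentiate it automatically during training):

```python
import numpy as np

def soft_dice_loss(probs, gt, eps=1e-7):
    """1 - soft Dice: replaces hard set intersections with products of
    probabilities, so the loss is differentiable in the network output."""
    probs = probs.reshape(-1)            # predicted foreground probabilities
    gt = gt.reshape(-1).astype(float)    # binary reference mask
    inter = (probs * gt).sum()
    denom = probs.sum() + gt.sum()
    return 1.0 - (2.0 * inter + eps) / (denom + eps)
```

The loss approaches 0 for a perfect probabilistic prediction and 1 for a fully disjoint one.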


2018 ◽  
Vol 8 (9) ◽  
pp. 1586 ◽  
Author(s):  
Sewon Kim ◽  
Won Bae ◽  
Koichi Masuda ◽  
Christine Chung ◽  
Dosik Hwang

We propose a semi-automatic algorithm for the segmentation of vertebral bodies in magnetic resonance (MR) images of the human lumbar spine. Quantitative analysis of spine MR images often necessitates segmenting the image into specific regions representing the anatomic structures of interest. Existing algorithms for vertebral body segmentation require heavy input from the user, which is a disadvantage. For example, the user needs to define individual regions of interest (ROIs) for each vertebral body and specify parameters for the segmentation algorithm. To overcome these drawbacks, we developed a semi-automatic algorithm that considerably reduces the need for user input. First, we simplified the ROI placement procedure by reducing the requirement to a single ROI enclosing one vertebral body; a correlation algorithm then identifies the remaining vertebral bodies and automatically detects their ROIs. Second, the detected ROIs are adjusted to facilitate the subsequent segmentation process. Third, the segmentation is performed via graph-based and line-based segmentation algorithms. We tested our algorithm on sagittal MR images of the lumbar spine and achieved a 90% Dice similarity coefficient compared with manual segmentation. Our new semi-automatic method significantly reduces the user's role while achieving good segmentation accuracy.
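The correlation step described above can be pictured as template matching: the user-drawn ROI serves as a template that is slid over the image to find similar-looking vertebral bodies. The brute-force normalized cross-correlation below is an illustrative assumption about such a scheme, not the authors' code:

```python
import numpy as np

def best_match(image, template, eps=1e-7):
    """Return the (row, col) offset where the template correlates best
    with the image, using normalized cross-correlation."""
    th, tw = template.shape
    t = (template - template.mean()) / (template.std() + eps)
    best, best_pos = -np.inf, (0, 0)
    for y in range(image.shape[0] - th + 1):
        for x in range(image.shape[1] - tw + 1):
            patch = image[y:y + th, x:x + tw]
            p = (patch - patch.mean()) / (patch.std() + eps)
            score = (p * t).mean()      # in [-1, 1]; 1 = perfect match
            if score > best:
                best, best_pos = score, (y, x)
    return best_pos
```

In practice an FFT-based implementation (e.g. scikit-image's `match_template`) would replace the double loop.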


2017 ◽  
Vol 16 (2) ◽  
pp. 246-252 ◽  
Author(s):  
Wenjuan Chen ◽  
Penggang Bai ◽  
Jianji Pan ◽  
Yuanji Xu ◽  
Kaiqiang Chen

Purpose: To assess changes in the volumes and spatial locations of tumors and surrounding organs using cone beam computed tomography during treatment for cervical cancer. Materials and Methods: Sixteen patients with cervical cancer underwent intensity-modulated radiotherapy and off-line cone beam computed tomography during chemotherapy and/or radiation therapy. The gross tumor volume (GTV-T) and clinical target volumes (CTVs) were contoured on the planning computed tomography and on the weekly cone beam computed tomography images, and changes in volumes and spatial locations were evaluated using the volume-difference method and Dice similarity coefficients. Results: The GTV-T was 79.62 cm3 before treatment (0f) and 20.86 cm3 at the end of external-beam chemoradiation. The clinical target volume changed slightly from 672.59 cm3 to 608.26 cm3, and the uterine volume (CTV-T) changed slightly from 83.72 cm3 to 80.23 cm3. There were significant differences in GTV-T and CTV-T among the different groups (P < .001), but the clinical target volume was not significantly different in volume (P > .05). The mean percent volume changes ranged from 23.05% to 70.85% for GTV-T, 4.71% to 6.78% for CTV-T, and 5.84% to 9.59% for the clinical target volume, and the groups were significantly different (P < .05). The Dice similarity coefficient of GTV-T decreased during the course of radiation therapy (P < .001). In addition, there were significant differences in GTV-T among the different groups (P < .001), and changes in GTV-T correlated with the course of radiotherapy (P < .001). There was a negative correlation between the volume change rate (DV) and the Dice similarity coefficient for the GTV-T and organs at risk (r < 0; P < .05). Conclusion: The volume, volume change rate, and Dice similarity coefficient of GTV-T were all correlated with the progression of radiation treatment. Significant variations in tumor regression and spatial location occurred during radiotherapy for cervical cancer. Adaptive radiotherapy approaches are needed to improve treatment accuracy for cervical cancer.


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Junyoung Park ◽  
Jae Sung Lee ◽  
Dongkyu Oh ◽  
Hyun Gee Ryoo ◽  
Jeong Hee Han ◽  
...  

Quantitative single-photon emission computed tomography/computed tomography (SPECT/CT) using Tc-99m pertechnetate aids in evaluating salivary gland function. However, gland segmentation and quantitation of gland uptake are challenging. We developed a salivary gland SPECT/CT protocol with automated segmentation using a deep convolutional neural network (CNN). The protocol comprises SPECT/CT at 20 min, sialagogue stimulation, and SPECT at 40 min post-injection of Tc-99m pertechnetate (555 MBq). The 40-min SPECT was reconstructed using the 20-min CT after misregistration correction. Manual salivary gland segmentation for the percentage of injected dose (%ID) by human experts proved highly reproducible but took 15 min per scan. An automatic salivary segmentation method was developed using a modified 3D U-Net trained end-to-end on the human experts' labels (n = 333). The automatic segmentation performed comparably with the human experts in a voxel-wise comparison (mean Dice similarity coefficient of 0.81 for the parotid and 0.79 for the submandibular glands) and in gland %ID correlation (R2 = 0.93 parotid, R2 = 0.95 submandibular), with an operating time of less than 1 min. The algorithm generated results comparable to the reference data. In conclusion, with the aid of a CNN, we developed a quantitative salivary gland SPECT/CT protocol feasible for clinical applications. The method saves analysis time and manual effort while reducing patients' radiation exposure.
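The %ID figure of merit is simply the activity measured within the segmented gland expressed as a fraction of the injected activity. A hypothetical one-line sketch, using the 555 MBq injection from the protocol and omitting decay and attenuation corrections that a real quantitative workflow would apply:

```python
def percent_injected_dose(gland_activity_mbq, injected_mbq=555.0):
    """%ID: activity in the segmented gland relative to injected activity.
    Decay/attenuation corrections are omitted in this sketch."""
    return 100.0 * gland_activity_mbq / injected_mbq
```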


2021 ◽  
pp. 002203452110053
Author(s):  
H. Wang ◽  
J. Minnema ◽  
K.J. Batenburg ◽  
T. Forouzanfar ◽  
F.J. Hu ◽  
...  

Accurate segmentation of the jaw (i.e., mandible and maxilla) and the teeth in cone beam computed tomography (CBCT) scans is essential for orthodontic diagnosis and treatment planning. Although various (semi)automated methods have been proposed to segment the jaw or the teeth, there is still a lack of fully automated segmentation methods that can simultaneously segment both anatomic structures in CBCT scans (i.e., multiclass segmentation). In this study, we aimed to train and validate a mixed-scale dense (MS-D) convolutional neural network for multiclass segmentation of the jaw, the teeth, and the background in CBCT scans. Thirty CBCT scans were obtained from patients who had undergone orthodontic treatment. Gold standard segmentation labels were manually created by 4 dentists. As a benchmark, we also evaluated MS-D networks that segmented the jaw or the teeth (i.e., binary segmentation). All segmented CBCT scans were converted to virtual 3-dimensional (3D) models. The segmentation performance of all trained MS-D networks was assessed by the Dice similarity coefficient and surface deviation. The CBCT scans segmented by the MS-D network demonstrated a large overlap with the gold standard segmentations (Dice similarity coefficient: 0.934 ± 0.019, jaw; 0.945 ± 0.021, teeth). The MS-D network–based 3D models of the jaw and the teeth showed minor surface deviations when compared with the corresponding gold standard 3D models (0.390 ± 0.093 mm, jaw; 0.204 ± 0.061 mm, teeth). The MS-D network took approximately 25 s to segment 1 CBCT scan, whereas manual segmentation took about 5 h. This study showed that multiclass segmentation of jaw and teeth was accurate and its performance was comparable to binary segmentation. The MS-D network trained for multiclass segmentation would therefore make patient-specific orthodontic treatment more feasible by strongly reducing the time required to segment multiple anatomic structures in CBCT scans.
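Multiclass segmentation as described here assigns each voxel one label (background, jaw, or teeth), and the Dice similarity coefficient is then evaluated per class. A minimal NumPy sketch of per-class Dice over label maps, with illustrative label values that are an assumption, not the paper's encoding:

```python
import numpy as np

def multiclass_dice(pred, gt, classes):
    """Per-class Dice for integer label maps, e.g. 1 = jaw, 2 = teeth."""
    scores = {}
    for c in classes:
        p, g = (pred == c), (gt == c)
        inter = np.logical_and(p, g).sum()
        denom = p.sum() + g.sum()
        scores[c] = 2.0 * inter / denom if denom else 1.0
    return scores
```

A class absent from both prediction and reference is scored 1.0 by convention here; other conventions exist.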


Author(s):  
Yisong He ◽  
Shengyuan Zhang ◽  
Yong Luo ◽  
Hang Yu ◽  
Yuchuan Fu ◽  
...  

Background: Manual segmentation of target volumes is time-consuming, and inter-observer variability cannot be avoided. With the development of computer science, auto-segmentation has the potential to solve this problem. Objective: To quantitatively evaluate the accuracy and stability of Atlas-based and deep-learning-based auto-segmentation of the intermediate-risk clinical target volume, composed of CTV2 and CTVnd, for nasopharyngeal carcinoma. Methods and Materials: A cascade deep-residual neural network was constructed to automatically segment CTV2 and CTVnd by a deep learning method. Meanwhile, commercially available software was used to automatically segment the same regions by an Atlas-based method. The datasets included contrast computed tomography scans from 102 patients. For each patient, the two regions were manually delineated by one experienced physician. The agreement between each auto-segmentation method and the manual delineation was quantitatively evaluated by the Dice similarity coefficient, the 95th-percentile Hausdorff distance, the volume overlap error, and the relative volume difference, respectively. Statistical analyses were performed using the ranked Wilcoxon test. Results: The average Dice similarity coefficients (±standard deviation) given by the deep-learning-based and Atlas-based auto-segmentation were 0.84 (±0.03) and 0.74 (±0.04) for CTV2, and 0.79 (±0.02) and 0.68 (±0.03) for CTVnd, respectively. For the 95th-percentile Hausdorff distance, the corresponding values were 6.30 ± 3.55 mm and 9.34 ± 3.39 mm for CTV2, and 7.09 ± 2.27 mm and 14.33 ± 3.98 mm for CTVnd. The volume overlap error and relative volume difference showed the same pattern. Statistical analyses showed a significant difference between the two auto-segmentation methods (p < 0.01). Conclusions: Compared with the Atlas-based segmentation approach, the deep-learning-based segmentation method performed better in both accuracy and stability for meaningful anatomical areas other than organs at risk.
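The 95th-percentile Hausdorff distance used above is a robust variant of the maximum surface distance: instead of the worst-case point-to-set distance, it takes the 95th percentile, damping the influence of outlier boundary points. A minimal NumPy sketch over two sets of boundary coordinates (a sketch of the standard definition, not the evaluation software used in the study):

```python
import numpy as np

def hd95(points_a, points_b):
    """95th-percentile symmetric Hausdorff distance between two point sets
    (e.g. contour/boundary voxel coordinates)."""
    a = np.asarray(points_a, dtype=float)
    b = np.asarray(points_b, dtype=float)
    # pairwise Euclidean distances, shape (len(a), len(b))
    d = np.sqrt(((a[:, None, :] - b[None, :, :]) ** 2).sum(-1))
    d_ab = d.min(axis=1)   # each point in A to its nearest point in B
    d_ba = d.min(axis=0)   # each point in B to its nearest point in A
    return max(np.percentile(d_ab, 95), np.percentile(d_ba, 95))
```

For large 3D surfaces, KD-tree nearest-neighbour queries would replace the dense distance matrix.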


2021 ◽  
Author(s):  
Sang-Heon Lim ◽  
Young Jae Kim ◽  
Yeon-Ho Park ◽  
Doojin Kim ◽  
Kwang Gi Kim ◽  
...  

Pancreas segmentation is necessary for observing lesions, analyzing anatomical structures, and predicting patient prognosis. Therefore, various studies have designed segmentation models based on convolutional neural networks for pancreas segmentation. However, the deep learning approach is limited by a lack of data, and studies conducted on large computed tomography datasets are scarce. Therefore, this study aims to perform deep-learning-based semantic segmentation on 1,006 participants and to evaluate the automatic segmentation performance of the pancreas with four individual three-dimensional segmentation networks. In this study, we performed internal validation with 1,006 patients and external validation using The Cancer Imaging Archive (TCIA) pancreas dataset. We obtained mean precision, recall, and Dice similarity coefficients of 0.869, 0.842, and 0.842, respectively, for internal validation with the best-performing of the four deep learning networks. On the external dataset, the deep learning network achieved mean precision, recall, and Dice similarity coefficients of 0.779, 0.749, and 0.735, respectively. We expect that generalized deep-learning-based systems can assist clinical decisions by providing accurate pancreas segmentation and quantitative information about the pancreas for abdominal computed tomography.

