An Unsupervised Learning-Based Multi-Organ Registration Method for 3D Abdominal CT Images

Sensors ◽  
2021 ◽  
Vol 21 (18) ◽  
pp. 6254
Author(s):  
Shaodi Yang ◽  
Yuqian Zhao ◽  
Miao Liao ◽  
Fan Zhang

Medical image registration is an essential technique for achieving spatially consistent geometric positions across medical images obtained from single or multiple sensors, such as computed tomography (CT), magnetic resonance (MR), and ultrasound (US) images. In this paper, an improved unsupervised learning-based framework is proposed for multi-organ registration of 3D abdominal CT images. First, coarse-to-fine recursive cascaded network (RCN) modules are embedded into a basic U-net framework to achieve more accurate multi-organ registration results on 3D abdominal CT images. Then, a topology-preserving loss is added to the total loss function to avoid distortion of the predicted transformation field. Four public databases are used to validate the registration performance of the proposed method. The experimental results show that the proposed method is superior to several existing traditional and deep learning-based methods and is promising for meeting the real-time, high-precision clinical registration requirements of 3D abdominal CT images.
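
To make the loss design above concrete, the following is a minimal PyTorch sketch of an unsupervised registration loss combining an image-similarity term, a smoothness penalty on the predicted displacement field, and a topology-preserving penalty on negative Jacobian determinants; the MSE similarity term, the finite-difference approximations, and the weighting factors are illustrative assumptions rather than the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def smoothness_loss(flow):
    # flow: (N, 3, D, H, W) displacement field; penalize spatial gradients
    dz = torch.abs(flow[:, :, 1:, :, :] - flow[:, :, :-1, :, :])
    dy = torch.abs(flow[:, :, :, 1:, :] - flow[:, :, :, :-1, :])
    dx = torch.abs(flow[:, :, :, :, 1:] - flow[:, :, :, :, :-1])
    return (dz.mean() + dy.mean() + dx.mean()) / 3.0

def jacobian_det(flow):
    # Finite-difference approximation of the Jacobian of phi(x) = x + u(x)
    du_dz = flow[:, :, 1:, :-1, :-1] - flow[:, :, :-1, :-1, :-1]
    du_dy = flow[:, :, :-1, 1:, :-1] - flow[:, :, :-1, :-1, :-1]
    du_dx = flow[:, :, :-1, :-1, 1:] - flow[:, :, :-1, :-1, :-1]
    J = torch.stack([du_dx, du_dy, du_dz], dim=-1)   # (N, 3, D-1, H-1, W-1, 3)
    J = J.permute(0, 2, 3, 4, 1, 5)                  # (N, D-1, H-1, W-1, 3, 3)
    J = J + torch.eye(3, device=flow.device)         # d(phi)/dx = I + d(u)/dx
    return torch.det(J)

def topology_loss(flow):
    # Penalize voxels whose Jacobian determinant is negative (folding)
    return F.relu(-jacobian_det(flow)).mean()

def total_loss(warped, fixed, flow, lam_smooth=1.0, lam_topo=1.0):
    sim = F.mse_loss(warped, fixed)                  # similarity term (assumed MSE)
    return sim + lam_smooth * smoothness_loss(flow) + lam_topo * topology_loss(flow)
```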

2021 ◽  
Vol 17 (5) ◽  
pp. 952-959
Author(s):  
Shao-Di Yang ◽  
Yu-Qian Zhao ◽  
Fan Zhang ◽  
Miao Liao ◽  
Zhen Yang ◽  
...  

Image registration is a key technology in nanomaterial imaging-aided diagnosis and in monitoring the effect of targeted therapy for abdominal diseases. Recently, deep learning-based methods have been increasingly used for large-scale medical image registration because they require far fewer iterations than traditional ones. In this paper, a coarse-to-fine unsupervised learning-based three-dimensional (3D) abdominal CT image registration method is presented. Firstly, an affine transformation was used as an initial step to deal with large deformations between two images. Secondly, an unsupervised total loss function containing similarity, smoothness, and topology-preservation measures was proposed to achieve better registration performance during convolutional neural network (CNN) training and testing. The experimental results demonstrated that the proposed method obtains average MSE, PSNR, and SSIM values of 0.0055, 22.7950, and 0.8241, respectively, outperforming some existing traditional and unsupervised learning-based methods. Moreover, our method registers 3D abdominal CT images in the shortest time and is expected to become a real-time method for clinical application.
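
A minimal PyTorch sketch of the affine pre-alignment step described above, applied to a 3D volume; the 3x4 matrix `theta` is a placeholder that would in practice come from an affine regression network or a classical optimizer.

```python
import torch
import torch.nn.functional as F

def apply_affine(moving, theta):
    # moving: (N, 1, D, H, W) volume; theta: (N, 3, 4) affine matrix
    grid = F.affine_grid(theta, moving.shape, align_corners=False)
    return F.grid_sample(moving, grid, mode="bilinear",
                         padding_mode="border", align_corners=False)

# Placeholder usage with an identity affine; a real theta would encode the
# estimated rotation, scaling, shearing, and translation.
moving = torch.rand(1, 1, 64, 128, 128)
theta = torch.eye(3, 4).unsqueeze(0)
coarsely_aligned = apply_affine(moving, theta)   # then refined by the deformable CNN stage
```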


2020 ◽  
Vol 12 (7) ◽  
pp. 909-914
Author(s):  
Shao-Di Yang ◽  
Fan Zhang ◽  
Zhen Yang ◽  
Xiao-Yu Yang ◽  
Shu-Zhou Li

Registration provides technical support for the integration of nanomaterial imaging-aided diagnosis and treatment. In this paper, a coarse-to-fine three-dimensional (3D) multi-phase abdominal CT image registration method is proposed. Firstly, a linear model is used to coarsely register the paired multi-phase images. Secondly, an intensity-based registration framework containing data and spatial regularization terms is proposed to perform fine registration on the image pairs obtained in the coarse registration step. The results illustrate that the proposed method is superior to some existing methods, with average MSE, PSNR, and SSIM values of 0.0082, 21.2695, and 0.8956, respectively. Therefore, the proposed method provides an efficient and robust framework for 3D multi-phase abdominal CT image registration.
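
As a sketch, the intensity-based fine-registration step can be written as minimizing an energy with a data term and a spatial regularization term; the sum-of-squared-differences data term and gradient penalty below are assumptions, since the abstract does not specify the exact choices.

```latex
E(\varphi) =
\underbrace{\sum_{\mathbf{x}} \bigl( I_{\mathrm{ref}}(\mathbf{x}) - I_{\mathrm{mov}}(\varphi(\mathbf{x})) \bigr)^{2}}_{\text{data term}}
\;+\; \lambda \,
\underbrace{\sum_{\mathbf{x}} \lVert \nabla \varphi(\mathbf{x}) \rVert^{2}}_{\text{spatial regularization}}
```

Here I_ref is the reference phase, I_mov is the phase warped by the deformation φ, and λ balances intensity fidelity against smoothness of the deformation.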


IRBM ◽  
2020 ◽  
Author(s):  
R. Bhattacharjee ◽  
F. Heitz ◽  
V. Noblet ◽  
S. Sharma ◽  
N. Sharma

2020 ◽  
Vol 2020 ◽  
pp. 1-9
Author(s):  
Yafen Li ◽  
Wen Li ◽  
Jing Xiong ◽  
Jun Xia ◽  
Yaoqin Xie

Cross-modality medical image synthesis between magnetic resonance (MR) and computed tomography (CT) images has attracted increasing attention in many medical imaging areas. Many deep learning methods have been used to generate pseudo-MR/CT images from images of the counterpart modality. In this study, we used U-Net and Cycle-Consistent Adversarial Networks (CycleGAN), typical networks of supervised and unsupervised deep learning methods, respectively, to transform MR/CT images into their counterpart modality. Experimental results show that synthetic images predicted by the U-Net method achieved lower mean absolute error (MAE) and higher structural similarity index (SSIM) and peak signal-to-noise ratio (PSNR) in both directions of CT/MR synthesis, especially in synthetic CT image generation. Although the synthetic images produced by the U-Net method have less contrast information than those produced by CycleGAN, their pixel value profiles are closer to those of the ground-truth images. This work demonstrated that the supervised deep learning method outperforms the unsupervised one in accuracy for MR/CT synthesis tasks.
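
A minimal sketch of how the three reported metrics (MAE, SSIM, PSNR) can be computed with NumPy and scikit-image; the assumption that images are normalized to a [0, 1] data range is illustrative.

```python
import numpy as np
from skimage.metrics import structural_similarity, peak_signal_noise_ratio

def evaluate_synthesis(real, synthetic, data_range=1.0):
    # real, synthetic: 2D slices (or 3D volumes) scaled to [0, 1]
    mae = np.mean(np.abs(real - synthetic))
    psnr = peak_signal_noise_ratio(real, synthetic, data_range=data_range)
    ssim = structural_similarity(real, synthetic, data_range=data_range)
    return mae, ssim, psnr
```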


2014 ◽  
Vol 14 (05) ◽  
pp. 1450073 ◽  
Author(s):  
AICHA BELGHERBI ◽  
ISMAHEN HADJIDJ ◽  
ABDELHAFID BESSAID

Segmentation is an important step in the processing and interpretation of medical images. In this paper, we focus on the segmentation of the kidneys from abdominal computed tomography (CT) images. The importance of our study comes from the fact that segmenting the kidneys from CT images is usually a difficult task, mainly because their gray level is similar to that of the spine. Our proposed method is based on anatomical information and mathematical morphology tools from the image processing field. First, we remove the spine by applying morphological filters; this makes the extraction of the regions of interest easier and relies on transformations such as geodesic reconstruction. In the second step, we apply a marker-controlled watershed algorithm for kidney segmentation. The developed algorithm is validated on several images, and the obtained results show its good performance.
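
A minimal scikit-image sketch of the two main steps: morphological filtering with geodesic reconstruction to suppress bright structures such as the spine, followed by a marker-controlled watershed. The structuring-element size and the way markers are obtained are illustrative assumptions, not the authors' exact parameters.

```python
import numpy as np
from scipy import ndimage as ndi
from skimage import morphology, segmentation, filters

def segment_kidneys(ct_slice, marker_mask):
    # Step 1: opening by (geodesic) reconstruction to suppress bright structures
    eroded = morphology.erosion(ct_slice, morphology.disk(5))
    reconstructed = morphology.reconstruction(eroded, ct_slice, method="dilation")

    # Step 2: marker-controlled watershed on the gradient image
    gradient = filters.sobel(reconstructed)
    markers, _ = ndi.label(marker_mask)      # marker_mask: rough seeds inside the kidneys
    labels = segmentation.watershed(gradient, markers)
    return labels
```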


2021 ◽  
pp. 019459982110449
Author(s):  
Andy S. Ding ◽  
Alexander Lu ◽  
Zhaoshuo Li ◽  
Deepa Galaiya ◽  
Jeffrey H. Siewerdsen ◽  
...  

Objective This study investigates the accuracy of an automated method to rapidly segment relevant temporal bone anatomy from cone beam computed tomography (CT) images. Implementation of this segmentation pipeline has potential to improve surgical safety and decrease operative time by augmenting preoperative planning and interfacing with image-guided robotic surgical systems. Study Design Descriptive study of predicted segmentations. Setting Academic institution. Methods We have developed a computational pipeline based on the symmetric normalization registration method that predicts segmentations of anatomic structures in temporal bone CT scans using a labeled atlas. To evaluate accuracy, we created a data set by manually labeling relevant anatomic structures (eg, ossicles, labyrinth, facial nerve, external auditory canal, dura) for 16 deidentified high-resolution cone beam temporal bone CT images. Automated segmentations from this pipeline were compared against ground-truth manual segmentations by using modified Hausdorff distances and Dice scores. Runtimes were documented to determine the computational requirements of this method. Results Modified Hausdorff distances and Dice scores between predicted and ground-truth labels were as follows: malleus (0.100 ± 0.054 mm; Dice, 0.827 ± 0.068), incus (0.100 ± 0.033 mm; Dice, 0.837 ± 0.068), stapes (0.157 ± 0.048 mm; Dice, 0.358 ± 0.100), labyrinth (0.169 ± 0.100 mm; Dice, 0.838 ± 0.060), and facial nerve (0.522 ± 0.278 mm; Dice, 0.567 ± 0.130). A quad-core 16GB RAM workstation completed this segmentation pipeline in 10 minutes. Conclusions We demonstrated submillimeter accuracy for automated segmentation of temporal bone anatomy when compared against hand-segmented ground truth using our template registration pipeline. This method is not dependent on the training data volume that plagues many complex deep learning models. Favorable runtime and low computational requirements underscore this method’s translational potential.
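
A minimal NumPy/SciPy sketch of the two evaluation metrics used above; here the modified Hausdorff distance is taken as the Dubuisson–Jain form (maximum of the two mean directed distances between label voxels), which is an assumption about the paper's exact definition.

```python
import numpy as np
from scipy.spatial import cKDTree

def dice_score(pred, truth):
    pred, truth = pred.astype(bool), truth.astype(bool)
    return 2.0 * np.logical_and(pred, truth).sum() / (pred.sum() + truth.sum())

def modified_hausdorff(pred, truth, spacing=(1.0, 1.0, 1.0)):
    # Convert voxel indices to millimeters using the scan spacing
    p = np.argwhere(pred) * np.asarray(spacing)
    t = np.argwhere(truth) * np.asarray(spacing)
    d_pt = cKDTree(t).query(p)[0]    # distance from each predicted voxel to the truth label
    d_tp = cKDTree(p).query(t)[0]    # distance from each truth voxel to the prediction
    return max(d_pt.mean(), d_tp.mean())
```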


2016 ◽  
Vol 41 (1) ◽  
pp. 70-75 ◽  
Author(s):  
Robert D. Kilgour ◽  
Katrina Cardiff ◽  
Leonard Rosenthall ◽  
Enriqueta Lucar ◽  
Barbara Trutschnigg ◽  
...  

Measurements of body composition using dual-energy X-ray absorptiometry (DXA) and single abdominal images from computed tomography (CT) in advanced cancer patients (ACP) have important diagnostic and prognostic value. The question arises as to whether CT scans can serve as surrogates for DXA in terms of whole-body fat-free mass (FFM), whole-body fat mass (FM), and appendicular skeletal muscle (ASM) mass. Predictive equations to estimate body composition for ACP from CT images have been proposed (Mourtzakis et al. 2008; Appl. Physiol. Nutr. Metabol. 33(5): 997–1006); however, these equations have yet to be validated in an independent cohort of ACP. Thus, this study evaluated the accuracy of these equations in estimating FFM, FM, and ASM mass using CT images at the level of the third lumbar vertebra and compared these values with DXA measurements. FFM, FM, and ASM mass were estimated from the prediction equations proposed by Mourtzakis and colleagues (2008) using single abdominal CT images from 43 ACP and were compared with whole-body DXA scans using Spearman correlations and Bland–Altman analyses. Despite a moderate to high correlation between the actual (DXA) and predicted (CT) values for FM (rho = 0.93; p ≤ 0.001), FFM (rho = 0.78; p ≤ 0.001), and ASM mass (rho = 0.70; p ≤ 0.001), Bland–Altman analyses revealed large range-of-agreement differences between the two methods (29.39 kg for FFM, 15.47 kg for FM, and 3.99 kg for ASM mass). Based on the magnitude of these differences, we concluded that prediction equations using single abdominal CT images have poor accuracy, cannot be considered surrogates for DXA, and may have limited clinical utility.
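
A minimal NumPy sketch of the Bland–Altman quantities behind the reported range-of-agreement values: the bias (mean difference between CT-predicted and DXA values) and the 95% limits of agreement, whose span gives the range of agreement.

```python
import numpy as np

def bland_altman(dxa, ct_predicted):
    diff = np.asarray(ct_predicted) - np.asarray(dxa)
    bias = diff.mean()                         # systematic difference between methods
    sd = diff.std(ddof=1)
    lower, upper = bias - 1.96 * sd, bias + 1.96 * sd
    return bias, (lower, upper), upper - lower  # bias, limits of agreement, range of agreement
```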


2021 ◽  
pp. 028418512110681
Author(s):  
Hong Dai ◽  
Yutao Wang ◽  
Randi Fu ◽  
Sijia Ye ◽  
Xiuchao He ◽  
...  

Background Measurement of bone mineral density (BMD) is the most important method to diagnose osteoporosis. However, current BMD measurement is always performed after a fracture has occurred. Purpose To explore whether a radiomic model based on abdominal computed tomography (CT) can predict the BMD of lumbar vertebrae. Material and Methods A total of 245 patients who underwent both dual-energy X-ray absorptiometry (DXA) and abdominal CT examination (training cohort, n = 196; validation cohort, n = 49) were included in our retrospective study. In total, 1218 image features were extracted from abdominal CT images for each patient. Combined with clinical information, three steps including least absolute shrinkage and selection operator (LASSO) regression were used to select key features. A two-tier stacking regression model with multi-algorithm fusion, which integrates the advantages of linear and non-linear models, was used for BMD prediction. The prediction results of this model were compared with those obtained using a single regressor. The degree-of-freedom adjusted coefficient of determination (Adjusted-R2), root mean square error (RMSE), and mean absolute error (MAE) were used to evaluate the regression performance. Results Compared with other regression methods, the two-tier stacking regression model achieved higher regression performance, with Adjusted-R2, RMSE, and MAE of 0.830, 0.077, and 0.06, respectively. Pearson correlation analysis and Bland–Altman analysis showed that the BMD predicted by the model had a high correlation with the DXA results (r = 0.932, difference = −0.01 ± 0.1412 mg/cm2). Conclusion Using radiomics, the BMD of lumbar vertebrae could be predicted from abdominal CT images.
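
A minimal scikit-learn sketch of the modeling strategy described above: LASSO-driven feature selection followed by a two-tier stacking regressor that fuses linear and non-linear learners. The specific base models, hyperparameters, and meta-learner are illustrative assumptions.

```python
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.feature_selection import SelectFromModel
from sklearn.linear_model import LassoCV, Ridge
from sklearn.ensemble import StackingRegressor, RandomForestRegressor
from sklearn.svm import SVR

# Tier-1 learners mix linear and non-linear models; a tier-2 meta-learner fuses them
stack = StackingRegressor(
    estimators=[
        ("ridge", Ridge(alpha=1.0)),
        ("rf", RandomForestRegressor(n_estimators=200)),
        ("svr", SVR(kernel="rbf")),
    ],
    final_estimator=Ridge(),
    cv=5,
)

model = make_pipeline(
    StandardScaler(),
    SelectFromModel(LassoCV(cv=5)),   # LASSO-driven selection of key radiomic features
    stack,
)
# model.fit(X_train, y_bmd_train); y_pred = model.predict(X_test)
```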


2021 ◽  
Vol 8 (3) ◽  
pp. 207-215
Author(s):  
Seong Geun Lee ◽  
Hanjin Cho ◽  
Joo Yeong Kim ◽  
Juhyun Song ◽  
Jong-Hak Park

Objective Accurate interpretation of computed tomography (CT) scans is critical for patient care in the emergency department. We aimed to identify factors associated with incorrect interpretation of abdominal CT by novice emergency residents and to analyze the characteristics of incorrectly interpreted scans. Methods This retrospective analysis of a prospective observational cohort was conducted at three urban emergency departments. Discrepancies between the interpretations by postgraduate year-1 (PGY-1) emergency residents and the final radiologists’ reports were assessed by independent adjudicators. Potential factors associated with incorrect interpretation included patient age, sex, time of interpretation, and organ category. Adjusted odds ratios (aORs) for incorrect interpretation were calculated using multivariable logistic regression analysis. Results Among 1,628 eligible cases, 270 (16.6%) interpretations were incorrect. The urinary system was the most correctly interpreted organ system (95.8%, 365/381), while the biliary tract was the most incorrectly interpreted (28.4%, 48/169). Normal CT images showed a high rate of incorrect (false-positive) interpretation (28.2%, 96/340). Organ category was found to be a major determinant of incorrect interpretation. Using the urinary system as a reference, the aOR for incorrect interpretation of biliary tract disease was 9.20 (95% confidence interval, 5.0–16.90) and the aOR for incorrectly interpreting normal CT images was 8.47 (95% confidence interval, 4.85–14.78). Conclusion Biliary tract disease is a major factor associated with incorrect preliminary interpretations of abdominal CT scans by PGY-1 emergency residents. PGY-1 residents also showed high false-positive interpretation rates for normal CT images. Emergency residents’ training should focus on these two areas to improve abdominal CT interpretation accuracy.
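
A minimal statsmodels sketch of how adjusted odds ratios of this kind are obtained from a multivariable logistic regression with the urinary system as the reference category; the column names are hypothetical placeholders for the study's dataset.

```python
import numpy as np
import statsmodels.formula.api as smf

def adjusted_odds_ratios(df):
    # df columns (hypothetical): incorrect (0/1), age, sex, night_shift, organ_category
    model = smf.logit(
        "incorrect ~ age + C(sex) + C(night_shift) + "
        "C(organ_category, Treatment(reference='urinary'))",
        data=df,
    ).fit()
    aor = np.exp(model.params)        # adjusted odds ratios (aORs)
    ci = np.exp(model.conf_int())     # 95% confidence intervals for the aORs
    return aor, ci
```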


2021 ◽  
Vol 10 (8) ◽  
pp. 525
Author(s):  
Wenmin Yao ◽  
Tong Chu ◽  
Wenlong Tang ◽  
Jingyu Wang ◽  
Xin Cao ◽  
...  

As one of China's most precious cultural relics, the Terracotta Warriors pose significant challenges to archaeologists during excavation and protection. A fairly common situation in excavation is that the Terracotta Warriors are mostly found as fragments, and manual reassembly of numerous fragments is laborious and time-consuming. This work presents a fracture-surface-based reassembling method, named SPPD, which is composed of SiamesePointNet, principal component analysis (PCA), and deep closest point (DCP). Firstly, SiamesePointNet is proposed to determine whether a pair of point clouds of 3D Terracotta Warrior fragments can be reassembled. Then, a coarse-to-fine registration method based on PCA and DCP is proposed to register the two fragments into a reassembled one. These two steps are iterated until the termination condition is met. A series of experiments on real-world examples are conducted, and the results demonstrate that the proposed method performs better than conventional reassembling methods. We hope this work can provide a valuable tool for the virtual restoration of three-dimensional cultural heritage artifacts.
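
A minimal NumPy sketch of the PCA-based coarse registration step in a pipeline like SPPD: each fragment's point cloud is centered and its principal axes aligned to give an initial pose, which the DCP stage (not shown) would then refine; the axis-sign handling here is simplified and is an assumption.

```python
import numpy as np

def pca_frame(points):
    # points: (N, 3) fracture-surface point cloud
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid, full_matrices=False)
    axes = vt                       # rows are principal axes, ordered by variance
    if np.linalg.det(axes) < 0:     # keep a right-handed frame
        axes[2] *= -1
    return centroid, axes

def coarse_align(source, target):
    c_s, a_s = pca_frame(source)
    c_t, a_t = pca_frame(target)
    R = a_t.T @ a_s                 # rotate the source principal axes onto the target axes
    return (source - c_s) @ R.T + c_t
```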

