Automated Pancreas Segmentation And Volumetry Using Deep Neural Network On Computed Tomography

Author(s):  
Sang-Heon Lim ◽  
Young Jae Kim ◽  
Yeon-Ho Park ◽  
Doojin Kim ◽  
Kwang Gi Kim ◽  
...  

Abstract Pancreas segmentation is necessary for observing lesions, analyzing anatomical structures, and predicting patient prognosis. Therefore, various studies have designed segmentation models based on convolutional neural networks for pancreas segmentation. However, the deep learning approach is limited by a lack of data, and studies conducted on large computed tomography datasets are scarce. Accordingly, this study aims to perform deep-learning-based semantic segmentation on 1,006 participants and to evaluate automatic segmentation performance for the pancreas across four individual three-dimensional segmentation networks. In this study, we performed internal validation with 1,006 patients and external validation using The Cancer Imaging Archive (TCIA) pancreas dataset. We obtained mean precision, recall, and Dice similarity coefficients of 0.869, 0.842, and 0.842, respectively, for internal validation using the best-performing of the four deep learning networks. On the external dataset, the deep learning network achieved mean precision, recall, and Dice similarity coefficients of 0.779, 0.749, and 0.735, respectively. We expect that generalized deep-learning-based systems can assist clinical decisions by providing accurate pancreatic segmentation and quantitative information about the pancreas from abdominal computed tomography.
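The three reported metrics are standard overlap measures between a predicted mask and a ground-truth mask. A minimal sketch (hypothetical, not the authors' code) of how precision, recall, and the Dice similarity coefficient are computed from binary voxel labels:

```python
# Illustrative overlap metrics for evaluating a segmentation mask.

def overlap_metrics(pred, truth):
    """pred, truth: flat sequences of 0/1 voxel labels of equal length."""
    tp = sum(p and t for p, t in zip(pred, truth))        # true positives
    fp = sum(p and not t for p, t in zip(pred, truth))    # false positives
    fn = sum(t and not p for p, t in zip(pred, truth))    # false negatives
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    dice = 2 * tp / (2 * tp + fp + fn) if tp else 0.0
    return precision, recall, dice

pred = [1, 1, 1, 0, 0, 0, 1, 0]   # toy example masks
truth = [1, 1, 0, 0, 0, 1, 1, 0]
print(overlap_metrics(pred, truth))  # (0.75, 0.75, 0.75)
```

The Dice coefficient is the harmonic mean of precision and recall, which is why a model with balanced false positives and false negatives reports similar values for all three, as in the abstract above.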

2021 ◽  
Vol 3 ◽  
Author(s):  
Dan Luo ◽  
Wei Zeng ◽  
Jinlong Chen ◽  
Wei Tang

Deep learning has become an active research topic in the field of medical image analysis. In particular, great advances have been made in segmentation performance for the automatic segmentation of stomatological images. In this paper, we systematically reviewed the recent literature on deep-learning-based segmentation methods for stomatological images and their clinical applications, categorized them into different tasks, and analyzed their advantages and disadvantages. The main categories that we explored were the data sources, backbone network, and task formulation. We categorized data sources into panoramic radiography, dental X-rays, cone-beam computed tomography, multi-slice spiral computed tomography, and intraoral scan images. For the backbone network, we distinguished methods based on convolutional neural networks from those based on transformers. We divided task formulations into semantic segmentation tasks and instance segmentation tasks. Toward the end of the paper, we discussed the challenges and provided several directions for further research on the automatic segmentation of stomatological images.


2021 ◽  
pp. 028418512110589
Author(s):  
Peijun Li ◽  
Bao Feng ◽  
Yu Liu ◽  
Yehang Chen ◽  
Haoyang Zhou ◽  
...  

Background Deep learning (DL) has been used on medical images to grade, differentiate, and predict prognosis in many tumors. Purpose To explore the utility of a computed tomography (CT)-based deep learning nomogram (DLN) for predicting cervical cancer lymph node metastasis (LNM) before surgery. Material and Methods In total, 418 patients with stage IB-IIB cervical cancer were retrospectively enrolled for model exploration (n = 296) and internal validation (n = 122); 62 patients from another independent institution were enrolled for external validation. A convolutional neural network (CNN) was used to extract DL features from all lesions. Least absolute shrinkage and selection operator (Lasso) logistic regression was used to develop a deep learning signature (DLS). A DLN incorporating the DLS and clinical risk factors was proposed to predict LNM individually. The performance of the DLN was evaluated on the internal and external validation cohorts. Results Stage, CT-reported pelvic lymph node status, and the DLS were found to be independent predictors and were used to construct the DLN. The combination showed better performance than the clinical model and the DLS alone. The proposed DLN had an area under the curve (AUC) of 0.925 in the training cohort, 0.771 in the internal validation cohort, and 0.790 in the external validation cohort. Decision curve analysis and stratification analysis suggested that the DLN can generate a personalized probability of LNM in cervical cancer. Conclusion The proposed CT-based DLN could be used as a personalized non-invasive tool for preoperative prediction of LNM in cervical cancer, which could facilitate the choice of clinical treatment.
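The Lasso step selects a sparse subset of CNN features for the deep learning signature: the L1 penalty shrinks uninformative coefficients to exactly zero. A minimal, self-contained sketch of L1-penalised logistic regression via proximal gradient descent (the study presumably used a standard package; the solver, data, and hyper-parameters here are illustrative only):

```python
import math

def lasso_logistic(X, y, lam=0.1, lr=0.1, n_iter=2000):
    """Minimal L1-penalised logistic regression via proximal gradient
    descent (ISTA). X: list of feature rows, y: 0/1 labels."""
    n, d = len(X), len(X[0])
    w = [0.0] * d
    for _ in range(n_iter):
        # gradient of the average logistic loss
        grad = [0.0] * d
        for xi, yi in zip(X, y):
            z = sum(wj * xj for wj, xj in zip(w, xi))
            p = 1.0 / (1.0 + math.exp(-z))
            for j in range(d):
                grad[j] += (p - yi) * xi[j] / n
        # gradient step followed by soft-thresholding (the L1 proximal map)
        for j in range(d):
            wj = w[j] - lr * grad[j]
            w[j] = math.copysign(max(abs(wj) - lr * lam, 0.0), wj)
    return w

# toy data: the first feature predicts the label, the second is noise
features = [[1.0, 0.5], [2.0, -0.3], [1.5, 0.2],
            [-1.0, 0.4], [-2.0, -0.5], [-1.5, 0.1]]
labels = [1, 1, 1, 0, 0, 0]
w = lasso_logistic(features, labels)
# w[0] is clearly positive; the noise weight w[1] is shrunk to (near) zero
```

The surviving nonzero coefficients define the signature; in the study, the DLS value was then combined with clinical factors in the nomogram.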


Author(s):  
Valeria Vendries ◽  
Tamas Ungi ◽  
Jordan Harry ◽  
Manuela Kunz ◽  
Jana Podlipská ◽  
...  

Abstract Purpose Osteophytes are common radiographic markers of osteoarthritis. However, they are not accurately depicted using conventional imaging, thus hampering surgical interventions that rely on pre-operative images. Studies have shown that ultrasound (US) is promising at detecting osteophytes and monitoring the progression of osteoarthritis. Furthermore, three-dimensional (3D) ultrasound reconstructions may offer a means to quantify osteophytes. The purpose of this study was to compare the accuracy of osteophyte depiction in the knee joint between 3D US and conventional computed tomography (CT). Methods Eleven human cadaveric knees were pre-screened for the presence of osteophytes. Three osteoarthritic knees were selected, and then, 3D US and CT images were obtained, segmented, and digitally reconstructed in 3D. After dissection, high-resolution structured light scanner (SLS) images of the joint surfaces were obtained. Surface matching and root mean square (RMS) error analyses of surface distances were performed to assess the accuracy of each modality in capturing osteophytes. The RMS errors were compared between 3D US, CT and SLS models. Results Average RMS error comparisons for 3D US versus SLS and CT versus SLS models were 0.87 mm ± 0.33 mm (average ± standard deviation) and 0.95 mm ± 0.32 mm, respectively. No statistical difference was found between 3D US and CT. Comparative observations of imaging modalities suggested that 3D US better depicted osteophytes with cartilage and fibrocartilage tissue characteristics compared to CT. Conclusion Using 3D US can improve the depiction of osteophytes with a cartilaginous portion compared to CT. It can also provide useful information about the presence and extent of osteophytes. 
Whilst algorithm improvements for the automatic segmentation and registration of US are needed to enable a more robust investigation of osteophyte depiction accuracy, this investigation supports the potential application of 3D US in routine diagnostic evaluations and pre-operative planning for osteoarthritis.
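The accuracy measure used here is the root mean square of the point-to-surface distances between each reconstructed model (3D US or CT) and the SLS reference surface. A minimal sketch (the distances are illustrative, not study data):

```python
import math

def rms_error(distances):
    """Root mean square of signed point-to-surface distances (in mm)."""
    return math.sqrt(sum(d * d for d in distances) / len(distances))

# four hypothetical signed surface distances in mm
print(rms_error([0.5, -0.5, 1.0, -1.0]))  # ~0.79 mm
```

Because the distances are squared, RMS error weights large local deviations (such as a missed osteophyte) more heavily than the plain mean absolute distance would.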


2021 ◽  
Author(s):  
Jiyeon Ha ◽  
Taeyong Park ◽  
Hong-Kyu Kim ◽  
Youngbin Shin ◽  
Yousun Ko ◽  
...  

BACKGROUND As sarcopenia research has been gaining emphasis, the need for quantification of abdominal muscle on computed tomography (CT) is increasing. Thus, a fully automated system that selects the L3 slice and segments muscle in an end-to-end manner is in demand. OBJECTIVE We aimed to develop a deep learning model (DLM) to select the L3 slice with consideration of anatomic variations and to segment cross-sectional areas (CSAs) of abdominal muscle and fat. METHODS Our DLM, named L3SEG-net, was composed of a YOLOv3-based algorithm for selecting the L3 slice and a fully convolutional network (FCN)-based algorithm for segmentation. The YOLOv3-based algorithm was developed via supervised learning on a training dataset (n=922), and the FCN-based algorithm was transferred from prior work. L3SEG-net was validated on internal (n=496) and external (n=586) validation datasets. L3 slice selection accuracy was evaluated by the distance difference between ground truths and DLM-derived results. Technical success for L3 slice selection was defined as a distance difference of <10 mm. Overall segmentation accuracy was evaluated by CSA error. The influence of anatomic variations on DLM performance was also evaluated. RESULTS In the internal and external validation datasets, the accuracy of automatic L3 slice selection was high, with mean distance differences of 3.7±8.4 mm and 4.1±8.3 mm and technical success rates of 93.1% and 92.3%, respectively. However, in the subgroup analysis of anatomic variations, L3 slice selection accuracy decreased, with distance differences of 12.4±15.4 mm and 12.1±14.6 mm and technical success rates of 67.2% and 67.9%, respectively. The overall segmentation accuracy for abdominal muscle areas was excellent regardless of anatomic variation, with CSA errors of 1.38–3.10 cm².
CONCLUSIONS A fully automatic system was developed for the selection of an exact axial CT slice at the L3 vertebral level and the segmentation of abdominal muscle areas.
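Technical success in this study is a simple thresholded distance criterion. A sketch of how the success rate could be computed from slice positions along the z-axis (the function name and example positions are hypothetical):

```python
def technical_success_rate(pred_z, true_z, threshold_mm=10.0):
    """Fraction of cases whose DLM-selected slice lies within
    threshold_mm of the ground-truth L3 position (both in mm)."""
    hits = sum(abs(p - t) < threshold_mm for p, t in zip(pred_z, true_z))
    return hits / len(pred_z)

# differences are 3, 5, 20, and 1 mm -> three of four within 10 mm
print(technical_success_rate([100, 205, 310, 455], [103, 200, 330, 454]))  # 0.75
```

A binary criterion like this complements the mean distance difference: a few large misses (as in the anatomic-variation subgroup) lower the success rate sharply even when most selections are close.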



2021 ◽  
Author(s):  
Wing Keung Cheung ◽  
Robert Bell ◽  
Arjun Nair ◽  
Leon Menezies ◽  
Riyaz Patel ◽  
...  

Abstract A fully automatic two-dimensional Unet model is proposed to segment the aorta and coronary arteries in computed tomography images. Two models are trained to segment two regions of interest: (1) the aorta and the coronary arteries, or (2) the coronary arteries alone. Our method achieves 91.20% and 88.80% dice similarity coefficient accuracy on regions of interest 1 and 2, respectively. Compared with a semi-automatic segmentation method, our model performs better when segmenting the coronary arteries alone. The performance of the proposed method is comparable to existing published two-dimensional or three-dimensional deep learning models. Furthermore, the algorithmic and graphical processing unit memory efficiencies are maintained such that the model can be deployed within hospital computer networks where graphical processing units are typically not available.
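One reason a two-dimensional model keeps graphical processing unit memory low is that a three-dimensional CT volume can be processed one axial slice at a time, so only a single slice is resident during inference. A hypothetical sketch, with a thresholding placeholder standing in for the trained Unet forward pass:

```python
def segment_slice(slice2d):
    # placeholder for the 2D Unet forward pass: simple thresholding
    return [[1 if v > 0.5 else 0 for v in row] for row in slice2d]

def segment_volume(volume):
    """volume: list of 2D axial slices. Slices are segmented one at a
    time, so peak memory is one slice rather than the whole 3D volume."""
    return [segment_slice(s) for s in volume]

# a tiny two-slice "volume" of intensities
mask = segment_volume([[[0.9, 0.1]], [[0.2, 0.7]]])
print(mask)  # [[[1, 0]], [[0, 1]]]
```

The trade-off is that a purely slice-wise model cannot use inter-slice context, which is one reason the paper compares against three-dimensional models.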


2021 ◽  
Vol 68 (2) ◽  
pp. 2451-2467
Author(s):  
Javaria Amin ◽  
Muhammad Sharif ◽  
Muhammad Almas Anjum ◽  
Yunyoung Nam ◽  
Seifedine Kadry ◽  
...  

2020 ◽  
Vol 127 (Suppl_1) ◽  
Author(s):  
Bryant M Baldwin ◽  
Shane Joseph ◽  
Xiaodong Zhong ◽  
Ranya Kakish ◽  
Cherie Revere ◽  
...  

This study investigated MRI and semantic segmentation-based deep-learning (SSDL) automation for left-ventricular chamber quantifications (LVCQ) and left-ventricular longitudinal strain (LLS) determination, thus eliminating user bias by providing an automated tool to detect cardiotoxicity (CT) in breast cancer patients treated with antineoplastic agents. Displacement Encoding with Stimulated Echoes (DENSE) myocardial images from 26 patients were analyzed with the tool's Convolutional Neural Network with an underlying ResNet-50 architecture. Quantifications based on the SSDL tool's output were for LV end-diastolic diameter (LVEDD), ejection fraction (LVEF), and mass (LVM) (see figure for phase sequence). LLS was analyzed with the Radial Point Interpolation Method (RPIM) using DENSE phase-based displacements. LVCQs were validated by comparison to measurements obtained with an existing semi-automated vendor tool (VT), and strains by 2 independent users, employing Bland-Altman analysis (BAA) and intraclass correlation coefficients estimated with the Cronbach's Alpha (C-Alpha) index. The F1 score for classification accuracy was 0.92. LVCQs determined by SSDL and VT were 4.6 ± 0.5 vs 4.6 ± 0.7 cm (C-Alpha = 0.93 and BAA = 0.5 ± 0.5 cm) for LVEDD, 58 ± 5 vs 58 ± 6 % (0.90, 1 ± 5%) for LVEF, 119 ± 17 vs 121 ± 14 g (0.93, 5 ± 8 g) for LV mass, while LLS was 14 ± 4 vs 14 ± 3 % (0.86, 0.2 ± 6%). Hence, equivalent LV dimensions, mass, and strains measured by the VT and DENSE imaging validate our automated analytic tool. Longitudinal strains in patients can then be analyzed without user bias to detect abnormalities indicating cardiotoxicity and the need for therapeutic intervention even if LVEF is not affected.
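Bland-Altman analysis summarizes agreement between two measurement methods as the mean difference (bias) and the 95% limits of agreement around it. A minimal sketch with made-up LVEF readings (not study data):

```python
from statistics import mean, stdev

def bland_altman(a, b):
    """Bias and 95% limits of agreement between paired measurements."""
    diffs = [x - y for x, y in zip(a, b)]
    bias = mean(diffs)
    spread = 1.96 * stdev(diffs)          # 95% limits assume ~normal diffs
    return bias, bias - spread, bias + spread

lvef_tool = [58, 60, 55, 57]    # hypothetical SSDL readings (%)
lvef_vendor = [57, 61, 54, 58]  # hypothetical vendor-tool readings (%)
bias, lo, hi = bland_altman(lvef_tool, lvef_vendor)
# zero bias with narrow limits indicates the two tools agree closely
```

The study reports agreement in exactly this "bias ± spread" form, e.g. 1 ± 5% for LVEF.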


2019 ◽  
Vol 25 (6) ◽  
pp. 954-961 ◽  
Author(s):  
Diego Ardila ◽  
Atilla P. Kiraly ◽  
Sujeeth Bharadwaj ◽  
Bokyung Choi ◽  
Joshua J. Reicher ◽  
...  

Author(s):  
Vitoantonio Bevilacqua ◽  
Antonio Brunetti ◽  
Giacomo Donato Cascarano ◽  
Andrea Guerriero ◽  
Francesco Pesce ◽  
...  

Abstract Background The automatic segmentation of kidneys in medical images is not a trivial task when the subjects undergoing the medical examination are affected by Autosomal Dominant Polycystic Kidney Disease (ADPKD). Several works dealing with the segmentation of Computed Tomography images from pathological subjects have been proposed, but they either involve a highly invasive examination or require user interaction to perform the segmentation. In this work, we propose a fully automated approach for the segmentation of Magnetic Resonance images, both reducing the invasiveness of the acquisition device and removing the need for any user interaction during segmentation. Methods Two different approaches are proposed, based on Deep Learning architectures using Convolutional Neural Networks (CNN) for the semantic segmentation of images, without needing to extract any hand-crafted features. In detail, the first approach performs automatic segmentation of the images without any pre-processing of the input. Conversely, the second approach follows a two-step classification strategy: a first CNN automatically detects Regions Of Interest (ROIs), and a subsequent classifier performs semantic segmentation on the previously extracted ROIs. Results The results show that even though ROI detection yields a high number of false positives overall, the subsequent semantic segmentation on the extracted ROIs achieves high performance in terms of mean Accuracy. However, segmenting the entire input images remains the more accurate and reliable approach, showing better performance than the two-step strategy. Conclusion The obtained results show that both investigated approaches are reliable for the semantic segmentation of polycystic kidneys, since both strategies reach an Accuracy higher than 85%. Also, both methodologies show performance comparable and consistent with other approaches in the literature that work on images from different sources, reducing both the invasiveness of the analyses and the user interaction needed to perform the segmentation task.
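The two-step strategy can be pictured as a detector that proposes a bounding box followed by a segmenter restricted to that box. A hypothetical sketch in which a coarse detection mask stands in for the first CNN and thresholding stands in for the second:

```python
def bbox_from_mask(coarse):
    """Tight bounding box (row0, row1, col0, col1) around a nonempty
    coarse detection mask; stands in for the ROI-detection CNN."""
    rows = [i for i, row in enumerate(coarse) if any(row)]
    cols = [j for row in coarse for j, v in enumerate(row) if v]
    return min(rows), max(rows) + 1, min(cols), max(cols) + 1

def segment_in_roi(image, coarse):
    """Run the (placeholder, thresholding) segmenter only inside the
    ROI; everything outside the box is left as background."""
    r0, r1, c0, c1 = bbox_from_mask(coarse)
    mask = [[0] * len(image[0]) for _ in image]
    for i in range(r0, r1):
        for j in range(c0, c1):
            mask[i][j] = 1 if image[i][j] > 0.5 else 0
    return mask

image = [[0.9, 0.1, 0.0],
         [0.8, 0.7, 0.0],
         [0.0, 0.0, 0.9]]
coarse = [[1, 1, 0],
          [1, 1, 0],
          [0, 0, 0]]
print(segment_in_roi(image, coarse))
# the bright pixel at (2, 2) lies outside the ROI and is ignored
```

The sketch also illustrates the failure mode the abstract reports: anything the detector misses (or spuriously includes) constrains what the second stage can segment, which is why whole-image segmentation proved more reliable.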

