A Fully Automatic Deep Learning System for L3 Slice Selection and Body Composition Assessment on Abdominal Computed Tomography. (Preprint)

2021 ◽  
Author(s):  
Jiyeon Ha ◽  
Taeyong Park ◽  
Hong-Kyu Kim ◽  
Youngbin Shin ◽  
Yousun Ko ◽  
...  

BACKGROUND As sarcopenia research has been gaining emphasis, the need for quantification of abdominal muscle on computed tomography (CT) is increasing. Thus, a fully automated system that selects the L3 slice and segments the muscle in an end-to-end manner is needed. OBJECTIVE We aimed to develop a deep learning model (DLM) to select the L3 slice with consideration of anatomic variations and to segment cross-sectional areas (CSAs) of abdominal muscle and fat. METHODS Our DLM, named L3SEG-net, was composed of a YOLOv3-based algorithm for selecting the L3 slice and a fully convolutional network (FCN)-based algorithm for segmentation. The YOLOv3-based algorithm was developed via supervised learning using a training dataset (n=922), and the FCN-based algorithm was transferred from prior work. L3SEG-net was validated with internal (n=496) and external (n=586) validation datasets. L3 slice selection accuracy was evaluated by the distance difference between ground truths and DLM-derived results. Technical success for L3 slice selection was defined as a distance difference of <10 mm. Overall segmentation accuracy was evaluated by CSA error. The influence of anatomic variations on DLM performance was also evaluated. RESULTS In the internal and external validation datasets, the accuracy of automatic L3 slice selection was high, with mean distance differences of 3.7±8.4 mm and 4.1±8.3 mm and technical success rates of 93.1% and 92.3%, respectively. However, in the subgroup analysis of anatomic variations, L3 slice selection accuracy decreased, with distance differences of 12.4±15.4 mm and 12.1±14.6 mm and technical success rates of 67.2% and 67.9%, respectively. The overall segmentation accuracy of abdominal muscle areas was excellent regardless of anatomic variation, with CSA errors of 1.38–3.10 cm². CONCLUSIONS A fully automatic system was developed for the selection of the exact axial CT slice at the L3 vertebral level and the segmentation of abdominal muscle areas.
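The slice-selection metrics above are straightforward to reproduce from predicted and ground-truth L3 slice positions. The following is a minimal sketch, not the authors' code: only the 10 mm technical-success threshold comes from the abstract, while the function name and the toy inputs are illustrative assumptions.

```python
import numpy as np

def l3_selection_metrics(pred_z_mm, gt_z_mm, success_threshold_mm=10.0):
    """Evaluate L3 slice selection by distance difference and technical success.

    pred_z_mm, gt_z_mm: predicted / ground-truth L3 slice positions along the
    craniocaudal axis, in millimetres.
    """
    pred_z_mm = np.asarray(pred_z_mm, dtype=float)
    gt_z_mm = np.asarray(gt_z_mm, dtype=float)

    dist = np.abs(pred_z_mm - gt_z_mm)        # per-case distance difference
    success = dist < success_threshold_mm     # technical success (<10 mm)

    return {
        "mean_distance_mm": dist.mean(),
        "sd_distance_mm": dist.std(ddof=1),
        "technical_success_rate": success.mean(),
    }

# Hypothetical usage with toy values (not study data):
print(l3_selection_metrics([101.2, 95.0, 130.4], [100.0, 98.5, 115.0]))
```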


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Jiyeon Ha ◽  
Taeyong Park ◽  
Hong-Kyu Kim ◽  
Youngbin Shin ◽  
Yousun Ko ◽  
...  

Abstract As sarcopenia research has been gaining emphasis, the need for quantification of abdominal muscle on computed tomography (CT) is increasing. Thus, a fully automated system that selects the L3 slice and segments the muscle in an end-to-end manner is needed. We aimed to develop a deep learning model (DLM) to select the L3 slice with consideration of anatomic variations and to segment cross-sectional areas (CSAs) of abdominal muscle and fat. Our DLM, named L3SEG-net, was composed of a YOLOv3-based algorithm for selecting the L3 slice and a fully convolutional network (FCN)-based algorithm for segmentation. The YOLOv3-based algorithm was developed via supervised learning using a training dataset (n = 922), and the FCN-based algorithm was transferred from prior work. L3SEG-net was validated with internal (n = 496) and external (n = 586) validation datasets. The ground-truth L3-level CT slice and anatomic variations were identified by a board-certified radiologist. L3 slice selection accuracy was evaluated by the distance difference between ground truths and DLM-derived results. Technical success for L3 slice selection was defined as a distance difference of < 10 mm. Overall segmentation accuracy was evaluated by CSA error and the Dice similarity coefficient (DSC). The influence of anatomic variations on DLM performance was also evaluated. In the internal and external validation datasets, the accuracy of automatic L3 slice selection was high, with mean distance differences of 3.7 ± 8.4 mm and 4.1 ± 8.3 mm and technical success rates of 93.1% and 92.3%, respectively. However, in the subgroup analysis of anatomic variations, L3 slice selection accuracy decreased, with distance differences of 12.4 ± 15.4 mm and 12.1 ± 14.6 mm and technical success rates of 67.2% and 67.9%, respectively. The overall segmentation accuracy of abdominal muscle areas was excellent regardless of anatomic variation, with CSA errors of 1.38–3.10 cm². A fully automatic system was developed for the selection of the exact axial CT slice at the L3 vertebral level and the segmentation of abdominal muscle areas.
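For context, the CSA error and DSC used above can be computed directly from binary masks and the CT pixel spacing. The snippet below is an illustrative sketch rather than the study's code; the tiny 4 × 4 masks and the 0.7 mm pixel spacing are assumptions.

```python
import numpy as np

def csa_cm2(mask, pixel_spacing_mm):
    """Cross-sectional area of a binary mask in cm^2."""
    pixel_area_mm2 = pixel_spacing_mm[0] * pixel_spacing_mm[1]
    return mask.sum() * pixel_area_mm2 / 100.0   # 100 mm^2 = 1 cm^2

def dsc(pred, ref):
    """Dice similarity coefficient between two binary masks."""
    pred, ref = pred.astype(bool), ref.astype(bool)
    denom = pred.sum() + ref.sum()
    return 2.0 * np.logical_and(pred, ref).sum() / denom if denom else 1.0

# Toy masks with an assumed 0.7 x 0.7 mm pixel spacing (not study data).
ref = np.zeros((4, 4), dtype=bool)
ref[1:3, 1:3] = True
pred = np.zeros((4, 4), dtype=bool)
pred[1:3, 1:4] = True
print("CSA error (cm^2):", abs(csa_cm2(pred, (0.7, 0.7)) - csa_cm2(ref, (0.7, 0.7))))
print("DSC:", dsc(pred, ref))
```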


2021 ◽  
Author(s):  
Sang-Heon Lim ◽  
Young Jae Kim ◽  
Yeon-Ho Park ◽  
Doojin Kim ◽  
Kwang Gi Kim ◽  
...  

Abstract Pancreas segmentation is necessary for observing lesions, analyzing anatomical structures, and predicting patient prognosis. Various studies have therefore designed segmentation models based on convolutional neural networks for pancreas segmentation. However, the deep learning approach is limited by a lack of data, and studies conducted on large computed tomography datasets are scarce. This study therefore aims to perform deep-learning-based semantic segmentation on 1,006 participants and to evaluate automatic pancreas segmentation performance with four individual three-dimensional segmentation networks. We performed internal validation with 1,006 patients and external validation using The Cancer Imaging Archive (TCIA) pancreas dataset. For the best-performing of the four deep learning networks, we obtained mean precision, recall, and Dice similarity coefficients of 0.869, 0.842, and 0.842, respectively, on internal validation. On the external dataset, this network achieved mean precision, recall, and Dice similarity coefficients of 0.779, 0.749, and 0.735, respectively. We expect that generalized deep-learning-based systems can assist clinical decisions by providing accurate pancreas segmentation and quantitative information about the pancreas on abdominal computed tomography.
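As a reference for the cohort-level metrics reported above, a per-case computation of precision, recall and Dice from 3D binary masks, averaged over patients, might look like the sketch below; the function names and toy volumes are assumptions, not the study's code.

```python
import numpy as np

def case_metrics(pred, ref):
    """Precision, recall and Dice for one 3D binary segmentation."""
    pred, ref = pred.astype(bool), ref.astype(bool)
    tp = np.logical_and(pred, ref).sum()
    fp = np.logical_and(pred, ~ref).sum()
    fn = np.logical_and(~pred, ref).sum()
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    dice = 2 * tp / (2 * tp + fp + fn) if (2 * tp + fp + fn) else 1.0
    return precision, recall, dice

def cohort_means(cases):
    """Mean precision, recall and Dice over (prediction, reference) pairs."""
    return np.mean([case_metrics(p, r) for p, r in cases], axis=0)

# Toy cohort of two tiny volumes standing in for CT-sized masks.
rng = np.random.default_rng(0)
cases = [(rng.random((8, 8, 8)) > 0.5, rng.random((8, 8, 8)) > 0.5) for _ in range(2)]
print(cohort_means(cases))
```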


2021 ◽  
Author(s):  
Hoon Ko ◽  
Jimi Huh ◽  
Kyung Won Kim ◽  
Heewon Chung ◽  
Yousun Ko ◽  
...  

BACKGROUND Detection and quantification of intraabdominal free fluid (i.e., ascites) on computed tomography (CT) are essential for identifying emergent or urgent conditions in patients. In the emergency department, automatic detection and quantification of ascites would be beneficial. OBJECTIVE We aimed to develop an artificial intelligence (AI) algorithm for the simultaneous automatic detection and quantification of ascites using a single deep learning model (DLM). METHODS 2D deep learning models (DLMs) based on a deep residual U-Net, U-Net, bidirectional U-Net, and recurrent residual U-Net were developed to segment areas of ascites on abdominopelvic CT. Based on the segmentation results, the DLMs detected ascites by classifying CT images into ascites and non-ascites images. The AI algorithms were trained using 6,337 CT images from 160 subjects (80 with ascites and 80 without) and tested using 1,635 CT images from 40 subjects (20 with ascites and 20 without). The performance of the AI algorithms was evaluated in terms of diagnostic accuracy for ascites detection and segmentation accuracy for ascites areas. Of these DLMs, we propose the one with the best performance as our AI algorithm. RESULTS Segmentation accuracy was highest for the deep residual U-Net, with a mean intersection over union (mIoU) of 0.87, followed by the U-Net, bidirectional U-Net, and recurrent residual U-Net (mIoU values of 0.80, 0.77, and 0.67, respectively). Detection accuracy was also highest for the deep residual U-Net (0.96), followed by the U-Net, bidirectional U-Net, and recurrent residual U-Net (0.90, 0.88, and 0.82, respectively). The deep residual U-Net also achieved high sensitivity (0.96) and high specificity (0.96). CONCLUSIONS We propose a deep residual U-Net-based AI algorithm for the automatic detection and quantification of ascites on abdominopelvic CT, which provides excellent performance.
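The image-level detection step above is derived from the segmentation output. A minimal sketch of the two evaluation pieces, per-image IoU and a segmentation-to-classification rule, is given below; the pixel-count threshold is an assumption, since the abstract does not describe how the image-level label is obtained from the mask.

```python
import numpy as np

def iou(pred, ref):
    """Intersection over union for one image's binary masks."""
    pred, ref = pred.astype(bool), ref.astype(bool)
    union = np.logical_or(pred, ref).sum()
    return np.logical_and(pred, ref).sum() / union if union else 1.0

def mean_iou(pairs):
    """mIoU over a set of (prediction, reference) mask pairs."""
    return float(np.mean([iou(p, r) for p, r in pairs]))

def has_ascites(pred_mask, min_pixels=50):
    """Image-level label derived from the segmentation output.

    min_pixels is an illustrative threshold only; the abstract does not
    state how segmented areas are converted into an image-level decision.
    """
    return int(pred_mask.sum() >= min_pixels)
```

Deriving the image-level label from the segmentation map means a single model handles both detection and quantification, consistent with the single-DLM design described in the abstract.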


2020 ◽  
Vol 56 (2) ◽  
pp. 2000775 ◽  
Author(s):  
Shuo Wang ◽  
Yunfei Zha ◽  
Weimin Li ◽  
Qingxia Wu ◽  
Xiaohu Li ◽  
...  

Coronavirus disease 2019 (COVID-19) has spread globally, and medical resources have become insufficient in many regions. Fast diagnosis of COVID-19 and identification of high-risk patients with a worse prognosis are important for early prevention and medical resource optimisation. Here, we proposed a fully automatic deep learning system for COVID-19 diagnostic and prognostic analysis using routinely acquired computed tomography. We retrospectively collected computed tomography images from 5372 patients across seven cities or provinces. First, images from 4106 patients were used to pre-train the deep learning system, allowing it to learn lung features. Following this, 1266 patients (924 with COVID-19, of whom 471 had follow-up for >5 days, and 342 with other pneumonia) from six cities or provinces were enrolled to train and externally validate the performance of the deep learning system. In the four external validation sets, the deep learning system achieved good performance in identifying COVID-19 from other pneumonia (AUC 0.87 and 0.88, respectively) and from viral pneumonia (AUC 0.86). Moreover, the deep learning system successfully stratified patients into high- and low-risk groups whose hospital-stay times differed significantly (p=0.013 and p=0.014, respectively). Without human assistance, the deep learning system automatically focused on abnormal areas that showed characteristics consistent with reported radiological findings. Deep learning provides a convenient tool for fast screening of COVID-19 and identification of potential high-risk patients, which may be helpful for medical resource optimisation and early prevention before patients show severe symptoms.
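The two evaluation tasks above, diagnostic discrimination (AUC) and prognostic stratification, can be sketched as follows. This is illustrative only: the scores, hospital-stay values and group split are invented, and the Mann-Whitney U comparison is an assumption, since the abstract reports p-values but does not name the statistical test.

```python
import numpy as np
from sklearn.metrics import roc_auc_score
from scipy.stats import mannwhitneyu

# Hypothetical model outputs (not study data).
y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])          # 1 = COVID-19, 0 = other pneumonia
y_score = np.array([0.9, 0.2, 0.7, 0.6, 0.4, 0.1, 0.8, 0.3])
print("AUC:", roc_auc_score(y_true, y_score))

# Prognostic stratification: split patients by the median risk score and
# compare hospital-stay time between the high- and low-risk groups.
risk = np.array([0.8, 0.3, 0.6, 0.9, 0.2, 0.7])
stay_days = np.array([21, 9, 15, 25, 8, 18])
high = risk >= np.median(risk)
stat, p = mannwhitneyu(stay_days[high], stay_days[~high])
print("p-value for hospital-stay difference:", p)
```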


Author(s):  
Nina Montaña-Brown ◽  
João Ramalhinho ◽  
Moustafa Allam ◽  
Brian Davidson ◽  
Yipeng Hu ◽  
...  

Abstract Purpose: Registration of laparoscopic ultrasound (LUS) to a pre-operative scan such as computed tomography (CT) using blood vessel information has been proposed as a method to enable image guidance for laparoscopic liver resection. Current solutions to this problem can potentially enable clinical translation by bypassing the need for manual initialisation and tracking information. However, no reliable framework for the segmentation of vessels in 2D untracked LUS images has been presented. Methods: We propose the use of a 2D UNet for the segmentation of liver vessels in 2D LUS images. We integrate these results into a previously developed registration method and show the feasibility of a fully automatic initialisation for the LUS-to-CT registration problem without a tracking device. Results: We validate our segmentation using LUS data from 6 patients. We test multiple models by assigning patient datasets to different combinations of training, testing and hold-out sets, and obtain mean Dice scores ranging from 0.543 to 0.706. Using these segmentations, we obtain registration accuracies between 6.3 and 16.6 mm in 50% of cases. Conclusions: We demonstrate the first instance of deep learning (DL) for the segmentation of liver vessels in LUS. Our results show the feasibility of the UNet in detecting multiple vessel instances in 2D LUS images, and of potentially automating an LUS-to-CT registration pipeline.
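To make the segmentation component concrete, the sketch below shows a deliberately small 2D U-Net in PyTorch for single-channel ultrasound frames. It is an illustrative stand-in only, not the architecture, channel widths or training setup used by the authors.

```python
import torch
import torch.nn as nn

class TinyUNet(nn.Module):
    """A small two-level U-Net sketch for 2D vessel segmentation."""
    def __init__(self, in_ch=1, out_ch=1, base=16):
        super().__init__()
        def block(ci, co):
            return nn.Sequential(
                nn.Conv2d(ci, co, 3, padding=1), nn.ReLU(inplace=True),
                nn.Conv2d(co, co, 3, padding=1), nn.ReLU(inplace=True))
        self.enc1 = block(in_ch, base)
        self.enc2 = block(base, base * 2)
        self.pool = nn.MaxPool2d(2)
        self.bott = block(base * 2, base * 4)
        self.up2 = nn.ConvTranspose2d(base * 4, base * 2, 2, stride=2)
        self.dec2 = block(base * 4, base * 2)
        self.up1 = nn.ConvTranspose2d(base * 2, base, 2, stride=2)
        self.dec1 = block(base * 2, base)
        self.head = nn.Conv2d(base, out_ch, 1)

    def forward(self, x):
        e1 = self.enc1(x)                                   # full resolution
        e2 = self.enc2(self.pool(e1))                       # 1/2 resolution
        b = self.bott(self.pool(e2))                        # 1/4 resolution
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1)) # skip connection
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        return torch.sigmoid(self.head(d1))                 # per-pixel vessel probability

# One forward pass on a dummy 256 x 256 LUS frame.
print(TinyUNet()(torch.zeros(1, 1, 256, 256)).shape)
```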


2021 ◽  
Author(s):  
Dong Chuang Guo ◽  
Jun Gu ◽  
Jian He ◽  
Hai Rui Chu ◽  
Na Dong ◽  
...  

Abstract Background: Hematoma expansion is an independent predictor of patient outcome and mortality, and its early diagnosis is crucial for selecting clinical treatment options. This study aims to explore the value of a deep learning algorithm for predicting hematoma expansion from noncontrast computed tomography (NCCT) scans through external validation. Methods: 102 NCCT images of patients with hypertensive intracerebral hemorrhage (HICH) diagnosed in our hospital were retrospectively reviewed. The initial computed tomography (CT) scan images were evaluated by commercial artificial intelligence (AI) software using a deep learning algorithm and by radiologists, respectively, to predict hematoma expansion, and the corresponding sensitivity and specificity of the two groups were calculated and compared. Pair-wise comparisons were conducted among the gold-standard hematoma expansion diagnosis time, the AI software diagnosis time, and the doctors' reading time. Results: Among the 102 HICH patients, the sensitivity, specificity, and accuracy of predicting hematoma expansion were higher in the AI group than in the doctor group (80.0% vs 66.7%, 73.6% vs 58.3%, and 75.5% vs 60.8%, respectively), with statistically significant differences (p<0.05). The AI diagnosis time (2.8 ± 0.3 s) and the doctors' diagnosis time (11.7 ± 0.3 s) were both significantly shorter than the gold-standard diagnosis time (14.5 ± 8.8 h) (p<0.05), and the AI diagnosis time was significantly shorter than that of the doctors (p<0.05). Conclusions: A deep learning algorithm could effectively predict hematoma expansion at an early stage from the initial CT scan images of HICH patients after onset, with high sensitivity and specificity and a greatly shortened diagnosis time, providing a new, accurate, easy-to-use, and fast method for the early prediction of hematoma expansion.
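For the pair-wise timing comparison described above, a hedged sketch is shown below. The per-case times are invented toy values, and the paired t-test is an assumption, since the abstract reports the comparisons but does not name the statistical test used.

```python
import numpy as np
from scipy.stats import ttest_rel

# Hypothetical per-case diagnosis times in seconds (not study data).
ai_time = np.array([2.6, 2.9, 2.7, 3.0, 2.8])
doctor_time = np.array([11.4, 12.0, 11.6, 11.9, 11.7])

# Paired comparison of AI vs. radiologist reading time on the same cases;
# a paired t-test is used here purely as an illustration.
stat, p = ttest_rel(ai_time, doctor_time)
print(f"mean AI {ai_time.mean():.1f}s vs doctor {doctor_time.mean():.1f}s, p = {p:.4f}")
```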

