A deep Residual U-Net algorithm for automatic detection and quantification of ascites on the abdominopelvic computed tomography acquired in the emergency department (Preprint)

2021 ◽  
Author(s):  
Hoon Ko ◽  
Jimi Huh ◽  
Kyung Won Kim ◽  
Heewon Chung ◽  
Yousun Ko ◽  
...  

BACKGROUND Detection and quantification of intraabdominal free fluid (i.e., ascites) on computed tomography (CT) are essential for identifying emergent or urgent conditions in patients. In an emergency department, automatic detection and quantification of ascites would be beneficial. OBJECTIVE We aimed to develop an artificial intelligence (AI) algorithm for the simultaneous automatic detection and quantification of ascites using a single deep learning model (DLM). METHODS 2D deep learning models (DLMs) based on a deep residual U-Net, U-Net, bi-directional U-Net, and recurrent residual U-Net were developed to segment areas of ascites on abdominopelvic CT. Based on the segmentation results, the DLMs detected ascites by classifying CT images into ascites and non-ascites images. The AI algorithms were trained using 6,337 CT images from 160 subjects (80 with ascites and 80 without) and tested using 1,635 CT images from 40 subjects (20 with ascites and 20 without). The performance of the AI algorithms was evaluated for diagnostic accuracy of ascites detection and for segmentation accuracy of ascites areas. Of these DLMs, we selected the one with the best performance as our proposed AI algorithm. RESULTS Segmentation accuracy was highest for the deep residual U-Net, with a mean intersection over union (mIoU) of 0.87, followed by U-Net, bi-directional U-Net, and recurrent residual U-Net (mIoU 0.80, 0.77, and 0.67, respectively). Detection accuracy was also highest for the deep residual U-Net (0.96), followed by U-Net, bi-directional U-Net, and recurrent residual U-Net (0.90, 0.88, and 0.82, respectively). The deep residual U-Net also achieved high sensitivity (0.96) and high specificity (0.96). CONCLUSIONS We propose a deep residual U-Net-based AI algorithm for automatic detection and quantification of ascites on abdominopelvic CT scans, which provides excellent performance.
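The mIoU metric used above to score segmentation can be sketched as follows — a minimal illustration over binary masks represented as sets of pixel coordinates, with toy data standing in for ascites segmentations (not the study's code):

```python
def iou(pred: set, gt: set) -> float:
    """Intersection over union of two masks given as sets of (row, col) pixels."""
    union = pred | gt
    return len(pred & gt) / len(union) if union else 1.0  # two empty masks agree

def mean_iou(pairs) -> float:
    """Mean IoU over (prediction, ground-truth) mask pairs."""
    return sum(iou(p, g) for p, g in pairs) / len(pairs)

# Toy masks standing in for ascites segmentations on two CT slices
gt1 = {(1, 1), (1, 2), (2, 1), (2, 2)}
pred1 = {(1, 1), (1, 2), (2, 1), (2, 2), (1, 3), (2, 3)}  # over-segments by 2 pixels
gt2 = {(0, 0), (0, 1)}
pred2 = {(0, 0), (0, 1)}                                  # exact match
print(mean_iou([(pred1, gt1), (pred2, gt2)]))  # (4/6 + 1) / 2 ≈ 0.833
```

A per-image mean like this is one common convention; averaging intersections and unions over the whole dataset before dividing is another, and the two can differ on small lesions.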

2020 ◽  
Author(s):  
Jinseok Lee

BACKGROUND The coronavirus disease (COVID-19) has spread explosively worldwide since the beginning of 2020. According to a multinational consensus statement from the Fleischner Society, computed tomography (CT) can be used as a relevant screening tool owing to its higher sensitivity for detecting early pneumonic changes. However, physicians are extremely busy fighting COVID-19 in this era of worldwide crisis. Thus, it is crucial to accelerate the development of an artificial intelligence (AI) diagnostic tool to support physicians. OBJECTIVE We aimed to rapidly develop an AI technique to diagnose COVID-19 pneumonia on CT and differentiate it from non-COVID pneumonia and non-pneumonia diseases. METHODS A simple 2D deep learning framework, named the fast-track COVID-19 classification network (FCONet), was developed to diagnose COVID-19 pneumonia from a single chest CT image. FCONet was developed by transfer learning, using one of four state-of-the-art pre-trained deep learning models (VGG16, ResNet50, InceptionV3, or Xception) as a backbone. For training and testing of FCONet, we collected 3,993 chest CT images of patients with COVID-19 pneumonia, other pneumonia, and non-pneumonia diseases from Wonkwang University Hospital, Chonnam National University Hospital, and the Italian Society of Medical and Interventional Radiology public database. These CT images were split into training and testing sets at a ratio of 8:2. On the test dataset, the diagnostic performance for COVID-19 pneumonia was compared among the four pre-trained FCONet models. In addition, we tested the FCONet models on an external testing dataset extracted from the embedded low-quality chest CT images of COVID-19 pneumonia in recently published papers.
RESULTS Of the four pre-trained FCONet models, ResNet50 showed excellent diagnostic performance (sensitivity 99.58%, specificity 100%, and accuracy 99.87%) and outperformed the other three pre-trained models on the testing dataset. On the additional external testing dataset of low-quality CT images, the detection accuracy of the ResNet50 model was the highest (96.97%), followed by Xception, InceptionV3, and VGG16 (90.71%, 89.38%, and 87.12%, respectively). CONCLUSIONS FCONet, a simple 2D deep learning framework based on a single chest CT image, provides excellent diagnostic performance in detecting COVID-19 pneumonia. Based on our testing dataset, the ResNet50-based FCONet may be the best model, as it outperformed the FCONet models based on VGG16, Xception, and InceptionV3.
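The sensitivity, specificity, and accuracy figures above follow from the standard confusion-matrix definitions; a minimal sketch with hypothetical counts (not the study's actual tallies):

```python
def diagnostic_metrics(tp: int, fp: int, tn: int, fn: int) -> dict:
    """Sensitivity, specificity, and accuracy from confusion-matrix counts."""
    return {
        "sensitivity": tp / (tp + fn),               # true-positive rate on disease cases
        "specificity": tn / (tn + fp),               # true-negative rate on non-disease cases
        "accuracy": (tp + tn) / (tp + fp + tn + fn), # overall fraction correct
    }

# Hypothetical counts chosen only to illustrate the arithmetic
m = diagnostic_metrics(tp=238, fp=0, tn=560, fn=1)
print(m["sensitivity"], m["specificity"], m["accuracy"])
```

Note that accuracy is sensitive to class balance, which is why the abstract reports all three figures rather than accuracy alone.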


2021 ◽  
Vol 137 ◽  
pp. 109582
Author(s):  
Suyon Chang ◽  
Hwiyoung Kim ◽  
Young Joo Suh ◽  
Dong Min Choi ◽  
Hyunghu Kim ◽  
...  

2021 ◽  
Author(s):  
Jiyeon Ha ◽  
Taeyong Park ◽  
Hong-Kyu Kim ◽  
Youngbin Shin ◽  
Yousun Ko ◽  
...  

BACKGROUND As sarcopenia research has gained emphasis, the need for quantification of abdominal muscle on computed tomography (CT) is increasing. Thus, a fully automated system that selects the L3 slice and segments muscle in an end-to-end manner is in demand. OBJECTIVE We aimed to develop a deep learning model (DLM) that selects the L3 slice with consideration of anatomic variations and segments cross-sectional areas (CSAs) of abdominal muscle and fat. METHODS Our DLM, named L3SEG-net, was composed of a YOLOv3-based algorithm for selecting the L3 slice and a fully convolutional network (FCN)-based algorithm for segmentation. The YOLOv3-based algorithm was developed via supervised learning using a training dataset (n=922), and the FCN-based algorithm was transferred from prior work. L3SEG-net was validated with internal (n=496) and external (n=586) validation datasets. L3 slice selection accuracy was evaluated by the distance difference between ground truths and DLM-derived results. Technical success for L3 slice selection was defined as a distance difference of <10 mm. Overall segmentation accuracy was evaluated by CSA error. The influence of anatomic variations on DLM performance was also evaluated. RESULTS In the internal and external validation datasets, the accuracy of automatic L3 slice selection was high, with mean distance differences of 3.7±8.4 mm and 4.1±8.3 mm and technical success rates of 93.1% and 92.3%, respectively. However, in the subgroup analysis of anatomic variations, L3 slice selection accuracy decreased, with distance differences of 12.4±15.4 mm and 12.1±14.6 mm and technical success rates of 67.2% and 67.9%, respectively. The overall segmentation accuracy of abdominal muscle areas was excellent regardless of anatomic variation, with CSA errors of 1.38–3.10 cm².
CONCLUSIONS A fully automatic system was developed for the selection of an exact axial CT slice at the L3 vertebral level and the segmentation of abdominal muscle areas.
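The slice-selection evaluation above (mean ± SD distance difference, plus a technical success rate at a 10 mm threshold) can be sketched as below, with hypothetical z-coordinates in place of the study's data:

```python
from statistics import mean, stdev

def slice_selection_stats(pred_z, gt_z, threshold_mm=10.0):
    """Absolute distance differences (mm) between predicted and ground-truth
    L3 slice positions, summarized as mean, SD, and technical success rate
    (the fraction of cases with difference < threshold)."""
    diffs = [abs(p - g) for p, g in zip(pred_z, gt_z)]
    success = sum(d < threshold_mm for d in diffs) / len(diffs)
    return mean(diffs), stdev(diffs), success

# Hypothetical z-coordinates (mm) of the selected slice for five scans
pred = [102.0, 250.5, 180.0, 96.0, 310.0]
gt = [100.0, 248.0, 195.0, 95.0, 309.0]
m, sd, rate = slice_selection_stats(pred, gt)
print(m, sd, rate)  # one failure (15 mm off) drops the success rate to 0.8
```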



2018 ◽  
Vol 69 (5) ◽  
pp. 739-747 ◽  
Author(s):  
Eui Jin Hwang ◽  
Sunggyun Park ◽  
Kwang-Nam Jin ◽  
Jung Im Kim ◽  
So Young Choi ◽  
...  

Abstract Background Detection of active pulmonary tuberculosis on chest radiographs (CRs) is critical for the diagnosis and screening of tuberculosis. An automated system may help streamline the tuberculosis screening process and improve diagnostic performance. Methods We developed a deep learning–based automatic detection (DLAD) algorithm using 54,221 normal CRs and 6,768 CRs with active pulmonary tuberculosis that were labeled and annotated by 13 board-certified radiologists. The performance of DLAD was validated using 6 external multicenter, multinational datasets. To compare the performance of DLAD with physicians, an observer performance test was conducted by 15 physicians including nonradiology physicians, board-certified radiologists, and thoracic radiologists. Image-wise classification and lesion-wise localization performances were measured using area under the receiver operating characteristic (ROC) curves and area under the alternative free-response ROC curves, respectively. Sensitivities and specificities of DLAD were calculated using 2 cutoffs (high sensitivity [98%] and high specificity [98%]) obtained through in-house validation. Results DLAD demonstrated classification performance of 0.977–1.000 and localization performance of 0.973–1.000. Sensitivities and specificities for classification were 94.3%–100% and 91.1%–100% using the high-sensitivity cutoff and 84.1%–99.0% and 99.1%–100% using the high-specificity cutoff. DLAD showed significantly higher performance in both classification (0.993 vs 0.746–0.971) and localization (0.993 vs 0.664–0.925) compared to all groups of physicians. Conclusions Our DLAD demonstrated excellent and consistent performance in the detection of active pulmonary tuberculosis on CRs, outperforming physicians, including thoracic radiologists.
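The two operating points described above (a high-sensitivity and a high-specificity cutoff, each chosen on in-house validation data) can be picked from model scores as sketched below — a toy illustration with made-up scores and a target lowered to 0.8 so the tiny sample works:

```python
def pick_cutoffs(scores, labels, target=0.98):
    """Pick two operating points from validation scores: the highest threshold
    keeping sensitivity >= target, and the lowest keeping specificity >= target."""
    pos = [s for s, y in zip(scores, labels) if y == 1]  # diseased cases
    neg = [s for s, y in zip(scores, labels) if y == 0]  # normal cases
    thresholds = sorted(set(scores))
    sens = lambda t: sum(s >= t for s in pos) / len(pos)
    spec = lambda t: sum(s < t for s in neg) / len(neg)
    hi_sens_cut = max(t for t in thresholds if sens(t) >= target)
    hi_spec_cut = min(t for t in thresholds if spec(t) >= target)
    return hi_sens_cut, hi_spec_cut

# Made-up validation scores and labels (1 = tuberculosis, 0 = normal)
scores = [0.9, 0.8, 0.7, 0.6, 0.3, 0.5, 0.4, 0.2, 0.1, 0.05]
labels = [1, 1, 1, 1, 1, 0, 0, 0, 0, 0]
print(pick_cutoffs(scores, labels, target=0.8))  # (0.6, 0.5)
```

The cutoffs are then frozen and applied unchanged to the external test sets, which is what makes the reported sensitivity/specificity ranges honest out-of-sample figures.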


2021 ◽  
Vol 2021 ◽  
pp. 1-11
Author(s):  
Jinzhou Wang ◽  
Xiangjun Shi ◽  
Xingchen Yao ◽  
Jie Ren ◽  
Xinru Du

Imaging examination plays an important role in the early diagnosis of myeloma. This study examined the segmentation performance of deep learning-based models on CT images for myeloma and the influence of different chemotherapy regimens on patient prognosis. Specifically, 186 patients with suspected myeloma were the research subjects. A modified U-Net model was used to segment the CT images, and a Faster region-based convolutional neural network (Faster RCNN) model was then used to label the lesions. Patients were divided into a bortezomib group (group 1, n = 128) and a non-bortezomib group (group 2, n = 58). The biochemical indexes, blood routine indexes, and skeletal muscle of the two groups were compared before and after chemotherapy. The results showed that the improved U-Net model achieved good segmentation, the Faster RCNN model labeled the lesion areas in the CT images, and the classification accuracy was as high as 99%. Compared with group 1, group 2 showed enlarged psoas major and erector spinae muscles after treatment and decreased bone marrow plasma cell content, blood M protein, urine 24-h light chain, pBNP, β-2 microglobulin (β2MG), ALP, and white blood cell (WBC) levels (P < 0.05). In conclusion, deep learning is suggested for the segmentation and classification of CT images in myeloma, as it can improve detection accuracy. Both chemotherapy regimens improved patient prognosis, but the effects of non-bortezomib chemotherapy were better.
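The between-group comparisons above (P < 0.05) rest on standard two-sample testing; a minimal sketch of Welch's t statistic with hypothetical index values (in practice, `scipy.stats.ttest_ind(..., equal_var=False)` would also return the P value):

```python
from math import sqrt
from statistics import mean, variance

def welch_t(a, b):
    """Welch's t statistic for two independent samples with unequal variances,
    e.g. a biochemical index in the bortezomib vs non-bortezomib groups."""
    return (mean(a) - mean(b)) / sqrt(variance(a) / len(a) + variance(b) / len(b))

# Hypothetical post-treatment index values (not the study's data)
group1 = [4.1, 3.8, 4.5, 4.0, 3.9]
group2 = [3.2, 3.0, 3.6, 3.1, 3.4]
print(welch_t(group1, group2))  # positive: group 1 mean exceeds group 2 mean
```

Welch's form is the safer default here because the two groups differ in size (n = 128 vs n = 58) and need not share a variance.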


2022 ◽  
Author(s):  
Vijay Kumar Gugulothu ◽  
Savadam Balaji

Abstract Detection of malignant lung nodules at an early stage may allow clinical interventions that increase the survival rate of lung cancer patients. The use of hybrid deep learning techniques to detect nodules can improve the sensitivity of lung cancer screening and the speed of interpreting lung scans. Accurate detection of lung nodules is an important step in computed tomography (CT) imaging for lung cancer detection. However, it is very difficult to identify true nodules owing to the diversity of lung nodules and the complexity of the surrounding environment. Here, we propose lung nodule detection and classification with CT images based on hybrid deep learning (LNDC-HDL) techniques. First, we introduce a chaotic bird swarm optimization (CBSO) algorithm for lung nodule segmentation using statistical information. Second, we illustrate an Improved Fish Bee (IFB) algorithm for the feature extraction and selection process. Third, we develop a hybrid classifier, the hybrid differential evolution-based neural network (HDE-NN), for tumor prediction and classification. Experimental results on CT images show that the HDE-NN structure increases sensitivity and reduces the number of false positives in detecting lung nodules, demonstrating its efficiency and importance. The proposed method shows that the benefits of HDE-NN nodule detection can be realized in combination with clinical practice.
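The HDE-NN classifier above builds on differential evolution, a population-based optimizer. The core mutation-crossover-selection loop can be sketched as follows — a generic DE/rand/1/bin on a toy objective, not the authors' HDE-NN (in their hybrid, the objective would be a network's training loss over its weights):

```python
import random

def differential_evolution(f, bounds, pop_size=20, F=0.8, CR=0.9, iters=100, seed=0):
    """Minimal DE loop: mutate with scaled difference vectors, crossover with
    probability CR, keep the trial only if it improves the objective."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    fit = [f(x) for x in pop]
    for _ in range(iters):
        for i in range(pop_size):
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            trial = [
                pop[a][d] + F * (pop[b][d] - pop[c][d])  # mutation
                if rng.random() < CR else pop[i][d]      # crossover
                for d in range(dim)
            ]
            trial = [min(max(v, lo), hi) for v, (lo, hi) in zip(trial, bounds)]
            ft = f(trial)
            if ft < fit[i]:  # greedy selection
                pop[i], fit[i] = trial, ft
    best = min(range(pop_size), key=fit.__getitem__)
    return pop[best], fit[best]

# Toy objective: the sphere function, whose minimum is 0 at the origin
x, fx = differential_evolution(lambda v: sum(t * t for t in v), [(-5, 5)] * 3)
print(fx)  # typically very close to 0 after 100 generations
```

Being gradient-free, DE can tune weights or hyperparameters where backpropagation alone struggles, which is the usual motivation for DE-neural hybrids.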

