SAUNet++: An automatic segmentation model of COVID-19 lesion from CT slices

Author(s):  
Hanguang XIAO ◽  
Zhiqiang RAN ◽  
Shingo MABU ◽  
Banglin ZHANG ◽  
Bolong ZHANG ◽  
...  

Abstract The coronavirus disease 2019 (COVID-19) epidemic has spread worldwide and the healthcare system is in crisis. Accurate, automated and rapid segmentation of COVID-19 lesions in computed tomography (CT) images can help doctors diagnose and provide prognostic information. However, the variety of lesions and the small regions of early lesions complicate their segmentation. To solve these problems, we propose a new SAUNet++ model with a squeeze excitation residual (SER) module and an atrous spatial pyramid pooling (ASPP) module. The SER module assigns more weight to more important channels and mitigates the vanishing gradient problem, while the ASPP module obtains context information through atrous convolution at various sampling rates. In addition, the generalized Dice loss (GDL), which reduces the correlation between lesion size and Dice loss, is introduced to address the segmentation of small regions. We collected multinational CT scan data from China, Italy and Russia and conducted extensive experiments in which SAUNet++ and GDL were compared to advanced segmentation models and popular loss functions, respectively. The experimental results demonstrate that our methods effectively improve the accuracy of COVID-19 lesion segmentation in terms of Dice similarity coefficient (ours: 87.38% vs. U-Net++: 86.08%), sensitivity (ours: 93.28% vs. 89.85%) and Hausdorff distance (ours: 19.99 mm vs. 27.69 mm).
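The generalized Dice loss (GDL) mentioned above weights each class by the inverse of its squared ground-truth volume so that small lesions contribute as much to the loss as large ones. Below is a minimal PyTorch sketch of GDL in the form commonly attributed to Sudre et al.; the `eps` stabilizer and the exact axes of reduction are assumptions, as the abstract does not spell out the formulation used in SAUNet++.

```python
import torch

def generalized_dice_loss(probs: torch.Tensor, target: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    """Generalized Dice loss for one-hot targets.

    probs:  (N, C, H, W) softmax probabilities
    target: (N, C, H, W) one-hot ground truth
    Classes are weighted by the inverse of their squared volume, which
    decouples the loss from lesion size.
    """
    target = target.float()
    dims = (0, 2, 3)                                # sum over batch and spatial axes
    w = 1.0 / (target.sum(dim=dims) ** 2 + eps)     # (C,) inverse squared class volume
    intersection = (probs * target).sum(dim=dims)   # (C,)
    cardinality = (probs + target).sum(dim=dims)    # (C,)
    return 1.0 - 2.0 * (w * intersection).sum() / ((w * cardinality).sum() + eps)
```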

Sensors ◽  
2021 ◽  
Vol 22 (1) ◽  
pp. 245
Author(s):  
Seok Oh ◽  
Young-Jae Kim ◽  
Young-Taek Park ◽  
Kwang-Gi Kim

The automatic segmentation of the pancreatic cyst lesion (PCL) is essential for the automated diagnosis of PCLs on endoscopic ultrasonography (EUS) images. In this study, we proposed a deep-learning approach for PCL segmentation on EUS images. We employed the Attention U-Net model for automatic PCL segmentation and compared it with the Basic U-Net, Residual U-Net, and U-Net++ models. The Attention U-Net showed better Dice similarity coefficient (DSC) and intersection over union (IoU) scores than the other models on the internal test. Although the Basic U-Net showed higher DSC and IoU scores on the external test than the Attention U-Net, there was no statistically significant difference. On the internal test of the cross-over study, the Attention U-Net showed the highest DSC and IoU scores; however, there was no significant difference between the Attention U-Net and Residual U-Net or between the Attention U-Net and U-Net++. On the external test of the cross-over study, the models showed no significant differences from each other. To the best of our knowledge, this is the first study to implement segmentation of PCLs on EUS images using a deep-learning approach. Our experimental results show that a deep-learning approach can be applied successfully for PCL segmentation on EUS images.
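For reference, the two metrics reported above reduce to simple set overlaps on binary masks. The sketch below is a generic NumPy implementation, not the evaluation code of the study; the `eps` term guarding against empty masks is an assumption.

```python
import numpy as np

def dice_and_iou(pred: np.ndarray, truth: np.ndarray, eps: float = 1e-8):
    """Dice similarity coefficient and intersection over union for binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    dice = 2.0 * intersection / (pred.sum() + truth.sum() + eps)
    iou = intersection / (np.logical_or(pred, truth).sum() + eps)
    return dice, iou
```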


PLoS ONE ◽  
2021 ◽  
Vol 16 (5) ◽  
pp. e0252287
Author(s):  
Yingjing Yan ◽  
Defu Zhang

In recent years, the rapid development of deep neural networks has brought great progress in automatic organ segmentation from abdominal CT scans. However, automatic segmentation of small organs (e.g., the pancreas) is still a challenging task. As an inconspicuous and small organ in the abdomen, the pancreas has a high degree of anatomical variability and is hard to distinguish from the surrounding organs and tissues, which usually leads to a very vague boundary. Therefore, the accuracy of pancreatic segmentation is often unsatisfactory. In this paper, we propose a 2.5D U-Net with an attention mechanism. The proposed network includes both 2D and 3D convolutional layers, which means that it requires fewer computational resources than 3D segmentation models while capturing more spatial information along the third dimension than 2D segmentation models. We then use a cascaded framework to increase the accuracy of the segmentation results. We evaluate our network on the NIH pancreas dataset and measure segmentation accuracy by the Dice similarity coefficient (DSC). Experimental results demonstrate better performance compared with state-of-the-art methods.
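The abstract does not give the layer layout of the 2.5D network, so the following is only an illustration of the general idea: a light 3D stem fuses a few adjacent CT slices, after which the depth axis is collapsed and cheaper 2D convolutions take over. The module name, channel counts and kernel sizes are assumptions, not the authors' architecture.

```python
import torch
import torch.nn as nn

class Hybrid25DStem(nn.Module):
    """Illustrative 2.5D stem: 3D convolution over adjacent slices, then 2D processing."""

    def __init__(self, out_channels: int = 32):
        super().__init__()
        self.conv3d = nn.Conv3d(1, out_channels, kernel_size=3, padding=1)
        self.conv2d = nn.Conv2d(out_channels, out_channels, kernel_size=3, padding=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (N, 1, D, H, W) with D adjacent slices around the slice of interest
        f = torch.relu(self.conv3d(x))      # capture inter-slice context in 3D
        f = f.mean(dim=2)                   # collapse the depth axis -> (N, C, H, W)
        return torch.relu(self.conv2d(f))   # continue with 2D convolutions
```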


Author(s):  
Jorge F. Lazo ◽  
Aldo Marzullo ◽  
Sara Moccia ◽  
Michele Catellani ◽  
Benoit Rosa ◽  
...  

Abstract Purpose Ureteroscopy is an efficient endoscopic minimally invasive technique for the diagnosis and treatment of upper tract urothelial carcinoma. During ureteroscopy, the automatic segmentation of the hollow lumen is of primary importance, since it indicates the path that the endoscope should follow. In order to obtain an accurate segmentation of the hollow lumen, this paper presents an automatic method based on convolutional neural networks (CNNs). Methods The proposed method is based on an ensemble of 4 parallel CNNs that simultaneously process single- and multi-frame information. Two architectures are taken as core models, namely a U-Net based on residual blocks (m1) and Mask-RCNN (m2), which are fed with single still frames I(t). The other two models (M1, M2) are modifications of the former ones, consisting of the addition of a stage that uses 3D convolutions to process temporal information. M1 and M2 are fed with triplets of frames (I(t-1), I(t), I(t+1)) to produce the segmentation for I(t). Results The proposed method was evaluated on a custom dataset of 11 videos (2673 frames) collected and manually annotated from 6 patients. We obtain a Dice similarity coefficient of 0.80, outperforming previous state-of-the-art methods. Conclusion The obtained results show that spatio-temporal information can be effectively exploited by the ensemble model to improve hollow lumen segmentation in ureteroscopic images. The method is also effective in the presence of poor visibility, occasional bleeding, or specular reflections.
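As a rough illustration of the temporal stage described in the Methods, the sketch below stacks the triplet (I(t-1), I(t), I(t+1)) along a depth axis and collapses it with a 3D convolution so that a 2D segmentation head can produce the mask for I(t). Channel counts and the fusion strategy are assumptions for illustration only.

```python
import torch
import torch.nn as nn

class TemporalFusion(nn.Module):
    """Fuses a frame triplet with 3D convolutions into a 2D feature map for the central frame."""

    def __init__(self, in_channels: int = 3, out_channels: int = 16):
        super().__init__()
        # kernel depth 3 with no depth padding collapses the 3-frame axis to 1
        self.temporal = nn.Conv3d(in_channels, out_channels,
                                  kernel_size=(3, 3, 3), padding=(0, 1, 1))

    def forward(self, triplet: torch.Tensor) -> torch.Tensor:
        # triplet: (N, C, 3, H, W) = (batch, image channels, time, height, width)
        features = torch.relu(self.temporal(triplet))   # (N, out_channels, 1, H, W)
        return features.squeeze(2)                      # (N, out_channels, H, W)
```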


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Junyoung Park ◽  
Jae Sung Lee ◽  
Dongkyu Oh ◽  
Hyun Gee Ryoo ◽  
Jeong Hee Han ◽  
...  

Abstract Quantitative single-photon emission computed tomography/computed tomography (SPECT/CT) using Tc-99m pertechnetate aids in evaluating salivary gland function. However, gland segmentation and quantitation of gland uptake are challenging. We developed a salivary gland SPECT/CT protocol with automated segmentation using a deep convolutional neural network (CNN). The protocol comprises SPECT/CT at 20 min, sialagogue stimulation, and SPECT at 40 min post-injection of Tc-99m pertechnetate (555 MBq). The 40-min SPECT was reconstructed using the 20-min CT after misregistration correction. Manual salivary gland segmentation for %injected dose (%ID) by human experts proved highly reproducible but took 15 min per scan. An automatic salivary gland segmentation method was developed using a modified 3D U-Net for end-to-end learning from the human experts (n = 333). The automatic segmentation performed comparably with human experts in voxel-wise comparison (mean Dice similarity coefficient of 0.81 for the parotid and 0.79 for the submandibular glands) and gland %ID correlation (R² = 0.93 parotid, R² = 0.95 submandibular), with an operating time of less than 1 min. The algorithm generated results comparable to the reference data. In conclusion, with the aid of a CNN, we developed a quantitative salivary gland SPECT/CT protocol feasible for clinical application. The method saves analysis time and manual effort while reducing patients' radiation exposure.


2022 ◽  
Vol 3 (2) ◽  
pp. 1-15
Author(s):  
Junqian Zhang ◽  
Yingming Sun ◽  
Hongen Liao ◽  
Jian Zhu ◽  
Yuan Zhang

Radiation-induced xerostomia, a major problem in radiation treatment of head and neck cancer, is mainly due to overdose irradiation injury to the parotid glands. Helical Tomotherapy-based megavoltage computed tomography (MVCT) imaging during the Tomotherapy treatment can be applied to monitor successive variations in the parotid glands. While manual segmentation is time consuming, laborious, and subjective, automatic segmentation is quite challenging due to the complicated anatomical environment of the head and neck as well as noise in MVCT images. In this article, we propose a localization-refinement scheme to segment the parotid gland in MVCT. After data pre-processing, we use a mask region-based convolutional neural network (Mask R-CNN) in the localization stage and design a modified U-Net for the subsequent fine segmentation stage. To the best of our knowledge, this study is a pioneering work of deep learning on MVCT segmentation. Comprehensive experiments based on different data distributions of head and neck MVCTs and different segmentation models have demonstrated the superiority of our approach in terms of accuracy, effectiveness, flexibility, and practicability. Our method can be adopted as a powerful tool for radiation-induced injury studies, where accurate organ segmentation is crucial.
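The localization-refinement scheme above amounts to cropping on a coarse detection and finely segmenting the crop. The sketch below only illustrates that data flow; `detect_parotid_box` (standing in for the Mask R-CNN stage) and `refine_unet` (standing in for the modified U-Net stage) are hypothetical callables, not the authors' API.

```python
import numpy as np

def segment_parotid(mvct_slice: np.ndarray, detect_parotid_box, refine_unet) -> np.ndarray:
    """Two-stage segmentation: coarse localization followed by fine segmentation of the crop."""
    y0, y1, x0, x1 = detect_parotid_box(mvct_slice)   # stage 1: localize the gland region
    patch = mvct_slice[y0:y1, x0:x1]
    patch_mask = refine_unet(patch)                   # stage 2: finely segment the crop
    full_mask = np.zeros(mvct_slice.shape, dtype=np.uint8)
    full_mask[y0:y1, x0:x1] = patch_mask              # paste back into slice coordinates
    return full_mask
```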


2021 ◽  
Author(s):  
Wing Keung Cheung ◽  
Robert Bell ◽  
Arjun Nair ◽  
Leon Menezies ◽  
Riyaz Patel ◽  
...  

Abstract A fully automatic two-dimensional U-Net model is proposed to segment the aorta and coronary arteries in computed tomography images. Two models are trained to segment two regions of interest: (1) the aorta and the coronary arteries, or (2) the coronary arteries alone. Our method achieves 91.20% and 88.80% Dice similarity coefficient on regions of interest 1 and 2, respectively. Compared with a semi-automatic segmentation method, our model performs better when segmenting the coronary arteries alone. The performance of the proposed method is comparable to existing published two-dimensional or three-dimensional deep learning models. Furthermore, the algorithmic and graphics processing unit memory efficiencies are maintained such that the model can be deployed within hospital computer networks where graphics processing units are typically not available.


2020 ◽  
Vol 9 (8) ◽  
pp. 2537
Author(s):  
Joan M. Nunez do Rio ◽  
Piyali Sen ◽  
Rajna Rasheed ◽  
Akanksha Bagchi ◽  
Luke Nicholson ◽  
...  

Reliable outcome measures are required for clinical trials investigating novel agents for preventing progression of capillary non-perfusion (CNP) in retinal vascular diseases. Currently, quantification of the topographical distribution of CNP on ultrawide field fluorescein angiography (UWF-FA) by retinal experts is subjective and lacks standardisation. A U-Net-style network was trained to extract a dense segmentation of CNP from a newly created dataset of 75 UWF-FA images. A subset of 20 images was also segmented by a second expert grader for inter-grader reliability evaluation. Further, a circular grid centred on the FAZ was used to provide standardised analysis of CNP distribution. The model for dense segmentation was five-fold cross-validated, achieving an area under the receiver operating characteristic curve of 0.82 (0.03) and an area under the precision-recall curve of 0.73 (0.05). Inter-grader assessment on the 20-image subset achieved: precision 59.34 (10.92), recall 76.99 (12.5), and Dice similarity coefficient (DSC) 65.51 (4.91); the centred operating point of the automated model reached: precision 64.41 (13.66), recall 70.02 (16.2), and DSC 66.09 (13.32). Agreement of CNP grid assessment reached: Kappa 0.55 (0.03), perfused intraclass correlation (ICC) 0.89 (0.77, 0.93), non-perfused ICC 0.86 (0.73, 0.92); inter-grader agreement of CNP grid assessment values was Kappa 0.43 (0.03), perfused ICC 0.70 (0.48, 0.83), non-perfused ICC 0.71 (0.48, 0.83). Automated dense segmentation of CNP in UWF-FA images achieves performance levels comparable to inter-grader agreement values. A grid placed on the deep learning-based automatic segmentation of CNP generates a reliable and quantifiable measurement of CNP, overcoming the subjectivity of human graders.


2019 ◽  
Vol 2019 ◽  
pp. 1-9 ◽  
Author(s):  
Xiaofang Gou ◽  
Yuming Rao ◽  
Xiuxia Feng ◽  
Zhaoqiang Yun ◽  
Wei Yang

Automatic segmentation of the ulna and radius (UR) in forearm radiographs is a necessary step for single X-ray absorptiometry bone mineral density measurement and for the diagnosis of osteoporosis. Accurate and robust segmentation of the UR is difficult, given the variation in forearms between patients and the nonuniform intensity in forearm radiographs. In this work, we proposed a practical automatic UR segmentation method that uses the dynamic programming (DP) algorithm to trace UR contours. Four seed points along the four UR diaphysis edges are automatically located in the preprocessed radiographs. Then, minimum-cost paths in a cost map are traced from the seed points with the DP algorithm as UR edges and merged into the UR contours. The proposed method was quantitatively evaluated on 37 forearm radiographs with manual segmentation results, including 22 normal-exposure and 15 low-exposure radiographs. The average Dice similarity coefficient of our method reached 0.945. The average mean absolute distance between the contours extracted by our method and those of a radiologist is only 5.04 pixels. The segmentation performance of our method did not differ significantly between the normal- and low-exposure radiographs. Our method was also validated on 105 forearm radiographs acquired under various imaging conditions from several hospitals. The results demonstrated that our method is fairly robust for forearm radiographs of various qualities.
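The core of the method is tracing minimum-cost paths through a cost map from automatically placed seed points. The sketch below shows a generic dynamic-programming trace with a 3-neighbour transition per row; the cost map definition and the neighbourhood are common choices and not necessarily those of the paper.

```python
import numpy as np

def trace_min_cost_path(cost: np.ndarray, seed_col: int) -> np.ndarray:
    """Trace a minimum-cost top-to-bottom path through a (H, W) cost map,
    starting from a seed point in the first row. Returns one column per row."""
    h, w = cost.shape
    acc = np.full((h, w), np.inf)
    back = np.zeros((h, w), dtype=int)
    acc[0, seed_col] = cost[0, seed_col]
    for r in range(1, h):
        for c in range(w):
            lo, hi = max(0, c - 1), min(w, c + 2)      # left, straight, right transitions
            prev = int(np.argmin(acc[r - 1, lo:hi])) + lo
            acc[r, c] = cost[r, c] + acc[r - 1, prev]
            back[r, c] = prev
    path = np.zeros(h, dtype=int)
    path[-1] = int(np.argmin(acc[-1]))                 # cheapest cell in the last row
    for r in range(h - 2, -1, -1):
        path[r] = back[r + 1, path[r + 1]]             # backtrack toward the seed row
    return path
```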


2020 ◽  
Vol 20 (S14) ◽  
Author(s):  
Qingfeng Wang ◽  
Qiyu Liu ◽  
Guoting Luo ◽  
Zhiqin Liu ◽  
Jun Huang ◽  
...  

Abstract Background Pneumothorax (PTX) may cause a life-threatening medical emergency with cardio-respiratory collapse that requires immediate intervention and rapid treatment. The screening and diagnosis of pneumothorax usually rely on chest radiographs. However, pneumothoraces in chest X-rays may be very subtle, highly variable in shape, and overlapped with the ribs or clavicles, making them difficult to identify. Our objective was to create a large chest X-ray dataset for pneumothorax with pixel-level annotation and to train an automatic segmentation and diagnosis framework to assist radiologists in identifying pneumothorax accurately and in a timely manner. Methods In this study, an end-to-end deep learning framework is proposed for the segmentation and diagnosis of pneumothorax on chest X-rays, which incorporates a fully convolutional DenseNet (FC-DenseNet) with a multi-scale module and spatial and channel squeeze-and-excitation (scSE) modules. To further improve the precision of boundary segmentation, we propose a spatially weighted cross-entropy loss function to penalize the target, background and contour pixels with different weights. Results This retrospective study was conducted on a total of 11,051 eligible front-view chest X-ray images (5566 cases of PTX and 5485 cases of non-PTX). The experimental results show that the proposed algorithm outperforms five state-of-the-art segmentation algorithms in terms of mean pixel-wise accuracy (MPA) with 0.93 ± 0.13 and Dice similarity coefficient (DSC) with 0.92 ± 0.14, and achieves competitive performance on diagnostic accuracy with 93.45% and F1-score with 92.97%. Conclusion This framework provides substantial improvements for the automatic segmentation and diagnosis of pneumothorax and is expected to become a clinical tool helping radiologists identify pneumothorax on chest X-rays.
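The spatially weighted cross-entropy described above assigns different penalties to target, background and contour pixels. A minimal binary sketch with per-pixel weight maps is given below; the weight values and the way contour pixels are obtained (e.g., by eroding the mask) are assumptions, since the abstract does not specify them.

```python
import torch
import torch.nn.functional as F

def spatial_weighted_bce(logits: torch.Tensor, target: torch.Tensor, contour: torch.Tensor,
                         w_target: float = 3.0, w_contour: float = 5.0,
                         w_background: float = 1.0) -> torch.Tensor:
    """Binary cross-entropy weighted per pixel for target, contour and background regions.

    logits, target, contour: (N, 1, H, W); target and contour are binary masks.
    The weight values are illustrative, not taken from the paper.
    """
    tgt = target.float()
    weights = torch.full_like(tgt, w_background)
    weights = torch.where(tgt > 0, torch.full_like(weights, w_target), weights)
    weights = torch.where(contour.float() > 0, torch.full_like(weights, w_contour), weights)
    per_pixel = F.binary_cross_entropy_with_logits(logits, tgt, reduction="none")
    return (weights * per_pixel).mean()
```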


2019 ◽  
Vol 2019 ◽  
pp. 1-7 ◽  
Author(s):  
Chen Huang ◽  
Junru Tian ◽  
Chenglang Yuan ◽  
Ping Zeng ◽  
Xueping He ◽  
...  

Objective. Deep vein thrombosis (DVT) is a disease caused by abnormal blood clots in deep veins. Accurate segmentation of DVT is important to facilitate diagnosis and treatment. In the current study, we proposed a fully automatic method for DVT delineation based on deep learning (DL) and contrast-enhanced magnetic resonance imaging (CE-MRI). Methods. 58 patients (25 males; 28–96 years old) with newly diagnosed lower extremity DVT were recruited. CE-MRI was acquired on a 1.5 T system. The ground truth (GT) of DVT lesions was manually contoured. A DL network with an encoder-decoder architecture was designed for DVT segmentation, and an 8-fold cross-validation strategy was applied for training and testing. The Dice similarity coefficient (DSC) was adopted to evaluate the network's performance. Results. It took about 1.5 s for our CNN model to segment one MRI slice. The mean DSC over the 58 patients was 0.74 ± 0.17 and the median DSC was 0.79. Compared with other DL models, our CNN model achieved better performance in DVT segmentation (0.74 ± 0.17 versus 0.66 ± 0.15, 0.55 ± 0.20, and 0.57 ± 0.22). Conclusion. Our proposed DL method was effective and fast for fully automatic segmentation of lower extremity DVT.

