Segmentation of Intracerebral Hemorrhage based on Improved U-Net

Author(s):  
Cao Guogang ◽  
Wang Yijie ◽  
Zhu Xinyu ◽  
Li Mengxue ◽  
Wang Xiaoyan ◽  
...  

Automatic medical image segmentation effectively aids stroke diagnosis and treatment. This article proposes an improved U-Net neural network for the auxiliary diagnosis of intracerebral hemorrhage, which automatically segments hemorrhage from brain CT images. The pixels of brain CT images are first clustered into four classes (gray matter, white matter, cerebrospinal fluid, and hemorrhage) by fuzzy c-means (FCM) clustering; the skull is then removed by morphological operations; finally, an improved U-Net neural network model automatically segments hemorrhages from the brain CT images. Experimental results showed that binary cross-entropy was a better objective function for the proposed method than Dice loss or focal loss. Its Dice similarity coefficient reached 0.860 ± 0.031, better than the white-matter FCM clustering and multi-path context generative adversarial network methods. The improved method substantially enhanced the accuracy of intracerebral hemorrhage segmentation.
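The evaluation metric and preferred objective named in this abstract can be illustrated with a minimal NumPy sketch (hypothetical helper names, not the authors' implementation):

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """Dice similarity coefficient between two binary masks."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

def binary_cross_entropy(prob, target, eps=1e-7):
    """Pixel-wise binary cross-entropy between predicted probabilities and a 0/1 mask."""
    prob = np.clip(prob, eps, 1.0 - eps)
    return float(-np.mean(target * np.log(prob) + (1.0 - target) * np.log(1.0 - prob)))
```

A Dice score of 1.0 means perfect overlap; binary cross-entropy penalizes confident wrong pixel probabilities.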

2021 ◽  
Vol 11 (10) ◽  
pp. 2618-2625
Author(s):  
R. T. Subhalakshmi ◽  
S. Appavu Alias Balamurugan ◽  
S. Sasikala

In recent times, the COVID-19 epidemic has grown at an extreme pace while only an inadequate supply of rapid testing kits is available. Consequently, it is essential to develop automated techniques for COVID-19 detection that recognize the presence of the disease from radiological images. The most common symptoms of COVID-19 are sore throat, fever, and dry cough. Symptoms can progress to a severe form of pneumonia with serious complications. As medical imaging is not currently recommended in Canada for primary COVID-19 diagnosis, computer-aided diagnosis systems might aid in the early detection of COVID-19 abnormalities, help monitor disease progression, and potentially reduce mortality rates. In this approach, a deep learning design for feature extraction and classification is employed for automatic COVID-19 diagnosis from computed tomography (CT) images. The proposed model comprises three main stages: pre-processing, feature extraction, and classification. The design incorporates the fusion of deep features extracted by GoogLeNet models. Finally, a multi-scale recurrent neural network (RNN) classifier identifies the test CT images and assigns them to distinct class labels. The proposed model is validated on the open-source COVID-CT dataset, which comprises a total of 760 CT images. The experimental outcomes showed superior performance, with maximum sensitivity, specificity, and accuracy.
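The abstract describes deep-feature fusion only at a high level. One common fusion strategy is to L2-normalise each feature vector and concatenate them; the sketch below is an assumption on my part, not necessarily the authors' scheme:

```python
import numpy as np

def fuse_features(feat_a, feat_b):
    """Fuse two deep feature vectors by L2-normalising each and concatenating.

    Normalising first keeps one backbone's feature magnitudes from
    dominating the fused representation.
    """
    a = feat_a / (np.linalg.norm(feat_a) + 1e-12)
    b = feat_b / (np.linalg.norm(feat_b) + 1e-12)
    return np.concatenate([a, b])
```

The fused vector would then be fed to the downstream classifier.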


Author(s):  
Jialiang Jiang ◽  
Yong Luo ◽  
Feng Wang ◽  
Yuchuan Fu ◽  
Hang Yu ◽  
...  

Purpose: To evaluate the accuracy and dosimetric effects of auto-segmentation of the CTV for GO in CT images based on an FCN. Methods: An FCN-8s network architecture for auto-segmentation was built based on Caffe. CT images of 121 patients with GO who had received radiotherapy at the West China Hospital of Sichuan University were randomly selected for training and testing. Two methods were used to segment the CTV of GO: treating the two-part CTV as a whole anatomical region, or considering the two parts of the CTV as two independent regions. The Dice similarity coefficient (DSC) and Hausdorff distance (HD) were used as evaluation criteria. The auto-segmented contours were imported into the original treatment plan to analyze their dosimetric characteristics. Results: The similarity comparison between manual and auto-segmented contours showed an average DSC value of up to 0.83. The maximum HD values for segmenting the two parts of the CTV separately were slightly smaller than for treating the CTV as one label (8.23±2.80 vs. 9.03±2.78). The dosimetric comparison between manual and auto-segmented contours showed a significant difference (p<0.05), with a lack of dose for the auto-segmented CTV. Conclusion: Based on a deep learning architecture, the automatic segmentation model for small target areas performs the auto-contouring task well. Treating separate parts of one target as different anatomical regions helps improve auto-contouring quality. The dosimetric evaluation provides different perspectives for further exploration of automatic contouring tools.
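DSC and HD are standard contour-comparison metrics. A minimal sketch of the symmetric Hausdorff distance, with contours represented as (N, 2) point arrays and SciPy's `directed_hausdorff` doing the directed computation (a generic sketch, not the paper's exact evaluation code):

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def hausdorff_distance(points_a, points_b):
    """Symmetric Hausdorff distance between two contours.

    Takes the larger of the two directed distances, i.e. the worst-case
    gap between any point on one contour and the other contour.
    """
    d_ab = directed_hausdorff(points_a, points_b)[0]
    d_ba = directed_hausdorff(points_b, points_a)[0]
    return max(d_ab, d_ba)
```

A lower HD means the farthest-apart corresponding boundary points are closer, which is why the abstract treats the smaller 8.23 mm value as better.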


2015 ◽  
Vol 2015 ◽  
pp. 1-15 ◽  
Author(s):  
K. Gayathri Devi ◽  
R. Radhakrishnan

Purpose. Colon segmentation is an essential step in the development of computer-aided diagnosis systems based on computed tomography (CT) images. Detecting polyps lying on the colon wall is much needed in medical imaging for the diagnosis of colorectal cancer. Methods. The proposed work focuses on designing an efficient automatic colon segmentation algorithm from abdominal slices containing colons, partial volume effect, bowels, and lungs. The challenge lies in determining the exact colon in a slice affected by the partial volume effect. In this work, an adaptive thresholding technique is proposed for the segmentation of air pockets; a machine-learning-based cascade feed-forward neural network, combined with boundary detection algorithms, distinguishes lung segments from the fluids sedimented at the side wall of the colon; and bowels are rejected using a slice-difference removal method. The proposed neural network is trained with the Bayesian regularization algorithm to determine the partial volume effect. Results. Experiments conducted on CT database images yielded 98% accuracy and a minimal error rate. Conclusions. The main contribution of this work is the exploitation of a neural network algorithm for removing opacified fluid to attain the desired colon segmentation result.
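The abstract names adaptive thresholding for air-pocket segmentation without giving details. An iterative ISODATA-style threshold on Hounsfield units is one plausible reading (an assumption, not the paper's exact algorithm): air is far below soft tissue in HU, so the threshold converges between the two class means.

```python
import numpy as np

def segment_air(slice_hu, init=-500.0, tol=0.5, max_iter=100):
    """Iterative adaptive threshold separating air pockets from tissue.

    Repeatedly sets the threshold to the midpoint of the mean intensities
    below and above it, until the threshold stabilises.
    """
    t = init
    for _ in range(max_iter):
        below = slice_hu[slice_hu < t]
        above = slice_hu[slice_hu >= t]
        if below.size == 0 or above.size == 0:
            break
        new_t = 0.5 * (below.mean() + above.mean())
        if abs(new_t - t) < tol:
            t = new_t
            break
        t = new_t
    return slice_hu < t, t
```

On a slice with air near -1000 HU and soft tissue near 0-100 HU, the converged threshold lands well between the two populations.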


2019 ◽  
Author(s):  
Wei Wang ◽  
Mingang Wang ◽  
Xiaofen Wu ◽  
Xie Ding ◽  
Xuexiang Cao ◽  
...  

Abstract Background: Automatic and detailed segmentation of the prostate using magnetic resonance imaging (MRI) plays an essential role in prostate imaging diagnosis. However, the complexity of the prostate gland hampers accurate segmentation from other tissues. Thus, we propose the automatic prostate segmentation method SegDGAN, which is based on a classic generative adversarial network (GAN) model. Methods: The proposed method comprises a fully convolutional generator network of densely connected blocks and a critic network with multi-scale feature extraction. In these computations, the objective function is optimized using the mean absolute error and the Dice coefficient, improving the accuracy of segmentation results and their correspondence with the ground truth. The common and similar medical image segmentation networks U-Net, fully convolutional network (FCN), and SegAN were selected for qualitative and quantitative comparisons with SegDGAN using a 220-patient dataset and the publicly available PROMISE12 dataset. The commonly used segmentation evaluation metrics Dice similarity coefficient (DSC), volumetric overlap error (VOE), average surface distance (ASD), and Hausdorff distance (HD) were used to compare the accuracy of segmentation between these methods. Results: SegDGAN achieved the highest DSC value of 91.66%, the lowest VOE value of 23.47%, and the lowest ASD value of 0.46 mm with the clinical dataset. In addition, the highest DSC value of 88.69%, the lowest VOE value of 23.47%, the lowest ASD value of 0.83 mm, and the lowest HD value of 11.40 mm were achieved with the PROMISE12 dataset. Conclusions: Our experimental results show that the SegDGAN model outperforms other segmentation methods. Keywords: Automatic segmentation, Generative adversarial networks, Magnetic resonance imaging, Prostate
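The abstract states that SegDGAN's objective combines mean absolute error with the Dice coefficient. A hedged NumPy sketch of such a combined loss (the soft-Dice formulation and the equal weighting are assumptions, not details from the paper):

```python
import numpy as np

def dice_loss(prob, target, eps=1e-7):
    """Soft Dice loss: 1 minus the soft Dice overlap of probabilities and mask."""
    inter = (prob * target).sum()
    return 1.0 - (2.0 * inter + eps) / (prob.sum() + target.sum() + eps)

def combined_loss(prob, target, weight=1.0):
    """Mean absolute error plus a weighted soft Dice term."""
    mae = np.abs(prob - target).mean()
    return mae + weight * dice_loss(prob, target)
```

The MAE term penalizes per-pixel deviation while the Dice term directly rewards region overlap, which is why such combinations tend to track the DSC metric used for evaluation.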


2021 ◽  
Vol 20 (1) ◽  
Author(s):  
Hongmei Yuan ◽  
Minglei Yang ◽  
Shan Qian ◽  
Wenxin Wang ◽  
Xiaotian Jia ◽  
...  

Abstract Background Image registration is an essential step in the automated interpretation of brain computed tomography (CT) images of patients with acute cerebrovascular disease (ACVD). However, performing brain CT registration accurately and rapidly remains highly challenging due to large intersubject anatomical variations, the low resolution of soft tissues, and heavy computation costs. To this end, HSCN-Net, a hybrid supervised convolutional neural network, was developed for precise and fast brain CT registration. Method HSCN-Net generated synthetic deformation fields using a simulator as one source of supervision for each reference–moving image pair, addressing the lack of gold standards. Furthermore, the simulator was designed to generate multiscale affine and elastic deformation fields to overcome the registration challenge posed by large intersubject anatomical deformation. Finally, HSCN-Net adopted a hybrid loss function composed of deformation-field and image-similarity terms to improve registration accuracy and generalization capability. In this work, 101 CT images of patients were collected for model construction (57), evaluation (14), and testing (30). HSCN-Net was compared with the classical Demons and VoxelMorph models. Model performance was assessed comprehensively by qualitative analysis through visual evaluation of critical brain tissues, and by quantitative analysis using the endpoint error (EPE) between the predicted and gold-standard sparse deformation vectors, the image normalized mutual information (NMI), and the Dice coefficient of the middle cerebral artery (MCA) blood-supply area. Results HSCN-Net and Demons showed better visual spatial matching than VoxelMorph, and HSCN-Net handled smooth and large intersubject deformations better than Demons.
The mean EPE of HSCN-Net (3.29 mm) was less than that of Demons (3.47 mm) and VoxelMorph (5.12 mm); the mean Dice of HSCN-Net was 0.96, higher than that of Demons (0.90) and VoxelMorph (0.87); and the mean NMI of HSCN-Net (0.83) was slightly lower than that of Demons (0.84) but higher than that of VoxelMorph (0.81). Moreover, the mean registration time of HSCN-Net (17.86 s) was shorter than that of VoxelMorph (18.53 s) and Demons (147.21 s). Conclusion The proposed HSCN-Net achieves accurate and rapid intersubject brain CT registration.
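The endpoint error used above is the average Euclidean distance between predicted and gold-standard deformation vectors; a minimal NumPy sketch:

```python
import numpy as np

def endpoint_error(pred_vectors, gt_vectors):
    """Mean endpoint error (EPE) over (N, D) arrays of deformation vectors.

    For each landmark, takes the Euclidean norm of the prediction error,
    then averages over all landmarks.
    """
    diff = np.asarray(pred_vectors) - np.asarray(gt_vectors)
    return float(np.linalg.norm(diff, axis=-1).mean())
```

With vectors in millimetres, this directly yields values comparable to the 3.29 mm / 3.47 mm / 5.12 mm figures reported above.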

