Chest X-ray pneumothorax segmentation using U-Net with EfficientNet and ResNet architectures

2021, Vol 7, pp. e607
Author(s): Ayat Abedalla, Malak Abdullah, Mahmoud Al-Ayyoub, Elhadj Benkhelifa

Medical imaging refers to visualization techniques that provide valuable information about the internal structures of the human body for clinical applications, diagnosis, treatment, and scientific research. Segmentation is one of the primary methods for analyzing and processing medical images; it helps doctors diagnose accurately by providing detailed information on the body region of interest. However, segmenting medical images faces several challenges: it requires trained medical experts and is time-consuming and error-prone. Thus, an automatic medical image segmentation system appears necessary. Deep learning algorithms have recently shown outstanding performance on segmentation tasks, especially semantic segmentation networks that provide pixel-level image understanding. Since the introduction of the first fully convolutional network (FCN) for semantic image segmentation, several segmentation networks have been proposed on its basis. One of the state-of-the-art convolutional networks in the medical image field is U-Net. This paper presents a novel end-to-end semantic segmentation model for medical images, named Ens4B-UNet, that ensembles four U-Net architectures with pre-trained backbone networks. Ens4B-UNet builds on U-Net's success with several significant improvements, adapting powerful and robust convolutional neural networks (CNNs) as backbones for the U-Net encoders and using nearest-neighbor up-sampling in the decoders. Ens4B-UNet is designed as a weighted-average ensemble of four encoder-decoder segmentation models. The backbone networks of all ensembled models are pre-trained on the ImageNet dataset to exploit the benefit of transfer learning. To improve our models, we apply several training and prediction techniques, including stochastic weight averaging (SWA), data augmentation, test-time augmentation (TTA), and different types of optimal thresholds. We evaluate and test our models on the 2019 Pneumothorax Challenge dataset, which contains 12,047 training images with 12,954 masks and 3,205 test images. Our proposed segmentation network achieves a 0.8608 mean Dice similarity coefficient (DSC) on the test set, which places it among the top 1% of systems in the Kaggle competition.
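As a rough illustration of the core ensembling idea, the sketch below combines four U-Nets with ImageNet-pretrained EfficientNet and ResNet encoders through a weighted average of their sigmoid outputs, with simple horizontal-flip test-time augmentation. The library choice (segmentation_models_pytorch), the ensemble weights, and the 0.5 threshold are illustrative assumptions, not the authors' exact configuration.

```python
# Minimal sketch of a weighted-average ensemble of U-Nets with pre-trained
# backbones, plus horizontal-flip test-time augmentation (TTA).
import torch
import segmentation_models_pytorch as smp

# Four U-Net variants with ImageNet-pretrained encoders (EfficientNet / ResNet).
backbones = ["efficientnet-b0", "efficientnet-b4", "resnet34", "resnet50"]
models = [
    smp.Unet(encoder_name=b, encoder_weights="imagenet", in_channels=1, classes=1)
    for b in backbones
]

# Hypothetical ensemble weights (e.g., proportional to validation Dice).
weights = torch.tensor([0.20, 0.35, 0.20, 0.25])

@torch.no_grad()
def ensemble_predict(x):
    """Weighted average of sigmoid outputs, averaged over flip TTA."""
    probs = torch.zeros_like(x[:, :1])
    for w, m in zip(weights, models):
        m.eval()
        p = torch.sigmoid(m(x))                          # original view
        p_flip = torch.sigmoid(m(torch.flip(x, [3])))    # horizontally flipped view
        probs += w * 0.5 * (p + torch.flip(p_flip, [3]))
    return probs

# Binarize with a threshold tuned on the validation set (0.5 is a placeholder).
x = torch.randn(2, 1, 256, 256)          # dummy chest X-ray batch
mask = (ensemble_predict(x) > 0.5).float()
```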

Author(s): Lars J. Isaksson, Paul Summers, Sara Raimondi, Sara Gandini, Abhir Bhalerao, ...

Abstract Researchers address the generalization problem of deep image processing networks mainly through extensive use of data augmentation techniques such as random flips, rotations, and deformations. A data augmentation technique called mixup, which constructs virtual training samples from convex combinations of inputs, was recently proposed for deep classification networks. The algorithm contributed to increased classification performance on a variety of datasets, but so far has not been evaluated for image segmentation tasks. In this paper, we tested whether the mixup algorithm can improve the generalization performance of deep segmentation networks for medical image data. We trained a standard U-net architecture to segment the prostate in 100 T2-weighted 3D magnetic resonance images from prostate cancer patients, and compared the results with and without mixup in terms of the Dice similarity coefficient and the mean surface distance from a reference segmentation made by an experienced radiologist. Our results suggest that mixup offers a statistically significant boost in performance compared to non-mixup training, leading to up to a 1.9% increase in Dice and a 10.9% decrease in surface distance. The mixup algorithm may thus offer an important aid for medical image segmentation applications, which are typically limited by severe data scarcity.
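For concreteness, here is a minimal sketch of how mixup extends from classification to segmentation: the images and their label maps are blended with the same Beta-distributed coefficient, producing soft target masks. Tensor shapes, alpha, and the training-loop context are illustrative assumptions.

```python
# Minimal sketch of mixup for segmentation: images and label maps are mixed
# with the same Beta-distributed coefficient, yielding soft target masks.
import torch

def mixup_segmentation(images, masks, alpha=0.2):
    """images: (B, C, H, W); masks: (B, K, H, W) one-hot or (B, 1, H, W) binary."""
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    perm = torch.randperm(images.size(0))
    mixed_images = lam * images + (1.0 - lam) * images[perm]
    mixed_masks = lam * masks + (1.0 - lam) * masks[perm]   # soft targets
    return mixed_images, mixed_masks

# Usage inside a training loop (the loss must accept soft targets,
# e.g. soft Dice or cross-entropy with probabilistic labels):
imgs = torch.randn(4, 1, 128, 128)
msks = torch.randint(0, 2, (4, 1, 128, 128)).float()
x, y = mixup_segmentation(imgs, msks, alpha=0.2)
```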


2019, Vol 2019, pp. 1-10
Author(s): Lin Teng, Hang Li, Shahid Karim

Medical image segmentation is one of the most active topics in image processing. Precise segmentation of medical images is vital for follow-up treatment. However, low gray-level contrast and blurred tissue boundaries are common in medical images, which limits segmentation accuracy. Moreover, deep learning methods require large numbers of training samples, which makes training time-consuming. Therefore, we propose a novel model for medical image segmentation based on a deep multiscale convolutional neural network (CNN) in this article. First, we extract the region of interest from the raw medical images. Then, data augmentation is applied to enlarge the training set. Our proposed method contains three components: an encoder, a U-net, and a decoder. The encoder is mainly responsible for feature extraction from 2D image slices. The U-net concatenates the features of each encoder block with the corresponding deconvolution features in the decoder at different scales. The decoder is mainly responsible for upsampling the feature maps produced at each stage. Simulation results show that the new method boosts segmentation accuracy and is more robust than other segmentation methods.
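The encoder/U-net/decoder coupling described above follows the standard U-Net skip pattern: encoder features at each scale are concatenated with the upsampled (deconvolution) features in the decoder. The toy sketch below, with illustrative channel sizes and depth, shows that pattern; it is not the authors' exact multiscale network.

```python
# Toy U-Net-style network: encoder features at each scale are concatenated
# with transposed-convolution (deconvolution) features in the decoder.
import torch
import torch.nn as nn

def conv_block(c_in, c_out):
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(c_out, c_out, 3, padding=1), nn.ReLU(inplace=True))

class TinyUNet(nn.Module):
    def __init__(self, in_ch=1, n_classes=2):
        super().__init__()
        self.enc1 = conv_block(in_ch, 32)
        self.enc2 = conv_block(32, 64)
        self.pool = nn.MaxPool2d(2)
        self.bottleneck = conv_block(64, 128)
        self.up2 = nn.ConvTranspose2d(128, 64, 2, stride=2)   # deconvolution
        self.dec2 = conv_block(128, 64)                        # 64 (skip) + 64 (up)
        self.up1 = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.dec1 = conv_block(64, 32)
        self.head = nn.Conv2d(32, n_classes, 1)

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        b = self.bottleneck(self.pool(e2))
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))    # multiscale fusion
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        return self.head(d1)

logits = TinyUNet()(torch.randn(1, 1, 64, 64))   # -> (1, 2, 64, 64)
```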


2021, Vol 68, pp. 101907
Author(s): Neerav Karani, Ertunc Erdil, Krishna Chaitanya, Ender Konukoglu

Symmetry, 2020, Vol 12 (1), pp. 145
Author(s): Zheng Lu, Dali Chen

Weakly supervised and semi-supervised semantic segmentation are widely used in computer vision because they require no ground truth, or only a small number of ground-truth annotations, for training. Recently, some works have trained models with pseudo ground truths generated by a classification network; however, this approach is not well suited to medical image segmentation. To tackle this problem, we use the GrabCut method to generate pseudo ground truths, train a modified U-net model on them, and finally fine-tune the model with a small number of real ground truths. Extensive experiments on the challenging RIM-ONE and DRISHTI-GS benchmarks strongly demonstrate the effectiveness of our algorithm. We obtain state-of-the-art results on the RIM-ONE and DRISHTI-GS databases.
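As a minimal sketch of the pseudo-ground-truth step, the snippet below runs OpenCV's GrabCut from a rough bounding box and keeps the (probable) foreground pixels as the pseudo mask; the box coordinates, iteration count, and file name are illustrative assumptions.

```python
# Minimal sketch: generate a pseudo ground truth with OpenCV's GrabCut
# from a rough bounding box around the structure of interest.
import cv2
import numpy as np

def grabcut_pseudo_mask(image_bgr, rect, iters=5):
    """image_bgr: HxWx3 uint8; rect: (x, y, w, h) rough region of interest."""
    mask = np.zeros(image_bgr.shape[:2], np.uint8)
    bgd_model = np.zeros((1, 65), np.float64)
    fgd_model = np.zeros((1, 65), np.float64)
    cv2.grabCut(image_bgr, mask, rect, bgd_model, fgd_model,
                iters, cv2.GC_INIT_WITH_RECT)
    # Pixels labelled definite/probable foreground become the pseudo label.
    pseudo = np.where((mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD), 1, 0)
    return pseudo.astype(np.uint8)

# These pseudo masks would then supervise U-Net pre-training, followed by
# fine-tuning on the small set of real ground truths.
img = cv2.imread("fundus_example.png")              # hypothetical fundus image
mask = grabcut_pseudo_mask(img, rect=(50, 50, 200, 200))
```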


Author(s): S. DivyaMeena, M. Mangaleswaran

Medical images have had a great impact on medicine, diagnosis, and treatment. The most important part of image processing is image segmentation. Medical image segmentation is the automatic or semi-automatic detection of boundaries within a 2D or 3D image. In the medical field, image segmentation is one of the vital steps in image identification and object recognition. Image segmentation is a method in which an image is partitioned into smaller, meaningful regions. If the input MRI image is segmented, identifying the tumour-affected region becomes easier for physicians. In recent years, many algorithms have been proposed for image segmentation. This paper analyses various segmentation algorithms for medical images and compares existing algorithms along with the performance measure of each.


2019, Vol 9 (8), pp. 1705-1716
Author(s): Shidu Dong, Zhi Liu, Huaqiu Wang, Yihao Zhang, Shaoguo Cui

To exploit three-dimensional (3D) context information and improve 3D medical image semantic segmentation, we propose a separate 3D (S3D) convolutional neural network (CNN) architecture. First, a two-dimensional (2D) CNN is used to extract the 2D features of each slice in the xy-plane of 3D medical images. Second, one-dimensional (1D) features reassembled from the 2D features in the z-axis are input into a 1D-CNN and are then classified feature-wise. Analysis shows that S3D-CNN has lower time complexity, fewer parameters and lower memory requirements than other 3D-CNNs with a similar structure. As an example, we extend the deep convolutional encoder–decoder architecture (SegNet) to S3D-SegNet for brain tumor image segmentation. We also propose a method based on priority queues and the Dice loss function to address class imbalance in medical image segmentation. The experimental results show the following: (1) S3D-SegNet extended from SegNet can improve brain tumor image segmentation. (2) The proposed imbalance accommodation method can increase the speed of training convergence and reduce the negative impact of the imbalance. (3) S3D-SegNet with the proposed imbalance accommodation method offers performance comparable to that of some state-of-the-art 3D-CNNs and experts in brain tumor image segmentation.
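A minimal sketch of the separate-3D idea follows: a 2D CNN computes slice-wise features, which are then regrouped into per-location 1D sequences along the z-axis and classified by a 1D CNN. Channel sizes and layer counts are illustrative assumptions, not the S3D-SegNet configuration.

```python
# Sketch of the "separate 3D" pattern: 2D convolutions over each xy-slice,
# then 1D convolutions over the resulting feature vectors along z.
import torch
import torch.nn as nn

class S3DSketch(nn.Module):
    def __init__(self, in_ch=1, feat=16, n_classes=2):
        super().__init__()
        self.cnn2d = nn.Sequential(                      # slice-wise 2D features
            nn.Conv2d(in_ch, feat, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(feat, feat, 3, padding=1), nn.ReLU(inplace=True))
        self.cnn1d = nn.Sequential(                      # feature-wise 1D CNN along z
            nn.Conv1d(feat, feat, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv1d(feat, n_classes, 1))

    def forward(self, x):                                # x: (B, C, D, H, W)
        B, C, D, H, W = x.shape
        feats = self.cnn2d(x.permute(0, 2, 1, 3, 4).reshape(B * D, C, H, W))
        F = feats.shape[1]
        # Regroup as 1D sequences along z for every (h, w) location.
        z_seq = (feats.reshape(B, D, F, H, W)
                      .permute(0, 3, 4, 2, 1)            # (B, H, W, F, D)
                      .reshape(B * H * W, F, D))
        logits = self.cnn1d(z_seq)                       # (B*H*W, classes, D)
        return (logits.reshape(B, H, W, -1, D)
                      .permute(0, 3, 4, 1, 2))           # (B, classes, D, H, W)

out = S3DSketch()(torch.randn(1, 1, 8, 32, 32))          # -> (1, 2, 8, 32, 32)
```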


2021, Vol 21 (1)
Author(s): Dominik Müller, Frank Kramer

Abstract Background The increased availability and usage of modern medical imaging has induced a strong need for automatic medical image segmentation. Still, current image segmentation platforms do not provide the required functionality for the straightforward setup of medical image segmentation pipelines. Already implemented pipelines are commonly standalone software, optimized for a specific public data set. Therefore, this paper introduces the open-source Python library MIScnn. Implementation The aim of MIScnn is to provide an intuitive API allowing fast building of medical image segmentation pipelines, including data I/O, preprocessing, data augmentation, patch-wise analysis, metrics, a library of state-of-the-art deep learning models, and model utilization such as training, prediction, and fully automatic evaluation (e.g. cross-validation). Similarly, high configurability and multiple open interfaces allow full pipeline customization. Results Running a cross-validation with MIScnn on the Kidney Tumor Segmentation Challenge 2019 data set (multi-class semantic segmentation with 300 CT scans) resulted in a powerful predictor based on the standard 3D U-Net model. Conclusions With this experiment, we could show that the MIScnn framework enables researchers to rapidly set up a complete medical image segmentation pipeline using just a few lines of code. The source code for MIScnn is available in the Git repository: https://github.com/frankkramer-lab/MIScnn.
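For orientation, the sketch below follows the style of the example in the MIScnn README (NIfTI data I/O, a patch-wise preprocessor, and a 3D U-Net wrapped in a Neural_Network object). Module paths, class names, and arguments are recalled from the documentation and should be verified against the repository; treat this as an assumption-laden outline rather than the definitive API.

```python
# Assumption-laden outline of a MIScnn pipeline in the spirit of the README;
# verify module paths and arguments against the repository before use.
import miscnn
from miscnn.data_loading.interfaces import NIFTI_interface

# Data I/O for NIfTI CT volumes with three classes (background, kidney, tumor).
interface = NIFTI_interface(pattern="case_00[0-9]*", channels=1, classes=3)
data_io = miscnn.Data_IO(interface, "data/")          # hypothetical data path

# Preprocessor: batching, normalization and patch-wise analysis.
pp = miscnn.Preprocessor(data_io, batch_size=2, analysis="patchwise-crop",
                         patch_shape=(80, 160, 160))

# Standard 3D U-Net wrapped in MIScnn's Neural_Network abstraction.
model = miscnn.Neural_Network(preprocessor=pp)

# Train on one split of the samples and predict on the rest.
samples = data_io.get_indiceslist()
model.train(samples[:240], epochs=50)
model.predict(samples[240:])
```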


Author(s): Maria Papadogiorgaki, Vasileios Mezaris, Yiannis Chatzizisis

Images have constituted an essential data source in medicine in recent decades. Medical images derived from diagnostic technologies (e.g., X-ray, ultrasound, computed tomography, magnetic resonance, nuclear imaging) are used to improve existing diagnostic systems for clinical purposes, but also to facilitate medical research. Hence, medical image processing techniques are constantly investigated and evolved. Medical image segmentation is the primary stage of the visualization and clinical analysis of human tissues. It refers to the segmentation of known anatomic structures from medical images. Structures of interest include organs or parts thereof, such as cardiac ventricles or kidneys, abnormalities such as tumors and cysts, as well as other structures such as bones, vessels, and brain structures. The overall objective of such methods is referred to as computer-aided diagnosis; in other words, they are used for assisting doctors in evaluating medical imagery or in recognizing abnormal findings in a medical image. In contrast to generic segmentation methods, techniques used for medical image segmentation are often application-specific; as such, they can make use of prior knowledge about the particular objects of interest and other expected or possible structures in the image. This has led to the development of a wide range of segmentation methods addressing specific problems in medical applications. In the remainder of this article, the analysis of medical visual information generated by three different medical imaging processes is discussed in detail: Magnetic Resonance Imaging (MRI), Mammography, and Intravascular Ultrasound (IVUS). Clearly, in addition to these imaging processes and the techniques for their analysis discussed here, numerous other algorithms exist for applications of segmentation to specialized medical imagery interpretation.


2018, Vol 28 (3), pp. 220
Author(s): Shatha J. Mohammed

Segmentation performance is subject to suitable initialization and the best configuration of supervisory parameters. In medical imaging, segmentation is especially important when diagnosis is difficult because the images are not properly illuminated. This paper proposes segmentation of brain tumour MRI images based on spatial fuzzy clustering and a level set algorithm. Performance evaluation of the proposed algorithm on brain tumour images confirms its effectiveness for medical image segmentation, with the brain tumour detected properly.
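As a rough sketch of the two-stage idea, the code below clusters pixel intensities with plain fuzzy c-means (the spatial constraint is omitted), takes the brightest cluster as the initial tumour region, and refines it with scikit-image's morphological Chan-Vese active contour as a stand-in for the level set step. Cluster count, iterations, and the stand-in refinement are assumptions.

```python
# Sketch: fuzzy c-means initialization followed by an active-contour
# (level-set-style) refinement of the brightest cluster.
import numpy as np
from skimage.segmentation import morphological_chan_vese

def fcm(intensities, n_clusters=3, m=2.0, iters=50):
    """Plain fuzzy c-means on a 1D array of pixel intensities."""
    rng = np.random.default_rng(0)
    u = rng.random((n_clusters, intensities.size))
    u /= u.sum(axis=0)
    for _ in range(iters):
        um = u ** m
        centers = (um @ intensities) / um.sum(axis=1)
        dist = np.abs(intensities[None, :] - centers[:, None]) + 1e-9
        u = 1.0 / (dist ** (2 / (m - 1)))
        u /= u.sum(axis=0)
    return u, centers

def segment_tumour(mri_slice):
    """mri_slice: 2D float array normalised to [0, 1]."""
    u, centers = fcm(mri_slice.ravel())
    init = (u.argmax(axis=0) == centers.argmax()).reshape(mri_slice.shape)
    # Refine the brightest-cluster initialisation with morphological Chan-Vese.
    return morphological_chan_vese(mri_slice, 100, init_level_set=init)

# mask = segment_tumour(normalised_t1_slice)   # hypothetical 2D MRI slice
```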

