Cooperative Training and Latent Space Data Augmentation for Robust Medical Image Segmentation

2021 ◽  
pp. 149-159
Author(s):  
Chen Chen ◽  
Kerstin Hammernik ◽  
Cheng Ouyang ◽  
Chen Qin ◽  
Wenjia Bai ◽  
...  
Symmetry ◽  
2021 ◽  
Vol 13 (11) ◽  
pp. 2107
Author(s):  
Xin Wei ◽  
Huan Wan ◽  
Fanghua Ye ◽  
Weidong Min

In recent years, medical image segmentation (MIS) has made huge breakthroughs thanks to the success of deep learning. However, existing MIS algorithms still suffer from two types of uncertainty: (1) uncertainty over the plausible segmentation hypotheses and (2) uncertainty about segmentation performance. Both types of uncertainty reduce the effectiveness of MIS algorithms and, in turn, the reliability of medical diagnosis. Many studies have addressed the former but ignore the latter. We therefore propose the hierarchical predictable segmentation network (HPS-Net), which consists of a new network structure, a new loss function, and a cooperative training mode. To the best of our knowledge, HPS-Net is the first network in the MIS area that generates both diverse segmentation hypotheses, addressing the uncertainty over plausible hypotheses, and performance predictions for those hypotheses, addressing the uncertainty about segmentation performance. Extensive experiments were conducted on the LIDC-IDRI and ISIC2018 datasets. The results show that HPS-Net achieves the highest Dice score among the benchmark methods, i.e., the best segmentation performance. The results also confirm that HPS-Net can effectively predict TNR and TPR.
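The two performance measures that HPS-Net predicts, TPR (sensitivity) and TNR (specificity), are standard quantities computed from a confusion matrix. As a generic illustration (not the authors' implementation), they can be evaluated for flat binary masks as follows:

```python
def tpr_tnr(pred, truth):
    """True-positive rate (sensitivity) and true-negative rate
    (specificity) for flat binary masks of equal length."""
    tp = sum(1 for p, t in zip(pred, truth) if p == 1 and t == 1)
    tn = sum(1 for p, t in zip(pred, truth) if p == 0 and t == 0)
    fn = sum(1 for p, t in zip(pred, truth) if p == 0 and t == 1)
    fp = sum(1 for p, t in zip(pred, truth) if p == 1 and t == 0)
    tpr = tp / (tp + fn) if tp + fn else 0.0
    tnr = tn / (tn + fp) if tn + fp else 0.0
    return tpr, tnr

# A prediction that hits 3 of 4 foreground pixels and both background pixels:
print(tpr_tnr([1, 1, 1, 0, 0, 0], [1, 1, 1, 1, 0, 0]))  # (0.75, 1.0)
```

A segmentation hypothesis with high Dice but low TNR, for instance, would over-segment the background, which is exactly the kind of failure a performance prediction is meant to flag.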


Author(s):  
Lars J. Isaksson ◽  
Paul Summers ◽  
Sara Raimondi ◽  
Sara Gandini ◽  
Abhir Bhalerao ◽  
...  

Abstract Researchers address the generalization problem of deep image processing networks mainly through extensive use of data augmentation techniques such as random flips, rotations, and deformations. A data augmentation technique called mixup, which constructs virtual training samples from convex combinations of inputs, was recently proposed for deep classification networks. The algorithm contributed to increased classification performance on a variety of datasets, but so far has not been evaluated for image segmentation tasks. In this paper, we tested whether the mixup algorithm can improve the generalization performance of deep segmentation networks for medical image data. We trained a standard U-net architecture to segment the prostate in 100 T2-weighted 3D magnetic resonance images from prostate cancer patients, and compared the results with and without mixup in terms of the Dice similarity coefficient and the mean surface distance from a reference segmentation made by an experienced radiologist. Our results suggest that mixup offers a statistically significant boost in performance compared to non-mixup training, leading to an increase of up to 1.9% in Dice and a 10.9% decrease in surface distance. The mixup algorithm may thus offer an important aid for medical image segmentation applications, which are typically limited by severe data scarcity.
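Mixup forms a virtual sample as the convex combination x̃ = λx_i + (1-λ)x_j with λ drawn from a Beta(α, α) distribution; for segmentation, the natural extension is to mix the label maps with the same λ, turning one-hot masks into soft targets. A minimal sketch in plain Python (flat lists standing in for image tensors; the function name is illustrative, not from the paper):

```python
import random

def mixup_pair(x1, y1, x2, y2, alpha=0.2):
    """Mix two (image, label-map) pairs with a Beta(alpha, alpha)
    coefficient; the same lambda is applied to images and labels,
    so one-hot masks become soft targets."""
    lam = random.betavariate(alpha, alpha)
    x = [lam * a + (1 - lam) * b for a, b in zip(x1, x2)]
    y = [lam * a + (1 - lam) * b for a, b in zip(y1, y2)]
    return x, y, lam

random.seed(0)
x, y, lam = mixup_pair([1.0, 0.0], [1.0, 0.0], [0.0, 1.0], [0.0, 1.0])
# Every mixed value stays inside the convex hull of its inputs.
print(lam, x, y)
```

With a small α (e.g. 0.2), λ concentrates near 0 or 1, so most virtual samples stay close to a real one; larger α produces more aggressive interpolation.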


2019 ◽  
Vol 2019 ◽  
pp. 1-10 ◽  
Author(s):  
Lin Teng ◽  
Hang Li ◽  
Shahid Karim

Medical image segmentation is one of the hot topics in image processing, and precise segmentation of medical images is a vital guarantee for follow-up treatment. At present, however, low gray-level contrast and blurred tissue boundaries are common in medical images, and segmentation accuracy is difficult to improve effectively. In particular, deep learning methods need many training samples, which makes training time-consuming. We therefore propose a novel model for medical image segmentation based on a deep multiscale convolutional neural network (CNN). First, we extract the region of interest from the raw medical images. Then, data augmentation is applied to obtain more training data. The proposed method contains three modules: an encoder, a U-net, and a decoder. The encoder is mainly responsible for feature extraction from 2D image slices. The U-net cascades the features of each encoder block with those obtained by deconvolution in the decoder at different scales. The decoder is mainly responsible for upsampling the feature maps after feature extraction in each group. Simulation results show that the new method boosts segmentation accuracy and has strong robustness compared with other segmentation methods.
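The decoder's upsampling and the U-net-style cascade of same-scale encoder features can be illustrated minimally; this is a generic sketch with hypothetical helper names and plain Python lists standing in for feature-map tensors, not the paper's implementation:

```python
def upsample2x(fmap):
    """Nearest-neighbour 2x upsampling of a 2D feature map, as a
    decoder might apply before cascading encoder features."""
    out = []
    for row in fmap:
        wide = [v for v in row for _ in range(2)]  # double width
        out.append(wide)
        out.append(list(wide))                     # double height
    return out

def skip_concat(decoder_fmap, encoder_fmap):
    """Cascade (channel-wise concatenate) upsampled decoder features
    with the same-scale encoder features, U-net style."""
    return [decoder_fmap, encoder_fmap]  # two feature channels

up = upsample2x([[1, 2]])
print(up)  # [[1, 1, 2, 2], [1, 1, 2, 2]]
```

In a real network the upsampling would typically be a learned deconvolution (transposed convolution) rather than nearest-neighbour replication, but the shape bookkeeping is the same.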


2021 ◽  
Vol 21 (1) ◽  
Author(s):  
Dominik Müller ◽  
Frank Kramer

Abstract Background The increased availability and usage of modern medical imaging has created a strong need for automatic medical image segmentation. Still, current image segmentation platforms do not provide the functionality required for a straightforward setup of medical image segmentation pipelines. Already implemented pipelines are commonly standalone software, optimized on a specific public data set. This paper therefore introduces the open-source Python library MIScnn. Implementation The aim of MIScnn is to provide an intuitive API that allows fast building of medical image segmentation pipelines, including data I/O, preprocessing, data augmentation, patch-wise analysis, metrics, a library of state-of-the-art deep learning models, and model utilization such as training, prediction, and fully automatic evaluation (e.g. cross-validation). In addition, high configurability and multiple open interfaces allow full pipeline customization. Results Running a cross-validation with MIScnn on the Kidney Tumor Segmentation Challenge 2019 data set (multi-class semantic segmentation with 300 CT scans) resulted in a powerful predictor based on the standard 3D U-Net model. Conclusions With this experiment, we show that the MIScnn framework enables researchers to rapidly set up a complete medical image segmentation pipeline with just a few lines of code. The source code for MIScnn is available in the Git repository: https://github.com/frankkramer-lab/MIScnn.
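The fully automatic evaluation mentioned above rests on k-fold cross-validation: the sample list is partitioned into k folds, and each fold serves once as the validation set while the remaining folds form the training set. A generic sketch of the fold splitting (not MIScnn's own API; see the repository for its actual interface):

```python
def kfold_split(samples, k):
    """Partition a sample list into k folds and yield
    (train, validation) pairs, one per fold."""
    folds = [samples[i::k] for i in range(k)]
    for i in range(k):
        val = folds[i]
        train = [s for j, f in enumerate(folds) if j != i for s in f]
        yield train, val

cases = [f"case_{i:03d}" for i in range(6)]
for train, val in kfold_split(cases, 3):
    print(len(train), len(val))  # 4 2, for each of the 3 folds
```

For the KiTS 2019 experiment above, the same scheme would partition the 300 CT scans, so every scan is predicted exactly once by a model that never saw it during training.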


2020 ◽  
Author(s):  
Hong Liu ◽  
Haichao Cao ◽  
Enmin Song ◽  
Guangzhi Ma ◽  
Xiangyang Xu ◽  
...  

2020 ◽  
Author(s):  
Kun Chen ◽  
Manning Wang ◽  
Zhijian Song

Abstract Background: Deep neural networks have been widely used in medical image segmentation and have achieved state-of-the-art performance in many tasks. However, unlike the segmentation of natural images or video frames, the manual segmentation of anatomical structures in medical images requires high expertise, so the amount of labeled training data is very small. This is a major obstacle to improving the performance of deep neural networks in medical image segmentation. Methods: In this paper, we propose a new end-to-end generation-segmentation framework that integrates a Generative Adversarial Network (GAN) with a segmentation network and trains them simultaneously. The novelty is that, during GAN training, the intermediate synthetic images produced by the GAN's generator are used to pre-train the segmentation network. As GAN training progresses, the synthetic images evolve gradually from very coarse to containing more realistic textures, and these images help train the segmentation network step by step. After the GAN is trained, the segmentation network is fine-tuned on the real labeled images. Results: We evaluated the proposed framework on four datasets: 2D cardiac and lung datasets and 3D prostate and liver datasets. Compared with the original U-net and CE-Net, our framework achieves better segmentation performance. It also obtains better segmentation results than U-net on small datasets, and it is more effective than the usual data augmentation methods. Conclusions: The proposed framework can be used as a pre-training method for segmentation networks, helping to achieve better segmentation results, and it overcomes the shortcomings of current data augmentation methods to some extent.
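The two-phase schedule described above can be sketched as a training-loop skeleton; the update steps are placeholders standing in for the real GAN and segmentation-network optimizers, and the function names are illustrative, not from the paper:

```python
def train_schedule(gan_epochs, finetune_epochs, log):
    """Sketch of the described schedule: while the GAN trains, its
    synthetic images pre-train the segmentation network; afterwards
    the segmentation network is fine-tuned on real labeled images."""
    # Phase 1: joint GAN training with synthetic-image pre-training.
    for epoch in range(gan_epochs):
        log.append(("gan_step", epoch))      # update generator/discriminator
        log.append(("pretrain_seg", epoch))  # train seg. net on G's synthetic images
    # Phase 2: fine-tune the segmentation network on real labeled data.
    for epoch in range(finetune_epochs):
        log.append(("finetune_seg", epoch))

log = []
train_schedule(gan_epochs=2, finetune_epochs=1, log=log)
print(log[0][0], log[-1][0])  # gan_step finetune_seg
```

The key design point is ordering: the segmentation network first sees coarse-to-realistic synthetic images as the generator improves, and only then the scarce real labeled images.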


2019 ◽  
Vol 25 (3) ◽  
pp. 123-131
Author(s):  
Gyujin Choi ◽  
Jooyeon Shin ◽  
Kyung Joohyun ◽  
Minho Kyung ◽  
Yunjin Lee
