Cross Data Set Generalization of Ultrasound Image Augmentation using Representation Learning: A Case Study

2021 ◽  
Vol 7 (2) ◽  
pp. 755-758
Author(s):  
Daniel Wulff ◽  
Mohamad Mehdi ◽  
Floris Ernst ◽  
Jannis Hagenah

Data augmentation is a common method for making deep learning accessible on limited data sets. However, classical image augmentation methods produce highly unrealistic images on ultrasound data. Another approach is to use learning-based augmentation methods, e.g. based on variational autoencoders or generative adversarial networks. However, a large amount of data is necessary to train these models, which is typically not available in scenarios where data augmentation is needed. One solution to this problem could be a transfer of augmentation models between different medical imaging data sets. In this work, we present a qualitative study of the cross data set generalization performance of different learning-based augmentation methods for ultrasound image data. We show that knowledge transfer is possible in ultrasound image augmentation and that the augmentation partially results in semantically meaningful transfers of structures, e.g. vessels, across domains.
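As an illustration of the classical baseline the abstract contrasts with, here is a minimal NumPy sketch of standard image augmentations (the operations and parameters are generic assumptions, not the paper's pipeline). Geometric operations such as flips are exactly the kind that can look unrealistic on ultrasound, since they reverse physically oriented artifacts like acoustic shadowing:

```python
import numpy as np

rng = np.random.default_rng(0)

def classical_augment(img, rng):
    """Classical augmentations; geometric ops can break ultrasound physics
    (e.g. a flip reverses the direction of acoustic shadowing)."""
    out = []
    out.append(np.fliplr(img))   # horizontal flip
    out.append(np.rot90(img))    # 90-degree rotation
    # additive Gaussian noise, clipped back to the valid intensity range
    out.append(np.clip(img + rng.normal(0.0, 0.05, img.shape), 0.0, 1.0))
    return out

img = rng.random((64, 64))       # stand-in for a normalized ultrasound frame
augmented = classical_augment(img, rng)
```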

2019 ◽  
Author(s):  
Bruno H. L. dos Anjos ◽  
Anthony E. A. Jatobá ◽  
Marcelo C. Oliveira

Obtaining medical images is an ethically restricted process that is still difficult to validate: it depends on well-trained professionals and is a laborious, time-consuming activity. The construction of large databases of structured medical images is therefore one of the major challenges for deep learning applications in computer-aided diagnosis on medical images. GANs present an adequate solution to the shortage of pathological exams in most medical image banks. In this work, we develop a method using GANs to balance a data set of computed tomography images and improve the performance of an arbitrary classifier of pulmonary nodules. To this end, two GAN architectures capable of generating synthetic images of nodule slices were trained, and a convolutional neural network was then trained on the different data sets. Training on the different data sets was evaluated by AUC-ROC.
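Since the trainings are compared by AUC-ROC, a small self-contained sketch of that metric via the Mann-Whitney U statistic may help (this is the standard rank formulation; the paper's evaluation code is not shown here):

```python
import numpy as np

def auc_roc(labels, scores):
    """AUC-ROC as the probability that a random positive is scored
    above a random negative; ties count as 0.5."""
    labels = np.asarray(labels, dtype=bool)
    scores = np.asarray(scores, dtype=float)
    pos, neg = scores[labels], scores[~labels]
    # explicit pairwise comparison; fine for small evaluation sets
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))
```

For example, `auc_roc([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8])` evaluates 3 of 4 positive/negative pairs as correctly ordered, giving 0.75.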


2021 ◽  
Vol 7 (12) ◽  
pp. 254
Author(s):  
Loris Nanni ◽  
Michelangelo Paci ◽  
Sheryl Brahnam ◽  
Alessandra Lumini

Convolutional neural networks (CNNs) have gained prominence in the research literature on image classification over the last decade. One shortcoming of CNNs, however, is their lack of generalizability and tendency to overfit when presented with small training sets. Augmentation directly confronts this problem by generating new data points that provide additional information. In this paper, we investigate the performance of more than ten different sets of data augmentation methods, with two novel approaches proposed here: one based on the discrete wavelet transform and the other on the constant-Q Gabor transform. Pretrained ResNet50 networks are fine-tuned on each augmentation method. Combinations of these networks are evaluated and compared across four benchmark data sets of images representing diverse problems and collected by instruments that capture information at different scales: a virus data set, a bark data set, a portrait data set, and a LIGO glitches data set. Experiments demonstrate the superiority of this approach. The best ensemble proposed in this work achieves state-of-the-art (or comparable) performance across all four data sets. This result shows that varying data augmentation is a feasible way to build an ensemble of classifiers for image classification.
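The wavelet-domain augmentation can be sketched generically with a one-level 2D Haar transform: decompose, rescale the detail sub-bands, and invert. The Haar basis and the `gain` parameter are illustrative assumptions for the sketch, not the authors' exact method:

```python
import numpy as np

def haar2d(img):
    """One-level 2D Haar DWT (image sides must be even)."""
    lo = (img[:, 0::2] + img[:, 1::2]) / 2        # row-wise averages
    hi = (img[:, 0::2] - img[:, 1::2]) / 2        # row-wise details
    ll, lh = (lo[0::2] + lo[1::2]) / 2, (lo[0::2] - lo[1::2]) / 2
    hl, hh = (hi[0::2] + hi[1::2]) / 2, (hi[0::2] - hi[1::2]) / 2
    return ll, lh, hl, hh

def ihaar2d(ll, lh, hl, hh):
    """Exact inverse of haar2d."""
    h, w = ll.shape
    lo, hi = np.empty((2 * h, w)), np.empty((2 * h, w))
    lo[0::2], lo[1::2] = ll + lh, ll - lh
    hi[0::2], hi[1::2] = hl + hh, hl - hh
    img = np.empty((2 * h, 2 * w))
    img[:, 0::2], img[:, 1::2] = lo + hi, lo - hi
    return img

def dwt_augment(img, gain=1.3):
    """Hypothetical DWT augmentation: rescale detail sub-bands, invert."""
    ll, lh, hl, hh = haar2d(img)
    return ihaar2d(ll, gain * lh, gain * hl, gain * hh)
```

With `gain=1.0` the round trip reproduces the input exactly, which is a useful sanity check on the transform pair.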


Sensors ◽  
2020 ◽  
Vol 20 (15) ◽  
pp. 4203 ◽  
Author(s):  
Qingyun Li ◽  
Zhibin Yu ◽  
Yubo Wang ◽  
Haiyong Zheng

The high human labor demand involved in collecting paired medical imaging data severely impedes the application of deep learning methods to medical image processing tasks such as tumor segmentation. The situation is further worsened when collecting multi-modal image pairs. However, this issue can be resolved through the help of generative adversarial networks, which can be used to generate realistic images. In this work, we propose a novel framework, named TumorGAN, to generate image segmentation pairs based on unpaired adversarial training. To improve the quality of the generated images, we introduce a regional perceptual loss to enhance the performance of the discriminator. We also develop a regional L1 loss to constrain the color of the imaged brain tissue. Finally, we verify the performance of TumorGAN on a public brain tumor data set, BraTS 2017. The experimental results demonstrate that the synthetic data pairs generated by our proposed method can practically improve tumor segmentation performance when applied to segmentation network training.
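The regional L1 idea, restricting the reconstruction loss to a region of interest such as brain tissue, can be sketched as follows (the masking scheme here is an assumption; TumorGAN's exact formulation may differ):

```python
import numpy as np

def regional_l1(generated, reference, mask):
    """Mean absolute difference restricted to a region of interest.
    `mask` is nonzero where the loss should apply (e.g. brain tissue)."""
    mask = np.asarray(mask).astype(bool)
    return np.abs(generated[mask] - reference[mask]).mean()
```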


2021 ◽  
Vol 11 (21) ◽  
pp. 10224
Author(s):  
Hsu-Yung Cheng ◽  
Chih-Chang Yu

In this paper, a framework based on generative adversarial networks is proposed to perform nature-scenery generation according to descriptions from the users. The desired place, time, and season of the generated scenes can be specified with the help of text-to-image generation techniques. The framework improves and modifies the architecture of a generative adversarial network with attention models by adding imagination models. The proposed attentional and imaginative generative network uses the hidden-layer information to initialize the memory cell of the recurrent neural network to produce the desired photos. A data set containing different categories of scenery images is established to train the proposed system. The experiments validate that the proposed method is able to increase the quality and diversity of the generated images compared to the existing method. A possible application of road-image generation for data augmentation is also demonstrated in the experimental results.
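The described initialization, using hidden-layer information to seed the recurrent network's memory cell, might be sketched as a learned projection (the shapes and the tanh squashing below are illustrative assumptions, not the paper's architecture):

```python
import numpy as np

rng = np.random.default_rng(1)

def init_memory_from_hidden(hidden, w, b):
    """Project an encoder hidden-layer vector into the initial memory
    cell of a recurrent decoder; tanh keeps the state bounded."""
    return np.tanh(w @ hidden + b)

hidden = rng.standard_normal(16)          # e.g. a text-encoder hidden layer
w = rng.standard_normal((32, 16)) * 0.1   # learned projection to cell size 32
b = np.zeros(32)
c0 = init_memory_from_hidden(hidden, w, b)
```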


2019 ◽  
Vol 9 (1) ◽  
Author(s):  
Veit Sandfort ◽  
Ke Yan ◽  
Perry J. Pickhardt ◽  
Ronald M. Summers

Labeled medical imaging data is scarce and expensive to generate. To achieve generalizable deep learning models, large amounts of data are needed. Standard data augmentation is a method to increase generalizability and is routinely performed. Generative adversarial networks offer a novel method for data augmentation. We evaluate the use of CycleGAN for data augmentation in CT segmentation tasks. Using a large image database, we trained a CycleGAN to transform contrast CT images into non-contrast images. We then used the trained CycleGAN to augment our training with these synthetic non-contrast images. We compared the segmentation performance of a U-Net trained on the original dataset to that of a U-Net trained on the combined dataset of original data and synthetic non-contrast images. We further evaluated U-Net segmentation performance on two separate datasets: the original contrast CT dataset on which segmentations were created, and a second dataset from a different hospital containing only non-contrast CTs. We refer to these two datasets as the in-distribution and out-of-distribution datasets, respectively. We show that performance improves significantly in several CT segmentation tasks, especially on out-of-distribution (non-contrast CT) data. For example, when training the model with standard augmentation techniques, segmentation performance for the kidneys on out-of-distribution non-contrast images was dramatically lower than for in-distribution data (Dice score of 0.09 vs. 0.94, p < 0.001). When the kidney model was trained with CycleGAN augmentation, out-of-distribution (non-contrast) performance increased dramatically (from a Dice score of 0.09 to 0.66, p < 0.001). Improvements for the liver and spleen were smaller, from 0.86 to 0.89 and 0.65 to 0.69, respectively. We believe this method will be valuable to medical imaging researchers for reducing manual segmentation effort and cost in CT imaging.
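The Dice scores reported above measure overlap between predicted and reference segmentation masks; a minimal implementation:

```python
import numpy as np

def dice(pred, target, eps=1e-7):
    """Dice coefficient between two binary masks:
    2 * |intersection| / (|pred| + |target|)."""
    pred, target = np.asarray(pred).astype(bool), np.asarray(target).astype(bool)
    inter = np.logical_and(pred, target).sum()
    return 2.0 * inter / (pred.sum() + target.sum() + eps)
```

A score of 1.0 means perfect overlap and 0.0 means no overlap, so the jump from 0.09 to 0.66 for out-of-distribution kidneys is substantial.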


Author(s):  
Oleksandr Chaikovskyi ◽  
Artem Volokyta ◽  
Artemi Kyrianov ◽  
Heorhii Loutskii

The article discusses a data augmentation method based on generative adversarial networks to improve the accuracy of image classification by convolutional neural networks. A comparative analysis of the proposed method with classical image augmentation methods was performed.


Author(s):  
Loris Nanni ◽  
Michelangelo Paci ◽  
Sheryl Brahnam ◽  
Alessandra Lumini

Convolutional Neural Networks (CNNs) have gained prominence in the research literature on image classification over the last decade. One shortcoming of CNNs, however, is their lack of generalizability and tendency to overfit when presented with small training sets. Augmentation directly confronts this problem by generating new data points that provide additional information. In this paper, we investigate the performance of more than ten different sets of data augmentation methods, with two novel approaches proposed here: one based on the Discrete Wavelet Transform and the other on the Constant-Q Gabor transform. Pretrained ResNet50 networks are fine-tuned on each augmentation method. Combinations of these networks are evaluated and compared across three benchmark data sets of images representing diverse problems and collected by instruments that capture information at different scales: a virus data set, a bark data set, and a LIGO glitches data set. Experiments demonstrate the superiority of this approach. The best ensemble proposed in this work achieves state-of-the-art performance across all three data sets. This result shows that varying data augmentation is a feasible way to build an ensemble of classifiers for image classification (code available at https://github.com/LorisNanni).

