Why Are Generative Adversarial Networks Vital for Deep Neural Networks? A Case Study on COVID-19 Chest X-Ray Images

Author(s):  
M. Y. Shams ◽  
O. M. Elzeki ◽  
Mohamed Abd Elfattah ◽  
T. Medhat ◽  
Aboul Ella Hassanien
IEEE Access ◽  
2019 ◽  
Vol 7 ◽  
pp. 111168-111180 ◽  
Author(s):  
Jinrui Wang ◽  
Shunming Li ◽  
Baokun Han ◽  
Zenghui An ◽  
Huaiqian Bao ◽  
...  

Author(s):  
Ming Hou ◽  
Brahim Chaib-draa ◽  
Chao Li ◽  
Qibin Zhao

In this work, we consider the task of classifying binary positive-unlabeled (PU) data. Existing discriminative PU models attempt to find an optimal reweighting strategy for the unlabeled data so that a decent decision boundary can be found. However, given limited positive data, conventional PU models tend to overfit when adapted to very flexible deep neural networks. In contrast, we are the first to attack the binary PU task from the perspective of generative learning, by leveraging the power of generative adversarial networks (GAN). Our generative positive-unlabeled (GenPU) framework incorporates an array of discriminators and generators that are endowed with different roles in simultaneously producing realistic positive and negative samples. We provide a theoretical analysis to justify that, at equilibrium, GenPU is capable of recovering both the positive and the negative data distributions. Moreover, we show that GenPU is generalizable and closely related to semi-supervised classification. Given rather limited positive data, experiments on both synthetic and real-world datasets demonstrate the effectiveness of the proposed framework. With infinite realistic and diverse sample streams generated by GenPU, a very flexible classifier can then be trained using deep neural networks.
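As a concrete illustration of the adversarial setup described above, the sketch below shows a heavily simplified GenPU-style training step in PyTorch: two generators produce positive and negative samples, one discriminator is tied to the positive set, one to the unlabeled set, and a class prior `pi` weights the generated mixture. The layer sizes, loss weights, and the omission of a separate negative-data discriminator are assumptions made here for brevity; this is not the paper's exact objective.

```python
import torch
import torch.nn as nn

def mlp(in_dim, out_dim, hidden=64, out_act=None):
    layers = [nn.Linear(in_dim, hidden), nn.ReLU(), nn.Linear(hidden, out_dim)]
    if out_act is not None:
        layers.append(out_act)
    return nn.Sequential(*layers)

data_dim, z_dim = 2, 8
G_p, G_n = mlp(z_dim, data_dim), mlp(z_dim, data_dim)   # positive / negative sample generators
D_p = mlp(data_dim, 1, out_act=nn.Sigmoid())            # judges real P vs. generated positives
D_u = mlp(data_dim, 1, out_act=nn.Sigmoid())            # judges real U vs. the generated mixture
bce = nn.BCELoss()
opt_G = torch.optim.Adam(list(G_p.parameters()) + list(G_n.parameters()), lr=1e-4)
opt_D = torch.optim.Adam(list(D_p.parameters()) + list(D_u.parameters()), lr=1e-4)

def train_step(x_p, x_u, pi=0.5):
    """One adversarial update; pi is the (assumed known) positive class prior."""
    b = x_p.size(0)
    z = torch.randn(b, z_dim)
    fake_p, fake_n = G_p(z), G_n(z)

    # Discriminator step: D_p separates real P from G_p samples, while D_u separates
    # real U from the prior-weighted mixture of generated positives and negatives.
    d_loss = (bce(D_p(x_p), torch.ones(b, 1))
              + bce(D_p(fake_p.detach()), torch.zeros(b, 1))
              + bce(D_u(x_u), torch.ones(x_u.size(0), 1))
              + pi * bce(D_u(fake_p.detach()), torch.zeros(b, 1))
              + (1 - pi) * bce(D_u(fake_n.detach()), torch.zeros(b, 1)))
    opt_D.zero_grad(); d_loss.backward(); opt_D.step()

    # Generator step: G_p must fool both D_p and D_u; G_n only needs to fool D_u.
    z = torch.randn(b, z_dim)
    fake_p, fake_n = G_p(z), G_n(z)
    g_loss = (bce(D_p(fake_p), torch.ones(b, 1))
              + pi * bce(D_u(fake_p), torch.ones(b, 1))
              + (1 - pi) * bce(D_u(fake_n), torch.ones(b, 1)))
    opt_G.zero_grad(); g_loss.backward(); opt_G.step()
    return d_loss.item(), g_loss.item()

# Toy usage with illustrative 2-D data:
x_p = torch.randn(32, data_dim) + 2.0     # "positive" samples
x_u = torch.randn(64, data_dim)           # "unlabeled" mixture
print(train_step(x_p, x_u, pi=0.3))
```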


2019 ◽  
Vol 177 ◽  
pp. 285-296 ◽  
Author(s):  
Johnatan Carvalho Souza ◽  
João Otávio Bandeira Diniz ◽  
Jonnison Lima Ferreira ◽  
Giovanni Lucca França da Silva ◽  
Aristófanes Corrêa Silva ◽  
...  

IEEE Access ◽  
2020 ◽  
Vol 8 ◽  
pp. 153535-153545
Author(s):  
Faizan Munawar ◽  
Shoaib Azmat ◽  
Talha Iqbal ◽  
Christer Gronlund ◽  
Hazrat Ali

Mathematics ◽  
2021 ◽  
Vol 9 (22) ◽  
pp. 2896
Author(s):  
Giorgio Ciano ◽  
Paolo Andreini ◽  
Tommaso Mazzierli ◽  
Monica Bianchini ◽  
Franco Scarselli

Multi-organ segmentation of X-ray images is of fundamental importance for computer-aided diagnosis systems. However, the most advanced semantic segmentation methods rely on deep learning and require a huge number of labeled images, which are rarely available because of both the high cost of human resources and the time required for labeling. In this paper, we present a novel multi-stage generation algorithm based on Generative Adversarial Networks (GANs) that can produce synthetic images along with their semantic labels and can be used for data augmentation. The main feature of the method is that, unlike other approaches, generation occurs in several stages, which simplifies the procedure and allows it to be used on very small datasets. The method was evaluated on the segmentation of chest radiographic images, showing promising results. The multi-stage approach achieves state-of-the-art results and, when very few images are used to train the GANs, outperforms the corresponding single-stage approach.
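To make the "generate the label, then generate the image" idea more tangible, here is a rough PyTorch sketch of such a pipeline: a first network maps noise to a semantic label map, and a second network translates that label map into a synthetic radiograph, yielding paired (image, label) samples for augmentation. The module names, architectures, and sizes are illustrative assumptions, and the adversarial discriminators and the multi-stage refinement used in the paper are omitted for brevity.

```python
import torch
import torch.nn as nn

class LabelGenerator(nn.Module):
    """Stage 1: map a noise vector to a coarse multi-organ label map."""
    def __init__(self, z_dim=64, n_classes=4, size=64):
        super().__init__()
        self.size, self.n_classes = size, n_classes
        self.net = nn.Sequential(
            nn.Linear(z_dim, 256), nn.ReLU(),
            nn.Linear(256, n_classes * size * size))

    def forward(self, z):
        logits = self.net(z).view(-1, self.n_classes, self.size, self.size)
        return logits.softmax(dim=1)          # per-pixel class probabilities

class ImageTranslator(nn.Module):
    """Stage 2: translate a label map into a synthetic image (pix2pix-style idea)."""
    def __init__(self, n_classes=4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(n_classes, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1), nn.Tanh())

    def forward(self, label_map):
        return self.net(label_map)

# Generating one synthetic (image, label) pair for augmentation:
g_label, g_image = LabelGenerator(), ImageTranslator()
z = torch.randn(1, 64)
label_map = g_label(z)                        # soft label map, shape (1, 4, 64, 64)
image = g_image(label_map)                    # synthetic image, shape (1, 1, 64, 64)
mask = label_map.argmax(dim=1)                # hard labels for the segmentation loss
```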


2020 ◽  
Author(s):  
Kun Chen ◽  
Manning Wang ◽  
Zhijian Song

Background: Deep neural networks have been widely used in medical image segmentation and have achieved state-of-the-art performance in many tasks. However, unlike the segmentation of natural images or video frames, the manual segmentation of anatomical structures in medical images requires high expertise, so the amount of labeled training data is very small, which is a major obstacle to improving the performance of deep neural networks in medical image segmentation.
Methods: In this paper, we propose a new end-to-end generation-segmentation framework that integrates a Generative Adversarial Network (GAN) and a segmentation network and trains them simultaneously. The novelty is that, during the training of the GAN, the intermediate synthetic images produced by the generator are used to pre-train the segmentation network. As the training of the GAN advances, the synthetic images gradually evolve from very coarse to containing more realistic textures, and these images help train the segmentation network step by step. After the training of the GAN, the segmentation network is fine-tuned on the real labeled images.
Results: We evaluated the proposed framework on four datasets: 2D cardiac and lung datasets and 3D prostate and liver datasets. Compared with the original U-Net and CE-Net, our framework achieves better segmentation performance, and it also obtains better segmentation results than U-Net on small datasets. In addition, our framework is more effective than the usual data augmentation methods.
Conclusions: The proposed framework can be used as a pre-training method for segmentation networks, which helps obtain better segmentation results. Our method can mitigate the shortcomings of current data augmentation methods to some extent.
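A schematic of the described training procedure, assuming a conditional generator that synthesizes images from real label maps, might look like the following PyTorch sketch. All network constructors, epoch counts, and learning rates are placeholders rather than the paper's exact settings: during GAN training the segmentation network is pre-trained on the evolving synthetic images (which inherit labels from the conditioning masks), and afterwards it is fine-tuned on the real labeled images.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def train(generator, discriminator, seg_net, loader,
          num_classes=4, epochs_gan=50, epochs_ft=20):
    """Phase 1: adversarial training, with the segmentation network pre-trained on the
    evolving synthetic images. Phase 2: fine-tuning on the real labeled images."""
    adv_loss, seg_loss = nn.BCEWithLogitsLoss(), nn.CrossEntropyLoss()
    opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
    opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
    opt_s = torch.optim.Adam(seg_net.parameters(), lr=1e-4)

    for _ in range(epochs_gan):
        for image, mask in loader:                       # mask: (B, H, W) class indices
            onehot = F.one_hot(mask, num_classes).permute(0, 3, 1, 2).float()
            fake = generator(onehot)                     # image synthesized from the label map

            # Discriminator update on real vs. synthetic images.
            d_real, d_fake = discriminator(image), discriminator(fake.detach())
            d_loss = (adv_loss(d_real, torch.ones_like(d_real))
                      + adv_loss(d_fake, torch.zeros_like(d_fake)))
            opt_d.zero_grad(); d_loss.backward(); opt_d.step()

            # Generator update: try to fool the discriminator.
            d_fake = discriminator(fake)
            g_loss = adv_loss(d_fake, torch.ones_like(d_fake))
            opt_g.zero_grad(); g_loss.backward(); opt_g.step()

            # Pre-train the segmentation network on the current synthetic images,
            # which carry the labels of the masks they were generated from.
            s_loss = seg_loss(seg_net(fake.detach()), mask)
            opt_s.zero_grad(); s_loss.backward(); opt_s.step()

    # Phase 2: fine-tune the segmentation network on the real labeled images.
    for _ in range(epochs_ft):
        for image, mask in loader:
            s_loss = seg_loss(seg_net(image), mask)
            opt_s.zero_grad(); s_loss.backward(); opt_s.step()
```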

