RoCGAN: Robust Conditional GAN

2020
Vol 128 (10-11)
pp. 2665-2683
Author(s):
Grigorios G. Chrysos,
Jean Kossaifi,
Stefanos Zafeiriou

Conditional image generation lies at the heart of computer vision, and conditional generative adversarial networks (cGANs) have recently become the method of choice for this task, owing to their superior performance. The focus so far has largely been on performance improvement, with little effort devoted to making cGANs more robust to noise. However, the regression performed by the generator can produce arbitrarily large errors in the output, which makes cGANs unreliable for real-world applications. In this work, we introduce a novel conditional GAN model, called RoCGAN, which leverages structure in the target space of the model to address this issue. Specifically, we augment the generator with an unsupervised pathway, which encourages the outputs of the generator to span the target manifold even in the presence of intense noise. We prove that RoCGAN shares theoretical properties similar to those of GANs and establish the merits of our model on both synthetic and real data. We perform a thorough experimental validation on large-scale datasets of natural scenes and faces and observe that our model outperforms existing cGAN architectures by a large margin. We also empirically demonstrate the performance of our approach under two types of noise (adversarial and Bernoulli).
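As a rough illustration of the two-pathway idea described above, the PyTorch sketch below pairs a regression encoder (fed the conditioning input) with an unsupervised autoencoder encoder (fed clean target images), both driving a shared decoder; the shared decoder is what nudges the regression output toward the target manifold. All module names and layer sizes are illustrative placeholders, not the authors' architecture.

# Hypothetical two-pathway generator sketch (not the published RoCGAN architecture).
import torch
import torch.nn as nn

class TwoPathwayGenerator(nn.Module):
    def __init__(self, in_ch=3, out_ch=3, hidden=64):
        super().__init__()
        # Regression pathway encoder: maps the conditioning (possibly noisy) image to a latent code.
        self.enc_reg = nn.Sequential(
            nn.Conv2d(in_ch, hidden, 4, 2, 1), nn.ReLU(),
            nn.Conv2d(hidden, hidden * 2, 4, 2, 1), nn.ReLU())
        # Unsupervised pathway encoder: sees clean target images only (autoencoder branch).
        self.enc_ae = nn.Sequential(
            nn.Conv2d(out_ch, hidden, 4, 2, 1), nn.ReLU(),
            nn.Conv2d(hidden, hidden * 2, 4, 2, 1), nn.ReLU())
        # Decoder shared by both pathways; weight sharing ties the regression output
        # to the target manifold learned by the autoencoder pathway.
        self.dec = nn.Sequential(
            nn.ConvTranspose2d(hidden * 2, hidden, 4, 2, 1), nn.ReLU(),
            nn.ConvTranspose2d(hidden, out_ch, 4, 2, 1), nn.Tanh())

    def forward(self, x_cond, y_clean=None):
        y_fake = self.dec(self.enc_reg(x_cond))                       # cGAN regression output
        y_rec = self.dec(self.enc_ae(y_clean)) if y_clean is not None else None
        return y_fake, y_rec                                          # y_rec trains the unsupervised pathway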

2020
Author(s):
Qiao Liu,
Shengquan Chen,
Rui Jiang,
Wing Hung Wong

Recent advances in single-cell technologies, including single-cell ATAC-seq (scATAC-seq), have enabled large-scale profiling of the chromatin accessibility landscape at the single-cell level. However, the characteristics of scATAC-seq data, including high sparsity and high dimensionality, have greatly complicated the computational analysis. Here, we propose scDEC, a computational tool for single-cell ATAC-seq analysis with deep generative neural networks. scDEC is built on a pair of generative adversarial networks (GANs) and is capable of simultaneously learning latent representations and inferring cell labels. In a series of experiments, scDEC demonstrates superior performance over other tools in scATAC-seq analysis across multiple datasets and experimental settings. In downstream applications, we demonstrate that the generative power of scDEC helps to infer the trajectory and intermediate states of cells during differentiation, and that the latent features learned by scDEC can potentially reveal both biological cell types and within-cell-type variation.
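The sketch below illustrates, in PyTorch, the general idea of pairing a generator with an inverse network over a latent code that has a continuous part and a categorical part, so that cluster labels can be read off the recovered categorical part. Dimensions, network shapes, and names are assumptions for illustration, not the published scDEC configuration.

# Hypothetical paired-network sketch: latent (continuous + one-hot) <-> data space.
import torch
import torch.nn as nn
import torch.nn.functional as F

n_clusters, cont_dim, data_dim = 10, 10, 2000

G = nn.Sequential(nn.Linear(cont_dim + n_clusters, 512), nn.ReLU(),
                  nn.Linear(512, data_dim))                 # latent code -> scATAC-seq-like profile
H = nn.Sequential(nn.Linear(data_dim, 512), nn.ReLU(),
                  nn.Linear(512, cont_dim + n_clusters))    # profile -> latent code (inverse mapping)

def sample_latent(batch):
    z = torch.randn(batch, cont_dim)                                          # continuous part
    c = F.one_hot(torch.randint(n_clusters, (batch,)), n_clusters).float()    # categorical part
    return torch.cat([z, c], dim=1)

x_fake = G(sample_latent(4))                                # generated cell profiles
x_real = torch.randn(4, data_dim)                           # stand-in for real cell profiles
labels = H(x_real)[:, cont_dim:].argmax(dim=1)              # inferred cluster assignment per cell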


2020
Author(s):
Congmei Jiang,
Yongfang Mao,
Yi Chai,
Mingbiao Yu

With the increasing penetration of renewable resources such as wind and solar, the operation and planning of power systems, especially in terms of large-scale integration, are faced with great risks due to the inherent stochasticity of natural resources. Although this uncertainty can be anticipated, the timing, magnitude, and duration of fluctuations cannot be predicted accurately. In addition, the outputs of renewable power sources are correlated in space and time, and this brings further challenges for predicting the characteristics of their future behavior. To address these issues, this paper describes an unsupervised method for renewable scenario forecasts that considers spatiotemporal correlations based on generative adversarial networks (GANs), which have been shown to generate high-quality samples. We first utilized an improved GAN to learn unknown data distributions and model the dynamic processes of renewable resources. We then generated a large number of forecasted scenarios using stochastic constrained optimization. For validation, we used power-generation data from the National Renewable Energy Laboratory wind and solar integration datasets. The experimental results validated the effectiveness of our proposed method and indicated that it has significant potential in renewable scenario analysis.
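One way to picture the scenario-generation step is to keep a trained generator fixed and optimize latent vectors so that the generated trajectories satisfy a simple constraint, such as matching a forecast daily mean. The sketch below (PyTorch) is a hypothetical illustration under that assumption; the generator, constraint, and optimizer are stand-ins rather than the procedure used in the paper.

# Hypothetical latent-space optimization for forecast-consistent scenarios.
import torch
import torch.nn as nn

horizon, latent_dim = 24, 16
G = nn.Sequential(nn.Linear(latent_dim, 64), nn.ReLU(),
                  nn.Linear(64, horizon))                  # stand-in for a trained scenario generator

target_mean = torch.tensor(0.35)                           # e.g. forecast mean capacity factor for the day
z = torch.randn(100, latent_dim, requires_grad=True)       # 100 candidate scenarios
opt = torch.optim.Adam([z], lr=0.05)

for _ in range(200):
    scenarios = G(z)
    loss = (scenarios.mean(dim=1) - target_mean).pow(2).mean()   # soft constraint on the daily mean
    opt.zero_grad()
    loss.backward()
    opt.step()

scenarios = G(z).detach()                                  # forecast scenarios spanning the remaining uncertainty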


2019
Vol 214
pp. 06025
Author(s):
Jean-Roch Vlimant,
Felice Pantaleo,
Maurizio Pierini,
Vladimir Loncar,
Sofia Vallecorsa,
...

In recent years, several studies have demonstrated the benefit of using deep learning to solve typical tasks related to high-energy physics data taking and analysis. In particular, generative adversarial networks are a good candidate to supplement the simulation of the detector response in a collider environment. Training of neural network models has been made tractable by improved optimization methods and the advent of GP-GPUs well adapted to the highly parallelizable task of training neural nets. Despite these advancements, training large models over large datasets can take days to weeks, and finding the best model architecture and settings can take many expensive trials. To get the best out of this new technology, it is important to scale up the available network-training resources and, consequently, to provide tools for optimal large-scale distributed training. In this context, we describe the development of a new training workflow that scales on multi-node/multi-GPU architectures, with an eye to deployment on high-performance computing machines. We describe the integration of hyperparameter optimization with a distributed training framework using the Message Passing Interface, for models defined in keras [12] or pytorch [13]. We present results on the speedup of training generative adversarial networks on a dataset composed of the energy depositions from electrons, photons, and charged and neutral hadrons in a fine-grained digital calorimeter.
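For readers unfamiliar with the pattern, the sketch below shows the basic shape of MPI-based data-parallel training with mpi4py and PyTorch: each rank computes gradients on its own batch and an allreduce averages them before the optimizer step. It is a minimal illustration of the general technique, not the workflow or framework described in the paper, and the model is a placeholder.

# Minimal data-parallel training sketch with MPI gradient averaging (run with mpirun).
from mpi4py import MPI
import numpy as np
import torch
import torch.nn as nn

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

model = nn.Linear(100, 1)                          # placeholder model
opt = torch.optim.SGD(model.parameters(), lr=0.01)

for step in range(10):
    x = torch.randn(32, 100)                       # each rank draws its own local batch
    y = torch.randn(32, 1)
    loss = nn.functional.mse_loss(model(x), y)
    opt.zero_grad()
    loss.backward()
    for p in model.parameters():                   # average gradients across all ranks
        g = p.grad.numpy()
        buf = np.zeros_like(g)
        comm.Allreduce(g, buf, op=MPI.SUM)
        p.grad.copy_(torch.from_numpy(buf / size))
    opt.step()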


2019
Vol 9 (18)
pp. 3856
Author(s):
Dan Zhao,
Baolong Guo,
Yunyi Yan

Over the last few years, image completion has made significant progress thanks to generative adversarial networks (GANs) that are able to synthesize photorealistic content. However, one of the main obstacles faced by many existing methods is that they often create blurry textures or distorted structures that are inconsistent with surrounding regions. The main reason is the ineffectiveness of implicitly disentangling the style latent space from images. To address this problem, we develop a novel image completion framework called PIC-EC: parallel image completion networks with edge and color maps, which explicitly provides image edge and color information as prior knowledge for image completion. The PIC-EC framework consists of parallel edge and color generators followed by an image completion network. Specifically, the parallel paths generate edge and color maps for the missing region at the same time, and the image completion network then fills the missing region with fine details using the generated edge and color information as priors. The proposed method was evaluated on the CelebA-HQ and Paris StreetView datasets. Experimental results demonstrate that PIC-EC achieves superior performance on challenging cases with complex compositions and outperforms existing methods on evaluations of realism and accuracy, both quantitatively and qualitatively.
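The parallel-priors composition can be sketched as follows: an edge generator and a color generator run on the masked input independently, and the completion network conditions on the masked image together with both generated priors. The single-convolution modules below are placeholders standing in for the much deeper PIC-EC generators.

# Schematic composition of the parallel edge/color priors and the completion network (PyTorch).
import torch
import torch.nn as nn

edge_gen  = nn.Conv2d(4, 1, 3, padding=1)           # masked RGB + mask -> edge-map prior
color_gen = nn.Conv2d(4, 3, 3, padding=1)           # masked RGB + mask -> coarse color-map prior
completer = nn.Conv2d(8, 3, 3, padding=1)           # masked RGB + mask + edge + color -> completed image

img  = torch.rand(1, 3, 256, 256)
mask = (torch.rand(1, 1, 256, 256) > 0.25).float()  # 1 = known pixel, 0 = missing
x = torch.cat([img * mask, mask], dim=1)

edges  = edge_gen(x)                                 # the two prior branches run in parallel
colors = color_gen(x)
completed = completer(torch.cat([img * mask, mask, edges, colors], dim=1))
output = completed * (1 - mask) + img * mask         # keep known pixels, fill only the hole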


2020
Vol 496 (1)
pp. L54-L58
Author(s):
Kana Moriwaki,
Nina Filippova,
Masato Shirasaki,
Naoki Yoshida

Line intensity mapping (LIM) is an emerging observational method to study the large-scale structure of the Universe and its evolution. LIM does not resolve individual sources but probes the fluctuations of integrated line emission. A serious limitation of LIM is that contributions of different emission lines from sources at different redshifts are all confused at an observed wavelength. We propose a deep learning application to solve this problem. We use conditional generative adversarial networks to extract designated information from LIM. We consider a simple case with two populations of emission-line galaxies: Hα-emitting galaxies at z = 1.3 are confused with [O III] emitters at z = 2.0 in a single observed waveband at 1.5 μm. Our networks, trained with 30,000 mock observation maps, are able to extract the total intensity and the spatial distribution of Hα-emitting galaxies at z = 1.3. The intensity peaks are successfully located with 74 per cent precision. The precision increases to 91 per cent when we combine five networks. The mean intensity and the power spectrum are reconstructed with an accuracy of ∼10 per cent. The extracted galaxy distributions over a wider range of redshifts can be used for studies of cosmology and of galaxy formation and evolution.
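The conditional-GAN setup can be sketched as an image-to-image mapping from the confused observed map to the Hα-only map, with a discriminator judging (input, output) pairs and an added pixel-wise loss. The tiny networks, loss weight, and mock maps below are assumptions made for illustration, not the networks or data used in the paper.

# Illustrative generator update for a conditional GAN that separates one line-emission component.
import torch
import torch.nn as nn

G = nn.Sequential(nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
                  nn.Conv2d(32, 1, 3, padding=1))   # observed confused map -> H-alpha map
D = nn.Sequential(nn.Conv2d(2, 32, 3, padding=1), nn.ReLU(),
                  nn.Conv2d(32, 1, 3, padding=1))   # patch discriminator on (input, output) pairs
g_opt = torch.optim.Adam(G.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

observed = torch.rand(8, 1, 64, 64)                 # mock confused map (H-alpha + [O III])
halpha   = torch.rand(8, 1, 64, 64)                 # mock H-alpha-only target

fake = G(observed)
score = D(torch.cat([observed, fake], dim=1))
g_loss = bce(score, torch.ones_like(score)) + 100.0 * (fake - halpha).abs().mean()
g_opt.zero_grad()
g_loss.backward()
g_opt.step()                                        # discriminator update omitted for brevity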


2020
Vol ahead-of-print (ahead-of-print)
Author(s):
Hui Liu,
Tinglong Tang,
Jake Luo,
Meng Zhao,
Baole Zheng,
...

Purpose: This study aims to address the challenge of training a detection model for a robot to detect abnormal samples in an industrial environment, where abnormal patterns are very rare.
Design/methodology/approach: The authors propose a new model with double encoder–decoder (DED) generative adversarial networks to detect anomalies when the model is trained without any abnormal patterns. The DED approach is used to map high-dimensional input images to a low-dimensional space, through which the latent variables are obtained. Minimizing the change in the latent variables during the training process helps the model learn the data distribution. Anomaly detection is achieved by calculating the distance between the two low-dimensional vectors obtained from the two encoders.
Findings: The proposed method achieves better accuracy and F1 score than traditional anomaly detection models.
Originality/value: A new architecture with a DED pipeline is designed to capture the distribution of images during training so that anomalous samples are accurately identified. A new weight function is introduced to control the proportion of losses in the encoding-reconstruction and adversarial phases to achieve better results. The proposed anomaly detection model achieves superior performance against prior state-of-the-art approaches.
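The scoring rule described above can be sketched compactly: an input is encoded, decoded, and re-encoded, and the anomaly score is the distance between the two latent codes, which stays small for samples resembling the normal training data. The modules and the threshold below are illustrative assumptions, not the architecture reported in the article.

# Illustrative double-encoder scoring sketch (PyTorch): score = distance between the two latent codes.
import torch
import torch.nn as nn

latent = 32
enc1 = nn.Sequential(nn.Flatten(), nn.Linear(64 * 64, latent))   # image -> z1
dec  = nn.Sequential(nn.Linear(latent, 64 * 64), nn.Sigmoid())   # z1 -> reconstruction
enc2 = nn.Sequential(nn.Linear(64 * 64, latent))                 # reconstruction -> z2

def anomaly_score(x):                     # x: (batch, 1, 64, 64); networks trained on normal data only
    z1 = enc1(x)
    recon = dec(z1)
    z2 = enc2(recon)
    return (z1 - z2).pow(2).sum(dim=1)    # large latent distance => likely anomalous

x = torch.rand(4, 1, 64, 64)
flagged = anomaly_score(x) > 1.0          # threshold would be chosen on a validation set in practice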


2021
Author(s):
Muhammad Haris Naveed,
Umair Hashmi,
Nayab Tajved,
Neha Sultan,
Ali Imran

This paper explores whether Generative Adversarial Networks (GANs) can produce realistic network load data that can be used to train machine learning models in lieu of real data. In this regard, we evaluate the performance of three recent GAN architectures on the Telecom Italia data set across a set of qualitative and quantitative metrics. Our results show that GAN-generated synthetic data is indeed similar to real data, and forecasting models trained on this data achieve performance similar to those trained on real data.
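A common way to run such a comparison is the "train on synthetic, test on real" pattern: the same forecasting model is fit once on GAN-generated data and once on real data, and both are scored on a held-out real test set. The sketch below illustrates that pattern with random stand-in arrays and an off-the-shelf regressor; it is not the evaluation pipeline or data of the paper.

# Illustrative train-on-synthetic / train-on-real comparison with a held-out real test set.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)
X_real, y_real   = rng.random((1000, 24)), rng.random(1000)    # past 24 steps -> next-step load
X_synth, y_synth = rng.random((1000, 24)), rng.random(1000)    # GAN-generated counterpart (stand-in)
X_test, y_test   = rng.random((200, 24)), rng.random(200)      # held-out real data

for name, (X, y) in {"real": (X_real, y_real), "synthetic": (X_synth, y_synth)}.items():
    model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)
    print(name, mean_absolute_error(y_test, model.predict(X_test)))   # similar errors => useful synthetic data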


PLoS ONE
2021
Vol 16 (11)
pp. e0260308
Author(s):
Mauro Castelli,
Luca Manzoni,
Tatiane Espindola,
Aleš Popovič,
Andrea De Lorenzo

Wireless networks are among the fundamental technologies used to connect people. Considering the constant advancements in the field, telecommunication operators must guarantee a high-quality service to keep their customer portfolio. To ensure this high-quality service, it is common to establish partnerships with specialized technology companies that deliver software services to monitor the networks and identify faults and their solutions. A common barrier faced by these specialized companies is the lack of data to develop and test their products. This paper investigates the use of generative adversarial networks (GANs), which are state-of-the-art generative models, for generating synthetic telecommunication data related to Wi-Fi signal quality. We developed, trained, and compared two of the most used GAN architectures: the Vanilla GAN and the Wasserstein GAN (WGAN). Both models presented satisfactory results and were able to generate synthetic data similar to the real data. In particular, the distribution of the synthetic data overlaps with the distribution of the real data for all of the considered features. Moreover, both generative models can reproduce in the synthetic data the associations observed among the real features. We chose the WGAN as the final model, but both models are suitable for addressing the problem at hand.
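For reference, the sketch below shows the critic update that is the WGAN's main point of departure from the Vanilla GAN: instead of a cross-entropy discriminator loss, the critic maximizes the score gap between real and generated samples under weight clipping. The tiny network, feature count, and random tensors are placeholders, not the models or Wi-Fi data of the study.

# Illustrative WGAN critic step with weight clipping (PyTorch).
import torch
import torch.nn as nn

n_features = 8                                     # stand-in for the Wi-Fi signal-quality features
critic = nn.Sequential(nn.Linear(n_features, 64), nn.ReLU(), nn.Linear(64, 1))
c_opt = torch.optim.RMSprop(critic.parameters(), lr=5e-5)

real = torch.randn(128, n_features)                # stand-in for real telemetry samples
fake = torch.randn(128, n_features)                # stand-in for generator output (detached)

c_loss = critic(fake).mean() - critic(real).mean() # minimizing this maximizes E[critic(real)] - E[critic(fake)]
c_opt.zero_grad()
c_loss.backward()
c_opt.step()
for p in critic.parameters():                      # weight clipping enforces the Lipschitz constraint
    p.data.clamp_(-0.01, 0.01)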

