InvNet: Encoding Geometric and Statistical Invariances in Deep Generative Models

2020 ◽  
Vol 34 (04) ◽  
pp. 4377-4384
Author(s):  
Ameya Joshi ◽  
Minsu Cho ◽  
Viraj Shah ◽  
Balaji Pokuri ◽  
Soumik Sarkar ◽  
...  

Generative Adversarial Networks (GANs), while widely successful in modeling complex data distributions, have not yet been sufficiently leveraged in scientific computing and design. Reasons for this include the lack of flexibility of GANs to represent discrete-valued image data, as well as the lack of control over physical properties of generated samples. We propose a new conditional generative modeling approach (InvNet) that efficiently enables modeling of discrete-valued images, while allowing control over their parameterized geometric and statistical properties. We evaluate our approach on several synthetic and real-world problems: navigating manifolds of geometric shapes with desired sizes; generation of binary two-phase materials; and the (challenging) problem of generating multi-orientation polycrystalline microstructures.
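
As a rough illustration of how a statistical constraint can be attached to a conditional generator objective, the sketch below penalizes deviation from a requested volume fraction of a binary two-phase image. The network shapes, the penalty form, and its weight are assumptions made for illustration only; this is not the InvNet formulation from the paper.

```python
# Minimal sketch (PyTorch): a conditional generator penalized for deviating
# from a prescribed statistical property of its output. The volume-fraction
# penalty and the architecture are illustrative assumptions, not InvNet.
import torch
import torch.nn as nn

class CondGenerator(nn.Module):
    def __init__(self, z_dim=64, c_dim=1, img_size=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(z_dim + c_dim, 256), nn.ReLU(),
            nn.Linear(256, img_size * img_size), nn.Sigmoid(),  # pixel values in (0, 1)
        )
        self.img_size = img_size

    def forward(self, z, c):
        x = self.net(torch.cat([z, c], dim=1))
        return x.view(-1, 1, self.img_size, self.img_size)

def property_penalty(fake_imgs, target_fraction):
    # Penalize the gap between the generated "phase 1" volume fraction and the
    # requested one (a differentiable stand-in for a physical constraint).
    vol_frac = fake_imgs.mean(dim=(1, 2, 3))
    return ((vol_frac - target_fraction) ** 2).mean()

G = CondGenerator()
z = torch.randn(8, 64)
target = torch.full((8, 1), 0.3)            # desired volume fraction per sample
fake = G(z, target)
d_score = torch.zeros(8)                     # placeholder for a discriminator's output
g_loss = -d_score.mean() + 10.0 * property_penalty(fake, target.squeeze(1))
g_loss.backward()
```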

2019 ◽  
Author(s):  
Emanuel Silva ◽  
Johannes Lochter

Anomaly detection is a well-known problem studied across a variety of areas, including machine learning. The task is to identify data patterns that exhibit unexpected behaviour, whether malicious data sent by an attacker or valid but unexpected behaviour; in both cases the anomaly needs to be identified. Since deep learning techniques have shown that this class of algorithms can learn high-dimensional and complex data patterns, they have naturally become an option for the anomaly detection task. Recent work in the literature uses a sub-field of deep learning, Generative Adversarial Networks, to detect anomalous samples, since the original method learns the data distribution. These techniques adapt the GAN framework to the anomaly detection task, and this work provides a brief review of these methods together with a comparison against well-known baselines.
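
A common pattern in this line of work is AnoGAN-style scoring: invert a trained generator and score a sample by how poorly it can be reconstructed. The sketch below is a generic, hedged illustration of that idea; the networks, the assumed `D.features` interface, and all hyperparameters are placeholders rather than any specific method from the review.

```python
# Minimal sketch (PyTorch) of GAN-based anomaly scoring: search for a latent
# code that reconstructs the query, then combine reconstruction and
# discriminator-feature residuals into an anomaly score.
import torch

def anomaly_score(x, G, D, z_dim=100, steps=200, lr=1e-2, lam=0.1):
    # G: trained generator, D: trained discriminator; D.features is assumed
    # here to expose an intermediate feature map (an assumption of this sketch).
    z = torch.randn(x.size(0), z_dim, requires_grad=True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        x_hat = G(z)
        residual = (x - x_hat).abs().mean()       # pixel-space mismatch
        feat_x, feat_hat = D.features(x), D.features(x_hat)
        disc = (feat_x - feat_hat).abs().mean()   # discriminator-feature mismatch
        loss = (1 - lam) * residual + lam * disc
        loss.backward()
        opt.step()
    return loss.item()  # higher score -> more anomalous
```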


2019 ◽  
Vol 1 ◽  
pp. 1-8
Author(s):  
Amgad Agoub ◽  
Yevgeniya Filippovska ◽  
Valentina Schmidt ◽  
Martin Kada

The abundance of high-quality satellite images is salutary for many activities but also raises privacy and security concerns. Manually obfuscating areas subject to privacy issues by applying local pixelization techniques leads to undesirable discontinuities in the visual appearance of the depicted scenes. Alternatively, automatically generated photorealistic fillers can be used to obfuscate sensitive information while preserving the original visual aspect of high-resolution aerial images.

Recent advances in the field of Deep Learning (DL) make it possible to synthesize high-quality image data. In particular, generative models such as Generative Adversarial Networks (GANs) can produce images that are perceived as photorealistic even by human examiners. Additionally, Conditional Generative Adversarial Networks (cGANs) allow control over the image generation process and its results. These developments open up the opportunity to generate photorealistic fillers for privacy and security purposes in image data used within city models while preserving the quality of the original data. However, to our knowledge, little research has been done to explore this potential. To close this gap, we propose a novel framework designed for this end goal, which produces promising results.
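
One plausible way to realize such a filler generator is an image-to-image conditional setup in which the generator receives the image with the sensitive region masked out and synthesizes only the missing pixels. The sketch below follows that generic inpainting recipe; the architecture, mask handling, and compositing step are assumptions of this sketch, not the framework proposed in the paper.

```python
# Minimal sketch (PyTorch): conditional inpainting of a masked region.
import torch
import torch.nn as nn

class Filler(nn.Module):
    def __init__(self, channels=3):
        super().__init__()
        # Input: masked image concatenated with the binary mask.
        self.net = nn.Sequential(
            nn.Conv2d(channels + 1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, channels, 3, padding=1), nn.Tanh(),
        )

    def forward(self, masked_img, mask):
        fill = self.net(torch.cat([masked_img, mask], dim=1))
        # Keep original pixels outside the mask, synthesized pixels inside it.
        return masked_img * (1 - mask) + fill * mask

G = Filler()
img = torch.rand(2, 3, 64, 64) * 2 - 1        # images scaled to [-1, 1]
mask = torch.zeros(2, 1, 64, 64)
mask[:, :, 16:48, 16:48] = 1.0                # region to obfuscate
composited = G(img * (1 - mask), mask)        # an adversarial loss on `composited` would follow
```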


Author(s):  
Naoya Takeishi ◽  
Yoshinobu Kawahara

Prior domain knowledge can greatly help in learning generative models. However, it is often too costly to hard-code prior knowledge as a specific model architecture, so we often have to use general-purpose models. In this paper, we propose a method to incorporate prior knowledge of feature relations into the learning of general-purpose generative models. To this end, we formulate a regularizer that makes the marginals of a generative model follow a prescribed relative dependence of features. It can be incorporated into off-the-shelf learning methods of many generative models, including variational autoencoders and generative adversarial networks, as its gradients can be computed using standard backpropagation techniques. We show the effectiveness of the proposed method with experiments on multiple types of datasets and generative models.
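
To make the idea of a relative-dependence regularizer concrete, the sketch below penalizes generated samples whenever a pair of features prescribed to be "more dependent" is not more correlated than a pair prescribed to be "less dependent". The correlation-based form and the hinge margin are assumptions chosen for illustration; the paper's regularizer may be defined differently.

```python
# Minimal sketch (PyTorch): a differentiable penalty encouraging generated
# samples to respect a prescribed *relative* feature dependence, e.g.
# "features (0, 1) should be more correlated than features (2, 3)".
import torch

def relative_dependence_penalty(x, stronger=(0, 1), weaker=(2, 3), margin=0.1):
    # x: (batch, num_features) generated samples
    xc = x - x.mean(dim=0, keepdim=True)
    cov = xc.t() @ xc / (x.size(0) - 1)
    std = cov.diag().clamp_min(1e-8).sqrt()
    corr = cov / (std.unsqueeze(0) * std.unsqueeze(1))
    strong = corr[stronger[0], stronger[1]].abs()
    weak = corr[weaker[0], weaker[1]].abs()
    # Hinge: penalize whenever the "weaker" pair is not at least `margin`
    # less dependent than the "stronger" pair.
    return torch.relu(weak - strong + margin)

# Added to any generator/decoder loss; gradients flow by backpropagation.
fake = torch.randn(128, 4, requires_grad=True)
loss = relative_dependence_penalty(fake)
loss.backward()
```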


2021 ◽  
Vol 54 (3) ◽  
pp. 1-42
Author(s):  
Divya Saxena ◽  
Jiannong Cao

Generative Adversarial Networks (GANs) are a class of deep generative models that has recently gained significant attention. GANs learn complex, high-dimensional distributions implicitly over data such as images and audio. However, there are major challenges in training GANs, namely mode collapse, non-convergence, and instability, caused by inappropriate design of the network architecture, choice of objective function, and selection of optimization algorithm. Recently, to address these challenges, several solutions for better design and optimization of GANs have been investigated, based on re-engineered network architectures, new objective functions, and alternative optimization algorithms. To the best of our knowledge, no existing survey has focused specifically on the broad and systematic development of these solutions. In this study, we perform a comprehensive survey of the advancements in GAN design and optimization solutions proposed to handle these challenges. We first identify key research issues within each design and optimization technique and then propose a new taxonomy that structures solutions by key research issue. In accordance with the taxonomy, we provide a detailed discussion of the different GAN variants proposed within each solution and their relationships. Finally, based on the insights gained, we present promising research directions in this rapidly growing field.


2021 ◽  
Vol 251 ◽  
pp. 03055
Author(s):  
John Blue ◽  
Braden Kronheim ◽  
Michelle Kuchera ◽  
Raghuram Ramanujan

Detector simulation in high energy physics experiments is a key yet computationally expensive step in the event simulation process. There has been much recent interest in using deep generative models as a faster alternative to the full Monte Carlo simulation process in situations in which the utmost accuracy is not necessary. In this work, we investigate the use of conditional Wasserstein Generative Adversarial Networks to simulate both hadronization and the detector response to jets. Our model takes the 4-momenta of jets formed from partons post-showering and pre-hadronization as inputs and predicts the 4-momenta of the corresponding reconstructed jets. It is trained on fully simulated tt̄ events using the publicly available GEANT-based simulation of the CMS Collaboration. We demonstrate that the model produces accurate conditional reconstructed jet transverse momentum (pT) distributions over a wide range of pT for the input parton jet. Our model takes only a fraction of the time required by conventional detector simulation methods, running on a CPU in less than a millisecond per event.
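
For readers unfamiliar with the conditional Wasserstein setup, the sketch below shows a critic with gradient penalty that conditions on the parton-level 4-momentum and scores the reconstructed-jet 4-momentum. Network sizes, the penalty weight, and the flat 4-vector encoding are illustrative assumptions, not the configuration used in the paper.

```python
# Minimal sketch (PyTorch): conditional WGAN-GP critic for (parton -> reco jet).
import torch
import torch.nn as nn

critic = nn.Sequential(nn.Linear(8, 128), nn.ReLU(),
                       nn.Linear(128, 128), nn.ReLU(),
                       nn.Linear(128, 1))   # input: parton 4-momentum + reco 4-momentum

def critic_loss(parton, reco_real, reco_fake, lam=10.0):
    real_in = torch.cat([parton, reco_real], dim=1)
    fake_in = torch.cat([parton, reco_fake], dim=1)
    wasserstein = critic(fake_in).mean() - critic(real_in).mean()

    # Gradient penalty on interpolated reconstructed jets (condition held fixed).
    eps = torch.rand(reco_real.size(0), 1)
    interp = (eps * reco_real + (1 - eps) * reco_fake.detach()).requires_grad_(True)
    score = critic(torch.cat([parton, interp], dim=1))
    grad = torch.autograd.grad(score.sum(), interp, create_graph=True)[0]
    gp = ((grad.norm(2, dim=1) - 1) ** 2).mean()
    return wasserstein + lam * gp

parton = torch.randn(16, 4)       # (E, px, py, pz) of the parton-level jet
reco_real = torch.randn(16, 4)    # simulated reconstructed jet
reco_fake = torch.randn(16, 4)    # generator output (placeholder here)
loss = critic_loss(parton, reco_real, reco_fake)
```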


Author(s):  
Trung Le ◽  
Quan Hoang ◽  
Hung Vu ◽  
Tu Dinh Nguyen ◽  
Hung Bui ◽  
...  

Generative Adversarial Networks (GANs) are a powerful class of deep generative models. In this paper, we extend GANs to the problem of generating data that are not only close to a primary data source but are also required to be different from auxiliary data sources. For this problem, we enrich both the formulation and the applications of GANs by introducing pushing forces that thrust generated samples away from given auxiliary data sources. We term our method Push-and-Pull GAN (P2GAN). We conduct extensive experiments to demonstrate the merit of P2GAN in two applications: generating data with constraints and addressing the mode collapse problem. We use the CIFAR-10, STL-10, and ImageNet datasets and compute the Fréchet Inception Distance to evaluate P2GAN's effectiveness in addressing mode collapse. The results show that P2GAN outperforms the state-of-the-art baselines. For the problem of generating data with constraints, we show that P2GAN can successfully avoid generating specific features such as black hair.
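
The sketch below illustrates the general push-and-pull idea with a standard "pull" term toward the primary data and an extra "push" term driven by an auxiliary discriminator that flags samples resembling the auxiliary source. This is one plausible reading of the idea under our own assumptions; it is not the P2GAN objective as defined in the paper.

```python
# Minimal sketch (PyTorch) of a push-and-pull style generator objective.
import torch

def generator_loss(d_primary_scores, d_aux_probs, push_weight=1.0):
    # d_primary_scores: primary-discriminator logits on generated samples
    # d_aux_probs: probability (0..1) that a generated sample resembles auxiliary data
    pull = torch.nn.functional.softplus(-d_primary_scores).mean()  # non-saturating GAN loss
    push = -torch.log(1.0 - d_aux_probs + 1e-8).mean()             # discourage auxiliary-like samples
    return pull + push_weight * push

scores = torch.randn(32)       # placeholder primary-discriminator logits
aux_probs = torch.rand(32)     # placeholder auxiliary-discriminator outputs
loss = generator_loss(scores, aux_probs)
```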


2021 ◽  
Vol 4 ◽  
Author(s):  
Nathanaël Perraudin ◽  
Sandro Marcon ◽  
Aurelien Lucchi ◽  
Tomasz Kacprzak

Weak gravitational lensing mass maps play a crucial role in understanding the evolution of structures in the Universe and in our ability to constrain cosmological models. The prediction of these mass maps is based on expensive N-body simulations, which can create a computational bottleneck for cosmological analyses. Simulation-based emulators of map summary statistics, such as the matter power spectrum and its covariance, are starting to play an increasingly important role, as the analytical predictions are expected to reach their precision limits for upcoming experiments. Creating an emulator of the cosmological mass maps themselves, rather than of their summary statistics, is a more challenging task. Modern deep generative models, such as Generative Adversarial Networks (GANs), have demonstrated their potential to achieve this goal. Most existing GAN approaches produce simulations for a fixed value of the cosmological parameters, which limits their practical applicability. We propose a novel conditional GAN model that is able to generate mass maps for any pair of matter density Ωm and matter clustering strength σ8, the parameters with the largest impact on the evolution of structures in the Universe, for a given source galaxy redshift distribution n(z). Our results show that our conditional GAN can interpolate efficiently within the space of simulated cosmologies and generate maps anywhere inside this space with good visual quality and high statistical accuracy. We perform an extensive quantitative comparison of the N-body and GAN-generated maps using a range of metrics: pixel histograms, peak counts, power spectra, bispectra, Minkowski functionals, correlation matrices of the power spectra, the Multi-Scale Structural Similarity Index (MS-SSIM), and our equivalent of the Fréchet Inception Distance. We find very good agreement on these metrics, with typical differences of <5% at the center of the simulation grid and slightly worse agreement for cosmologies at the grid edges. The agreement for the bispectrum is slightly worse, at the <20% level. This contribution is a step toward building emulators of mass maps directly, capturing both the cosmological signal and its variability. We make the code and the data publicly available.
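
As a minimal sketch of how a map generator can be conditioned on continuous cosmological parameters, the example below simply concatenates (Ωm, σ8) with the latent vector before decoding. The architecture and conditioning scheme are assumptions of this sketch, not those used in the paper.

```python
# Minimal sketch (PyTorch): a generator conditioned on two continuous parameters.
import torch
import torch.nn as nn

class MassMapGenerator(nn.Module):
    def __init__(self, z_dim=64, map_size=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(z_dim + 2, 512), nn.ReLU(),
            nn.Linear(512, map_size * map_size),
        )
        self.map_size = map_size

    def forward(self, z, omega_m, sigma_8):
        params = torch.stack([omega_m, sigma_8], dim=1)   # condition vector (Omega_m, sigma_8)
        out = self.net(torch.cat([z, params], dim=1))
        return out.view(-1, 1, self.map_size, self.map_size)

G = MassMapGenerator()
z = torch.randn(4, 64)
maps = G(z, omega_m=torch.full((4,), 0.30), sigma_8=torch.full((4,), 0.80))
```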


2021 ◽  
Vol 8 (1) ◽  
pp. 3-31
Author(s):  
Yuan Xue ◽  
Yuan-Chen Guo ◽  
Han Zhang ◽  
Tao Xu ◽  
Song-Hai Zhang ◽  
...  

In many applications of computer graphics, art, and design, it is desirable for a user to provide intuitive non-image input, such as text, sketch, stroke, graph, or layout, and have a computer system automatically generate photo-realistic images according to that input. While classical works enabling such automatic image content generation followed a framework of image retrieval and composition, recent advances in deep generative models such as generative adversarial networks (GANs), variational autoencoders (VAEs), and flow-based methods have enabled more powerful and versatile image generation approaches. This paper reviews recent work on image synthesis from intuitive user input, covering advances in input versatility, image generation methodology, benchmark datasets, and evaluation metrics. This motivates new perspectives on input representation and interactivity, cross-fertilization between major image generation paradigms, and the evaluation and comparison of generation methods.

