Fast Flow Reconstruction via Robust Invertible n×n Convolution

2021 ◽  
Vol 13 (7) ◽  
pp. 179
Author(s):  
Thanh-Dat Truong ◽  
Chi Nhan Duong ◽  
Minh-Triet Tran ◽  
Ngan Le ◽  
Khoa Luu

Flow-based generative models have recently become one of the most efficient approaches to modelling data generation. They are constructed from a sequence of invertible and tractable transformations. Glow first introduced a simple type of generative flow using an invertible 1×1 convolution. However, the 1×1 convolution has limited flexibility compared to standard convolutions. In this paper, we propose a novel invertible n×n convolution approach that overcomes the limitations of the invertible 1×1 convolution. In addition, our proposed network is not only tractable and invertible but also uses fewer parameters than standard convolutions. Experiments on the CIFAR-10, ImageNet, and Celeb-HQ datasets have shown that our invertible n×n convolution significantly improves the performance of generative models.
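The invertible 1×1 convolution that this paper generalizes can be sketched in a few lines. A 1×1 convolution over C channels is a per-pixel linear map x → Wx, so its inverse applies W⁻¹ and its contribution to the flow's log-determinant is H·W·log|det W|. This is a minimal illustration of that principle, not the paper's n×n construction; variable names and shapes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
C, H, Wsp = 3, 4, 4
W = rng.normal(size=(C, C))              # learned channel-mixing matrix

def conv1x1(x, W):
    # x: (C, H, W) feature map; mix channels identically at every pixel
    return np.einsum("ij,jhw->ihw", W, x)

x = rng.normal(size=(C, H, Wsp))
y = conv1x1(x, W)                        # forward pass
x_rec = conv1x1(y, np.linalg.inv(W))     # exact inverse pass

# tractable Jacobian term: every pixel contributes log|det W|
log_det = H * Wsp * np.log(abs(np.linalg.det(W)))
print(np.allclose(x, x_rec))
```

Tractability of both the inverse and the log-determinant is what makes this layer usable in a normalizing flow; the paper's contribution is extending this property to larger kernels.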

2021 ◽  
Vol 22 (1) ◽  
Author(s):  
Hengshi Yu ◽  
Joshua D. Welch

Deep generative models such as variational autoencoders (VAEs) and generative adversarial networks (GANs) generate and manipulate high-dimensional images. We systematically assess the complementary strengths and weaknesses of these models on single-cell gene expression data. We also develop MichiGAN, a novel neural network that combines the strengths of VAEs and GANs to sample from disentangled representations without sacrificing data generation quality. We learn disentangled representations of three large single-cell RNA-seq datasets and use MichiGAN to sample from these representations. MichiGAN allows us to manipulate semantically distinct aspects of cellular identity and predict single-cell gene expression response to drug treatment.
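The idea of sampling from a disentangled representation can be sketched very simply: hold all latent dimensions fixed and sweep one to manipulate a single semantic factor. The decoder below is a random linear placeholder standing in for a trained generator, not MichiGAN's architecture.

```python
import numpy as np

rng = np.random.default_rng(4)
G = rng.normal(size=(50, 10))          # toy decoder: 10-D code -> 50 genes

z = rng.normal(size=10)                # a disentangled latent code
codes = []
for v in (-2.0, 0.0, 2.0):             # sweep one factor, fix the rest
    z_mod = z.copy()
    z_mod[0] = v
    codes.append(G @ z_mod)

# with a linear decoder, only the direction G[:, 0] changes across the sweep
delta = codes[2] - codes[0]
print(np.allclose(delta, 4.0 * G[:, 0]))
```

In a well-disentangled model, such a sweep changes one semantic attribute (e.g., response to a drug) while leaving the rest of the expression profile intact.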


Entropy ◽  
2021 ◽  
Vol 23 (6) ◽  
pp. 747
Author(s):  
Ioannis Gatopoulos ◽  
Jakub M. Tomczak

Density estimation, compression, and data generation are crucial tasks in artificial intelligence. Variational Auto-Encoders (VAEs) constitute a single framework to achieve these goals. Here, we present a novel class of generative models, called self-supervised Variational Auto-Encoders (selfVAE), which utilize deterministic and discrete transformations of data. This class of models allows both conditional and unconditional sampling while simplifying the objective function. First, we use a single self-supervised transformation as a latent variable, where the transformation is either downscaling or edge detection. Next, we consider a hierarchical architecture, i.e., multiple transformations, and we show its benefits compared to the VAE. The flexibility of selfVAE in data reconstruction finds a particularly interesting use case in data compression, where we can trade off memory for better data quality and vice versa. We present the performance of our approach on three benchmark image datasets (Cifar10, Imagenette64, and CelebA).
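The core idea of using a deterministic transformation as an auxiliary latent variable can be sketched with the downscaling case: compute y = d(x) by average pooling, then model p(y) and p(x | y). This is a minimal illustration of the transformation itself, not the selfVAE objective.

```python
import numpy as np

def downscale(x):
    # 2x2 average pooling on an (H, W) image with even H and W:
    # the deterministic transformation d(x) used as a "latent" y
    h, w = x.shape
    return x.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

x = np.arange(16, dtype=float).reshape(4, 4)   # toy 4x4 image
y = downscale(x)                               # (2, 2) conditioning variable
print(y.shape)
```

Because d is deterministic, y carries coarse, self-supervised structure for free; the model only has to learn the distribution of y and the residual detail of x given y.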


Author(s):  
Chaudhary Sarimurrab, Ankita Kesari Naman and Sudha Narang

Generative models have gained considerable attention in unsupervised learning via a new and practical framework called Generative Adversarial Networks (GANs), owing to their outstanding data generation capability. Many GAN models have been proposed, and several practical applications have emerged in various domains of computer vision and machine learning. Despite GANs' excellent success, there are still obstacles to stable training. In this work, we aim to generate human faces from unlabelled data with the help of Deep Convolutional Generative Adversarial Networks (DCGANs). Applications of face generation are vast in image processing, entertainment, and other such industries. Our resulting model successfully generates human faces from the given unlabelled data and random noise.
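The shape pipeline of a DCGAN-style generator can be sketched as follows: a noise vector z is projected to a small feature map and repeatedly upsampled until it reaches image resolution. Real DCGANs use learned stride-2 transposed convolutions with batch normalization; the layers below are crude random-weight stand-ins chosen only to show the tensor shapes.

```python
import numpy as np

rng = np.random.default_rng(1)

def upsample_block(x, out_ch):
    # stand-in for a stride-2 transposed conv: 2x nearest-neighbour
    # upsampling followed by a random 1x1 channel projection and ReLU
    c, h, w = x.shape
    up = x.repeat(2, axis=1).repeat(2, axis=2)            # (c, 2h, 2w)
    proj = rng.normal(size=(out_ch, c)) / np.sqrt(c)
    return np.maximum(np.einsum("oc,chw->ohw", proj, up), 0.0)

z = rng.normal(size=100)                        # latent noise vector
proj0 = rng.normal(size=(256 * 4 * 4, 100)) / 10.0
x = (proj0 @ z).reshape(256, 4, 4)              # "project and reshape"
for ch in (128, 64, 3):                         # 4x4 -> 8x8 -> 16x16 -> 32x32
    x = upsample_block(x, ch)

print(x.shape)   # a (3, 32, 32) RGB image tensor
```

Training adversarially against a mirror-image convolutional discriminator is what turns this random mapping into a face generator.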


2021 ◽  
Vol 118 (15) ◽  
pp. e2101344118
Author(s):  
Qiao Liu ◽  
Jiaze Xu ◽  
Rui Jiang ◽  
Wing Hung Wong

Density estimation is one of the fundamental problems in both statistics and machine learning. In this study, we propose Roundtrip, a computational framework for general-purpose density estimation based on deep generative neural networks. Roundtrip retains the generative power of deep generative models, such as generative adversarial networks (GANs), while also providing estimates of density values, thus supporting both data generation and density estimation. Unlike previous neural density estimators that place stringent conditions on the transformation from the latent space to the data space, Roundtrip enables the use of much more general mappings, where the target density is modeled by learning a manifold induced from a base density (e.g., a Gaussian distribution). Roundtrip provides a statistical framework for GAN models in which explicit evaluation of density values is feasible. In numerical experiments, Roundtrip exceeds state-of-the-art performance in a diverse range of density estimation tasks.
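The change-of-variables principle underlying such neural density estimators can be shown with a toy one-dimensional map: if x = g(z) with g invertible and z drawn from a base density p_z, then p_x(x) = p_z(g⁻¹(x)) · |dg⁻¹/dx|. Roundtrip generalizes this to learned, non-bijective GAN mappings; here g(z) = exp(z) with a standard Gaussian base, so x is log-normal.

```python
import numpy as np

def p_z(z):
    # standard Gaussian base density
    return np.exp(-0.5 * z**2) / np.sqrt(2.0 * np.pi)

def p_x(x):
    # density implied by x = exp(z): change of variables with g(z) = exp(z)
    z = np.log(x)        # g^{-1}(x)
    jac = 1.0 / x        # |d g^{-1} / dx|
    return p_z(z) * jac

# sanity check: the implied density integrates to ~1 over (0, inf)
xs = np.linspace(1e-4, 60.0, 200000)
dx = xs[1] - xs[0]
total = np.sum(p_x(xs)) * dx
print(total)   # close to 1.0
```

A deep estimator replaces the hand-written g and Jacobian with learned networks; the bookkeeping (invert, weight by the Jacobian) stays the same.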


2020 ◽  
Vol 2020 (10) ◽  
pp. 312-1-312-7
Author(s):  
Habib Ullah ◽  
Sultan Daud Khan ◽  
Mohib Ullah ◽  
Maqsood Mahmud ◽  
Faouzi Alaya Cheikh

Generative adversarial networks (GANs) have been investigated extensively in the past few years due to their outstanding data generation capacity. GAN techniques are used widely in computer vision, for example, for plausible image generation, image-to-image translation, facial attribute manipulation, improving image resolution, and image-to-text translation. In spite of the significant success achieved in these domains, applying GANs to various other problems still presents important challenges. Several reviews and surveys of GANs are available in the literature; however, none of them offers a short but focused review of the most significant aspects of GANs. In this paper, we address these aspects. We analyze the basic theory of GANs and the differences among various generative models. Then, we discuss the recent spectrum of applications covered by GANs. We also provide an insight into the challenges and future directions.
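The basic GAN theory that such surveys analyze centres on the minimax value function min_G max_D E_x[log D(x)] + E_z[log(1 − D(G(z)))]. As a hedged numerical illustration, the snippet below evaluates this value function for a fixed toy sigmoid discriminator and two separated sample populations; no training is performed.

```python
import numpy as np

rng = np.random.default_rng(2)

def D(x):
    # toy sigmoid discriminator scoring scalar samples
    return 1.0 / (1.0 + np.exp(-x))

real = rng.normal(loc=2.0, size=10000)     # "real" data samples
fake = rng.normal(loc=-2.0, size=10000)    # generator samples G(z)

# GAN value function: E[log D(real)] + E[log(1 - D(fake))]
value = np.mean(np.log(D(real))) + np.mean(np.log(1.0 - D(fake)))
print(value)
```

Since log D ≤ 0 and log(1 − D) ≤ 0, the value is always negative; a discriminator that separates the two populations well pushes it toward 0, while the generator is trained to pull it back down.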


Crystals ◽  
2021 ◽  
Vol 11 (3) ◽  
pp. 258
Author(s):  
Patrick Trampert ◽  
Dmitri Rubinstein ◽  
Faysal Boughorbel ◽  
Christian Schlinkmann ◽  
Maria Luschkova ◽  
...  

The analysis of microscopy images has always been an important yet time-consuming process in materials science. Convolutional Neural Networks (CNNs) have been used very successfully for a number of tasks, such as image segmentation. However, training a CNN requires a large amount of hand-annotated data, which can be a problem for materials science data. We present a procedure to generate synthetic data based on ad hoc parametric data modelling for enhancing the generalization of trained neural network models. Especially in situations where it is not possible to gather a lot of data, such an approach is beneficial and may make it feasible to train a neural network reasonably well. Furthermore, we show that targeted data generation, by adaptively sampling the parameter space of the generative models, gives superior results compared to generating random data points.


2021 ◽  
Author(s):  
Dan A. Rosa de Jesus ◽  
Paras Mandal ◽  
Tomonobu Senjyu ◽  
Sukumar Kamalasadan

2022 ◽  
Vol 54 (8) ◽  
pp. 1-49
Author(s):  
Abdul Jabbar ◽  
Xi Li ◽  
Bourahla Omar

Generative models have gained considerable attention in unsupervised learning via a new and practical framework called Generative Adversarial Networks (GANs), owing to their outstanding data generation capability. Many GAN models have been proposed, and several practical applications have emerged in various domains of computer vision and machine learning. Despite GANs' excellent success, there are still obstacles to stable training: Nash equilibrium, internal covariate shift, mode collapse, vanishing gradients, and the lack of proper evaluation metrics. Stable training is therefore a crucial issue for the success of GANs in different applications. Herein, we survey several training solutions proposed by different researchers to stabilize GAN training. We discuss (I) the original GAN model and its modified versions, (II) a detailed analysis of various GAN applications in different domains, and (III) a detailed study of the various GAN training obstacles as well as their solutions. Finally, we highlight several open issues and outline research directions on the topic.
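One widely used stabilization trick that such surveys cover is one-sided label smoothing: real labels for the discriminator are softened from 1.0 to about 0.9 so it is penalized for becoming overconfident. A minimal numerical sketch, with hand-picked discriminator outputs rather than a trained model:

```python
import numpy as np

def bce(p, y):
    # binary cross-entropy between predictions p and targets y
    return -np.mean(y * np.log(p) + (1.0 - y) * np.log(1.0 - p))

p_real = np.full(8, 0.95)                 # confident outputs on real samples
hard = bce(p_real, np.ones(8))            # hard targets y = 1.0
smooth = bce(p_real, np.full(8, 0.9))     # smoothed targets y = 0.9

print(hard, smooth)   # smoothing assigns higher loss to overconfident D
```

Under the smoothed targets, pushing predictions all the way to 1.0 no longer minimizes the loss, which keeps the discriminator's gradients informative for the generator.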


2021 ◽  
Vol 70 (14) ◽  
pp. 1-10
Author(s):  
Sun Tai-Ping ◽  
Wu Yu-Chun ◽  
Guo Guo-Ping ◽  
...  
