Semantic Predictive Coding with Arbitrated Generative Adversarial Networks

2020 · Vol 2 (3) · pp. 307-326
Author(s): Radamanthys Stivaktakis, Grigorios Tsagkatakis, Panagiotis Tsakalides

In spatio-temporal predictive coding problems, such as next-frame prediction in video, determining the content of plausible future frames is primarily based on the image dynamics of previous frames. We establish an alternative approach based on the underlying semantic information when considering data that do not necessarily incorporate a temporal aspect but instead comply with some form of associative ordering. In this work, we introduce the notion of semantic predictive coding by proposing a novel generative adversarial modeling framework that incorporates an arbiter classifier as a new component. While the generator is primarily tasked with anticipating possible next frames, the arbiter’s principal role is to assess their credibility. Since the denotative meaning of each forthcoming element can be encapsulated in a generic label descriptive of its content, a classification loss is introduced alongside the adversarial loss. As supported by our experimental findings in a next-digit and a next-letter scenario, the arbiter not only enhances GAN performance but also broadens the network’s creative capabilities in terms of the diversity of the generated symbols.
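
As a concrete illustration of how a classification loss can sit alongside the adversarial loss, the following is a minimal PyTorch-style sketch; the module interfaces (generator, discriminator, arbiter) and the weighting factor lambda_cls are assumptions made for illustration, not details taken from the paper.

```python
# A minimal sketch of the arbitrated objective, assuming PyTorch modules
# `generator`, `discriminator`, and `arbiter` are provided; `lambda_cls`
# (the weight of the classification term) is a hypothetical hyperparameter.
import torch
import torch.nn as nn

adv_loss = nn.BCEWithLogitsLoss()   # standard adversarial loss
cls_loss = nn.CrossEntropyLoss()    # classification loss supplied via the arbiter

def generator_step(generator, discriminator, arbiter, prev_frames, next_labels,
                   lambda_cls=1.0):
    """One generator update: anticipate the next frame, let the discriminator
    judge realism, and let the arbiter assess semantic credibility."""
    fake_next = generator(prev_frames)                   # anticipated next frame
    d_logits = discriminator(prev_frames, fake_next)     # realism score
    a_logits = arbiter(fake_next)                        # predicted class label
    real_targets = torch.ones_like(d_logits)
    return adv_loss(d_logits, real_targets) + lambda_cls * cls_loss(a_logits, next_labels)
```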

2021 · Vol 12 (6) · pp. 1-20
Author(s): Fayaz Ali Dharejo, Farah Deeba, Yuanchun Zhou, Bhagwan Das, Munsif Ali Jatoi, ...

Single-image super-resolution (SISR) produces a high-resolution image with fine spatial detail from a remotely sensed image of low spatial resolution. Recently, deep learning and generative adversarial networks (GANs) have made breakthroughs on this challenging task. However, the generated images still suffer from undesirable artifacts such as missing texture detail and high-frequency information. We propose a frequency-domain, spatio-temporal remote sensing SISR technique (TWIST-GAN) that reconstructs the HR image with generative adversarial networks operating on various frequency bands. The method incorporates Wavelet Transform (WT) characteristics and a transferred generative adversarial network. The LR image is split into frequency bands using the WT, and the transferred GAN predicts the high-frequency components via the proposed architecture. Finally, the inverse wavelet transform produces the reconstructed super-resolution image. The model is first trained on the external DIV2K dataset and validated on the UC Merced Landsat remote sensing dataset and Set14, with each image of size 256 × 256. Following that, transferred GANs are used to process spatio-temporal remote sensing images in order to minimize computation cost differences and improve texture information. The findings are compared qualitatively and quantitatively with current state-of-the-art approaches. In addition, eliminating batch-normalization layers saves about 43% of GPU memory during training and accelerates execution of the simplified version.
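
The band-split / predict / invert pipeline described above can be sketched with PyWavelets; the single-level Haar decomposition and the generator interface below are illustrative assumptions, not the paper's implementation.

```python
# Illustrative sketch of the wavelet-plus-GAN reconstruction, assuming a
# pre-trained `generator` that maps the LR sub-bands to HR high-frequency
# components; the Haar wavelet and single decomposition level are assumptions.
import numpy as np
import pywt

def super_resolve(lr_image: np.ndarray, generator) -> np.ndarray:
    cA, (cH, cV, cD) = pywt.dwt2(lr_image, 'haar')          # split into frequency bands
    cH_sr, cV_sr, cD_sr = generator(cA, cH, cV, cD)          # predict HF components
    return pywt.idwt2((cA, (cH_sr, cV_sr, cD_sr)), 'haar')   # inverse wavelet transform
```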


2020 · Vol 7 (4) · pp. 191569
Author(s): Edoardo Lisi, Mohammad Malekzadeh, Hamed Haddadi, F. Din-Houn Lau, Seth Flaxman

Conditional generative adversarial networks (CGANs) are a recent and popular method for generating samples from a probability distribution conditioned on latent information. The latent information often comes in the form of a discrete label from a small set. We propose a novel method for training CGANs which allows us to condition on a sequence of continuous latent distributions f^(1), …, f^(K). This training allows CGANs to generate samples from a sequence of distributions. We apply our method to paintings from a sequence of artistic movements, where each movement is considered to be its own distribution. Exploiting the temporal aspect of the data, a vector autoregressive (VAR) model is fitted to the means of the latent distributions that we learn and used for one-step-ahead forecasting, to predict the latent distribution of a future art movement f^(K+1). Realizations from this distribution can be used by the CGAN to generate ‘future’ paintings. In experiments, this novel methodology generates accurate predictions of the evolution of art. The training set consists of a large dataset of past paintings. While there is no agreement on exactly what current art period we find ourselves in, we test on plausible candidate sets of present art and show that the mean distance to our predictions is small.
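
The one-step-ahead forecasting step can be sketched with a standard VAR fit on the learned latent means; the statsmodels-based fit and the CGAN sampling interface below are assumptions for illustration only.

```python
# Hedged sketch: fit a VAR model to the K latent-distribution means and
# forecast the mean of f^(K+1); `cgan.generate` and `sigma` are hypothetical.
import numpy as np
from statsmodels.tsa.api import VAR

def forecast_next_latent_mean(latent_means: np.ndarray, lags: int = 1) -> np.ndarray:
    """latent_means has shape (K, d): one mean vector per artistic movement."""
    results = VAR(latent_means).fit(lags)
    return results.forecast(latent_means[-lags:], steps=1)[0]   # mean of f^(K+1)

# mu_next = forecast_next_latent_mean(means)
# z = np.random.normal(loc=mu_next, scale=sigma)   # realization from f^(K+1)
# future_painting = cgan.generate(z)               # 'future' painting
```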


Author(s): Soomin Park, Deok-Kyeong Jang, Sung-Hee Lee

This paper presents a novel deep learning-based framework for translating a motion into various styles within multiple domains. Our framework is a single set of generative adversarial networks that learns stylistic features from a collection of unpaired motion clips with style labels, supporting mapping between multiple style domains. We construct a spatio-temporal graph to model a motion sequence and employ spatio-temporal graph convolution networks (ST-GCN) to extract stylistic properties along the spatial and temporal dimensions. Through this spatio-temporal modeling, our framework shows improved style translation results between significantly different actions and on long motion sequences containing multiple actions. In addition, we develop, for the first time, a mapping network for motion stylization that maps random noise to a style code, allowing diverse stylization results to be generated without reference motions. Through various experiments, we demonstrate the ability of our method to generate improved results in terms of visual quality, stylistic diversity, and content preservation.
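
The noise-to-style mapping network can be sketched as a small multi-head MLP; the layer sizes, number of domains, and generator call below are illustrative assumptions rather than the paper's architecture.

```python
# Minimal sketch of a mapping network that turns random noise into a style
# code for a chosen style domain; all sizes are hypothetical.
import torch
import torch.nn as nn

class MappingNetwork(nn.Module):
    def __init__(self, noise_dim=16, style_dim=64, num_domains=4):
        super().__init__()
        self.shared = nn.Sequential(nn.Linear(noise_dim, 256), nn.ReLU())
        # one output head per style domain
        self.heads = nn.ModuleList(nn.Linear(256, style_dim) for _ in range(num_domains))

    def forward(self, z, domain_idx):
        return self.heads[domain_idx](self.shared(z))

# style = MappingNetwork()(torch.randn(1, 16), domain_idx=2)
# stylized = generator(content_motion, style)   # ST-GCN-based generator assumed
```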


2021 · Vol 2021 · pp. 1-8
Author(s): Richa Sharma, Manoj Sharma, Ankit Shukla, Santanu Chaudhury

Generation of synthetic data is a challenging task. There are only a few significant works on RGB video generation and no pertinent works on RGB-D data generation. In the present work, we focus on synthesizing RGB-D data, which can further be used as a dataset for various applications such as object tracking, gesture recognition, and action recognition. This paper proposes a novel architecture that uses conditional deep 3D-convolutional generative adversarial networks to synthesize RGB-D data by exploiting a 3D spatio-temporal convolutional framework. The proposed architecture can be used to generate virtually unlimited data. We present the architecture for generating RGB-D data conditioned on class labels. The architecture uses two parallel paths, one to generate the RGB data and the other to synthesize the depth map; the outputs of the two paths are combined to produce the RGB-D data. The proposed model generates video at 30 fps (frames per second), where each frame is an RGB-D frame with a spatial resolution of 512 × 512.
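
The two-path design can be sketched as a conditional 3D-convolutional generator whose RGB and depth branches are concatenated into an RGB-D output; the layer counts and channel sizes below are assumptions, not the paper's exact architecture.

```python
# Rough sketch of a two-path conditional 3D-convolutional generator; the
# noise/label dimensions and layer stack are illustrative only.
import torch
import torch.nn as nn

class RGBDGenerator(nn.Module):
    def __init__(self, latent_dim=100, num_classes=10):
        super().__init__()
        in_ch = latent_dim + num_classes                # class label conditions the noise
        self.rgb_path = nn.Sequential(
            nn.ConvTranspose3d(in_ch, 64, 4), nn.ReLU(),
            nn.ConvTranspose3d(64, 3, 4), nn.Tanh())    # RGB frames
        self.depth_path = nn.Sequential(
            nn.ConvTranspose3d(in_ch, 64, 4), nn.ReLU(),
            nn.ConvTranspose3d(64, 1, 4), nn.Tanh())    # depth maps

    def forward(self, z, label_onehot):
        h = torch.cat([z, label_onehot], dim=1)[:, :, None, None, None]
        return torch.cat([self.rgb_path(h), self.depth_path(h)], dim=1)   # RGB-D clip
```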


2022 · Vol 13 (2) · pp. 1-23
Author(s): Divya Saxena, Jiannong Cao

Spatio-temporal (ST) data is a collection of multiple time series from different spatial locations and is inherently stochastic and unpredictable. Accurate prediction over such data is an important building block for several urban applications, such as taxi demand prediction, traffic flow prediction, and so on. Existing deep learning-based approaches assume that the outcome is deterministic and that there is only one plausible future; therefore, they cannot capture the multimodal nature of future contents and dynamics. In addition, existing approaches learn spatial and temporal data separately, as they assume a weak correlation between them. To handle these issues, in this article we propose a stochastic spatio-temporal generative model (named D-GAN) which adopts a Generative Adversarial Networks (GANs)-based structure for more accurate ST prediction over multiple time steps. D-GAN consists of two components: (1) a spatio-temporal correlation network, which models the spatio-temporal joint distribution of pixels and supports stochastic sampling of latent variables for multiple plausible futures; (2) a stochastic adversarial network, which jointly learns generation and variational inference of data through implicit distribution modeling. D-GAN also supports fusion of external factors through an explicit objective to improve model learning. Extensive experiments performed on two real-world datasets show that D-GAN achieves significant improvements and outperforms baseline models.
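
The stochastic sampling of latent variables for multiple plausible futures can be sketched with the reparameterization trick; the encoder and generator interfaces below are assumptions, not D-GAN's published code.

```python
# Hedged sketch: draw several latent samples so that repeated calls yield
# multiple plausible futures; `encoder` and `generator` are hypothetical modules.
import torch

def sample_futures(encoder, generator, past_frames, n_samples=5):
    mu, logvar = encoder(past_frames)          # spatio-temporal correlation network
    futures = []
    for _ in range(n_samples):
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)   # stochastic latent
        futures.append(generator(past_frames, z))                  # one plausible future
    return futures
```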


2021 · Vol 94 · pp. 116200
Author(s): Stavros Papadopoulos, Nikolaos Dimitriou, Anastasios Drosou, Dimitrios Tzovaras

Author(s): Depeng Xu, Yongkai Wu, Shuhan Yuan, Lu Zhang, Xintao Wu

Achieving fairness in learning models is an imperative task in machine learning. Recent research has shown that fairness should be studied from a causal perspective and has proposed a number of fairness criteria based on Pearl's causal modeling framework. In this paper, we investigate the problem of building causal fairness-aware generative adversarial networks (CFGAN), which learn a distribution close to that of a given dataset while also ensuring various causal fairness criteria based on a given causal graph. CFGAN adopts two generators, whose structures are purposefully designed to reflect the structures of the causal graph and the interventional graph. Thus, the two generators can respectively simulate the underlying causal model that generates the real data and the causal model after the intervention. In turn, two discriminators are used to produce a close-to-real distribution and to enforce various fairness criteria based on the causal quantities simulated by the generators. Experiments on a real-world dataset show that CFGAN can generate high-quality fair data.
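
The distinction between the observational generator and the interventional generator can be sketched as below; the attribute-sampling and decoding interfaces are hypothetical and only illustrate the do-operator idea, not CFGAN's actual implementation.

```python
# Conceptual sketch of observational vs. interventional generation; `gen` is a
# hypothetical generator exposing `sample_attribute` and `decode`.
import torch

def generate_observational(gen, noise):
    a = gen.sample_attribute(noise)     # protected attribute follows the causal graph
    return gen.decode(noise, a)         # remaining variables generated given A

def generate_interventional(gen, noise, a_fixed):
    a = torch.full_like(gen.sample_attribute(noise), a_fixed)   # do(A = a_fixed)
    return gen.decode(noise, a)         # same mechanisms under the intervention
```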

