TVA-GAN: Attention Guided Generative Adversarial Network For Thermal To Visible Face Transformations

2021 ◽  
Author(s):  
Nand Yadav ◽  
Satish Kumar Singh ◽  
Shiv Ram Dubey

Generative Adversarial Networks (GANs) play a vital role in recent advances in machine learning for realistic image generation and image translation. A GAN generates novel samples that look indistinguishable from real images, and image translation with a GAN can be framed as unsupervised learning. In this paper, we translate thermal images into visible images. Thermal-to-visible image translation is challenging due to the lack of accurate semantic information and smooth textures: thermal images contain only a single channel, holding the images’ luminance with few distinguishing features. We develop a new Cyclic Attention-based Generative Adversarial Network for Thermal-to-Visible face transformation (TVA-GAN) by incorporating a new attention-based network. We use attention guidance with a recurrent block through an Inception module to reduce the learning space towards the optimum solution.
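The attention guidance described above can be illustrated with a minimal sketch: a single-channel attention map gates the generator's feature maps elementwise, so learning concentrates on the highlighted regions. The shapes and the sigmoid gating are illustrative assumptions, not the paper's exact architecture.

```python
import numpy as np

def attention_gate(features, attention_logits):
    """Gate feature maps with a sigmoid attention map (illustrative)."""
    attn = 1.0 / (1.0 + np.exp(-attention_logits))  # sigmoid -> values in (0, 1)
    return features * attn  # broadcasts over channels, re-weighting features

feats = np.random.randn(1, 8, 16, 16)   # (batch, channels, height, width)
logits = np.random.randn(1, 1, 16, 16)  # single-channel attention map
gated = attention_gate(feats, logits)   # same shape as `feats`
```

Because the gates lie in (0, 1), attention can only attenuate features, never amplify them; the generator is thus steered toward the attended regions.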


Generative Adversarial Networks have gained prominence in a short span of time, as they can synthesize images from latent noise by minimizing an adversarial cost function. New variants of GANs have been developed to perform specific tasks using state-of-the-art GAN models, such as image translation, single-image super-resolution, segmentation, classification, and style transfer. However, combining two GANs to perform two different applications in one model has been sparsely explored. Hence, this paper concatenates two GANs, performing image translation on bird images with the CycleGAN model and improving their resolution with SRGAN. During an extensive survey, it was observed that most deep learning databases on Aves were built using New World species (i.e., species found in North America). To bridge this gap, a new avian database, 'Common Birds of North-Western India' (CBNWI-50), is also proposed in this work.
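The two-stage pipeline can be sketched as a simple function composition. The stand-ins below are placeholder assumptions (identity translation and nearest-neighbour upsampling); in the real system they would be the trained CycleGAN and SRGAN generators.

```python
import numpy as np

def cyclegan_translate(img):
    """Placeholder for the trained CycleGAN generator (identity here)."""
    return img

def srgan_upscale(img, scale=4):
    """Placeholder for SRGAN: nearest-neighbour 4x upsampling."""
    return img.repeat(scale, axis=0).repeat(scale, axis=1)

def translate_then_upscale(img):
    """Concatenate the two stages: translate first, then super-resolve."""
    return srgan_upscale(cyclegan_translate(img))

low_res = np.random.rand(64, 64, 3)         # H x W x RGB bird image
high_res = translate_then_upscale(low_res)  # 256 x 256 x 3
```

The design point is that the second GAN consumes the first GAN's output directly, so the combined model needs no intermediate supervision between the stages.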


Sensors ◽  
2020 ◽  
Vol 20 (11) ◽  
pp. 3119 ◽  
Author(s):  
Jingtao Li ◽  
Zhanlong Chen ◽  
Xiaozhen Zhao ◽  
Lijia Shao

In recent years, generative adversarial network (GAN)-based image translation models have achieved great success in image synthesis, image inpainting, image super-resolution, and other tasks. However, the images generated by these models often suffer from problems such as insufficient detail and low quality. In particular, for the task of map generation, the generated electronic maps cannot achieve effects comparable to industrial production in terms of accuracy and aesthetics. This paper proposes a model called Map Generative Adversarial Networks (MapGAN) for generating multitype electronic maps accurately and quickly from both remote sensing images and render matrices. MapGAN improves the generator architecture of Pix2pixHD and adds a classifier to enhance the model, enabling it to learn the characteristics and style differences of different types of maps. Using datasets from Google Maps, Baidu Maps, and Map World, we compare MapGAN with recent image translation models on one-to-one map generation and one-to-many domain map generation. The results show that the quality of the electronic maps generated by MapGAN is the best in terms of both intuitive visual quality and classic evaluation indicators.
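The classifier-augmented objective can be sketched as an adversarial term plus a style-classification term that pushes each generated map toward its target style. The non-saturating GAN loss and the weighting factor `lam` are illustrative assumptions, not values from the paper.

```python
import numpy as np

def cross_entropy(probs, label):
    """Negative log-likelihood of the correct class."""
    return -np.log(probs[label] + 1e-12)

def generator_loss(adv_score, style_probs, target_style, lam=1.0):
    """Adversarial term plus a classifier term steering the generated
    map toward the target style (one-to-many map generation)."""
    adv_loss = -np.log(adv_score + 1e-12)  # non-saturating GAN loss
    cls_loss = cross_entropy(np.asarray(style_probs), target_style)
    return adv_loss + lam * cls_loss

# a perfect discriminator score and perfect style prediction -> near-zero loss
loss = generator_loss(adv_score=1.0, style_probs=[1.0, 0.0, 0.0], target_style=0)
```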


2021 ◽  
pp. 1-10
Author(s):  
Lei Chen ◽  
Jun Han ◽  
Feng Tian

Fusing infrared (IR) and visible images has many advantages and supports applications such as target detection and recognition. Color can give more accurate and distinct features, but the low resolution and low contrast of fused images make this a challenging task. In this paper, we propose a method based on parallel generative adversarial networks (GANs) to address this challenge. We use the IR image, the visible image, and the fused image as ground truth for the 'L', 'a', and 'b' channels of the Lab color model. Through the parallel GANs, we obtain Lab data that can be converted to an RGB image. We adopt the TNO and RoadScene datasets to verify our method and compare five objective evaluation metrics against three other deep learning (DL) based methods. The proposed approach is demonstrated to achieve better performance than state-of-the-art methods.
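The channel assembly step can be sketched as stacking the three per-channel generator outputs into one Lab image. The value ranges are standard Lab conventions, and the final Lab-to-RGB conversion (e.g. via `skimage.color.lab2rgb`) is omitted here as an assumption about tooling.

```python
import numpy as np

def fuse_lab(L, a, b):
    """Stack per-channel GAN outputs into an H x W x 3 Lab image; a library
    routine (e.g. skimage.color.lab2rgb) would then yield the RGB image."""
    return np.stack([L, a, b], axis=-1)

h, w = 32, 32
L = np.random.rand(h, w) * 100.0         # luminance channel, roughly [0, 100]
a = np.random.rand(h, w) * 255.0 - 128   # green-red opponent channel
b = np.random.rand(h, w) * 255.0 - 128   # blue-yellow opponent channel
lab = fuse_lab(L, a, b)
```

Keeping the three networks independent per channel is what makes them trainable in parallel; only this cheap stacking step couples their outputs.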


Author(s):  
Tao Zhang ◽  
Long Yu ◽  
Shengwei Tian

In this paper, we present an approach for cartoonizing real-world close-up images of human faces. We use a generative adversarial network combined with an attention mechanism to convert real-world face pictures and cartoon-style images as unpaired datasets. Current image-to-image translation models can successfully transfer style and content, but problems remain in the task of cartoonizing human faces: faces contain many details, those details are easily lost after the image is translated, and the quality of the generated images is defective. To deal with these problems, we propose a new generative adversarial network combined with an attention mechanism. The channel attention mechanism is embedded between the downsampling and upsampling layers of the generator network, conveying the complete details of the underlying information without increasing the complexity of the model. Comparisons on three metrics (FID, PSNR, and MSE) and on the number of model parameters show that the proposed network avoids extra model complexity while achieving a good balance between style and content in the conversion task.
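A channel attention block of the kind embedded between the sampling layers can be sketched as a squeeze-and-excitation-style gate: global pooling, a small bottleneck MLP, and per-channel sigmoid gates. The bottleneck size and the random weights below are illustrative assumptions, not the paper's trained parameters.

```python
import numpy as np

def channel_attention(x, w1, w2):
    """Squeeze-and-excitation-style channel attention on a (C, H, W) map."""
    squeeze = x.mean(axis=(1, 2))                 # global average pool -> (C,)
    hidden = np.maximum(0.0, w1 @ squeeze)        # ReLU bottleneck
    gates = 1.0 / (1.0 + np.exp(-(w2 @ hidden)))  # sigmoid, one gate per channel
    return x * gates[:, None, None]               # rescale each channel

channels = 8
x = np.random.randn(channels, 16, 16)
w1 = np.random.randn(channels // 2, channels)  # squeeze (bottleneck) weights
w2 = np.random.randn(channels, channels // 2)  # excitation weights
y = channel_attention(x, w1, w2)
```

The block adds only two tiny weight matrices per insertion point, which is why it can pass channel-wise detail forward without a noticeable increase in model size.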


Author(s):  
Qiuqiang Kong ◽  
Yong Xu ◽  
Philip J. B. Jackson ◽  
Wenwu Wang ◽  
Mark D. Plumbley

Single-channel signal separation and deconvolution aims to separate and deconvolve individual sources from a single-channel mixture. It is a challenging problem in which no prior knowledge of the mixing filters is available: both the individual sources and the mixing filters need to be estimated. In addition, a mixture may contain non-stationary noise which is unseen in the training set. We propose a synthesizing-decomposition (S-D) approach to solve the single-channel separation and deconvolution problem. In synthesizing, a generative model for the sources is built using a generative adversarial network (GAN). In decomposition, both the mixing filters and the sources are optimized to minimize the reconstruction error of the mixture. The proposed S-D approach achieves a peak signal-to-noise ratio (PSNR) of 18.9 dB and 15.4 dB in image inpainting and completion, respectively, outperforming a baseline convolutional neural network (15.3 dB and 12.2 dB), and achieves a PSNR of 13.2 dB in source separation together with deconvolution, outperforming a convolutive non-negative matrix factorization (NMF) baseline of 10.1 dB.
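The decomposition step, minimizing the reconstruction error of the mixture, can be illustrated with a toy 1-D deconvolution: holding a candidate source fixed, the mixing filter that minimizes the reconstruction error has a closed-form least-squares solution. This is only the filter half of the alternation; the real method also optimizes the source through the GAN generator.

```python
import numpy as np

def conv_matrix(source, filter_len):
    """Toeplitz matrix M such that M @ h == np.convolve(source, h)."""
    out_len = len(source) + filter_len - 1
    M = np.zeros((out_len, filter_len))
    for j in range(filter_len):
        M[j:j + len(source), j] = source
    return M

rng = np.random.default_rng(0)
source = rng.standard_normal(50)          # candidate source signal
true_filter = np.array([1.0, 0.5, 0.25])  # unknown mixing filter
mixture = np.convolve(source, true_filter)

# fit the filter by minimizing ||mixture - conv(source, h)||^2
M = conv_matrix(source, len(true_filter))
est_filter, *_ = np.linalg.lstsq(M, mixture, rcond=None)
```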


2017 ◽  
Author(s):  
Benjamin Sanchez-Lengeling ◽  
Carlos Outeiral ◽  
Gabriel L. Guimaraes ◽  
Alan Aspuru-Guzik

Molecular discovery seeks to generate chemical species tailored to very specific needs. In this paper, we present ORGANIC, a framework based on Objective-Reinforced Generative Adversarial Networks (ORGAN) capable of producing a distribution over molecular space that matches a given set of desirable metrics. This methodology combines two successful techniques from the machine learning community: a Generative Adversarial Network (GAN), to create non-repetitive, sensible molecular species, and Reinforcement Learning (RL), to bias this generative distribution towards certain attributes. We explore several applications, from the optimization of random physicochemical properties to candidates for drug discovery and organic photovoltaic material design.
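The RL biasing can be sketched as a blended reward: a convex combination of a domain objective and the discriminator's realism score. The scalar stand-ins and the default `lam` below are illustrative; in practice both terms are per-molecule scores and the trade-off is a tunable hyperparameter.

```python
def blended_reward(objective_score, disc_score, lam=0.5):
    """Blend a domain objective (e.g. a physicochemical property score)
    with the discriminator's realism score; lam sets the trade-off."""
    return lam * objective_score + (1.0 - lam) * disc_score

# lam = 1 optimizes only the objective; lam = 0 reduces to a plain GAN reward
r = blended_reward(objective_score=0.8, disc_score=0.4, lam=0.5)
```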


2021 ◽  
Vol 11 (15) ◽  
pp. 7034
Author(s):  
Hee-Deok Yang

Artificial intelligence technologies and vision systems are used in various devices, such as automotive navigation systems, object-tracking systems, and intelligent closed-circuit televisions. In particular, outdoor vision systems have been applied across numerous fields of analysis. Despite their widespread use, current systems work well only under good weather conditions; they cannot account for inclement conditions, such as rain, fog, mist, and snow. Images captured under inclement conditions degrade the performance of vision systems. Vision systems need to detect, recognize, and remove the noise caused by rain, snow, and mist to boost the performance of the algorithms employed in image processing. Several studies have targeted the removal of noise resulting from inclement conditions. We focused on eliminating the effects of raindrops on images captured with outdoor vision systems in which the camera was exposed to rain. An attentive generative adversarial network (ATTGAN) was used to remove raindrops from the images. This network was composed of two parts: an attentive-recurrent network and a contextual autoencoder. The ATTGAN generated an attention map to detect rain droplets, and a de-rained image was generated by increasing the number of attentive-recurrent network layers. We increased the number of visual attentive-recurrent network layers in order to prevent gradient sparsity, making generation more stable without preventing the network from converging. The experimental results confirmed that the extended ATTGAN could effectively remove various types of raindrops from images.
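The attentive-recurrent idea can be illustrated with a toy unrolled update: the attention map starts uninformative and is refined over several steps, each conditioned on the previous map. The additive update rule here is a stand-in assumption; the real ATTGAN learns this refinement with convolutional LSTM layers.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def recurrent_attention(evidence, steps=4):
    """Unrolled attentive-recurrent refinement: sharpen the attention
    map over several steps, each step conditioned on the previous map."""
    attn = np.full_like(evidence, 0.5)  # uninformative initial map
    for _ in range(steps):
        attn = sigmoid(evidence + attn - 0.5)  # toy update rule
    return attn

# positive evidence marks raindrop pixels, negative marks clean pixels
evidence = np.array([[3.0, -3.0], [-3.0, 3.0]])
attn_map = recurrent_attention(evidence)
```

Each added step pushes confident pixels closer to 0 or 1, which is the intuition behind deepening the attentive-recurrent stack for a sharper raindrop mask.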

