Matching Thermal to Visible Face Images Using a Semantic-Guided Generative Adversarial Network

Author(s):  
Cunjian Chen ◽  
Arun Ross
2020 ◽  
Vol 34 (06) ◽  
pp. 10402-10409
Author(s):  
Tianying Wang ◽  
Wei Qi Toh ◽  
Hao Zhang ◽  
Xiuchao Sui ◽  
Shaohua Li ◽  
...  

Robotic drawing has become increasingly popular as an entertainment and interactive tool. In this paper, we present RoboCoDraw, a real-time collaborative robot-based drawing system that draws stylized human face sketches interactively in front of human users, using Generative Adversarial Network (GAN)-based style transfer and Random-Key Genetic Algorithm (RKGA)-based path optimization. The proposed RoboCoDraw system takes a real human face image as input, converts it into a stylized avatar, and then draws it with a robotic arm. A core component of this system is our proposed AvatarGAN, which generates a cartoon avatar face image from a real human face. AvatarGAN is trained with unpaired face and avatar images only, yet generates avatars that bear much greater likeness to the input human faces than those produced by the vanilla CycleGAN. The generated avatar image is then fed to a line extraction algorithm and converted to sketches. An RKGA-based path optimization algorithm is applied to find a time-efficient drawing path for the robotic arm to execute. We demonstrate the capability of RoboCoDraw on various face images using the UR5, a lightweight and safe collaborative robot.
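An RKGA encodes a candidate drawing order as a vector of random keys in [0, 1] and decodes it by sorting. Below is a minimal sketch of that idea, assuming each stroke is reduced to a single (x, y) point and using illustrative population parameters; the actual RoboCoDraw optimizer handles full stroke endpoints and drawing directions.

```python
# Minimal random-key genetic algorithm for ordering drawing strokes (toy sketch).
import numpy as np

def decode(keys):
    # Sorting the random keys yields a visiting order over strokes.
    return np.argsort(keys)

def travel_cost(order, points):
    # Total pen-up travel distance between consecutive strokes in the decoded order.
    ordered = points[order]
    return np.linalg.norm(np.diff(ordered, axis=0), axis=1).sum()

def rkga(points, pop_size=50, generations=200, elite_frac=0.2, mutant_frac=0.1, seed=0):
    rng = np.random.default_rng(seed)
    n = len(points)
    pop = rng.random((pop_size, n))
    n_elite = int(elite_frac * pop_size)
    n_mutant = int(mutant_frac * pop_size)
    for _ in range(generations):
        cost = np.array([travel_cost(decode(ind), points) for ind in pop])
        pop = pop[np.argsort(cost)]               # shortest travel first
        elites = pop[:n_elite]
        mutants = rng.random((n_mutant, n))       # fresh random keys keep diversity
        n_cross = pop_size - n_elite - n_mutant
        e = elites[rng.integers(n_elite, size=n_cross)]
        o = pop[rng.integers(pop_size, size=n_cross)]
        mask = rng.random((n_cross, n)) < 0.7     # biased uniform crossover
        pop = np.vstack([elites, mutants, np.where(mask, e, o)])
    cost = np.array([travel_cost(decode(ind), points) for ind in pop])
    return decode(pop[np.argmin(cost)])

strokes = np.random.default_rng(1).random((20, 2))   # toy stroke centres
print(rkga(strokes))                                 # near-optimal drawing order
```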


Sensors ◽  
2020 ◽  
Vol 20 (7) ◽  
pp. 1810
Author(s):  
Dat Tien Nguyen ◽  
Tuyen Danh Pham ◽  
Ganbayar Batchuluun ◽  
Kyoung Jun Noh ◽  
Kang Ryoung Park

Although face-based biometric recognition systems have been widely used in many applications, this type of recognition method is still vulnerable to presentation attacks, which use fake samples to deceive the recognition system. To overcome this problem, presentation attack detection (PAD) methods for face recognition systems (face-PAD), which aim to classify real and presentation attack face images before performing a recognition task, have been developed. However, the performance of PAD systems is limited and biased by the scarcity of presentation attack images available for training. In this paper, we propose a method for artificially generating presentation attack face images by learning the characteristics of real and presentation attack images from a small number of captured images. As a result, our proposed method saves time in collecting presentation attack samples for training PAD systems and can potentially enhance their performance. Our study is the first attempt to generate presentation attack face images for a PAD system based on the CycleGAN network, a deep-learning-based framework for image generation. In addition, we propose a new measurement method to evaluate the quality of the generated presentation attack images based on a face-PAD system. Through experiments with two public datasets (CASIA and Replay-mobile), we show that the generated face images capture the characteristics of presentation attack images, making them usable as captured presentation attack samples for PAD system training.
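CycleGAN learns the real-to-attack mapping from unpaired images by requiring that translating an image to the other domain and back reproduces the input. A minimal sketch of that cycle-consistency term is shown below; the generator stand-ins G_rp and G_pr and the weight lam are placeholders of ours, not the paper's actual networks or settings.

```python
# Cycle-consistency loss between the real and presentation-attack domains (sketch).
import torch
import torch.nn as nn

G_rp = nn.Sequential(nn.Conv2d(3, 3, 3, padding=1))   # real   -> attack (stand-in)
G_pr = nn.Sequential(nn.Conv2d(3, 3, 3, padding=1))   # attack -> real   (stand-in)
l1 = nn.L1Loss()

def cycle_loss(real_batch, attack_batch, lam=10.0):
    # Translating to the other domain and back should recover the input image.
    rec_real = G_pr(G_rp(real_batch))
    rec_attack = G_rp(G_pr(attack_batch))
    return lam * (l1(rec_real, real_batch) + l1(rec_attack, attack_batch))

loss = cycle_loss(torch.randn(4, 3, 128, 128), torch.randn(4, 3, 128, 128))
loss.backward()  # gradients flow into both generators alongside the adversarial losses
```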


2020 ◽  
Vol 13 (6) ◽  
pp. 219-228
Author(s):  
Avin Maulana ◽  
Chastine Fatichah ◽  
Nanik Suciati ◽  
...  

Facial inpainting is a process that reconstructs missing or damaged pixels in a facial image. The reconstructed pixels should still be realistic, so that an observer cannot differentiate between the reconstructed pixels and the original ones. However, problems may arise after inpainting: on unaligned face images, inconsistencies between adjacent pixels can cause the reconstruction to fail. We propose an improved facial inpainting method using a Generative Adversarial Network (GAN) with additional losses based on the pre-trained VGG-Net and face landmarks. The feature reconstruction loss helps preserve the deep features of an image, while the landmark loss increases the result's perceptual quality. Training was carried out using a curriculum learning scenario. Qualitative results show that our inpainting method can reconstruct the missing area in unaligned face images. Quantitatively, our proposed method achieves average scores of 21.528 and 0.665, and maximum scores of 29.922 and 0.908, on the PSNR (Peak Signal-to-Noise Ratio) and SSIM (Structural Similarity Index Measure) metrics, respectively.
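The feature reconstruction (perceptual) loss mentioned above compares deep activations of the inpainted and original images rather than raw pixels. A minimal sketch follows, assuming torchvision's pretrained VGG16; the layer cut-off (relu2_2) and the omission of ImageNet normalization are simplifications of ours, not the paper's exact configuration.

```python
# VGG16-based feature reconstruction loss for inpainting (toy sketch).
import torch
import torch.nn as nn
from torchvision import models

# Frozen feature extractor up to relu2_2 (index 8 of VGG16's feature stack).
vgg_features = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1).features[:9].eval()
for p in vgg_features.parameters():
    p.requires_grad = False

def feature_reconstruction_loss(inpainted, original):
    # Compare deep features rather than raw pixels, so plausible textures are
    # not penalised as long as the high-level structure matches.
    return nn.functional.l1_loss(vgg_features(inpainted), vgg_features(original))

loss = feature_reconstruction_loss(torch.rand(2, 3, 128, 128), torch.rand(2, 3, 128, 128))
```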


Electronics ◽  
2020 ◽  
Vol 9 (4) ◽  
pp. 603
Author(s):  
Quang T. M. Pham ◽  
Janghoon Yang ◽  
Jitae Shin

The performance of existing face age progression or regression methods is often limited by the lack of sufficient data to train the model. To deal with this problem, we introduce a novel framework that exploits synthesized images to improve performance. A conditional generative adversarial network (GAN) is first developed to generate facial images with targeted ages, and a semi-supervised GAN, called SS-FaceGAN, is then proposed. This approach considers both synthesized images with a target age and face images from the real data, so that age and identity features can be explicitly utilized in the objective function of the network. We analyze the performance of our method against previous studies qualitatively and quantitatively. The experimental results show that the SS-FaceGAN model can produce realistic human faces in terms of both identity preservation and age preservation, achieving a face detection rate of 97% and an average similarity score of 0.30.
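Conditioning a generator on a target age is commonly done by feeding an age code alongside the input face. The sketch below illustrates one such scheme, broadcasting a one-hot age-group code as extra input channels; the layer sizes, number of age groups, and conditioning style are illustrative assumptions, not SS-FaceGAN's actual architecture.

```python
# Age-conditioned generator: the target age group is appended as input channels (sketch).
import torch
import torch.nn as nn

class AgeConditionedGenerator(nn.Module):
    def __init__(self, n_age_groups=5):
        super().__init__()
        self.n_age_groups = n_age_groups
        self.net = nn.Sequential(
            nn.Conv2d(3 + n_age_groups, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 3, 3, padding=1), nn.Tanh(),
        )

    def forward(self, face, age_group):
        # Expand the one-hot age code to a per-pixel condition map.
        onehot = nn.functional.one_hot(age_group, self.n_age_groups).float()
        cond = onehot[:, :, None, None].expand(-1, -1, *face.shape[2:])
        return self.net(torch.cat([face, cond], dim=1))

g = AgeConditionedGenerator()
aged = g(torch.rand(2, 3, 128, 128), torch.tensor([1, 4]))  # two faces, two target age groups
```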


Sensors ◽  
2020 ◽  
Vol 20 (18) ◽  
pp. 5229
Author(s):  
Ja Hyung Koo ◽  
Se Woon Cho ◽  
Na Rae Baek ◽  
Kang Ryoung Park

Long-distance recognition methods in indoor environments are commonly divided into two categories, namely face recognition and combined face and body recognition. Cameras are typically installed on ceilings, which makes it difficult to obtain a frontal image of an individual; therefore, many studies combine face and body information. However, the distance between the camera and an individual is shorter in indoor environments than in outdoor ones, so face information is easily distorted by motion blur. Several studies have examined deblurring of face images, but few have addressed deblurring of body images. To tackle the blur problem, we propose a recognition method in which blurred body and face images are restored using a generative adversarial network (GAN), and the face and body features extracted with a deep convolutional neural network (CNN) are combined through matching-score fusion. Our own database, the Dongguk face and body dataset version 2 (DFB-DB2), and the open ChokePoint dataset were used in this study. The equal error rates (EER) of human recognition on DFB-DB2 and the ChokePoint dataset were 7.694% and 5.069%, respectively, and the proposed method exhibited better results than state-of-the-art methods.
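Matching-score fusion typically normalizes each matcher's scores to a common range and combines them with a weighted sum. A minimal sketch follows; the weight, normalization ranges, and toy scores are placeholders of ours, since the paper's trained fusion parameters are not reproduced here.

```python
# Weighted-sum fusion of face and body matching scores (toy sketch).
import numpy as np

def minmax(scores, lo, hi):
    # Map raw matcher scores into [0, 1] so the two modalities are comparable.
    return np.clip((scores - lo) / (hi - lo), 0.0, 1.0)

def fuse(face_scores, body_scores, w_face=0.6,
         face_range=(0.0, 1.0), body_range=(0.0, 1.0)):
    f = minmax(np.asarray(face_scores, dtype=float), *face_range)
    b = minmax(np.asarray(body_scores, dtype=float), *body_range)
    return w_face * f + (1.0 - w_face) * b

fused = fuse([0.82, 0.40], [0.71, 0.35])   # higher fused score -> more likely a genuine match
```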


2020 ◽  
Author(s):  
Thirza Dado ◽  
Yağmur Güçlütürk ◽  
Luca Ambrogioni ◽  
Gabriëlle Ras ◽  
Sander E. Bosch ◽  
...  

We introduce a new framework for hyperrealistic reconstruction of perceived naturalistic stimuli from brain recordings. To this end, we embrace the use of generative adversarial networks (GANs) at the earliest step of our neural decoding pipeline by acquiring functional magnetic resonance imaging data as subjects perceived face images created by the generator network of a GAN. Subsequently, we used a decoding approach to predict the latent state of the GAN from brain data. Hence, latent representations for stimulus (re-)generation are obtained, leading to state-of-the-art image reconstructions. Altogether, we have developed a highly promising approach for decoding sensory perception from brain activity and systematically analyzing neural information processing in the human brain.

Disclaimer: This manuscript contains no real face images; all faces are artificially generated by a generative adversarial network.
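The decoding step described above maps measured brain responses to the GAN's latent vectors. The sketch below shows one common way to do this with a linear (ridge) regressor on toy data; the regressor choice, data shapes, and latent dimensionality are our assumptions, not the study's exact model.

```python
# Linear decoding from fMRI voxel responses to GAN latent vectors (toy sketch).
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
voxels = rng.standard_normal((800, 4000))      # trials x voxels (toy data)
latents = rng.standard_normal((800, 512))      # trials x GAN latent dimensions (toy data)

decoder = Ridge(alpha=1.0).fit(voxels[:700], latents[:700])   # train on 700 trials
predicted_latents = decoder.predict(voxels[700:])              # decode held-out trials
# Feeding predicted_latents through the GAN generator would yield the reconstructed faces.
```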


Author(s):  
Keke He ◽  
Yanwei Fu ◽  
Wuhao Zhang ◽  
Chengjie Wang ◽  
Yu-Gang Jiang ◽  
...  

Facial attribute recognition is an important yet challenging research topic. Unlike most previous approaches, which predict attributes based only on whole images, this paper leverages the locations of facial parts for better attribute prediction. We introduce a facial abstraction image that contains both local facial parts and facial texture information; this abstraction image is generated by a Generative Adversarial Network (GAN). We then build a dual-path facial attribute recognition network that utilizes features from both the original face images and the facial abstraction images. Empirically, the features of the facial abstraction images are complementary to those of the original face images. With the facial parts localized by the abstraction images, our method improves facial attribute recognition, especially for attributes located in small face regions. Extensive evaluations on the CelebA and LFWA benchmark datasets show that our method achieves state-of-the-art performance.
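A dual-path network of this kind typically encodes the original face and the abstraction image with separate backbones and concatenates their features before the attribute heads. The sketch below illustrates that structure; the tiny backbones, feature sizes, and the 40-attribute head are placeholders of ours, not the paper's architecture.

```python
# Dual-path attribute classifier over original face and GAN abstraction image (sketch).
import torch
import torch.nn as nn

class DualPathAttributeNet(nn.Module):
    def __init__(self, n_attributes=40, feat_dim=128):
        super().__init__()
        def backbone():
            # Stand-in encoder; a real system would use a deeper CNN here.
            return nn.Sequential(
                nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, feat_dim))
        self.face_path = backbone()
        self.abstraction_path = backbone()
        self.head = nn.Linear(2 * feat_dim, n_attributes)   # one logit per attribute

    def forward(self, face, abstraction):
        fused = torch.cat([self.face_path(face), self.abstraction_path(abstraction)], dim=1)
        return self.head(fused)

net = DualPathAttributeNet()
logits = net(torch.rand(2, 3, 128, 128), torch.rand(2, 3, 128, 128))
```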

