A survey and taxonomy of adversarial neural networks for text-to-image synthesis

Author(s): Jorge Agnese, Jonathan Herrera, Haicheng Tao, Xingquan Zhu
IEEE Access, 2021, pp. 1-1
Author(s): Hyunhee Lee, Gyeongmin Kim, Yuna Hur, Heuiseok Lim
2020, Vol 2020 (28), pp. 175-180
Author(s): Hadas Shahar, Hagit Hel-Or

The field of image forgery is widely studied, and the recent introduction of deep-network-based image synthesis has made the detection of fake image sequences more challenging. Detecting spoofing attacks in particular is of grave importance. In this study we exploit the minute changes in the facial color of human faces in videos to distinguish real videos from fake ones. Even when a person is idle, human skin color changes with sub-dermal blood flow, and these changes are enhanced under stress and emotion. We show that facial color extracted along a video sequence can serve as a feature for training deep neural networks to successfully distinguish fake from real face sequences.
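The feature described above, a per-frame color summary of the face region tracked over time, can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: the function name, the fixed face bounding box, and the toy frames are all assumptions made for the example.

```python
import numpy as np

def facial_color_signal(frames, face_box):
    """Mean RGB color of a face region in each frame.

    frames: array of shape (T, H, W, 3); face_box: (y0, y1, x0, x1).
    Returns an array of shape (T, 3): one color triple per frame,
    forming a temporal signal that a classifier could be trained on.
    """
    y0, y1, x0, x1 = face_box
    region = frames[:, y0:y1, x0:x1, :]
    return region.reshape(frames.shape[0], -1, 3).mean(axis=1)

# Toy example: 4 frames of an 8x8 "video" with a constant skin-like region.
frames = np.zeros((4, 8, 8, 3))
frames[:, 2:6, 2:6, :] = [0.8, 0.6, 0.5]
signal = facial_color_signal(frames, (2, 6, 2, 6))
```

In practice the face box would come from a face detector and vary per frame; the resulting (T, 3) signal is the kind of temporal feature the abstract proposes feeding to a deep network.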


2018, Vol 42 (1), pp. 105-112
Author(s): V. I. Shakhuro, A. S. Konushin

In this work, we study the applicability of generative adversarial neural networks to generating training samples for a traffic sign classification task. We consider generative networks trained using the Wasserstein metric. As a baseline for comparison, we take image generation based on traffic sign icons. Experimental evaluation of classifiers based on convolutional neural networks is conducted on real data, two types of synthetic data, and a combination of real and synthetic data. The experiments show that modern generative networks can produce realistic training samples for traffic sign classification that outperform icon-based image generation, but are still slightly worse than real images for classifier training.
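The "Wasserstein metric" training mentioned above refers to the WGAN critic objective: the critic is trained to maximize the gap between its mean score on real samples and on generated ones. A minimal numpy sketch of that objective follows; the function name and the toy scores are illustrative, and a real implementation would also need the Lipschitz constraint (weight clipping or a gradient penalty).

```python
import numpy as np

def critic_loss(critic_real, critic_fake):
    """Negated Wasserstein critic objective.

    The critic maximizes E[f(x_real)] - E[f(x_fake)], so as a loss to
    minimize we return the negation of that difference.
    """
    return -(np.mean(critic_real) - np.mean(critic_fake))

# Toy scores: the critic rates real samples 1.0 and fakes 0.0,
# giving a loss of -(1.0 - 0.0) = -1.0.
loss = critic_loss(np.array([1.0, 1.0]), np.array([0.0, 0.0]))
```

The generator, in turn, is trained to maximize the critic's mean score on generated samples, which is what drives the synthetic traffic signs toward realism.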

