Visual Thinking of Neural Networks: Interactive Text to Image Synthesis

IEEE Access ◽  
2021 ◽  
pp. 1-1
Author(s):  
Hyunhee Lee ◽  
Gyeongmin Kim ◽  
Yuna Hur ◽  
Heuiseok Lim

2020 ◽
Vol 2020 (28) ◽  
pp. 175-180
Author(s):  
Hadas Shahar ◽  
Hagit Hel-Or

The field of image forgery is widely studied, and with the recent introduction of deep-network-based image synthesis, detecting fake image sequences has become even more challenging. Detecting spoofing attacks in particular is of grave importance. In this study we exploit minute changes in the facial color of human faces in videos to distinguish real videos from fake ones. Even when a person is idle, skin color changes with sub-dermal blood flow, and these changes are amplified under stress and emotion. We show that facial color extracted along a video sequence can serve as a feature for training deep neural networks to successfully distinguish fake from real face sequences.
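As a minimal sketch of the kind of color-signal extraction the abstract describes, the snippet below computes a per-frame mean facial color from a video. The Haar-cascade face detector, the whole-face ROI, and the BGR color space are illustrative assumptions, not the authors' exact pipeline.

```python
# Hypothetical sketch: per-frame mean facial color as a video feature.
# Detector choice, ROI, and color space are assumptions for illustration.
import cv2
import numpy as np

def facial_color_signal(video_path: str) -> np.ndarray:
    """Return an (n_frames, 3) array of mean BGR color over the detected face."""
    detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    cap = cv2.VideoCapture(video_path)
    signal = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
        if len(faces) == 0:
            continue  # skip frames where no face is detected
        x, y, w, h = faces[0]  # take the first detected face
        roi = frame[y:y + h, x:x + w]
        signal.append(roi.reshape(-1, 3).mean(axis=0))  # mean B, G, R
    cap.release()
    return np.asarray(signal)
```

The resulting time series could then be fed to a sequence classifier (e.g., a 1-D convolutional network) to separate real from fake face sequences, in the spirit of the training the abstract describes.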


Author(s):  
Jorge Agnese ◽  
Jonathan Herrera ◽  
Haicheng Tao ◽  
Xingquan Zhu

2018 ◽  
Vol 42 (1) ◽  
pp. 105-112 ◽  
Author(s):  
V. I. Shakhuro ◽  
A. S. Konushin

In this work, we investigate the applicability of generative adversarial neural networks to generating training samples for a traffic sign classification task. We consider generative networks trained with the Wasserstein metric. As a baseline for comparison, we take image generation based on traffic sign icons. Classifiers based on convolutional neural networks are evaluated experimentally on real data, on two types of synthetic data, and on a combination of real and synthetic data. The experiments show that modern generative neural networks can produce realistic training samples for traffic sign classification that outperform icon-based image generation, but remain slightly worse than real images for classifier training.
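A minimal training-step sketch of a generative adversarial network trained with the Wasserstein metric (the original weight-clipping variant) is given below; the network sizes, learning rate, clipping range, and flattened image shape are illustrative assumptions, not the authors' configuration.

```python
# Minimal WGAN sketch (weight-clipping variant); sizes and hyperparameters
# are illustrative assumptions, not the paper's configuration.
import torch
import torch.nn as nn

latent_dim, img_dim = 64, 32 * 32 * 3  # e.g., flattened 32x32 RGB sign crops

G = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(),
                  nn.Linear(256, img_dim), nn.Tanh())
C = nn.Sequential(nn.Linear(img_dim, 256), nn.LeakyReLU(0.2),
                  nn.Linear(256, 1))  # critic: unbounded score, no sigmoid

opt_g = torch.optim.RMSprop(G.parameters(), lr=5e-5)
opt_c = torch.optim.RMSprop(C.parameters(), lr=5e-5)

def critic_step(real: torch.Tensor, clip: float = 0.01) -> None:
    """One critic update: maximize E[C(real)] - E[C(fake)], the Wasserstein estimate."""
    fake = G(torch.randn(real.size(0), latent_dim)).detach()
    loss = -(C(real).mean() - C(fake).mean())
    opt_c.zero_grad()
    loss.backward()
    opt_c.step()
    for p in C.parameters():  # weight clipping keeps the critic roughly 1-Lipschitz
        p.data.clamp_(-clip, clip)

def generator_step(batch_size: int) -> None:
    """One generator update: maximize E[C(G(z))]."""
    loss = -C(G(torch.randn(batch_size, latent_dim))).mean()
    opt_g.zero_grad()
    loss.backward()
    opt_g.step()
```

In practice the critic is updated several times per generator step, and the generated samples would be mixed with real sign images when training the downstream convolutional classifier.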


2008 ◽  
Vol 29 (4) ◽  
pp. 181-188 ◽  
Author(s):  
John Allbutt ◽  
Jonathan Ling ◽  
Thomas M. Heffernan ◽  
Mohammed Shafiullah

Allbutt, Ling, and Shafiullah (2006) and Allbutt, Shafiullah, and Ling (2006) found that scores on self-report measures of visual imagery experience correlate primarily with the egoistic form of social-desirable responding. Here, three studies are reported that investigated whether this pattern of findings generalizes to ratings of imagery vividness in the auditory modality, a new version of the Vividness of Visual Imagery Questionnaire (Marks, 1995), and reports of visual thinking style. The measure of social-desirable responding used was the Balanced Inventory of Desirable Responding (BIDR; Paulhus, 2002). Correlational analysis replicated the pattern seen in our earlier work: of the correlations with the egoistic bias, the correlation with vividness of visual imagery was the largest and significant, the correlation with visual thinking style was the next largest and approached significance, and the correlation with vividness of auditory imagery was the smallest and not significant. The size of these correlations mirrored the extent to which the three aspects of imagery were valued by participants.
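For readers unfamiliar with the statistics, the sketch below shows how one such correlation and its significance might be computed; the data here are synthetic stand-ins, not the study's.

```python
# Illustrative only: Pearson correlation between an imagery rating and an
# egoistic-bias score. The data are synthetic, not from the study.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
egoistic_bias = rng.normal(size=60)                    # e.g., BIDR egoistic scale
vividness = 0.4 * egoistic_bias + rng.normal(size=60)  # assumed positive link

r, p = pearsonr(vividness, egoistic_bias)
print(f"r = {r:.2f}, p = {p:.3f}")  # conventionally 'significant' if p < .05
```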

