Criminal Face Recognition Using GAN

Author(s):  
Anitta George ◽  
Krishnendu K A ◽  
Anusree K ◽  
Adira Suresh Nair ◽  
Hari Shree

Forensics and security work often relies on dated, low-technology resources, and security measures frequently fail to keep pace with emerging technology. This project implements automatic face recognition of criminals or other specific targets using a machine-learning approach. Given a set of features, a Generative Adversarial Network (GAN) generates an image of the target with the specified feature set. The input can be either a given set of features or a set of portraits, ranging from frontal views to side profiles, from which those features are extracted. The accuracy of the system increases with the number of epochs the network is trained for. The generated output can range from primitive, low-resolution images to high-quality images in which features are more recognizable. This output is then compared against a predefined database of known individuals. The target can thus be recognized immediately from an artificial image generated from the given biometric feature set, which a discriminator network then checks to verify the target's true identity.
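The abstract does not include an implementation; the following is a minimal sketch of the conditional-GAN idea it describes, assuming PyTorch and a fixed-length binary feature vector. All names and sizes (FeatureConditionedGenerator, N_FEATURES, etc.) are illustrative assumptions, not the authors' code.

```python
# Minimal conditional-GAN sketch (illustrative only, not the authors' implementation).
# A feature/attribute vector conditions the generator, and the discriminator judges
# (image, feature) pairs, mirroring the pipeline the abstract describes.
import torch
import torch.nn as nn

N_FEATURES = 40        # assumed size of the biometric feature set
LATENT_DIM = 100       # noise dimension
IMG_SIZE = 64          # assumed face resolution

class FeatureConditionedGenerator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(LATENT_DIM + N_FEATURES, 256), nn.ReLU(),
            nn.Linear(256, 512), nn.ReLU(),
            nn.Linear(512, IMG_SIZE * IMG_SIZE * 3), nn.Tanh(),
        )

    def forward(self, z, features):
        x = torch.cat([z, features], dim=1)               # condition on the feature set
        return self.net(x).view(-1, 3, IMG_SIZE, IMG_SIZE)

class FeatureConditionedDiscriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(IMG_SIZE * IMG_SIZE * 3 + N_FEATURES, 512), nn.LeakyReLU(0.2),
            nn.Linear(512, 1), nn.Sigmoid(),
        )

    def forward(self, img, features):
        x = torch.cat([img.flatten(1), features], dim=1)
        return self.net(x)                                 # probability the pair is real

# Usage: generate a candidate face for a given feature vector; identification against a
# gallery of known faces (e.g. by embedding distance) would follow as a separate step.
G = FeatureConditionedGenerator()
z = torch.randn(1, LATENT_DIM)
feats = torch.randint(0, 2, (1, N_FEATURES)).float()
candidate_face = G(z, feats)
```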

Author(s):  
Ruoqi Sun ◽  
Chen Huang ◽  
Hengliang Zhu ◽  
Lizhuang Ma

The technique of facial attribute manipulation has found increasing application, but it remains challenging to restrict editing of attributes so that a face's unique details are preserved. In this paper, we introduce our method, which we call a mask-adversarial autoencoder (M-AAE). It combines a variational autoencoder (VAE) and a generative adversarial network (GAN) for photorealistic image generation. We use partial dilated layers to modify a few pixels in the feature maps of an encoder, changing the attribute strength continuously without hindering global information. Our training objectives for the VAE and GAN are reinforced by supervision of face recognition loss and cycle consistency loss, to faithfully preserve facial details. Moreover, we generate facial masks to enforce background consistency, which allows our training to focus on the foreground face rather than the background. Experimental results demonstrate that our method can generate high-quality images with varying attributes, and outperforms existing methods in detail preservation.
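The losses listed above can be read as one combined generator objective. The sketch below shows one plausible way to compose them, assuming PyTorch; the loss weights, the cosine-similarity identity term, and the function names are assumptions, not the authors' released implementation.

```python
# Sketch of how the M-AAE training objectives could be combined (illustrative only).
import torch
import torch.nn.functional as F

def maae_generator_loss(x, x_edit, x_cycle, mask,
                        mu, logvar, d_fake_logits,
                        id_embed_real, id_embed_edit,
                        w_kl=0.01, w_adv=1.0, w_id=1.0, w_cyc=10.0, w_bg=10.0):
    """Combined objective: VAE + adversarial + face-recognition + cycle + background terms."""
    # VAE term: KL divergence keeps the encoder's latent distribution well behaved.
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    # Adversarial term: the attribute-edited image should fool the discriminator.
    adv = F.binary_cross_entropy_with_logits(d_fake_logits, torch.ones_like(d_fake_logits))
    # Face-recognition loss: keep identity embeddings of the input and edited image close.
    id_loss = 1.0 - F.cosine_similarity(id_embed_real, id_embed_edit).mean()
    # Cycle-consistency loss: editing the attribute back should recover the input.
    cyc = F.l1_loss(x_cycle, x)
    # Background consistency: outside the facial mask, the edited image should match the input.
    bg = F.l1_loss(x_edit * (1 - mask), x * (1 - mask))
    return w_kl * kl + w_adv * adv + w_id * id_loss + w_cyc * cyc + w_bg * bg
```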


2021 ◽  
Vol 9 (7) ◽  
pp. 691
Author(s):  
Kai Hu ◽  
Yanwen Zhang ◽  
Chenghang Weng ◽  
Pengsheng Wang ◽  
Zhiliang Deng ◽  
...  

When underwater vehicles operate, the images they capture are degraded by light absorption in the water and by scattering and diffusion from suspended particles. The generative adversarial network (GAN) is widely used for underwater image enhancement because it can perform image-style conversion efficiently and with high quality. Although a GAN converts low-quality underwater images into high-quality ones, its output is bounded by the quality of the reference ("truth") images in the training dataset; because these reference images are themselves not ideally enhanced, the generated images inherit their limitations. This paper therefore proposes adding the natural image quality evaluation (NIQE) index to the GAN, giving the generated images higher contrast, making them better aligned with human visual perception, and allowing them to exceed the quality of the reference images in the existing dataset. Several groups of comparison experiments, evaluated with both subjective assessment and objective indicators, verify that the images enhanced by this algorithm are better than the reference images of the existing dataset.
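One way to read the proposal is as an extra no-reference quality term alongside the usual GAN objective. The sketch below shows that composition, assuming PyTorch; niqe_score is a placeholder hook for an actual NIQE implementation, and the weights and names are assumptions, not the authors' code. Note that NIQE is built on fitted natural-scene statistics and is typically non-differentiable, so in practice it may serve for model or output selection rather than direct backpropagation.

```python
# Sketch of folding a no-reference quality score into the generator objective (illustrative only).
import torch
import torch.nn.functional as F

def niqe_score(img_batch):
    """Placeholder: return a NIQE-style score per image (lower = more natural).
    A real implementation fits multivariate Gaussian statistics of local
    normalized luminance patches against a natural-scene model."""
    raise NotImplementedError("plug in an actual NIQE implementation here")

def generator_loss(fake_imgs, ref_imgs, d_fake_logits, w_adv=1.0, w_l1=10.0, w_niqe=0.1):
    # Standard GAN terms: fool the discriminator and stay close to the reference image.
    adv = F.binary_cross_entropy_with_logits(d_fake_logits, torch.ones_like(d_fake_logits))
    rec = F.l1_loss(fake_imgs, ref_imgs)
    # NIQE term: penalize unnatural statistics so outputs can surpass the reference set.
    quality = niqe_score(fake_imgs).mean()
    return w_adv * adv + w_l1 * rec + w_niqe * quality
```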


Author(s):  
Amey Thakur ◽  
Hasan Rizvi ◽  
Mega Satish

In the present study, we propose a new framework for estimating generative models via an adversarial process, extending an existing GAN framework to develop white-box, controllable image cartoonization that can generate high-quality cartooned images and videos from real-world photos and videos. The learning objectives of our system are based on three distinct representations: surface representation, structure representation, and texture representation. The surface representation captures the smooth surface of the images. The structure representation captures the sparse colour blocks and compresses generic content. The texture representation captures the texture, curves, and features of cartoon images. The Generative Adversarial Network (GAN) framework decomposes the images into these representations and learns from them to generate cartoon images. This decomposition makes the framework more controllable and flexible, allowing users to make adjustments based on the required output. The approach surpasses previous systems in preserving the clarity, colours, textures, and shapes of the input images while still exhibiting the characteristics of cartoon images.
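A rough sketch of how the three representations could be extracted from an input photo is given below; the library choices (OpenCV's contrib guided filter, scikit-image's SLIC superpixels), the parameters, and the file path are assumptions for illustration, not the authors' exact pipeline.

```python
# Sketch of the surface / structure / texture representations described above (illustrative only).
import cv2
import numpy as np
from skimage.segmentation import slic

def surface_representation(img):
    """Smooth surface: edge-preserving smoothing with a guided filter (opencv-contrib)."""
    return cv2.ximgproc.guidedFilter(img, img, 5, 2e-2)

def structure_representation(img, n_segments=200):
    """Sparse colour blocks: average colour within each superpixel."""
    segments = slic(img, n_segments=n_segments, compactness=10)
    out = np.zeros_like(img, dtype=np.float32)
    for seg_id in np.unique(segments):
        mask = segments == seg_id
        out[mask] = img[mask].mean(axis=0)
    return out.astype(img.dtype)

def texture_representation(img):
    """Texture and curves: random-weighted grayscale, discarding colour information."""
    w = np.random.dirichlet(np.ones(3))            # random channel weights
    gray = img.astype(np.float32) @ w              # H x W single channel
    return np.repeat(gray[..., None], 3, axis=2).astype(img.dtype)

# Usage: feed each representation to its own discriminator / loss term so the generator
# learns the corresponding cartoon quality separately. "photo.jpg" is a placeholder path.
img = cv2.imread("photo.jpg").astype(np.float32) / 255.0
surf = surface_representation(img)
struct = structure_representation(img)
tex = texture_representation(img)
```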


2020 ◽  
Vol 40 (19) ◽  
pp. 1910002
Author(s):  
徐志京 Xu Zhijing ◽  
王东 Wang Dong

2020 ◽  
Vol 94 ◽  
pp. 103861 ◽  
Author(s):  
Seyed Mehdi Iranmanesh ◽  
Benjamin Riggan ◽  
Shuowen Hu ◽  
Nasser M. Nasrabadi
