facial images
Recently Published Documents


TOTAL DOCUMENTS: 912 (FIVE YEARS: 306)

H-INDEX: 35 (FIVE YEARS: 6)

2022 ◽  
Vol 12 (2) ◽  
pp. 824
Author(s):  
Kamran Javed ◽  
Nizam Ud Din ◽  
Ghulam Hussain ◽  
Tahir Farooq

Face photographs taken on a bright sunny day or under floodlights often contain unwanted shadows of objects on the face. Most previous works deal with removing shadows from scene images and struggle to do so for facial images. Faces have a complex semantic structure, which makes shadow removal challenging. The aim of this research is to remove the shadow of an object in facial images. We propose a novel generative adversarial network (GAN) based image-to-image translation approach for shadow removal in face images. The first stage of our model automatically produces a binary segmentation mask for the shadow region. The second stage, a GAN-based network, then removes the object shadow and synthesizes the affected region. The generator network of our GAN has two parallel encoders: one is a standard convolution path and the other a partial convolution path. We find that this combination in the generator not only learns an integrated semantic structure but also disentangles the visual discrepancies under the shadow area. In addition to the GAN loss, we exploit a low-level L1 loss, a structural-level SSIM loss, and a perceptual loss from a pre-trained loss network for better texture and perceptual quality. Since there is no paired dataset for the shadow removal problem, we created a synthetic shadow dataset to train our network in a supervised manner. The proposed approach effectively removes shadows from real and synthetic test samples while retaining complex facial semantics. Experimental evaluations consistently show the advantages of the proposed method over several representative state-of-the-art approaches.
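The abstract names four loss terms (adversarial, L1, SSIM, perceptual) but not how they are combined. Below is a minimal PyTorch sketch of one plausible weighted combination; the weights, the VGG-16 feature extractor, and the external `ssim_fn` (e.g. `pytorch_msssim.ssim`) are assumptions, not the authors' published configuration.

```python
import torch
import torch.nn as nn
import torchvision.models as models

# Hypothetical loss weights -- the abstract does not give these values.
W_L1, W_SSIM, W_PERC, W_ADV = 1.0, 0.5, 0.1, 0.01

class PerceptualLoss(nn.Module):
    """Feature-matching loss from a frozen, pre-trained VGG-16 (an assumed
    choice of loss network; input normalization omitted for brevity)."""
    def __init__(self):
        super().__init__()
        vgg = models.vgg16(weights=models.VGG16_Weights.DEFAULT).features[:16].eval()
        for p in vgg.parameters():
            p.requires_grad = False  # the loss network stays fixed
        self.vgg = vgg
        self.l1 = nn.L1Loss()

    def forward(self, fake, real):
        return self.l1(self.vgg(fake), self.vgg(real))

def generator_loss(fake, real, disc_score, ssim_fn, perc_loss):
    """Total generator objective: adversarial + L1 + SSIM + perceptual terms.
    ssim_fn can come from a library such as pytorch-msssim."""
    adv = nn.functional.binary_cross_entropy_with_logits(
        disc_score, torch.ones_like(disc_score))   # fool the discriminator
    l1 = nn.functional.l1_loss(fake, real)         # low-level pixel fidelity
    ssim = 1.0 - ssim_fn(fake, real)               # structural similarity term
    perc = perc_loss(fake, real)                   # perceptual (feature) term
    return W_ADV * adv + W_L1 * l1 + W_SSIM * ssim + W_PERC * perc
```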


2022 ◽  
Vol 8 ◽  
Author(s):  
Niyati Rawal ◽  
Dorothea Koert ◽  
Cigdem Turan ◽  
Kristian Kersting ◽  
Jan Peters ◽  
...  

The ability of a robot to generate appropriate facial expressions is a key aspect of perceived sociability in human-robot interaction. Yet many existing approaches rely on a set of fixed, preprogrammed joint configurations for expression generation. Automating this process offers the potential to scale better to different robot types and a wider variety of expressions. To this end, we introduce ExGenNet, a novel deep generative approach for facial expressions on humanoid robots. ExGenNet combines a generator network, which reconstructs simplified facial images from robot joint configurations, with a classifier network for state-of-the-art facial expression recognition. The robots' joint configurations are optimized for various expressions by backpropagating the loss between the predicted and the intended expression through the classifier network and the generator network. To improve transfer between human training images and images of different robots, we propose to use extracted features in both the classifier and the generator network. Unlike most studies on facial expression generation, ExGenNet can produce multiple configurations for each facial expression and can be transferred between robots. Experimental evaluations on two robots with highly human-like faces, Alfie (Furhat Robot) and the android robot Elenoide, show that ExGenNet can successfully generate sets of joint configurations for predefined facial expressions on both robots. This ability to generate realistic facial expressions was further validated in a pilot study in which the majority of human subjects could accurately recognize most of the generated expressions on both robots.
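The described optimization step translates naturally into gradient descent over the joint-configuration vector with both networks frozen. A minimal PyTorch sketch, assuming `generator` and `classifier` modules matching the description (their architectures are not specified here):

```python
import torch
import torch.nn.functional as F

def optimize_joints(generator, classifier, target_expr, n_joints,
                    steps=500, lr=0.05):
    """ExGenNet-style joint optimization (network definitions assumed).

    Both networks are frozen; only the joint-configuration vector receives
    gradients, which flow through the generator and then the classifier."""
    joints = torch.zeros(1, n_joints, requires_grad=True)  # initial configuration
    opt = torch.optim.Adam([joints], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        face = generator(joints)      # simplified facial image for these joints
        logits = classifier(face)     # predicted expression
        loss = F.cross_entropy(logits, torch.tensor([target_expr]))
        loss.backward()               # backprop through both networks
        opt.step()
    return joints.detach()            # one candidate configuration
```

Because the result depends on the initialization of `joints`, restarting from different random initial vectors is one way to obtain the multiple configurations per expression that the abstract mentions.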


2021 ◽  
Vol 5 (6) ◽  
pp. 1036-1043
Author(s):  
Ardi Wijaya ◽  
Puji Rahayu ◽  
Rozali Toyib

Selecting the best smile from facial images is strongly influenced by image quality, background, position, and lighting, so an analysis using existing image-processing algorithms is needed to build a system that can select the best smile. For this purpose the Shi-Tomasi algorithm is used, an algorithm commonly applied to detect the corners of the smile region in facial images. The Shi-Tomasi corner computation processes the target image effectively in an edge-detection ballistic test; corner points are then checked against estimated translational parameters in a re-creation test on the translational component to identify causes of image degradation, and edge points are located to identify objects while removing noise from the image. The Shi-Tomasi algorithm was tested on 20 samples of human facial images, each sample having 5 different smile images, for a total of 100 smile images. It detected a good smile with an accuracy of 95%, evaluated using a confusion matrix with precision, recall, and accuracy metrics.
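OpenCV exposes Shi-Tomasi corner detection as `cv2.goodFeaturesToTrack`. A minimal sketch on a hypothetical cropped smile region; the file path, crop box, and parameter values are illustrative, not taken from the paper:

```python
import cv2
import numpy as np

# Shi-Tomasi corner detection on an assumed mouth-region crop of a face image.
gray = cv2.imread("face.jpg", cv2.IMREAD_GRAYSCALE)
smile_region = gray[200:300, 100:300]  # hypothetical bounding box of the mouth

corners = cv2.goodFeaturesToTrack(
    smile_region,
    maxCorners=25,      # upper bound on the number of corners returned
    qualityLevel=0.01,  # minimum accepted corner quality, relative to the best
    minDistance=10,     # minimum Euclidean distance between returned corners
)
if corners is not None:
    for x, y in np.intp(corners).reshape(-1, 2):
        cv2.circle(smile_region, (int(x), int(y)), 3, 255, -1)  # mark corners
cv2.imwrite("smile_corners.png", smile_region)
```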


Mathematics ◽  
2021 ◽  
Vol 10 (1) ◽  
pp. 4
Author(s):  
Mobeen Ahmad ◽  
Usman Cheema ◽  
Muhammad Abdullah ◽  
Seungbin Moon ◽  
Dongil Han

Applications for facial recognition have eased the process of personal identification. However, there are increasing concerns about the performance of these systems against the challenges of presentation attacks, spoofing, and disguises. One of the reasons for the lack of robustness of facial recognition algorithms under these challenges is the limited amount of suitable training data. This lack of training data can be addressed by creating a database in which the subjects wear several disguises, but this is an expensive process. Another approach is to use generative adversarial networks to synthesize facial images with the required disguise add-ons. In this paper, we present a synthetic disguised face database for the training and evaluation of robust facial recognition algorithms. Furthermore, we present a methodology for generating synthetic facial images with the desired disguise add-ons. Cycle-consistency loss is used to generate facial images with disguises, e.g., fake beards, makeup, and glasses, from normal face images. Additionally, an automated filtering scheme is presented for filtering the synthesized faces. Finally, facial recognition experiments are performed on the proposed synthetic data to show the efficacy of the proposed methodology and the presented database. Training on the proposed database achieves an improvement in the rank-1 recognition rate (68.3%) over a model trained on the original nondisguised face images.
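A minimal PyTorch sketch of the cycle-consistency term for the normal/disguised face domains, assuming two generator modules defined elsewhere; the weight of 10 follows the original CycleGAN paper, not necessarily this one:

```python
import torch.nn as nn

def cycle_consistency_loss(G_disguise, G_normal, real_normal, real_disguised,
                           lam=10.0):
    """CycleGAN-style cycle loss between the normal and disguised domains.

    G_disguise maps normal -> disguised faces and G_normal maps the reverse;
    both generators are assumed to be trained jointly with adversarial losses
    (omitted here). lam is the usual cycle-loss weight."""
    l1 = nn.L1Loss()
    # Forward cycle: normal -> disguised -> back to normal
    fwd = l1(G_normal(G_disguise(real_normal)), real_normal)
    # Backward cycle: disguised -> normal -> back to disguised
    bwd = l1(G_disguise(G_normal(real_disguised)), real_disguised)
    return lam * (fwd + bwd)
```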


2021 ◽  
pp. 2100182
Author(s):  
Sang-In Bae ◽  
Sangyeon Lee ◽  
Jae-Myeong Kwon ◽  
Hyun-Kyung Kim ◽  
Kyung-Won Jang ◽  
...  

2021 ◽  
Author(s):  
M. Ibsen ◽  
L. J. Gonzalez-Soler ◽  
C. Rathgeb ◽  
P. Drozdowski ◽  
M. Gomez-Barrero ◽  
...  

2021 ◽  
Vol 8 (2) ◽  
pp. 225-237
Author(s):  
Yanlong Tang ◽  
Yun Zhang ◽  
Xiaoguang Han ◽  
Fang-Lue Zhang ◽  
Yu-Kun Lai ◽  
...  

There is a steadily growing range of applications that can benefit from facial reconstruction techniques, leading to an increasing demand for reconstruction of high-quality 3D face models. While it is an important expressive part of the human face, the nose has received less attention than other expressive regions in the face reconstruction literature. When applying existing reconstruction methods to facial images, the reconstructed nose models are often inconsistent with the desired shape and expression. In this paper, we propose a coarse-to-fine 3D nose reconstruction and correction pipeline to build a nose model from a single image, where 3D and 2D nose curve correspondences are adaptively updated and refined. We first correct the reconstruction result coarsely using constraints of 3D-2D sparse landmark correspondences, and then heuristically update a dense 3D-2D curve correspondence based on the coarsely corrected result. A final refinement step is performed to correct the shape based on the updated 3D-2D dense curve constraints. Experimental results show the advantages of our method for 3D nose reconstruction over existing methods.
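The coarse correction step minimizes a sparse 3D-2D landmark constraint. A minimal NumPy sketch of the underlying reprojection error, under an assumed pinhole camera parameterization (the paper's exact formulation is not given here):

```python
import numpy as np

def reprojection_error(X3d, x2d, K, R, t):
    """Mean pixel error of 3D model landmarks against 2D image landmarks.

    X3d: (N, 3) 3D landmarks on the nose model, x2d: (N, 2) image landmarks,
    K: (3, 3) camera intrinsics, R: (3, 3) rotation, t: (3,) translation."""
    X_cam = X3d @ R.T + t                     # landmarks in the camera frame
    x_proj = X_cam @ K.T                      # apply the intrinsics
    x_proj = x_proj[:, :2] / x_proj[:, 2:3]   # perspective divide
    return np.mean(np.linalg.norm(x_proj - x2d, axis=1))
```

Minimizing this quantity over pose (and, in the dense stage, over curve correspondences and shape parameters) is the standard way such 3D-2D constraints are enforced.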


2021 ◽  
Vol 7 ◽  
pp. e735
Author(s):  
Nermeen Nader ◽  
Fatma El-Zahraa El-Gamal ◽  
Shaker El-Sappagh ◽  
Kyung Sup Kwak ◽  
Mohammed Elmogy

Background and Objectives: Kinship verification and recognition (KVR) is a machine's ability to identify the genetic and blood relationship, and its degree, between human facial images. The face is used because it is one of the most significant ways in which we recognize each other. Automatic KVR is an interesting area of investigation, and it greatly affects real-world applications such as searching for lost family members, forensics, and historical and genealogical studies. This paper presents a comprehensive survey that describes KVR applications and kinship types. It presents a literature review of current studies, starting from handcrafted features, passing through shallow metric learning, and ending with deep learning feature-based techniques. Furthermore, the most commonly used kinship datasets are discussed, which in turn opens the way to future research directions in this field. The limitations of KVR are also discussed, such as insufficient illumination, noise, occlusion, and age-variation problems. Finally, future research directions are presented, such as the age and gender variation problems. Methods: We applied a literature-survey methodology to retrieve data from academic databases. Inclusion and exclusion criteria were set, and three stages were followed to select articles. Finally, the main KVR stages, along with the main methods in each stage, were presented. We believe that such surveys help researchers easily detect areas that require more development and investigation. Results: It was found that handcrafted, metric-learning, and deep-learning techniques are widely utilized for the kinship verification and recognition problem using facial images. Conclusions: Despite the scientific efforts that address this hot research topic, many future research areas require investigation, such as age and gender variation. In the end, the presented survey makes it easier for researchers to identify the new areas that require more investigation and research.


Author(s):  
Bambang Krismono Triwijoyo ◽  
Ahmat Adil ◽  
Anthony Anggrawan

Emotion recognition from facial images is one of the most challenging topics in human psychological interaction with machines. Along with advances in robotics, computer graphics, and computer vision, research on facial expression recognition is an important part of intelligent-systems technology for interactive human interfaces. Since each person may express emotions differently, classifying facial expressions is difficult and requires large amounts of training data, so a deep learning approach is an alternative solution. The purpose of this study is to propose a different convolutional neural network (CNN) model architecture with batch normalization, consisting of three blocks of multiple convolution layers, as a simpler architectural model for recognizing emotional expressions from human facial images in the FER2013 dataset from Kaggle. The experimental results show that training accuracy reaches 98%, but there is still overfitting: validation accuracy remains at 62%. The proposed model nevertheless performs better than the same model without batch normalization.
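A minimal Keras sketch of a three-block CNN with batch normalization for FER2013 (48x48 grayscale inputs, 7 expression classes); the filter counts and dense head are illustrative assumptions, not the authors' exact architecture:

```python
from tensorflow import keras
from tensorflow.keras import layers

def build_model():
    inputs = keras.Input(shape=(48, 48, 1))   # FER2013: 48x48 grayscale faces
    x = inputs
    for filters in (64, 128, 256):            # three convolutional blocks
        x = layers.Conv2D(filters, 3, padding="same")(x)
        x = layers.BatchNormalization()(x)    # normalize before the activation
        x = layers.Activation("relu")(x)
        x = layers.Conv2D(filters, 3, padding="same")(x)
        x = layers.BatchNormalization()(x)
        x = layers.Activation("relu")(x)
        x = layers.MaxPooling2D()(x)          # halve spatial resolution
    x = layers.Flatten()(x)
    x = layers.Dense(256, activation="relu")(x)
    x = layers.Dropout(0.5)(x)                # guard against the overfitting noted above
    outputs = layers.Dense(7, activation="softmax")(x)  # 7 expression classes
    model = keras.Model(inputs, outputs)
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```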

