Gender recognition in the wild: a robustness evaluation over corrupted images

Author(s):  
Antonio Greco ◽  
Alessia Saggese ◽  
Mario Vento ◽  
Vincenzo Vigilante

Abstract: In the era of deep learning, methods for gender recognition from face images achieve remarkable performance on most standard datasets. However, common experimental analyses do not take into account that the face images given as input to the neural networks are often affected by strong corruptions that are not always represented in standard datasets. In this paper, we propose an experimental framework for gender recognition "in the wild". We produce corrupted versions of the popular LFW+ and GENDER-FERET datasets, which we call LFW+C and GENDER-FERET-C, and evaluate the accuracy of nine different network architectures in the presence of specific, suitably designed corruptions; in addition, we perform an experiment on the MIVIA-Gender dataset, recorded in real environments, to analyze the effects of the mixed image corruptions that occur in the wild. The experimental analysis demonstrates that the robustness of the considered methods can be further improved, since all of them suffer a performance drop on images collected in the wild or manually corrupted. Starting from the experimental results, we provide useful insights for choosing the best currently available architecture under specific real conditions. The proposed experimental framework, whose code is publicly available, is general enough to be applicable to other datasets as well; thus, it can act as a forerunner for future investigations.
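The corrupt-then-evaluate protocol described above can be sketched as follows. The corruption functions and the `model` callable are illustrative assumptions for this sketch, not the authors' released code:

```python
import numpy as np

def gaussian_noise(image, severity=1):
    """Corrupt an image in [0, 1] with additive Gaussian noise; severity scales the std."""
    sigma = 0.05 * severity
    noisy = image + np.random.default_rng(0).normal(0.0, sigma, image.shape)
    return np.clip(noisy, 0.0, 1.0)

def box_blur(image, severity=1):
    """Repeated 5-point neighborhood average, a cheap stand-in for Gaussian blur."""
    out = image.astype(float)
    for _ in range(severity):
        p = np.pad(out, 1, mode="edge")
        out = (p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:] + p[1:-1, 1:-1]) / 5.0
    return out

def accuracy_drop(model, images, labels, corruption, severity):
    """Accuracy on clean images minus accuracy on their corrupted copies."""
    clean = np.mean([model(x) == y for x, y in zip(images, labels)])
    corrupted = np.mean([model(corruption(x, severity)) == y for x, y in zip(images, labels)])
    return clean - corrupted
```

A robustness benchmark in this style simply sweeps corruption types and severities and reports the per-architecture accuracy drop.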

Author(s):  
Fadhlan Hafizhelmi Kamaru Zaman ◽  
Juliana Johari ◽  
Ahmad Ihsan Mohd Yassin

Face verification focuses on the task of determining whether two face images belong to the same identity. For unrestricted faces in the wild, this is a very challenging task. Besides significant degradation due to images with large variations in pose, illumination, expression, aging, and occlusion, it also suffers from the large-scale, ever-expanding data needed to perform the one-to-many recognition task. In this paper, we propose a face verification method that learns face similarities using a convolutional neural network (ConvNet). Instead of extracting features from each face image separately, our ConvNet model jointly extracts relational visual features from the two face images under comparison. We train four hybrid ConvNet models to learn to distinguish similarities between face pairs drawn from four different face portions and join them at the top-layer classifier level. We use a binary classifier at the top layer to identify the similarity of face pairs, choosing among a conventional multi-layer perceptron (MLP), support vector machines (SVM), naïve Bayes, and another ConvNet. Three face pairing configurations are discussed in this paper. Results from experiments using the Labeled Faces in the Wild (LFW) and CelebA datasets indicate that our hybrid ConvNet increases face verification accuracy by as much as 27% compared to the individual ConvNet approach. We also found that the lateral face pair configuration yields the best LFW test accuracy, 87.89%, on a very strict test protocol without any face alignment, using an MLP as the top-layer classifier, which is on par with the state of the art. We showed that our approach is more flexible in terms of inference on out-of-sample data by testing LFW and CelebA on either model.
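The pairing idea, feeding both faces to one network so it learns relational features rather than per-image features, can be illustrated with two input configurations. The function names and shapes below are assumptions for illustration, not the paper's exact configurations:

```python
import numpy as np

def channel_pair(face_a, face_b):
    """Stack two grayscale faces as a two-channel input, so one ConvNet sees both jointly."""
    assert face_a.shape == face_b.shape
    return np.stack([face_a, face_b], axis=0)        # shape (2, H, W)

def lateral_pair(face_a, face_b):
    """Place the two faces side by side in a single image plane."""
    assert face_a.shape == face_b.shape
    return np.concatenate([face_a, face_b], axis=1)  # shape (H, 2W)
```

In either configuration the first convolutional layers receive both faces at once, so learned filters can respond to differences between the pair instead of to each face in isolation.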


Author(s):  
Sai Teja Challa ◽  
Sowjanya Jindam ◽  
Ruchitha Reddy Reddy ◽  
Kalathila Uthej ◽  
...  

Automatic age and gender prediction from face images has lately attracted much attention due to its wide range of applications in facial analysis. We show in this study that, using the Caffe deep learning framework, we were able to greatly enhance age and gender recognition by learning representations with deep convolutional neural networks (CNNs). We propose a much simpler convolutional net architecture that can be employed even when little learning data is available. On a recent benchmark for age and gender estimation, we show that our strategy greatly outperforms existing state-of-the-art methods.


Face recognition plays a vital role in security applications. In recent years, researchers have focused on pose, illumination, face recognition, and related problems. Traditional methods of face recognition rely on OpenCV's Fisherfaces, which analyze facial expressions and attributes. The deep learning method used in this proposed system is a convolutional neural network (CNN). The proposed work includes the following modules: (1) face detection, (2) gender recognition, and (3) age prediction. The results obtained from this work show that real-time age and gender detection using a CNN provides better accuracy than other existing approaches.
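The three-module cascade can be sketched as below; `detect`, `classify_gender`, and `predict_age` are placeholder callables standing in for the CNN stages, not the system's actual interfaces:

```python
def analyze_frame(frame, detect, classify_gender, predict_age):
    """Run the cascade: detect faces, then predict gender and age for each face crop."""
    results = []
    for box, face in detect(frame):
        results.append({
            "box": box,                        # face location in the frame
            "gender": classify_gender(face),   # gender CNN on the crop
            "age": predict_age(face),          # age CNN on the crop
        })
    return results
```

Structuring the pipeline around per-face crops lets the gender and age networks be trained and swapped independently of the detector.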


Sensors ◽  
2021 ◽  
Vol 21 (6) ◽  
pp. 2003 ◽  
Author(s):  
Xiaoliang Zhu ◽  
Shihao Ye ◽  
Liang Zhao ◽  
Zhicheng Dai

As a sub-challenge of EmotiW (the Emotion Recognition in the Wild challenge), improving performance on the AFEW (Acted Facial Expressions in the Wild) dataset is a popular benchmark for emotion recognition under various constraints, including uneven illumination, head deflection, and facial posture. In this paper, we propose a convenient facial expression recognition cascade network comprising spatial feature extraction, hybrid attention, and temporal feature extraction. First, in a video sequence, the faces in each frame are detected and the corresponding face ROI (region of interest) is extracted to obtain the face images; the face images in each frame are then aligned based on the positions of the facial feature points. Second, the aligned face images are fed to a residual neural network to extract the spatial features of the facial expressions, and these spatial features are passed to the hybrid attention module to obtain fused expression features. Finally, the fused features are fed to a gated recurrent unit to extract the temporal features of the facial expressions, and the temporal features are passed to a fully connected layer to classify and recognize the expressions. Experiments on the CK+ (extended Cohn-Kanade), Oulu-CASIA (Institute of Automation, Chinese Academy of Sciences), and AFEW datasets yielded recognition accuracies of 98.46%, 87.31%, and 53.44%, respectively. This demonstrates that the proposed method not only achieves performance competitive with state-of-the-art methods but also improves performance on the AFEW dataset by more than 2%, confirming its effectiveness for facial expression recognition in natural environments.
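The temporal stage of the cascade can be sketched with a minimal gated-recurrent-unit step in NumPy. The random weights are stand-ins and biases are omitted for brevity; this is a sketch of the mechanism, not the paper's trained model:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(x, h, params):
    """One GRU step: x is a per-frame fusion feature, h the running temporal state."""
    Wz, Uz, Wr, Ur, Wh, Uh = params
    z = sigmoid(x @ Wz + h @ Uz)              # update gate: how much new state to take
    r = sigmoid(x @ Wr + h @ Ur)              # reset gate: how much old state to expose
    h_tilde = np.tanh(x @ Wh + (r * h) @ Uh)  # candidate state
    return (1 - z) * h + z * h_tilde

def temporal_features(frames, hidden_dim, seed=0):
    """Run the GRU over per-frame features and return the final hidden state."""
    rng = np.random.default_rng(seed)
    in_dim = frames.shape[1]
    params = [rng.standard_normal((d1, d2)) * 0.1
              for d1, d2 in [(in_dim, hidden_dim), (hidden_dim, hidden_dim)] * 3]
    h = np.zeros(hidden_dim)
    for x in frames:
        h = gru_step(x, h, params)
    return h
```

In the full cascade, each row of `frames` would be the attention-fused spatial feature of one aligned face image, and the final hidden state would feed the fully connected classifier.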


2022 ◽  
pp. 1-27
Author(s):  
Clifford Bohm ◽  
Douglas Kirkpatrick ◽  
Arend Hintze

Abstract: Deep learning (primarily using backpropagation) and neuroevolution are the preeminent methods of optimizing artificial neural networks. However, they often create black boxes that are as hard to understand as the natural brains they seek to mimic. Previous work has identified an information-theoretic tool, referred to as R, which allows us to quantify and identify mental representations in artificial cognitive systems. The use of such measures has allowed us to make previous black boxes more transparent. Here we extend R to not only identify where complex computational systems store memory about their environment but also to differentiate between different time points in the past. We show how this extended measure can identify the location of memory related to past experiences in neural networks optimized by deep learning as well as by a genetic algorithm.
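The abstract does not give R in formulas, but measures of this information-theoretic kind are typically built from mutual information between network states and environment variables. A minimal discrete estimator of that building block (variable names here are illustrative, not the paper's notation):

```python
import numpy as np
from collections import Counter

def entropy(samples):
    """Shannon entropy in bits, estimated from a sequence of discrete samples."""
    counts = Counter(samples)
    n = len(samples)
    return -sum((c / n) * np.log2(c / n) for c in counts.values())

def mutual_information(xs, ys):
    """I(X;Y) = H(X) + H(Y) - H(X,Y), estimated from paired discrete samples."""
    return entropy(xs) + entropy(ys) - entropy(list(zip(xs, ys)))
```

Applied per hidden unit (or per group of units) against a past environment variable, a high value flags where the network stores memory about that variable.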


IEEE Spectrum ◽  
2021 ◽  
Vol 58 (10) ◽  
pp. 32-33
Author(s):  
Samuel K. Moore ◽  
David Schneider ◽  
Eliza Strickland

Author(s):  
Héctor A. Sánchez-Hevia ◽  
Roberto Gil-Pita ◽  
Manuel Utrilla-Manso ◽  
Manuel Rosa-Zurera

Author(s):  
Shivkaran Ravidas ◽  
M. A. Ansari

In the recent past, convolutional neural networks (CNNs) have seen a resurgence and have performed extremely well on vision tasks. Visually, the model resembles a series of layers, each of which is processed by a function to form the next layer. It is argued that a CNN first models low-level features such as edges and joints and then expresses higher-level features as compositions of these low-level features. The aim of this paper is to detect multi-view faces using a deep convolutional neural network (DCNN). Implementation, detection, and retrieval of faces are obtained with the help of direct visual matching technology. Further, a probabilistic measure of the similarity of the face images is computed using Bayesian analysis. The experiments detect faces with up to ±90° out-of-plane rotation. A fine-tuned AlexNet is used to detect pose-invariant faces. For this work, we extracted training examples from the AFLW (Annotated Facial Landmarks in the Wild) dataset, which comprises 21K images with 24K face annotations.
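Direct visual matching with a Bayesian similarity measure is commonly framed as a likelihood ratio between an intra-personal and an extra-personal model of the feature difference. A minimal isotropic-Gaussian sketch of that idea (the class statistics and function names are assumptions, not the paper's exact formulation):

```python
import numpy as np

def gaussian_log_density(delta, mean, var):
    """Log-density of a difference vector under an isotropic Gaussian class model."""
    return -0.5 * np.sum((delta - mean) ** 2 / var + np.log(2 * np.pi * var))

def bayesian_similarity(feat_a, feat_b, intra, extra):
    """Log-ratio of P(delta | same person) to P(delta | different person).

    `intra` and `extra` are (mean, variance) pairs for the two classes;
    higher values mean the pair is more likely the same identity.
    """
    delta = feat_a - feat_b
    return gaussian_log_density(delta, *intra) - gaussian_log_density(delta, *extra)
```

Because same-person feature differences cluster tightly around zero while different-person differences are spread out, the log-ratio is positive for matching pairs and negative for mismatched ones.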

