Gender Recognition from a Partial View of the Face Using Local Feature Vectors

Author(s):  
Yasmina Andreu ◽  
Ramón A. Mollineda ◽  
Pedro García-Sevilla
Author(s):  
Olasimbo Ayodeji Arigbabu ◽  
Sharifah Mumtazah Syed Ahmad ◽  
Wan Azizun Wan Adnan ◽  
Saif Mahmood

Gender recognition from unconstrained face images is a challenging task due to the high degree of misalignment, pose, expression, and illumination variation. In previous works, gender recognition from unconstrained face images has been approached by applying image alignment, exploiting multiple samples per individual to improve the learning ability of the classifier, or learning gender based on prior knowledge about the pose and demographic distributions of the dataset. However, image alignment increases computational complexity and time, while relying on multiple samples or on prior knowledge of the data distribution is unrealistic in practical applications. This paper presents an approach for gender recognition from unconstrained face images. Our technique exploits the robustness of local feature descriptors to photometric variations to extract a shape description of the 2D face image using a single sample image per individual. The results obtained from experiments on the Labeled Faces in the Wild (LFW) dataset demonstrate the effectiveness of the proposed method. The essence of this study is to investigate the most suitable functions and parameter settings for recognizing gender from unconstrained face images.  
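
As a rough illustration of the pipeline described above (the abstract does not name the exact descriptor, so HOG from scikit-image and a linear SVM from scikit-learn are assumed here), a single-sample-per-individual gender classifier might look like this:

```python
# Hypothetical sketch: a local shape descriptor (HOG assumed) computed from a
# single face image per individual, followed by a linear SVM for gender.
import numpy as np
from skimage.feature import hog
from skimage.transform import resize
from sklearn.svm import LinearSVC

def face_descriptor(gray_face, size=(128, 128)):
    """Resize the face crop and compute a HOG shape descriptor."""
    face = resize(gray_face, size, anti_aliasing=True)
    return hog(face, orientations=9, pixels_per_cell=(16, 16),
               cells_per_block=(2, 2), block_norm='L2-Hys')

def train_gender_svm(faces, labels):
    """faces: grayscale crops (one per individual); labels: 0/1 gender."""
    X = np.stack([face_descriptor(f) for f in faces])
    clf = LinearSVC(C=1.0)
    clf.fit(X, labels)
    return clf
```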


2011 ◽  
Vol 64 (1) ◽  
pp. 197-218 ◽  
Author(s):  
Mahmoud Mejdoub ◽  
Chokri Ben Amar

2021 ◽  
Vol 4 (1) ◽  
pp. 60-90
Author(s):  
Mehshan Ahad ◽  
Muhammad Fayyaz

Human gender recognition is one of the most challenging tasks in computer vision, especially for pedestrians, due to large variations in human pose, video acquisition, illumination, occlusion, and clothing. In this article, we consider gender recognition, which is an important component of video surveillance. To automate gender recognition, we propose a novel technique based on the extraction of features through different methodologies. Our technique consists of four steps: a) preprocessing, b) feature extraction, c) feature fusion, and d) classification. In the first step, the region of interest, namely the full body, is extracted from the images; the images are then divided into upper-body and lower-body sets in a 2:3 ratio. In the second step, three handcrafted feature extractors, HOG, Gabor, and granulometry, extract feature vectors using different score values. These feature vectors are fused to create one strong feature vector on which the results are evaluated. Experiments are performed on full-body datasets to determine the best feature configuration, and the fused feature vector is then used for classification with SVM and KNN classifiers, as sketched below. Results are evaluated on five performance measures: accuracy, precision, sensitivity, specificity, and area under the curve. The best results are obtained on the upper body, with 88.7% accuracy and 0.96 AUC. Compared with existing methodologies, the proposed method achieves significantly higher results.
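
A hedged sketch of the feature-fusion step described above, assuming scikit-image implementations of HOG and Gabor filtering and a simplified morphological granulometry; the exact score values and parameter counts used by the authors are not reproduced here:

```python
# Illustrative feature-fusion pipeline: HOG + Gabor + granulometry features
# concatenated into one vector, then classified with SVM or KNN.
import numpy as np
from skimage.feature import hog
from skimage.filters import gabor
from skimage.morphology import opening, disk
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier

def hog_features(img):
    return hog(img, orientations=9, pixels_per_cell=(16, 16), cells_per_block=(2, 2))

def gabor_features(img, freqs=(0.1, 0.2, 0.3)):
    feats = []
    for f in freqs:
        real, imag = gabor(img, frequency=f)
        feats += [real.mean(), real.var(), imag.mean(), imag.var()]
    return np.array(feats)

def granulometry_features(img, radii=(1, 3, 5, 7)):
    # Simplified granulometry: intensity surviving openings of increasing size.
    return np.array([opening(img, disk(r)).sum() for r in radii], dtype=float)

def fused_vector(img):
    # Serial fusion: concatenate the three descriptors into one strong vector.
    return np.concatenate([hog_features(img), gabor_features(img), granulometry_features(img)])

# X = np.stack([fused_vector(img) for img in body_crops]); y = gender labels
# SVC(kernel='rbf').fit(X, y)  or  KNeighborsClassifier(n_neighbors=5).fit(X, y)
```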


Author(s):  
Antonio Greco ◽  
Alessia Saggese ◽  
Mario Vento ◽  
Vincenzo Vigilante

Abstract In the era of deep learning, methods for gender recognition from face images achieve remarkable performance on most standard datasets. However, common experimental analyses do not take into account that the face images given as input to the neural networks are often affected by strong corruptions that are not always represented in standard datasets. In this paper, we propose an experimental framework for gender recognition “in the wild”. We produce corrupted versions of the popular LFW+ and GENDER-FERET datasets, which we call LFW+C and GENDER-FERET-C, and evaluate the accuracy of nine different network architectures in the presence of specific, suitably designed corruptions; in addition, we perform an experiment on the MIVIA-Gender dataset, recorded in real environments, to analyze the effects of mixed image corruptions occurring in the wild. The experimental analysis demonstrates that the robustness of the considered methods can be further improved, since all of them suffer a performance drop on images collected in the wild or manually corrupted. Starting from the experimental results, we provide useful insights for choosing the best currently available architecture under specific real conditions. The proposed experimental framework, whose code is publicly available, is general enough to be applied to different datasets; thus, it can act as a forerunner for future investigations.
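
A minimal sketch of this kind of corruption-robustness evaluation; the corruption set and severities below are illustrative assumptions rather than the authors' LFW+C / GENDER-FERET-C protocol, and `model.predict` stands in for any of the evaluated architectures:

```python
# Apply hand-designed corruptions to RGB images (floats in [0, 1]) and measure
# the per-corruption accuracy of a gender classifier.
import numpy as np
from skimage.util import random_noise
from skimage.filters import gaussian

CORRUPTIONS = {
    "clean":        lambda img: img,
    "gauss_noise":  lambda img: random_noise(img, mode="gaussian", var=0.01),
    "blur":         lambda img: gaussian(img, sigma=2, channel_axis=-1),
    "low_contrast": lambda img: 0.5 + 0.4 * (img - 0.5),
}

def accuracy_under_corruption(model, images, labels):
    """model.predict maps a batch of images to gender labels (assumed API)."""
    report = {}
    for name, corrupt in CORRUPTIONS.items():
        corrupted = np.stack([corrupt(img) for img in images])
        preds = model.predict(corrupted)
        report[name] = float((preds == labels).mean())
    return report
```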


2008 ◽  
Vol 19 (12) ◽  
pp. 1242-1246 ◽  
Author(s):  
Adrian Nestor ◽  
Michael J. Tarr

A continuing question in the object recognition literature is whether surface properties play a role in visual representation and recognition. Here, we examined the use of color as a cue in facial gender recognition by applying a version of reverse correlation to face categorization in CIE L∗a∗b∗ color space. We found that observers exploited color information to classify ambiguous signals embedded in chromatic noise. The method also allowed us to identify the specific spatial locations and the components of color used by observers. Although the color patterns found with human observers did not accurately mirror objective natural color differences, they suggest sensitivity to the contrast between the main features and the rest of the face. Overall, the results provide evidence that observers encode and can use the local color properties of faces, in particular, in tasks in which color provides diagnostic information and the availability of other cues is reduced.
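
A toy sketch of the reverse-correlation idea described above, under the assumption that noise is injected only into the chromatic a*/b* channels and that a classification image is formed by differencing response-conditioned noise averages; this is an illustration, not the authors' exact stimulus protocol:

```python
# Embed an ambiguous face in chromatic noise (CIE L*a*b*), collect male/female
# responses, and average noise fields per response to estimate the observer's
# chromatic decision template.
import numpy as np
from skimage.color import rgb2lab, lab2rgb

def trial_stimulus(base_lab, noise_sd=8.0, rng=np.random.default_rng()):
    # base_lab = rgb2lab(ambiguous_face_rgb); noise added to a* and b* only.
    noise = np.zeros_like(base_lab)
    noise[..., 1:] = rng.normal(0.0, noise_sd, base_lab[..., 1:].shape)
    return np.clip(lab2rgb(base_lab + noise), 0, 1), noise

def classification_image(noises, responses):
    # responses: 1 for "male", 0 for "female"; the difference of the mean
    # noise fields approximates the chromatic template used by the observer.
    noises = np.asarray(noises)
    responses = np.asarray(responses, dtype=bool)
    return noises[responses].mean(axis=0) - noises[~responses].mean(axis=0)
```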


2020 ◽  
Vol 34 (10) ◽  
pp. 13889-13890
Author(s):  
Thomas Paniagua ◽  
John Lagergren ◽  
Greg Foderaro

This paper presents a novel deconvolution mechanism, called the Sparse Deconvolution, that generalizes the classical transpose convolution operation to sparse unstructured domains, enabling the fast and accurate generation and upsampling of point clouds and other irregular data. Specifically, the approach uses deconvolutional kernels, which each map an input feature vector and set of trainable scalar weights to the feature vectors of multiple child output elements. Unlike previous approaches, the Sparse Deconvolution does not require any voxelization or structured formulation of data, it is scalable to a large number of elements, and it is capable of utilizing local feature information. As a result, these capabilities allow for the practical generation of unstructured data in unsupervised settings. Preliminary experiments are performed here, where Sparse Deconvolution layers are used as a generator within an autoencoder trained on the 3D MNIST dataset.
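
One possible reading of the Sparse Deconvolution operation described above, sketched in PyTorch: each input element spawns several child output elements, each produced by its own linear map scaled by a trainable scalar weight. This is an interpretation for illustration, not the authors' implementation:

```python
import torch
import torch.nn as nn

class SparseDeconv(nn.Module):
    """Upsample an (N, in_dim) point set to (N * children, out_dim)."""
    def __init__(self, in_dim, out_dim, children=4):
        super().__init__()
        self.maps = nn.ModuleList([nn.Linear(in_dim, out_dim) for _ in range(children)])
        self.scales = nn.Parameter(torch.ones(children))  # trainable scalar weights

    def forward(self, feats):               # feats: (N, in_dim)
        kids = [w * m(feats) for w, m in zip(self.scales, self.maps)]
        return torch.cat(kids, dim=0)       # (N * children, out_dim)

# Example: upsample 64 points with 32-d features to 256 points with 16-d features.
layer = SparseDeconv(32, 16, children=4)
out = layer(torch.randn(64, 32))            # out.shape == (256, 16)
```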


2015 ◽  
Vol 14 ◽  
pp. 111-112 ◽  
Author(s):  
Olasimbo Ayodeji Arigbabu ◽  
Sharifah Mumtazah Syed Ahmad ◽  
Wan Azizun Wan Adnan ◽  
Saif Mahmood ◽  
Salman Yussof

Gender is one striking feature that humans can deduce effortlessly when looking at a face. Here, we classify gender (male or female) based on face images. The first part of this paper presents a review of different methods and approaches used for gender recognition. We then present a comparative analysis of gender recognition using PCA, 2dPCA, and their variants. Finally, we develop an iterative model using 2dPCA which updates itself when new samples are encountered. This model is expected to be fruitful in real-life situations, as it can learn when it comes across new test samples. We consider the CFD, CUHK, ORL, and Yale facial datasets for our experiments.
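
A hedged sketch of the eigenface-style baseline and the self-updating idea described above, using scikit-learn's IncrementalPCA as a stand-in for the paper's iterative 2dPCA model (2dPCA itself operates on image matrices rather than flattened vectors):

```python
# PCA projection of flattened face images plus a linear SVM, with partial_fit
# used to refresh the subspace as new samples arrive.
import numpy as np
from sklearn.decomposition import IncrementalPCA
from sklearn.svm import LinearSVC

class IterativeGenderModel:
    def __init__(self, n_components=50):
        self.pca = IncrementalPCA(n_components=n_components)
        self.clf = LinearSVC()

    def fit(self, faces, labels):            # faces: (n_samples, h*w) flattened images
        self.pca.partial_fit(faces)
        self.clf.fit(self.pca.transform(faces), labels)
        return self

    def update(self, new_faces, new_labels, old_faces, old_labels):
        # Refresh the subspace with the new batch, then retrain the classifier.
        self.pca.partial_fit(new_faces)
        X = np.vstack([old_faces, new_faces])
        y = np.concatenate([old_labels, new_labels])
        self.clf.fit(self.pca.transform(X), y)
        return self

    def predict(self, faces):
        return self.clf.predict(self.pca.transform(faces))
```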


2021 ◽  
Author(s):  
Yatao Yang ◽  
Qilin Zhang ◽  
Wenbin Gao ◽  
Chenghao Fan ◽  
Qinyuan Shu ◽  
...  

Abstract Face recognition is playing an increasingly important role in present society, but it suffers from privacy leakage when performed in plaintext. Therefore, a recognition system based on homomorphic encryption that supports privacy preservation is designed and implemented in this paper. The system uses the CKKS algorithm in the SEAL library, Microsoft's latest homomorphic encryption achievement, to encrypt the normalized face feature vectors, and uses the FaceNet neural network to learn on the image ciphertexts to achieve face classification. Finally, face recognition in ciphertext is accomplished. In testing, the whole process of extracting feature vectors and encrypting a face image takes only about 1.712 s in the developed system. The average time to compare a group of images in ciphertext is about 2.06 s, a group of images can be effectively recognized within 30 degrees of face bias, and the identification accuracy reaches 96.71%. Compared with the ciphertext face recognition scheme based on the Advanced Encryption Standard (AES) algorithm proposed by Wang et al. in 2019, our scheme improves the recognition accuracy by 4.21%. Compared with the ciphertext image recognition scheme based on elliptic curve encryption proposed by Kumar S et al. in 2018, the total time in our system is decreased by 76.2%. Therefore, the scheme has better operational efficiency and practical value while ensuring users' personal privacy. Compared with plaintext face recognition systems presented in recent years, our scheme achieves almost the same level of recognition accuracy and time efficiency.
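
A minimal sketch of CKKS-encrypted embedding comparison in the spirit of the system described above, using the TenSEAL Python binding over Microsoft SEAL rather than the authors' own SEAL-based code; FaceNet embeddings are assumed to be precomputed and L2-normalized, so the encrypted dot product approximates cosine similarity:

```python
import numpy as np
import tenseal as ts

def make_context():
    # CKKS parameters chosen for illustration; real deployments tune these.
    ctx = ts.context(ts.SCHEME_TYPE.CKKS, poly_modulus_degree=8192,
                     coeff_mod_bit_sizes=[60, 40, 40, 60])
    ctx.global_scale = 2 ** 40
    ctx.generate_galois_keys()   # needed for the rotations inside dot()
    return ctx

def encrypt_embedding(ctx, embedding):
    v = np.asarray(embedding, dtype=float)
    return ts.ckks_vector(ctx, (v / np.linalg.norm(v)).tolist())

def encrypted_match_score(enc_probe, enc_gallery):
    # Dot product computed entirely on ciphertexts; only the secret-key holder
    # can decrypt the resulting similarity score.
    return enc_probe.dot(enc_gallery)

# score = encrypted_match_score(enc_a, enc_b).decrypt()[0]  # ~cosine similarity
```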

