Zernike Moments and Machine Learning Based Gender Classification Using Facial Images

Author(s):  
Vijayalakshmi G. V. Mahesh ◽  
Alex Noel Joseph Raj
2013 ◽  
Vol 221 ◽  
pp. 98-109 ◽  
Author(s):  
Wen-Sheng Chu ◽  
Chun-Rong Huang ◽  
Chu-Song Chen

2021 ◽  
Vol 8 ◽  
Author(s):  
Shivanand S. Gornale ◽  
Sathish Kumar ◽  
Abhijit Patil ◽  
Prakash S. Hiremath

Biometric security applications have been employed to provide higher security in several access control systems during the past few years. The handwritten signature is the most widely accepted behavioral biometric trait for authenticating documents such as letters, contracts, wills, and MOUs in day-to-day life. In this paper, a novel algorithm to detect the gender of individuals from images of their handwritten signatures is proposed. The proposed work is based on the fusion of textural and statistical features extracted from the signature images, with LBP and HOG features representing the texture. The writer's gender classification is carried out using machine learning techniques. The proposed technique is evaluated on our own dataset of 4,790 signatures and achieves encouraging accuracies of 96.17%, 98.72% and 100% for the k-NN, decision tree and Support Vector Machine classifiers, respectively. The proposed method is expected to be useful in the design of efficient computer vision tools for the authentication and forensic investigation of documents bearing handwritten signatures.
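As a rough illustration of the feature-fusion pipeline this abstract describes, the Python sketch below extracts a uniform-LBP histogram and a HOG descriptor from each signature image, concatenates them, and trains an SVM. The image size, the LBP/HOG parameters, and the synthetic stand-in data are assumptions for illustration only; the abstract does not specify the authors' exact settings.

```python
# Minimal sketch of LBP + HOG feature fusion with an SVM classifier.
# All parameters and the synthetic data below are illustrative assumptions.
import numpy as np
from skimage.feature import local_binary_pattern, hog
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

def extract_features(gray, P=8, R=1.0):
    """Concatenate a uniform-LBP histogram (texture) with a HOG descriptor."""
    lbp = local_binary_pattern(gray, P, R, method="uniform")
    lbp_hist, _ = np.histogram(lbp, bins=P + 2, range=(0, P + 2), density=True)
    hog_vec = hog(gray, orientations=9, pixels_per_cell=(8, 8),
                  cells_per_block=(2, 2), feature_vector=True)
    return np.concatenate([lbp_hist, hog_vec])

# Synthetic stand-ins for grayscale signature scans and gender labels;
# a real experiment would load the signature dataset here instead.
rng = np.random.default_rng(0)
signatures = rng.integers(0, 256, size=(40, 64, 128), dtype=np.uint8)
labels = rng.integers(0, 2, size=40)      # 0 = female, 1 = male (hypothetical)

X = np.stack([extract_features(img) for img in signatures])
X_train, X_test, y_train, y_test = train_test_split(
    X, labels, test_size=0.25, random_state=0)
clf = SVC(kernel="rbf").fit(X_train, y_train)
print("test accuracy:", clf.score(X_test, y_test))
```

Swapping `SVC` for `KNeighborsClassifier` or `DecisionTreeClassifier` reproduces the three-classifier comparison reported in the abstract.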


Author(s):  
Fadhlan Hafizhelmi Kamaru Zaman

Gender classification demonstrates high accuracy in many previous works. However, it does not generalize well in unconstrained settings and environments. Furthermore, many proposed Convolutional Neural Network (CNN) based solutions vary significantly in their characteristics and architectures, which calls for an optimal CNN architecture for this specific task. In this work, a hand-crafted, custom CNN architecture is proposed to distinguish between male and female facial images. This custom CNN requires smaller input image resolutions and significantly fewer trainable parameters than popular state-of-the-art networks such as GoogLeNet and AlexNet. It also employs batch normalization layers, which improve computational efficiency. In experiments on publicly available datasets such as LFW, CelebA and IMDB-WIKI, the proposed custom CNN delivered the fastest inference time in all tests, needing only 0.92 ms to classify 1,200 images on a GPU, 1.79 ms on a CPU, and 2.51 ms on a VPU. The custom CNN also performs on par with the state of the art, and even surpasses these methods on CelebA gender classification, where it delivered the best result at 96% accuracy. Moreover, in a more challenging cross-dataset setting, the custom CNN trained on the CelebA dataset gives the best gender classification accuracy on the IMDB and WIKI datasets, at 97% and 96% respectively.
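The abstract does not give the exact architecture, so the PyTorch sketch below is only a minimal illustration of the general idea: a compact CNN with batch normalization after each convolution, a small assumed input resolution of 64x64, and a two-way male/female output.

```python
# Minimal sketch of a compact gender-classification CNN with batch norm.
# Layer widths and the 64x64 input resolution are illustrative assumptions.
import torch
import torch.nn as nn

class SmallGenderCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.BatchNorm2d(32), nn.ReLU(),
            nn.MaxPool2d(2),                       # 64 -> 32
            nn.Conv2d(32, 64, 3, padding=1), nn.BatchNorm2d(64), nn.ReLU(),
            nn.MaxPool2d(2),                       # 32 -> 16
            nn.Conv2d(64, 128, 3, padding=1), nn.BatchNorm2d(128), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),               # global average pooling
        )
        self.classifier = nn.Linear(128, 2)        # male / female logits

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

model = SmallGenderCNN()
logits = model(torch.randn(8, 3, 64, 64))          # a batch of 8 face crops
print(logits.shape)                                # torch.Size([8, 2])
```

Global average pooling in place of large fully connected layers is one standard way to keep the trainable parameter count far below that of AlexNet-class networks, in line with the efficiency claims above.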


2007 ◽  
Vol 17 (06) ◽  
pp. 479-487 ◽  
Author(s):  
HUI-CHENG LIAN ◽  
BAO-LIANG LU

In this paper, we present a novel method for multi-view gender classification that considers both shape and texture information to represent facial images. The face area is divided into small regions, from which local binary pattern (LBP) histograms are extracted and concatenated into a single vector that efficiently represents a facial image. Following the idea of the local binary pattern, we propose a new feature extraction approach called multi-resolution LBP, which retains both fine and coarse local micro-patterns as well as the spatial information of facial images. The classification tasks in this work are performed by support vector machines (SVMs). The experiments clearly show the superiority of the proposed method over both support gray faces and support Gabor faces on the CAS-PEAL face database: a higher correct classification rate of 96.56% and a higher cross-validation average accuracy of 95.78% have been obtained. In addition, the simplicity of the proposed method leads to very fast feature extraction, and the regional histograms and fine-to-coarse description of facial images allow for multi-view gender classification.
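A minimal sketch of the multi-resolution LBP idea follows: LBP codes are computed at several (P, R) scales, per-region histograms are taken over a grid of face regions, and everything is concatenated into one vector, which would then be fed to an SVM. The 4x4 grid and the particular (P, R) pairs are illustrative assumptions, not the paper's reported settings.

```python
# Minimal sketch of multi-resolution LBP: per-region uniform-LBP histograms
# at several (P, R) scales, concatenated into a single feature vector.
import numpy as np
from skimage.feature import local_binary_pattern

def multi_resolution_lbp(gray, grid=(4, 4), scales=((8, 1), (16, 2), (24, 3))):
    h, w = gray.shape
    rh, rw = h // grid[0], w // grid[1]
    feats = []
    for P, R in scales:                       # fine -> coarse micro-patterns
        lbp = local_binary_pattern(gray, P, R, method="uniform")
        for i in range(grid[0]):              # per-region histograms retain
            for j in range(grid[1]):          # the spatial information
                region = lbp[i*rh:(i+1)*rh, j*rw:(j+1)*rw]
                hist, _ = np.histogram(region, bins=P + 2,
                                       range=(0, P + 2), density=True)
                feats.append(hist)
    return np.concatenate(feats)

face = np.random.default_rng(0).integers(0, 256, (128, 128), dtype=np.uint8)
print(multi_resolution_lbp(face).shape)       # one fixed-length vector per face
```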


2021 ◽  
Author(s):  
Raz Mohammad Sahar ◽  
T. Srinivasa Rao ◽  
S. Anuradha ◽  
B. Srinivasa Rao

Gender classification is among the significant problems in the area of signal processing; previously, the problem was handled using various image classification methods, which mainly involve extracting data from collections of images. Recently, however, researchers across the globe have shown interest in gender classification using voice features. A critical study of human vocal attributes shows that the classification of gender goes beyond just the frequency and pitch of a human voice. Feature selection, technically termed dimensionality reduction, is among the difficult problems encountered in machine learning. A similar obstacle is encountered when choosing gender-specific features, which serve an analytical purpose in determining a person's gender. This work examines the effectiveness and importance of classification algorithms for voice-based gender classification. Audio attributes such as pitch and frequency help in determining gender. Machine learning offers encouraging results for classification problems in all domains, and its algorithms can be evaluated using performance metrics. This paper evaluates five machine learning classification algorithms on the task of classifying gender from audio data: Gradient Boosting, Decision Trees, Random Forest, Neural Network, and Support Vector Machine. The major criterion in assessing any algorithm is performance, and the misclassification rate should be kept low in classification problems. In commercial markets, the location and gender of people are closely tied to targeted advertising such as AdSense. This research aims to compare various machine learning algorithms in order to find the best fit for gender identification from audio data.
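A hedged sketch of the five-classifier comparison, using scikit-learn: the feature matrix `X` (pitch, frequency, and other acoustic statistics) and the labels `y` are synthetic placeholders here, since the abstract does not name a specific dataset.

```python
# Minimal sketch comparing the five classifiers named above via
# cross-validated accuracy. X and y are synthetic placeholders.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.random((200, 20))       # stand-in acoustic features (pitch, etc.)
y = rng.integers(0, 2, 200)     # stand-in gender labels

models = {
    "Gradient Boosting": GradientBoostingClassifier(),
    "Decision Tree": DecisionTreeClassifier(),
    "Random Forest": RandomForestClassifier(),
    "Neural Network": MLPClassifier(max_iter=1000),
    "SVM": SVC(),
}
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5, scoring="accuracy")
    print(f"{name}: {scores.mean():.3f} +/- {scores.std():.3f}")
```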

