Gender Recognition Using Global Feature and Selected Local Feature from Bag of Facial Component

2021 ◽  
Author(s):  
Tjokorda Agung Budi Wirayuda ◽  
Rinaldi Munir ◽  
Achmad Imam Kistijantoro


Author(s):  
Olasimbo Ayodeji Arigbabu ◽  
Sharifah Mumtazah Syed Ahmad ◽  
Wan Azizun Wan Adnan ◽  
Saif Mahmood

Gender recognition from unconstrained face images is a challenging task due to the high degree of misalignment, pose, expression, and illumination variation. In previous works, gender recognition from unconstrained face images is approached by utilizing image alignment, exploiting multiple samples per individual to improve the learning ability of the classifier, or learning gender based on prior knowledge about the pose and demographic distributions of the dataset. However, image alignment increases the complexity and computation time, while the use of multiple samples or prior knowledge about the data distribution is unrealistic in practical applications. This paper presents an approach for gender recognition from unconstrained face images. Our technique exploits the robustness of local feature descriptors to photometric variations to extract the shape description of the 2D face image using a single sample image per individual. The results obtained from experiments on the Labeled Faces in the Wild (LFW) dataset demonstrate the effectiveness of the proposed method. The essence of this study is to investigate the most suitable functions and parameter settings for recognizing gender from unconstrained face images.
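As an illustration only (the abstract does not name the exact descriptor or classifier), a minimal sketch of such a pipeline might extract a HOG shape descriptor from a single unaligned face crop per individual and train a linear SVM on the resulting vectors; the descriptor choice, crop size, and label encoding below are assumptions.

```python
# Minimal sketch, not the authors' exact pipeline: HOG stands in for the
# unspecified local shape descriptor, and a linear SVM for the classifier.
import numpy as np
from skimage.feature import hog
from skimage.transform import resize
from sklearn.svm import LinearSVC

def describe_face(gray_face, size=(128, 128)):
    """Extract a shape descriptor from a single unaligned grayscale face crop."""
    face = resize(gray_face, size, anti_aliasing=True)
    return hog(face, orientations=9, pixels_per_cell=(8, 8),
               cells_per_block=(2, 2), block_norm='L2-Hys')

def train_gender_classifier(face_images, genders):
    """One sample per individual: each face contributes a single descriptor."""
    X = np.stack([describe_face(img) for img in face_images])
    clf = LinearSVC(C=1.0)      # parameter settings would be tuned on LFW splits
    clf.fit(X, genders)         # genders: 0 = female, 1 = male (assumed encoding)
    return clf
```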


2013 ◽  
Vol 756-759 ◽  
pp. 4026-4030 ◽  
Author(s):  
Jian Bin Lin ◽  
Ming Quan Zhou ◽  
Zhong Ke Wu

This paper presents a novel method to extract edge lines from point clouds of eroded, rough fractured fragments. Firstly, a principal component analysis based method is used to extract feature points, followed by clustering of these feature points. Secondly, a local feature-line fragment is constructed for each cluster, and afterwards each fragment is smoothed and pruned of noise. Thirdly, these separated local feature-line fragments are connected and bridged in order to eliminate the gaps caused by the eroded regions and to construct complete global feature lines. Fourthly, a final noise pruning process is performed. The output of this method is complete, smoothed edge feature lines. We illustrate the performance of our method on a number of real-world examples.
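A minimal sketch of the first (PCA-based) step is given below, assuming k-nearest-neighbour neighbourhoods and a simple surface-variation threshold; the clustering, bridging, and pruning stages of the paper are not reproduced.

```python
# Sketch of PCA-based feature-point detection only, under assumed parameters.
import numpy as np
from scipy.spatial import cKDTree

def detect_feature_points(points, k=20, threshold=0.05):
    """Mark points whose local surface variation suggests an edge/fracture line.

    points: (N, 3) array. Surface variation = l0 / (l0 + l1 + l2), where
    l0 <= l1 <= l2 are eigenvalues of the local neighbourhood covariance.
    """
    tree = cKDTree(points)
    _, idx = tree.query(points, k=k)
    variation = np.empty(len(points))
    for i, nbrs in enumerate(idx):
        local = points[nbrs] - points[nbrs].mean(axis=0)
        eigvals = np.linalg.eigvalsh(local.T @ local / k)  # ascending order
        variation[i] = eigvals[0] / max(eigvals.sum(), 1e-12)
    return variation > threshold  # boolean mask of candidate feature points
```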


2017 ◽  
Vol 2017 ◽  
pp. 1-14 ◽  
Author(s):  
Wei Sun ◽  
Xiaorui Zhang ◽  
Shunshun Shi ◽  
Jun He ◽  
Yan Jin

This study proposes a new vehicle type recognition method that combines global and local features via a two-stage classification. To extract a continuous and complete global feature, an improved Canny edge detection algorithm with smooth filtering and non-maximum suppression abilities is proposed. To extract the local feature from four partitioned key patches, a set of Gabor wavelet kernels with five scales and eight orientations is introduced. Different from single-stage classification, where all features are fed into one classifier simultaneously, the proposed two-stage classification strategy leverages two types of features and classifiers. In the first stage, a preliminary recognition of large vehicle versus small vehicle is conducted based on the global feature via a k-nearest neighbor probability classifier. Based on the preliminary result, the specific recognition of bus, truck, van, or sedan is achieved based on the local feature via a discriminative sparse-representation-based classifier. We evaluate the proposed method on public and established datasets involving various challenging cases, such as partial occlusion, poor illumination, and scale variation. Experimental results show that the proposed method outperforms existing state-of-the-art methods.
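The local-feature step can be illustrated with a bank of Gabor kernels at five scales and eight orientations; the kernel size, wavelength progression, and sigma ratio used below are assumptions, not values taken from the paper.

```python
# Illustrative Gabor bank (5 scales x 8 orientations) applied to one key patch.
import cv2
import numpy as np

def build_gabor_bank(scales=5, orientations=8, ksize=31):
    kernels = []
    for s in range(scales):
        lambd = 4.0 * (1.5 ** s)      # wavelength grows with scale (assumed)
        sigma = 0.56 * lambd          # common sigma/wavelength ratio (assumed)
        for o in range(orientations):
            theta = o * np.pi / orientations
            # args: ksize, sigma, theta, lambda, gamma (aspect), psi (phase)
            kernels.append(cv2.getGaborKernel((ksize, ksize), sigma, theta,
                                              lambd, 0.5, 0))
    return kernels

def gabor_features(patch, kernels):
    """Filter one key patch with every kernel; keep the mean response magnitude."""
    return np.array([np.abs(cv2.filter2D(patch, cv2.CV_64F, k)).mean()
                     for k in kernels])
```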


Author(s):  
Wen-Sheng Chen ◽  
Xiuli Dai ◽  
Binbin Pan ◽  
Yuan Yan Tang

In face recognition (FR), many algorithms utilize only a single type of facial feature, namely the global feature or the local feature, and cannot achieve good performance under the complicated variations of facial images. To extract robust facial features, this paper proposes a novel Semi-Supervised Discriminant Analysis (SSDA) criterion that nonlinearly combines the global feature and the local feature. To further enhance the discriminant power of SSDA features, the geometric distribution weight information of the training data is also incorporated into the proposed criterion. We use the SSDA criterion to design an iterative algorithm that determines the combination parameters and the optimal projection matrix automatically. Moreover, the combination parameters are guaranteed to fall into the interval [0, 1]. The proposed SSDA method is evaluated on the ORL, FERET, and CMU PIE face databases. The experimental results demonstrate that our method achieves superior performance.
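For illustration only, the sketch below fuses the between-class and within-class scatter of a global feature and a local feature with a fixed weight alpha in [0, 1] and solves a Fisher-style generalized eigenproblem; the paper's nonlinear combination, semi-supervised terms, geometric distribution weights, and iterative update of the parameters are not reproduced, and both feature spaces are assumed to share one dimensionality.

```python
# Hedged sketch: a simple weighted global/local discriminant criterion, not SSDA itself.
import numpy as np
from scipy.linalg import eigh

def scatter_matrices(X, y):
    """Between-class and within-class scatter for features X (n_samples, dim)."""
    mean = X.mean(axis=0)
    Sb = np.zeros((X.shape[1], X.shape[1]))
    Sw = np.zeros_like(Sb)
    for c in np.unique(y):
        Xc = X[y == c]
        d = (Xc.mean(axis=0) - mean)[:, None]
        Sb += len(Xc) * (d @ d.T)
        Sw += (Xc - Xc.mean(axis=0)).T @ (Xc - Xc.mean(axis=0))
    return Sb, Sw

def combined_projection(X_global, X_local, y, alpha=0.5, n_components=10):
    Sb_g, Sw_g = scatter_matrices(X_global, y)
    Sb_l, Sw_l = scatter_matrices(X_local, y)
    # Assumes global and local features have equal dimensionality.
    Sb = alpha * Sb_g + (1 - alpha) * Sb_l
    Sw = alpha * Sw_g + (1 - alpha) * Sw_l
    Sw += 1e-6 * np.eye(Sw.shape[0])          # regularize for numerical stability
    vals, vecs = eigh(Sb, Sw)                 # generalized symmetric eigenproblem
    return vecs[:, np.argsort(vals)[::-1][:n_components]]
```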


2020 ◽  
Vol 34 (07) ◽  
pp. 10567-10574
Author(s):  
Qingchao Chen ◽  
Yang Liu

Unsupervised Domain Adaptation (UDA) aims to learn and transfer generalized features from a labelled source domain to a target domain without any annotations. Existing methods align only the high-level representation, without exploiting the complex multi-class structure and the local spatial structure. This is problematic because 1) the model is prone to negative transfer when features from different classes are misaligned; and 2) missing the local spatial structure poses a major obstacle to performing fine-grained feature alignment. In this paper, we integrate the valuable information conveyed in the classifier prediction and the local feature maps into the global feature representation and then perform a single mini-max game to make it domain invariant. In this way, the domain-invariant feature not only describes the holistic representation of the original image but also preserves mode structure and fine-grained spatial structural information. The feature integration is achieved by estimating and maximizing the mutual information (MI) among the global feature, the local feature, and the classifier prediction simultaneously. As the MI is hard to measure directly in high-dimensional spaces, we adopt a new objective function that implicitly maximizes the MI via an effective sampling strategy and a discriminator design. Our STructure-Aware Feature Fusion (STAFF) network achieves state-of-the-art performance on various UDA datasets.
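As a hedged illustration of discriminator-based MI estimation (not the STAFF architecture itself), the sketch below computes a Jensen-Shannon-style lower bound on the MI between a global feature and a pooled local feature, using batch shuffling as the negative-sampling strategy.

```python
# Illustrative JSD-style MI lower bound between two feature vectors, estimated
# with a small discriminator; negatives come from shuffling within the batch.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MIDiscriminator(nn.Module):
    def __init__(self, global_dim, local_dim, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(global_dim + local_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 1))

    def forward(self, g, l):
        return self.net(torch.cat([g, l], dim=1))

def mi_lower_bound(disc, g, l):
    """Positive pairs: matched (g, l); negatives: l shuffled across the batch."""
    pos = disc(g, l)
    neg = disc(g, l[torch.randperm(l.size(0))])
    # Jensen-Shannon bound used in Deep InfoMax-style estimators
    return (-F.softplus(-pos)).mean() - F.softplus(neg).mean()
```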


2015 ◽  
Vol 14 ◽  
pp. 111-112 ◽  
Author(s):  
Olasimbo Ayodeji Arigbabu ◽  
Sharifah Mumtazah Syed Ahmad ◽  
Wan Azizun Wan Adnan ◽  
Saif Mahmood ◽  
Salman Yussof
