Efficient and accurate 3D modeling based on a novel local feature descriptor

2020 ◽ Vol 512 ◽ pp. 295-314 ◽ Author(s): Bao Zhao, Juntong Xi

2019 ◽ Vol 74 ◽ pp. 101771 ◽ Author(s): Masoumeh Rezaei, Mehdi Rezaeian, Vali Derhami, Ferdous Sohel, Mohammed Bennamoun

2016 ◽ Vol 194 ◽ pp. 157-167 ◽ Author(s): Pu Yan, Dong Liang, Jun Tang, Ming Zhu

Author(s): Olasimbo Ayodeji Arigbabu, Sharifah Mumtazah Syed Ahmad, Wan Azizun Wan Adnan, Saif Mahmood

Gender recognition from unconstrained face images is a challenging task because of the high degree of misalignment and the wide variation in pose, expression, and illumination. Previous work has approached the problem by aligning the images, by exploiting multiple samples per individual to improve the classifier's learning ability, or by learning gender with prior knowledge of the pose and demographic distributions of the dataset. However, image alignment increases computational complexity and time, while relying on multiple samples or on prior knowledge of the data distribution is unrealistic in practical applications. This paper presents an approach to gender recognition from unconstrained face images. The technique exploits the robustness of a local feature descriptor to photometric variations in order to extract a shape description of the 2D face image from a single sample image per individual. Experimental results on the Labeled Faces in the Wild (LFW) dataset demonstrate the effectiveness of the proposed method. The essence of this study is to identify the most suitable functions and parameter settings for recognizing gender from unconstrained face images.
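The abstract does not name the specific descriptor or classifier, so the following is only a minimal sketch of the pipeline it describes: extract a photometrically robust local shape descriptor from a single grayscale face crop per individual and train a linear classifier on the resulting vectors. HOG is assumed as the descriptor and a linear SVM as the classifier; the function names and parameter values below are illustrative, not the authors' settings.

```python
# Sketch: single-sample-per-subject gender classification from face crops.
# Descriptor (HOG), image size, and classifier (linear SVM) are assumptions,
# not the exact pipeline of the paper.
import numpy as np
from skimage.feature import hog
from skimage.transform import resize
from sklearn.svm import LinearSVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

def describe_face(gray_face, size=(128, 128)):
    """Resize a grayscale face crop and extract a HOG shape descriptor."""
    face = resize(gray_face, size, anti_aliasing=True)
    return hog(face, orientations=9, pixels_per_cell=(16, 16),
               cells_per_block=(2, 2), block_norm='L2-Hys')

def train_gender_classifier(faces, labels):
    """faces: list of 2D grayscale arrays, labels: binary gender labels."""
    X = np.stack([describe_face(f) for f in faces])
    y = np.asarray(labels)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2,
                                              stratify=y, random_state=0)
    clf = LinearSVC(C=1.0).fit(X_tr, y_tr)
    print("held-out accuracy:", accuracy_score(y_te, clf.predict(X_te)))
    return clf
```

In practice the descriptor parameters (cell size, block size, orientation bins) and the SVM regularization constant would be the "functions and parameter settings" the study sets out to tune.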

