Client Specific Image Gradient Orientation for Unimodal and Multimodal Face Representation

Author(s): He-Feng Yin, Xiao-Jun Wu, Xiao-Qi Sun

2021, Vol 11 (1)
Author(s): Yunjun Nam, Takayuki Sato, Go Uchida, Ekaterina Malakhova, Shimon Ullman, ...

Abstract: Humans recognize individual faces regardless of variation in the facial view. The view-tuned face neurons in the inferior temporal (IT) cortex are regarded as the neural substrate for view-invariant face recognition. This study approximated the visual features encoded by these neurons as combinations of local orientations and colors derived from natural image fragments. The resulting features reproduced the preference of these neurons for particular facial views. We also found that faces of one identity were separable from the faces of other identities in a space where each axis represents one of these features. These results suggest that view-invariant face representation is established by combining view-sensitive visual features. The face representation with these features suggests that, with respect to view-invariant face representation, the seemingly complex and deeply layered ventral visual pathway can be approximated by a shallow network composed of layers of low-level processing for local orientations and colors (V1/V2 level) and layers that detect particular sets of low-level elements derived from natural image fragments (IT level).
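The shallow-network account lends itself to a compact illustration. The sketch below is not the authors' model; it only shows, under assumed choices (Sobel gradients for local orientation, raw RGB channels for color, normalized correlation against stored fragment templates), how a two-stage pipeline of V1/V2-level maps followed by IT-level fragment detectors could be organized. The function names and the fragment-tuple layout are hypothetical.

```python
import numpy as np
from scipy.ndimage import sobel

def low_level_features(img_rgb):
    """V1/V2-level stage (assumed form): local gradient orientation,
    gradient magnitude, and the raw color channels.

    img_rgb: H x W x 3 float array in [0, 1].
    Returns an H x W x 5 feature stack.
    """
    gray = img_rgb.mean(axis=2)
    gx = sobel(gray, axis=1)
    gy = sobel(gray, axis=0)
    orientation = np.arctan2(gy, gx)      # local edge orientation
    magnitude = np.hypot(gx, gy)
    return np.dstack([orientation, magnitude,
                      img_rgb[..., 0], img_rgb[..., 1], img_rgb[..., 2]])

def fragment_responses(features, fragments):
    """IT-level stage (assumed form): response of each stored
    natural-image fragment as normalized correlation at its
    preferred location.

    fragments: list of (row, col, template) tuples, where template is an
    h x w x 5 patch in the same feature space and fits inside the image.
    """
    responses = []
    for row, col, tmpl in fragments:
        patch = features[row:row + tmpl.shape[0], col:col + tmpl.shape[1]]
        num = float((patch * tmpl).sum())
        den = np.linalg.norm(patch) * np.linalg.norm(tmpl) + 1e-8
        responses.append(num / den)
    return np.array(responses)
```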


1998, Vol 06 (03), pp. 265-279
Author(s): Shimon Edelman

The paper outlines a computational approach to face representation and recognition, inspired by two major features of biological perceptual systems: graded-profile overlapping receptive fields, and object-specific responses in the higher visual areas. This approach, according to which a face is ultimately represented by its similarities to a number of reference faces, led to the development of a comprehensive theory of object representation in biological vision, and to its subsequent psychophysical exploration and computational modeling.
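A minimal sketch of the similarity-to-reference-faces idea follows. It assumes faces are already encoded as feature vectors and uses cosine similarity with nearest-neighbour matching; these particular choices, and the function names, are illustrative assumptions rather than Edelman's implementation.

```python
import numpy as np

def similarity_representation(probe, references):
    """Encode a face by its cosine similarities to a set of reference faces.

    probe:      1-D feature vector of the face to encode.
    references: list of 1-D feature vectors, one per reference face.
    Returns one similarity value per reference face.
    """
    refs = np.stack(references)
    num = refs @ probe
    den = np.linalg.norm(refs, axis=1) * np.linalg.norm(probe) + 1e-8
    return num / den

def recognize(probe, gallery, references):
    """Nearest neighbour in the reference-similarity space: both the probe
    and every gallery face are mapped to similarity vectors first."""
    probe_sim = similarity_representation(probe, references)
    gallery_sims = [similarity_representation(g, references) for g in gallery]
    dists = [np.linalg.norm(probe_sim - g_sim) for g_sim in gallery_sims]
    return int(np.argmin(dists))   # index of the best-matching gallery face
```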


Author(s): Bin Xu, Yuan Yan Tang, Bin Fang, Zhao Wei Shang

In this paper, a novel approach in the image gradient domain, called multi-scale gradient faces (MGF), is proposed to extract a multi-scale illumination-insensitive measure for face recognition. MGF applies multi-scale analysis to image gradient information, which uncovers the underlying structure of images and preserves detail as far as possible while removing varying lighting. The proposed approach provides state-of-the-art performance on Extended Yale B and PIE: recognition rates of 99.11% on the PIE database and 99.38% on Yale B, which outperform most existing approaches. Furthermore, experimental results on a noise-corrupted Yale B validate that MGF is more robust to image noise.
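The following sketch illustrates one plausible reading of the multi-scale gradient idea: smooth the image at several Gaussian scales and take the gradient-orientation map at each, since orientation is a ratio of derivatives and is largely unaffected by slowly varying illumination. The chosen scales, the arctan formulation, and the stacking of per-scale maps are assumptions; the paper's exact MGF construction may differ.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def multiscale_gradient_orientation(gray, sigmas=(0.75, 1.5, 3.0)):
    """Gradient-orientation maps at several Gaussian scales (assumed scales).

    gray: 2-D float image.
    Returns an H x W x len(sigmas) stack of orientation maps; because
    orientation is computed from a ratio of derivatives, slowly varying
    illumination largely cancels out.
    """
    maps = []
    for sigma in sigmas:
        smooth = gaussian_filter(gray, sigma)
        gy, gx = np.gradient(smooth)          # row and column derivatives
        maps.append(np.arctan2(gy, gx))       # orientation in (-pi, pi]
    return np.dstack(maps)
```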


Sensors, 2021, Vol 21 (19), pp. 6525
Author(s): Beiwei Zhang, Yudong Zhang, Jinliang Liu, Bin Wang

Gesture recognition has been studied for decades and still remains an open problem. One important reason is that the features representing gestures are often insufficient, which may lead to poor performance and weak robustness. Therefore, this work aims at a comprehensive and discriminative feature for hand gesture recognition. Here, a distinctive Fingertip Gradient orientation with Finger Fourier (FGFF) descriptor and modified Hu moments are proposed on the platform of a Kinect sensor. Firstly, two algorithms are designed to extract the fingertip-emphasized features, including the palm center, fingertips, and their gradient orientations, followed by the finger-emphasized Fourier descriptor to construct the FGFF descriptor. Then, modified Hu moment invariants with much lower exponents are used to encode the contour-emphasized structure of the hand region. Finally, a weighted AdaBoost classifier is built on the finger-earth mover's distance and SVM models to realize hand gesture recognition. Extensive experiments on a ten-gesture dataset compared the proposed algorithm with three benchmark methods to validate its performance, and encouraging results were obtained for both recognition accuracy and efficiency.
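As a small illustration of the contour-emphasized stage, the sketch below computes the standard Hu moment invariants from a segmented hand mask with OpenCV. The "modified" Hu moments with lower exponents described in the paper are not reproduced here; the segmentation input and the log-scaling step are assumptions.

```python
import cv2
import numpy as np

def contour_hu_features(mask):
    """Contour-emphasized shape features for a segmented hand region.

    mask: binary uint8 image of the hand (e.g. thresholded Kinect depth).
    Returns the seven standard Hu moment invariants, log-scaled so their
    very different magnitudes become comparable.
    """
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        raise ValueError("no hand contour found in mask")
    hand = max(contours, key=cv2.contourArea)       # largest blob = hand
    hu = cv2.HuMoments(cv2.moments(hand)).flatten()
    return -np.sign(hu) * np.log10(np.abs(hu) + 1e-30)
```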

