Kernel-PCA for face recognition in different color spaces

Author(s): Hala M. Ebied

Author(s): Peichung Shih, Chengjun Liu

Content-based face image retrieval is concerned with the computer retrieval of face images (of a given subject) based on geometric or statistical features automatically derived from those images. It is well known that color spaces provide powerful information for image indexing and retrieval by means of color invariants, color histograms, color texture, and so on. This paper comparatively assesses the performance of content-based face image retrieval in different color spaces using a standard algorithm, Principal Component Analysis (PCA), which has become a popular algorithm in the face recognition community. In particular, we comparatively assess 12 color spaces (RGB, HSV, YUV, YCbCr, XYZ, YIQ, L*a*b*, U*V*W*, L*u*v*, I1I2I3, HSI, and rgb) by evaluating seven color configurations for each color space. A color configuration is defined by an individual color component image or a combination of color component images. Taking the RGB color space as an example, the possible color configurations are R, G, B, RG, RB, GB, and RGB. Experimental results using 600 FERET color images corresponding to 200 subjects and 456 FRGC (Face Recognition Grand Challenge) color images of 152 subjects show that some color configurations, such as YV in the YUV color space and YI in the YIQ color space, help improve face retrieval performance.
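As an illustration of the retrieval pipeline described above, the following is a minimal sketch (not the authors' code) of building a YV color configuration from RGB images and matching faces in a PCA subspace. The YUV conversion coefficients are the standard analog ones; the synthetic images, subspace dimension, and nearest-neighbor matcher are stand-in assumptions rather than details taken from the paper.

```python
import numpy as np

# Minimal sketch: form the "YV" color configuration from RGB images,
# project with PCA, and retrieve by nearest neighbor. Synthetic random
# images stand in for FERET/FRGC data.

def rgb_to_yuv(img):
    """Convert an H x W x 3 RGB image (floats in [0, 1]) to YUV."""
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    y = 0.299 * r + 0.587 * g + 0.114 * b
    u = 0.492 * (b - y)
    v = 0.877 * (r - y)
    return np.stack([y, u, v], axis=-1)

def yv_configuration(img):
    """Concatenate the Y and V component images into one feature vector."""
    yuv = rgb_to_yuv(img)
    return np.concatenate([yuv[..., 0].ravel(), yuv[..., 2].ravel()])

def pca_fit(X, n_components):
    """PCA via SVD on mean-centered data; rows of X are samples."""
    mean = X.mean(axis=0)
    _, _, vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, vt[:n_components].T          # projection basis

def pca_project(X, mean, basis):
    return (X - mean) @ basis

# Toy gallery: 20 random 32x32 "face" images from 4 subjects.
rng = np.random.default_rng(0)
images = rng.random((20, 32, 32, 3))
labels = np.repeat(np.arange(4), 5)

X = np.array([yv_configuration(im) for im in images])
mean, basis = pca_fit(X, n_components=10)
Z = pca_project(X, mean, basis)

# Retrieve the closest gallery image (excluding itself) for probe 0.
d = np.linalg.norm(Z - Z[0], axis=1)
d[0] = np.inf
print("closest match label:", labels[int(np.argmin(d))])
```

Other configurations mentioned in the abstract (e.g., YI in the YIQ color space) can be formed the same way by swapping in the corresponding component images.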


2004, Vol. 10 (1), pp. 23-30
Author(s): Maojun Zhang, Nicolas D. Georganas

2001
Author(s): J. Birgitta Martinkauppi, Maricor N. Soriano, Mika V. Laaksonen

IEEE Access, 2019, Vol. 7, pp. 27389-27400
Author(s): Wilson Castro, Jimy Oblitas, Miguel De-La-Torre, Carlos Cotrina, Karen Bazan, ...

2013, Vol. 64 (3), pp. 35-38
Author(s): Sudeep D. Thepade, Krishnasagar Subhedarpage, Ankur A. Mali, Tushar S. Vaidya

Author(s): Jae Young Choi

Recently, considerable research effort has been devoted to the effective use of facial color information for improved recognition performance. Of all color-based face recognition (FR) methods, the most widely used approach is color FR with input-level fusion. In this method, augmented input vectors are first generated by concatenating different color components (carrying both luminance and chrominance information) in column order at the input level, and a feature subspace is then trained on the set of augmented input vectors. In practical applications, however, a testing image may be captured as a grayscale image rather than as a color image, mainly due to heterogeneous image acquisition environments. A grayscale testing image causes a so-called dimensionality mismatch between the trained feature subspace and the testing input vector. This disparity in dimensionality degrades FR performance and even imposes a significant restriction on carrying out FR operations in practical color FR systems. To resolve the dimensionality mismatch, we propose a novel approach for estimating a new feature subspace suitable for recognizing a grayscale testing image. In particular, the new feature subspace is estimated from a given feature subspace created using color training images. The effectiveness of the proposed solution was tested on four public face databases (DBs): CMU, FERET, XM2VTSDB, and ORL. Extensive comparative experiments showed that the proposed solution effectively resolves the dimensionality mismatch that arises in real-life color FR systems.
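To make the dimensionality mismatch concrete, the sketch below (assumed details, not the author's implementation) builds augmented color input vectors by concatenating the R, G, and B component images, trains a PCA subspace on them, and then shows that a grayscale probe has the wrong dimension for that subspace. The channel-replication workaround at the end is only an illustration of the problem; the paper instead estimates a new feature subspace from the color-trained one.

```python
import numpy as np

# Sketch of input-level color fusion and the dimensionality mismatch it
# creates for grayscale probes. The replication workaround at the end is
# only an illustration, not the subspace-estimation method proposed here.

def augment_color(img):
    """Stack the R, G, and B component images into one augmented input vector."""
    return np.concatenate([img[..., c].ravel() for c in range(3)])

def pca_fit(X, n_components):
    """PCA via SVD on mean-centered data; rows of X are samples."""
    mean = X.mean(axis=0)
    _, _, vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, vt[:n_components].T

rng = np.random.default_rng(0)
train_color = rng.random((30, 32, 32, 3))                  # toy color training set
X = np.array([augment_color(im) for im in train_color])    # shape (30, 3072)
mean, basis = pca_fit(X, n_components=15)                  # basis expects 3072-dim input

gray_probe = rng.random((32, 32)).ravel()                  # 1024-dim grayscale probe
print("trained dim:", basis.shape[0], "probe dim:", gray_probe.size)  # mismatch

# Naive workaround: replicate the grayscale image into all three channel
# blocks so the color-trained basis can at least be applied.
pseudo_color = np.concatenate([gray_probe] * 3)
features = (pseudo_color - mean) @ basis
print("projected feature length:", features.shape[0])
```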

