face subspace
Recently Published Documents


TOTAL DOCUMENTS: 6 (FIVE YEARS: 3)

H-INDEX: 2 (FIVE YEARS: 0)

2021 ◽  
Author(s):  
Ali Abbasi ◽  
Mohammad Rahmati

Over the past few decades, numerous attempts have been made to recover a high-resolution (HR) facial image from its low-resolution (LR) counterpart, a task commonly referred to as face hallucination. Despite the impressive performance achieved by position-patch and deep learning-based methods, most of these techniques are still unable to recover identity-specific facial features. The former group of algorithms often produces blurry and over-smoothed outputs, particularly at higher levels of degradation, whereas the latter can generate faces that bear little resemblance to the individuals in the input images. In this paper, a novel face super-resolution approach is introduced in which the hallucinated face is constrained to lie in a subspace spanned by the available training faces. Thanks to this face subspace prior, and in contrast to the majority of existing face hallucination techniques, the reconstruction favors recovering person-specific facial features rather than merely increasing quantitative image scores. Furthermore, inspired by recent advances in 3D face reconstruction, an efficient 3D dictionary alignment scheme is presented, which enables the algorithm to handle low-resolution faces captured in uncontrolled conditions. In extensive experiments on several well-known face datasets, the proposed algorithm generates detailed, close-to-ground-truth results and outperforms state-of-the-art face hallucination algorithms by significant margins in both quantitative and qualitative evaluations.
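The core constraint described in this abstract, forcing the hallucinated face to lie in a subspace spanned by the training faces, can be illustrated with a minimal regularized least-squares sketch in Python. This is not the authors' actual optimization; the dictionary variables, the ridge term, and the whole-image (rather than position-patch) formulation are assumptions made purely for illustration.

```python
import numpy as np

def hallucinate(lr_face, lr_dict, hr_dict, lam=0.01):
    """Toy face-subspace-prior reconstruction (illustrative sketch only).

    lr_face : (d_lr,)   vectorized low-resolution input face
    lr_dict : (d_lr, n) columns are vectorized LR training faces
    hr_dict : (d_hr, n) columns are the corresponding HR training faces
    lam     : ridge regularization strength (assumed, not from the paper)
    """
    # Solve a regularized least-squares problem for combination weights w,
    # so that lr_dict @ w approximates the observed LR face.
    n = lr_dict.shape[1]
    gram = lr_dict.T @ lr_dict + lam * np.eye(n)
    w = np.linalg.solve(gram, lr_dict.T @ lr_face)

    # Transferring the same weights to the HR dictionary guarantees that the
    # output lies in the subspace spanned by the HR training faces.
    return hr_dict @ w

# Toy usage with random arrays standing in for aligned face images.
rng = np.random.default_rng(0)
lr_dict = rng.standard_normal((32 * 32, 200))
hr_dict = rng.standard_normal((128 * 128, 200))
lr_face = lr_dict @ rng.standard_normal(200)
print(hallucinate(lr_face, lr_dict, hr_dict).shape)  # (16384,)
```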


2021 ◽  
Author(s):  
Liang She ◽  
Marcus K. Benna ◽  
Yuelin Shi ◽  
Stefano Fusi ◽  
Doris Y. Tsao

The ability to recognize familiar visual objects is critical to survival. Neurons in inferotemporal (IT) cortex represent the percept of visual objects using a distributed axis code. However, the network code for the memory of visual objects remains elusive. Here, we measured neuronal responses to familiar and unfamiliar faces in two face patches, AM and PR. In both areas, familiar and unfamiliar faces were represented in distinct subspaces. The familiar face subspace was shifted relative to the unfamiliar face subspace at short latency and then distorted to increase neural distances between familiar faces at long latency. Our results suggest that memories of familiar faces are represented in IT and perirhinal cortex by a distinct long-latency code that is optimized to distinguish familiar identities. One Sentence Summary: Familiar and unfamiliar faces are represented in distinct neural subspaces, with the former represented by a unique long-latency geometry.
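A generic way to quantify the claim that two sets of population responses occupy distinct subspaces is to compare their principal subspaces through principal angles. The sketch below applies that generic analysis to simulated data; it is not the authors' pipeline, and the use of PCA, the subspace dimensionality, and the random response matrices are assumptions.

```python
import numpy as np
from scipy.linalg import subspace_angles

def principal_subspace(responses, k=5):
    """Orthonormal basis for the top-k principal subspace in neuron space.

    responses : (n_stimuli, n_neurons) trial-averaged population responses
    """
    centered = responses - responses.mean(axis=0)
    # Right singular vectors are the principal axes in neuron space.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return vt[:k].T  # shape (n_neurons, k)

# Simulated responses standing in for recordings from a face patch.
rng = np.random.default_rng(1)
familiar = rng.standard_normal((60, 100))
unfamiliar = rng.standard_normal((60, 100))

# Principal angles near 0 mean the two subspaces overlap; angles near 90
# degrees indicate the face sets are encoded along largely distinct axes.
angles = subspace_angles(principal_subspace(familiar),
                         principal_subspace(unfamiliar))
print(np.degrees(angles))
```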


2011 ◽  
Vol 2011 ◽  
pp. 1-15 ◽  
Author(s):  
Pang Ying Han ◽  
Andrew Teoh Beng Jin ◽  
Lim Heng Siong

Graph-based subspace learning is a class of dimensionality reduction techniques used in face recognition. These techniques reveal, via a linear projection, the local manifold structure of face data hidden in the image space. However, real-world face data may be too complex to measure reliably due to both external imaging noise and intra-class variations of the face images. Hence, features extracted by graph-based techniques can be noisy, and an appropriate weight should be imposed on the data features for better discrimination. In this paper, a piecewise weighting function, known as the Eigenvector Weighting Function (EWF), is proposed and implemented in two graph-based subspace learning techniques, namely Locality Preserving Projection and Neighbourhood Preserving Embedding. Specifically, the computed projection subspace of the learning approach is decomposed into three partitions: a subspace due to intra-class variations, an intrinsic face subspace, and a subspace attributed to imaging noise. Projected data features are weighted differently in these subspaces to emphasize the intrinsic face subspace while penalizing the other two. Experiments on the FERET and FRGC databases demonstrate the promising performance of the proposed technique.
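The weighting idea can be sketched as follows: once a graph-based projection such as LPP or NPE has been learned, its eigenvectors are split into three bands and the projected features in each band are scaled by a different weight. In the sketch below, the cumulative-energy band boundaries and the weight values are illustrative assumptions, not the published EWF, and learning the projection itself is omitted.

```python
import numpy as np

def weight_projected_features(projections, eigvals,
                              bounds=(0.1, 0.9), weights=(0.2, 1.0, 0.2)):
    """Scale projected features by the eigenvalue band they belong to.

    projections : (n_samples, m) data projected onto m eigenvectors
    eigvals     : (m,) eigenvalues, in the same order as the eigenvectors
    bounds      : cumulative-energy cut points separating the three assumed
                  partitions (intra-class variation / intrinsic face / noise)
    weights     : multiplicative weight applied to each partition
    """
    energy = np.cumsum(eigvals) / np.sum(eigvals)
    w = np.empty_like(eigvals)
    w[energy <= bounds[0]] = weights[0]                           # intra-class variation
    w[(energy > bounds[0]) & (energy <= bounds[1])] = weights[1]  # intrinsic face subspace
    w[energy > bounds[1]] = weights[2]                            # imaging noise
    # Emphasize the intrinsic face subspace while penalizing the other two.
    return projections * w

# Toy usage with random projected features and descending eigenvalues.
rng = np.random.default_rng(2)
feats = rng.standard_normal((10, 50))
eigvals = np.sort(rng.random(50))[::-1]
print(weight_projected_features(feats, eigvals).shape)  # (10, 50)
```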


2011 ◽  
pp. 51-77 ◽  
Author(s):  
Wei Bian ◽  
Dacheng Tao
