face datasets
Recently Published Documents


TOTAL DOCUMENTS

52
(FIVE YEARS 15)

H-INDEX

8
(FIVE YEARS 0)

Author(s):  
Ziyi Kou ◽  
Lanyu Shang ◽  
Huimin Zeng ◽  
Yang Zhang ◽  
Dong Wang


2021 ◽  
Author(s):  
Ali Abbasi ◽  
Mohammad Rahmati

Over the past few decades, numerous attempts have been made to address the problem of recovering a high-resolution (HR) facial image from its low-resolution (LR) counterpart, a task commonly referred to as face hallucination. Despite the impressive performance achieved by position-patch and deep learning-based methods, most of these techniques are still unable to recover identity-specific facial features. The former group of algorithms often produces blurry and over-smoothed outputs, particularly at higher levels of degradation, whereas the latter generates faces that sometimes bear little resemblance to the individuals in the input images. In this paper, a novel face super-resolution approach is introduced in which the hallucinated face is forced to lie in a subspace spanned by the available training faces. In contrast to the majority of existing face hallucination techniques, this face subspace prior steers the reconstruction toward recovering person-specific facial features rather than merely improving quantitative image-quality scores. Furthermore, inspired by recent advances in 3D face reconstruction, an efficient 3D dictionary alignment scheme is presented, which enables the algorithm to handle low-resolution faces captured in uncontrolled conditions. In extensive experiments on several well-known face datasets, the proposed algorithm generates detailed, close-to-ground-truth results and outperforms state-of-the-art face hallucination algorithms by significant margins in both quantitative and qualitative evaluations.
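As a rough illustration of the face subspace prior described above, the HR output can be written as a linear combination of HR training faces whose weights are fitted to the LR input under an assumed degradation model. This is only a minimal sketch; the variable names, ridge regularizer, and degradation operator are assumptions, and the paper's 3D dictionary alignment is not reproduced.

```python
import numpy as np

def hallucinate(lr_face, hr_dict, downsample, lam=0.01):
    """Sketch of subspace-constrained face hallucination.

    lr_face   : flattened LR probe vector
    hr_dict   : (n_pixels_hr, n_train) matrix of flattened HR training faces
    downsample: callable mapping an HR vector to its LR counterpart (assumed known)
    """
    # Pass each HR training face through the assumed degradation model.
    lr_dict = np.stack(
        [downsample(hr_dict[:, i]) for i in range(hr_dict.shape[1])], axis=1
    )
    # Ridge-regularised least squares for the combination weights w:
    #   min_w ||lr_dict @ w - lr_face||^2 + lam * ||w||^2
    A = lr_dict.T @ lr_dict + lam * np.eye(lr_dict.shape[1])
    w = np.linalg.solve(A, lr_dict.T @ lr_face)
    # The hallucinated face is constrained to the span of the training faces.
    return hr_dict @ w
```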





2021 ◽  
Author(s):  
Dinh Tan Nguyen ◽  
Cao Truong Tran ◽  
Trung Thanh Nguyen ◽  
Cao Bao Hoang ◽  
Van Phu Luu ◽  
...  


2021 ◽  
pp. 1-13
Author(s):  
Junying Chen ◽  
Shipeng Liu ◽  
Liang Zhao ◽  
Dengfeng Chen ◽  
Weihua Zhang

Because small objects occupy few pixels in an image and are difficult to recognize, small object detection has long been a challenging problem in computer vision. To address the low sensitivity and poor detection performance of YOLOv3 on small objects, this paper proposes AFYOLO, a detector that is more sensitive to small objects. First, a DenseNet module is introduced into the low-level layers of the backbone to enhance the transmission of object information. At the same time, a new mechanism combining channel attention and spatial attention is introduced to improve the feature-extraction ability of the backbone. Second, a new feature pyramid network (FPN) is proposed to better capture the features of small objects. Finally, ablation studies on the ImageNet classification task and the MS-COCO object detection task verify the effectiveness of the proposed attention module and FPN. Results on the WIDER FACE dataset show that the AP of the proposed method is 11.89% higher than that of YOLOv3 and 8.59% higher than that of YOLOv4. These results show that AFYOLO has a stronger ability to detect small objects.
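The combined channel and spatial attention mentioned in the abstract can be sketched as a CBAM-style block. AFYOLO's exact attention design, placement, and hyperparameters are not given here, so the module below is an illustrative assumption rather than the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ChannelSpatialAttention(nn.Module):
    """Illustrative channel-then-spatial attention block (CBAM-style)."""

    def __init__(self, channels, reduction=16, kernel_size=7):
        super().__init__()
        # Channel attention: shared MLP over average- and max-pooled descriptors.
        self.mlp = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, 1, bias=False),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1, bias=False),
        )
        # Spatial attention: conv over channel-wise mean and max maps.
        self.spatial = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2, bias=False)

    def forward(self, x):
        avg = self.mlp(F.adaptive_avg_pool2d(x, 1))
        mx = self.mlp(F.adaptive_max_pool2d(x, 1))
        x = x * torch.sigmoid(avg + mx)                    # channel attention
        mean_map = x.mean(dim=1, keepdim=True)
        max_map, _ = x.max(dim=1, keepdim=True)
        attn = torch.sigmoid(self.spatial(torch.cat([mean_map, max_map], dim=1)))
        return x * attn                                    # spatial attention
```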



Author(s):  
Elham Vahdati ◽  
Ching Y. Suen

Automatic analysis of facial beauty has emerged as a computer vision problem in recent years. Facial beauty prediction (FBP) aims to develop a human-like model that automatically predicts facial attractiveness. In this study, we present and evaluate a face attractiveness prediction approach that uses facial parts together with a multi-task learning scheme. First, a deep convolutional neural network (CNN) pre-trained on massive face datasets, capable of automatically learning high-level face representations, is utilized for face attractiveness prediction. Next, we extend our deep model to other facial attribute recognition tasks: a multi-task learning scheme allows the model to learn optimal shared features for three correlated tasks, namely facial beauty assessment, gender recognition, and ethnicity identification. To further improve attractiveness estimation accuracy, specific regions of the face image (left eye, nose, and mouth) as well as the whole face are fed into multi-stream CNNs (three two-stream networks), where each two-stream network takes a facial part and the full face as input. Extensive experiments conducted on the SCUT-FBP5500 benchmark dataset show that our approach achieves a significant improvement in accuracy over other state-of-the-art methods.
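A minimal sketch of the multi-task idea, assuming a shared feature vector from a pre-trained face CNN and three task-specific heads. The layer sizes, loss weights, and function names below are illustrative assumptions, and the two-stream part-based inputs are not modeled.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiTaskHead(nn.Module):
    """Three task heads on top of a shared face feature vector."""

    def __init__(self, feat_dim=512, n_ethnicities=4):
        super().__init__()
        self.beauty = nn.Linear(feat_dim, 1)            # attractiveness score (regression)
        self.gender = nn.Linear(feat_dim, 2)            # gender recognition
        self.ethnicity = nn.Linear(feat_dim, n_ethnicities)

    def forward(self, shared_feat):
        return self.beauty(shared_feat), self.gender(shared_feat), self.ethnicity(shared_feat)

def multitask_loss(outputs, targets, w=(1.0, 0.5, 0.5)):
    # Joint objective: weighted sum of the three task losses over shared features.
    beauty, gender, eth = outputs
    return (w[0] * F.mse_loss(beauty.squeeze(1), targets["beauty"])
            + w[1] * F.cross_entropy(gender, targets["gender"])
            + w[2] * F.cross_entropy(eth, targets["ethnicity"]))
```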



Author(s):  
A. Taneja ◽  
K.S. Yadav ◽  
S. Patra ◽  
Yogesh


Author(s):  
Xingbo Dong ◽  
Soohyong Kim ◽  
Zhe Jin ◽  
Jung Yeon Hwang ◽  
Sangrae Cho ◽  
...  

Biometric cryptosystems such as fuzzy vaults represent one of the most popular approaches for secret and biometric template protection. However, they are solely designed for biometric verification, where the user is required to input both identity credentials and biometrics. Several practical questions related to the implementation of biometric cryptosystems remain open, especially in regard to biometric template protection. In this article, we propose a face cryptosystem for identification (FCI) in which only biometric input is needed. Our FCI is composed of a one-to-N search subsystem for template protection and a one-to-one match chaff-less fuzzy vault (CFV) subsystem for secret protection. The first subsystem stores N facial features, which are protected by index-of-maximum (IoM) hashing, enhanced by a fusion module for search accuracy. When a face image of the user is presented, the subsystem returns the top k matching scores and activates the corresponding vaults in the CFV subsystem. Then, one-to-one matching is applied to the k vaults based on the probe face, and the identifier or secret associated with the user is retrieved from the correct matched vault. We demonstrate that coupling between the IoM hashing and the CFV resolves several practical issues related to fuzzy vault schemes. The FCI system is evaluated on three large-scale public unconstrained face datasets (LFW, VGG2, and IJB-C) in terms of its accuracy, computation cost, template protection criteria, and security.
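For illustration, index-of-maximum (IoM) hashing over a face embedding can be sketched as below. The Gaussian random projections and the simple agreement score are assumptions made for the sketch; the paper's fusion module and chaff-less fuzzy vault subsystem are not shown.

```python
import numpy as np

def iom_hash(feature, m=64, q=16, seed=0):
    """Hash a face embedding into m discrete indices via random projections."""
    rng = np.random.default_rng(seed)            # keyed projection matrices
    codes = np.empty(m, dtype=np.int64)
    for i in range(m):
        proj = rng.standard_normal((q, feature.shape[0])) @ feature
        codes[i] = int(np.argmax(proj))           # keep only the winning index
    return codes                                  # discrete, hard-to-invert template

def iom_score(codes_a, codes_b):
    # Matching score: fraction of hash positions that agree.
    return float(np.mean(codes_a == codes_b))
```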



Author(s):  
Hung Phuoc Truong ◽  
Thanh Phuong Nguyen ◽  
Yong-Guk Kim

We present a novel framework for efficient and robust facial feature representation based on the Local Binary Pattern (LBP), called the Weighted Statistical Binary Pattern, in which the descriptors exploit straight-line topologies along different directions. The input image is first decomposed into mean and variance moments. A new variance moment that contains distinctive facial features is obtained by extracting its k-th root. Then, Sign and Magnitude components are constructed along four different directions from the mean moment, and a weighting scheme based on the new variance moment is applied to each component. Finally, the weighted histograms of the Sign and Magnitude components are concatenated to build a novel histogram of Complementary LBP along different directions. A comprehensive evaluation on six public face datasets suggests that the proposed framework outperforms state-of-the-art methods, achieving accuracies of 98.51% on ORL, 98.72% on YALE, 98.83% on Caltech, 99.52% on AR, 94.78% on FERET, and 99.07% on KDEF. The influence of color spaces and the issue of degraded images are also analyzed with our descriptors. These results, together with the theoretical underpinning, confirm that our descriptors are robust against noise, illumination variation, diverse facial expressions, and head poses.
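For reference, the plain 3x3 LBP operator that the proposed Weighted Statistical Binary Pattern builds on can be sketched as follows; the mean/variance moments, directional components, and weighting described in the abstract are not reproduced here.

```python
import numpy as np

def lbp_3x3(image):
    """Basic LBP: compare each interior pixel with its 8 neighbours, build an 8-bit code."""
    c = image[1:-1, 1:-1]
    # Eight neighbours in a fixed clockwise order, each contributing one bit.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = np.zeros(c.shape, dtype=np.int32)
    for bit, (dy, dx) in enumerate(offsets):
        neighbour = image[1 + dy: image.shape[0] - 1 + dy,
                          1 + dx: image.shape[1] - 1 + dx]
        codes += (neighbour >= c).astype(np.int32) << bit
    return codes  # histograms of these codes form the baseline LBP descriptor
```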



Technologies ◽  
2021 ◽  
Vol 9 (2) ◽  
pp. 31
Author(s):  
Md Manjurul Ahsan ◽  
Yueqing Li ◽  
Jing Zhang ◽  
Md Tanvir Ahad ◽  
Kishor Datta Gupta

Facial recognition (FR) in unconstrained weather is still challenging and has been surprisingly ignored by many researchers and practitioners over the past few decades. This paper therefore evaluates the performance of three popular facial recognition methods under different weather conditions. To that end, a new face dataset, the Lamar University database (LUDB), was developed, containing face images captured under various weather conditions such as foggy, cloudy, rainy, and sunny. Three very popular FR methods, Eigenface (EF), Fisherface (FF), and Local Binary Pattern Histogram (LBPH), were evaluated on two other face datasets, AT&T and 5_Celebrity, along with LUDB, in terms of accuracy, precision, recall, and F1 score with 95% confidence intervals (CIs). Computational results show a significant difference among the three FR techniques in terms of overall time complexity and accuracy. LBPH outperforms the other two FR algorithms on both the LUDB and 5_Celebrity datasets, achieving 40% and 95% accuracy, respectively. On the other hand, with minimum execution times of 1.37, 1.37, and 1.44 s per image on AT&T, 5_Celebrity, and LUDB, respectively, Fisherface achieved the best result.
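A minimal sketch of how the three classical recognizers can be compared using OpenCV's contrib module. This assumes opencv-contrib-python and equal-sized grayscale face crops; the paper's datasets, preprocessing, timing measurements, and confidence-interval analysis are not reproduced.

```python
import cv2
import numpy as np

def evaluate(train_imgs, train_labels, test_imgs, test_labels):
    """Train and score Eigenface, Fisherface, and LBPH recognizers on grayscale crops."""
    recognizers = {
        "Eigenface": cv2.face.EigenFaceRecognizer_create(),
        "Fisherface": cv2.face.FisherFaceRecognizer_create(),
        "LBPH": cv2.face.LBPHFaceRecognizer_create(),
    }
    labels = np.array(train_labels, dtype=np.int32)
    for name, rec in recognizers.items():
        rec.train(train_imgs, labels)                       # Eigen/Fisher need equal-sized images
        preds = [rec.predict(img)[0] for img in test_imgs]  # predict() -> (label, distance)
        acc = np.mean(np.array(preds) == np.array(test_labels))
        print(f"{name}: accuracy = {acc:.3f}")
```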


