Sharpness and Brightness Quality Assessment of Face Images for Recognition

2021 ◽  
Vol 2021 ◽  
pp. 1-21
Author(s):  
Ke Li ◽  
Hu Chen ◽  
Faxiu Huang ◽  
Shenggui Ling ◽  
Zhisheng You

Face image quality has an important effect on recognition performance. Recognition-oriented face image quality assessment is particularly necessary for screening face images of varying quality. In this work, sharpness and brightness were assessed mainly by a classification model. We selected very high-quality images of each subject and established nine quality labels related to recognition performance by combining face recognition algorithms, the human visual system, and a traditional brightness calculation method. Experiments were conducted on a custom dataset and the CMU Multi-PIE face database for training and testing, and on Labeled Faces in the Wild for cross-validation. The experimental results show that the proposed method effectively reduces the false nonmatch rate by removing the low-quality face images identified by the classification model; conversely, retaining high-quality images improves recognition. The method is effective even for face recognition algorithms that were not involved in label creation and whose training data are nonhomologous to the training set of our quality assessment model. The results show that the proposed method can distinguish images of different qualities with reasonable accuracy and is consistent with subjective human evaluation. The quality labels established in this paper are closely related to recognition performance and generalize well to other recognition algorithms. Our method can be used to reject low-quality images to improve the recognition rate and to screen high-quality images for subsequent processing.
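The abstract does not give the paper's actual nine-label model, but the two measured properties can be illustrated with common proxies: sharpness as the variance of a Laplacian response and brightness as the mean gray level. The thresholds and label names below are illustrative assumptions, not the paper's.

```python
import numpy as np

def laplacian_variance(gray):
    """Sharpness proxy: variance of a discrete Laplacian response."""
    lap = (-4.0 * gray[1:-1, 1:-1]
           + gray[:-2, 1:-1] + gray[2:, 1:-1]
           + gray[1:-1, :-2] + gray[1:-1, 2:])
    return float(lap.var())

def mean_brightness(gray):
    """Brightness proxy: mean gray level in [0, 255]."""
    return float(gray.mean())

def quality_label(gray, sharp_thresh=50.0, dark=60.0, bright=190.0):
    """Map an image to a coarse quality label from sharpness and brightness."""
    s = "sharp" if laplacian_variance(gray) >= sharp_thresh else "blurred"
    b = mean_brightness(gray)
    if b < dark:
        lighting = "dark"
    elif b > bright:
        lighting = "bright"
    else:
        lighting = "normal"
    return f"{s}/{lighting}"
```

In the paper, such labels are produced by a trained classification model tied to recognition performance rather than by fixed thresholds; this sketch only shows the kind of signal being classified.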

2020 ◽  
Vol 2020 ◽  
pp. 1-11
Author(s):  
Tongxin Wei ◽  
Qingbao Li ◽  
Jinjin Liu ◽  
Ping Zhang ◽  
Zhifeng Chen

In the process of face recognition, the acquired face data are often seriously distorted: many collected face images are blurred or even partially missing. Traditional image inpainting was structure-based, whereas currently popular inpainting methods are based on deep convolutional neural networks and generative adversarial nets. In this paper, we propose a 3D face image inpainting method based on generative adversarial nets. We identify two parallel vectors to locate the planar positions. Compared with previous methods, the edge information of the missing region is detected first, so edge-aware inpainting achieves a better visual match and boosts face recognition performance considerably.


Author(s):  
WEI-LI FANG ◽  
YING-KUEI YANG ◽  
JUNG-KUEI PAN

Several 2DPCA-based face recognition algorithms have been proposed with the goal of improving the recognition rate, though mostly at the expense of computation cost. In this paper, an approach named SI2DPCA is proposed to reduce computation cost and increase recognition performance at the same time. The approach divides a whole face image into smaller sub-images to increase the weight of features for better feature extraction. Meanwhile, the computation cost, which mainly comes from heavy and complicated matrix operations, is reduced owing to the smaller size of the sub-images. The reduction in computation is analyzed and the integrity of the sub-images is discussed thoroughly in the paper. Experiments comparing SI2DPCA with several better-known approaches demonstrate that the proposed approach reaches both goals, reducing computation cost and improving recognition performance simultaneously.
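The sub-image idea can be sketched as follows: standard 2DPCA computes the image scatter matrix and projects each image onto its leading eigenvectors; the sub-image variant applies the same step to each block of a grid. This is a minimal illustration, assuming a simple uniform grid split; the paper's exact partitioning and feature weighting may differ.

```python
import numpy as np

def two_dpca(images, d):
    """2DPCA: eigenvectors of the image scatter matrix; keep the top d."""
    A = np.stack(images).astype(float)          # (M, h, w)
    mean = A.mean(axis=0)
    centered = A - mean
    # Image scatter matrix G = (1/M) * sum_i (A_i - mean)^T (A_i - mean)
    G = np.einsum('mhw,mhv->wv', centered, centered) / len(images)
    vals, vecs = np.linalg.eigh(G)              # eigenvalues ascending
    X = vecs[:, -d:]                            # top-d projection axes (w, d)
    return mean, X

def si2dpca_features(images, rows=2, cols=2, d=2):
    """Sub-image 2DPCA sketch: split each face into a rows x cols grid and
    run 2DPCA independently on each block; smaller matrices mean cheaper
    eigendecompositions and projections."""
    h, w = images[0].shape
    feats = []
    for r in range(rows):
        for c in range(cols):
            subs = [img[r * h // rows:(r + 1) * h // rows,
                        c * w // cols:(c + 1) * w // cols]
                    for img in images]
            _, X = two_dpca(subs, d)
            feats.append(np.stack(subs) @ X)    # (M, h/rows, d) per block
    return feats
```

The cost saving comes from the scatter matrix shrinking from (w, w) to (w/cols, w/cols) per block, which is the "smaller matrix operations" argument in the abstract.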


Author(s):  
Kholilatul Wardani ◽  
Aditya Kurniawan

ROI (Region of Interest) image quality assessment is an assessment model based on the SSIM (Structural Similarity Index), applied to the specific image region to be assessed. The output value of this model ranges from 1, meaning identical, to -1, meaning not identical. In this research, the ROI quality assessment model is used to measure the image quality of Kinect sensor captures used by a Mobile HD Robot after the Multiple Localized Filtering Technique is applied. The filter is applied to each depth capture from the Kinect, with the aim of eliminating the structural noise that occurs in the Kinect sensor. Assessment is done by comparing image quality in certain regions before and after the filter is applied. The Kinect sensor is conditioned to capture a square black object measuring 10 cm x 10 cm, perpendicular to a homogeneous background (white, RGB 255, 255, 255). The Kinect sensor data are collected through an EWRF 3022 by a Visual Basic 6.0 program, periodically, 10 times per session at a frequency of once per minute. The results of this trial show the same similarity index (value 1: identical) in the luminance, contrast, and structural components of the edge region of the specimen. This value indicates that, according to the ROI image quality assessment model, the Multiple Localized Filtering Technique applied to the noise generated by the Kinect sensor has no effect on the image quality produced by the sensor.
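The measurement described above can be sketched as a single-window SSIM computed only over a rectangular ROI. This is a simplified, whole-region form of SSIM (no sliding Gaussian window), using the standard stabilizing constants for 8-bit images; the study's exact implementation is not given in the abstract.

```python
import numpy as np

def ssim_roi(img1, img2, roi, c1=(0.01 * 255) ** 2, c2=(0.03 * 255) ** 2):
    """Single-window SSIM over a rectangular ROI (y0, y1, x0, x1).
    Returns a value in [-1, 1]; 1 means the regions are identical in
    luminance, contrast, and structure."""
    y0, y1, x0, x1 = roi
    a = img1[y0:y1, x0:x1].astype(float)
    b = img2[y0:y1, x0:x1].astype(float)
    mu_a, mu_b = a.mean(), b.mean()
    va, vb = a.var(), b.var()
    cov = ((a - mu_a) * (b - mu_b)).mean()
    return ((2 * mu_a * mu_b + c1) * (2 * cov + c2)) / \
           ((mu_a ** 2 + mu_b ** 2 + c1) * (va + vb + c2))
```

Comparing the pre-filter and post-filter captures over the edge region of the specimen, a value of 1 indicates the filter left that region unchanged, which matches the study's conclusion.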


2021 ◽  
pp. 1-15
Author(s):  
Yongjie Chu ◽  
Touqeer Ahmad ◽  
Lindu Zhao

One-shot low-resolution face recognition is a prevalent problem in law enforcement, where low-resolution face images captured by surveillance cameras generally must be recognized against a single high-resolution profile face image per subject in the database. The problem is very tough because few samples are available and the quality of the unknown images is quite low. To address this issue effectively, this paper proposes the Adapted Discriminative Coupled Mappings (AdaDCM) approach, which integrates domain adaptation and discriminative learning. To achieve good domain adaptation performance on small datasets, a new domain adaptation technique called Bidirectional Locality Matching-based Domain Adaptation (BLM-DA) is first developed. The proposed AdaDCM is then formulated by unifying BLM-DA and discriminative coupled mappings into a single framework. AdaDCM is extensively evaluated on the FERET, LFW, and SCface databases, which include LR face images obtained in constrained, unconstrained, and real-world environments. The promising results on these datasets demonstrate the effectiveness of AdaDCM for one-shot LR face recognition.
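The core idea of coupled mappings, which projects LR and HR features into a common space where paired samples are close, can be illustrated in a deliberately simplified form: fix the HR side to the identity and learn the LR-side linear map by ridge-regularized least squares. This is a toy stand-in, not the AdaDCM/BLM-DA formulation, which learns both mappings jointly with discriminative and locality-matching terms.

```python
import numpy as np

def coupled_mapping_lstsq(lr_feats, hr_feats, lam=1e-3):
    """Toy coupled mapping: learn a linear map W sending low-resolution
    feature vectors onto their paired high-resolution features
    (ridge-regularized least squares, HR side fixed to identity)."""
    X = np.asarray(lr_feats, float)             # (n, d_lr)
    Y = np.asarray(hr_feats, float)             # (n, d_hr)
    d = X.shape[1]
    W = np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ Y)
    return W                                    # (d_lr, d_hr)

def match(lr_probe, hr_gallery, W):
    """Nearest-neighbor match of a mapped LR probe in the HR gallery."""
    z = np.asarray(lr_probe, float) @ W
    dists = np.linalg.norm(np.asarray(hr_gallery, float) - z, axis=1)
    return int(dists.argmin())
```

Once W is learned from paired training data, an LR surveillance probe is mapped into the HR space and matched against the single high-resolution gallery image per subject.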


2018 ◽  
Vol 2018 ◽  
pp. 1-10 ◽  
Author(s):  
Muhammad Sajid ◽  
Nouman Ali ◽  
Saadat Hanif Dar ◽  
Naeem Iqbal Ratyal ◽  
Asif Raza Butt ◽  
...  

Recently, face datasets containing celebrity photos with facial makeup have been growing at exponential rates, making their recognition very challenging. Existing face recognition methods rely on feature extraction and reference reranking to improve performance. However, face images with facial makeup carry inherent ambiguity due to artificial colors, shading, contouring, and varying skin tones, making the recognition task more difficult. The problem becomes more confounding as makeup alters the bilateral size and symmetry of certain face components, such as the eyes and lips, affecting the distinctiveness of faces. The ambiguity becomes even worse when different days bring different facial makeup for celebrities, owing to interpersonal situations and current societal makeup trends. To cope with these artificial effects, we propose a deep convolutional neural network (dCNN) trained on an augmented face dataset to extract discriminative features from face images containing synthetic makeup variations. The augmented dataset, containing original face images and those with synthetic makeup variations, allows the dCNN to learn face features under a variety of facial makeup. We also evaluate the role of partial and full makeup in face images in improving recognition performance. The experimental results on two challenging face datasets show that the proposed approach competes with the state of the art.
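The augmentation step can be sketched crudely: blend a color tint into selected face regions (e.g., lips, eyes) and keep both the original and the variants for training. The region coordinates, tints, and blending here are hypothetical stand-ins; the paper's synthetic makeup generation is more sophisticated.

```python
import numpy as np

def synthetic_makeup(img, region, tint, alpha=0.4):
    """Blend a color tint into a rectangular face region (y0, y1, x0, x1)
    to mimic a makeup variation (a crude stand-in for synthetic makeup
    transfer)."""
    out = img.astype(float).copy()
    y0, y1, x0, x1 = region
    out[y0:y1, x0:x1] = ((1 - alpha) * out[y0:y1, x0:x1]
                         + alpha * np.asarray(tint, float))
    return np.clip(out, 0, 255).astype(np.uint8)

def augment_dataset(images, regions, tints):
    """Return the originals plus one tinted variant per (region, tint) pair,
    forming the augmented training set for the dCNN."""
    aug = list(images)
    for img in images:
        for region, tint in zip(regions, tints):
            aug.append(synthetic_makeup(img, region, tint))
    return aug
```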

