3D Face Image Inpainting with Generative Adversarial Nets

2020 ◽  
Vol 2020 ◽  
pp. 1-11
Author(s):  
Tongxin Wei ◽  
Qingbao Li ◽  
Jinjin Liu ◽  
Ping Zhang ◽  
Zhifeng Chen

In practical face recognition, the acquired face data are often seriously degraded: many of the collected face images are blurred or have missing regions. Traditional image inpainting methods are structure-based, whereas the currently popular methods are based on deep convolutional neural networks and generative adversarial nets. In this paper, we propose a 3D face image inpainting method based on generative adversarial nets. Two parallel vectors are identified to locate the planar positions. Compared with previous methods, the edge information of the missing region is detected, and the fuzzy edge inpainting achieves a better visual match; as a result, face recognition performance is dramatically boosted.
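For readers unfamiliar with GAN-based inpainting, the sketch below shows the general scheme in PyTorch: a generator fills the masked region of a face image and a discriminator scores the result. This is a minimal generic illustration under assumed shapes and losses, not the authors' network.

```python
# Minimal GAN inpainting sketch (assumed PyTorch, 64x64 crops; not the paper's architecture).
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Encoder-decoder that fills the masked region of a face image."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(4, 64, 4, stride=2, padding=1), nn.ReLU(),   # input: RGB + binary mask
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1), nn.Tanh(),
        )
    def forward(self, image, mask):
        # Zero out the missing region, append the mask, and predict the full image.
        return self.net(torch.cat([image * (1 - mask), mask], dim=1))

class Discriminator(nn.Module):
    """Scores whether an (inpainted) face looks real."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Flatten(), nn.Linear(128 * 16 * 16, 1),             # assumes 64x64 inputs
        )
    def forward(self, x):
        return self.net(x)

# One adversarial + reconstruction update step (placeholder data, hypothetical loop fragment).
gen, disc = Generator(), Discriminator()
opt_g = torch.optim.Adam(gen.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(disc.parameters(), lr=2e-4)
bce, l1 = nn.BCEWithLogitsLoss(), nn.L1Loss()

real = torch.rand(8, 3, 64, 64)                       # placeholder batch of face crops
mask = (torch.rand(8, 1, 64, 64) > 0.7).float()       # placeholder missing-region mask
fake = gen(real, mask)

d_loss = bce(disc(real), torch.ones(8, 1)) + bce(disc(fake.detach()), torch.zeros(8, 1))
opt_d.zero_grad(); d_loss.backward(); opt_d.step()

g_loss = bce(disc(fake), torch.ones(8, 1)) + 10.0 * l1(fake * mask, real * mask)
opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```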

Author(s):  
Tang-Tang Yi ◽  

In order to solve the problem of low recognition accuracy when recognizing 3D face images collected by traditional sensors, a face recognition algorithm for 3D point clouds collected by mixed image sensors is proposed. The algorithm first uses the 3D wheelbase to expand the edges of the face image. Noise in the expanded image is then detected according to the 3D wheelbase and eliminated with median filtering. Next, the priority of the boundary pixels for recognizing the face in the denoised image is determined, and key parts such as the illuminance line are analyzed, completing the recognition of the 3D point cloud face image. Experiments show that the proposed algorithm improves the recognition accuracy of 3D face images, that its recognition time is about four times lower than that of the traditional algorithm, and that its recognition efficiency is high.
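As an illustration of the noise-elimination stage only (our own stand-in, not the paper's full pipeline), a median filter removes impulsive noise while preserving edges:

```python
# Illustrative median-filtering step (assumed SciPy); the 3D-wheelbase noise detection is abstracted away.
import numpy as np
from scipy.ndimage import median_filter

noisy = np.random.rand(128, 128)               # placeholder for the noise-detected face image
noisy[np.random.rand(128, 128) > 0.95] = 1.0   # simulated salt noise
denoised = median_filter(noisy, size=3)        # 3x3 median filter suppresses impulsive noise
```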


2018 ◽  
Vol 9 (1) ◽  
pp. 60-77 ◽  
Author(s):  
Souhir Sghaier ◽  
Wajdi Farhat ◽  
Chokri Souani

This manuscript presents an improved system that can detect and recognize a person in 3D space automatically and without requiring interaction from the person. The system relies not only on quantum computation and measurements to extract feature vectors in the characterization phase but also on a learning algorithm (SVM) to classify and recognize the person. The research presents an improved technique for automatic 3D face recognition that uses anthropometric proportions and measurements to detect and extract the region of interest, which is unaffected by facial expression. The approach can handle incomplete and noisy images and automatically rejects non-facial areas. Moreover, it can deal with holes in the meshed and textured 3D image, and it is stable against small translations and rotations of the face. All experimental tests were carried out on two 3D face datasets, FRAV 3D and GAVAB. The test results are promising: the approach is competitive with similar methods in terms of accuracy, robustness, and flexibility, achieving a recognition rate of 95.35% for faces with neutral and non-neutral expressions in identification, 98.36% for authentication on GAVAB, and 100% on some galleries of the FRAV 3D dataset.
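The classification stage can be sketched as follows with scikit-learn (an assumption on our part; the anthropometric feature extraction on the 3D mesh is abstracted into placeholder vectors):

```python
# Minimal SVM identification sketch over pre-extracted 3D face feature vectors (hypothetical data).
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

X = np.random.rand(200, 40)        # placeholder: 40-D feature vectors from the face region of interest
y = np.repeat(np.arange(20), 10)   # placeholder: 20 subjects, 10 scans each

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y, random_state=0)
clf = SVC(kernel="rbf", C=10.0, gamma="scale").fit(X_tr, y_tr)
print("identification accuracy:", clf.score(X_te, y_te))
```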


Author(s):  
Stefano Berretti ◽  
Alberto Del Bimbo ◽  
Pietro Pala

In this paper, an original hybrid 2D-3D face recognition approach is proposed that uses two orthogonal face images, a frontal and a side view, to reconstruct the complete 3D geometry of the face. This is obtained with a model-based solution in which a 3D template face model is morphed according to the correspondence of a limited set of control points identified on the frontal and side images as well as on the model. Control point identification is driven by an Active Shape Model applied to the frontal image, whereas manual assistance is required to localize the control points on the side view. The reconstructed 3D model is finally matched, using the iso-geodesic regions approach, against a gallery of 3D face scans for the purpose of face recognition. Preliminary experimental results on a small database show the viability of the approach.
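To make the control-point step concrete, the sketch below detects frontal-view landmarks with dlib's pretrained 68-point predictor. This is only a stand-in for the Active Shape Model used in the paper, and the model file path is an assumption.

```python
# Hedged illustration of frontal-view control-point detection (assumed dlib + OpenCV).
import cv2
import dlib

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")  # pretrained model (assumed path)

frontal = cv2.imread("frontal.jpg")                      # assumed input image
gray = cv2.cvtColor(frontal, cv2.COLOR_BGR2GRAY)
faces = detector(gray)
if faces:
    shape = predictor(gray, faces[0])
    control_points = [(shape.part(i).x, shape.part(i).y) for i in range(shape.num_parts)]
    # These 2D points would then be put in correspondence with vertices of the 3D template model.
```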


Symmetry ◽  
2020 ◽  
Vol 12 (2) ◽  
pp. 190
Author(s):  
Zuodong Niu ◽  
Handong Li ◽  
Yao Li ◽  
Yingjie Mei ◽  
Jing Yang

Face image inpainting is an important research direction in image restoration. When current image restoration methods repair damaged regions of face images with weak texture, problems arise such as inaccurate face image decomposition, unreasonable restored structure, and degraded image quality after inpainting. Therefore, this paper proposes an adaptive face image inpainting algorithm based on feature symmetry. First, we locate the facial feature points and segment the face into four feature parts based on the feature point distribution in order to define the feature search range. Then, we construct a new mathematical model that introduces feature symmetry into the priority calculation, increasing its reliability. Next, when searching for matching blocks, we accurately locate similar feature blocks according to the relative position and symmetry criteria of the target block with respect to the various facial feature parts. Finally, we introduce the HSV (Hue, Saturation, Value) color space to determine the best matching block according to the chroma and brightness of the sample, which reduces the repair error and completes the face image inpainting. In the experiments, we first performed a visual evaluation and a texture analysis of the inpainted face images; the results show that our algorithm maintains the consistency of the face structure and that the result is visually closer to real facial features. We then used the peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) as objective evaluation indicators; for the five sample face images presented in this paper, our method outperformed the reference methods, and when inpainting 100 face images the average PSNR improved by 2.881-5.776 dB with our method. We also used the time required to inpaint a unit pixel to evaluate efficiency, which improved by 12%-49% with our method on the same 100 face images. Finally, by comparing face image inpainting experiments with a generative adversarial network (GAN) algorithm, we discuss some limitations of our graphics-based method when repairing face images with large areas of missing features.
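The objective evaluation step translates directly into code. The following is a small sketch of computing PSNR and SSIM between a ground-truth face and an inpainted result, assuming scikit-image (a recent version with `channel_axis`); the image data here are placeholders.

```python
# PSNR / SSIM evaluation sketch (assumed scikit-image >= 0.19, images scaled to [0, 1]).
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

original = np.random.rand(128, 128, 3)                        # placeholder ground-truth face
inpainted = np.clip(original + 0.02 * np.random.randn(128, 128, 3), 0.0, 1.0)  # placeholder result

psnr = peak_signal_noise_ratio(original, inpainted, data_range=1.0)
ssim = structural_similarity(original, inpainted, channel_axis=-1, data_range=1.0)
print(f"PSNR = {psnr:.2f} dB, SSIM = {ssim:.4f}")
```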


2021 ◽  
Vol 2021 ◽  
pp. 1-12
Author(s):  
Zhixue Liang

In the contactless delivery scenario, the self-pickup cabinet is an important terminal delivery device, and face recognition is one of the most efficient ways to achieve contactless pickup of express deliveries. To effectively recognize face images in unrestricted environments, an unrestricted face recognition algorithm based on transfer learning is proposed in this study. First, the region extraction network of the Faster R-CNN algorithm is improved to increase the recognition speed of the algorithm. Then, a first transfer learning step is applied between the large ImageNet dataset and a face image dataset captured under restricted conditions, and a second transfer learning step is applied between the restricted-condition face images and unrestricted face image datasets. Finally, the unrestricted face images are processed by an image enhancement algorithm to increase their similarity to the restricted-condition face images, so that the second transfer learning step can be carried out effectively. Experimental results show that the proposed algorithm achieves a better recognition rate and recognition speed on the CASIA-WebFace, LFW, and MegaFace datasets.
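The two-stage transfer idea can be sketched as successive fine-tuning of an ImageNet-pretrained backbone. This is a generic illustration under assumed class counts and a recent torchvision API, not the paper's improved Faster R-CNN pipeline.

```python
# Two-stage transfer-learning sketch (assumed PyTorch / torchvision >= 0.13).
import torch.nn as nn
from torchvision import models

def fine_tune(model, num_classes):
    # Replace the classifier head for the new identity set; earlier layers keep their learned weights.
    model.fc = nn.Linear(model.fc.in_features, num_classes)
    return model

backbone = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)  # transfer 1: start from ImageNet
restricted_model = fine_tune(backbone, num_classes=500)        # fine-tune on restricted-condition face images
# ... training loop on restricted-condition faces omitted ...
unrestricted_model = fine_tune(restricted_model, num_classes=500)  # transfer 2: re-train on (enhanced) unrestricted faces
```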


Author(s):  
Dat Chu ◽  
Shishir Shah ◽  
Ioannis A. Kakadiaris

Performing face recognition under extreme poses and lighting conditions remains a challenging task for current state-of-the-art biometric algorithms. The recognition task is even more challenging when there is insufficient training data available in the gallery, or when the gallery dataset originates from one side of the face while the probe dataset originates from the other. The authors present a new method for computing the distance between two biometric signatures acquired under such challenging conditions. The method improves upon an existing Semi-Coupled Dictionary Learning method by computing a jointly optimized solution that incorporates the reconstruction cost, the discrimination cost, and the semi-coupling cost. The semi-coupling term allows the method to handle partial 3D face meshes where, for example, only the left side of the face is available for the gallery and only the right side is available for the probe. The method also extends to 2D signatures under varying poses and lighting by using 3D signatures as a coupling term. Experiments show that the method improves the recognition performance of existing state-of-the-art wavelet signatures used in 3D face recognition and provides excellent results in the 3D-2D face recognition application.
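A generic semi-coupled dictionary learning objective of the kind described above (our paraphrase and notation, not the authors' exact formulation) jointly minimizes reconstruction, discrimination, and semi-coupling terms:

\[
\min_{D_g, D_p, A_g, A_p, W} \; \|X_g - D_g A_g\|_F^2 + \|X_p - D_p A_p\|_F^2 + \lambda_1\, \mathcal{L}_{\mathrm{disc}}(A_g, A_p) + \lambda_2\, \|A_p - W A_g\|_F^2 ,
\]

where \(X_g, X_p\) are gallery and probe signatures, \(D_g, D_p\) their dictionaries, \(A_g, A_p\) the corresponding sparse codes, \(\mathcal{L}_{\mathrm{disc}}\) a label-based discrimination cost, and \(W\) the semi-coupling map that relates codes from one side of the face (or one modality) to the other.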


2018 ◽  
Vol 2018 ◽  
pp. 1-9 ◽  
Author(s):  
Kun Sun ◽  
Xin Yin ◽  
Mingxin Yang ◽  
Yang Wang ◽  
Jianying Fan

At present, face recognition methods based on the deep belief network (DBN) have the advantages of automatically learning abstract information from face images and being only slightly affected by active factors, so they have become the main methods in the face recognition area. However, because a DBN ignores the local information of face images, the recognition rate based on a DBN alone suffers. To solve this problem, a face recognition method based on the center-symmetric local binary pattern (CS-LBP) and the DBN (FRMCD) is proposed in this paper. First, the face image is divided into several subblocks. Second, CS-LBP is used to extract texture features from each image subblock. Third, texture feature histograms are formed and fed into the DBN visible layer. Finally, face classification and recognition are completed through deep learning in the DBN. Through experiments on the ORL, Extended Yale B, and CMU-PIE face databases, the best partitioning of the face image and the number of hidden units in the DBN hidden layer are obtained. Comparative experiments between FRMCD and traditional methods show that the recognition rate of FRMCD is superior to those of the traditional methods, with a highest recognition rate of up to 98.82%. When the number of training samples is small, FRMCD has even more significant advantages, and compared with the method based on the local binary pattern (LBP) and the DBN, FRMCD is less time-consuming.
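The CS-LBP step compares the four opposite neighbour pairs in a 3x3 window, giving a 4-bit code (0-15) per pixel; a histogram per subblock then serves as the texture feature. The NumPy sketch below is our own illustration, with the threshold and subblock layout as assumptions.

```python
# Center-symmetric LBP sketch (our illustration; threshold and block size are assumptions).
import numpy as np

def cs_lbp(img, threshold=0.0):
    # Compare the four opposite neighbour pairs around each interior pixel;
    # each comparison contributes one bit, giving codes in [0, 15].
    pairs = [
        (img[:-2, 1:-1], img[2:, 1:-1]),    # top          vs bottom
        (img[:-2, 2:],   img[2:, :-2]),     # top-right    vs bottom-left
        (img[1:-1, 2:],  img[1:-1, :-2]),   # right        vs left
        (img[2:, 2:],    img[:-2, :-2]),    # bottom-right vs top-left
    ]
    code = np.zeros((img.shape[0] - 2, img.shape[1] - 2), dtype=np.int64)
    for bit, (a, b) in enumerate(pairs):
        code += ((a - b) > threshold) * (1 << bit)
    return code

face_block = np.random.rand(64, 64)       # placeholder face subblock
hist, _ = np.histogram(cs_lbp(face_block), bins=16, range=(0, 16), density=True)
# Histograms from all subblocks would be concatenated and fed to the DBN's visible layer.
```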


Author(s):  
CHING-LIANG SU

For face recognition, image shift and rotation problems generally must be addressed. The "ring rotation invariant transform" technique is used to map the geometrical features of a face image into more salient ones, by which one can decide whether a sample and an unknown image are the same image; it also solves the image rotation problem. To deal with the image-shift problem, this study compares one pixel inside a sample image with the corresponding pixels in the unknown image to locate the closest matching point. Three kinds of extracted ring signals are generated: (1) ring-radius-31, (2) ring-radius-22, and (3) ring-radius-13. These signals are used to generate rotation-invariant magnitudes; several magnitudes are combined into one entity and subsequently stored in one specific corresponding pixel of the BMP file. In this way, each pixel carries more geometrical features of the face image, and each entity in the sample image is compared with the entities inside the corresponding radius-6-cake area of the unknown image to locate the closest matching point.
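The rotation-invariance idea behind a ring transform can be illustrated as follows (our own sketch, not the author's exact transform): pixel values sampled on a ring form a circular sequence, and an in-plane rotation of the face only shifts that sequence circularly, so the magnitude of its FFT is unchanged.

```python
# Ring-sampling rotation-invariance sketch (our illustration; sampling density is an assumption).
import numpy as np

def ring_signature(img, cx, cy, radius, n_samples=64):
    angles = np.linspace(0.0, 2.0 * np.pi, n_samples, endpoint=False)
    xs = np.clip((cx + radius * np.cos(angles)).round().astype(int), 0, img.shape[1] - 1)
    ys = np.clip((cy + radius * np.sin(angles)).round().astype(int), 0, img.shape[0] - 1)
    ring = img[ys, xs]                    # circular sequence of pixel values on the ring
    return np.abs(np.fft.fft(ring))       # FFT magnitude: invariant to circular shift, i.e. rotation

face = np.random.rand(128, 128)           # placeholder face image
signatures = [ring_signature(face, 64, 64, r) for r in (31, 22, 13)]   # the three ring radii above
```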


Author(s):  
Isnawati Muslihah ◽  
Muqorobin Muqorobin

Face recognition is an identification system that uses the characteristics of a person's face for processing. A face image contains features that allow one face to be distinguished from another, and one way to recognize face images is to analyze their texture. Texture analysis generally requires a feature extraction process, and the extracted characteristics differ between images; these characteristics form the basis for recognizing facial images. However, existing face recognition methods suffer from efficiency problems and rely heavily on extracting the right features. This study examines the texture characteristics extracted with the Local Binary Pattern (LBP) method when it is applied to recognition with Probabilistic Linear Discriminant Analysis (PLDA). The data used in this study are human face images from the AR Faces database, consisting of 136 subjects (76 men and 60 women), each with 7 types of images. The test results show that the LBP method produces the highest accuracy, 95.53%, when used for recognition with PLDA.
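The LBP feature-extraction stage can be sketched with scikit-image (an assumption on our part; the PLDA recognition stage itself is not shown):

```python
# LBP histogram feature sketch (assumed scikit-image; uniform patterns with 8 neighbours, radius 1).
import numpy as np
from skimage.feature import local_binary_pattern

face = np.random.rand(64, 64)                                  # placeholder grayscale face image
lbp = local_binary_pattern(face, P=8, R=1, method="uniform")   # codes 0..9 for P=8 uniform patterns
hist, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)
# The per-image histogram is the texture feature vector passed to the PLDA model.
```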


2021 ◽  
Vol 2021 ◽  
pp. 1-21
Author(s):  
Ke Li ◽  
Hu Chen ◽  
Faxiu Huang ◽  
Shenggui Ling ◽  
Zhisheng You

Face image quality has an important effect on recognition performance. Recognition-oriented face image quality assessment is particularly necessary for screening or using face images of varying quality. In this work, sharpness and brightness are mainly assessed by a classification model. We selected very high-quality images of each subject and established nine quality labels related to recognition performance by combining face recognition algorithms, the human visual system, and a traditional brightness calculation. Experiments were conducted on a custom dataset and the CMU Multi-PIE face database for training and testing, and on Labeled Faces in the Wild for cross-validation. The experimental results show that the proposed method effectively reduces the false non-match rate by removing the low-quality face images identified by the classification model, and vice versa. The method is effective even for face recognition algorithms that were not involved in label creation and whose training data are nonhomologous to the training set of our quality assessment model. The results show that the proposed method distinguishes images of different qualities with reasonable accuracy and is consistent with subjective human evaluation. The quality labels established in this paper are closely related to recognition performance and generalize well to other recognition algorithms. Our method can be used to reject low-quality images to improve the recognition rate and to screen high-quality images for subsequent processing.
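As a rough illustration of quality screening on sharpness and brightness (our assumption; the paper trains a classification model rather than using fixed thresholds), the sketch below measures sharpness via the variance of the Laplacian and brightness via the mean V channel, and rejects faces outside assumed ranges before recognition.

```python
# Hypothetical sharpness/brightness pre-filter (assumed OpenCV; thresholds are illustrative only).
import cv2
import numpy as np

def quality_ok(bgr_face, sharp_thresh=100.0, bright_range=(60, 200)):
    gray = cv2.cvtColor(bgr_face, cv2.COLOR_BGR2GRAY)
    sharpness = cv2.Laplacian(gray, cv2.CV_64F).var()            # low variance -> blurry image
    brightness = cv2.cvtColor(bgr_face, cv2.COLOR_BGR2HSV)[..., 2].mean()
    return sharpness >= sharp_thresh and bright_range[0] <= brightness <= bright_range[1]

face = (np.random.rand(112, 112, 3) * 255).astype(np.uint8)     # placeholder face crop
print("pass quality filter:", quality_ok(face))
```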

