Face Image Data Interchange Formats, Standardization

2009 ◽  
pp. 314-321
Author(s):  
Patrick Grother ◽  
Elham Tabassi
2015 ◽  
pp. 1250-1256
Author(s):  
Ted Tomonaga

2017 ◽  
Vol 76 (24) ◽  
pp. 25983-26000 ◽  
Author(s):  
Jeong-Keun Park ◽  
Ho-Hyun Park ◽  
Jaehwa Park

2020 ◽  
Author(s):  
Howard Martin ◽  
Suharjito

Abstract Face recognition has many applications, such as smartphone authentication and locating people. Face recognition in constrained environments now achieves very good accuracy. However, the accuracy of existing face recognition methods gradually decreases on datasets captured in unconstrained environments. Face images from unconstrained environments are usually taken by surveillance cameras, which are generally placed in the corner of a room or out on the street, so the captured image resolution is low. A low-resolution image makes the face very hard to recognize, and accuracy eventually decreases. That is the main reason why increasing the accuracy of Low-Resolution Face Recognition (LRFR) remains challenging. This research aimed to address the LRFR problem using the YouTube Faces Database (YTF) and Labelled Faces in the Wild (LFW) datasets. Face image resolution was first reduced with bicubic interpolation to produce the low-resolution image data; super-resolution methods were then applied as a preprocessing step to increase the image resolution. The super-resolution methods used in this research are Super-Resolution GAN (SRGAN) [1] and Enhanced Super-Resolution GAN (ESRGAN) [2], which were compared to determine which yields better accuracy on the LRFR problem. After the resolution was increased, the images were recognized using FaceNet. This research concludes that using super resolution as the preprocessing step for the LRFR problem achieves higher accuracy than [3]. The highest accuracy was achieved using ESRGAN for preprocessing and FaceNet for face recognition, with an accuracy of 98.96% and a validation rate of 96.757%.
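A minimal sketch of the pipeline described above, written in PyTorch under stated assumptions: a face image is downscaled with bicubic interpolation to simulate surveillance-quality input, restored with a super-resolution generator, and then embedded with FaceNet. The `sr_generator` and `facenet` arguments stand in for pretrained SRGAN/ESRGAN and FaceNet models; their checkpoints and loading code are not given in the abstract.

```python
import torch
import torch.nn.functional as F
from PIL import Image
from torchvision import transforms

to_tensor = transforms.ToTensor()

def degrade(img: Image.Image, factor: int = 4) -> Image.Image:
    # Simulate a low-resolution surveillance capture by bicubic downscaling.
    w, h = img.size
    return img.resize((w // factor, h // factor), Image.BICUBIC)

@torch.no_grad()
def embed_lr_face(img_path: str, sr_generator, facenet) -> torch.Tensor:
    # sr_generator: pretrained SRGAN/ESRGAN generator (assumed, 4x upscale).
    # facenet: pretrained FaceNet embedding network (assumed).
    lr = to_tensor(degrade(Image.open(img_path).convert("RGB"))).unsqueeze(0)
    sr = sr_generator(lr)                               # super-resolved face
    sr = F.interpolate(sr, size=(160, 160),
                       mode="bicubic", align_corners=False)  # FaceNet input size
    return F.normalize(facenet(sr), dim=1)              # unit-norm identity embedding
```

Two such embeddings would then be compared against a distance threshold; in the FaceNet protocol, the validation rate quoted above is the fraction of genuine pairs accepted at a fixed false-accept rate.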


10.2196/17234 ◽  
2020 ◽  
Vol 22 (4) ◽  
pp. e17234 ◽  
Author(s):  
Bin Liang ◽  
Na Yang ◽  
Guosheng He ◽  
Peng Huang ◽  
Yong Yang

Background Cancer has become the second leading cause of death globally. Most cancer cases are due to genetic mutations, which affect metabolism and can result in facial changes. Objective In this study, we aimed to identify the facial features of patients with cancer using the deep learning technique. Methods Images of the faces of patients with cancer were collected to build the cancer face image data set. A face image data set of people without cancer was built by randomly selecting images from the publicly available MegaAge data set according to the sex and age distribution of the cancer face image data set. Each face image was preprocessed to obtain an upright, centered face chip, after which the background was filtered out to exclude the effects of irrelevant factors. A residual neural network was constructed to classify cancer and noncancer cases. Transfer learning, minibatches, few epochs, L2 regularization, and random dropout were used as training strategies to prevent overfitting. Moreover, guided gradient-weighted class activation mapping (guided Grad-CAM) was used to reveal the relevant features. Results A total of 8124 face images of patients with cancer (men: n=3851, 47.4%; women: n=4273, 52.6%) were collected from January 2018 to January 2019. The ages of the patients ranged from 1 year to 70 years (median age 52 years). The average faces of both male and female patients with cancer displayed more obvious facial adiposity than the average faces of people without cancer, which was supported by a landmark comparison. On the test data set, training was terminated after 5 epochs. The area under the receiver operating characteristic curve was 0.94, and the accuracy was 0.82. The main relevant feature of cancer cases was facial skin, while the relevant features of noncancer cases were extracted from the complementary face region. Conclusions In this study, we built a face data set of patients with cancer and constructed a deep learning model to classify the faces of people with and without cancer. We found that facial skin and adiposity were closely related to the presence of cancer.
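A minimal sketch of the training setup the Methods section describes, assuming PyTorch; the backbone choice, data loader, and hyperparameter values are illustrative, not taken from the paper.

```python
import torch
import torch.nn as nn
from torchvision import models

# Transfer learning: start from an ImageNet-pretrained residual network and
# swap the final layer for a dropout + 2-way (cancer vs. noncancer) head.
model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
model.fc = nn.Sequential(
    nn.Dropout(p=0.5),                        # random dropout against overfitting
    nn.Linear(model.fc.in_features, 2),
)

# weight_decay implements the L2 regularization mentioned in the abstract.
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4, weight_decay=1e-4)
criterion = nn.CrossEntropyLoss()

def train(loader, epochs=5):                  # few epochs, as reported
    model.train()
    for _ in range(epochs):
        for chips, labels in loader:          # minibatches of face chips
            optimizer.zero_grad()
            criterion(model(chips), labels).backward()
            optimizer.step()
```

Guided Grad-CAM would then be run on the trained network to highlight which facial regions, such as the skin, drive each prediction.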


10.29007/tlhq ◽  
2020 ◽  
Author(s):  
Abdelwahed Nahli ◽  
Yuanzhouhan Cao ◽  
Shugong Xu

Remarkable progress has been made in face detection, a core part of computer vision. Nevertheless, motion blur still presents substantial challenges to face detection. The most recent face image deblurring methods make oversimplifying assumptions and fail to restore the highly structured face shape and identity information. We therefore propose a data-driven face image deblurring approach that fosters face detection and identity preservation. The proposed model comprises two sequential data streams: the first is trained without any supervision on real, unlabeled clear/blurred data to generate close-to-realistic blurred images at inference time; the generated labeled data are then exploited by a second, supervised learning-based data stream to learn the mapping from the blur domain to the clear one. We use the restored data to run experiments on the face detection task. The experimental evaluation demonstrates that our results outperform prior work and supports our system design and training strategy.
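A rough sketch of the two-stream design under stated assumptions, in PyTorch; `tiny_cnn` is a hypothetical stand-in since the abstract does not specify the architectures. Stream 1 (trained unsupervised, e.g. adversarially, on unpaired clear/blurred faces) synthesizes realistic blur; stream 2 then trains supervised on the generated pairs to invert it.

```python
import torch
import torch.nn as nn

def tiny_cnn() -> nn.Module:
    # Hypothetical stand-in; the paper's actual networks are not given here.
    return nn.Sequential(
        nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
        nn.Conv2d(32, 3, 3, padding=1),
    )

blur_gen = tiny_cnn()    # stream 1: clear -> realistic blur (trained unsupervised)
deblur_net = tiny_cnn()  # stream 2: blur -> clear (trained supervised below)

def train_stream2(clear_loader, epochs=10):
    blur_gen.eval()                            # stream 1 is frozen at this stage
    opt = torch.optim.Adam(deblur_net.parameters(), lr=1e-4)
    recon = nn.L1Loss()
    for _ in range(epochs):
        for clear in clear_loader:             # batches of clear face images
            with torch.no_grad():
                blurred = blur_gen(clear)      # synthesize a labeled (blur, clear) pair
            loss = recon(deblur_net(blurred), clear)
            opt.zero_grad(); loss.backward(); opt.step()
```

The restored outputs of `deblur_net` would then be fed to an off-the-shelf face detector to measure the detection gain the abstract reports.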

