Face Images Classification using VGG-CNN

2021 ◽  
Vol 4 (1) ◽  
pp. 49
Author(s):  
I Nyoman Gede Arya Astawa ◽  
Made Leo Radhitya ◽  
I Wayan Raka Ardana ◽  
Felix Andika Dwiyanto

Image classification is a fundamental problem in computer vision. In facial recognition, image classification can speed up the training process and also significantly improve accuracy. Deep learning methods are commonly used in facial recognition; one of them is the Convolutional Neural Network (CNN), which achieves high accuracy. This study aims to combine CNN for facial recognition with VGG for the classification process. The process begins by inputting the face image. Then, a preprocessing feature-extractor method is used for transfer learning. This study uses a VGG-Face model as the transfer-learning optimization model, with a pre-trained model architecture. Specifically, the features extracted from an image are numeric vectors, which the model uses to describe specific features in the image. The face image data are divided into two parts: 17% test data and 83% training data. The results show that the validation accuracy (val_accuracy), loss, and validation loss (val_loss) are excellent. The best training results are obtained for images produced by digital cameras with modified classifications. The val_accuracy is very high (99.84%), not far from the accuracy value (94.69%). This slight difference indicates an excellent model: too large a difference would indicate underfitting, while an accuracy value higher than the validation accuracy would indicate overfitting. Likewise for the losses: val_loss is 0.69% and loss is 10.41%.
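The 83/17 split and the accuracy-gap reasoning above can be sketched in a few lines of numpy; the feature array, identity labels, and the 10-point gap threshold below are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative stand-ins for face feature vectors and identity labels.
features = rng.normal(size=(120, 128))   # 120 images, 128-dim vectors
labels = rng.integers(0, 5, size=120)    # 5 hypothetical identities

# 83% train / 17% test split, as in the paper.
idx = rng.permutation(len(features))
cut = int(0.83 * len(features))
train_idx, test_idx = idx[:cut], idx[cut:]
x_train, y_train = features[train_idx], labels[train_idx]
x_test, y_test = features[test_idx], labels[test_idx]

def gap_diagnosis(train_acc, val_acc, tol=0.10):
    """Heuristic reading of the train/validation accuracy gap
    discussed in the abstract; tol is an assumed threshold."""
    gap = train_acc - val_acc
    if gap > tol:
        return "possible overfit (train accuracy far above validation)"
    if gap < -tol:
        return "possible underfit (validation far above train accuracy)"
    return "gap is small: model generalizes well"

print(len(x_train), len(x_test))        # 99 21
print(gap_diagnosis(0.9469, 0.9984))    # the abstract's reported values
```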

2021 ◽  
Vol 2021 ◽  
pp. 1-12
Author(s):  
Zhixue Liang

In the contactless delivery scenario, the self-pickup cabinet is an important terminal delivery device, and face recognition is one of the efficient ways to achieve contactless express delivery. In order to effectively recognize face images in unrestricted environments, an unrestricted face recognition algorithm based on transfer learning is proposed in this study. First, the region extraction network of the Faster R-CNN algorithm is improved to increase the recognition speed of the algorithm. Then, a first transfer learning step is applied between the large ImageNet dataset and a face image dataset captured under restricted conditions. A second transfer learning step is applied between the restricted-condition face images and the unrestricted face image dataset. Finally, the unrestricted face images are processed by an image enhancement algorithm to increase their similarity to the restricted face images, so that the second transfer learning step can be carried out effectively. Experimental results show that the proposed algorithm achieves a better recognition rate and recognition speed on the CASIA-WebFace, LFW, and MegaFace datasets.
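The abstract does not name the enhancement algorithm used to make unrestricted face images resemble the restricted ones; histogram equalization is one common choice for such contrast normalization, sketched below on a toy low-contrast image. This is purely illustrative, not necessarily the paper's method.

```python
import numpy as np

def equalize_histogram(img):
    """Histogram-equalize an 8-bit grayscale image (H x W uint8 array)."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    # Map gray levels so the cumulative distribution becomes ~uniform.
    cdf_min = cdf[cdf > 0][0]
    lut = np.clip(np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * 255),
                  0, 255).astype(np.uint8)
    return lut[img]

# Toy low-contrast "face" image: gray values clustered in a narrow band.
rng = np.random.default_rng(1)
img = rng.integers(100, 140, size=(64, 64), dtype=np.uint8)
out = equalize_histogram(img)
print(img.min(), img.max())   # narrow input range
print(out.min(), out.max())   # stretched to the full 0..255 range
```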


Geosciences ◽  
2021 ◽  
Vol 11 (8) ◽  
pp. 336
Author(s):  
Rafael Pires de Lima ◽  
David Duarte

Convolutional neural networks (CNN) are currently the most widely used tool for the classification of images, especially if such images have large within-group and small between-group variance. Thus, one of the main factors driving the development of CNN models is the creation of large, labelled computer vision datasets, some containing millions of images. Thanks to transfer learning, a technique that modifies a model trained on a primary task to execute a secondary task, the adaptation of CNN models trained on such large datasets has rapidly gained popularity in many fields of science, geosciences included. However, the trade-off between two main components of the transfer learning methodology for geoscience images is still unclear: the difference between the datasets used in the primary and secondary tasks, and the amount of data available for the primary task itself. We evaluate the performance of CNN models pretrained with different types of image datasets—specifically, dermatology, histology, and raw food—that are fine-tuned to the task of petrographic thin-section image classification. Results show that CNN models pretrained on ImageNet achieve higher accuracy due to the larger number of samples, as well as the larger variability of the samples in ImageNet compared to the other datasets evaluated.
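The transfer-learning recipe evaluated here (keep a pretrained backbone fixed, then fit a new classifier on its features for the secondary task) can be sketched with a plain softmax head in numpy. The "features" below are random stand-ins for real backbone activations, and the dimensions are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for features produced by a frozen, pretrained CNN backbone;
# in practice these come from the penultimate layer of the network.
n, d, k = 300, 64, 3                 # samples, feature dim, classes
features = rng.normal(size=(n, d))
labels = (features @ rng.normal(size=(d, k))).argmax(axis=1)

# Train only the new linear classification head (the secondary task),
# using softmax cross-entropy and plain gradient descent.
w = np.zeros((d, k))
onehot = np.eye(k)[labels]
for _ in range(300):
    logits = features @ w
    logits -= logits.max(axis=1, keepdims=True)
    p = np.exp(logits)
    p /= p.sum(axis=1, keepdims=True)
    w -= 0.5 * features.T @ (p - onehot) / n  # cross-entropy gradient step

acc = ((features @ w).argmax(axis=1) == labels).mean()
print(round(acc, 3))   # high accuracy: the head fits the fixed features
```

Fine-tuning, as opposed to this frozen-feature setup, would also update the backbone weights with a small learning rate.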


Author(s):  
A. BELÉN MORENO ◽  
ÁNGEL SÁNCHEZ ◽  
ENRIQUE FRÍAS-MARTÍNEZ

Automatic face recognition is becoming increasingly important due to the security applications derived from it. Although the facial recognition problem has focused on 2D images, the proliferation of 3D scanning hardware has recently made 3D face recognition a feasible application. This 3D approach does not need any color information, which gives it two main advantages over more traditional 2D approaches: (1) robustness under lighting variations and (2) more relevant geometric information. In this paper we present a new 3D facial model based on the curvature properties of the surface. Our system is able to detect, from a large set, the subset of facial characteristics with the highest discrimination power. The robustness of the model is tested by comparing recognition rates in both controlled and non-controlled environments with respect to facial expressions and facial rotations. A difference of only 5% between the recognition rates of the two environments shows that the model has a high degree of robustness against pose and facial expressions. We consider this robustness sufficient to implement facial recognition applications, which can achieve up to a 91% correct recognition rate. A public 3D face database containing face rotations and expressions has been created to carry out the recognition experiments.
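Curvature-based surface description of the kind used here can be illustrated by computing Gaussian and mean curvature from a depth map with finite differences; this is a generic sketch of the standard formulas for a Monge patch z(x, y), not the authors' specific descriptor.

```python
import numpy as np

def surface_curvatures(z, h=1.0):
    """Gaussian (K) and mean (H) curvature of a depth map z sampled on a
    regular grid with spacing h, via finite-difference derivatives."""
    zy, zx = np.gradient(z, h)        # first partials (rows are the y axis)
    zxy, zxx = np.gradient(zx, h)     # second partials
    zyy, _ = np.gradient(zy, h)
    denom = 1.0 + zx**2 + zy**2
    K = (zxx * zyy - zxy**2) / denom**2
    H = ((1 + zx**2) * zyy - 2 * zx * zy * zxy
         + (1 + zy**2) * zxx) / (2 * denom**1.5)
    return K, H

# Sanity check on a paraboloid z = (x^2 + y^2) / 2, whose Gaussian and
# mean curvature at the apex are both exactly 1.
x = np.linspace(-1, 1, 201)
X, Y = np.meshgrid(x, x)
K, H = surface_curvatures((X**2 + Y**2) / 2, h=x[1] - x[0])
print(K[100, 100], H[100, 100])   # both ~1.0 at the apex
```

Regions where K and H change sign (peaks, pits, ridges, saddles) are what make curvature maps discriminative for face surfaces.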


Kursor ◽  
2018 ◽  
Vol 9 (2) ◽  
Author(s):  
Eva Y Puspaningrum ◽  
Budi Nugroho ◽  
Andri Istifariyanto

Facial recognition is one of the most popular issues in the field of pattern recognition. In face recognition under uncontrolled lighting conditions, the lighting can be more significant than the physical characteristics of individual faces. Uncontrolled lighting from the right or left can affect the face image. There is a lot of research on facial recognition, but little attention has been given to the fact that the face image is a symmetrical object. Several studies have explored and exploited the symmetrical properties of the face for face recognition. In this paper, we propose a pre-processing method to solve one of the common problems in facial images with varying illumination. We utilize the symmetric property of the face, then perform gamma correction, and then classify using Robust Regression. The results of this experiment show an average accuracy of 94.31%, and the proposed technique improves recognition accuracy, especially in images with extreme lighting conditions, using gamma correction parameter γ = 0.3.
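A minimal sketch of the two preprocessing ideas, gamma correction with γ = 0.3 and exploiting facial symmetry, is below. The specific mirror-the-brighter-half rule is an assumption for illustration (it assumes an even image width and side lighting), not necessarily the paper's procedure.

```python
import numpy as np

def gamma_correct(img, gamma=0.3):
    """Apply gamma correction to an 8-bit grayscale image; gamma < 1
    brightens dark regions."""
    norm = img.astype(np.float64) / 255.0
    return np.clip((norm ** gamma) * 255.0, 0, 255).astype(np.uint8)

def symmetrize_dark_half(img):
    """Illustrative use of facial symmetry (assumed rule, even width):
    if one half of the image is much darker (e.g. side lighting),
    replace it with the mirrored brighter half."""
    h, w = img.shape
    left, right = img[:, : w // 2], img[:, w // 2 :]
    if left.mean() < right.mean():
        return np.hstack([np.fliplr(right), right])
    return np.hstack([left, np.fliplr(left)])

img = np.zeros((4, 6), dtype=np.uint8)
img[:, 3:] = 200                    # right half lit, left half in shadow
sym = symmetrize_dark_half(img)
print(sym.mean())                   # 200.0 after mirroring the lit half
print(gamma_correct(np.array([[64]], dtype=np.uint8))[0, 0])  # brightened
```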


Author(s):  
Bambang Krismono Triwijoyo

The face is a challenging object to be recognized and analyzed automatically by a computer in many interesting applications such as facial gender classification. The large visual variations of faces, such as occlusions, pose changes, and extreme lighting, impose great challenges on these tasks in real-world applications. This paper explains fast transfer learning representations through the use of a convolutional neural network (CNN) model for gender classification from face images. Transfer learning aims to provide a framework that utilizes previously acquired knowledge to solve new but similar problems much more quickly and effectively. The experimental results showed that the transfer learning method trains faster and achieves higher accuracy than a CNN network without transfer learning.


2021 ◽  
Vol 11 (16) ◽  
pp. 7310
Author(s):  
Hongxia Deng ◽  
Zijian Feng ◽  
Guanyu Qian ◽  
Xindong Lv ◽  
Haifang Li ◽  
...  

The world today is being hit by COVID-19. As opposed to fingerprints and ID cards, facial recognition technology can effectively prevent the spread of viruses in public places because it does not require contact with specific sensors. However, people also need to wear masks when entering public places, and masks greatly affect the accuracy of facial recognition. Accurately performing facial recognition while people wear masks is a great challenge. In order to solve the problem of low facial recognition accuracy with mask wearers during the COVID-19 epidemic, we propose a masked-face recognition algorithm based on large margin cosine loss (MFCosface). Due to insufficient masked-face data for training, we designed a masked-face image generation algorithm based on the detection of key facial features. The face is detected and aligned through a multi-task cascaded convolutional network, and then we detect the key features of the face and select the mask template for coverage according to the positional information of the key features. Finally, we generate the corresponding masked-face image. Through analysis of the masked-face images, we found that triplet loss is not applicable to our datasets, because the results of online triplet selection contain fewer mask changes, making it difficult for the model to learn the relationship between mask occlusion and feature mapping. We use a large margin cosine loss as the loss function for training, which maps all the feature samples into a feature space with smaller intra-class distance and larger inter-class distance. In order to make the model pay more attention to the area that is not covered by the mask, we designed an Att-inception module that combines the Inception-Resnet module and the convolutional block attention module, which increases the weight of any unoccluded area in the feature map, thereby enlarging the unoccluded area's contribution to the identification process. Experiments on several masked-face datasets have proved that our algorithm greatly improves the accuracy of masked-face recognition and can accurately perform facial recognition with masked subjects.
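The large margin cosine loss named above (CosFace-style) can be written directly from its definition: normalize both features and class weights, subtract a margin m from the true class's cosine, and scale by s before the softmax. The s and m values below are typical defaults, not necessarily those used in MFCosface.

```python
import numpy as np

def large_margin_cosine_loss(features, weights, labels, s=30.0, m=0.35):
    """CosFace-style large margin cosine loss.
    features: (n, d) embeddings; weights: (d, k) class weight vectors;
    labels: (n,) integer class ids; s: scale; m: cosine margin."""
    f = features / np.linalg.norm(features, axis=1, keepdims=True)
    w = weights / np.linalg.norm(weights, axis=0, keepdims=True)
    cos = f @ w                               # (n, k) cosine similarities
    cos[np.arange(len(labels)), labels] -= m  # margin on the true class
    logits = s * cos
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    log_p = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -log_p[np.arange(len(labels)), labels].mean()

rng = np.random.default_rng(0)
f = rng.normal(size=(8, 16))      # toy embeddings
W = rng.normal(size=(16, 5))      # 5 toy identities
y = rng.integers(0, 5, size=8)
loss_m = large_margin_cosine_loss(f, W, y, m=0.35)
loss_0 = large_margin_cosine_loss(f, W, y, m=0.0)
print(loss_m > loss_0)   # the margin always makes the loss stricter
```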


2012 ◽  
Vol 198-199 ◽  
pp. 1383-1388
Author(s):  
Hong Hai Liu ◽  
Xiang Hua Hou

When extracting face image features based on the pixel distribution in the face image, there is always a large amount of calculation, and the feature vectors generated after feature extraction have high dimensions. This paper puts forward a feature extraction method based on prior knowledge of the face and Haar features. Firstly, the Haar feature expressions of face images are classified, and the face features are decomposed into edge features, line features, and center-surround features, which are further reduced to expressions of two, three, and four rectangles. In addition, each rectangle varies in size. However, for face image combination, this kind of expression involves too much redundancy and a large amount of calculation. In order to solve the problem of the large amount of calculation, the integral image is adopted to speed up the rectangle feature calculation. In addition, a trained classifier is adopted to reduce the redundant expressions. The results show that using the Haar feature expression of face images can improve the speed and efficiency of recognition.
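The integral-image speedup described above can be shown concretely: one summed-area table makes any rectangle sum, and hence any two-, three-, or four-rectangle Haar feature, a constant-time computation regardless of rectangle size. A minimal sketch:

```python
import numpy as np

def integral_image(img):
    """Summed-area table with a zero row/column prepended, so the sum
    over img[y0:y1, x0:x1] is ii[y1,x1] - ii[y0,x1] - ii[y1,x0] + ii[y0,x0]."""
    ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1), dtype=np.int64)
    ii[1:, 1:] = img.cumsum(axis=0).cumsum(axis=1)
    return ii

def rect_sum(ii, y0, x0, y1, x1):
    """Sum of any rectangle in O(1) using four table lookups."""
    return ii[y1, x1] - ii[y0, x1] - ii[y1, x0] + ii[y0, x0]

def haar_edge_feature(ii, y, x, h, w):
    """Two-rectangle (edge) Haar feature: left half minus right half."""
    left = rect_sum(ii, y, x, y + h, x + w // 2)
    right = rect_sum(ii, y, x + w // 2, y + h, x + w)
    return left - right

img = np.zeros((8, 8), dtype=np.int64)
img[:, :4] = 10                  # bright left half, dark right half
ii = integral_image(img)
print(haar_edge_feature(ii, 0, 0, 8, 8))   # 320: strong vertical edge
```

Three- and four-rectangle features follow the same pattern with extra `rect_sum` calls, which is why the table pays for itself across the thousands of candidate features per window.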


Author(s):  
Xiaolin Tang ◽  
Xiaogang Wang ◽  
Jin Hou ◽  
Huafeng Wu ◽  
Ping He

Introduction: Under complex illumination conditions, such as poor light sources and rapidly changing light, the current gamma transform has two disadvantages in preprocessing face images: first, the transformation parameters need to be set based on experience; second, the details of the transformed image are not obvious enough. Objective: Improve the current gamma transform. Methods: This paper proposes a weighted fusion algorithm of adaptive gamma transform and edge feature extraction. First, this paper proposes an adaptive gamma transform algorithm for face image preprocessing, in which the transformation parameter is generated by calculation according to the specific gray values of the input face image. Secondly, this paper uses the Sobel edge detection operator to extract the edge information of the transformed image to obtain an edge detection image. Finally, this paper uses the adaptively transformed image and the edge detection image to obtain the final processing result through a weighted fusion algorithm. Results: The contrast of the preprocessed face image is appropriate, and the details of the image are obvious. Conclusion: The method proposed in this paper can enhance the face image while retaining more facial details, requires no human-computer interaction, and has lower computational complexity.
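The three steps (adaptive gamma from the image's own gray statistics, Sobel edge extraction, weighted fusion) can be sketched as follows. The specific rule mapping the mean gray level to a gamma value, and the fusion weight alpha, are illustrative assumptions, since the abstract does not give the paper's exact formulas.

```python
import numpy as np

def adaptive_gamma(img):
    """Derive gamma from the image's own mean gray level (assumed rule:
    pull the mean toward mid-gray, so dark images get gamma < 1).
    Assumes 0 < mean < 255."""
    mean = img.mean() / 255.0
    gamma = np.log(0.5) / np.log(mean)
    return (img / 255.0) ** gamma * 255.0, gamma

def sobel_edges(img):
    """Edge magnitude via the Sobel operator (valid region only)."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    h, w = img.shape
    gx = np.zeros((h - 2, w - 2))
    gy = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            patch = img[i:i + 3, j:j + 3]
            gx[i, j] = (patch * kx).sum()
            gy[i, j] = (patch * ky).sum()
    return np.hypot(gx, gy)

def fuse(corrected, edges, alpha=0.8):
    """Weighted fusion of the gamma-corrected image with its edge map."""
    e = np.pad(edges, 1)                  # pad back to the original size
    if e.max() > 0:
        e = e / e.max() * 255.0
    return np.clip(alpha * corrected + (1 - alpha) * e, 0, 255)

dark = np.full((10, 10), 50.0)           # underexposed toy "face"
corrected, g = adaptive_gamma(dark)
result = fuse(corrected, sobel_edges(corrected))
print(round(g, 3), round(corrected.mean(), 1))  # gamma < 1, image brightened
```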

