Identify Attractive and Unattractive Individuals Based on Geometric Features Using Neural Network

2021
Vol 38 (4)
pp. 1007-1012
Author(s): Shakiba Ahmadimehr, Mohammad Karimi Moridani

This paper explores the essence of facial attractiveness from the viewpoint of geometric features, with the aim of classifying and identifying attractive and unattractive individuals. We present a simple but useful feature extraction approach for facial beauty classification. Evaluation of facial attractiveness was performed with different combinations of geometric facial features using a deep learning method. The method focuses on the geometry of the face and uses images of actual faces for the analysis. The proposed method was tested on an image database containing 60 images of men's faces (attractive or unattractive) aged 20-50 years, taken from both frontal and lateral positions. Principal component analysis (PCA) was then applied to reduce the dimensionality of the beauty features, and finally a neural network was used to judge whether each analysed face is attractive or not. The results show that the values of the geometric features of a face are one index for identifying facial attractiveness, and that changing these facial parameters can change a face from unattractive to attractive and vice versa. Based on the 60 facial images, a high accuracy of 88% and a sensitivity of 92% were obtained for the two-level classification (attractive or not).
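A minimal sketch of the pipeline described above (geometric features, PCA, then a neural-network classifier), assuming the geometric ratios have already been measured per face. The random data, feature count, and the scikit-learn MLP used here are illustrative stand-ins, not the paper's implementation.

```python
# Sketch: geometric facial features -> PCA -> neural-network classifier.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score, recall_score

# X: one row per face, columns are hand-measured geometric ratios (assumed already extracted).
# y: 1 = rated attractive, 0 = rated unattractive.  Random data stands in for the 60 faces.
rng = np.random.default_rng(0)
X = rng.normal(size=(60, 12))
y = rng.integers(0, 2, size=60)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

pca = PCA(n_components=5).fit(X_train)              # reduce the geometric feature set
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
clf.fit(pca.transform(X_train), y_train)

pred = clf.predict(pca.transform(X_test))
print("accuracy:", accuracy_score(y_test, pred))
print("sensitivity:", recall_score(y_test, pred))   # sensitivity = recall of the positive class
```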

2019
Vol 15 (1)
Author(s): Archana Harsing Sable, Sanjay N. Talbar

Abstract Numerous algorithms have struggled to recognize faces invariantly to plastic surgery, owing to the texture variations it introduces in the skin. Since plastic surgery remains a challenging issue in the domain of face recognition, the theme deserves to be restudied from both theoretical and experimental perspectives. In this paper, Adaptive Gradient Location and Orientation Histogram (AGLOH)-based feature extraction is proposed to accomplish effective plastic-surgery face recognition. The proposed features are extracted from the granular space of the faces. Additionally, variants of the local binary pattern are extracted to accompany the AGLOH features. Subsequently, the feature dimensionality is reduced using principal component analysis (PCA) to train an artificial neural network. The network is trained using particle swarm optimization instead of the traditional learning algorithms. The experimentation involved 452 plastic surgery faces covering blepharoplasty, brow lift, liposhaving, malar augmentation, mentoplasty, otoplasty, rhinoplasty, rhytidectomy and skin peeling. Finally, the proposed AGLOH features prove their performance dominance.
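A sketch of the feature side of this pipeline, assuming scikit-image's uniform local binary pattern as a stand-in for the accompanying LBP-variant features; AGLOH itself and the PSO-trained network are not reproduced here.

```python
# Sketch: LBP histograms per face, followed by PCA for dimensionality reduction.
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.decomposition import PCA

def lbp_histogram(gray_face, P=8, R=1.0, method="uniform"):
    """Uniform LBP codes summarised as a normalised histogram."""
    codes = local_binary_pattern(gray_face, P, R, method)
    n_bins = P + 2                           # uniform patterns + one non-uniform bin
    hist, _ = np.histogram(codes, bins=n_bins, range=(0, n_bins), density=True)
    return hist

# Random grayscale patches stand in for the 452 plastic-surgery face images.
rng = np.random.default_rng(1)
faces = (rng.random((452, 64, 64)) * 255).astype(np.uint8)
features = np.stack([lbp_histogram(f) for f in faces])

reduced = PCA(n_components=8).fit_transform(features)   # input to the PSO-trained ANN
print(reduced.shape)
```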


Connectivity
2020
Vol 145 (3)
Author(s): V. S. Orlenko, I. I. Kolosinsʹkyy

The article deals with the technical side of face recognition: the neural network. The advantages of a neural network for identifying a person are substantiated, and the stages of comparing two images are considered. The first step is defined as finding the face in the photo. Using several tests, the best neural network was identified, which made it possible to effectively obtain a normalized image of a person's face. The second step is finding the facial features on which the comparative analysis is performed. This stage is the main focus of the article: 16 sets of tests were carried out, each containing 12 tests. Two large datasets were used so that the effectiveness of the algorithms could be evaluated not only under ideal circumstances but also in the field. The results of the study allowed us to determine the best method and neural model for finding a face and dividing it into parts. It is also determined which part of the face the algorithm recognizes best, which will allow adjustments to be made to the camera's placement.
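An illustrative two-step comparison in the spirit of the pipeline above: locate and normalize the face, then compare feature vectors. The OpenCV Haar cascade detector and the placeholder embedding function are assumptions; the article evaluates several neural detectors and models rather than this specific combination.

```python
# Sketch: (1) detect and crop the face, (2) compare embeddings by cosine similarity.
import cv2
import numpy as np

detector = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def normalized_face(image_bgr, size=(160, 160)):
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    boxes = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(boxes) == 0:
        return None
    x, y, w, h = max(boxes, key=lambda b: b[2] * b[3])   # keep the largest detection
    return cv2.resize(image_bgr[y:y + h, x:x + w], size)

def embed(face):
    # Placeholder for the feature-extraction network studied in the article.
    return face.astype(np.float32).ravel() / 255.0

def similarity(img_a, img_b):
    fa, fb = normalized_face(img_a), normalized_face(img_b)
    if fa is None or fb is None:
        return None
    ea, eb = embed(fa), embed(fb)
    return float(np.dot(ea, eb) / (np.linalg.norm(ea) * np.linalg.norm(eb)))
```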


Author(s): C. K. Tan, S. J. Wilcox, J. Ward

A series of experiments on two different coals over a range of burner conditions has been conducted to investigate the behaviour of pulverised fuel (pf) coal combustion on a 150 kW pf coal burner with a simulated eyebrow (a growth of slag in the near-burner region). The eyebrow was simulated by inserting an annulus of refractory material immediately in front of the face of the original burner quarl. The infrared (IR) radiation and sound emitted by the flame were monitored and processed into a number of features, which were then used to train and test a self-organising map neural network. The network achieved a classification success rate never lower than 99.3%, indicating that it is not only possible to detect the presence of an eyebrow by monitoring the flame, but also to give an indication of its size, over a reasonably large range of conditions.
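A sketch of how flame features could be classified with a self-organising map, assuming the third-party MiniSom library and synthetic IR/acoustic statistics; the majority-vote labelling of map units is also an assumption, not necessarily the authors' scheme.

```python
# Sketch: statistical features from IR and sound signals mapped onto a SOM,
# with map units labelled by eyebrow condition.
from collections import Counter, defaultdict

import numpy as np
from minisom import MiniSom

rng = np.random.default_rng(2)
features = rng.normal(size=(500, 6))            # e.g. mean/variance of IR and sound bands
labels = rng.integers(0, 3, size=500)           # 0 = no eyebrow, 1 = small, 2 = large

som = MiniSom(8, 8, input_len=6, sigma=1.5, learning_rate=0.5, random_seed=0)
som.train_random(features, num_iteration=5000)

# Label each map unit with the majority class of the training samples it wins.
votes = defaultdict(Counter)
for x, lab in zip(features, labels):
    votes[som.winner(x)][lab] += 1
unit_label = {unit: counts.most_common(1)[0][0] for unit, counts in votes.items()}

def classify(sample):
    return unit_label.get(som.winner(sample))

print(classify(features[0]))
```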


2020
Vol 10 (24)
pp. 8940
Author(s): Wanshun Gao, Xi Zhao, Jianhua Zou

Face recognition accuracy drops rapidly under drastic pose variation because of the limited samples available during model training. In this paper, we propose a pose-autoaugment face recognition framework (PAFR) based on training a Convolutional Neural Network (CNN) with multi-view face augmentation. The proposed framework consists of three parts: face augmentation, CNN training, and face matching. The face augmentation part combines pose autoaugmentation and background appending to increase the pose variations of each subject. In the second part, we train a CNN model on the generated facial images to enhance pose-invariant feature extraction. In the third part, we concatenate the feature vectors of each face and its horizontally flipped counterpart from the trained CNN model to obtain a robust feature. The correlation score between two faces is computed as the cosine similarity of their robust features. Comparative experiments are demonstrated on the Bosphorus and CASIA-3D databases.
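A short sketch of the matching step only, assuming `cnn_embed` stands in for the trained CNN's feature extractor (it is not part of the paper's code): embed the face and its horizontal flip, concatenate, and score pairs by cosine similarity.

```python
# Sketch of the face-matching stage: concatenated (face, flipped-face) embeddings + cosine score.
import numpy as np

def cnn_embed(image):                      # placeholder: returns a 1-D feature vector
    return image.astype(np.float32).ravel()

def robust_feature(image):
    flipped = image[:, ::-1]               # horizontal flip
    return np.concatenate([cnn_embed(image), cnn_embed(flipped)])

def correlation_score(img_a, img_b):
    fa, fb = robust_feature(img_a), robust_feature(img_b)
    return float(np.dot(fa, fb) / (np.linalg.norm(fa) * np.linalg.norm(fb)))
```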


2006
Vol 18 (1)
pp. 119-142
Author(s): Yael Eisenthal, Gideon Dror, Eytan Ruppin

This work presents a novel study of the notion of facial attractiveness in a machine learning context. To this end, we collected human beauty ratings for data sets of facial images and used various techniques for learning the attractiveness of a face. The trained predictor achieves a significant correlation of 0.65 with the average human ratings. The results clearly show that facial beauty is a universal concept that a machine can learn. Analysis of the accuracy of the beauty predictor as a function of the size of the training data indicates that a machine producing human-like attractiveness ratings could be obtained given a moderately larger data set.
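A minimal sketch of this style of evaluation, assuming synthetic features, synthetic mean ratings, and an SVR regressor as stand-ins for the study's actual data and learners: fit a predictor and report the Pearson correlation with the average human ratings.

```python
# Sketch: regress attractiveness ratings from image features, report Pearson correlation.
import numpy as np
from scipy.stats import pearsonr
from sklearn.model_selection import train_test_split
from sklearn.svm import SVR

rng = np.random.default_rng(3)
X = rng.normal(size=(200, 30))                                # image-derived features (synthetic)
ratings = X[:, 0] * 0.5 + rng.normal(scale=0.5, size=200)     # synthetic mean human ratings

X_tr, X_te, y_tr, y_te = train_test_split(X, ratings, test_size=0.3, random_state=0)
model = SVR(kernel="rbf").fit(X_tr, y_tr)
r, _ = pearsonr(model.predict(X_te), y_te)
print(f"correlation with human ratings: {r:.2f}")
```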


2020
Author(s): Song Tong, Xuefeng Liang, Takatsune Kumada, Sunao Iwaki

Empirical evidence has shown that there is an ideal arrangement of facial features (ideal ratios) that can optimize the attractiveness of a person's face. These putative ratios define facial attractiveness in terms of spatial relations and provide important rules for measuring the attractiveness of a face. In this paper, we show that a deep neural network (DNN) model can learn putative ratios based only on categorical annotation, with no annotated facial features for attractiveness explicitly given. To this end, we conducted three experiments. In Experiment 1, we trained a DNN model to recognize facial attractiveness using four category-specific neurons (female/male × high/low attractiveness). In Experiment 2, face-like images were generated by reversing the DNN model (e.g., deconvolution). These images depict the intuitive attributes of the four categories of facial attractiveness and reveal certain consistencies with reported evidence on the putative ratios of facial attractiveness. In Experiment 3, simulated psychophysical experiments on facial images with varying ratios of features reveal changes in the activity of the category-specific neurons that are remarkably similar to the human judgements reported in a previous study. These results show that the trained DNN model can learn putative ratios as key features for the representation of facial attractiveness. These findings advance our understanding of facial attractiveness and high-level human perception.
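A sketch of the Experiment 1 setup as described: a CNN whose output layer has four category-specific neurons (female/male × high/low attractiveness), trained from categorical labels only. The architecture, input size, and training call are illustrative assumptions, not the authors' model.

```python
# Sketch: CNN with four category-specific output neurons, trained on categorical labels.
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(128, 128, 3)),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(4, activation="softmax"),   # four category-specific output neurons
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
# model.fit(face_images, category_labels, epochs=...)  # labels in {0, 1, 2, 3}
```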


2021
Vol 9 (17)
pp. 111-120
Author(s): Hugo Andrade Carrera, Soraya Sinche Maita, Pablo Hidalgo Lascano

Since Covid-19 appeared, the world has entered a new stage in which everybody is trying to mitigate the effects of the virus. The mandatory use of face masks in public places, and when in contact with people outside the family circle, is one of the measures that many countries, including Ecuador, have implemented. The purpose of this article is therefore to develop a convolutional neural network model in TensorFlow, based on MobileNetV2, that performs mask detection in real-time video, with the key feature of determining whether a person is wearing a face mask properly or is not wearing a mask at all; the model is used together with OpenCV and a pretrained neural network that detects faces. In addition, the performance metrics of the neural network are analyzed, including precision, accuracy, recall and the F1 score, all as a function of the number of training epochs. The result is a model that classifies faces into three groups: without a face mask, wearing a face mask improperly, and wearing a mask properly. The model performs well on all metrics, with values greater than 85% for precision, recall and F1 score, and accuracy between 93% for 5 epochs and 95% for 25 epochs.
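A sketch of the MobileNetV2-based classifier with a three-way head (no mask / mask worn improperly / mask worn properly). The input size, head layout, and dataset objects are assumptions, and the real-time OpenCV face-detection stage is omitted.

```python
# Sketch: MobileNetV2 transfer learning with a three-class mask-usage head.
import tensorflow as tf

base = tf.keras.applications.MobileNetV2(
    weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False                                    # freeze the pretrained backbone

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(3, activation="softmax"),       # three mask-usage classes
])
model.compile(optimizer="adam", loss="categorical_crossentropy",
              metrics=["accuracy", tf.keras.metrics.Precision(), tf.keras.metrics.Recall()])
# model.fit(train_ds, validation_data=val_ds, epochs=25)  # train_ds/val_ds are assumed datasets
```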


Author(s): A. A. Kulikov

Currently, methods for recognizing objects in images work poorly and rely on intellectually unsatisfying approaches. Existing identification systems and methods do not completely solve the identification problem, namely identification in difficult conditions: interference, lighting, various changes to the face, and so on. To address these problems, a local detector for a reprint model of an object in an image was developed and described. A transforming autoencoder (TA), a neural network model, was developed for the local detector. This model is a subspecies of the general class of reduced-dimension neural networks. The local detector is able not only to identify the modified object but also to determine its original shape. A special feature of the TA is that it represents image sections in a compact form and estimates the parameters of the affine transformation. The transforming autoencoder is a heterogeneous network consisting of a set of smaller networks, called capsules. Artificial neural networks should use local capsules that perform rather complex internal calculations on their inputs and then encapsulate the results of these calculations in a small vector of highly informative outputs. Each capsule learns to recognize an implicitly defined visual object over a limited range of viewing conditions and deformations. It outputs both the probability that the object is present in its limited domain and a set of "instantiation parameters" that can include the exact pose, lighting, and deformation of the visual object relative to an implicitly defined canonical version of that object. The main advantage of capsules that output instantiation parameters is that they provide a simple way to recognize whole objects by recognizing their parts. A capsule can learn to output the pose of its visual object as a vector that is linearly related to the "natural" pose representations used in computer graphics. There is then a simple and highly selective test of whether the visual objects represented by two active capsules A and B are in the correct spatial relationship to activate a higher-level capsule C. The transforming autoencoder solves the problem of identifying facial images under interference (noise), changes in illumination, and changes in viewing angle.
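A minimal sketch of one capsule from a transforming autoencoder in the sense described above (recognition units infer a presence probability and a 2-D pose, the known shift between input and target is added to that pose, and generation units redraw the patch). The sizes, the single-capsule setup, and the shift-only pose are illustrative assumptions, not the author's detector.

```python
# Sketch: one capsule of a transforming autoencoder (presence probability + pose + generation).
import torch
import torch.nn as nn

class Capsule(nn.Module):
    def __init__(self, in_dim=28 * 28, n_rec=30, n_gen=30):
        super().__init__()
        self.recognise = nn.Sequential(nn.Linear(in_dim, n_rec), nn.Sigmoid())
        self.to_prob = nn.Linear(n_rec, 1)      # probability that the entity is present
        self.to_pose = nn.Linear(n_rec, 2)      # inferred (dx, dy) of the entity
        self.generate = nn.Sequential(nn.Linear(2, n_gen), nn.Sigmoid(), nn.Linear(n_gen, in_dim))

    def forward(self, x, extra_shift):
        h = self.recognise(x)
        p = torch.sigmoid(self.to_prob(h))
        pose = self.to_pose(h) + extra_shift    # apply the known transformation to the pose
        return p * self.generate(pose)          # reconstruction gated by presence probability

# Training pairs would be (patch, shifted patch, known shift); learning to output the shifted
# patch forces the pose units to represent the entity's position.
capsule = Capsule()
x = torch.rand(8, 28 * 28)
shift = torch.rand(8, 2)
print(capsule(x, shift).shape)   # torch.Size([8, 784])
```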


2020
Vol 8 (6)
pp. 5069-5073

Deep learning has been used to solve complex problems in various domains. As it advances, it also enables applications that become a major threat to our privacy, security and even our democracy. One such recently developed application is the "deepfake". Deepfake models can create fake images and videos that humans cannot differentiate from genuine ones. Counter-applications that automatically detect and analyze digital visual media are therefore necessary in today's world. This paper details the retraining of image classification models to capture the features of each deepfake video frame. Different sets of deepfake video frames are fed through a pretrained bottleneck layer of the neural network; this layer produces a condensed representation of every frame and exposes the artificial manipulations in deepfake videos. When checking deepfake videos, this technique achieved more than 87 per cent accuracy. The technique has been tested on the Face Forensics dataset and obtained good detection accuracy.
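A sketch of this kind of detector, assuming Xception as the frozen pretrained bottleneck (the paper does not specify this backbone): frames are sampled from a clip, passed through the bottleneck, and the pooled features feed a small real/fake classifier head.

```python
# Sketch: per-frame bottleneck features from a frozen pretrained CNN + a small classifier head.
import cv2
import numpy as np
import tensorflow as tf

bottleneck = tf.keras.applications.Xception(weights="imagenet", include_top=False, pooling="avg")
bottleneck.trainable = False

def frame_features(video_path, every_n=10):
    cap, feats, i = cv2.VideoCapture(video_path), [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if i % every_n == 0:
            rgb = cv2.cvtColor(cv2.resize(frame, (299, 299)), cv2.COLOR_BGR2RGB)
            x = tf.keras.applications.xception.preprocess_input(rgb.astype(np.float32))
            feats.append(bottleneck(x[np.newaxis])[0].numpy())
        i += 1
    cap.release()
    return np.array(feats)

head = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(2048,)),              # Xception pooled-bottleneck width
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),    # 1 = manipulated frame
])
head.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
```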


2014
Vol 998-999
pp. 869-872
Author(s): Na Li, Peng He, Qian Zhao

Many classifiers have been designed for face feature matching. A neural network is usually selected as the classifier because of its validity and universality, but its training time, number of training epochs, and convergence are not satisfactory, and its design is often influenced by the author's experience. In this work, a collaborative genetic algorithm and neural network approach is presented as a new face recognition classifier. First, the neural network (NN) weights are trained by the genetic algorithm (GA) until the stopping criterion is met; then the backpropagation (BP) algorithm continues training the network. Experiments on face recognition with the ORL face database show improvements in training time and training epochs. The simulation demonstrates the validity of the method.
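A compact sketch of the hybrid scheme, assuming a tiny single-hidden-layer network, random stand-in data, and simple elitism-plus-mutation GA settings rather than the paper's configuration: the GA evolves the flat weight vector, then backpropagation refines the best individual.

```python
# Sketch: GA evolves network weights first, then backpropagation fine-tunes them.
import numpy as np

rng = np.random.default_rng(4)
X = rng.normal(size=(100, 10))                      # stand-in for ORL face features
y = (X[:, 0] > 0).astype(float).reshape(-1, 1)
n_in, n_hid, n_out = 10, 8, 1
n_w = n_in * n_hid + n_hid * n_out

def unpack(w):
    W1 = w[:n_in * n_hid].reshape(n_in, n_hid)
    W2 = w[n_in * n_hid:].reshape(n_hid, n_out)
    return W1, W2

def forward(w, X):
    W1, W2 = unpack(w)
    h = 1 / (1 + np.exp(-X @ W1))                   # hidden sigmoid activations
    return h, 1 / (1 + np.exp(-h @ W2))             # output sigmoid activations

def loss(w):
    _, out = forward(w, X)
    return np.mean((out - y) ** 2)

# Stage 1: genetic algorithm over flat weight vectors (elitism + Gaussian mutation).
pop = rng.normal(scale=0.5, size=(30, n_w))
for _ in range(50):
    fitness = np.array([loss(w) for w in pop])
    parents = pop[np.argsort(fitness)[:10]]                     # keep the 10 best individuals
    children = parents[rng.integers(0, 10, size=20)] + rng.normal(scale=0.1, size=(20, n_w))
    pop = np.vstack([parents, children])
best = pop[np.argmin([loss(w) for w in pop])]

# Stage 2: backpropagation starting from the GA solution.
w, lr = best.copy(), 0.5
for _ in range(200):
    W1, W2 = unpack(w)
    h, out = forward(w, X)
    d_out = (out - y) * out * (1 - out)
    dW2 = h.T @ d_out / len(X)
    d_h = d_out @ W2.T * h * (1 - h)
    dW1 = X.T @ d_h / len(X)
    w -= lr * np.concatenate([dW1.ravel(), dW2.ravel()])
print("final MSE:", loss(w))
```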

