Weighting Facial Features Extraction using Geometric Average

2001 ◽  
Vol 6 (1) ◽  
pp. 39-44
Author(s):  
Saparudin Saparudin

Human facial feature extraction is an important step in a face recognition system, and the quality of the extracted features is determined by their accuracy. Weighting the facial features provides a way to test the accuracy of the extraction methods used. This research develops a process that weights facial features automatically; the resulting weights agree with those judged by the human eye.
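The abstract does not include an implementation, but a minimal sketch of how a geometric average could combine per-feature scores into a single weight is shown below; the feature names and score values are hypothetical, not taken from the paper.

```python
import numpy as np

def geometric_average(scores):
    """Combine per-feature scores into one weight via the geometric mean."""
    scores = np.asarray(scores, dtype=float)
    return float(np.exp(np.mean(np.log(scores))))

# Hypothetical extraction-quality scores (0..1) for individual facial features.
feature_scores = {"left_eye": 0.92, "right_eye": 0.88, "nose": 0.75, "mouth": 0.81}
weight = geometric_average(list(feature_scores.values()))
print(f"geometric-average weight: {weight:.3f}")
```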



Author(s):  
CHING-WEN CHEN ◽  
CHUNG-LIN HUANG

This paper presents a face recognition system that can effectively identify an unknown person from front-view facial features. In front-view facial feature extraction, the contours of the eyes and mouth can be captured with a deformable template model because their shapes are analytically describable. The shapes of the eyebrows, nostrils, and face outline, however, are difficult to model with a deformable template, so we extract them using the active contour model (snake). After the contours of all facial features have been captured, we calculate effective feature values from the extracted contours and construct a database for classifying unknown identities. In the database generation phase, 12 models are photographed and a feature vector is calculated for each portrait. In the identification phase, if any of these 12 persons has their picture taken again, the system can recognize their identity.
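As a rough illustration of the snake step, the sketch below runs scikit-image's active contour on a placeholder image; the initial circle, parameter values, and test image are assumptions, not the paper's settings.

```python
import numpy as np
from skimage import data, filters
from skimage.segmentation import active_contour

# Placeholder image; the paper works on face photographs instead.
image = data.camera()
smoothed = filters.gaussian(image, sigma=3)

# Initial circular snake placed roughly around the target contour (e.g., a face outline).
s = np.linspace(0, 2 * np.pi, 200)
init = np.column_stack([220 + 100 * np.sin(s), 256 + 100 * np.cos(s)])  # (row, col) points

snake = active_contour(smoothed, init, alpha=0.015, beta=10, gamma=0.001)
print(snake.shape)  # (200, 2): contour points pulled toward nearby edges
```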


2020 ◽  
Vol 10 (1) ◽  
pp. 259-269
Author(s):  
Akansha Singh ◽  
Surbhi Dewan

Assistive technology has proven to be one of the most significant inventions to aid people with autism in improving the quality of their lives. In this study, a real-time emotion recognition system for autistic children has been developed. Emotion recognition is implemented in three stages: face identification, facial feature extraction, and feature classification. The objective is to build a system that covers all three stages of emotion recognition and executes expeditiously in real time; therefore, the Affectiva SDK is used in the application. The proposed system detects up to 7 facial emotions: anger, disgust, fear, joy, sadness, contempt, and surprise. The purpose of this study is to teach emotions to individuals with autism, as they lack the ability to respond appropriately to others' emotions. The proposed application was tested with a group of typical children aged 6–14 years, and positive outcomes were achieved.
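The paper builds on the Affectiva SDK; purely as an illustration of the same three-stage pipeline (and not the Affectiva API), the sketch below wires the stages together with OpenCV's bundled face detector and a hypothetical emotion_model classifier.

```python
import cv2
import numpy as np

EMOTIONS = ["anger", "disgust", "fear", "joy", "sadness", "contempt", "surprise"]

# Stage 1 uses OpenCV's bundled frontal-face detector as a stand-in.
detector = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def recognize_emotions(frame_bgr, emotion_model):
    """Run face identification, feature extraction, and classification on one frame."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)           # stage 1
    labels = []
    for (x, y, w, h) in faces:
        roi = cv2.resize(gray[y:y + h, x:x + w], (48, 48)).astype(np.float32) / 255.0  # stage 2
        probs = emotion_model(roi)                               # stage 3: hypothetical classifier
        labels.append(EMOTIONS[int(np.argmax(probs))])
    return labels
```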


2021 ◽  
Vol 39 (1B) ◽  
pp. 117-128
Author(s):  
Shaymaa M. Hamandi ◽  
Abdul Monem S. Rahma ◽  
Rehab F. Hassan

Robust facial feature extraction is an effective and important process for face recognition and identification systems. The facial features should be invariant to scaling, translation, illumination, and rotation, and several feature extraction techniques may be combined to increase recognition accuracy. This paper examines three moment-invariant techniques and determines how each is influenced by variations in the shape of the face, both globally (the shape of the whole face) and locally (the shapes of face parts: the right eye, left eye, mouth, and nose). The proposed technique is tested on images from the CARL database. The new method collects the robust features of each moment technique and trains a feed-forward neural network on them. The results improved and achieved an accuracy of 99.29%.
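Hu moments are one widely used family of moment invariants; as a sketch only (the paper's exact three techniques, part boxes, and network settings are not reproduced here), global and per-part Hu descriptors could be fed to a feed-forward network like this:

```python
import cv2
import numpy as np
from sklearn.neural_network import MLPClassifier

def hu_features(gray_patch):
    """Seven Hu moment invariants (log-scaled), invariant to translation, scale, and rotation."""
    hu = cv2.HuMoments(cv2.moments(gray_patch)).flatten()
    return -np.sign(hu) * np.log10(np.abs(hu) + 1e-30)

def face_descriptor(gray_face, part_boxes):
    """Concatenate global Hu moments with Hu moments of local parts (eyes, mouth, nose)."""
    parts = [hu_features(gray_face)]
    for (x, y, w, h) in part_boxes:          # hypothetical bounding boxes of the face parts
        parts.append(hu_features(gray_face[y:y + h, x:x + w]))
    return np.concatenate(parts)

# Feed-forward network trained on the concatenated descriptors (labels = subject identities).
clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=2000)
# clf.fit(X_train, y_train); clf.score(X_test, y_test)
```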


Symmetry ◽  
2018 ◽  
Vol 10 (10) ◽  
pp. 442 ◽  
Author(s):  
Dongxue Liang ◽  
Kyoungju Park ◽  
Przemyslaw Krompiec

With the advent of deep learning methods, portrait video stylization has become more popular. In this paper, we present a robust method for automatically stylizing portrait videos that contain small human faces. By extending Mask R-CNN (Mask Regions with Convolutional Neural Network features) with a CNN branch that detects the contour landmarks of the face, we divide the input frame into three regions: the region of facial features, the region of the inner face surrounded by 36 face contour landmarks, and the region of the outer face. While keeping the facial-feature region as it is, we use two different stroke models to render the other two regions. During the non-photorealistic rendering (NPR) of the animation video, we combine the deformable strokes with optical flow estimation between adjacent frames so that the strokes follow the underlying motion coherently. The experimental results demonstrate that our method not only effectively preserves the small and distinct facial features but also follows the underlying motion coherently.
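One possible way to move stroke anchors along the estimated motion between adjacent frames is sketched below, using OpenCV's Farneback dense flow as a stand-in for the paper's flow estimator; the function and variable names are hypothetical.

```python
import cv2
import numpy as np

def advect_strokes(prev_gray, next_gray, stroke_xy):
    """Shift stroke anchor points (x, y) along dense optical flow so strokes track the motion."""
    # Farneback parameters: pyr_scale, levels, winsize, iterations, poly_n, poly_sigma, flags.
    flow = cv2.calcOpticalFlowFarneback(prev_gray, next_gray, None, 0.5, 3, 15, 3, 5, 1.2, 0)
    h, w = prev_gray.shape
    xi = np.clip(np.round(stroke_xy[:, 0]).astype(int), 0, w - 1)
    yi = np.clip(np.round(stroke_xy[:, 1]).astype(int), 0, h - 1)
    return stroke_xy + flow[yi, xi]   # flow[y, x] holds the (dx, dy) displacement
```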


2014 ◽  
Vol 971-973 ◽  
pp. 1710-1713
Author(s):  
Wen Huan Wu ◽  
Ying Jun Zhao ◽  
Yong Fei Che

Face detection is the key step in an automatic face recognition system. This paper introduces a face detection algorithm based on a cascade of AdaBoost classifiers and explains how to configure OpenCV in MCVS. Face detection is then realized using OpenCV, and a detailed analysis of the detection results is presented. Through experiments, we found that the method used in this article has a high accuracy rate and good real-time performance.
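For reference, a minimal OpenCV cascade-detection sketch is shown below; the input file name is a placeholder and the detection parameters are common defaults rather than the paper's settings.

```python
import cv2

# Pre-trained cascade of boosted (AdaBoost) classifiers shipped with OpenCV.
cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

img = cv2.imread("input.jpg")                      # placeholder image path
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5, minSize=(30, 30))

for (x, y, w, h) in faces:
    cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)
cv2.imwrite("detected.jpg", img)
```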


Author(s):  
Hady Pranoto ◽  
Oktaria Kusumawardani

The number of times students attend lectures has been identified in many studies as one of the success factors in the learning process. We propose a framework for a student attendance system that uses face recognition as authentication. Triplet loss embedding in FaceNet is suitable for face recognition systems because the architecture has high accuracy, is quite lightweight, and is easy to implement in a real-time face recognition system. In our research, triplet loss embedding showed good performance in recognizing faces and can also be used for real-time face recognition in the authentication process of an attendance recording system that uses RFID. In our study, face recognition using the k-NN and SVM classification methods achieved accuracies of 96.2 +/- 0.1% and 95.2 +/- 0.1%, respectively. An attendance recording system that uses face recognition as the authentication process will increase student attendance in lectures. The system is difficult to fake because it validates the user or student holding an RFID card against their facial biometric marks. As a result, students will always be present in lectures, which in turn will improve the quality of the existing education process. The results could be improved in the future by using a higher-resolution camera. A facial expression recognition step can be added to strengthen the authentication process; for better results, users would be required to perform an expression instructed by the face recognition system, using a database and the YOLO process.
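The classification stage could look like the sketch below, which assumes FaceNet embeddings have already been computed and stored; the file names are placeholders and the hyperparameters are not taken from the paper.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

# Placeholder files: precomputed FaceNet triplet-loss embeddings and the matching student IDs.
X = np.load("embeddings.npy")   # shape (n_samples, embedding_dim), e.g. 128-d vectors
y = np.load("student_ids.npy")

knn = KNeighborsClassifier(n_neighbors=5)
svm = SVC(kernel="linear")

print("k-NN accuracy:", cross_val_score(knn, X, y, cv=5).mean())
print("SVM accuracy:", cross_val_score(svm, X, y, cv=5).mean())
```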


Nowadays, one of the critical factors that affects the recognition performance of any face recognition system is partial occlusion. This paper addresses face recognition in the presence of sunglasses and scarf occlusion. The proposed face recognition approach detects the face region that is not occluded and then uses this region for recognition. Adaptive Fuzzy C-Means clustering is used to segment the occluded and non-occluded parts, and the Minimum Cost Sub-Block Matching Distance (MCSBMD) is used for recognition. The input face image is divided into a number of sub-blocks, each block is checked for occlusion, and MWLBP features are extracted only from the non-occluded blocks and used for classification. Experimental results show that our method gives promising results compared with other conventional techniques.
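A rough sketch of the sub-block feature step is given below; plain uniform LBP from scikit-image stands in for the paper's MWLBP descriptor, and the occlusion mask is assumed to come from the clustering stage.

```python
import numpy as np
from skimage.feature import local_binary_pattern

def block_histograms(gray_face, occlusion_mask, grid=(4, 4), P=8, R=1):
    """Per-sub-block LBP histograms, skipping blocks flagged as occluded (LBP stands in for MWLBP)."""
    h, w = gray_face.shape
    bh, bw = h // grid[0], w // grid[1]
    feats = []
    for i in range(grid[0]):
        for j in range(grid[1]):
            rows, cols = slice(i * bh, (i + 1) * bh), slice(j * bw, (j + 1) * bw)
            if occlusion_mask[rows, cols].mean() > 0.5:
                feats.append(None)               # occluded block: excluded from sub-block matching
                continue
            lbp = local_binary_pattern(gray_face[rows, cols], P, R, method="uniform")
            hist, _ = np.histogram(lbp, bins=P + 2, range=(0, P + 2), density=True)
            feats.append(hist)
    return feats
```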


Author(s):  
Dr. C K Gomathy ◽  
T. Suneel ◽  
Y. Jeeevan Kumar Reddy

Face recognition and image or video recognition are popular research topics in biometric technology, and real-time face recognition is an exciting and rapidly evolving field. Principal component analysis (PCA) is a statistical technique closely related to correlation analysis. The goal of PCA is to reduce the large amount of image data to the dimensions of the feature space required by the face recognition system. The long one-dimensional pixel vector generated from the two-dimensional face image is projected onto the principal components of the feature space for face recognition using PCA; this is the eigenspace projection, where the eigenspace is determined by the eigenvectors of the covariance matrix of the face image collection. We build a camera-based real-time face recognition system and implement the algorithm using OpenCV, Haar cascades, Eigenfaces, Fisherfaces, LBPH, and Python.
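A compact eigenface sketch follows, using scikit-learn's PCA plus a nearest-neighbor classifier as a stand-in for the OpenCV recognizers named above; the data files and component count are placeholders.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline

# Placeholder data: flattened grayscale faces (one row per image) and their identity labels.
X = np.load("faces.npy")          # shape (n_samples, height * width)
y = np.load("identities.npy")

# Project onto the leading eigenvectors of the covariance matrix (eigenfaces), then classify.
eigenface_clf = make_pipeline(PCA(n_components=50, whiten=True),
                              KNeighborsClassifier(n_neighbors=1))
eigenface_clf.fit(X, y)
# eigenface_clf.predict(new_face.reshape(1, -1))
```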


2012 ◽  
Vol 241-244 ◽  
pp. 1705-1709
Author(s):  
Ching Tang Hsieh ◽  
Chia Shing Hu

In this paper, a robust and efficient face recognition system based on the luminance distribution, using maximum likelihood estimation, is proposed. The distribution of the luminance component of the face region is acquired and submitted to a maximum likelihood test for face matching. The experimental results showed that the proposed method has a high recognition rate and requires less computation time.
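As a sketch only, assuming a single Gaussian model of the luminance values (the paper's exact estimator may differ), matching by maximum likelihood could look like this:

```python
import cv2
import numpy as np

def luminance_model(face_bgr):
    """Fit a Gaussian (mean, std) to the luminance (Y) values of a face region."""
    y = cv2.cvtColor(face_bgr, cv2.COLOR_BGR2YCrCb)[:, :, 0].ravel().astype(float)
    return y.mean(), y.std() + 1e-6

def log_likelihood(face_bgr, model):
    """Average Gaussian log-likelihood of a probe face's luminance under an enrolled model."""
    mu, sigma = model
    y = cv2.cvtColor(face_bgr, cv2.COLOR_BGR2YCrCb)[:, :, 0].ravel().astype(float)
    return float(np.mean(-0.5 * ((y - mu) / sigma) ** 2 - np.log(sigma * np.sqrt(2 * np.pi))))

# Matching: the enrolled identity with the highest log-likelihood is returned.
# best_id = max(gallery, key=lambda name: log_likelihood(probe_face, gallery[name]))
```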

