AN EXPERIMENTAL STUDY OF FACE RECOGNITION METHOD

2020 ◽  
Vol 1 (2) ◽  
pp. 52-58
Author(s):  
Paula Pereira ◽  
Tanara Kuhn

The increased use of face recognition techniques drives the development of improved methods with higher accuracy and efficiency. Currently, there are various face recognition techniques based on different algorithms. In this study, a new face recognition method is proposed based on wavelet operators that build a spectral graph wavelet transform (SGWT). The proposed idea relies on the spectral graph wavelet kernel procedure. In this method, features are extracted by applying the SGWT to the face image in the spatial domain, and the face image is decomposed using the SGWT. For recognition, the feature vectors of the test image are compared with those computed from selected training samples, and the system identifies the test image by calculating the Euclidean distance to these training feature vectors. Finally, the study conducted an experiment using the ORL face database. The results indicate that the recognition accuracy of the proposed system is higher and can be further improved by increasing the number of training images. Overall, the results show that the proposed method performs well in terms of face recognition accuracy.
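
As a rough illustration of the classification step described above, the following Python sketch assigns a test face the identity of the nearest training feature vector in Euclidean distance; the feature vectors here are random placeholders standing in for SGWT coefficients, and the dimensions are hypothetical.

```python
import numpy as np

def nearest_neighbor_classify(test_vec, train_vecs, train_labels):
    """Assign the label of the training feature vector that is closest
    to the test feature vector in Euclidean distance."""
    dists = np.linalg.norm(train_vecs - test_vec, axis=1)
    return train_labels[int(np.argmin(dists))]

# Hypothetical usage: rows of train_vecs stand in for SGWT feature vectors.
train_vecs = np.random.rand(40, 128)     # 40 training faces, 128-dim features
train_labels = np.arange(40) % 10        # 10 subjects, 4 samples each
test_vec = np.random.rand(128)
print(nearest_neighbor_classify(test_vec, train_vecs, train_labels))
```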

2019 ◽  
Author(s):  
Ziaul Haque Choudhury

A secure biometric passport in the field of personal identification for national security is proposed in this paper. This paper discusses how to secure the biometric passport by applying face recognition. Biometric features are unique to each individual and invariant over time, making them an unambiguous identifier of a person. However, authentication may fail if an applicant's appearance changes, for example due to a mustache, a haircut, or glasses, and similar individuals such as twins, siblings, look-alikes, or even doubles can lead to identity mismatch. Our proposed face recognition method authenticates a person based on facial marks present in the face image. For facial boundary detection, we applied the ASM (Active Shape Model) within an AAM (Active Appearance Model) using PCA (Principal Component Analysis). Facial marks are detected by applying the Canny edge detector and HOG (Histogram of Oriented Gradients). Experimental results reveal that our proposed method achieves 94 per cent face recognition accuracy on the Indian face database from IIT Kanpur.
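
The mark-detection step (Canny edges followed by HOG descriptors) could be sketched roughly as below; the image is a synthetic placeholder rather than a sample from the IIT Kanpur database, and the patch location and parameter values are assumptions.

```python
import numpy as np
import cv2
from skimage.feature import hog

# Synthetic stand-in for a grayscale face image (the paper uses the IIT Kanpur database).
face = (np.random.rand(128, 128) * 255).astype(np.uint8)

# Canny edge map highlights candidate facial-mark regions (moles, scars, etc.).
edges = cv2.Canny(face, threshold1=50, threshold2=150)

# HOG descriptor of a patch around a hypothetical candidate mark location,
# used as the mark's appearance feature.
patch = face[48:80, 48:80]
descriptor = hog(patch, orientations=9, pixels_per_cell=(8, 8), cells_per_block=(2, 2))
print(edges.shape, descriptor.shape)
```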


Author(s):  
Shuai Liu ◽  
Yuanning Liu ◽  
Xiaodong Zhu ◽  
Jing Liu ◽  
Guang Huo ◽  
...  

In this paper, a two-stage multi-category recognition structure based on texture features is proposed. This method addresses the decline in recognition accuracy that occurs when only lightweight training samples are available. It also addresses the differing recognition performance, within the same recognition structure, caused by unsteady iris states. In this structure, digitized values of the edge shapes in the iris texture are taken as the texture trend feature, while the differences between gray values obtained by convolution are taken as the grayscale difference feature. The texture trend feature is used in the first-stage recognition: template categories that do not match the tested iris are eliminated, and the remaining categories are treated as uncertain categories. In the second-stage recognition, the grayscale difference feature is used to decide among the uncertain categories and reach the final recognition conclusion. Experimental results on the JLU iris library show that the proposed method is highly efficient for multi-category heterogeneous iris recognition with lightweight training samples and unsteady iris states.
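
A minimal sketch of the two-stage structure, assuming Euclidean matching against per-category templates and hypothetical thresholds (the paper's actual feature definitions and decision rules are more involved):

```python
import numpy as np

def two_stage_match(trend_feat, gray_feat, templates, trend_thresh, gray_thresh):
    """Two-stage matching sketch. Stage one eliminates template categories whose
    texture-trend distance exceeds a threshold; the survivors are the 'uncertain'
    categories. Stage two decides among them using the grayscale-difference feature."""
    uncertain = [c for c, (trend_t, gray_t) in templates.items()
                 if np.linalg.norm(trend_feat - trend_t) <= trend_thresh]
    if not uncertain:
        return None                              # every category was eliminated
    best = min(uncertain,
               key=lambda c: np.linalg.norm(gray_feat - templates[c][1]))
    if np.linalg.norm(gray_feat - templates[best][1]) <= gray_thresh:
        return best
    return None
```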


2015 ◽  
Vol 2015 ◽  
pp. 1-7
Author(s):  
Rong Wang

In real-world applications, face images vary with illumination, facial expression, and pose. More training samples can therefore reveal more of the possible appearances of a face. Although minimum squared error classification (MSEC) is a widely used method, its application to face recognition usually suffers from the limited number of training samples. In this paper, we improve MSEC by using mirror faces as virtual training samples. We generate mirror faces from the original training samples and combine both kinds of samples into a new training set. The face recognition experiments show that our method achieves high classification accuracy.
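
The virtual-sample generation step can be illustrated with a short sketch: each training face is horizontally flipped to obtain its mirror face, and both sets are merged before training the MSEC classifier (the classifier itself is omitted here).

```python
import numpy as np

def augment_with_mirror_faces(train_images, train_labels):
    """Generate mirror (horizontally flipped) faces from the original training
    samples and merge both kinds of samples into one enlarged training set."""
    mirrored = [np.fliplr(img) for img in train_images]
    return list(train_images) + mirrored, list(train_labels) + list(train_labels)
```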


2019 ◽  
Vol 8 (1) ◽  
pp. 239-245 ◽  
Author(s):  
Shamsul J. Elias ◽  
Shahirah Mohamed Hatim ◽  
Nur Anisah Hassan ◽  
Lily Marlia Abd Latif ◽  
R. Badlishah Ahmad ◽  
...  

Attendance is important for university students. However, the conventional way of taking attendance in universities involves various problems, and a face recognition system for attendance taking is one way to address them. This paper presents an automated system that saves students' attendance in a database using a face recognition method. The paper elaborates on the student attendance system, image processing, face detection, and face recognition. Face detection is performed using the Viola-Jones algorithm, while face recognition is carried out using the local binary pattern (LBP) method. The system makes the attendance-taking process faster and more accurate.
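
A rough sketch of this detection-plus-recognition pipeline using OpenCV's stock Haar cascade (Viola-Jones) and LBPH recognizer is shown below; the function names and acceptance threshold are hypothetical, and the LBPH module requires the opencv-contrib-python package.

```python
import numpy as np
import cv2

# Viola-Jones face detector shipped with OpenCV (Haar cascade).
detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

# LBPH face recognizer (needs the opencv-contrib-python package).
recognizer = cv2.face.LBPHFaceRecognizer_create()

def enroll(cropped_gray_faces, student_ids):
    """Train the LBP model on cropped grayscale face images."""
    recognizer.train(cropped_gray_faces, np.array(student_ids))

def mark_attendance(frame_gray):
    """Detect faces in a classroom frame and return the recognised student ids."""
    present = []
    for (x, y, w, h) in detector.detectMultiScale(frame_gray, scaleFactor=1.1,
                                                  minNeighbors=5):
        student_id, confidence = recognizer.predict(frame_gray[y:y + h, x:x + w])
        if confidence < 80:                      # hypothetical acceptance threshold
            present.append(student_id)
    return present
```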


Author(s):  
Sangamesh Hosgurmath ◽  
Viswanatha Vanjre Mallappa ◽  
Nagaraj B. Patil ◽  
Vishwanath Petli

Face recognition is one of the important biometric authentication research areas for security purposes in fields such as pattern recognition and image processing. However, face recognition remains a major challenge for machine learning and deep learning techniques, since input images vary with pose, lighting and illumination conditions, expression, and age, which degrades recognition accuracy. In the present research, the resolution of image patches is reduced by the max pooling layer of a convolutional neural network (CNN), which also makes the model more robust than the traditional feature extraction technique called local multiple pattern (LMP). The extracted features are fed into linear collaborative discriminant regression classification (LCDRC) for the final face recognition. Owing to the CNN-based optimization in LCDRC, the between-class distance ratio is maximized and the within-class feature distance is reduced. The results show that CNN-LCDRC achieves 93.10% and 87.60% mean recognition accuracy on the ORL and YALE databases respectively, whereas traditional LCDRC achieves 83.35% and 77.70%, for training number 8 (i.e. 80% training and 20% testing data).
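
The pooling stage described above can be illustrated with a small numpy sketch that reduces an image patch's resolution by 2 x 2 max pooling; the LCDRC classifier itself is not shown, and the patch values are arbitrary.

```python
import numpy as np

def max_pool2d(patch, k=2):
    """Reduce an image patch's resolution with k x k max pooling, as the CNN
    pooling stage described above does before the features reach LCDRC."""
    h, w = patch.shape
    h, w = h - h % k, w - w % k                  # crop so both sides divide by k
    return patch[:h, :w].reshape(h // k, k, w // k, k).max(axis=(1, 3))

patch = np.arange(36, dtype=float).reshape(6, 6)
print(max_pool2d(patch))                          # 3 x 3 pooled patch
```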


Author(s):  
Tang-Tang Yi

In order to solve the problem of low recognition accuracy when recognizing 3D face images collected by traditional sensors, a face recognition algorithm for 3D point clouds collected by mixed image sensors is proposed. The algorithm first uses the 3D wheelbase to expand the edges of the face image. Based on the 3D wheelbase, noise in the extended image is detected and removed with median filtering. Secondly, the priority of the boundary pixels for recognizing the face in the denoised image is determined, and key parts such as the illuminance line are analyzed, completing the recognition of the 3D point cloud face image. Experiments show that the proposed algorithm improves the recognition accuracy of 3D face images, that its recognition time is about four times lower than that of the traditional algorithm, and that its recognition efficiency is high.
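
The denoising step (median filtering of the detected noise) might look roughly like the following, using a synthetic image as a stand-in for the range image derived from the 3D point cloud; the kernel size is an assumption.

```python
import numpy as np
import cv2

# Synthetic stand-in for a noisy range image derived from the 3D point cloud.
noisy = (np.random.rand(64, 64) * 255).astype(np.uint8)

# Median filtering suppresses the impulse-like noise detected along the extended edges.
denoised = cv2.medianBlur(noisy, 3)              # 3 x 3 median filter
print(denoised.shape)
```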


2011 ◽  
Vol 20 (07) ◽  
pp. 1419-1439 ◽  
Author(s):  
STEVEN GILLAN ◽  
PANAJOTIS AGATHOKLIS

This paper presents a technique for face recognition based on image registration. The technique consists of three parts: a training part, an image registration part, and a post-processing part. The image registration technique is based on finding a set of feature points in the two images and using these points for registration. This is done in four steps. In the first, the images are filtered with the Mexican-hat wavelet to obtain the feature point locations. In the second, the Zernike moments of neighborhoods around the feature points are calculated; these are compared in the third step to establish correspondence between feature points in the two images. In the fourth, the transformation parameters between the images are obtained using an iterative least squares technique that eliminates outliers [1, 2]. During training, a set of images is chosen as the training images, and the Zernike moments for the feature points of the training images are computed and stored. The choice of training images depends on the changes in pose and illumination that are expected. In the registration part, the transformation parameters that register the training images with the images under consideration are obtained. In the post-processing part, these transformation parameters are used to determine whether a valid match has been found. The performance of the proposed method is evaluated on various face databases [3-5] and compared with that of existing techniques. Results indicate that the proposed technique gives excellent face recognition results under varying pose, illumination, background, and scale.
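
The first step, Mexican-hat filtering for feature point detection, could be sketched as below; here the Mexican-hat response is approximated by a negative Laplacian of Gaussian, and the scale, neighbourhood size, and number of points kept are assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_laplace, maximum_filter

def mexican_hat_feature_points(image, sigma=2.0, num_points=50):
    """Filter the image with a Mexican-hat-like kernel (negative Laplacian of
    Gaussian) and keep the strongest local maxima as feature point locations."""
    response = -gaussian_laplace(image.astype(float), sigma=sigma)
    # Keep pixels that are the maximum of their 5 x 5 neighbourhood.
    local_max = response == maximum_filter(response, size=5)
    ys, xs = np.nonzero(local_max)
    strongest = np.argsort(response[ys, xs])[::-1][:num_points]
    return list(zip(ys[strongest], xs[strongest]))

points = mexican_hat_feature_points(np.random.rand(128, 128))
print(len(points))
```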


2018 ◽  
Vol 119 (9/10) ◽  
pp. 529-544 ◽  
Author(s):  
Ihab Zaqout ◽  
Mones Al-Hanjori

Purpose
The face recognition problem has a long history and significant practical importance. It is one of the practical applications of pattern recognition theory: automatically localizing the face in an image and, if necessary, identifying the person. Interest in the procedures underlying localization and individual recognition is considerable, given the variety of practical applications in areas such as security systems, verification, forensic expertise, teleconferencing, and computer games. This paper aims to recognize facial images efficiently. An averaged-feature based technique is proposed to reduce the dimensions of the multi-expression facial features. The classifier model is generated using a supervised learning algorithm, a back-propagation neural network (BPNN), implemented in MATLAB R2017. The recognition rate and accuracy of the proposed methodology are comparable with other methods, such as principal component analysis and linear discriminant analysis, on the same data sets. In total, 150 face subjects are selected from the Olivetti Research Laboratory (ORL) data set, giving a 95.6 per cent recognition rate and 85 per cent accuracy, and 165 face subjects from the Yale data set, giving a 95.5 per cent recognition rate and 84.4 per cent accuracy.

Design/methodology/approach
An averaged-feature based approach (dimension reduction) and a BPNN (to generate the supervised classifier).

Findings
The recognition rate is 95.6 per cent and the recognition accuracy is 85 per cent for the ORL data set, whereas the recognition rate is 95.5 per cent and the recognition accuracy is 84.4 per cent for the Yale data set.

Originality/value
The averaged-feature based method.
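
A loose sketch of the averaged-feature reduction followed by a back-propagation network, using scikit-learn's MLPClassifier as a stand-in for the BPNN and random data in place of the ORL/Yale faces; the image size, subject count, and network width are assumptions.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

def averaged_feature(expression_images):
    """Average a subject's multi-expression face images into a single feature
    vector, one simple reading of the averaged-feature reduction."""
    return np.mean([img.ravel() for img in expression_images], axis=0)

# Hypothetical data: 10 subjects, 5 expressions each, 32 x 32 pixel faces.
X = np.array([averaged_feature(np.random.rand(5, 32, 32)) for _ in range(10)])
y = np.arange(10)

# MLPClassifier (a back-propagation-trained network) as a stand-in for the BPNN.
bpnn = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500).fit(X, y)
print(bpnn.predict(X[:3]))
```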

