Lighting-Effects Classification in Facial Images Using Wavelet Packets Transform

Author(s):  
A. Z. Kouzani ◽  
S. H. Ong

Faces often produce inconsistent features under different lighting conditions. Classifying the lighting effects within a face image is therefore the first crucial step in building a lighting-invariant face recognition system. This paper presents a hybrid system that classifies face images according to the lighting effects present in each image. The theories of multivariate discriminant analysis and the wavelet packets transform are utilised to develop the proposed system. An extensive set of face images of different poses, illuminated from different angles, is used to train the system. The performance of the proposed system is evaluated by conducting experiments on different test sets and by comparing its results against those of existing counterparts.
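A minimal sketch of this kind of pipeline, assuming PyWavelets and scikit-learn are available: the subband-energy features and the two-level decomposition depth are illustrative choices, not necessarily the paper's exact construction.

```python
# Sketch: wavelet-packet subband energies + linear discriminant analysis
# for classifying the lighting condition of a face image.
import numpy as np
import pywt
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def wavelet_packet_features(face_gray, wavelet="db2", level=2):
    """Decompose a grayscale face and return the energy of each subband."""
    wp = pywt.WaveletPacket2D(data=face_gray, wavelet=wavelet, maxlevel=level)
    nodes = wp.get_level(level, order="natural")
    return np.array([np.sum(np.square(n.data)) for n in nodes])

# faces: list of 2-D grayscale arrays
# lighting_labels: e.g. 0 = frontal, 1 = left, 2 = right (illustrative labels)
def train_lighting_classifier(faces, lighting_labels):
    X = np.stack([wavelet_packet_features(f) for f in faces])
    clf = LinearDiscriminantAnalysis()   # multivariate discriminant analysis
    clf.fit(X, lighting_labels)
    return clf
```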

Author(s):  
Prasad A. Jagdale ◽  
Sudeep D. Thepade

Nowadays, systems that hold private and confidential data are protected using biometric authentication such as fingerprint recognition, voice recognition, iris recognition and face recognition. Face recognition matches the current user's face against the faces stored in the security system's database, and it has one major drawback: it does not work well without liveness detection. Such face recognition systems can be spoofed using various traits. Spoofing means gaining access to a system, software or data by deceiving the biometric recognition security system. These biometric systems can easily be attacked with spoofs such as people's face images, masks and videos, which are readily available from social media. The proposed work focuses on detecting spoofing attacks by training the system, so that spoofing methods such as photos, masks or video images can be identified. This paper proposes a fusion technique in which different features of an image are combined to achieve the best accuracy in distinguishing spoofed from live faces. A comparative study of machine learning classifiers is also carried out to determine which classifier gives the best accuracy.
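A minimal sketch of the fusion-and-compare idea, assuming scikit-image and scikit-learn. The fused features (an LBP histogram plus colour moments) and the classifier list are illustrative assumptions; the paper's exact feature set may differ.

```python
# Sketch: feature-level fusion followed by a comparison of ML classifiers
# for live-vs-spoof face classification.
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier

def fused_features(face_rgb):
    gray = face_rgb.mean(axis=2)
    lbp = local_binary_pattern(gray, P=8, R=1, method="uniform")
    lbp_hist, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)
    colour_moments = np.concatenate([face_rgb.mean(axis=(0, 1)),
                                     face_rgb.std(axis=(0, 1))])
    return np.concatenate([lbp_hist, colour_moments])  # feature-level fusion

def compare_classifiers(X, y):
    """y: 1 = live face, 0 = spoof (photo / mask / replayed video)."""
    for name, clf in [("SVM", SVC()),
                      ("RandomForest", RandomForestClassifier()),
                      ("k-NN", KNeighborsClassifier())]:
        acc = cross_val_score(clf, X, y, cv=5).mean()
        print(f"{name}: {acc:.3f}")
```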


Author(s):  
Widodo Budiharto

Variation in illumination is one of the main challenges for face recognition. It has been shown that, in face recognition, differences caused by illumination variations are more significant than differences between individuals. Recognizing faces reliably across changes in pose and illumination using PCA has proved to be a much harder problem, because the eigenfaces method compares pixel intensities. To address this problem, this research proposes an online face recognition system using improved PCA for a service robot in an indoor environment based on stereo vision. Test images are augmented by generating random values that vary the intensity of the face images. A program for online training is also developed, in which the test images are captured in real time from the camera. Varying the illumination of the test images increases accuracy: 95.5% on the ITS face database, compared with 95.4% on the AT&T face database and 72% on the Indian face database. These results will be further evaluated and improved in future work.
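A minimal sketch of the intensity-augmentation idea combined with PCA (eigenfaces), assuming NumPy and scikit-learn. The augmentation range, number of components, and the 1-NN matcher are illustrative assumptions, not the paper's exact settings.

```python
# Sketch: augment training faces with random intensity variations,
# then train a PCA (eigenfaces) + nearest-neighbour recogniser.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline

def augment_intensity(face_gray, n_copies=5, rng=np.random.default_rng(0)):
    """Generate copies of a face with randomly varied brightness."""
    copies = []
    for _ in range(n_copies):
        gain = rng.uniform(0.6, 1.4)   # random illumination scaling (assumed range)
        copies.append(np.clip(face_gray * gain, 0, 255))
    return copies

def train_recognizer(faces, ids):
    X, y = [], []
    for face, person_id in zip(faces, ids):
        for aug in [face] + augment_intensity(face):
            X.append(aug.ravel())
            y.append(person_id)
    model = make_pipeline(PCA(n_components=50), KNeighborsClassifier(n_neighbors=1))
    model.fit(np.array(X), y)
    return model
```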


2019 ◽  
Vol 1 (1) ◽  
pp. 32-40
Author(s):  
Muhammad Noor Fatkhannudin ◽  
Adhi Prahara

Computer vision technology has been widely used in many applications and devices that involve biometric recognition. One of them is gender classification, which poses notable challenges when dealing with the unique facial characteristics of different human races, not to mention the challenges arising from varied face poses and lighting conditions. To perform gender classification, we resize and convert the face image to grayscale, then extract its features using Fisherface. The features are reduced to 100 components using Principal Component Analysis (PCA) and classified into male and female categories using a linear Support Vector Machine (SVM). A test conducted on 1014 face images from various human races resulted in 86% accuracy using a standard k-NN classifier, while our proposed method shows a better result with 88% accuracy.
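A minimal sketch of this gender-classification pipeline, assuming OpenCV and scikit-learn. A common Fisherface construction applies PCA first and then a Fisher (LDA) projection; the paper's exact ordering and parameters may differ, so treat this as an illustrative approximation.

```python
# Sketch: grayscale + resize, PCA to 100 components, Fisher projection,
# and a linear SVM for male/female classification.
import cv2
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline

def preprocess(face_bgr, size=(64, 64)):
    gray = cv2.cvtColor(face_bgr, cv2.COLOR_BGR2GRAY)
    return cv2.resize(gray, size).ravel().astype(np.float32)

# faces: list of BGR face crops; genders: 0 = female, 1 = male (assumed coding)
def train_gender_classifier(faces, genders):
    X = np.stack([preprocess(f) for f in faces])
    model = make_pipeline(
        PCA(n_components=100),            # dimensionality reduction to 100 components
        LinearDiscriminantAnalysis(),     # Fisherface-style projection
        LinearSVC(),                      # linear SVM
    )
    model.fit(X, genders)
    return model
```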


Author(s):  
Piyush Manish Sonar ◽  
Aniket Nitin Chaudhari ◽  
Mehul Deepak Sethi ◽  
Tejaswini Sanjay Gadakh

The face is a representation of one's identity. Hence, we have proposed an automated student attendance system based on face recognition. A face recognition system is very useful in real-life applications, especially for attendance systems. In our proposed approach, video framing is first performed by activating the camera through a user-friendly interface. In the pre-processing stage, the images are scaled, if necessary, in order to prevent loss of information. In the face recognition stage, an enhanced local binary pattern (LBP) and principal component analysis (PCA) are applied in order to extract features from the facial images. Another way of marking attendance is fingerprint recognition: to mark attendance, students simply place their finger on the fingerprint scanner module. Finally, the attendance of the recognized student is marked and saved in an Excel file. A student who is not registered can register on the spot, and a notification is given if a student signs in more than once. When a seminar is completed, a link is sent by email containing a feedback form; when students fill in the feedback form, an analysis of the overall session is performed.
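A minimal sketch of the capture, recognize, and mark-attendance flow, assuming opencv-contrib-python (for an LBPH recogniser) and openpyxl (for the Excel sheet). The file names, the Haar-cascade detector, and the confidence threshold are illustrative assumptions.

```python
# Sketch: grab a frame, detect faces, match with a trained LBP-based
# recogniser, and append recognised students to an Excel attendance sheet.
import cv2
from datetime import datetime
from openpyxl import Workbook

def mark_attendance(trained_lbph_model, id_to_name, sheet_path="attendance.xlsx"):
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    cap = cv2.VideoCapture(0)              # video framing from the camera
    ok, frame = cap.read()
    cap.release()
    wb = Workbook()
    ws = wb.active
    ws.append(["Name", "Time"])
    if ok:
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        for (x, y, w, h) in cascade.detectMultiScale(gray, 1.3, 5):
            face = cv2.resize(gray[y:y + h, x:x + w], (100, 100))   # scaling step
            label, confidence = trained_lbph_model.predict(face)    # LBP-based match
            if confidence < 80:             # illustrative acceptance threshold
                ws.append([id_to_name[label], datetime.now().isoformat()])
    wb.save(sheet_path)
```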


Author(s):  
Sangamesh Hosgurmath ◽  
Viswanatha Vanjre Mallappa ◽  
Nagaraj B. Patil ◽  
Vishwanath Petli

Face recognition is one of the important biometric authentication research areas for security purposes in fields such as pattern recognition and image processing. However, human face recognition remains a major problem for machine learning and deep learning techniques, since input images vary with people's poses, lighting and illumination conditions, expressions and ages, which makes the face recognition process poor in accuracy. In the present research, the resolution of the image patches is reduced by the max-pooling layer of a convolutional neural network (CNN), which also makes the model more robust than the traditional feature extraction technique called local multiple pattern (LMP). The extracted features are fed into linear collaborative discriminant regression classification (LCDRC) for the final face recognition. Due to the CNN-based optimization in LCDRC, the between-class distance ratio is maximized while the within-class distance of the features is reduced. The results show that CNN-LCDRC achieves 93.10% and 87.60% mean recognition accuracy, whereas the traditional LCDRC achieves 83.35% and 77.70% mean recognition accuracy, on the ORL and YALE databases respectively for training number 8 (i.e. 80% training and 20% testing data).
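A minimal sketch of the CNN feature-extraction stage only, assuming TensorFlow/Keras. The layer sizes and depths are illustrative assumptions, and the LCDRC classifier itself is not shown; it would consume the pooled feature vectors produced here.

```python
# Sketch: a small CNN whose max-pooling layers reduce patch resolution;
# the flattened output is the feature vector passed on to LCDRC.
import tensorflow as tf

def build_feature_extractor(input_shape=(64, 64, 1)):
    return tf.keras.Sequential([
        tf.keras.layers.Conv2D(32, 3, activation="relu", input_shape=input_shape),
        tf.keras.layers.MaxPooling2D(2),   # reduces patch resolution
        tf.keras.layers.Conv2D(64, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(2),   # further down-sampling for robustness
        tf.keras.layers.Flatten(),         # feature vector fed to LCDRC
    ])

# features = build_feature_extractor()(face_batch)  # then classify with LCDRC
```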


Author(s):  
Mr. Mubashshir ◽  
Kshitij Shinghal ◽  
Manas Singhal

Face recognition generally supports many operations in our daily life, such as identification and verification of people for security purposes. The basic aim of this project is to design an effective and secure authentication technique using face recognition that can search for or recognize a human face among thousands of people, improve the performance of face recognition in low-light conditions, and evaluate the designed framework by comparing its performance against existing face recognition systems. This study also provides an automatic system which, given a still image or video of a scene, identifies one or more persons in the scene using a stored database of facial images.
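The abstract does not name a specific low-light technique; purely as an illustrative assumption, one common pre-processing step is contrast-limited adaptive histogram equalization (CLAHE) applied before matching, assuming OpenCV is available.

```python
# Sketch: CLAHE-based enhancement of a low-light face image
# before it is passed to the recogniser (an assumed pre-processing choice).
import cv2

def enhance_low_light(face_bgr):
    gray = cv2.cvtColor(face_bgr, cv2.COLOR_BGR2GRAY)
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    return clahe.apply(gray)   # enhanced image passed to the recogniser
```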


2019 ◽  
Vol 8 (3) ◽  
pp. 33
Author(s):  
Herman Kh. Omar ◽  
Nada E. Tawfiq

In recent times, biometrics has become a wide field within image processing. Face recognition is basically the task of recognizing a person based on their facial image. It has become very popular in the last two decades, mainly because of newly developed methods and the high quality of current visual instruments. There are different types of face recognition algorithms, and each method has a different approach to extracting image features and performing the matching with the input image. In this paper, Local Binary Patterns (LBP) are used; LBP is a particular case of the Texture Spectrum model and a powerful feature for texture classification. The face recognition system recognizes faces acquired from a given database via two phases. The most useful and unique features of the face image are extracted in the feature extraction phase; in the classification phase, the face image is compared with the images from the database. The proposed algorithm adopts LBP features that encode local texture information with default values: histogram equalization is applied, the image is resized to 80x60 and divided into five blocks, and every block's LBP feature is saved as a vector table. Matlab R2019a was used to build the face recognition system. The results obtained are accurate, at 98.8% overall (500 face images).
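The paper implements this in MATLAB R2019a; purely as an illustrative Python equivalent (assuming OpenCV and scikit-image), the same steps could look like the sketch below. How the 80x60 image is split into five blocks is an assumption (five horizontal strips here).

```python
# Sketch: histogram equalization, resize to 80x60, split into five blocks,
# per-block LBP histograms concatenated into one feature vector.
import cv2
import numpy as np
from skimage.feature import local_binary_pattern

def lbp_face_vector(face_gray):
    eq = cv2.equalizeHist(face_gray)          # histogram equalization (uint8 input)
    img = cv2.resize(eq, (60, 80))            # 80 rows x 60 columns
    blocks = np.array_split(img, 5, axis=0)   # five blocks (assumed: horizontal strips)
    hists = []
    for b in blocks:
        lbp = local_binary_pattern(b, P=8, R=1, method="default")  # default LBP
        h, _ = np.histogram(lbp, bins=256, range=(0, 256), density=True)
        hists.append(h)
    return np.concatenate(hists)              # stored as the face's feature vector
```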


2021 ◽  
Vol 336 ◽  
pp. 06008
Author(s):  
Wen Xiao

Forensic face recognition (FFR) has been studied in forensic science in recent years. Given an automatic face recognition system, its output scores describe the similarity of face image pairs, but they are not directly suitable for forensic evaluation. In this study, a score-mapping model based on kernel density estimation (KDE) and evidence theory is proposed. First, KDE is used to generate a probability density function (PDF) for each dimension of the feature vectors of face image pairs. The PDFs are then used to determine separately the basic probability assignment (BPA) supporting the prosecution hypothesis and the defence hypothesis. Finally, the BPAs of all features are combined by Dempster's rule to obtain the final BPA, which reflects the strength of evidential support. The experimental results demonstrate that, compared with the classic KDE-based likelihood ratio method, the proposed method has better performance in terms of accuracy, sensitivity and specificity.
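A minimal sketch of the KDE-to-BPA-to-Dempster idea, assuming SciPy. The way each feature's densities are turned into a BPA (simple normalisation with a fixed uncertainty mass) is an illustrative assumption, not the paper's exact construction.

```python
# Sketch: per-feature KDE densities mapped to basic probability assignments
# over {Hp (prosecution), Hd (defence)}, combined with Dempster's rule.
import numpy as np
from scipy.stats import gaussian_kde

# kde_prosecution / kde_defence: gaussian_kde objects fitted on training scores
# from same-person and different-person pairs, respectively.
def feature_bpa(score, kde_prosecution, kde_defence, uncertainty=0.1):
    """Turn one feature's similarity score into masses for Hp, Hd and Theta."""
    p_hp = kde_prosecution(score)[0]   # density under the prosecution hypothesis
    p_hd = kde_defence(score)[0]       # density under the defence hypothesis
    total = p_hp + p_hd
    return {"Hp": (1 - uncertainty) * p_hp / total,
            "Hd": (1 - uncertainty) * p_hd / total,
            "Theta": uncertainty}

def dempster_combine(m1, m2):
    """Dempster's rule of combination on the frame {Hp, Hd}."""
    conflict = m1["Hp"] * m2["Hd"] + m1["Hd"] * m2["Hp"]
    norm = 1.0 - conflict
    return {
        "Hp": (m1["Hp"] * m2["Hp"] + m1["Hp"] * m2["Theta"]
               + m1["Theta"] * m2["Hp"]) / norm,
        "Hd": (m1["Hd"] * m2["Hd"] + m1["Hd"] * m2["Theta"]
               + m1["Theta"] * m2["Hd"]) / norm,
        "Theta": m1["Theta"] * m2["Theta"] / norm,
    }
```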

