Multimodal Biometric Person Authentication Using Face, Ear and Periocular Region Based on Convolution Neural Networks

Author(s):  
M. S. Lohith ◽  
Yoga Suhas Kuruba Manjunath ◽  
M. N. Eshwarappa

Biometrics is an active area of research because of the increasing need for accurate person identification in applications ranging from entertainment to security. Unimodal and multimodal approaches are the two well-known biometric methods. A unimodal biometric system uses a single biometric modality for person identification, and its performance is degraded by limitations such as intra-class variations and non-universality. Multimodal biometrics identifies a person using more than one biometric modality; this approach has gained interest because of its resistance to spoof attacks and its higher recognition rate. Conventional feature extraction methods have difficulty engineering features that are robust to variations in illumination, pose and age. Feature extraction using a convolutional neural network (CNN) can overcome these difficulties because a large dataset with such variations can be used for training, allowing the CNN to learn them. In this paper, we propose multimodal biometrics with feature-level horizontal fusion of the face, ear and periocular region modalities, apply a deep CNN for feature representation, and also propose a face, ear and periocular region dataset that captures intra-class variations. The system is evaluated on the proposed database. Accuracy, precision, recall and F1 score are calculated to evaluate its performance and show a remarkable improvement over existing biometric systems.
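As a rough illustration of the feature-level (horizontal) fusion described above, the following sketch concatenates CNN embeddings of the three modalities before classification. It assumes PyTorch, three aligned 112x112 crops per subject, and small illustrative branch networks; it is not the authors' architecture.

```python
# Minimal sketch (not the authors' implementation): feature-level "horizontal"
# fusion of face, ear and periocular CNN embeddings, assuming PyTorch and three
# aligned 3x112x112 crops per subject. Branch depth and feature sizes are
# illustrative assumptions.
import torch
import torch.nn as nn


class Branch(nn.Module):
    """Small CNN mapping one modality to a fixed-length feature vector."""

    def __init__(self, feat_dim=128):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(64, feat_dim)

    def forward(self, x):
        return self.fc(self.features(x).flatten(1))


class MultimodalNet(nn.Module):
    """Concatenates per-modality features (feature-level fusion) before classification."""

    def __init__(self, num_subjects, feat_dim=128):
        super().__init__()
        self.face = Branch(feat_dim)
        self.ear = Branch(feat_dim)
        self.peri = Branch(feat_dim)
        self.classifier = nn.Linear(3 * feat_dim, num_subjects)

    def forward(self, face, ear, peri):
        fused = torch.cat([self.face(face), self.ear(ear), self.peri(peri)], dim=1)
        return self.classifier(fused)


# Forward pass with random tensors standing in for the three aligned crops.
model = MultimodalNet(num_subjects=100)
logits = model(torch.randn(4, 3, 112, 112),
               torch.randn(4, 3, 112, 112),
               torch.randn(4, 3, 112, 112))
print(logits.shape)  # torch.Size([4, 100])
```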

2013 ◽  
Vol 2013 ◽  
pp. 1-14 ◽  
Author(s):  
Sambit Bakshi ◽  
Pankaj K. Sa ◽  
Banshidhar Majhi

A novel approach is proposed for selecting a rectangular template around the periocular region that is optimally potent for human recognition. A template somewhat larger than the optimal one can be slightly more potent for recognition, but it heavily slows down the biometric system by making feature extraction computationally intensive and increasing the database size. A smaller template, on the contrary, cannot yield the desired recognition accuracy, although it is faster because feature extraction requires less computation. The proposed research aims to optimize these two contradictory objectives: (a) minimizing the size of the periocular template and (b) maximizing the recognition achieved through it. This paper proposes four different approaches for dynamic optimal template selection from the periocular region. The proposed methods are tested on the publicly available unconstrained UBIRISv2 and FERET databases and satisfactory results have been achieved. The template thus obtained can be used to recognize individuals in an organization and can be generalized to recognize every citizen of a nation.
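One hedged way to picture the size-versus-accuracy trade-off above (not one of the paper's four methods) is a simple search that keeps the smallest candidate template whose accuracy stays close to the best observed; `evaluate_accuracy` is a hypothetical callback supplied by the surrounding biometric pipeline.

```python
# Illustrative sketch only (not one of the paper's four methods): choose the
# smallest rectangular periocular template whose validation accuracy stays
# within a tolerance of the best observed accuracy.
from typing import Callable, Iterable, Tuple

def select_template(sizes: Iterable[Tuple[int, int]],
                    evaluate_accuracy: Callable[[int, int], float],
                    tolerance: float = 0.01) -> Tuple[int, int]:
    """Return the (width, height) with the smallest area whose accuracy is
    within `tolerance` of the maximum accuracy over all candidate sizes."""
    scored = [(w * h, evaluate_accuracy(w, h), (w, h)) for w, h in sizes]
    best_acc = max(acc for _, acc, _ in scored)
    feasible = [(area, size) for area, acc, size in scored
                if acc >= best_acc - tolerance]
    return min(feasible)[1]

# Example with a toy accuracy model that saturates as the template grows.
if __name__ == "__main__":
    candidates = [(w, h) for w in range(60, 181, 20) for h in range(40, 121, 20)]
    toy_accuracy = lambda w, h: min(0.95, 0.5 + 0.0015 * (w + h))
    print(select_template(candidates, toy_accuracy))
```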


Author(s):  
Hunny Mehrotra ◽  
Pratyush Mishra ◽  
Phalguni Gupta

In today’s high-speed world, millions of transactions occur every minute. For these transactions, data need to be readily available to the genuine people who want access, and they must be kept secure from imposters. Methods of establishing a person’s identity are broadly classified into:
1. Something You Know: These are known as knowledge-based systems. The person is granted access to the system using a piece of information such as a password, PIN, or mother’s maiden name.
2. Something You Have: These are known as token-based systems. The person needs a token such as a card key, smartcard, or security token (like a Secure ID card).
3. Something You Are: These systems rely on what a person inherently is, namely biometrics, which refers to the use of behavioral and physiological characteristics to establish the identity of an individual.
The third method of authentication is preferred over token-based and knowledge-based methods because, unlike the other approaches, it cannot be misplaced, forgotten, stolen, or hacked. Biometrics is considered one of the most reliable techniques for data security and access control. Among the traits used are fingerprints, hand geometry, handwriting, and face, iris, retinal, vein, and voice recognition. Biometric features are the information extracted from biometric samples and used for comparison. In face recognition, the feature set comprises measurements between detected landmark points, such as the eye-to-nose distance and the distance between the two eyes. Various feature extraction methods have been proposed, for example, methods using neural networks, Gabor filtering, and genetic algorithms. Among these, a class of methods based on statistical approaches has recently received wide attention. In fingerprint identification, the feature set comprises the locations and orientations of ridge endings and bifurcations, an approach known as minutiae matching (Hong, Wan, & Jain, 1998). Most iris recognition systems extract iris features using a bank of filters of many scales and orientations over the whole iris region. Palmprint recognition, just like fingerprint identification, is based on aggregate information presented in the ridge impressions, and its three main categories of matching techniques are likewise minutiae-based matching, correlation-based matching, and ridge-based matching. The feature set for the various traits may differ depending on the extraction mechanism used. A system that uses a single trait for authenticity verification is called a unimodal biometric system. A unimodal biometric system (Ross & Jain, 2003) consists of three major modules: a sensor module, a feature extraction module, and a matching module. However, even the best biometric traits face problems such as non-universality, susceptibility to biometric spoofing, and noisy input. Multimodal biometrics provides a solution to these problems. A multimodal biometric system uses multiple sensors for data acquisition, which allows capturing multiple samples of a single biometric trait (multi-sample biometrics) and/or samples of multiple biometric traits (multi-source or multimodal biometrics). This approach also enables a user who does not possess a particular biometric identifier to enroll and authenticate using other traits, thus alleviating enrollment problems.
Such systems, known as multimodal biometric systems (Tolba & Rezq, 2000), are expected to be more reliable because of the presence of multiple pieces of evidence, and a good fusion technique is required to combine the information they collect.
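As an illustration of one common fusion technique (a generic example, not drawn from the cited works), the sketch below performs score-level fusion by min-max normalizing each matcher's scores and combining them with a weighted sum; the matcher names, ranges, and weights are placeholders.

```python
# Hedged illustration (not from the cited works): simple score-level fusion of
# multiple biometric matchers using min-max normalization and a weighted sum.
import numpy as np

def min_max_normalize(scores, lo, hi):
    """Map raw matcher scores into [0, 1] given that matcher's observed range."""
    return (np.asarray(scores, dtype=float) - lo) / (hi - lo)

def fuse_scores(score_dict, ranges, weights):
    """Weighted-sum fusion of normalized scores from several modalities."""
    fused = np.zeros_like(np.asarray(next(iter(score_dict.values())), dtype=float))
    for name, raw in score_dict.items():
        fused += weights[name] * min_max_normalize(raw, *ranges[name])
    return fused

# Toy example: two candidate identities scored by a face and a fingerprint matcher.
scores = {"face": [0.62, 0.35], "fingerprint": [78.0, 91.0]}
ranges = {"face": (0.0, 1.0), "fingerprint": (0.0, 100.0)}
weights = {"face": 0.4, "fingerprint": 0.6}
print(fuse_scores(scores, ranges, weights))  # higher value = stronger combined match
```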


Author(s):  
Manish M. Kayasth ◽  
Bharat C. Patel

The entire character recognition system is logically divided into sections such as scanning, pre-processing, classification, processing, and post-processing. In the targeted system, the scanned image is first passed through the pre-processing modules and then through feature extraction and classification in order to achieve a high recognition rate. This paper focuses mainly on feature extraction and classification techniques, the methodologies that play an important role in identifying offline handwritten characters, specifically in the Gujarati language. Feature extraction provides methods with which characters can be identified uniquely and with a high degree of accuracy, and it helps to find the shape contained in the pattern. Several techniques are available for feature extraction and classification; however, selecting the technique appropriate to the given input determines the degree of recognition accuracy.
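A minimal sketch of the pipeline outlined above, pre-processing followed by feature extraction and classification, is given below; the zoning features and 1-NN classifier are illustrative stand-ins, not the paper's specific techniques.

```python
# Minimal sketch of the pipeline the abstract outlines (pre-processing ->
# feature extraction -> classification); zoning features and a 1-NN classifier
# are illustrative choices, not the paper's methods.
import numpy as np

def binarize(gray_image, threshold=128):
    """Pre-processing: convert a grayscale character image to a binary ink mask."""
    return (np.asarray(gray_image) < threshold).astype(float)

def zoning_features(binary_image, grid=(4, 4)):
    """Feature extraction: fraction of ink pixels in each cell of a grid."""
    rows = np.array_split(binary_image, grid[0], axis=0)
    return np.array([cell.mean()
                     for row in rows
                     for cell in np.array_split(row, grid[1], axis=1)])

def classify_1nn(feature, train_features, train_labels):
    """Classification: label of the nearest training sample (Euclidean distance)."""
    dists = np.linalg.norm(train_features - feature, axis=1)
    return train_labels[int(np.argmin(dists))]
```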


2019 ◽  
Vol 13 (2) ◽  
pp. 136-141 ◽  
Author(s):  
Abhisek Sethy ◽  
Prashanta Kumar Patra ◽  
Deepak Ranjan Nayak

Background: In the past decades, handwritten character recognition has received considerable attention from researchers across the globe because of its wide range of applications in daily life. From the literature, it has been observed that there are limited studies on handwritten Indian scripts, and Odia is one of them. We also reviewed some of the patents relating to handwritten character recognition. Methods: This paper deals with the development of an automatic recognition system for offline handwritten Odia characters. Prior to feature extraction, pre-processing is performed on the character images. For feature extraction, the gray level co-occurrence matrix (GLCM) is first computed from all the sub-bands of the two-dimensional discrete wavelet transform (2D DWT); thereafter, feature descriptors such as energy, entropy, correlation, homogeneity, and contrast are calculated from the GLCMs and termed the primary feature vector. In order to further reduce the feature space and generate more relevant features, principal component analysis (PCA) is employed. Because of their several salient features, random forest (RF) and K-nearest neighbor (K-NN) have become significant choices in pattern classification tasks, and therefore both RF and K-NN are applied separately in this study to classify the character images. Results: All the experiments were performed on a system running Windows 8 (64-bit) with an Intel(R) i7-4770 CPU @ 3.40 GHz. Simulations were conducted in MATLAB 2014a on a standard database, the NIT Rourkela Odia Database. Conclusion: The proposed system has been validated on this standard database. The simulation results under a 10-fold cross-validation scenario demonstrate that the proposed system achieves better accuracy than existing methods while requiring the fewest features. The recognition rates using the RF and K-NN classifiers are found to be 94.6% and 96.4%, respectively.
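The following sketch outlines the described feature pipeline (2D DWT sub-bands, GLCM descriptors, PCA, then RF or K-NN), assuming NumPy, PyWavelets, scikit-image 0.19 or later, and scikit-learn; parameter values are illustrative and not taken from the paper.

```python
# Sketch of the described feature pipeline (2D DWT sub-bands -> GLCM texture
# descriptors -> PCA -> RF / K-NN); parameters are illustrative assumptions.
import numpy as np
import pywt
from skimage.feature import graycomatrix, graycoprops  # scikit-image >= 0.19
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier

def glcm_descriptors(band):
    """Energy, entropy, correlation, homogeneity and contrast of one sub-band."""
    # Rescale the sub-band to 8-bit gray levels before building the GLCM.
    band = np.uint8(255 * (band - band.min()) / (np.ptp(band) + 1e-9))
    glcm = graycomatrix(band, distances=[1], angles=[0], levels=256,
                        symmetric=True, normed=True)
    p = glcm[:, :, 0, 0]
    entropy = -np.sum(p[p > 0] * np.log2(p[p > 0]))
    return [graycoprops(glcm, 'energy')[0, 0], entropy,
            graycoprops(glcm, 'correlation')[0, 0],
            graycoprops(glcm, 'homogeneity')[0, 0],
            graycoprops(glcm, 'contrast')[0, 0]]

def primary_features(char_image):
    """Concatenate GLCM descriptors from the four 2D DWT sub-bands."""
    cA, (cH, cV, cD) = pywt.dwt2(char_image, 'db1')
    return np.hstack([glcm_descriptors(b) for b in (cA, cH, cV, cD)])

# X: iterable of pre-processed character images, y: class labels (assumed given).
# features = np.array([primary_features(img) for img in X])
# reduced = PCA(n_components=10).fit_transform(features)
# for clf in (RandomForestClassifier(), KNeighborsClassifier(n_neighbors=3)):
#     clf.fit(reduced, y)
```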


2014 ◽  
Vol 667 ◽  
pp. 260-263 ◽  
Author(s):  
Heng Chen ◽  
Ya Xia Liu

The identification of cashmere and wool fibers is one of the difficult problems in the textile industry. Three features, namely the diameter, the diameter shaft parameter and the density, are extracted using MATLAB, and a support vector machine is used to recognize cashmere and wool fibers. The experiments show that diameter alone is not a useful feature for recognition, whereas density combined with the diameter shaft parameter is useful for differentiating the two fibers, with a recognition rate of 87.35%.
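A hedged sketch of such a classifier, using scikit-learn rather than the paper's MATLAB code, might train an SVM on the two features reported as useful (density and the diameter shaft parameter); the data below are placeholders.

```python
# Hedged sketch, not the paper's MATLAB implementation: an SVM over the two
# features the abstract reports as useful, assuming a labelled fiber dataset.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

# Each row: [density, diameter_shaft_parameter]; labels: 0 = wool, 1 = cashmere.
X = np.random.rand(200, 2)           # placeholder for measured fiber features
y = np.random.randint(0, 2, 200)     # placeholder labels

clf = make_pipeline(StandardScaler(), SVC(kernel='rbf', C=1.0))
print(cross_val_score(clf, X, y, cv=5).mean())  # estimated recognition rate
```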


2014 ◽  
Vol 2014 ◽  
pp. 1-7 ◽  
Author(s):  
Chenchen Huang ◽  
Wei Gong ◽  
Wenlong Fu ◽  
Dongyu Feng

Feature extraction is a very important part of speech emotion recognition. Addressing this problem, this paper proposes a new feature extraction method that uses deep belief networks (DBNs) to extract emotional features from the speech signal automatically. A five-layer DBN is trained to extract speech emotion features, and multiple consecutive frames are combined to form a high-dimensional feature vector. The features learned by the DBN are then fed into a nonlinear SVM classifier, yielding a multi-classifier speech emotion recognition system. The speech emotion recognition rate of the system reached 86.5%, which is 7% higher than that of the original method.
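As an approximate illustration (not the paper's exact setup), stacked restricted Boltzmann machines can stand in for the DBN feature extractor, with a nonlinear SVM on top; the frame-level features and emotion labels below are placeholders.

```python
# Illustrative sketch only (not the paper's exact setup): stacked RBMs as a
# DBN-style feature extractor followed by a nonlinear SVM, assuming
# frame-level acoustic features X and emotion labels y.
import numpy as np
from sklearn.neural_network import BernoulliRBM
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import MinMaxScaler
from sklearn.svm import SVC

X = np.random.rand(500, 120)          # placeholder: stacked consecutive-frame features
y = np.random.randint(0, 4, 500)      # placeholder: four emotion classes

dbn_svm = Pipeline([
    ("scale", MinMaxScaler()),                      # RBMs expect inputs in [0, 1]
    ("rbm1", BernoulliRBM(n_components=256, learning_rate=0.05, n_iter=20)),
    ("rbm2", BernoulliRBM(n_components=128, learning_rate=0.05, n_iter=20)),
    ("svm", SVC(kernel="rbf")),                     # nonlinear classifier on DBN features
])
dbn_svm.fit(X, y)
print(dbn_svm.score(X, y))  # training accuracy of the toy example
```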


2014 ◽  
Vol 889-890 ◽  
pp. 1065-1068 ◽
Author(s):  
Yu’e Lin ◽  
Xing Zhu Liang ◽  
Hua Ping Zhou

In recent years, feature extraction algorithms based on manifold learning, which attempt to project the original data into a lower-dimensional feature space while preserving the local neighborhood structure, have drawn much attention. Among them, Marginal Fisher Analysis (MFA) has achieved high performance for face recognition. However, MFA suffers from the small sample size problem and is still a linear technique. This paper develops a new nonlinear feature extraction algorithm, called Kernel Null Space Marginal Fisher Analysis (KNSMFA). KNSMFA is based on a new optimization criterion under which all the discriminant vectors can be calculated in the null space of the within-class scatter. KNSMFA not only exploits nonlinear features but also overcomes the small sample size problem. Experimental results on the ORL database indicate that the proposed method achieves a higher recognition rate than the MFA method and some existing kernel feature extraction algorithms.
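To make the null-space idea concrete, the sketch below applies a generic kernelized null-space discriminant step: map the data with kernel PCA, restrict to the null space of the within-class scatter, and maximize between-class scatter there. It omits MFA's graph-based scatters, so it is only a simplified stand-in for KNSMFA, assuming NumPy, scikit-learn, and more feature dimensions than training samples per class.

```python
# Generic illustration of the null-space idea (a simplified stand-in for
# KNSMFA, without MFA's marginal graph scatters).
import numpy as np
from sklearn.decomposition import KernelPCA

def null_space_discriminant(X, y, n_kpca=50, n_dims=10, tol=1e-8):
    # Nonlinear mapping via kernel PCA (stand-in for the kernel trick).
    Z = KernelPCA(n_components=n_kpca, kernel="rbf").fit_transform(X)
    mean_all = Z.mean(axis=0)
    classes = np.unique(y)
    # Within-class and between-class scatter in the mapped space.
    Sw = sum(np.cov(Z[y == c].T, bias=True) * np.sum(y == c) for c in classes)
    Sb = sum(np.sum(y == c) * np.outer(Z[y == c].mean(0) - mean_all,
                                       Z[y == c].mean(0) - mean_all)
             for c in classes)
    # Null space of Sw: eigenvectors with (near-)zero eigenvalues.
    w, V = np.linalg.eigh(Sw)
    N = V[:, w < tol]
    # Maximize between-class scatter inside that null space.
    wb, Vb = np.linalg.eigh(N.T @ Sb @ N)
    W = N @ Vb[:, ::-1][:, :n_dims]     # discriminant directions in KPCA space
    return Z @ W                        # low-dimensional discriminant features
```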

