Soft Biometrics and Deep Learning: Detecting Facial Soft Biometrics Features Using Ocular and Forehead Region for Masked Face Images

Author(s):  
Urja Banati ◽  
Vamika Prakash ◽  
Rashi Verma ◽  
Smriti Srivast

Abstract: Soft biometrics is a growing field that has been shown to improve recognition systems over the past decade. When combined with hard biometrics such as iris, gait, and fingerprint recognition, the efficiency of the system increases many fold. The pandemic brought the need to recognise mask-covered faces efficiently, and soft biometrics proved to be an aid here. While recent advances in computer vision have helped in the estimation of age and gender, the system can be improved by extending its scope to detect several other soft biometric attributes that help identify a person, including but not limited to eyeglasses, hair type and colour, moustache, and eyebrows. In this paper we propose an identification system that uses the ocular and forehead regions of the face as modalities to train models that use transfer learning to detect 12 soft biometric attributes (FFHQ dataset) and 25 soft biometric attributes (CelebA dataset) for masked faces. We compare the results with unmasked faces to observe the variation in efficiency across these datasets. We implement four enhanced models, namely enhanced AlexNet, enhanced ResNet50, enhanced MobileNetV2, and enhanced SqueezeNet. The enhanced models apply transfer learning to the base models, which helps improve accuracy. Finally, we compare the results to see how accuracy varies with the model used and with whether the images are masked or unmasked. We conclude that for images containing facial masks, the enhanced MobileNetV2 gives a splendid accuracy of 92.5% (FFHQ dataset) and 87% (CelebA dataset).
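The transfer-learning idea behind the "enhanced" models can be sketched as a frozen backbone whose features feed a new, trainable multi-label head, one sigmoid output per soft-biometric attribute (12 for the FFHQ setup). The sketch below is illustrative only: the feature vectors stand in for a pre-trained backbone's output, and all sizes and names are assumptions, not the paper's code.

```python
import numpy as np

# Hypothetical sketch: frozen backbone features + trainable multi-label
# head predicting 12 binary soft-biometric attributes (FFHQ-style setup).
rng = np.random.default_rng(0)
n_samples, n_features, n_attrs = 64, 128, 12

# Stand-in for features from a frozen, pre-trained backbone (e.g. MobileNetV2).
features = rng.standard_normal((n_samples, n_features))
labels = rng.integers(0, 2, size=(n_samples, n_attrs)).astype(float)

W = np.zeros((n_features, n_attrs))  # trainable head weights
b = np.zeros(n_attrs)                # trainable head bias

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def bce_loss(p, y):
    eps = 1e-9
    return -np.mean(y * np.log(p + eps) + (1 - y) * np.log(1 - p + eps))

lr = 0.1
losses = []
for _ in range(200):
    probs = sigmoid(features @ W + b)                 # one probability per attribute
    losses.append(bce_loss(probs, labels))
    grad = features.T @ (probs - labels) / n_samples  # BCE gradient w.r.t. W
    W -= lr * grad
    b -= lr * (probs - labels).mean(axis=0)

preds = (sigmoid(features @ W + b) > 0.5).astype(int)  # 12 binary attributes
```

Only the head's weights are updated; in the full system the backbone would be a pre-trained CNN fine-tuned on the masked-face data.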

Face recognition plays a vital role in security. In recent years, researchers have focused on pose, illumination, and related face recognition problems. Traditional methods of face recognition focus on OpenCV's Fisherfaces, which analyse facial expressions and attributes. The deep learning method used in the proposed system is a convolutional neural network (CNN). The proposed work includes the following modules: [1] face detection, [2] gender recognition, [3] age prediction. The results obtained from this work show that real-time age and gender detection using a CNN provides better accuracy than other existing approaches.


Author(s):  
Vo Thi Ngoc Chau ◽  
Nguyen Hua Phung

Educational data clustering on students' data collected within a program can find several groups of students who share similar characteristics in their behaviour and study performance. For some programs, it is not trivial to prepare enough data for the clustering task; data shortage can then reduce the effectiveness of the clustering process, so the true clusters cannot be discovered properly. On the other hand, other programs have been well examined, with much larger data sets available for the task. It is therefore natural to ask whether we can exploit the larger data sets from other source programs to enhance the educational data clustering task on the smaller data sets from the target program. Using transfer learning techniques, our paper defines a transfer-learning-based clustering method built on the kernel k-means and spectral feature alignment algorithms as a solution to the educational data clustering task in this context. Moreover, our method is optimised within a weighted feature space so that the contribution of the larger source data sets to the clustering process can be determined automatically. This ability is the novelty of our proposed transfer-learning-based clustering solution compared to those in existing works. Experimental results on several real data sets show that our method consistently outperforms other methods based on various approaches, under both external and internal validation.
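The kernel k-means component mentioned above can be sketched in a few lines: cluster assignments are updated using squared distances in the kernel-induced feature space, expressed purely through kernel entries. This is a minimal sketch assuming an RBF kernel; the spectral feature alignment and the weighted source-data contribution of the paper are not shown.

```python
import numpy as np

# Minimal kernel k-means sketch (RBF kernel assumed; the paper's spectral
# feature alignment and source-weighting steps are omitted).

def rbf_kernel(X, gamma=1.0):
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

def kernel_kmeans(K, k, n_iter=50, seed=0):
    n = K.shape[0]
    labels = np.random.default_rng(seed).integers(0, k, n)
    for _ in range(n_iter):
        dist = np.full((n, k), np.inf)
        for c in range(k):
            mask = labels == c
            m = mask.sum()
            if m == 0:
                continue
            # ||phi(x_i) - mu_c||^2 expanded using kernel entries only
            dist[:, c] = (np.diag(K)
                          - 2 * K[:, mask].sum(1) / m
                          + K[np.ix_(mask, mask)].sum() / m ** 2)
        new = dist.argmin(1)
        if (new == labels).all():
            break
        labels = new
    return labels

# Two well-separated groups of points should land in different clusters.
X = np.vstack([np.random.default_rng(1).normal(0, 0.1, (20, 2)),
               np.random.default_rng(2).normal(5, 0.1, (20, 2))])
labels = kernel_kmeans(rbf_kernel(X), k=2)
```

The expansion avoids ever computing explicit cluster centroids, which is what lets the method work in the implicit feature space.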


Author(s):  
Sangamesh Hosgurmath ◽  
Viswanatha Vanjre Mallappa ◽  
Nagaraj B. Patil ◽  
Vishwanath Petli

Face recognition is one of the important biometric authentication research areas for security purposes in fields such as pattern recognition and image processing. However, human face recognition poses a major challenge for machine learning and deep learning techniques, since input images vary with people's poses, lighting conditions, expressions, and ages, as well as illumination conditions, which degrades recognition accuracy. In the present research, the resolution of image patches is reduced by the max pooling layer of a convolutional neural network (CNN), which also makes the model more robust than the traditional feature extraction technique called local multiple pattern (LMP). The extracted features are fed into linear collaborative discriminant regression classification (LCDRC) for final face recognition. Due to the CNN-based optimisation in LCDRC, the between-class distance ratio is maximised while the within-class feature distance is reduced. The results show that CNN-LCDRC achieved 93.10% and 87.60% mean recognition accuracy, whereas traditional LCDRC achieved 83.35% and 77.70%, on the ORL and YALE databases respectively, for training number 8 (i.e. 80% training and 20% testing data).
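The max pooling step described above can be sketched in a few lines: a 2x2 window slides over the patch and keeps only the strongest response, halving the resolution while adding tolerance to small shifts. Sizes here are illustrative.

```python
import numpy as np

# Minimal 2x2 max pooling sketch: halves the resolution of an image
# patch, keeping the maximum response in each window.

def max_pool2d(x, size=2):
    h, w = x.shape
    x = x[:h - h % size, :w - w % size]  # trim to a multiple of `size`
    return x.reshape(h // size, size, w // size, size).max(axis=(1, 3))

patch = np.array([[1, 3, 2, 0],
                  [4, 2, 1, 1],
                  [0, 0, 5, 6],
                  [1, 2, 7, 8]], dtype=float)
pooled = max_pool2d(patch)   # 4x4 patch -> 2x2 map of window maxima
```

Each output cell holds the maximum of one non-overlapping 2x2 window, so small translations of a feature within a window leave the pooled output unchanged.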


2019 ◽  
Vol 8 (4) ◽  
pp. 6670-6674

Face recognition is the most important part of identifying people in a biometric system, and it is the most widely used biometric modality. This paper focuses on human face recognition by computing the facial features present in an image and recognising the person from those features. Every face recognition system follows preprocessing and face detection steps. This paper focuses mainly on face detection and gender classification, performed in two stages: the first stage is face detection using an enhanced Viola-Jones algorithm, and the second stage is gender classification. The input video or surveillance footage is converted into frames, and a few of the best frames are selected for face detection, with each candidate image assessed using PSNR during preprocessing. After preprocessing, face detection is performed, and a comparative analysis of gender classification is carried out using a neural network classifier and an LBP-based classifier.
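The PSNR measure used for frame selection can be sketched directly from its definition: frames that deviate least from a reference score a higher PSNR, so the best frames can be ranked by it. The reference and noisy frames below are synthetic stand-ins.

```python
import numpy as np

# PSNR sketch for the frame-selection step: higher PSNR = closer to the
# reference, so less-degraded frames can be preferred for face detection.

def psnr(reference, frame, max_val=255.0):
    mse = np.mean((reference.astype(float) - frame.astype(float)) ** 2)
    if mse == 0:
        return float("inf")  # identical frames
    return 10.0 * np.log10(max_val ** 2 / mse)

rng = np.random.default_rng(0)
reference = rng.integers(0, 256, (64, 64)).astype(float)
slightly_noisy = np.clip(reference + rng.normal(0, 2, (64, 64)), 0, 255)
very_noisy = np.clip(reference + rng.normal(0, 25, (64, 64)), 0, 255)
```

Ranking candidate frames by `psnr(reference, frame)` and keeping the top few implements the "select few best frames" step described above.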


Images generated from a variety of sources and foundations today can be difficult for a user to interpret for similarity or to analyse for further use because of their differing segmentation policies. This inconsistency can generate many errors, which makes previously used traditional methodologies such as supervised learning less effective, since they require a large quantity of labelled training data that mirrors the desired target data. This paper therefore puts forward an alternative technique, transfer learning, for use in image diagnosis, so that efficiency and accuracy across images can be achieved. This mechanism deals with variation between the desired and actual training data and with outlier sensitivity, ultimately enhancing predictions and giving better results in various areas than the traditional methodologies. The analysis further discusses three types of transfer classifiers that can be applied using only a small volume of training data, and contrasts them with the traditional method, which requires large quantities of training data whose attributes differ only slightly. The three classifiers were compared with one another, and together against the traditional methodology, on a common application used in daily life. Commonly occurring problems, such as outlier sensitivity, were also taken into consideration, and measures were taken to recognise and mitigate them. On further research it was observed that the performance of transfer learning exceeds that of the conventional supervised learning approaches when only a small amount of characteristic training data is available, reducing classification errors to a great extent.


2017 ◽  
Vol 17 (01) ◽  
pp. 1750005 ◽  
Author(s):  
Aruna Bhat

A methodology for makeup-invariant robust face recognition based on features from accelerated segment test (FAST) and eigenvectors is proposed. Makeup and cosmetic changes to the face have long been a major cause of security breaches. It is not only difficult for the human eye to catch an impostor but an equally daunting task for a face recognition system to correctly identify an individual, owing to the changes makeup brings about in the face. As a crucial preprocessing step, the face is first divided into segments centred on the eyes, nose, lips, and cheeks. The FAST algorithm is then applied to the face images, and the features thus derived act as the fiducial points for that face. Thereafter, principal component analysis is applied to the set of fiducial points in each segment of every face image in the data sets in order to compute the eigenvectors and eigenvalues. The resulting principal component, the eigenvector with the largest eigenvalue, yields the direction of the features in that segment. The principal components obtained from the FAST fiducial points in each segment of the test and training data are then compared in order to find the best match, or no match.
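The per-segment PCA step above reduces each segment's fiducial points to a single dominant direction. A minimal sketch, with synthetic 2-D points standing in for FAST keypoints from one segment (e.g. along an eyebrow):

```python
import numpy as np

# Sketch of the per-segment PCA step: the eigenvector of the point
# covariance with the largest eigenvalue gives the dominant direction of
# the fiducial points in that segment. Points are synthetic stand-ins.

def principal_direction(points):
    centered = points - points.mean(axis=0)
    cov = np.cov(centered, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)  # eigenvalues in ascending order
    return eigvecs[:, -1]                   # eigenvector of the largest one

# Points spread mostly along the x-axis, as along an eyebrow segment.
rng = np.random.default_rng(0)
pts = np.column_stack([rng.uniform(-10, 10, 100), rng.normal(0, 0.5, 100)])
direction = principal_direction(pts)
```

Comparing test and training faces then amounts to comparing these per-segment directions (e.g. by the angle between them), which is far more compact than comparing raw keypoints.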


Human facial images help to acquire demographic information about a person, such as ethnicity and gender. At the same time, ethnicity and gender play a significant part in face-related applications. In this study, image-based ethnicity identification is treated as a classification problem and solved with deep learning techniques. In this paper, a new multi-modal region-based convolutional neural network (MM-RCNN) is proposed for the detection and classification of ethnicity, together with age, gender, emotion, and so on. The presented model involves two stages, namely feature extraction and classification. In the first stage, an efficient feature extraction model called ImageAnnot is developed to extract useful features from an image. In the second stage, the MM-RCNN is employed to identify and then classify ethnicity. To validate the performance of the applied MM-RCNN model, various evaluation parameters have been presented, and the simulation outcomes verified the superiority of the presented model compared to existing models.


This research aims to achieve high accuracy for a face recognition system. The convolutional neural network (CNN) is one of the deep learning approaches and has demonstrated excellent performance in many fields, including image recognition with large amounts of training data (such as ImageNet). In practice, hardware limitations and insufficient training data sets are the main obstacles to high performance. Therefore, in this work a deep transfer learning method using the AlexNet pre-trained CNN is proposed to improve the performance of the face recognition system even for a small number of images. The transfer learning method fine-tunes the last layer of the AlexNet CNN model for the new classification task. A data augmentation (DA) technique is also proposed to minimise over-fitting during deep transfer learning training and to improve accuracy. The results showed improvements in both over-fitting and performance after using the data augmentation technique. All experiments were tested on the small UTeMFD, GTFD, and CASIA-Face V5 data sets. As a result, the proposed system achieved high accuracy: 100% on UTeMFD, 96.67% on GTFD, and 95.60% on CASIA-Face V5, in less than 0.05 seconds of recognition time.
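The data augmentation idea above can be sketched simply: each training image yields extra samples via label-preserving transformations such as flips and small shifts, which enlarges a small face data set and reduces over-fitting during fine-tuning. The specific transformations below are illustrative, not necessarily those used in the paper.

```python
import numpy as np

# Illustrative data augmentation for small face data sets: each image
# yields extra training samples via label-preserving transformations.

def augment(image):
    """Return the image plus a horizontally flipped and a shifted copy."""
    flipped = image[:, ::-1]                     # horizontal mirror
    shifted = np.roll(image, shift=2, axis=1)    # small horizontal shift
    return [image, flipped, shifted]

face = np.arange(16).reshape(4, 4)
batch = augment(face)   # 1 image -> 3 training samples
```

During fine-tuning, only the replaced last layer's weights are trained on such augmented batches while the earlier AlexNet layers stay (mostly) frozen.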


A biometric identification system that verifies a person using physical or behavioural features is safer than password and number systems. Present applications mostly recognise an individual using a single-modal biometric system. However, a single characteristic sometimes fails to authenticate accurately. Multimodal biometric technologies solve the problems that exist in single-biometric systems. It is very hard to identify faces in low-light environments using a facial recognition system alone; by also utilising fingerprint recognition, this issue can be better addressed. This paper presents a dual personnel authentication system that incorporates face and fingerprint to improve security. The Discrete Wavelet Transform (DWT) algorithm is used to acquire features from the face and fingerprint images, and decision-level fusion is the technique used to integrate fingerprint and face. By adding fingerprint recognition to the scheme, the proposed algorithm decreases the false rejection rate (FRR) relative to face or fingerprint recognition alone, and hence increases the accuracy of the authentication.
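Decision-level fusion, as named above, means each modality makes its own accept/reject decision and the two decisions are then combined. A minimal sketch with an OR rule, which is one common way to lower the false rejection rate (the thresholds and scores are illustrative; the paper's exact fusion rule is not specified here):

```python
# Decision-level fusion sketch: each modality decides independently, and
# an OR rule accepts the user if either modality accepts. This lowers the
# false rejection rate: a genuine user rejected by one modality (e.g. a
# face in low light) can still pass via the other. Thresholds are illustrative.

FACE_THRESHOLD = 0.7
FINGER_THRESHOLD = 0.6

def face_decision(score):
    return score >= FACE_THRESHOLD

def finger_decision(score):
    return score >= FINGER_THRESHOLD

def fused_decision(face_score, finger_score):
    # OR-rule fusion: accept if either modality accepts.
    return face_decision(face_score) or finger_decision(finger_score)

# A genuine user photographed in low light: the face score alone fails,
# but the fingerprint rescues the decision.
accepted = fused_decision(face_score=0.4, finger_score=0.9)
```

An AND rule would instead lower the false acceptance rate at the cost of more rejections; the choice depends on which error matters more for the deployment.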


2020 ◽  
Author(s):  
Ziaul Haque Choudhury

Biometrics is a rapidly developing technology that has been broadly applied in forensics, such as criminal identification, secure access, and prison security. Biometric technology is basically a pattern recognition system that identifies a person by verifying the legitimacy of a specific behavioural or physiological characteristic of that person. Today, the face is one of the most widely accepted biometrics, used by humans in visual interaction and for authentication purposes. The challenges in face recognition arise from issues with cosmetics-applied faces and low-quality images. In this thesis, we propose two novel techniques for extracting facial features and recognising faces when thick cosmetics are applied and when images are of low quality. In face recognition technology, facial-mark identification is one of the distinctive identification tasks using soft biometrics; facial-mark information can also raise the face matching score and thereby improve recognition performance. When thick cosmetics are applied to faces, some of the facial marks become invisible or hidden. In the literature, the AAM (Active Appearance Model) and LoG (Laplacian of Gaussian) techniques are used to detect facial marks. However, to the best of our knowledge, existing facial-mark detection methods perform poorly, especially when thick cosmetics are applied to the faces. A robust method is proposed to detect facial marks such as tattoos, scars, freckles, and moles. Initially, the Active Appearance Model (AAM) is applied for facial feature detection. In addition to this prior model, the Canny edge detector is applied to detect the edges of facial marks. Finally, SURF is used to detect the hidden facial marks covered by cosmetic items.
This method has been shown to give high accuracy in facial-mark detection on cosmetics-applied faces. Besides this, another aspect of face recognition, based on low-quality images, is also studied. Face recognition plays a major role in the biometric security environment, and secure authentication requires a robust methodology for recognising and authenticating the human face. However, there are a number of difficulties in recognising the human face and authenticating the person reliably, including low image quality due to sparse dark or light disturbances. To overcome such problems, powerful algorithms are required to filter the images and detect the face and facial marks. The technique consists largely of detecting the various facial marks in low-quality images corrupted by salt-and-pepper noise. Initially, an Adaptive Median Filter (AMF) is applied to filter the images. The primary facial features are then detected from the filtered images using a powerful algorithm, the Active Shape Model (ASM) fitted into an Active Appearance Model (AAM). Finally, the features are extracted using the Gradient Location and Orientation Histogram (GLOH) feature extractor. Experimental results on the CVL database (1000 images of 1000 subjects) and the CMU PIE database (2000 images of 2000 subjects) show that the use of soft biometrics improves face recognition performance; 93 percent accuracy is achieved. A second experiment, conducted on an Indian face database with 1000 images, achieved 95 percent accuracy.
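The adaptive median filtering step for salt-and-pepper noise can be sketched as follows: a pixel that looks like an impulse (0 or 255) is replaced by the median of a window around it, and the window grows when the median itself is an impulse. This is a minimal sketch with illustrative window limits, not the thesis's exact filter.

```python
import numpy as np

# Minimal adaptive median filter sketch for salt-and-pepper noise: replace
# impulse pixels (0 or 255) with the window median; grow the window if the
# median is itself an impulse value.

def adaptive_median(img, max_size=7):
    out = img.copy().astype(float)
    h, w = img.shape
    for i in range(h):
        for j in range(w):
            size = 3
            while size <= max_size:
                r = size // 2
                win = img[max(0, i - r):i + r + 1, max(0, j - r):j + r + 1]
                med = np.median(win)
                if 0 < med < 255:                      # median is not an impulse
                    if img[i, j] == 0 or img[i, j] == 255:
                        out[i, j] = med                # replace noisy pixel only
                    break
                size += 2                              # grow the window and retry
    return out

# Flat grey image corrupted with a few salt (255) and pepper (0) pixels.
clean = np.full((16, 16), 120.0)
noisy = clean.copy()
noisy[2, 3], noisy[8, 8], noisy[12, 5] = 255, 0, 255
restored = adaptive_median(noisy)
```

Unlike a plain median filter, pixels that are not impulses are left untouched, which preserves edges and fine facial marks while removing the noise.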

