face database
Recently Published Documents

TOTAL DOCUMENTS: 188 (five years: 40)
H-INDEX: 17 (five years: 2)

2021 ◽  
Author(s):  
Toshiki Saito ◽  
Steven Almaraz ◽  
Kurt Hugenberg

Past research has demonstrated a link between facial expressions and mind perception, yet why expressions, especially happy expressions, influence mind attribution remains unclear. Conducting four studies, we addressed this issue. In Study 1, we investigated whether the valence or behavioral intention (i.e., approach or avoidance) implied by different emotions affected the minds ascribed to expressers. Happy (positive valence and approach intention) targets were ascribed more sophisticated minds than were targets displaying neutral, angry (negative-approach), or fearful (negative-avoidance) expressions, suggesting emotional valence was relevant to mind attribution but apparent behavioral intentions were not. We replicated this effect using both Black and White targets (Study 2) and another face database (Study 3). In Study 4, we conducted path analyses to examine attractiveness and expectations of social acceptance as potential mediators of the effect. Our findings suggest that signals of social acceptance are crucial to the effect emotional expressions have on mind perception.


2021 ◽  
Vol 5 (4) ◽  
pp. 420
Author(s):  
Budi Tri Utomo ◽  
Iskandar Fitri ◽  
Eri Mardiani

In the era of big data, biometric identification is growing very fast and is increasingly implemented in many applications. Face recognition technology uses artificial intelligence (AI) to recognize faces already stored in a database. This research proposes the design of an online academic login system at the National University using real-time face recognition with OpenCV, the Local Binary Pattern Histogram algorithm, and the Haar Cascade Classifier method. The system detects, recognizes, and compares faces against the stored face database. The input images are 480 x 680 pixels with a .jpg extension, in RGB, and are converted to grayscale to simplify computing the histogram value of each face to be recognized. A system modeled in this way is expected to make it easy for users to log into the online academic system.
Keywords: Face Recognition, Haar Cascade Classifier, Local Binary Pattern Histogram, Online Academic, OpenCV.
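The Local Binary Pattern Histogram descriptor this system relies on is easy to state concretely. The sketch below is a minimal pure-NumPy illustration, not OpenCV's implementation: it applies only the basic 8-neighbour LBP operator and one global histogram, whereas OpenCV's LBPH recognizer computes histograms over a grid of cells and compares them between faces.

```python
import numpy as np

def lbp_histogram(gray, bins=256):
    """Basic 8-neighbour Local Binary Pattern histogram of a grayscale image.

    Each interior pixel is compared with its 8 neighbours; a neighbour that is
    >= the centre sets one bit of an 8-bit LBP code. The normalised histogram
    of codes serves as the face descriptor.
    """
    g = gray.astype(np.int32)
    c = g[1:-1, 1:-1]  # centre pixels (borders skipped)
    # neighbour offsets, one per bit, clockwise from top-left
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = np.zeros_like(c)
    for bit, (dy, dx) in enumerate(offsets):
        nb = g[1 + dy:g.shape[0] - 1 + dy, 1 + dx:g.shape[1] - 1 + dx]
        codes |= (nb >= c).astype(np.int32) << bit
    hist, _ = np.histogram(codes, bins=bins, range=(0, bins))
    return hist / hist.sum()  # normalised descriptor

# A uniform image yields code 255 everywhere (every neighbour >= centre).
flat = np.full((8, 8), 100, dtype=np.uint8)
h = lbp_histogram(flat)
```

Because LBP codes depend only on intensity ordering, the descriptor is robust to the monotonic lighting changes that motivate the grayscale conversion described above.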


2021 ◽  
Vol 21 (9) ◽  
pp. 2964
Author(s):  
Mahnoor Nadeem ◽  
Nihan Alp ◽  
Yagmur Damla Sentürk

2021 ◽  
Vol 11 (14) ◽  
pp. 6303
Author(s):  
Ismahane Cheheb ◽  
Noor Al-Maadeed ◽  
Ahmed Bouridane ◽  
Azeddine Beghdadi ◽  
Richard Jiang

While there has been a massive increase in research into face recognition, it remains a challenging problem due to conditions present in real life. This paper focuses on the issue of partial occlusion distortions, which is inherently present in real face recognition applications, and proposes an approach to tackle it. First, face images are divided into multiple patches, and the local descriptors of Local Binary Patterns and Histograms of Oriented Gradients are applied to each patch. Next, the resulting histograms are concatenated, and their dimensionality is reduced using Kernel Principal Component Analysis. Patches are then randomly sampled to construct several sub-Support Vector Machine classifiers, and the outputs of these sub-classifiers are combined to generate the final recognition outcome. Experimental results on the AR face database and the Extended Yale B database show the effectiveness of the proposed technique.
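As a rough illustration of the patch pipeline described above (hypothetical image and grid sizes; a plain intensity histogram stands in for the LBP/HOG descriptors, and the KPCA step and SVM training are omitted):

```python
import numpy as np

rng = np.random.default_rng(0)

def split_patches(img, rows, cols):
    """Divide an image into a rows x cols grid of equal, non-overlapping patches."""
    H, W = img.shape
    ph, pw = H // rows, W // cols
    return [img[r * ph:(r + 1) * ph, c * pw:(c + 1) * pw]
            for r in range(rows) for c in range(cols)]

def patch_descriptor(patch, bins=16):
    """Stand-in for the per-patch LBP/HOG descriptor: a normalised intensity histogram."""
    hist, _ = np.histogram(patch, bins=bins, range=(0, 256))
    return hist / max(hist.sum(), 1)

face = rng.integers(0, 256, size=(64, 64)).astype(np.uint8)
patches = split_patches(face, 4, 4)  # 16 patches of 16x16 pixels
desc = np.concatenate([patch_descriptor(p) for p in patches])  # 16*16 = 256-dim

# Random patch sampling: each sub-classifier sees only a random subset of
# patches, so an occlusion corrupts some sub-classifiers but not all, and
# the final label comes from combining (e.g. voting over) their outputs.
n_classifiers, patches_per_clf = 5, 8
subsets = [rng.choice(len(patches), size=patches_per_clf, replace=False)
           for _ in range(n_classifiers)]
```

The design rationale is that occlusion is spatially local: by training each sub-SVM on a different random subset of patch descriptors, an occluded region can only degrade the sub-classifiers whose subsets include it.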


2021 ◽  
Author(s):  
Md. Obaidul Malek

The principal challenge in biometric authentication is to mitigate the effects of any noise while extracting biometric features for biometric template generation. Most biometric systems are developed under the assumption that the extracted biometrics and the nature of their associated interferences are linear, stationary, and homogeneous. When these assumptions are violated by nonlinear, nonstationary, and heterogeneous noise, authentication performance deteriorates. In addition, demand for biometric templates is rising across information technology, increasing the vulnerability of stored and dynamic information. The development of a sophisticated authentication and encryption method is therefore necessary to address these challenges. This dissertation proposes a new Sequential Subspace Estimator (SSE) algorithm for biometric authentication. In the proposed method, a sequential estimator is designed in the image subspace to address the challenges arising from nonlinear, nonstationary, and heterogeneous noise. The method includes a subspace technique that overcomes the computational complexity associated with the sequential estimator, as well as a novel MultiBiometrics encryption algorithm that protects the biometric templates against security, privacy, and unlinkability attacks. Unlike current biometric encryption, this method uses cryptographic keys in conjunction with the extracted MultiBiometrics to create cryptographic bonds, called "BioCryptoBond". To further enhance system security and improve authentication accuracy, the development of a biometric database management system is also considered. The proposed method is tested on images from three public databases: the Put Face Database, the Indian Face Database, and the CASIA Fingerprint Image Database Version 5.1.
Performance is evaluated using the Equal Error Rate (EER) and Correct Recognition Rate (CRR), and the experimental results demonstrate the superiority of the proposed method over its counterparts.
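The abstract does not spell out the SSE equations, but the core idea of estimating sequentially inside a low-dimensional image subspace rather than the full image space can be illustrated generically. The sketch below is not the dissertation's algorithm: it uses a running-mean estimator in a random orthonormal subspace, with all names and sizes hypothetical, purely to show why the projection makes sequential updates cheap.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical setup: a fixed orthonormal basis U spans the image subspace;
# noisy feature vectors arrive one at a time, and the template estimate is
# updated recursively with k numbers instead of re-solving in d dimensions.
d, k = 100, 5                          # ambient and subspace dimensions
U, _ = np.linalg.qr(rng.standard_normal((d, k)))  # U.T @ U = I

true_template = U @ rng.standard_normal(k)        # ground truth in the subspace
est = np.zeros(k)                                 # estimate lives in k dims
for n in range(1, 501):
    x = true_template + 0.5 * rng.standard_normal(d)  # noisy sample
    y = U.T @ x                       # project: k coefficients per sample
    est += (y - est) / n              # recursive (running-mean) update

recovered = U @ est                   # lift back to image space
err = np.linalg.norm(recovered - true_template)
```

Each update touches only k coefficients, which is the computational saving a subspace technique buys over estimating directly in the d-dimensional image space.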



Author(s):  
Hashiru Isiaka Muhammad ◽  
Kabir Ibrahim Musa ◽  
Mustapha Lawal Abdulrahman ◽  
Abdullahi Abubakar ◽  
Kabiru Umar ◽  
...  

In this paper, we present a new face detection and recognition scheme that uses deep learning and achieves state-of-the-art recognition performance on real-world datasets. We designed and implemented a face recognition system combining Principal Component Analysis (PCA) with a Faster Region-based Convolutional Neural Network (Faster R-CNN). In particular, we improve on the Faster R-CNN framework by using PCA for feature extraction and dimensionality reduction on the face database, while the Faster R-CNN identifies patterns in the dataset through training. The three real-world datasets used in our experiments are the ORL, Yale, and California face datasets. On the ORL face dataset, the algorithm achieved an average recognition accuracy of 99%, with a recognition time of 147.72 seconds over 10 runs (0.3 sec/image on 400 images). On the Yale face dataset, it achieved an average recognition accuracy of 99.24%, with a recognition time of 63.45 seconds over 10 runs (0.53 sec/image on 120 images). Finally, on the California Face Database (CFD), it achieved an average recognition accuracy of 99.52%, with a recognition time of 226.05 seconds over 10 runs (0.27 sec/image on 827 images). On the CFD dataset, moreover, the proposed approach shows excellent classification performance when the recall ratio is high. The proposed method achieves higher recall and accuracy than Faster R-CNN without PCA, and its F-score of 0.98 is significantly higher than the 0.95 achieved by Faster R-CNN alone. This demonstrates the superiority of our model over the state of the art in both accuracy and recognition speed,
making it more efficient than the latest research in the area of facial recognition.
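The PCA stage described above can be sketched independently of the detector. This is a minimal NumPy version with hypothetical data sizes: the principal directions ("eigenfaces") come from an SVD of the centred data matrix, and the projected coordinates are the reduced features handed to the downstream classifier.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical data: 50 face images of 20x20 pixels, flattened to rows.
X = rng.standard_normal((50, 400))

def pca_project(X, n_components):
    """Project samples onto the top principal components of the data."""
    mean = X.mean(axis=0)
    Xc = X - mean                      # centre the data
    # SVD of the centred matrix; rows of Vt are orthonormal principal directions
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    components = Vt[:n_components]
    return Xc @ components.T, components, mean

Z, components, mean = pca_project(X, 32)
# Z is the 32-dimensional feature representation per image; a new image x
# would be reduced the same way: (x - mean) @ components.T
```

Reducing 400 raw pixels to 32 coefficients before training is what shrinks the network's input and, per the paper's results, speeds up recognition without hurting accuracy.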


Author(s):  
Samer Kais Jameel ◽  
Jafar Majidpour

Transforming between different image types (thermal infrared (TIR), visible spectrum, and near-infrared (NIR)) has recently posed numerous challenging problems. A camera of one type may lack the abilities and features of other, frequently used camera types that produce different kinds of images. Depending on camera features, different applications may arise from observing a scenario under specific conditions (darkness, fog, night, day, and artificial light), and we need to move from one modality to another to understand the scenario better. This paper proposes a fully automatic model (GVTI-AE) that transforms between these image types to produce vibrant, realistic images using the AutoEncoder method, requiring neither pre- nor post-processing nor any user input. Experiments with the GVTI-AE model showed that it produces perceptually realistic results on widely available datasets (Tecnocampus Hand Image Database, Carl dataset, and IRIS Thermal/Visible Face Database).
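As a minimal illustration of the AutoEncoder principle behind such models (this is plain linear reconstruction in NumPy, not the paper's GVTI-AE architecture or its cross-spectrum translation; all sizes are hypothetical), the following trains an encoder/decoder pair by gradient descent and drives the reconstruction loss down:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical low-rank "image" data: 100 samples of dimension 20 that
# actually lie on a 4-dimensional linear subspace, so a 4-unit bottleneck
# can in principle reconstruct them.
n, d, k = 100, 20, 4
X = rng.standard_normal((n, k)) @ rng.standard_normal((k, d))

W1 = 0.1 * rng.standard_normal((k, d))   # encoder weights
W2 = 0.1 * rng.standard_normal((d, k))   # decoder weights

losses = []
lr = 0.01
for _ in range(300):
    H = X @ W1.T                 # encode: (n, k) bottleneck codes
    R = H @ W2.T                 # decode: (n, d) reconstructions
    E = R - X                    # reconstruction error
    losses.append(float((E ** 2).mean()))
    gW2 = E.T @ H / n            # gradient direction w.r.t. decoder
    gW1 = (E @ W2).T @ X / n     # gradient direction w.r.t. encoder
    W2 -= lr * gW2
    W1 -= lr * gW1
```

In an image-to-image translation setting, the same training loop would instead penalise the difference between the decoder output and the target modality, which is conceptually how an autoencoder learns a mapping rather than an identity.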

