ALGORITHMS FOR IMAGE PRE-PROCESSING IN THE FACE IDENTIFICATION SYSTEM IN THE VIDEO STREAM

2019 ◽  
Vol 4 (91) ◽  
pp. 21-29 ◽  
Author(s):  
Yaroslav Trofimenko ◽  
Lyudmila Vinogradova ◽  
Evgeniy Ershov
2018 ◽  
Vol 7 (3.34) ◽  
pp. 237
Author(s):  
R Aswini Priyanka ◽  
C Ashwitha ◽  
R Arun Chakravarthi ◽  
R Prakash

Face recognition has become an important research topic in the scientific community. A face identification system is an application capable of verifying a human face from live video or digital images, typically by comparing a person's facial attributes against those stored in a database, and it is widely used in biometrics and security systems. Face identification was long a challenging problem because of variations in viewpoint and facial expression; with the arrival of deep learning neural networks in the technology stack, detecting and recognizing faces has become much easier and the efficiency has increased dramatically. In this paper, the ORL database, which contains ten images of each of forty people, is used to evaluate our methodology. We apply a Back Propagation Neural Network (BPNN) within a deep learning model to recognize faces and to increase the efficiency of the model compared with previously existing face recognition models.
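
A minimal sketch, assuming flattened grayscale ORL-style images (40 subjects, 10 images each) are already loaded elsewhere; it is not the authors' exact BPNN architecture, only an illustration of a one-hidden-layer network trained with backpropagation for face classification.

import numpy as np

rng = np.random.default_rng(0)

def one_hot(labels, n_classes=40):
    out = np.zeros((labels.size, n_classes))
    out[np.arange(labels.size), labels] = 1.0
    return out

def train_bpnn(X, y, hidden=100, lr=0.01, epochs=200):
    """X: (n_samples, n_pixels) flattened face images, y: integer labels 0..39."""
    n, d = X.shape
    k = int(y.max()) + 1
    W1 = rng.normal(0, 0.01, (d, hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(0, 0.01, (hidden, k)); b2 = np.zeros(k)
    T = one_hot(y, k)
    for _ in range(epochs):
        # forward pass: hidden layer with tanh, softmax output
        H = np.tanh(X @ W1 + b1)
        logits = H @ W2 + b2
        P = np.exp(logits - logits.max(axis=1, keepdims=True))
        P /= P.sum(axis=1, keepdims=True)
        # backward pass (cross-entropy gradient), then gradient-descent update
        dL = (P - T) / n
        dW2 = H.T @ dL; db2 = dL.sum(axis=0)
        dH = dL @ W2.T * (1 - H ** 2)
        dW1 = X.T @ dH; db1 = dH.sum(axis=0)
        W1 -= lr * dW1; b1 -= lr * db1
        W2 -= lr * dW2; b2 -= lr * db2
    return W1, b1, W2, b2

def predict(X, W1, b1, W2, b2):
    H = np.tanh(X @ W1 + b1)
    return (H @ W2 + b2).argmax(axis=1)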


2019 ◽  
Vol 8 (4) ◽  
pp. 12888-12891

A face identification system using fast genetic algorithm computation (FGA) is presented. The FGA is used to search for a face in a database. The objective of the work is to build a face identification system that can recognize a face from a given image or from any other image streaming source such as a webcam. The system must also detect the face accurately in order to identify it accurately. The image can be captured from the proposed webcam, from a stored JPEG or PNG image, or from any other data source. The system needs training with an adequate number of sample images to perform this operation, and training the generic system plays a vital role in identifying the face in an image. A tolerance is specified as a limit on the genetic algorithm and acts as the terminal condition for the evolution. A unique encoding is used that stores the facial features of a human face as a numeric string, which can be stored and searched with much greater ease, thereby decreasing the search and computation time. A template matching technique is applied to identify the face within a larger picture. Eigenfaces are generated using a mathematical procedure called principal component analysis (PCA), and eigenfeatures are computed so that facial metrics can be measured from nodal points.
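
A minimal sketch of the eigenface (PCA) step described in the abstract; the genetic-algorithm search and template matching are assumed to operate on the resulting compact numeric codes and are not shown here, and the nearest-neighbour search below is only a stand-in for that search.

import numpy as np

def build_eigenfaces(images, n_components=50):
    """images: (n_samples, n_pixels) flattened training faces."""
    mean_face = images.mean(axis=0)
    centered = images - mean_face
    # SVD of the centered data; rows of Vt are the eigenfaces
    _, _, Vt = np.linalg.svd(centered, full_matrices=False)
    return mean_face, Vt[:n_components]

def encode(image, mean_face, eigenfaces):
    """Project a flattened face onto the eigenface basis -> compact numeric code."""
    return eigenfaces @ (image - mean_face)

def match(probe_code, gallery_codes):
    """Nearest-neighbour search over stored codes (stand-in for the GA search)."""
    dists = np.linalg.norm(gallery_codes - probe_code, axis=1)
    return int(dists.argmin()), float(dists.min())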


2021 ◽  
Vol 11 (5) ◽  
pp. 2074
Author(s):  
Bohan Yoon ◽  
Hyeonji So ◽  
Jongtae Rhee

Recent improvements in the performance of human face recognition models have led to the development of relevant products and services. However, research in the similar field of animal face identification has remained relatively limited because of the greater diversity and complexity in shape and the lack of relevant data for animal faces such as those of dogs. In face identification models that use the triplet loss, the length of the embedding vector is normalized by adding an L2-normalization (L2-norm) layer to enable cosine-similarity-based learning. As a result, object identification depends only on the angle, and the distribution of the embedding vectors is limited to the surface of a sphere with a radius of 1. This study proposes training a model from which the L2-norm layer is removed, using the triplet loss so that a wide vector space beyond the surface of the unit sphere can be utilized; a novel loss function and a two-stage learning method are proposed for this purpose. The proposed method classifies the embedding vectors within a space rather than on a surface, and the model's performance is also increased. For verification, the accuracy, one-shot identification performance, and distribution of the embedding vectors are compared between the existing learning method and the proposed learning method, with verification conducted on an open set. The resulting accuracy of 97.33% for the proposed learning method is approximately 4% higher than that of the existing learning method.
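
A minimal sketch of the idea described above, not the paper's exact loss: conventional triplet training L2-normalizes embeddings so that only the angle matters, whereas dropping that step lets the embedding magnitude carry information as well. The margin value and the normalize flag are illustrative assumptions.

import numpy as np

def l2_normalize(v, eps=1e-12):
    return v / (np.linalg.norm(v, axis=-1, keepdims=True) + eps)

def triplet_loss(anchor, positive, negative, margin=0.2, normalize=True):
    """anchor/positive/negative: (batch, dim) embeddings from a backbone network."""
    if normalize:
        # conventional setup: embeddings constrained to the unit sphere
        anchor, positive, negative = map(l2_normalize, (anchor, positive, negative))
    d_ap = np.sum((anchor - positive) ** 2, axis=1)
    d_an = np.sum((anchor - negative) ** 2, axis=1)
    return np.maximum(d_ap - d_an + margin, 0.0).mean()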


2020 ◽  
Vol 2020 ◽  
pp. 1-17
Author(s):  
Yichao Ma ◽  
Zengxi Huang ◽  
Xiaoming Wang ◽  
Kai Huang

In recent years, we have witnessed the rapid development of face recognition, though it is still plagued by variations such as facial expression, pose, and occlusion. In contrast to the face, the ear has a stable 3D structure and is nearly unaffected by aging and expression changes. Both the face and the ear can be captured from a distance and in a nonintrusive manner, which makes them applicable to a wider range of application domains. Given their physiological structure and adjacent location, the ear can readily serve as a supplement to the face for biometric recognition. It has become a trend to combine the face and ear to develop nonintrusive multimodal recognition with improved accuracy, robustness, and security. However, when either the face or the ear suffers from data degeneration and the fusion rule is fixed or insufficiently flexible, a multimodal system may perform worse than a unimodal system using only the modality with the better-quality sample. Biometric quality-based adaptive fusion is an avenue to address this issue. In this paper, we present an overview of the literature on multimodal biometrics using the face and ear. All the approaches are classified into categories according to their fusion levels. Finally, we pay particular attention to an adaptive multimodal identification system, which adopts a general biometric quality assessment (BQA) method and dynamically integrates the face and ear via sparse representation. Apart from refining the BQA and the fusion weight selection, we extend the experiments for a more thorough evaluation by using more datasets and more types of image degeneration.
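
An illustrative sketch only: score-level fusion in which per-modality quality scores weight the face and ear match scores. The paper's actual method integrates the modalities via sparse representation; the quality_face and quality_ear inputs here are hypothetical placeholders for the output of a biometric quality assessment (BQA) step.

import numpy as np

def adaptive_fusion(face_scores, ear_scores, quality_face, quality_ear):
    """face_scores/ear_scores: (n_gallery,) similarity scores for one probe;
    quality_face/quality_ear: scalars in [0, 1] from a quality assessment step."""
    # weight each modality in proportion to its assessed sample quality
    w_face = quality_face / (quality_face + quality_ear + 1e-12)
    w_ear = 1.0 - w_face
    fused = w_face * np.asarray(face_scores) + w_ear * np.asarray(ear_scores)
    return int(np.argmax(fused)), fused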

