Facial image recognition for biometric authentication systems using a combination of geometrical feature points and low-level visual features

Author(s):  
M. Vasanthi ◽  
K. Seetharaman
2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Yunjun Nam ◽  
Takayuki Sato ◽  
Go Uchida ◽  
Ekaterina Malakhova ◽  
Shimon Ullman ◽  
...  

Humans recognize individual faces regardless of variation in the facial view. The view-tuned face neurons in the inferior temporal (IT) cortex are regarded as the neural substrate for view-invariant face recognition. This study approximated the visual features encoded by these neurons as combinations of local orientations and colors originating from natural image fragments. The resultant features reproduced the preference of these neurons for particular facial views. We also found that faces of one identity were separable from the faces of other identities in a space where each axis represented one of these features. These results suggest that view-invariant face representation is established by combining view-sensitive visual features. The face representation with these features suggests that, with respect to view-invariant face representation, the seemingly complex and deeply layered ventral visual pathway can be approximated by a shallow network, composed of layers of low-level processing for local orientations and colors (V1/V2-level) and layers that detect particular sets of low-level elements derived from natural image fragments (IT-level).
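A minimal sketch (in Python, not the authors' code) of the two-stage approximation described in the abstract: V1/V2-level filtering for local orientations and colors, followed by an IT-level unit that scores how well an image window matches a template derived from a natural image fragment. The filter parameters and the linear matching rule are assumptions for illustration only.

```python
# Sketch of the shallow two-stage approximation: low-level orientation/color maps,
# then fragment-template matching. Parameters are illustrative, not from the paper.
import numpy as np
from scipy.ndimage import convolve

def gabor_kernel(theta, size=9, sigma=2.5, freq=0.25):
    """Oriented Gabor filter approximating a V1 orientation-tuned unit."""
    ax = np.arange(size) - size // 2
    y, x = np.meshgrid(ax, ax, indexing="ij")
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    return np.exp(-(xr**2 + yr**2) / (2 * sigma**2)) * np.cos(2 * np.pi * freq * xr)

def low_level_features(rgb):
    """V1/V2-level stage: local orientation energies (4 orientations) plus color planes."""
    gray = rgb.mean(axis=2)
    maps = [np.abs(convolve(gray, gabor_kernel(t)))
            for t in (0, np.pi / 4, np.pi / 2, 3 * np.pi / 4)]
    return np.dstack(maps + [rgb[..., c] for c in range(3)])  # H x W x 7

def fragment_detector(feature_stack, template, top_left):
    """IT-level stage: similarity between a natural-image-fragment template and
    the low-level features inside one image window (simple linear score)."""
    h, w, _ = template.shape
    r, c = top_left
    window = feature_stack[r:r + h, c:c + w, :]
    return float((window * template).sum())
```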


Information ◽  
2021 ◽  
Vol 12 (1) ◽  
pp. 19
Author(s):  
Alexey Semenkov ◽  
Dmitry Bragin ◽  
Yakov Usoltsev ◽  
Anton Konev ◽  
Evgeny Kostuchenko

Modern facial recognition algorithms make it possible to identify system users by their appearance with a high level of accuracy. In such systems, an image of the user’s face is converted to parameters that are later used in the recognition process. These parameters can also serve as input data for pseudo-random number generators; however, how close the sequence produced by such a generator comes to a truly random one is questionable. This paper proposes a system that authenticates users by their face and generates pseudo-random values from the facial image, which later serve to generate an encryption key. The random value generator was tested with the NIST Statistical Test Suite, and the image recognition subsystem was tested under various image capture conditions. The results show a satisfactory level of randomness, with an average NIST test score of 0.47 and 95% recognition accuracy for the system as a whole.
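A minimal sketch, under stated assumptions, of how facial parameters could feed a pseudo-random generator that in turn yields an encryption key. The quantization step and the SHA-256-in-counter-mode expansion are illustrative choices, not the construction described in the paper.

```python
# Sketch only: derive pseudo-random bytes (and a symmetric key) from facial
# feature parameters. Quantization level and hash construction are assumptions.
import hashlib
import numpy as np

def seed_from_face_params(embedding, decimals=1):
    """Quantize the face embedding so small capture-to-capture noise does not
    change the seed, then hash the quantized values."""
    q = np.round(np.asarray(embedding, dtype=np.float64), decimals)
    return hashlib.sha256(q.tobytes()).digest()

def pseudo_random_bytes(seed, n_bytes):
    """Expand the seed into a byte stream by hashing seed || counter."""
    out = bytearray()
    counter = 0
    while len(out) < n_bytes:
        out += hashlib.sha256(seed + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(out[:n_bytes])

# Example: a 256-bit key from a hypothetical 128-dimensional face embedding.
embedding = np.random.default_rng(0).normal(size=128)  # stand-in for real parameters
key = pseudo_random_bytes(seed_from_face_params(embedding), 32)
```

In practice a biometric key-derivation scheme also has to tolerate capture noise; the coarse rounding above is only a stand-in for whatever stabilization the paper's system applies before testing the output with the NIST suite.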


2018 ◽  
Vol 78 (11) ◽  
pp. 14799-14822 ◽  
Author(s):  
Soumendu Chakraborty ◽  
Satish Kumar Singh ◽  
Pavan Chakraborty

2021 ◽  
Author(s):  
Maryam Nematollahi Arani

Object recognition has become a central topic in computer vision applications such as image search, robotics, and vehicle safety systems. However, it is a challenging task due to the limited discriminative power of low-level visual features in describing the considerably diverse range of high-level visual semantics of objects. The semantic gap between low-level visual features and high-level concepts is a bottleneck in most systems, and new content analysis models need to be developed to bridge it. In this thesis, algorithms based on conditional random fields (CRF), from the class of probabilistic graphical models, are developed to tackle the problem of multiclass image labeling for object recognition. Image labeling assigns a specific semantic category, from a predefined set of object classes, to each pixel in the image. By capturing spatial interactions of visual concepts well, CRF modeling has proved to be a successful tool for image labeling. This thesis proposes novel approaches to strengthening CRF modeling for robust image labeling. Our primary contributions are twofold. First, to better represent the feature distributions of CRF potentials, new feature functions based on generalized Gaussian mixture models (GGMM) are designed and their efficacy is investigated. Thanks to its shape parameter, a GGMM can properly fit the multi-modal and skewed distributions of data found in natural images. The new model proves more successful than Gaussian and Laplacian mixture models, and it also outperforms a deep neural network model on the Corel image set by 1% in accuracy. Second, we apply scene-level contextual information to integrate the global visual semantics of the image with the pixel-wise dense inference of a fully connected CRF, in order to preserve small objects of foreground classes and to make dense inference robust to initial misclassifications by the unary classifier. The proposed inference algorithm factorizes the joint probability of the labeling configuration and the image scene type to obtain prediction update equations both for labeling individual image pixels and for the overall scene type of the image. The proposed context-based dense CRF model outperforms the conventional dense CRF model by about 2% in labeling accuracy on the MSRC image set and by 4% on the SIFT Flow image set, and it obtains the highest scene classification rate of 86% on the MSRC dataset.
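A minimal sketch of the generalized Gaussian density with shape parameter beta that underlies the GGMM feature functions (beta = 2 recovers a Gaussian, beta = 1 a Laplacian), used here as a CRF unary energy. The mixture weights and parameters in the example are illustrative, not values fitted in the thesis.

```python
# Standard univariate generalized Gaussian density and a GGMM negative
# log-likelihood used as a CRF unary energy. Parameters are illustrative.
import numpy as np
from scipy.special import gamma as gamma_fn

def generalized_gaussian_pdf(x, mu, alpha, beta):
    """p(x) = beta / (2 * alpha * Gamma(1/beta)) * exp(-(|x - mu| / alpha)^beta)."""
    coef = beta / (2.0 * alpha * gamma_fn(1.0 / beta))
    return coef * np.exp(-(np.abs(x - mu) / alpha) ** beta)

def ggmm_unary_energy(x, weights, mus, alphas, betas):
    """Negative log-likelihood of x under a GGMM, usable as a CRF unary potential."""
    mix = sum(w * generalized_gaussian_pdf(x, m, a, b)
              for w, m, a, b in zip(weights, mus, alphas, betas))
    return -np.log(mix + 1e-12)

# Example: a two-component GGMM for one feature dimension of one object class.
energy = ggmm_unary_energy(0.3, weights=[0.6, 0.4],
                           mus=[0.0, 1.0], alphas=[0.5, 0.8], betas=[1.5, 2.0])
```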


Author(s):  
Anne H.H. Ngu ◽  
Jialie Shen ◽  
John Shepherd

The optimized distance-based access methods currently available for multimedia databases rest on two major assumptions: a suitable distance function is known a priori, and the dimensionality of image features is low. The standard approach to building image databases is to represent images as vectors of low-level visual features and to perform retrieval on these vectors. However, due to the large gap between semantic notions and low-level visual content, it is extremely difficult to define a distance function that accurately captures the similarity of images as perceived by humans. Furthermore, popular dimension reduction methods suffer either from an inability to capture the nonlinear correlations among raw data or from very expensive training costs. To address these problems, this chapter introduces a new indexing technique called Combining Multiple Visual Features (CMVF) that integrates multiple visual features to achieve better query effectiveness. Our approach produces low-dimensional image feature vectors that include not only low-level visual properties but also high-level semantic properties. The hybrid architecture yields feature vectors that capture the salient properties of images yet are small enough to allow existing high-dimensional indexing methods to provide efficient and effective retrieval.
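A minimal sketch of the general idea: concatenate several per-image visual feature vectors and project the result to a low-dimensional vector suitable for an existing high-dimensional index. The linear PCA projection below is only a stand-in for CMVF's hybrid reduction and does not reproduce its semantic, class-informed component.

```python
# Sketch only: fuse multiple visual feature blocks per image, then reduce to a
# compact vector an existing high-dimensional index can store. PCA stands in
# for CMVF's hybrid reduction step.
import numpy as np

def combine_features(*feature_blocks):
    """Concatenate per-image feature matrices (each n_images x d_i) into one matrix."""
    return np.hstack(feature_blocks)

def fit_linear_projection(X, out_dim):
    """PCA via SVD: returns (mean, components) for projecting to out_dim dimensions."""
    mean = X.mean(axis=0)
    _, _, vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, vt[:out_dim]

def project(X, mean, components):
    return (X - mean) @ components.T

# Example with random stand-in features: 100 images, color (64-d) + texture (32-d).
rng = np.random.default_rng(0)
color, texture = rng.normal(size=(100, 64)), rng.normal(size=(100, 32))
X = combine_features(color, texture)
mean, comps = fit_linear_projection(X, out_dim=16)
low_dim_vectors = project(X, mean, comps)  # 100 x 16, ready for indexing
```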

