Face Recognition Technology

Author(s):  
Sanjay K. Singh ◽  
Mayank Vatsa ◽  
Richa Singh ◽  
K.K. Shukla ◽  
Lokesh R. Boregowda

Face recognition is one of the most widely studied problems in computer vision, and it is widely used in applications related to security and human-computer interfaces. Two reasons for this are the wide range of commercial and law enforcement applications and the availability of feasible technologies. This chapter explains the various biometric systems and the commonly used face recognition techniques, namely the feature-based, eigenface-based, line-based, and Local Feature Analysis approaches, along with their results. A performance comparison of these algorithms is also given.
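The eigenface approach mentioned in the chapter projects each face image, flattened to a vector, onto the principal components of the training set and recognises a probe by nearest neighbour in that subspace. A minimal, illustrative sketch (toy 2x2 "images", power iteration in place of a full eigensolver, names and data hypothetical) is:

```python
# Minimal eigenface sketch (illustrative, pure Python): images are
# flattened vectors; PCA's top component is found by power iteration,
# and recognition is nearest neighbour in the projected subspace.

def mean_vec(rows):
    n = len(rows)
    return [sum(r[i] for r in rows) / n for i in range(len(rows[0]))]

def covariance(rows, mu):
    d = len(mu)
    c = [[0.0] * d for _ in range(d)]
    for r in rows:
        diff = [r[i] - mu[i] for i in range(d)]
        for i in range(d):
            for j in range(d):
                c[i][j] += diff[i] * diff[j] / len(rows)
    return c

def power_iteration(c, steps=200):
    d = len(c)
    v = [1.0] * d
    for _ in range(steps):
        w = [sum(c[i][j] * v[j] for j in range(d)) for i in range(d)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return v  # dominant eigenvector = the first "eigenface"

def project(x, mu, v):
    return sum((x[i] - mu[i]) * v[i] for i in range(len(x)))

# Toy 2x2 "faces" flattened to length-4 vectors, one per subject.
gallery = {"alice": [9.0, 8.0, 1.0, 1.0], "bob": [1.0, 2.0, 9.0, 9.0]}
rows = list(gallery.values())
mu = mean_vec(rows)
face = power_iteration(covariance(rows, mu))
coords = {name: project(x, mu, face) for name, x in gallery.items()}

probe = [8.5, 8.5, 2.0, 1.5]  # a new image resembling "alice"
p = project(probe, mu, face)
match = min(coords, key=lambda n: abs(coords[n] - p))
print(match)  # -> alice
```

A real system would keep several principal components and work on full-resolution images; the pipeline (mean-centre, project, nearest neighbour) is the same.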

2008 ◽  
pp. 3968-3999
Author(s):  
Sanjay K. Singh ◽  
Mayank Vatsa ◽  
Richa Singh ◽  
K. K. Shukla



2022 ◽  
Vol 31 (2) ◽  
pp. 1-32
Author(s):  
Luca Ardito ◽  
Andrea Bottino ◽  
Riccardo Coppola ◽  
Fabrizio Lamberti ◽  
Francesco Manigrasso ◽  
...  

In automated Visual GUI Testing (VGT) for Android devices, the available tools often suffer from low robustness to mobile fragmentation, leading to incorrect results when running the same tests on different devices. To mitigate these issues, we evaluate two feature matching-based approaches for widget detection in VGT scripts, which use, respectively, the complete full-screen snapshot of the application (Fullscreen) and the cropped images of its widgets (Cropped) as visual locators to match on emulated devices. Our analysis includes validating the portability of different feature-based visual locators over various apps and devices and evaluating their robustness in terms of cross-device portability and correctly executed interactions. We assessed our results through a comparison with two state-of-the-art tools, EyeAutomate and Sikuli. Despite a limited increase in the computational burden, our Fullscreen approach outperformed the state-of-the-art tools in terms of correctly identified locators across a wide range of devices and led to a 30% increase in passing tests. Our work shows that the dependability of VGT tools can be improved by bridging the testing and computer vision communities. This connection enables the design of algorithms targeted to domain-specific needs, and thus inherently more usable and robust.
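The feature matching that underlies both locator strategies pairs descriptors extracted from the widget (or screen) image with descriptors from the device under test, keeping only unambiguous matches. A toy sketch of the core step, Lowe's ratio test over descriptor distances (real VGT tools would extract ORB or SIFT descriptors with OpenCV; the hand-made vectors here are purely illustrative), is:

```python
# Toy descriptor matching with Lowe's ratio test, the core of
# feature-based widget localisation: a query descriptor is matched
# only when its best candidate is clearly better than the second best.

def dist(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def ratio_match(query, candidates, ratio=0.75):
    matches = []
    for qi, q in enumerate(query):
        ds = sorted((dist(q, c), ci) for ci, c in enumerate(candidates))
        best, second = ds[0], ds[1]
        if best[0] < ratio * second[0]:  # unambiguous winner only
            matches.append((qi, best[1]))
    return matches

# Descriptors from a cropped widget (query) and from a full screenshot.
widget = [[1.0, 0.0], [0.0, 1.0]]
screen = [[0.9, 0.1], [5.0, 5.0], [0.1, 0.9], [0.5, 0.5]]
print(ratio_match(widget, screen))  # -> [(0, 0), (1, 2)]
```

Clustering the matched keypoint positions would then give the widget's location on the target device's screen.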


This book presents computational interaction as an approach to explaining and enhancing the interaction between humans and information technology. Computational interaction applies abstraction, automation, and analysis to inform our understanding of the structure of interaction and also to inform the design of the software that drives new and exciting human-computer interfaces. The methods of computational interaction allow, for example, designers to identify user interfaces that are optimal against some objective criteria. They also allow software engineers to build interactive systems that adapt their behaviour to better suit individual capacities and preferences. Embedded in an iterative design process, computational interaction has the potential to complement human strengths and provide methods for generating inspiring and elegant designs. Computational interaction does not exclude the messy and complicated behaviour of humans; rather, it embraces it by, for example, using models that are sensitive to uncertainty and that capture subtle variations between individual users. It also promotes the idea that there are many aspects of interaction that can be augmented by algorithms. This book introduces computational interaction design to the reader by exploring a wide range of computational interaction techniques, strategies and methods. It explains how techniques such as optimisation, economic modelling, machine learning, control theory, formal methods, cognitive models and statistical language processing can be used to model interaction and design more expressive, efficient and versatile interaction.


2021 ◽  
Vol 6 ◽  
pp. 93-101
Author(s):  
Andrey Litvynchuk ◽  
Lesia Baranovska

Face recognition is one of the main tasks of computer vision, relevant both for its practical significance and for the great interest it attracts from a wide range of scientists. It has many applications, which has led to a huge amount of research in this area. Although research in the field has been going on since the beginning of computer vision, good results could be achieved only with the help of convolutional neural networks. In this work, a comparative analysis of pre-convolutional facial recognition methods was performed. A metric learning approach, augmentations, and learning rate schedulers are considered. A series of experiments and a comparative analysis of the considered methods for improving convolutional neural networks were performed, and as a result a universal algorithm for training the face recognition model was obtained. SE-ResNet50 was used as the only neural network in the experiments. Metric learning is a method by which good accuracy in face recognition can be achieved. Overfitting is a major problem for neural networks, in particular because they have too many parameters and usually not enough data to guarantee the generalization of the model. Additional data labeling can be time-consuming and expensive, so augmentation is used instead: augmentations artificially enlarge the training dataset, and, as expected, this method improved the results relative to the original experiment in all cases. Stronger and more aggressive forms of augmentation led to better results in this work. As expected, the best learning rate scheduler was the cosine scheduler with warm-ups and restarts. This schedule has few parameters, so it is also easy to use. In general, using these different approaches, we were able to obtain an accuracy of 93.5%, which is 22% better than the baseline experiment. In future studies, it is planned to improve not only the face recognition model but also detection; the accuracy of face recognition directly depends on the quality of face detection.
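The cosine scheduler with warm-ups and restarts described above can be written in a few lines: the learning rate ramps up linearly for a few steps, then follows a cosine decay that periodically restarts from the base rate. A small sketch (the base rate, warm-up length and cycle length below are illustrative, not the paper's settings):

```python
import math

# Cosine learning-rate schedule with linear warm-up and restarts:
# warm up for `warmup` steps, then cosine-decay from base_lr to min_lr,
# restarting from base_lr every `cycle` steps.

def cosine_lr(step, base_lr=0.1, min_lr=0.001, warmup=5, cycle=20):
    if step < warmup:
        return base_lr * (step + 1) / warmup          # linear warm-up
    t = (step - warmup) % cycle                       # position in cycle
    return min_lr + 0.5 * (base_lr - min_lr) * (1 + math.cos(math.pi * t / cycle))

lrs = [cosine_lr(s) for s in range(50)]
# step 0: mid warm-up; step 4: warm-up peak; step 25: restart to base_lr
print(round(lrs[0], 4), round(lrs[4], 4), round(lrs[25], 4))
```

Frameworks such as PyTorch ship an equivalent scheduler (`CosineAnnealingWarmRestarts`); the pure-Python version above just makes the shape of the curve explicit.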


Author(s):  
M. PARISA BEHAM ◽  
S. MOHAMED MANSOOR ROOMI

Face recognition has become more significant and relevant in recent years owing to its potential applications. Since faces are highly dynamic and pose many issues and challenges, researchers in the domains of pattern recognition, computer vision and artificial intelligence have proposed many solutions to reduce such difficulties and improve robustness and recognition accuracy. As many approaches have been proposed, efforts have also been made to provide extensive surveys of the methods developed over the years. The objective of this paper is to survey the face recognition papers that appeared in the literature over the past decade under all severe conditions that were not discussed in previous surveys, and to categorize them into meaningful approaches, viz. appearance based, feature based and soft computing based. A comparative study of the merits and demerits of these approaches is also presented.


2021 ◽  
Vol 1 ◽  
pp. 87
Author(s):  
Konstantinos C. Apostolakis ◽  
Nikolaos Dimitriou ◽  
George Margetis ◽  
Stavroula Ntoa ◽  
Dimitrios Tzovaras ◽  
...  

Background: Augmented reality (AR) and artificial intelligence (AI) are highly disruptive technologies that have revolutionised practices in a wide range of domains. Their potential has not gone unnoticed in the security sector, with several law enforcement agencies (LEAs) employing AI applications in their daily operations for forensics and surveillance. In this paper, we present the DARLENE ecosystem, which aims to bridge existing gaps in applying AR and AI technologies for rapid tactical decision-making in situ with minimal error margin, thus enhancing LEAs’ efficiency and Situational Awareness (SA). Methods: DARLENE incorporates novel AI techniques for computer vision tasks such as activity recognition and pose estimation, while also building an AR framework for visualization of the inferred results via dynamic content adaptation according to each individual officer’s stress level and current context. The concept has been validated with end-users through co-creation workshops, while the decision-making mechanism for enhancing LEAs’ SA has been assessed with experts. Regarding the computer vision components, preliminary tests of the instance segmentation method for human and object detection have been conducted on a subset of videos from the RWF-2000 dataset for violence detection, which have also been used to test a human pose estimation method that has so far exhibited impressive results and will constitute the basis of further developments in DARLENE. Results: Evaluation results highlight that target users are positive towards the adoption of the proposed solution in field operations, and that the SA decision-making mechanism produces highly acceptable outcomes. Evaluation of the computer vision components yielded promising results and identified opportunities for improvement. 
Conclusions: This work provides the context of the DARLENE ecosystem and presents the DARLENE architecture, analyses its individual technologies, and demonstrates preliminary results, which are positive both in terms of technological achievements and user acceptance of the proposed solution.


Technologies ◽  
2020 ◽  
Vol 9 (1) ◽  
pp. 2
Author(s):  
Ashish Jaiswal ◽  
Ashwin Ramesh Babu ◽  
Mohammad Zaki Zadeh ◽  
Debapriya Banerjee ◽  
Fillia Makedon

Self-supervised learning has gained popularity because of its ability to avoid the cost of annotating large-scale datasets. It is capable of adopting self-defined pseudo-labels as supervision and using the learned representations for several downstream tasks. Specifically, contrastive learning has recently become a dominant component in self-supervised learning for computer vision, natural language processing (NLP), and other domains. It aims at embedding augmented versions of the same sample close to each other while trying to push away embeddings from different samples. This paper provides an extensive review of self-supervised methods that follow the contrastive approach. The work explains commonly used pretext tasks in a contrastive learning setup, followed by the different architectures that have been proposed so far. Next, we present a performance comparison of different methods for multiple downstream tasks such as image classification, object detection, and action recognition. Finally, we conclude with the limitations of the current methods and the need for further techniques and future directions to make meaningful progress.
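The contrastive objective described above (pull augmented views of the same sample together, push other samples apart) is typically an InfoNCE-style loss: the negative log-softmax of the positive pair's similarity against all negatives. A toy sketch, with hand-made 2-D embeddings and an illustrative temperature rather than any specific paper's setup:

```python
import math

# InfoNCE-style contrastive loss: softmax over temperature-scaled
# similarities, with the positive pair in slot 0. Lower loss means the
# anchor sits closer to its augmented view than to the negatives.

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def info_nce(anchor, positive, negatives, tau=0.1):
    sims = [dot(anchor, positive) / tau] + [dot(anchor, n) / tau for n in negatives]
    m = max(sims)  # stabilise the log-sum-exp
    log_z = m + math.log(sum(math.exp(s - m) for s in sims))
    return -(sims[0] - log_z)  # -log softmax of the positive pair

anchor = [1.0, 0.0]
close = [0.9, 0.1]                # augmented view of the same sample
far = [[0.0, 1.0], [-1.0, 0.0]]   # other samples in the batch

# Treating the true augmented view as the positive gives a much lower
# loss than treating an unrelated sample as the positive:
print(info_nce(anchor, close, far) < info_nce(anchor, far[0], [close] + far[1:]))
```

In practice the embeddings come from an encoder network and are L2-normalised, so the dot product is a cosine similarity; the loss itself is unchanged.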

