Detecting Morphed Face Attacks Using Residual Noise from Deep Multi-scale Context Aggregation Network

2020 ◽ 
Author(s):  
Sushma Venkatesh ◽  
Raghavendra Ramachandra ◽  
Kiran Raja ◽  
Luuk J. Spreeuwers ◽  
Raymond Veldhuis ◽  
...  

Along with the deployment of Face Recognition Systems (FRS), concerns have been raised about the vulnerability of these systems to various attacks, including morphing attacks. A morphed face attack combines two different face images through a morphing process to obtain an attack image that is sufficiently similar to both contributing data subjects. The resulting morphed image can be successfully verified against both subjects, visually (by a human expert) and by a commercial FRS. The face morphing attack therefore poses a severe security risk to the e-passport issuance process and to applications like border control, unless such attacks are detected and mitigated. In this work, we propose a new method to reliably detect a morphed face attack using a newly designed denoising framework. To this end, we design and introduce a new deep Multi-scale Context Aggregation Network (MS-CAN) to obtain denoised images, which are subsequently used to determine whether an image is morphed. Extensive experiments are carried out on three different morphed face image datasets. The Morphing Attack Detection (MAD) performance of the proposed method is also benchmarked against 14 different state-of-the-art techniques using the ISO/IEC 30107-3 evaluation metrics. Based on the obtained quantitative results, the proposed method shows the best performance on all three datasets as well as in cross-dataset experiments.
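The core idea described in the abstract, denoise the image, take the residual (image minus denoised image), and classify that residual as morphed or bona fide, can be illustrated with a minimal PyTorch sketch. The tiny denoiser and classifier below are placeholders, not the paper's MS-CAN architecture.

```python
import torch
import torch.nn as nn

class TinyDenoiser(nn.Module):
    """Placeholder denoiser standing in for the paper's MS-CAN."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=2, dilation=2), nn.ReLU(),  # dilated conv for wider context
            nn.Conv2d(32, 3, 3, padding=1),
        )

    def forward(self, x):
        return self.net(x)

class ResidualMADClassifier(nn.Module):
    """Classifies morph vs. bona fide from the residual noise (image - denoised image)."""
    def __init__(self):
        super().__init__()
        self.denoiser = TinyDenoiser()
        self.head = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(16, 2),
        )

    def forward(self, x):
        residual = x - self.denoiser(x)  # residual noise is where morphing artifacts are expected
        return self.head(residual)

if __name__ == "__main__":
    model = ResidualMADClassifier()
    logits = model(torch.randn(4, 3, 224, 224))  # dummy batch of face crops
    print(logits.shape)  # torch.Size([4, 2])
```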


2021 ◽  
Vol 11 (7) ◽  
pp. 3207
Author(s):  
Erion-Vasilis Pikoulis ◽  
Zafeiria-Marina Ioannou ◽  
Mersini Paschou ◽  
Evangelos Sakkopoulos

Face morphing poses a serious threat to Automatic Border Control (ABC) and Face Recognition Systems (FRS) in general. The aim of this paper is to present a qualitative assessment of the morphing attack issue and the challenges it entails, highlighting both the technological and human aspects of the problem. After presenting the face morphing attack scenario, the paper provides an overview of the relevant bibliography and recent advances in two central directions. First, the morphing of face images is outlined, with a particular focus on the three main steps involved in the process, namely landmark detection, face alignment, and blending. Second, the detection of morphing attacks is presented through the prism of the so-called online and offline detection scenarios and whether the proposed techniques employ handcrafted features, using classical methods, or automatically generated features, using deep-learning-based methods. The paper then presents the evaluation metrics employed in the corresponding bibliography and concludes with a discussion of open challenges that need to be addressed to further advance the automatic detection of morphing attacks. Despite the progress being made, the general consensus of the research community is that significant effort and resources are needed in the near future to mitigate the issue, especially towards the creation of datasets capturing the full extent of the problem at hand and the availability of reference evaluation procedures for comparing novel automatic attack detection algorithms.
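The blending step of the morphing pipeline outlined above can be sketched as follows, assuming the two contributing face images have already been landmark-detected and warped to a common geometry (those steps are omitted here); the file names are placeholders.

```python
import cv2
import numpy as np

def blend_aligned_faces(img_a, img_b, alpha=0.5):
    """Pixel-wise alpha blend of two pre-aligned face images (the final step of a landmark-based morph)."""
    if img_a.shape != img_b.shape:
        img_b = cv2.resize(img_b, (img_a.shape[1], img_a.shape[0]))
    blended = cv2.addWeighted(img_a.astype(np.float32), alpha,
                              img_b.astype(np.float32), 1.0 - alpha, 0.0)
    return blended.astype(np.uint8)

if __name__ == "__main__":
    # Placeholder file names; a full morph also warps both images to averaged landmark positions first.
    face_a = cv2.imread("subject_a.png")
    face_b = cv2.imread("subject_b.png")
    morph = blend_aligned_faces(face_a, face_b, alpha=0.5)
    cv2.imwrite("morph.png", morph)
```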


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Takao Fukui ◽  
Mrinmoy Chakrabarty ◽  
Misako Sano ◽  
Ari Tanaka ◽  
Mayuko Suzuki ◽  
...  

Eye movements toward sequentially presented face images, with or without gaze cues, were recorded to investigate whether individuals with ASD, in comparison with their typically developing (TD) peers, could prospectively perform the task according to gaze cues. Line-drawn face images were presented sequentially for one second each on a laptop PC display, with the face images shifting side-to-side and up-and-down. In the gaze cue condition, the gaze of the face image was directed to the position where the next face would be presented. Although the participants with ASD looked less at the eye area of the face image than their TD peers, they performed comparably smooth gaze shifts to the gaze cue of the face image in the gaze cue condition. This appropriate gaze shift in the ASD group was more evident in the second half of trials than in the first half, as revealed by the mean proportion of fixation time in the eye area relative to valid gaze data in the early phase (during face image presentation) and the time to first fixation on the eye area. These results suggest that individuals with ASD may benefit from the short-period trial experiment by enhancing their use of gaze cues.


2021 ◽  
Author(s):  
Yongtai Liu ◽  
Zhijun Yin ◽  
Zhiyu Wan ◽  
Chao Yan ◽  
Weiyi Xia ◽  
...  

BACKGROUND: As direct-to-consumer genetic testing (DTC-GT) services have grown in popularity, the public has increasingly relied on online forums to discuss and share test results. Initially, users did so under a pseudonym, but more recently they have included face images when discussing DTC-GT results. When these images truthfully represent a user, they reveal the identity of the corresponding individual. Various studies have shown that sharing images on social media tends to elicit more replies. However, users who do this clearly forgo their privacy.

OBJECTIVE: This study aimed to investigate the face image sharing behavior of DTC-GT users in an online environment and to determine whether there is an association between face image sharing and the attention received from others.

METHODS: This study focused on r/23andme, a subreddit dedicated to discussing DTC-GT results and their implications. We applied natural language processing to infer the themes associated with posts that included a face image. We applied a regression analysis to learn the association between the attention that a post received, in terms of the number of comments and karma score (defined as the number of upvotes minus the number of downvotes), and whether the post contains a face image.

RESULTS: We collected over 15,000 posts from the r/23andme subreddit published between 2012 and 2020. Face image posting began in late 2019 and grew rapidly, with over 800 individuals revealing their faces by early 2020. The topics in posts including a face were primarily about sharing or discussing ancestry composition and sharing family reunion photos with relatives discovered via DTC-GT. On average, posts including a face received 60% (5/8) more comments than other posts, and these posts had karma scores 2.4 times higher than other posts.

CONCLUSIONS: DTC-GT consumers in the r/23andme subreddit are increasingly posting face images and testing reports on social platforms. The association between face image posting and a greater level of attention suggests that people are forgoing their privacy in exchange for attention from others. To mitigate the risk of face image posting, platforms, or at least subreddit organizers, should inform users about the consequences of such behavior for identity disclosure.
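The regression analysis described in the METHODS paragraph can be sketched with an ordinary least squares fit in statsmodels; the data frame, column names, and control covariate below are illustrative assumptions, not the study's actual data or model specification.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Toy stand-in for the collected r/23andme posts; columns are assumed for illustration.
posts = pd.DataFrame({
    "num_comments": [3, 12, 5, 20, 1, 15],
    "karma":        [10, 55, 8, 90, 4, 60],
    "has_face":     [0, 1, 0, 1, 0, 1],    # 1 if the post includes a face image
    "post_length":  [120, 80, 200, 60, 150, 90],
})

# Association between attention (comments / karma) and face-image posting, controlling for post length.
comments_model = smf.ols("num_comments ~ has_face + post_length", data=posts).fit()
karma_model = smf.ols("karma ~ has_face + post_length", data=posts).fit()

print(comments_model.params["has_face"])  # estimated additional comments for face-image posts
print(karma_model.params["has_face"])     # estimated additional karma for face-image posts
```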


Sensors ◽  
2020 ◽  
Vol 20 (7) ◽  
pp. 1810
Author(s):  
Dat Tien Nguyen ◽  
Tuyen Danh Pham ◽  
Ganbayar Batchuluun ◽  
Kyoung Jun Noh ◽  
Kang Ryoung Park

Although face-based biometric recognition systems have been widely used in many applications, this type of recognition method is still vulnerable to presentation attacks, which use fake samples to deceive the recognition system. To overcome this problem, presentation attack detection (PAD) methods for face recognition systems (face-PAD), which aim to classify real and presentation attack face images before performing a recognition task, have been developed. However, the performance of PAD systems is limited and biased due to the lack of presentation attack images for training PAD systems. In this paper, we propose a method for artificially generating presentation attack face images by learning the characteristics of real and presentation attack images from a few captured images. As a result, our proposed method helps save time in collecting presentation attack samples for training PAD systems and can possibly enhance their performance. Our study is the first attempt to generate PA face images for a PAD system based on the CycleGAN network, a deep-learning-based framework for image generation. In addition, we propose a new measurement method to evaluate the quality of generated PA images based on a face-PAD system. Through experiments with two public datasets (CASIA and Replay-mobile), we show that the generated face images can capture the characteristics of presentation attack images, making them usable as captured presentation attack samples for PAD system training.
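The CycleGAN-style generation mentioned above rests on a cycle-consistency constraint: translating a bona fide image into the presentation-attack domain and back should recover the original. A minimal sketch of that constraint is shown below, with tiny placeholder generators rather than the architecture used in the paper.

```python
import torch
import torch.nn as nn

def tiny_generator():
    """Placeholder image-to-image generator (real <-> presentation-attack domain)."""
    return nn.Sequential(
        nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
        nn.Conv2d(32, 3, 3, padding=1), nn.Tanh(),
    )

G_real2attack = tiny_generator()   # translates bona fide faces into attack-style faces
G_attack2real = tiny_generator()   # translates attack-style faces back to bona fide style

real_batch = torch.rand(4, 3, 128, 128) * 2 - 1   # dummy face images in [-1, 1]

fake_attack = G_real2attack(real_batch)
reconstructed = G_attack2real(fake_attack)

# Cycle-consistency loss: a real image translated to the attack domain and back should match itself.
cycle_loss = nn.functional.l1_loss(reconstructed, real_batch)
cycle_loss.backward()   # in full training this is combined with adversarial losses on both domains
print(float(cycle_loss))
```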


2011 ◽  
pp. 5-44 ◽  
Author(s):  
Daijin Kim ◽  
Jaewon Sung

Face detection is the most fundamental step in research on image-based automated face analysis, such as face tracking, face recognition, face authentication, facial expression recognition, and facial gesture recognition. When a novel face image is given, we must know where the face is located and how large its scale is, in order to limit our attention to the face patch in the image and normalize the scale and orientation of the face patch. Usually, the face detection results are not stable; the scale of the detected face rectangle can be larger or smaller than that of the real face in the image. Therefore, many researchers use eye detectors to obtain stable, normalized face images. Because the eyes have salient patterns in the human face image, they can be located reliably and used for face image normalization. Eye detection becomes even more important when we want to apply model-based face image analysis approaches.
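A minimal sketch of the detect-then-normalize pipeline described above, using OpenCV's stock Haar cascades for face and eye detection and a rotation that levels the eye line; this is a common normalization choice, not necessarily the authors' exact procedure, and the input path is a placeholder.

```python
import cv2
import numpy as np

face_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_eye.xml")

def normalize_by_eyes(gray):
    """Detect a face, locate both eyes, and rotate the face patch so the eye line is horizontal."""
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None
    x, y, w, h = faces[0]
    roi = gray[y:y + h, x:x + w]
    eyes = eye_cascade.detectMultiScale(roi)
    if len(eyes) < 2:
        return roi  # fall back to the raw (less stable) face rectangle
    # Take the two largest detections as the eyes and order them left to right.
    eyes = sorted(eyes, key=lambda e: e[2] * e[3], reverse=True)[:2]
    (ex1, ey1, ew1, eh1), (ex2, ey2, ew2, eh2) = sorted(eyes, key=lambda e: e[0])
    left = (ex1 + ew1 / 2, ey1 + eh1 / 2)
    right = (ex2 + ew2 / 2, ey2 + eh2 / 2)
    angle = np.degrees(np.arctan2(right[1] - left[1], right[0] - left[0]))
    center = ((left[0] + right[0]) / 2, (left[1] + right[1]) / 2)
    rot = cv2.getRotationMatrix2D(center, angle, 1.0)
    return cv2.warpAffine(roi, rot, (int(w), int(h)))

if __name__ == "__main__":
    image = cv2.imread("face.jpg", cv2.IMREAD_GRAYSCALE)  # placeholder path
    aligned = normalize_by_eyes(image)
```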


Author(s):  
Guojun Lin ◽  
Meng Yang ◽  
Linlin Shen ◽  
Mingzhong Yang ◽  
Mei Xie

For face recognition, conventional dictionary learning (DL) methods have some disadvantages. First, face images of the same person vary with facial expression, pose, illumination, and disguise, so it is hard to obtain a robust dictionary for face recognition. Second, they do not cover important components (e.g., particularity and disturbance) completely, which limits their performance. In this paper, we propose a novel robust and discriminative DL (RDDL) model. The proposed model uses the sample diversity of the same subject's face images to learn a robust dictionary, which includes class-specific dictionary atoms and disturbance dictionary atoms. These atoms can represent the data from different classes well. Discriminative regularizations on the dictionary and the representation coefficients are used to exploit discriminative information, which effectively improves the classification capability of the dictionary. The proposed RDDL is extensively evaluated on benchmark face image databases, and it shows superior performance to many state-of-the-art dictionary learning methods for face recognition.
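For reference, the sketch below shows the standard sparse-representation baseline that dictionary-learning classifiers build upon: learn a per-class dictionary and classify a test sample by its reconstruction residual. It uses scikit-learn with random toy data and is not the RDDL model itself.

```python
import numpy as np
from sklearn.decomposition import DictionaryLearning, sparse_encode

rng = np.random.default_rng(0)

# Toy data: 2 classes, 20 training "face" vectors each, 64-dimensional.
train = {c: rng.normal(size=(20, 64)) + 3 * c for c in (0, 1)}
test_sample = rng.normal(size=(1, 64)) + 3  # generated closer to class 1

# Learn one small dictionary per class (generic DL objective: reconstruction error + sparsity).
dictionaries = {}
for c, X in train.items():
    dl = DictionaryLearning(n_components=10, alpha=1.0, max_iter=200, random_state=0)
    dl.fit(X)
    dictionaries[c] = dl.components_

# Classify by the smallest reconstruction residual under each class dictionary.
residuals = {}
for c, D in dictionaries.items():
    codes = sparse_encode(test_sample, D, alpha=1.0)
    residuals[c] = np.linalg.norm(test_sample - codes @ D)

print(min(residuals, key=residuals.get))  # predicted class
```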


2020 ◽  
Vol 2020 ◽  
pp. 1-11
Author(s):  
Tongxin Wei ◽  
Qingbao Li ◽  
Jinjin Liu ◽  
Ping Zhang ◽  
Zhifeng Chen

In the process of face recognition, the acquired face data is often seriously distorted; many of the collected face images are blurred or even partially missing. Traditional image inpainting was based on structure, while currently popular inpainting methods are based on deep convolutional neural networks and generative adversarial nets. In this paper, we propose a 3D face image inpainting method based on generative adversarial nets. We identify two parallels of the vector to locate the planar positions. Compared with previous approaches, the edge information of the missing region is detected, and the edge fuzzy inpainting can achieve a better visual matching effect. This substantially boosts face recognition performance.
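A generic sketch of the GAN-based inpainting setup (a masked reconstruction loss plus an adversarial term) is given below; it is not the specific 3D, edge-aware method of this paper, and the networks are tiny placeholders.

```python
import torch
import torch.nn as nn

generator = nn.Sequential(           # placeholder inpainting generator: masked image + mask -> completed image
    nn.Conv2d(4, 32, 3, padding=1), nn.ReLU(),   # 4th input channel is the binary mask
    nn.Conv2d(32, 3, 3, padding=1), nn.Tanh(),
)
discriminator = nn.Sequential(       # placeholder discriminator: real vs. inpainted face
    nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, 1),
)

faces = torch.rand(4, 3, 128, 128) * 2 - 1        # dummy ground-truth faces in [-1, 1]
mask = torch.ones(4, 1, 128, 128)
mask[:, :, 40:90, 40:90] = 0                      # zero out a square region to simulate missing pixels

damaged = faces * mask
completed = generator(torch.cat([damaged, mask], dim=1))

# Generator objective: reconstruct the missing pixels and fool the discriminator.
recon_loss = nn.functional.l1_loss(completed * (1 - mask), faces * (1 - mask))
adv_loss = nn.functional.binary_cross_entropy_with_logits(
    discriminator(completed), torch.ones(4, 1))
gen_loss = recon_loss + 0.01 * adv_loss
gen_loss.backward()
```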


Symmetry ◽  
2020 ◽  
Vol 12 (2) ◽  
pp. 190
Author(s):  
Zuodong Niu ◽  
Handong Li ◽  
Yao Li ◽  
Yingjie Mei ◽  
Jing Yang

Face image inpainting is an important research direction in image restoration. When current image restoration methods repair damaged areas of face images with weak texture, they suffer from problems such as low accuracy of face image decomposition, unreasonable restoration structure, and degradation of image quality after inpainting. Therefore, this paper proposes an adaptive face image inpainting algorithm based on feature symmetry. First, we locate the facial feature points and segment the face into four feature parts based on the feature point distribution to define the feature search range. Then, we construct a new mathematical model and introduce feature symmetry to improve the priority calculation and increase its reliability. After that, in the process of searching for matching blocks, we accurately locate similar feature blocks according to the relative position and symmetry criteria of the target block and the various feature parts of the face. Finally, we introduce the HSV (Hue, Saturation, Value) color space to determine the best matching block according to the chroma and brightness of the sample, reducing the repair error and completing the face image inpainting. In our experiments, we first performed visual evaluation and texture analysis on the inpainted face images; the results show that face images inpainted by our algorithm maintained the consistency of the face structure, with visual appearance closer to real face features. We then used the peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) as objective evaluation indicators; among the five sample face image inpainting results given in this paper, our method outperformed the reference methods, and the average PSNR value improved by 2.881–5.776 dB with our method when inpainting 100 face images. Additionally, we used the time required to inpaint a unit pixel to evaluate inpainting efficiency, which improved by 12%–49% with our method when inpainting 100 face images. Finally, by comparing face image inpainting experiments with the generative adversarial network (GAN) algorithm, we discuss some of the problems of the graphics-based method in this paper in repairing face images with large areas of missing features.
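The objective evaluation described above, PSNR and SSIM between an inpainted image and its ground truth, can be reproduced with scikit-image as in the sketch below; the file paths are placeholders.

```python
import cv2
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

# Placeholder paths: the original face image and its inpainted reconstruction.
reference = cv2.imread("face_original.png")
inpainted = cv2.imread("face_inpainted.png")

psnr = peak_signal_noise_ratio(reference, inpainted, data_range=255)
ssim = structural_similarity(reference, inpainted, channel_axis=2, data_range=255)

print(f"PSNR: {psnr:.3f} dB, SSIM: {ssim:.4f}")  # higher is better for both metrics
```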


2021 ◽  
Vol 2021 ◽  
pp. 1-12
Author(s):  
Zhixue Liang

In the contactless delivery scenario, the self-pickup cabinet is an important terminal delivery device, and face recognition is one of the efficient ways to achieve contactless pickup of express deliveries. In order to effectively recognize face images in unrestricted environments, an unrestricted face recognition algorithm based on transfer learning is proposed in this study. First, the region extraction network of the Faster R-CNN algorithm is improved to increase the recognition speed of the algorithm. Then, a first transfer learning step is applied between the large ImageNet dataset and a face image dataset captured under restricted conditions, and a second transfer learning step is applied between the restricted-condition face images and unrestricted face image datasets. Finally, the unrestricted face images are processed by an image enhancement algorithm to increase their similarity to the restricted face images, so that the second transfer learning step can be carried out effectively. Experimental results show that the proposed algorithm achieves a better recognition rate and recognition speed on the CASIA-WebFace, LFW, and MegaFace datasets.
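The two-stage transfer-learning idea (ImageNet, then restricted-condition faces, then unrestricted faces) can be sketched with torchvision by reusing a pretrained backbone and swapping its classification head at each stage. This is only a schematic under assumed class counts and omitted data loaders, not the paper's improved Faster R-CNN pipeline.

```python
import torch.nn as nn
from torchvision import models

def transfer_stage(model, num_classes, freeze_backbone=True):
    """Replace the classification head for a new identity set; optionally freeze the backbone."""
    if freeze_backbone:
        for p in model.parameters():
            p.requires_grad = False
    model.fc = nn.Linear(model.fc.in_features, num_classes)  # the new head is always trainable
    return model

# Stage 0: backbone pretrained on ImageNet.
backbone = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)

# Stage 1: first transfer, fine-tune on a restricted-condition face dataset (class count assumed).
backbone = transfer_stage(backbone, num_classes=1000, freeze_backbone=True)
# ... train on restricted-condition face images here (loader omitted) ...

# Stage 2: second transfer, fine-tune on unrestricted face images, optionally enhanced
# beforehand to resemble the restricted-condition domain (class count assumed).
backbone = transfer_stage(backbone, num_classes=500, freeze_backbone=False)
# ... train on unrestricted face images here (loader omitted) ...
```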

