Unsupervised method of Domain Adaptation on representation of discriminatory regions of the face image for surveillance face datasets

Author(s):  
Suranjana Samanta ◽  
Samik Banerjee ◽  
Sukhendu Das
2020 ◽  
Author(s):  
João Renato Manesco ◽  
Aparecido Marana

In recent decades, for reasons of safety or convenience, biometric characteristics have increasingly been used to identify individuals who wish to access systems or places, and facial features are among the most used characteristics for this purpose. For biometric identification to be effective, the recognition accuracy rates must be high. However, these rates can be very low depending on the difference (displacement) between the domain of the images stored in the database of the biometric system (source images) and the images used at the moment of identification (target images). In this work, we evaluated the performance of a domain adaptation method called Transfer Kernel Learning (TKL) on the face recognition problem. Results obtained in our experiments on two face datasets, ARFace and FRGC, corroborate that TKL is suitable for domain adaptation and that it is capable of significantly improving the accuracy rates of face recognition, even for facial images with occlusions, variations in illumination, and complex backgrounds.


Author(s):  
Xiaolin Tang ◽  
Xiaogang Wang ◽  
Jin Hou ◽  
Huafeng Wu ◽  
Ping He

Introduction: Under complex illumination conditions, such as poor light sources or rapidly changing light, the current gamma transform for face image preprocessing has two disadvantages: first, the transformation parameter must be set based on experience; second, the details of the transformed image are not obvious enough. Objective: To improve the current gamma transform. Methods: This paper proposes a weighted fusion algorithm combining an adaptive gamma transform with edge feature extraction. First, it proposes an adaptive gamma transform for face image preprocessing, in which the transformation parameter is computed from the gray values of the input face image. Second, it applies the Sobel edge detection operator to the transformed image to obtain an edge detection image. Finally, it fuses the adaptively transformed image and the edge detection image with a weighted fusion algorithm to obtain the final result. Results: The contrast of the preprocessed face image is appropriate, and the details of the image are obvious. Conclusion: The proposed method can enhance a face image while retaining more facial details, requires no human-computer interaction, and has low computational complexity.
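The three stages described above can be sketched in a few lines of NumPy. The abstract does not give the exact formula used to derive the gamma parameter or the fusion weight, so this sketch assumes a common adaptive choice (map the image's mean gray level toward mid-gray) and an illustrative weight `w=0.8`:

```python
import numpy as np

def adaptive_gamma(img):
    """Gamma correction whose exponent is derived from the image's own
    mean gray level (an assumed rule: map the mean toward mid-gray 0.5)."""
    x = img.astype(np.float64) / 255.0
    mean = x.mean()
    gamma = np.log(0.5) / np.log(max(mean, 1e-6))
    return x ** gamma

def sobel_edges(x):
    """Sobel gradient magnitude via direct 3x3 convolution (no OpenCV)."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)
    ky = kx.T
    p = np.pad(x, 1, mode="edge")
    gx = np.zeros_like(x)
    gy = np.zeros_like(x)
    for i in range(3):
        for j in range(3):
            win = p[i:i + x.shape[0], j:j + x.shape[1]]
            gx += kx[i, j] * win
            gy += ky[i, j] * win
    mag = np.hypot(gx, gy)
    return mag / (mag.max() + 1e-12)

def enhance(img, w=0.8):
    """Weighted fusion of the adaptively gamma-corrected image and its
    Sobel edge map; returns an 8-bit grayscale image."""
    g = adaptive_gamma(img)
    e = sobel_edges(g)
    fused = np.clip(w * g + (1 - w) * e, 0, 1)
    return (fused * 255).astype(np.uint8)
```

On a dark face image, `adaptive_gamma` brightens the midtones without manual tuning, and the edge term re-emphasizes detail that the global transform flattens.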


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Takao Fukui ◽  
Mrinmoy Chakrabarty ◽  
Misako Sano ◽  
Ari Tanaka ◽  
Mayuko Suzuki ◽  
...  

Eye movements toward sequentially presented face images with or without gaze cues were recorded to investigate whether individuals with ASD, in comparison to their typically developing (TD) peers, could prospectively perform the task according to gaze cues. Line-drawn face images were presented sequentially, for one second each, on a laptop display, shifting side to side and up and down. In the gaze-cue condition, the gaze of the face image was directed to the position where the next face would be presented. Although the participants with ASD looked less at the eye area of the face image than their TD peers, they performed comparably smooth gaze shifts to the gaze cue in the gaze-cue condition. This appropriate gaze shift in the ASD group was more evident in the second half of trials than in the first half, as revealed by the mean proportion of fixation time on the eye area relative to valid gaze data in the early phase (during face image presentation) and by the time to first fixation on the eye area. These results suggest that individuals with ASD may benefit from short-period trial experience by enhancing their use of gaze cues.


2011 ◽  
Vol 55-57 ◽  
pp. 77-81
Author(s):  
Hui Ming Huang ◽  
He Sheng Liu ◽  
Guo Ping Liu

In this paper, we propose an efficient method for color face image segmentation based on color information and a saliency map. The method consists of three stages. First, skin-colored regions are detected using a Bayesian model of human skin color, yielding a chroma chart of skin-color likelihoods. This chroma chart is then segmented into skin regions that satisfy the homogeneity property of human skin. In the third stage, a visual attention model is employed to localize the face region according to the saliency map, where the bottom-up approach utilizes both intensity and color feature maps from the test image. Experimental evaluation shows that the proposed method segments the face area quite effectively; it performs well for subjects against both simple and complex backgrounds, as well as under varying illumination conditions and skin color variances.
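The first two stages (skin-likelihood chart, then thresholding into skin regions) can be illustrated with a single-Gaussian Bayesian skin model in normalized chroma space. The mean and covariance below are placeholder values, not the paper's trained model; a real system would estimate them from labeled skin pixels:

```python
import numpy as np

# Illustrative Gaussian skin model in normalized r-g chroma space.
# SKIN_MEAN / SKIN_COV are assumed placeholder parameters.
SKIN_MEAN = np.array([0.45, 0.31])
SKIN_COV = np.array([[0.004, -0.001], [-0.001, 0.002]])

def skin_likelihood(rgb):
    """Per-pixel skin likelihood ('chroma chart') for an HxWx3 RGB image."""
    rgb = rgb.astype(np.float64)
    s = rgb.sum(axis=2, keepdims=True) + 1e-9
    chroma = (rgb / s)[..., :2]                 # normalized (r, g)
    d = chroma - SKIN_MEAN
    inv = np.linalg.inv(SKIN_COV)
    # Mahalanobis distance -> unnormalized Gaussian likelihood
    m = np.einsum("...i,ij,...j->...", d, inv, d)
    return np.exp(-0.5 * m)

def skin_mask(rgb, thresh=0.5):
    """Threshold the chroma chart into a binary skin-region mask."""
    return skin_likelihood(rgb) >= thresh
```

Normalizing out intensity before modeling is what gives the approach some robustness to the varying illumination mentioned in the abstract; the saliency-map stage would then operate on the masked candidate regions.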


2021 ◽  
pp. 1-15
Author(s):  
Yongjie Chu ◽  
Touqeer Ahmad ◽  
Lindu Zhao

Low-resolution face recognition with one shot is a prevalent problem in law enforcement, where low-resolution face images captured by surveillance cameras must generally be recognized against only one high-resolution profile face image in the database. The problem is very tough because the available samples are few and the quality of the unknown images is low. To address this issue effectively, this paper proposes an Adapted Discriminative Coupled Mappings (AdaDCM) approach, which integrates domain adaptation and discriminative learning. To achieve good domain adaptation performance on small datasets, a new domain adaptation technique called Bidirectional Locality Matching-based Domain Adaptation (BLM-DA) is first developed. The proposed AdaDCM is then formulated by unifying BLM-DA and discriminative coupled mappings in a single framework. AdaDCM is extensively evaluated on the FERET, LFW, and SCface databases, which include LR face images obtained in constrained, unconstrained, and real-world environments. The promising results on these datasets demonstrate the effectiveness of AdaDCM for LR face recognition with one shot.
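The core idea behind coupled mappings is to learn projections that bring paired HR and LR features into a common space where nearest-neighbor matching works. The toy sketch below shows only that coupling idea, reduced to a one-sided ridge-regression mapping from LR features into HR feature space; AdaDCM itself learns two discriminative mappings jointly with BLM-DA, which is not reproduced here:

```python
import numpy as np

def learn_coupled_mapping(X_hr, X_lr, lam=1e-3):
    """Toy one-sided coupled mapping: a ridge-regression matrix W that
    projects LR features into HR feature space, minimizing
    ||X_hr - X_lr W||^2 + lam * ||W||^2 over paired training samples."""
    d = X_lr.shape[1]
    A = X_lr.T @ X_lr + lam * np.eye(d)
    return np.linalg.solve(A, X_lr.T @ X_hr)

def match(probe_lr, gallery_hr, W):
    """Map an LR probe into HR space and return the index of the
    nearest HR gallery face (one sample per person)."""
    z = probe_lr @ W
    dists = np.linalg.norm(gallery_hr - z, axis=1)
    return int(np.argmin(dists))
```

With only one HR gallery image per identity, all the learning burden falls on the mapping, which is why the paper brings in an auxiliary domain-adapted training set rather than fitting W on the gallery alone.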


Author(s):  
Yongjie Chu ◽  
Yong Zhao ◽  
Touqeer Ahmad ◽  
Lindu Zhao

Numerous low-resolution (LR) face images are captured by a growing number of surveillance cameras nowadays. In some applications, such as suspect identification, an LR face image captured by a surveillance camera must be recognized using only one high-resolution (HR) profile face image on an ID card. This leads to LR face recognition with a single sample per person (SSPP), which is more challenging than conventional LR face recognition or SSPP face recognition. To address this tough problem, we propose a Boosted Coupled Marginal Fisher Analysis (CMFA) approach, which unites domain adaptation and coupled mappings. An auxiliary database containing multiple HR and LR samples is introduced to explore more discriminative information, and locality preserving domain adaptation (LPDA) is designed to achieve good domain adaptation between the SSPP training set (target domain) and the auxiliary database (source domain). We perform LPDA on HR and LR images in both domains; then, in the domain adaptation space, we apply CMFA to learn discriminative coupled mappings for classification. The learned coupled mappings embed knowledge from the auxiliary dataset, so their discriminative ability is superior. We extensively evaluate the proposed method on the FERET, LFW, and SCface databases; the promising results demonstrate its effectiveness for LR face recognition with SSPP.


2018 ◽  
Vol 9 (1) ◽  
pp. 60-77 ◽  
Author(s):  
Souhir Sghaier ◽  
Wajdi Farhat ◽  
Chokri Souani

This manuscript presents an improved system that can detect and recognize a person in 3D space automatically, without requiring interaction from the person. The system is based not only on quantum computation and measurements to extract the feature vectors in the characterization phase, but also on a learning algorithm (SVM) to classify and recognize the person. It presents an improved technique for automatic 3D face recognition that uses anthropometric proportions and measurements to detect and extract the area of interest, which is unaffected by facial expression. The approach can handle incomplete and noisy images and reject non-facial areas automatically. Moreover, it can deal with holes in the meshed and textured 3D image, and it is stable under small translations and rotations of the face. All experimental tests were performed on two 3D face datasets, FRAV 3D and GAVAB. The results of the proposed approach are promising, showing it to be competitive with similar approaches in terms of accuracy, robustness, and flexibility. It achieves a high recognition rate of 95.35% for faces with neutral and non-neutral expressions in identification, 98.36% in authentication with GAVAB, and 100% with some galleries of the FRAV 3D dataset.


2018 ◽  
Vol 7 (4.10) ◽  
pp. 295
Author(s):  
Murali S ◽  
Manimaran A ◽  
Selvakumar K ◽  
Dinesh Kumar S

A secured web-based voting framework is a need of the present time. We propose a new secure authentication scheme for an online voting framework that uses face recognition and a hashing algorithm. A simple verification step is completed during initial registration via email and phone. At the time of main registration, the voter is asked to provide a unique identification number (UIN), issued by the election authority, together with a face image. The UIN is converted into a secret key using the SHA algorithm. The face image, stored on Amazon Web Services (AWS), acts as an authentication mechanism that enables people to cast their vote secretly. Voters who attempt to cast multiple votes during the voting process are prevented from doing so by the encrypted UIN. Election organizers can monitor the election in parallel, as votes are saved in a real-time database. The privacy of the voter is maintained because the voter's details are converted into the key. In this system, an individual can vote from outside of his or her allocated constituency.
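The UIN-to-key step can be sketched with Python's standard library. The abstract only says "the SHA algorithm"; SHA-256 and the optional salt below are assumptions for this sketch, as is the function name:

```python
import hashlib

def uin_to_secret_key(uin: str, salt: str = "") -> str:
    """Hash a voter's unique identification number (UIN) into a
    fixed-length secret key. SHA-256 and the salt parameter are
    assumed here; the abstract names only 'the SHA algorithm'."""
    return hashlib.sha256((salt + uin).encode("utf-8")).hexdigest()
```

Because the hash is deterministic, the same UIN always maps to the same key (so duplicate votes can be detected), while the raw UIN never needs to be stored alongside the ballot.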


2021 ◽  
Author(s):  
Yongtai Liu ◽  
Zhijun Yin ◽  
Zhiyu Wan ◽  
Chao Yan ◽  
Weiyi Xia ◽  
...  

BACKGROUND As direct-to-consumer genetic testing (DTC-GT) services have grown in popularity, the public has increasingly relied upon online forums to discuss and share their test results. Initially, users did so under a pseudonym, but more recently they have included face images when discussing DTC-GT results. When these images truthfully represent a user, they reveal the identity of the corresponding individual. Various studies have shown that sharing images on social media tends to elicit more replies; however, users who do this clearly forgo their privacy. OBJECTIVE This study aimed to investigate the face image sharing behavior of DTC-GT users in an online environment and to determine whether there is an association between face image sharing and the attention received from others. METHODS This study focused on r/23andme, a subreddit dedicated to discussing DTC-GT results and their implications. We applied natural language processing to infer the themes of posts that included a face image, and a regression analysis to estimate the association between the attention a post received, in terms of the number of comments and karma score (defined as the number of upvotes minus the number of downvotes), and whether the post contains a face image. RESULTS We collected over 15,000 posts from the r/23andme subreddit published between 2012 and 2020. Face image posting began in late 2019 and grew rapidly, with over 800 individuals revealing their faces by early 2020. The topics in posts including a face were primarily about sharing or discussing ancestry composition and sharing family reunion photos with relatives discovered via DTC-GT. On average, posts including a face received 60% (5/8) more comments than other posts, and their karma scores were 2.4 times higher. CONCLUSIONS DTC-GT consumers in the r/23andme subreddit are increasingly posting face images and testing reports on social platforms. The association between face image posting and a greater level of attention suggests that people are forgoing their privacy in exchange for attention from others. To mitigate the risk of face image posting, platforms, or at least subreddit organizers, should inform users about the consequences of such behavior for identity disclosure.
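The regression step described in the methods can be sketched as ordinary least squares with a binary face-image indicator. The data below are simulated stand-ins (the study used about 15,000 real posts), generated to mimic the reported direction of the effect:

```python
import numpy as np

# Hypothetical toy data standing in for the r/23andme posts:
# has_face is a 0/1 indicator; comments is the attention outcome.
rng = np.random.default_rng(42)
n = 200
has_face = rng.integers(0, 2, size=n).astype(float)
# Simulate the reported pattern: face posts attract more comments.
comments = 5 + 3 * has_face + rng.normal(0, 1, size=n)

# Ordinary least squares: comments ~ intercept + beta * has_face
X = np.column_stack([np.ones(n), has_face])
coef, *_ = np.linalg.lstsq(X, comments, rcond=None)
intercept, beta = coef
print(f"estimated attention gain from posting a face: {beta:.2f} comments")
```

With a single binary regressor, `beta` is simply the difference in mean attention between face and non-face posts; the study's actual analysis would additionally control for post characteristics such as topic.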

