Research and implementation of face image preprocessing methods based on the OpenCV machine vision library

Author(s): Liu Jing, Xinli Liu, Guofu Yin
2021, pp. 1-11
Author(s): Suphawimon Phawinee, Jing-Fang Cai, Zhe-Yu Guo, Hao-Ze Zheng, Guan-Chen Chen

The Internet of Things is considerably increasing the level of convenience at home, and the smart door lock is an entry product for smart homes. This work used a Raspberry Pi, chosen for its low cost, as the main control board to apply face recognition technology to a door lock. Installing a control sensing module through the GPIO expansion header of the Raspberry Pi also improved the antitheft mechanism of the door lock. For ease of use, a mobile application (hereafter, app) was developed for users to upload their face images for processing. The app sends the images to Firebase; the program then downloads the images and crops the faces to form a training set. The face detection system was designed on the basis of machine learning and uses the Haar cascade face detector built into OpenCV. The system used four training methods: a convolutional neural network, VGG-16, VGG-19, and ResNet50. After training, the program could recognize the user's face and open the door lock. A prototype was constructed that could control the door lock and the antitheft system and stream real-time images from the camera to the app.
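The Haar cascade detector mentioned above scores rectangular features against an integral image, which makes any rectangle sum a constant-time lookup. A minimal pure-Python sketch of that underlying computation (independent of OpenCV; the image values here are illustrative):

```python
def integral_image(img):
    """Compute the summed-area table of a 2-D grayscale image (list of lists)."""
    h, w = len(img), len(img[0])
    ii = [[0] * (w + 1) for _ in range(h + 1)]
    for y in range(h):
        for x in range(w):
            ii[y + 1][x + 1] = (img[y][x] + ii[y][x + 1]
                                + ii[y + 1][x] - ii[y][x])
    return ii

def rect_sum(ii, x, y, w, h):
    """Sum of pixels in the rectangle with top-left (x, y), width w, height h."""
    return ii[y + h][x + w] - ii[y][x + w] - ii[y + h][x] + ii[y][x]

img = [[1, 2, 3, 4],
       [5, 6, 7, 8],
       [9, 10, 11, 12]]
ii = integral_image(img)
# A two-rectangle Haar feature is the difference of two such constant-time sums.
feature = rect_sum(ii, 0, 0, 2, 3) - rect_sum(ii, 2, 0, 2, 3)
```

A full cascade evaluates thousands of such features at every window position and scale, which is why the constant-time rectangle sum matters.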


2021, Vol. 11 (1)
Author(s): Takao Fukui, Mrinmoy Chakrabarty, Misako Sano, Ari Tanaka, Mayuko Suzuki, et al.

Abstract: Eye movements toward sequentially presented face images with or without gaze cues were recorded to investigate whether individuals with ASD, in comparison to their typically developing (TD) peers, could prospectively perform the task according to gaze cues. Line-drawn face images were presented sequentially for one second each on a laptop display, shifting from side to side and up and down. In the gaze-cue condition, the gaze of the face image was directed to the position where the next face would be presented. Although the participants with ASD looked less at the eye area of the face image than their TD peers, they performed comparably smooth gaze shifts toward the gaze cue in the gaze-cue condition. This appropriate gaze shift in the ASD group was more evident in the second half of trials than in the first half, as revealed by the mean proportion of fixation time in the eye area relative to valid gaze data in the early phase (during face image presentation) and by the time to first fixation on the eye area. These results suggest that individuals with ASD may benefit from short-duration trial experience by increasing their use of gaze cues.
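The eye-area measure used above (fixation time in the eye area as a proportion of valid gaze data) can be sketched as follows; the region labels and durations are hypothetical:

```python
def eye_area_proportion(samples):
    """samples: list of (region, duration_ms) gaze samples; 'invalid' samples
    (e.g., blinks, track loss) are excluded. Returns fixation time on the
    'eyes' region as a fraction of total valid gaze time."""
    valid = [(r, d) for r, d in samples if r != "invalid"]
    total = sum(d for _, d in valid)
    eyes = sum(d for r, d in valid if r == "eyes")
    return eyes / total if total else 0.0

# 400 ms on the eye area out of 600 ms of valid gaze time.
samples = [("eyes", 300), ("mouth", 200), ("invalid", 100), ("eyes", 100)]
proportion = eye_area_proportion(samples)
```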


Sensors, 2021, Vol. 21 (6), pp. 2003
Author(s): Xiaoliang Zhu, Shihao Ye, Liang Zhao, Zhicheng Dai

As a sub-challenge of EmotiW (the Emotion Recognition in the Wild challenge), improving performance on the AFEW (Acted Facial Expressions in the Wild) dataset is a popular benchmark for emotion recognition tasks under various constraints, including uneven illumination, head deflection, and facial posture. In this paper, we propose a convenient facial expression recognition cascade network comprising spatial feature extraction, hybrid attention, and temporal feature extraction. First, in a video sequence, faces in each frame are detected and the corresponding face ROI (region of interest) is extracted to obtain the face images; the face images in each frame are then aligned based on the positions of facial feature points in the images. Second, the aligned face images are fed into a residual neural network to extract the spatial features of the corresponding facial expressions, and the spatial features are passed to the hybrid attention module to obtain fused facial expression features. Finally, the fused features are fed into a gated recurrent unit (GRU) to extract the temporal features of facial expressions, and the temporal features are passed to a fully connected layer to classify and recognize the facial expressions. Experiments on the CK+ (extended Cohn-Kanade), Oulu-CASIA (Institute of Automation, Chinese Academy of Sciences), and AFEW datasets yielded recognition accuracies of 98.46%, 87.31%, and 53.44%, respectively. This demonstrates that the proposed method not only achieves performance comparable to state-of-the-art methods but also improves on the AFEW dataset by more than 2%, indicating a significant advantage for facial expression recognition in natural environments.
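The hybrid attention stage fuses the per-frame spatial features into a single representation. Below is a minimal pure-Python sketch of one common formulation, a softmax-weighted sum of frame features; the relevance scores would normally come from a learned sub-network, and the paper's exact attention design may differ:

```python
import math

def attention_fuse(frame_feats, scores):
    """Fuse per-frame feature vectors with softmax attention weights.
    frame_feats: list of T feature vectors (lists of floats).
    scores: list of T unnormalized relevance scores (e.g., from a small MLP)."""
    m = max(scores)                       # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    weights = [e / z for e in exps]
    dim = len(frame_feats[0])
    fused = [sum(w * f[i] for w, f in zip(weights, frame_feats))
             for i in range(dim)]
    return fused, weights

# With equal scores, the fused vector is simply the mean of the frame features.
feats = [[1.0, 0.0], [0.0, 1.0]]
fused, weights = attention_fuse(feats, [0.0, 0.0])
```

The fused vector, not the raw frame sequence, is what the temporal stage (the GRU) would consume per clip segment in a cascade of this kind.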


i-Perception, 2021, Vol. 12 (6), pp. 204166952110563
Author(s): Ronja Mueller, Sandra Utz, Claus-Christian Carbon, Tilo Strobach

Recognizing familiar faces requires a comparison of the incoming perceptual information with mental face representations stored in memory. Mounting evidence indicates that these representations adapt quickly to recently perceived facial changes. This becomes apparent in face adaptation studies where exposure to a strongly manipulated face alters the perception of subsequent face stimuli: original, non-manipulated face images then appear to be manipulated, while images similar to the adaptor are perceived as “normal.” The face adaptation paradigm serves as a good tool for investigating the information stored in facial memory. So far, most of the face adaptation studies focused on configural (second-order relationship) face information, mainly neglecting non-configural face information (i.e., that does not affect spatial face relations), such as color, although several (non-adaptation) studies were able to demonstrate the importance of color information in face perception and identification. The present study therefore focuses on adaptation effects on saturation color information and compares the results with previous findings on brightness. The study reveals differences in the effect pattern and robustness, indicating that adaptation effects vary considerably even within the same class of non-configural face information.
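The saturation manipulations used as adaptors in such studies can be illustrated with the standard-library colorsys module; this sketch scales the HSV saturation of a single RGB pixel (an image-wide adaptor would simply apply it to every pixel):

```python
import colorsys

def scale_saturation(rgb, factor):
    """Scale the HSV saturation of an RGB triple (components in 0..1),
    clamping the result to the valid range."""
    r, g, b = rgb
    h, s, v = colorsys.rgb_to_hsv(r, g, b)
    s = max(0.0, min(1.0, s * factor))
    return colorsys.hsv_to_rgb(h, s, v)

# Fully desaturating a reddish pixel yields gray of the same brightness;
# a factor > 1 produces the oversaturated adaptors used in adaptation studies.
gray = scale_saturation((1.0, 0.2, 0.2), 0.0)
```

Because hue and value are held constant, this manipulation is non-configural in the sense used above: spatial face relations are untouched.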


Author(s): Shivkaran Ravidas, M. A. Ansari

In the recent past, convolutional neural networks (CNNs) have seen a resurgence and have performed extremely well on vision tasks. Visually, the model resembles a series of layers, each of which is processed by a function to form the next layer. It is argued that a CNN first models low-level features such as edges and joints and then expresses higher-level features as compositions of these low-level features. The aim of this paper is to detect multi-view faces using a deep convolutional neural network (DCNN). Implementation, detection, and retrieval of faces are obtained with the help of direct visual matching technology, and the probabilistic similarity of face images is measured using Bayesian analysis. Experiments detect faces with up to ±90° out-of-plane rotation. A fine-tuned AlexNet is used to detect pose-invariant faces. For this work, we extracted training examples from the AFLW (Annotated Facial Landmarks in the Wild) dataset, which comprises 21K images with 24K face annotations.
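Bayesian analysis of face similarity is commonly formulated as a likelihood ratio over the difference of two feature vectors, comparing an intra-personal against an extra-personal difference model. A toy pure-Python sketch under isotropic Gaussian assumptions; the feature values and variances here are illustrative, not taken from the paper:

```python
import math

def gaussian_logpdf(x, sigma):
    """Log density of a zero-mean isotropic Gaussian at vector x, summed per dimension."""
    return sum(-0.5 * (xi / sigma) ** 2 - math.log(sigma * math.sqrt(2 * math.pi))
               for xi in x)

def same_person_log_ratio(feat_a, feat_b, sigma_intra=0.5, sigma_extra=2.0):
    """Log-likelihood ratio that two feature vectors depict the same person:
    log p(diff | intra-personal) - log p(diff | extra-personal).
    Positive values favor 'same person', negative values favor 'different'."""
    diff = [a - b for a, b in zip(feat_a, feat_b)]
    return gaussian_logpdf(diff, sigma_intra) - gaussian_logpdf(diff, sigma_extra)

score_close = same_person_log_ratio([0.1, 0.2], [0.12, 0.18])  # near-identical
score_far = same_person_log_ratio([0.1, 0.2], [2.0, -1.5])     # very different
```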


2021
Author(s): Yongtai Liu, Zhijun Yin, Zhiyu Wan, Chao Yan, Weiyi Xia, et al.

BACKGROUND: As direct-to-consumer genetic testing (DTC-GT) services have grown in popularity, the public has increasingly relied on online forums to discuss and share test results. Initially, users did so under pseudonyms, but more recently they have included face images when discussing DTC-GT results. When these images truthfully represent a user, they reveal the identity of the corresponding individual. Various studies have shown that sharing images on social media tends to elicit more replies; however, users who do this clearly forgo their privacy. OBJECTIVE: This study aimed to investigate the face image sharing behavior of DTC-GT users in an online environment and to determine whether there is an association between face image sharing and the attention received from others. METHODS: This study focused on r/23andme, a subreddit dedicated to discussing DTC-GT results and their implications. We applied natural language processing to infer the themes associated with posts that included a face image, and regression analysis to learn the association between the attention a post received, in terms of the number of comments and karma score (the number of upvotes minus the number of downvotes), and whether the post contained a face image. RESULTS: We collected over 15,000 posts from the r/23andme subreddit published between 2012 and 2020. Face image posting began in late 2019 and grew rapidly, with over 800 individuals revealing their faces by early 2020. The topics in posts including a face were primarily about sharing or discussing ancestry composition and sharing family reunion photos with relatives discovered via DTC-GT. On average, posts including a face received 60% (5/8) more comments than other posts, and their karma scores were 2.4 times higher. CONCLUSIONS: DTC-GT consumers in the r/23andme subreddit are increasingly posting face images and testing reports on social platforms. The association between face image posting and a greater level of attention suggests that people are forgoing their privacy in exchange for attention from others. To mitigate the risks of face image posting, platforms, or at least subreddit organizers, should inform users about the consequences of such behavior for identity disclosure.
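The attention comparison described in the study (comments on posts with vs. without a face image) reduces to a ratio of group means, which is equivalent to an OLS regression on a binary face-image indicator. A toy sketch with hypothetical post counts:

```python
from statistics import mean

def attention_lift(posts):
    """posts: list of (has_face: bool, n_comments: int).
    Returns mean comments for face-image posts divided by the mean for
    posts without a face image."""
    face = [c for has, c in posts if has]
    other = [c for has, c in posts if not has]
    return mean(face) / mean(other)

# Hypothetical data: face posts average 16 comments vs. 10 for the rest.
posts = [(True, 12), (True, 20), (False, 8), (False, 12)]
lift = attention_lift(posts)
```

The study's full model additionally controls for post topic and timing, which a simple two-group ratio like this does not.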


Author(s): Ayan Seal, Debotosh Bhattacharjee, Mita Nasipuri, Dipak Kumar Basu

Automatic face recognition has been studied comprehensively for more than four decades, since face recognition of individuals has many applications, particularly in human-machine interaction and security. Although face recognition systems have achieved a significant level of maturity with some realistic achievements, face recognition remains a challenging problem due to the large variation in face images. Face recognition techniques can generally be divided into three categories based on the face image acquisition methodology: methods that work on intensity images, those that deal with video sequences, and those that require other sensory data (such as 3D or infrared imagery). Researchers are using thermal infrared images for face recognition because thermal infrared images have some advantages over visible-light 2D images. In this chapter, an overview of some well-known techniques of face recognition using thermal infrared faces is given, and some of the drawbacks and benefits of each of these methods are discussed. The chapter covers some of the most recent algorithms developed for this purpose and gives a brief idea of the state of the art in face recognition technology. The authors propose an approach for evaluating the performance of face recognition algorithms using thermal infrared images and report the results of several classifiers on a benchmark dataset (the Terravic Facial Infrared Database).
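One simple instance of such a performance evaluation is rank-1 identification accuracy over a gallery: a probe counts as correct when its nearest gallery identity matches the ground truth. A sketch with hypothetical features and a squared-Euclidean distance; the chapter's own evaluation protocol may differ:

```python
def rank1_accuracy(gallery, probes):
    """gallery: dict mapping identity -> enrolled feature vector.
    probes: list of (true_id, feature) pairs.
    A probe is correct if its nearest gallery identity equals true_id."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    correct = 0
    for true_id, feat in probes:
        best = min(gallery, key=lambda ident: dist(gallery[ident], feat))
        correct += (best == true_id)
    return correct / len(probes)

gallery = {"alice": [0.0, 0.0], "bob": [1.0, 1.0]}
probes = [("alice", [0.1, -0.1]), ("bob", [0.9, 1.2]), ("alice", [1.0, 0.9])]
accuracy = rank1_accuracy(gallery, probes)  # third probe is misidentified
```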


Author(s): Stefano Berretti, Alberto Del Bimbo, Pietro Pala

In this paper, an original hybrid 2D-3D face recognition approach is proposed that uses two orthogonal face images, frontal and side views of the face, to reconstruct the complete 3D geometry of the face. This is obtained using a model-based solution in which a 3D template face model is morphed according to the correspondence of a limited set of control points identified on the frontal and side images in addition to the model. Control point identification is driven by an Active Shape Model applied to the frontal image, whereas subsequent manual assistance is required for control point localization on the side view. The reconstructed 3D model is finally matched, using the iso-geodesic regions approach, against a gallery of 3D face scans for the purpose of face recognition. Preliminary experimental results on a small database show the viability of the approach.
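The two orthogonal views jointly determine 3-D control points: the frontal image supplies (x, y) and the side image supplies (z, y). A toy sketch of that combination step, assuming the views are already registered to a common scale and averaging the two y readings; the paper's actual morphing of the template model is a further step beyond this:

```python
def merge_orthogonal_views(frontal_pts, side_pts):
    """frontal_pts: list of (x, y) control points from the frontal image.
    side_pts: matching list of (z, y) control points from the side image.
    Returns 3-D control points (x, y, z), averaging the two y estimates."""
    return [(x, (yf + ys) / 2.0, z)
            for (x, yf), (z, ys) in zip(frontal_pts, side_pts)]

# Nose tip: frontal view gives lateral/vertical position, side view gives depth.
pts3d = merge_orthogonal_views([(0.0, 1.0)], [(0.8, 1.2)])
```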


2019, Vol. 63 (3), pp. 479-493
Author(s): Wadood Abdul, Ohoud Nafea, Sanaa Ghouzali

Abstract: There are a number of issues related to the development of biometric authentication systems, such as privacy breaches, consequent security risks, and biometric template storage. The current paper aims to address these issues through a hybrid approach combining watermarking with biometric encryption. A multimodal biometric template protection approach is proposed, with fusion at the score level, using fingerprint and face templates. The proposed approach includes two basic stages: an enrollment stage and a verification stage. During enrollment, the discrete wavelet transform (DWT) is applied to the face images to embed the fingerprint features into different directional sub-bands. Watermark embedding and extraction are done by quantizing the mean values of the wavelet coefficients, after which the inverse DWT is applied to obtain the watermarked image. Following this, a unique token is assigned to each genuine user, and a hyper-chaotic map is used to produce a key stream to encrypt the watermarked image with a block cipher. The experimental results indicate the efficiency of the proposed approach, achieving a reasonable error rate of 3.87%.
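The embedding step described above, quantizing the mean of a block of wavelet coefficients, can be sketched in pure Python with a single-level 1-D Haar transform and quantization index modulation. The step size and signal are illustrative, and the paper's 2-D directional sub-band scheme is simplified here to one dimension:

```python
def haar_dwt_1d(signal):
    """Single-level 1-D Haar transform: (approximation, detail) coefficients."""
    approx = [(signal[i] + signal[i + 1]) / 2.0 for i in range(0, len(signal), 2)]
    detail = [(signal[i] - signal[i + 1]) / 2.0 for i in range(0, len(signal), 2)]
    return approx, detail

def embed_bit(coeffs, bit, step=4.0):
    """Quantize the block mean to an even (bit 0) or odd (bit 1) multiple of
    step/2, shifting every coefficient in the block by the same offset."""
    m = sum(coeffs) / len(coeffs)
    q = round(m / step) * step + (step / 2.0 if bit else 0.0)
    delta = q - m
    return [c + delta for c in coeffs]

def extract_bit(coeffs, step=4.0):
    """Recover the embedded bit from the parity of the quantized block mean."""
    m = sum(coeffs) / len(coeffs)
    return int(round(m / (step / 2.0))) % 2

approx, _ = haar_dwt_1d([10.0, 12.0, 9.0, 11.0])
marked = embed_bit(approx, 1)
```

Embedding in the approximation band trades robustness against visibility; real schemes pick sub-bands and step sizes to balance the two.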


2011, pp. 5-44
Author(s): Daijin Kim, Jaewon Sung

Face detection is the most fundamental step in research on image-based automated face analysis, such as face tracking, face recognition, face authentication, facial expression recognition, and facial gesture recognition. When a novel face image is given, we must know where the face is located and how large it is, so that we can limit our attention to the face patch in the image and normalize the scale and orientation of the face patch. Usually, face detection results are not stable: the scale of the detected face rectangle can be larger or smaller than that of the real face in the image. Therefore, many researchers use eye detectors to obtain stable, normalized face images. Because the eyes have salient patterns in the human face image, they can be located stably and used for face image normalization. Eye detection becomes even more important when we want to apply model-based face image analysis approaches.
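The eye-based normalization described above reduces to estimating an in-plane rotation and a scale factor from the two detected eye centers. A minimal sketch; the target inter-ocular distance is an illustrative choice:

```python
import math

def eye_alignment(left_eye, right_eye, target_dist=60.0):
    """Given detected eye centers (x, y) in image coordinates, return the
    in-plane rotation angle (radians) that levels the eyes and the scale
    factor that maps the inter-ocular distance to target_dist pixels.
    Rotating the image by -angle and resizing by the scale normalizes the face."""
    dx = right_eye[0] - left_eye[0]
    dy = right_eye[1] - left_eye[1]
    angle = math.atan2(dy, dx)
    dist = math.hypot(dx, dy)
    return angle, target_dist / dist

# Eyes level and 30 px apart: no rotation needed, upscale by 2 to normalize.
angle, scale = eye_alignment((100.0, 50.0), (130.0, 50.0))
```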

