Looking at my own Face: Visual Processing Strategies in Physical Self-representation

2016 ◽  
Author(s):  
Anya Chakraborty ◽  
Bhismadev Chakrabarti

Abstract
We live in an age of ‘selfies’. Yet how we look at our own faces has seldom been systematically investigated. In this study we tested whether the visual processing of self-faces differs from that of other faces, using psychophysics and eye-tracking. Specifically, we tested the association between the psychophysical properties of self-face representation and the visual processing strategies involved in self-face recognition. Thirty-three adults performed a self-face recognition task on a series of self-other face morphs with simultaneous eye-tracking. Participants looked at the lower part of the face for longer for self-faces than for other faces. Participants with a reduced overlap between self- and other-face representations, as indexed by a steeper slope of the psychometric response curve for self-face recognition, spent a greater proportion of time looking at the upper regions of faces identified as self. Additionally, we tested the association of autism-related traits with self-face processing metrics, since autism has previously been associated with atypical self-processing, particularly in the psychological domain. Autistic traits were associated with reduced looking time to both self and other faces. However, no self-face-specific association with autistic traits was noted, suggesting that autism-related features may be related to self-processing in a domain-specific manner.
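The slope measure used above can be illustrated with a minimal sketch. Assuming the proportion of "self" responses to self-other morphs follows a logistic psychometric function (the parameter values below are hypothetical, not the authors' fitted data), a steeper slope means the curve rises more abruptly at the category boundary, i.e. less overlap between self- and other-face representations:

```python
import math

def logistic(x, midpoint, slope):
    """Probability of a 'self' response at morph level x (0 = other, 1 = self)."""
    return 1.0 / (1.0 + math.exp(-slope * (x - midpoint)))

# Two hypothetical observers; a steeper slope indexes reduced overlap
# between self- and other-face representations.
sharp = [logistic(x / 10, 0.5, 20.0) for x in range(11)]
broad = [logistic(x / 10, 0.5, 5.0) for x in range(11)]

# Around the 50% morph, the steep observer's response probability jumps
# far more between adjacent morph levels than the shallow observer's.
print(sharp[6] - sharp[4] > broad[6] - broad[4])
```

In an actual analysis the slope would be estimated by fitting this function to each participant's responses rather than assumed, but the interpretation of the fitted slope is the same.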

PLoS ONE ◽  
2021 ◽  
Vol 16 (1) ◽  
pp. e0245777
Author(s):  
Fanny Poncet ◽  
Robert Soussignan ◽  
Margaux Jaffiol ◽  
Baptiste Gaudelus ◽  
Arnaud Leleu ◽  
...  

Recognizing facial expressions of emotions is a fundamental ability for adaptation to the social environment. To date, it remains unclear whether the spatial distribution of eye movements predicts accurate recognition or, on the contrary, confusion in the recognition of facial emotions. In the present study, we asked participants to recognize facial emotions while their gaze behavior was monitored with eye-tracking technology. In Experiment 1a, 40 participants (20 women) performed a classic facial emotion recognition task with a 5-choice procedure (anger, disgust, fear, happiness, sadness). In Experiment 1b, a second group of 40 participants (20 women) was exposed to the same materials and procedure except that they were instructed to say whether (i.e., Yes/No response) the face expressed a specific emotion (e.g., anger), with the five emotion categories tested in distinct blocks. In Experiment 2, two groups of 32 participants performed the same task as in Experiment 1a while exposed to partial facial expressions composed of action units (AUs) present or absent in some parts of the face (top, middle, or bottom). The coding of the AUs produced by the models showed complex facial configurations for most emotional expressions, with several AUs in common. Eye-tracking data indicated that relevant facial actions were actively gazed at by the decoders during both accurate recognition and errors. False recognition was mainly associated with the additional visual exploration of less relevant facial actions in regions containing ambiguous AUs or AUs relevant to other emotional expressions. Finally, the recognition of facial emotions from partial expressions showed that no single facial action was necessary to effectively communicate an emotional state. Rather, the recognition of facial emotions relied on the integration of a complex set of facial cues.
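A standard way to quantify the spatial distribution of gaze in studies like this is the proportion of dwell time per face region. A minimal sketch (the region names and fixation data below are illustrative, not the authors' actual areas of interest or recordings):

```python
# Each fixation: (face_region, duration_ms). Regions and values are illustrative.
fixations = [
    ("eyes", 320), ("mouth", 180), ("eyes", 250),
    ("nose", 120), ("mouth", 90), ("eyes", 410),
]

def dwell_proportions(fixations):
    """Total dwell time per region, as a proportion of all fixation time."""
    totals = {}
    for region, duration in fixations:
        totals[region] = totals.get(region, 0) + duration
    grand_total = sum(totals.values())
    return {region: t / grand_total for region, t in totals.items()}

props = dwell_proportions(fixations)
print(props)  # per-region proportions, summing to 1.0
```

Comparing these proportions between correct and erroneous trials is one way to test whether misrecognition goes with extra exploration of less relevant regions.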


Author(s):  
Taha H. Rassem ◽  
Nasrin M. Makbol ◽  
Sam Yin Yee

Nowadays, face recognition has become one of the important topics in the computer vision and image processing area, owing to its use in many applications. The key to face recognition is extracting distinguishable features from the image to achieve high recognition accuracy. Local binary pattern (LBP) and many of its variants have been used as texture features in many face recognition systems. Although LBP has performed well in many fields, it is sensitive to noise, and different patterns may be assigned the same LBP class, which reduces its discriminating property. Completed Local Ternary Pattern (CLTP) is a recently proposed texture feature designed to overcome the drawbacks of LBP. CLTP has outperformed LBP and some of its variants in many fields, such as texture, scene, and event image classification. In this study, we investigate the performance of the CLTP operator for the face recognition task. The Japanese Female Facial Expression (JAFFE) and FEI face databases are used in the experiments. In the experimental results, CLTP outperformed some previous texture descriptors and achieved higher classification rates for the face recognition task, reaching 99.38% and 85.22% on JAFFE and FEI, respectively.
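The basic LBP operator referred to above thresholds each pixel's 3×3 neighbourhood against the centre value and concatenates the results into an 8-bit code. A minimal sketch of that core step (CLTP extends this idea with a tolerance band and ternary rather than binary codes, which is not shown here):

```python
def lbp_code(patch):
    """8-bit LBP code for the centre pixel of a 3x3 grayscale patch.

    Neighbours are read clockwise from the top-left corner; each bit is
    set to 1 when the neighbour value is >= the centre value.
    """
    center = patch[1][1]
    # Clockwise neighbour coordinates starting at the top-left corner.
    coords = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0)]
    code = 0
    for bit, (r, c) in enumerate(coords):
        if patch[r][c] >= center:
            code |= 1 << bit
    return code

# A flat patch sets every bit (code 255); a bright centre on a dark ring
# sets none (code 0). Noise near the centre value can flip bits, which is
# the sensitivity that ternary-pattern variants such as CLTP address.
print(lbp_code([[5, 5, 5], [5, 5, 5], [5, 5, 5]]))  # 255
```

A face descriptor is then typically built by histogramming these codes over local blocks of the image and concatenating the block histograms.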


1997 ◽  
Vol 9 (5) ◽  
pp. 555-604 ◽  
Author(s):  
Morris Moscovitch ◽  
Gordon Winocur ◽  
Marlene Behrmann

In order to study face recognition in relative isolation from visual processes that may also contribute to object recognition and reading, we investigated CK, a man with normal face recognition but with object agnosia and dyslexia caused by a closed-head injury. We administered recognition tests of upright faces, of family resemblance, of age-transformed faces, of caricatures, of cartoons, of inverted faces, of face features, of disguised faces, of perceptually degraded faces, of fractured faces, of face parts, and of faces whose parts were made of objects. We compared CK's performance with that of at least 12 control participants. We found that CK performed as well as controls as long as the face was upright and retained the configurational integrity among the internal facial features: the eyes, nose, and mouth. This held regardless of whether the face was disguised or degraded and whether the face was represented as a photo, a caricature, a cartoon, or a face composed of objects. In the last case, CK perceived the face but, unlike controls, was rarely aware that it was composed of objects. When the face, or just the internal features, was inverted, or when the configurational gestalt was broken by fracturing the face or misaligning the top and bottom halves, CK's performance suffered far more than that of controls. We conclude that face recognition normally depends on two systems: (1) a holistic, face-specific system that is dependent on orientation-specific coding of second-order relational features (internal), which is intact in CK, and (2) a part-based object-recognition system, which is damaged in CK and which contributes to face recognition when the face stimulus does not satisfy the domain-specific conditions needed to activate the face system.


2001 ◽  
Vol 15 (4) ◽  
pp. 275-285 ◽  
Author(s):  
Melissa S. James ◽  
Stuart J. Johnstone ◽  
William G. Hayward

Abstract
The effects of manipulating configural and feature information on the face recognition process were investigated by recording event-related potentials (ERPs) from five electrode sites (Fz, Cz, Pz, T5, T6), while 17 European subjects performed an own-race and other-race face recognition task. A series of upright faces were presented in a study phase, followed by a test phase where subjects indicated whether inverted and upright faces were studied or novel via a button press response. An inversion effect, illustrating the disruption of upright configural information, was reflected in accuracy measures and in greater lateral N2 amplitude to inverted faces, suggesting that structural encoding is harder for inverted faces. An own-race advantage was found, which may reflect the use of configural encoding for the more frequently experienced own-race faces, and feature-based encoding for the less familiar other-race faces, and was reflected in accuracy measures and ERP effects. The midline N2 was larger to configurally encoded faces (i.e., own-race and upright), possibly suggesting configural encoding involves more complex processing than feature-based encoding. An N400-like component was sensitive to feature manipulations, with greater amplitude to other-race than own-race faces and to inverted than upright faces. This effect was interpreted as reflecting increased activation of incompatible representations activated by a feature-based strategy used in processing of other-race and inverted faces. The late positive complex was sensitive to configural manipulation, with larger amplitude to other-race than own-race faces, and was interpreted as reflecting the updating of an own-race norm used in face recognition, to incorporate other-race information.


2018 ◽  
Author(s):  
Ciara Greene ◽  
Esther Suess ◽  
Yazeed Kelly

Atypical emotional face processing strategies have been observed in people with autism, and it has been suggested that these may extend in milder form to the general population. The relationship between autistic traits (AT) and gaze behaviour was investigated in a neurotypical adult sample who viewed three videos featuring a happy, fearful and neutral face. Eye-tracking data showed that participants looked longer at the faces (relative to the background) in the emotional conditions than in the neutral condition. As predicted, participants spent more time looking at the eyes during the fearful relative to the happy condition, and more time looking at the mouth during the happy condition. AT did not influence viewing patterns, time to first fixation or number of early fixations in any of the videos. We conclude that AT in the general population does not affect visual processing of emotional faces. More complex social scenes may be needed to reveal a relationship between AT and emotional processing.


Perception ◽  
10.1068/p5779 ◽  
2007 ◽  
Vol 36 (9) ◽  
pp. 1368-1374 ◽  
Author(s):  
Richard Russell ◽  
Pawan Sinha

The face recognition task we perform most often in everyday experience is the identification of people with whom we are familiar. However, because of logistical challenges, most studies focus on unfamiliar-face recognition, wherein subjects are asked to match or remember images of unfamiliar people's faces. Here we explore the importance of two facial attributes—shape and surface reflectance—in the context of a familiar-face recognition task. In our experiment, subjects were asked to recognize color images of the faces of their friends. The images were manipulated such that only reflectance or only shape information was useful for recognizing any particular face. Subjects were actually better at recognizing their friends' faces from reflectance information than from shape information. This provides evidence that reflectance information is important for face recognition in ecologically relevant contexts.


2007 ◽  
Vol 19 (11) ◽  
pp. 1836-1844 ◽  
Author(s):  
Kartik K. Sreenivasan ◽  
Jennifer Katz ◽  
Amishi P. Jha

We investigated the top-down influence of working memory (WM) maintenance on feedforward perceptual processing within occipito-temporal face processing structures. During event-related potential (ERP) recordings, subjects performed a delayed-recognition task requiring WM maintenance of faces or houses. The face-sensitive N170 component elicited by delay-spanning task-irrelevant grayscale noise probes was examined. If early feedforward perceptual activity is biased by maintenance requirements, the N170 ERP component elicited by probes should have a greater N170 amplitude response during face relative to house WM trials. Consistent with this prediction, N170 elicited by probes presented at the beginning, middle, and end of the delay interval was greater in amplitude during face relative to house WM. Thus, these results suggest that WM maintenance demands may modulate early feedforward perceptual processing for the entirety of the delay duration. We argue based on these results that temporally early biasing of domain-specific perceptual processing may be a critical mechanism by which WM maintenance is achieved.


2009 ◽  
Vol 21 (4) ◽  
pp. 625-641 ◽  
Author(s):  
Jürgen M. Kaufmann ◽  
Stefan R. Schweinberger ◽  
A. Mike Burton

We used ERPs to investigate neural correlates of face learning. At learning, participants viewed video clips of unfamiliar people, which were presented either with or without voices providing semantic information. In a subsequent face-recognition task (four trial blocks), learned faces were repeated once per block and presented interspersed with novel faces. To disentangle face from image learning, we used different images for face repetitions. Block effects demonstrated that engaging in the face-recognition task modulated ERPs between 170 and 900 msec poststimulus onset for learned and novel faces. In addition, multiple repetitions of different exemplars of learned faces elicited an increased bilateral N250. Source localizations of this N250 for learned faces suggested activity in fusiform gyrus, similar to that found previously for N250r in repetition priming paradigms [Schweinberger, S. R., Pickering, E. C., Jentzsch, I., Burton, A. M., & Kaufmann, J. M. Event-related brain potential evidence for a response of inferior temporal cortex to familiar face repetitions. Cognitive Brain Research, 14, 398–409, 2002]. Multiple repetitions of learned faces also elicited increased central–parietal positivity between 400 and 600 msec and caused a bilateral increase of inferior–temporal negativity (>300 msec) compared with novel faces. Semantic information at learning enhanced recognition rates. Faces that had been learned with semantic information elicited somewhat less negative amplitudes between 700 and 900 msec over left inferior–temporal sites. Overall, the findings demonstrate a role of the temporal N250 ERP in the acquisition of new face representations across different images. They also suggest that, compared with visual presentation alone, additional semantic information at learning facilitates postperceptual processing in recognition but does not facilitate perceptual analysis of learned faces.


2021 ◽  
pp. 1-14
Author(s):  
N Kavitha ◽  
K Ruba Soundar ◽  
T Sathis Kumar

In recent years, face recognition has been an active research area in computer vision and biometrics. Many feature extraction and classification algorithms have been proposed to perform face recognition. However, the former usually suffer from wide variations in face images, while the latter usually discard local facial features, which are proven to be important for face recognition. In this paper, a novel framework based on merging the advantages of the Key points Local Binary/Tetra Pattern (KP-LTrP) and Improved Hough Transform (IHT) with the Improved DragonFly Algorithm-Kernel Ensemble Learning Machine (IDFA-KELM) is proposed to address the face recognition problem in unconstrained conditions. Initially, the face images are collected from a publicly available dataset. Noise in the input image is then removed by preprocessing with an Adaptive Kuwahara filter (AKF). After preprocessing, the face is detected in the preprocessed image using the Tree-Structured Part Model (TSPM). Then, features such as KP-LTrP and IHT are extracted from the detected face, and the extracted features are reduced using the Information gain based Kernel Principal Component Analysis (IG-KPCA) algorithm. Finally, these reduced features are input to the IDFA-KELM to perform face recognition. The outcomes of the proposed method are examined and contrasted with other existing techniques to confirm that the proposed IDFA-KELM detects human faces efficiently from the input images.

