Eyes as the Center of Focus in the Visual Examination of Human Faces

1978 ◽  
Vol 47 (3) ◽  
pp. 857-858 ◽  
Author(s):  
Stephen W. Janik ◽  
A. Rodney Wellens ◽  
Myron L. Goldberg ◽  
Louis F. Dell'Osso

An experiment was conducted to determine the degree to which individuals focus upon the eye region of others while visually inspecting their faces. Eye movements of 16 male subjects were recorded with an eye-tracking camera; subjects spent approximately 40% of their looking time focused upon the eye region of facial photographs, with each of the remaining parts of the face receiving less attention.
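
A proportion-of-looking-time measure of this kind can be computed directly from raw fixation records. The following is a minimal sketch, assuming hypothetical fixation data and a hand-drawn eye-region rectangle; the original study used a hardware eye-tracking camera, and none of these names come from it.

```python
# Minimal sketch: proportion of looking time spent inside an area of
# interest (AOI). Fixation records and the AOI rectangle are hypothetical.

def dwell_proportion(fixations, aoi):
    """fixations: list of dicts with 'x', 'y', 'dur' (ms).
    aoi: (x_min, y_min, x_max, y_max) screen rectangle."""
    x0, y0, x1, y1 = aoi
    in_aoi = sum(f["dur"] for f in fixations
                 if x0 <= f["x"] <= x1 and y0 <= f["y"] <= y1)
    total = sum(f["dur"] for f in fixations)
    return in_aoi / total if total else 0.0

# Example: three fixations, eye-region AOI covering the upper face.
fixations = [
    {"x": 210, "y": 140, "dur": 320},  # lands in the eye region
    {"x": 205, "y": 150, "dur": 280},  # eye region again
    {"x": 230, "y": 330, "dur": 400},  # mouth region
]
eye_region = (150, 100, 350, 200)
print(dwell_proportion(fixations, eye_region))  # -> 0.6
```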

2012 ◽  
Vol 37 (2) ◽  
pp. 95-99 ◽  
Author(s):  
Elisa Di Giorgio ◽  
David Méary ◽  
Olivier Pascalis ◽  
Francesca Simion

The current study aimed to investigate own- vs. other-species preferences in 3-month-old infants. The infants' eye movements were recorded during a visual preference paradigm to assess whether they show a preference for own-species faces when contrasted with other-species faces. Human and monkey faces, equated for all low-level perceptual characteristics, were used. Our results demonstrated that 3-month-old infants preferred the human face, suggesting that the face perception system becomes species-specific after 3 months of visual experience with a specific class of faces. The eye-tracking results also showed that fixations were concentrated on the eye area of human faces, supporting the notion of their importance in holding visual attention.


2021 ◽  
Vol 12 ◽  
Author(s):  
Nina Marsh ◽  
Dirk Scheele ◽  
Danilo Postin ◽  
Marc Onken ◽  
Rene Hurlemann

Visual attention directed towards the eye region of a face emerges rapidly, even before conscious awareness, and regulates social interactions in terms of approach versus avoidance. Current perspectives on the neuroendocrine substrates of this behavioral regulation highlight a role of the peptide hormone oxytocin (OXT), but it remains unclear whether the facilitating effects of OXT vary as a function of facial familiarity. Here, a total of 73 healthy participants were enrolled in an eye-tracking experiment specifically designed to test whether intranasal OXT (24 IU) augments gaze duration toward the eye region across four face categories: the participants' own face, the face of their romantic partner, the face of a familiar person (a close friend), and the face of an unfamiliar person (a stranger). We found that OXT treatment induced a tendency to spend more time looking into the eyes of familiar persons (partner and close friend) compared to placebo. This effect was not evident in the self and unfamiliar conditions. Independent of treatment, volunteers scoring high on autistic-like traits (AQ-high) spent less time looking at the eyes of all faces except their partner's. Collectively, our results show that the OXT system is involved in facilitating an attentional bias towards the eye region of familiar faces, which convey safety and support, especially in anxious contexts. In contrast, autistic-like traits were associated with reduced attention to the eye region of a face regardless of familiarity and OXT treatment.


2018 ◽  
Vol 122 (4) ◽  
pp. 1432-1448 ◽  
Author(s):  
Charlott Maria Bodenschatz ◽  
Anette Kersting ◽  
Thomas Suslow

Orientation of gaze toward specific regions of the face, such as the eyes or the mouth, helps to correctly identify the underlying emotion. The present eye-tracking study investigates whether facial features diagnostic of specific emotional facial expressions are processed preferentially, even when presented outside of subjective awareness. Eye movements of 73 healthy individuals were recorded while they completed an affective priming task. Primes (pictures of happy, neutral, sad, angry, and fearful facial expressions) were presented for 50 ms with forward and backward masking. Participants had to evaluate subsequently presented neutral faces. Results of an awareness check indicated that participants were subjectively unaware of the emotional primes. No affective priming effects were observed, but the briefly presented emotional facial expressions elicited early eye movements toward diagnostic regions of the face. Participants oriented their gaze more rapidly to the eye region of the neutral mask after a fearful facial expression, and more rapidly to the mouth region after a happy facial expression. Moreover, participants dwelled longest on the eye region after a fearful facial expression, and dwell time on the mouth region was longest after happy facial expressions. Our findings support the idea that briefly presented fearful and happy facial expressions trigger an automatic mechanism that is sensitive to the distribution of relevant facial features and facilitates the orientation of gaze toward them.
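
The forward-mask / 50 ms prime / backward-mask sequence is a standard construction for subliminal presentation. Below is a minimal sketch of one such trial in PsychoPy, assuming a 60 Hz display (3 frames ≈ 50 ms); the image files and the mask durations are illustrative placeholders, since the paper specifies only the 50 ms prime duration.

```python
# Minimal sketch of one masked-priming trial (PsychoPy).
# Assumes a 60 Hz display, so 3 frames ~= 50 ms; image files are placeholders.
from psychopy import visual, core

win = visual.Window(size=(1024, 768), units="pix", color="grey")

forward_mask  = visual.ImageStim(win, image="mask.png")
prime         = visual.ImageStim(win, image="fearful_face.png")
backward_mask = visual.ImageStim(win, image="mask.png")
target        = visual.ImageStim(win, image="neutral_face.png")

def show(stim, n_frames):
    """Draw a stimulus for an exact number of screen refreshes."""
    for _ in range(n_frames):
        stim.draw()
        win.flip()

show(forward_mask, 6)    # ~100 ms forward mask (duration assumed)
show(prime, 3)           # ~50 ms prime, as in the study
show(backward_mask, 6)   # ~100 ms backward mask (duration assumed)
show(target, 60)         # neutral face shown for evaluation (~1 s)

win.close()
core.quit()
```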


2021 ◽  
pp. 095679762199666
Author(s):  
Sebastian Schindler ◽  
Maximilian Bruchmann ◽  
Claudia Krasowski ◽  
Robert Moeck ◽  
Thomas Straube

Our brains rapidly respond to human faces and can differentiate between many identities, retrieving rich semantic and emotional knowledge. Studies provide a mixed picture of how such information affects event-related potentials (ERPs). We systematically examined the effect of feature-based attention on ERP modulations to briefly presented faces of individuals associated with a crime. The tasks required participants (N = 40 adults) to discriminate the orientation of lines overlaid onto the face, the age of the face, or emotional information associated with the face. Negative faces amplified the N170 ERP component during all tasks, whereas the early posterior negativity (EPN) and late positive potential (LPP) components were increased only when the emotional information was attended to. These findings suggest that during early configural analyses (N170), evaluative information potentiates face processing regardless of feature-based attention. During intermediate, only partially resource-dependent processing stages (EPN) and late stages of elaborate stimulus processing (LPP), attention to the acquired emotional information is necessary for amplified processing of negatively evaluated faces.
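
Component effects like these are conventionally quantified as mean amplitudes within fixed time windows over scalp regions of interest. A hedged sketch using MNE-Python follows; the epoch file, electrode selections, and window boundaries are illustrative assumptions rather than the paper's actual analysis parameters.

```python
# Sketch: mean ERP amplitude per component window (MNE-Python).
# File name, channel picks, and time windows are illustrative only.
import mne

epochs = mne.read_epochs("faces-epo.fif")  # preprocessed, epoched EEG

windows = {
    "N170": (0.15, 0.20),   # early configural face processing
    "EPN":  (0.20, 0.35),   # early posterior negativity
    "LPP":  (0.40, 0.80),   # late positive potential
}
channels = {"N170": ["P7", "P8"], "EPN": ["PO7", "PO8"], "LPP": ["Pz", "CPz"]}

for name, (tmin, tmax) in windows.items():
    amp = (epochs.copy()
                 .pick(channels[name])
                 .crop(tmin, tmax)
                 .get_data()          # (n_epochs, n_channels, n_times), volts
                 .mean())
    print(f"{name}: {amp * 1e6:.2f} uV")
```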


Symmetry ◽  
2018 ◽  
Vol 10 (10) ◽  
pp. 442 ◽  
Author(s):  
Dongxue Liang ◽  
Kyoungju Park ◽  
Przemyslaw Krompiec

With the advent of deep learning methods, portrait video stylization has become more popular. In this paper, we present a robust method for automatically stylizing portrait videos that contain small human faces. By extending Mask R-CNN (Mask Regions with Convolutional Neural Network features) with a CNN branch that detects the contour landmarks of the face, we divided the input frame into three regions: the region of facial features, the region of the inner face surrounded by 36 face contour landmarks, and the region of the outer face. While keeping the facial features region as it is, we used two different stroke models to render the other two regions. During the non-photorealistic rendering (NPR) of the animation video, we combined deformable strokes with optical flow estimation between adjacent frames to follow the underlying motion coherently. The experimental results demonstrated that our method can not only effectively preserve small and distinct facial features, but also follow the underlying motion coherently.
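
Two building blocks of such a pipeline, a polygonal inner-face mask built from contour landmarks and dense optical flow used to advect stroke anchors between frames, can be sketched with OpenCV as follows. The landmark array, frame files, and stroke points are placeholders; in the paper the landmarks come from the extended Mask R-CNN branch.

```python
# Sketch of two pipeline pieces: an inner-face mask from contour landmarks,
# and dense optical flow for frame-to-frame stroke coherence. Inputs are
# placeholders, not the paper's actual data.
import cv2
import numpy as np

def inner_face_mask(frame, contour_landmarks):
    """contour_landmarks: (36, 2) array of face-contour points (x, y)."""
    mask = np.zeros(frame.shape[:2], dtype=np.uint8)
    pts = contour_landmarks.reshape(-1, 1, 2).astype(np.int32)
    cv2.fillPoly(mask, [pts], 255)
    return mask  # 255 inside the inner face, 0 outside

def advect_strokes(prev_gray, curr_gray, stroke_points):
    """Move stroke anchor points along dense Farneback optical flow."""
    flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    pts = stroke_points.astype(int)
    return stroke_points + flow[pts[:, 1], pts[:, 0]]  # flow is (dx, dy)

# Usage sketch: read two frames, move stroke anchors from frame t to t+1.
prev = cv2.cvtColor(cv2.imread("frame_000.png"), cv2.COLOR_BGR2GRAY)
curr = cv2.cvtColor(cv2.imread("frame_001.png"), cv2.COLOR_BGR2GRAY)
strokes = np.array([[120.0, 80.0], [130.0, 95.0]])  # anchor points (x, y)
strokes = advect_strokes(prev, curr, strokes)
```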


2022 ◽  
Vol 14 (1) ◽  
pp. 0-0

Attendance management can become a tedious task for teachers if it is performed manually. This problem can be solved with the help of an automatic attendance management system, but validation is one of the main issues in such a system. Generally, biometrics are used in smart automatic attendance systems, and managing attendance with the help of face recognition is the biometric method with better efficiency compared to the others. Smart attendance based on instant face recognition is a real-life solution that helps in handling daily activities and maintaining a student attendance system. A face recognition-based attendance system uses face biometrics, relying on high-resolution monitor video and other technologies to recognize the faces of students. In this project, the system finds and recognizes human faces quickly and accurately in images or videos captured through a surveillance camera. It converts the frames of the video into images so that the system can easily search for each face in the attendance database.
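
As a rough illustration of the recognition step, the open-source face_recognition library can match faces in a captured frame against enrolled student photos. This is a minimal sketch under assumed file names and roster; the paper's own pipeline uses surveillance video rather than single images.

```python
# Sketch: mark attendance by matching faces in a captured frame against
# enrolled student photos. Library: face_recognition (dlib-based).
# File names and the roster are illustrative placeholders.
import face_recognition

# Enrolment: one reference photo per student (assumes one face per photo).
roster = {"alice": "alice.jpg", "bob": "bob.jpg"}
known = {name: face_recognition.face_encodings(
             face_recognition.load_image_file(path))[0]
         for name, path in roster.items()}

# Recognition: encode every face found in a surveillance frame.
frame = face_recognition.load_image_file("classroom_frame.jpg")
present = set()
for enc in face_recognition.face_encodings(frame):
    matches = face_recognition.compare_faces(list(known.values()), enc,
                                             tolerance=0.6)
    for name, hit in zip(known, matches):
        if hit:
            present.add(name)

print("Present:", sorted(present))  # would be written to the database
```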


2020 ◽  
Author(s):  
Andrew Langbehn ◽  
Dasha Yermol ◽  
Fangyun Zhao ◽  
Christopher Thorstenson ◽  
Paula Niedenthal

According to the familiar axiom, the eyes are the window to the soul. However, wearing masks to prevent the spread of COVID-19 occludes a large portion of the face. Do the eyes carry all of the information we need to perceive each other's emotions? We addressed this question in two studies. In the first, 162 Amazon Mechanical Turk (MTurk) workers saw videos of human faces displaying expressions of happiness, disgust, anger, and surprise that were fully visible or covered by N95, surgical, or cloth masks, and rated the extent to which the expressions conveyed each of the four emotions. Across mask conditions, participants perceived significantly lower levels of the expressed (target) emotion, and this was particularly true for expressions composed of greater facial action in the lower part of the face. Furthermore, higher levels of other (non-target) emotions were perceived in masked compared to visible faces. In the second study, 60 MTurk workers rated the extent to which three types of smiles (reward, affiliation, and dominance smiles), either visible or masked, conveyed positive feelings, reassurance, and superiority. They reported that masked smiles communicated less of the target signal than visible smiles, but not more of other possible signals. Political attitudes were not systematically associated with disruptions in the processing of facial expression caused by masking the face.


PLoS ONE ◽  
2021 ◽  
Vol 16 (1) ◽  
pp. e0245777
Author(s):  
Fanny Poncet ◽  
Robert Soussignan ◽  
Margaux Jaffiol ◽  
Baptiste Gaudelus ◽  
Arnaud Leleu ◽  
...  

Recognizing facial expressions of emotions is a fundamental ability for adaptation to the social environment. To date, it remains unclear whether the spatial distribution of eye movements predicts accurate recognition or, on the contrary, confusion in the recognition of facial emotions. In the present study, we asked participants to recognize facial emotions while monitoring their gaze behavior using eye-tracking technology. In Experiment 1a, 40 participants (20 women) performed a classic facial emotion recognition task with a 5-choice procedure (anger, disgust, fear, happiness, sadness). In Experiment 1b, a second group of 40 participants (20 women) was exposed to the same materials and procedure except that they were instructed to say whether (i.e., Yes/No response) the face expressed a specific emotion (e.g., anger), with the five emotion categories tested in distinct blocks. In Experiment 2, two groups of 32 participants performed the same task as in Experiment 1a while exposed to partial facial expressions composed of action units (AUs) present or absent in some parts of the face (top, middle, or bottom). The coding of the AUs produced by the models showed complex facial configurations for most emotional expressions, with several AUs in common. Eye-tracking data indicated that relevant facial actions were actively gazed at by the decoders during both accurate recognition and errors. False recognition was mainly associated with the additional visual exploration of less relevant facial actions in regions containing ambiguous AUs or AUs relevant to other emotional expressions. Finally, the recognition of facial emotions from partial expressions showed that no single facial action was necessary to effectively communicate an emotional state. Rather, the recognition of facial emotions relied on the integration of a complex set of facial cues.


PLoS ONE ◽  
2014 ◽  
Vol 9 (4) ◽  
pp. e93914 ◽  
Author(s):  
Jonathon R. Shasteen ◽  
Noah J. Sasson ◽  
Amy E. Pinkham

Author(s):  
Pavan Narayana A ◽  
Janardhan Guptha S ◽  
Deepak S ◽  
Pujith Sai P ◽  
...  

January 27, 2020 is a day that will be remembered by the Indian people for decades: a deadly virus entered the life of a young woman, and it has since been so threatening that it has claimed the lives of 3.26 lakh people in India alone. With the onset of the virus, the government made it mandatory to wear masks when going out into crowded or public areas such as markets, malls, and private gatherings. Since it is difficult for a person at the entrance to check whether everyone is entering with a mask, in this paper we have designed a smart-door face mask detection system to check who is and is not wearing a mask. We designed this face mask detection using different technologies, namely OpenCV, MTCNN, CNN, IFTTT, and ThingSpeak, and we use Python to program the code. MTCNN, using the Viola-Jones algorithm, detects the human faces present on screen: the face is first detected and located on the grayscale image, and that location is then mapped onto the colored image. The CNN for detecting masks on human faces is constructed using sample datasets and MobileNetV2, which acts as an object detector; in our case, the object is the mask. ThingSpeak is an open-source Internet of Things application used to display the information we get from the smart door. The deployed application can also detect when people are moving. With this face mask detection smart door, as part of the effort to stop the spread of the virus, we can help prevent the virus from spreading and regain our happy life.
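
A condensed sketch of the detection-plus-classification stage described above is given below, using the mtcnn package and a Keras MobileNetV2-based classifier. The model file, its two-class output order, and the input preprocessing are assumptions for illustration; the ThingSpeak/IFTTT reporting stage is omitted.

```python
# Sketch: detect faces with MTCNN, then classify each crop as mask /
# no-mask with a MobileNetV2-based Keras model. The model file
# "mask_detector.h5" and its label order are illustrative assumptions.
import cv2
import numpy as np
from mtcnn import MTCNN
from tensorflow.keras.models import load_model

detector = MTCNN()
classifier = load_model("mask_detector.h5")  # trained on mask/no-mask crops

# MTCNN expects an RGB image; OpenCV loads BGR, so convert first.
frame = cv2.cvtColor(cv2.imread("door_camera.jpg"), cv2.COLOR_BGR2RGB)
for face in detector.detect_faces(frame):
    x, y, w, h = face["box"]
    crop = cv2.resize(frame[y:y + h, x:x + w], (224, 224))
    crop = crop.astype("float32") / 255.0          # assumed preprocessing
    mask_prob, no_mask_prob = classifier.predict(crop[None, ...])[0]
    label = "mask" if mask_prob > no_mask_prob else "no mask"
    print(f"face at ({x}, {y}): {label}")
```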

