Sad people are more accurate at expression identification with a smaller own-ethnicity bias than happy people

2018 ◽  
Vol 71 (8) ◽  
pp. 1797-1806
Author(s):  
Peter J Hills ◽  
Dominic M Hill

Sad individuals are more accurate at face identity recognition, possibly because they scan more of the face during encoding. During expression identification tasks, sad individuals do not fixate on the eyes as much as happier individuals do, and fixating on features other than the eyes leads to a reduced own-ethnicity bias. This background suggests that sad individuals would view the eyes less than happy individuals, resulting in improved expression recognition and a reduced own-ethnicity bias. We tested this prediction using an expression identification task with eye tracking. We demonstrate that sad-induced participants show enhanced expression recognition and a reduced own-ethnicity bias relative to happy-induced participants, owing to their scanning more facial features. We conclude that mood affects eye movements and face encoding, producing a wider sampling strategy and deeper encoding of the facial features diagnostic for expression identification.

2018 ◽  
Vol 122 (4) ◽  
pp. 1432-1448 ◽  
Author(s):  
Charlott Maria Bodenschatz ◽  
Anette Kersting ◽  
Thomas Suslow

Orientation of gaze toward specific regions of the face, such as the eyes or the mouth, helps to correctly identify the underlying emotion. The present eye-tracking study investigates whether facial features diagnostic of specific emotional facial expressions are processed preferentially, even when presented outside of subjective awareness. Eye movements of 73 healthy individuals were recorded while they completed an affective priming task. Primes (pictures of happy, neutral, sad, angry, and fearful facial expressions) were presented for 50 ms with forward and backward masking. Participants had to evaluate subsequently presented neutral faces. Results of an awareness check indicated that participants were subjectively unaware of the emotional primes. No affective priming effects were observed, but briefly presented emotional facial expressions elicited early eye movements toward diagnostic regions of the face. Participants oriented their gaze more rapidly to the eye region of the neutral mask after a fearful facial expression and more rapidly to the mouth region after a happy facial expression. Moreover, participants dwelled longest on the eye region after a fearful facial expression, and dwell time on the mouth region was longest after happy facial expressions. Our findings support the idea that briefly presented fearful and happy facial expressions trigger an automatic mechanism that is sensitive to the distribution of relevant facial features and facilitates the orientation of gaze toward them.


2021 ◽  
Author(s):  
Nicole X Han ◽  
Puneeth N. Chakravarthula ◽  
Miguel P. Eckstein

Face processing is fast and efficient, owing to its evolutionary and social importance. A majority of people direct their first eye movement to a featureless point just below the eyes that maximizes accuracy in recognizing a person's identity and gender. Yet the exact properties or features of the face that guide the first eye movements and reduce fixational variability are unknown. Here, we manipulated the presence of facial features and the spatial configuration of features to investigate their effect on the location and variability of first and second fixations to peripherally presented faces. Results showed that observers can use the face outline, individual facial features, and the features' spatial configuration to guide their first eye movements to their preferred point of fixation. The eyes play a preferential role in guiding the first eye movements and reducing fixation variability: eliminating the eyes or altering their position had the greatest influence on the location and variability of fixations and produced the largest detriment to face identification performance. The other internal features (nose and mouth) also contribute to reducing fixation variability. A subsequent experiment measuring detection of single features showed that the eyes have the highest detectability (relative to other features) in the visual periphery, providing a strong sensory signal to guide the oculomotor system. Together, the results suggest a flexible multiple-cue strategy in which the eyes play a preferential role: a robust solution for real-world viewing, where varying eccentricity limits the ability to resolve individual feature properties.


PLoS ONE ◽  
2021 ◽  
Vol 16 (1) ◽  
pp. e0245777
Author(s):  
Fanny Poncet ◽  
Robert Soussignan ◽  
Margaux Jaffiol ◽  
Baptiste Gaudelus ◽  
Arnaud Leleu ◽  
...  

Recognizing facial expressions of emotions is a fundamental ability for adaptation to the social environment. To date, it remains unclear whether the spatial distribution of eye movements predicts accurate recognition or, on the contrary, confusion in the recognition of facial emotions. In the present study, we asked participants to recognize facial emotions while monitoring their gaze behavior using eye-tracking technology. In Experiment 1a, 40 participants (20 women) performed a classic facial emotion recognition task with a 5-choice procedure (anger, disgust, fear, happiness, sadness). In Experiment 1b, a second group of 40 participants (20 women) was exposed to the same materials and procedure, except that they were instructed to indicate (Yes/No) whether the face expressed a specific emotion (e.g., anger), with the five emotion categories tested in distinct blocks. In Experiment 2, two groups of 32 participants performed the same task as in Experiment 1a while exposed to partial facial expressions composed of action units (AUs) present or absent in some parts of the face (top, middle, or bottom). Coding of the AUs produced by the models showed complex facial configurations for most emotional expressions, with several AUs in common. Eye-tracking data indicated that decoders actively gazed at relevant facial actions during both accurate recognition and errors. False recognition was mainly associated with additional visual exploration of less relevant facial actions in regions containing ambiguous AUs or AUs relevant to other emotional expressions. Finally, recognition of facial emotions from partial expressions showed that no single facial action was necessary to effectively communicate an emotional state; rather, recognition relied on the integration of a complex set of facial cues.


2020 ◽  
Author(s):  
ASHUTOSH DHAMIJA ◽  
R.B DUBEY

For ages, face recognition has been one of the most demanding challenges in the field, since aging affects the shape and structure of the face. Age-invariant face recognition (AIFR) is a relatively new area of face recognition research that has recently gained considerable interest for real-world implementations because of its huge potential and relevance. AIFR is still evolving, however, leaving substantial room for further study and for progress in accuracy. The major issues in AIFR are large variations in appearance, texture, and facial features, together with discrepancies in pose and illumination. These problems limit existing AIFR systems and complicate identity recognition. To address them, a new technique, Quadratic Support Vector Machine with Principal Component Analysis (QSVM-PCA), is introduced. Experimental results on the FGNET face-aging dataset suggest that QSVM-PCA achieves better results than other existing techniques, especially when the age range is large. The maximum accuracy achieved by the demonstrated methodology is 98.87%.
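The abstract does not detail the implementation; a minimal sketch of a QSVM-PCA pipeline, assuming scikit-learn, flattened grayscale face crops as input, and placeholder values for the PCA dimensionality and SVM parameters (none of which are given in the paper), might look like this:

```python
# Minimal QSVM-PCA sketch: PCA feature reduction followed by a
# quadratic (degree-2 polynomial kernel) support vector machine.
# n_components=100 and C=1.0 are hypothetical placeholders.
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def build_qsvm_pca(n_components=100):
    """PCA dimensionality reduction followed by a quadratic SVM."""
    return make_pipeline(
        StandardScaler(),
        PCA(n_components=n_components, whiten=True),
        SVC(kernel="poly", degree=2, C=1.0),  # "quadratic" SVM
    )

# X: (n_samples, n_pixels) flattened face images; y: identity labels.
# model = build_qsvm_pca().fit(X_train, y_train)
# accuracy = model.score(X_test, y_test)
```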


2009 ◽  
Vol 276 (1664) ◽  
pp. 1949-1955 ◽  
Author(s):  
Fumihiro Kano ◽  
Masaki Tomonaga

Surprisingly little is known about the eye movements of chimpanzees, despite the potential contribution of such knowledge to comparative cognition studies. Here, we present the first examination of eye tracking in chimpanzees. We recorded the eye movements of chimpanzees as they viewed naturalistic pictures containing a full-body image of a chimpanzee, a human or another mammal; results were compared with those from humans. We found a striking similarity in viewing patterns between the two species. Both chimpanzees and humans looked at the animal figures for longer than at the background and at the face region for longer than at other parts of the body. The face region was detected at first sight by both species when they were shown pictures of chimpanzees and of humans. However, the eye movements of chimpanzees also exhibited distinct differences from those of humans; the former shifted the fixation location more quickly and more broadly than the latter. In addition, the average duration of fixation on the face region was shorter in chimpanzees than in humans. Overall, our results clearly demonstrate the eye-movement strategies common to the two primate species and also suggest several notable differences manifested during the observation of pictures of scenes and body forms.


2021 ◽  
pp. 196-219
Author(s):  
Galina Ya. Menshikova ◽  
Anna O. Pichugina

Background. The article is devoted to the study of the mechanisms of face perception using eye-tracking technology. In the scientific literature, two processes are distinguished: analytic (perception of individual facial features) and holistic (perception of the overall configuration of facial features). Each mechanism is assumed to manifest in a specific pattern of eye movements during face perception. However, authors disagree about which eye-movement patterns reflect the dominance of holistic versus analytic processing. We hypothesized that these contradictions in interpreting eye-movement indicators may stem from how eye-tracker data are processed, namely, from the specifics of identifying areas of interest (eyes, nose, bridge of the nose, lips) and from individual eye-movement strategies. Objective. To reveal how the method of analyzing eye movements affects conclusions about face perception. Method. A method for studying analytic and holistic processing in the task of assessing the attractiveness of upright and inverted faces using eye-tracking technology was developed and tested. The eye-tracking data were analyzed for the entire sample using three types of processing, differing in how the areas of interest (AOIs) were marked, and separately for two groups differing in eye-movement strategies. Strategies were distinguished on the basis of differences in mean fixation duration and saccade amplitude. Results. Whether statistically significant differences in dwell time within the AOIs emerged between upright and inverted faces depended on the method used to identify those AOIs. The distribution of dwell time across zones was closely related to individual eye-movement strategies: analyzing the data separately by group revealed significant differences in the distribution of dwell time across the AOIs. Conclusion. When processing eye-tracking data obtained in studies of face perception, it is necessary to consider individual eye-movement strategies as well as how the AOIs are identified. The absence of a single standard for identifying these areas may explain the inconsistent findings on whether holistic or analytic processing dominates. According to our data, the most effective marking for analyzing holistic processing is a more detailed one, in which not only the main features (eyes, nose, mouth) but also the bridge of the nose is distinguished.
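To illustrate the AOI-dependence the authors describe, here is a minimal sketch of dwell-time computation over rectangular AOIs; the fixation format and all coordinates are hypothetical placeholders, not the markings used in the study:

```python
# Sum fixation durations per AOI; switching between a coarse and a
# detailed marking (with a separate nose-bridge zone) can change
# which AOI accumulates the dwell time.
from typing import Dict, List, Tuple

Fixation = Tuple[float, float, float]     # x, y, duration (ms)
Rect = Tuple[float, float, float, float]  # x0, y0, x1, y1

def dwell_time_by_aoi(fixations: List[Fixation],
                      aois: Dict[str, Rect]) -> Dict[str, float]:
    """Sum fixation durations falling inside each area of interest."""
    dwell = {name: 0.0 for name in aois}
    for x, y, dur in fixations:
        for name, (x0, y0, x1, y1) in aois.items():
            if x0 <= x <= x1 and y0 <= y <= y1:
                dwell[name] += dur
                break  # assign each fixation to at most one AOI
    return dwell

# Hypothetical markings: a coarse scheme vs. a detailed one that
# also distinguishes the bridge of the nose.
coarse = {"eyes": (60, 40, 180, 80), "nose": (95, 100, 145, 140),
          "mouth": (85, 140, 155, 175)}
detailed = dict(coarse, nose_bridge=(100, 80, 140, 100))
```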


2019 ◽  
Vol 44 (5) ◽  
pp. 469-478
Author(s):  
Nicholas E. Souter ◽  
Sudha Arunachalam ◽  
Rhiannon J. Luyster

Eye-tracking research on social attention in infants and toddlers has included heterogeneous stimuli and analysis techniques. This allows measurement of looking to inner facial features under diverse conditions but restricts across-study comparisons. The eye–mouth index (EMI) is a measure of relative preference for looking to the eyes or mouth, independent of time spent attending to the face. The current study assessed whether EMI was more robust to differences in stimulus type than percent dwell time (PDT) toward the eyes, mouth, and face. Participants were typically developing toddlers aged 18–30 months (N = 58). Stimuli were dynamic videos with single and multiple actors. It was hypothesized that stimulus type would affect PDT to the face, eyes, and mouth, but not EMI. Generalized estimating equations demonstrated that all measures, including EMI, were influenced by stimulus type. Nevertheless, planned contrasts suggested that EMI was more robust than PDT when comparing heterogeneous stimuli. EMI may allow for a more robust comparison of social attention to inner facial features across eye-tracking studies.
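The abstract does not spell out the EMI formula; a minimal sketch, assuming the common formulation (eye dwell time relative to combined eye and mouth dwell time), shows how EMI differs from PDT:

```python
# PDT depends on total looking time in the trial; EMI does not.
def percent_dwell_time(region_ms: float, trial_ms: float) -> float:
    """PDT: share of trial time spent looking at a region."""
    return 100.0 * region_ms / trial_ms

def eye_mouth_index(eyes_ms: float, mouth_ms: float) -> float:
    """EMI in [0, 1]: 1 means all eye/mouth looking went to the eyes."""
    total = eyes_ms + mouth_ms
    return eyes_ms / total if total > 0 else float("nan")

# Example: 1,200 ms on eyes and 400 ms on mouth in a 10,000 ms trial
# gives PDT(eyes) = 12% but EMI = 0.75, whatever the face dwell time.
```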


Author(s):  
Bhumika Rajput

When drivers do not get proper sleep or rest, or feel sleepy, they may fall asleep while driving, which can be fatal for the driver and even the passengers. This problem calls for a system that detects drowsiness from the driver's face and rings an alarm so that the driver can take the necessary action. Detection proceeds in three main steps: the system first identifies the face, then the facial features, and then tracks the eyes. Eye state is assessed with correlation-coefficient template matching: the extracted eye image is matched against a template so the system can determine whether the driver's eyes are closing. Blinking is then recognized, and if it falls within a certain range, the alarm goes off.
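The implementation is not given here; a minimal sketch of the eye-state check, assuming OpenCV, a pre-cropped open-eye template image, and a hypothetical 0.6 correlation threshold, could look like this:

```python
# Correlation-coefficient template matching for an eye-state check.
# "open_eye_template.png" and the 0.6 threshold are placeholders.
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
open_eye_template = cv2.imread("open_eye_template.png",
                               cv2.IMREAD_GRAYSCALE)

def eyes_look_open(frame_gray) -> bool:
    """Match an open-eye template inside the detected face region."""
    faces = face_cascade.detectMultiScale(frame_gray, 1.3, 5)
    for (x, y, w, h) in faces:
        roi = frame_gray[y:y + h // 2, x:x + w]  # upper half of face
        scores = cv2.matchTemplate(roi, open_eye_template,
                                   cv2.TM_CCOEFF_NORMED)
        _, max_score, _, _ = cv2.minMaxLoc(scores)
        return max_score > 0.6  # low score -> eyes likely closed
    return True  # no face found: make no drowsiness claim this frame

# In a capture loop, count consecutive "closed" frames; if the count
# exceeds the blink-duration range, sound the alarm.
```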


2021 ◽  
Vol 2021 ◽  
pp. 1-11
Author(s):  
Bin Yuan ◽  
Changqing Du ◽  
Zhongyuan Wang ◽  
Rong Zhu

Identity recognition is a research hotspot in the information age. More and more settings require identity recognition, especially the smart home, where recognizing the identity of household members avoids many problems in areas such as home access and network information authentication. In current smart-home identification, especially systems based on face recognition, authentication is basically performed through feature matching. Although this method is convenient and quick, it lacks intelligence: differences in make-up, cosmetic surgery, pose, and other factors greatly reduce the system's accuracy. In this paper, face recognition is used for identity authentication. First, the AdaBoost learning algorithm is used to construct face-detection and eye-detection classifiers that detect and localize the face and eyes. Second, the two-dimensional discrete wavelet transform is used to extract facial features and construct a personal dynamic face-feature database. Finally, an improved elastic template matching algorithm is used to establish an intelligent classification method for dynamic face elasticity models. Simulations show that the proposed method can intelligently adapt to various environments without reducing accuracy.
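A minimal sketch of the first two stages, assuming OpenCV's AdaBoost-trained Haar cascades for face and eye detection and PyWavelets for the 2-D discrete wavelet transform; the improved elastic template matching stage is not specified in the abstract and is omitted here:

```python
# Stage 1: AdaBoost cascade detection of face and eyes.
# Stage 2: 2-D DWT features from the localized face region.
import cv2
import numpy as np
import pywt

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_eye.xml")

def dwt_face_features(frame_gray) -> np.ndarray:
    """Detect a face, verify eyes, and return 2-D DWT coefficients."""
    faces = face_cascade.detectMultiScale(frame_gray, 1.3, 5)
    for (x, y, w, h) in faces:
        face = cv2.resize(frame_gray[y:y + h, x:x + w], (128, 128))
        if len(eye_cascade.detectMultiScale(face)) >= 2:
            # Approximation coefficients as a compact face descriptor.
            cA, (cH, cV, cD) = pywt.dwt2(face, "haar")
            return cA.ravel()
    return np.array([])  # no verified face in this frame
```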


2011 ◽  
Author(s):  
Lieke Curfs ◽  
Rob Holland ◽  
Jose Kerstholt ◽  
Daniel Wigboldus
