Eye tracking study of frontal and profile face image observation and recognition

Author(s): Andrej Iskra

Facial images are an important element of nonverbal communication. Eye-tracking systems enable us to measure objectively how observers look at facial images and thus to study their viewing behaviour. Different ways of looking at facial images influence how faces are memorized and how well they are later recognized. In the real world we encounter different representations of faces, especially when we view them from different angles, and memory and recognition performance differ depending on whether test subjects see a face in the frontal or in the profile view. We studied cross-view observation and recognition and therefore performed two tests. In the first test, subjects observed facial images shown in the frontal view and recognized them in the profile view; in the second test, faces were observed in profile and recognized in the frontal view. The presentation time in the observation test was four seconds, which previous tests had shown to be adequate for reliable recognition. The results were analysed with the well-known time-spatial method based on fixations and saccades, and with a new area method based on heatmaps of the eye-tracking results. We found that recognition success (correct and incorrect recognitions) was better for the combination of frontal-view observation and profile recognition. This was confirmed by the fixation duration and saccade length measurements: more visible facial features led to shorter fixation durations and shorter saccades, which in turn led to better memorization. The area analysis, in which we measured the area, perimeter and circularity of the heatmaps, confirmed these results: larger heatmap areas and perimeters and lower circularity were associated with better memorization of facial images and therefore better recognition.
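As an illustration of the area analysis described above, the sketch below computes area, perimeter and circularity from a gaze heatmap with OpenCV. The function name and the 0.5 binarization threshold are illustrative assumptions, not the authors' exact procedure.

```python
import cv2
import numpy as np

def heatmap_shape_metrics(heatmap, threshold=0.5):
    """Area, perimeter and circularity of a gaze heatmap (illustrative).

    heatmap   -- 2D float array in [0, 1] (aggregated fixation density)
    threshold -- assumed density level defining the "attended" region
    """
    # Binarize the heatmap: pixels above the threshold count as attended.
    mask = (heatmap >= threshold).astype(np.uint8)

    # Extract the outer contours of the attended region(s).
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    area = sum(cv2.contourArea(c) for c in contours)
    perimeter = sum(cv2.arcLength(c, closed=True) for c in contours)

    # Circularity = 4*pi*A / P^2: 1.0 for a perfect circle, smaller for
    # elongated or scattered regions.
    circularity = 4 * np.pi * area / perimeter**2 if perimeter > 0 else 0.0
    return area, perimeter, circularity
```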

Author(s): Andrej Iskra, Helena Gabrijelčič Tomc

Facial images have been studied with eye-tracking systems for many years. However, most researchers concentrate on the frontal view; much less research exists on faces shown at other angles or in profile. Yet in reality we often view faces from different angles, not just frontally. In our research we used profile presentations of facial images and analysed memory and recognition as a function of display time and image dimensions. Two tests were performed, an observation test and a recognition test, and we used the well-known yes/no detection theory. We used four display times in the observation test (1, 2, 4 and 8 seconds) and two image dimensions (640 × 480 and 1280 × 960 pixels). All facial images were taken from the standardized Minear & Park face database. We measured recognition success, reported as the discrimination index A′, incorrect recognition (FA, false alarms), and the time-spatial measures of fixation duration and saccade length; eye tracking provides objective measurements of how the facial images were viewed. The results showed that extending the display time of facial images improves recognition performance, with a logarithmic dependence, while incorrect recognition decreases. Both effects are independent of the image dimensions, a fact other researchers have also demonstrated for frontal facial images. We also found that fixation duration and saccade length increase with display time. All measures changed most markedly at a display time of four seconds, which we interpret as the time at which subjects had scanned the whole face and their gaze returned to its centre (in our case, the eyes and mouth).
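The discrimination index A′ used here is commonly computed with the nonparametric formula of Grier (1971) from the hit rate and the false-alarm rate. The sketch below shows that standard formulation; it is an illustration, not necessarily the authors' exact computation.

```python
def a_prime(hit_rate, fa_rate):
    """Nonparametric discrimination index A' (Grier, 1971).

    hit_rate -- proportion of old faces correctly recognized
    fa_rate  -- proportion of new faces wrongly called "old" (false alarms)
    Returns 0.5 for chance performance and 1.0 for perfect discrimination.
    """
    if hit_rate == fa_rate:
        return 0.5  # chance performance
    if hit_rate > fa_rate:
        return 0.5 + ((hit_rate - fa_rate) * (1 + hit_rate - fa_rate)) / \
               (4 * hit_rate * (1 - fa_rate))
    # Below-chance case: the mirrored form of the formula.
    return 0.5 - ((fa_rate - hit_rate) * (1 + fa_rate - hit_rate)) / \
           (4 * fa_rate * (1 - hit_rate))
```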


2013 · Vol 6 (4)
Author(s): Banu Cangöz, Arif Altun, Petek Aşkar, Zeynel Baran, Sacide Güzin Mazman

The main objective of the study was to investigate the effects of model age, observer gender, and lateralization on visual scanning patterns while looking at emotional facial expressions. Data were collected with eye-tracking methodology. The areas of interest were set to include the eyes, nose and mouth. The selected eye metrics were first fixation duration, total fixation duration and fixation count, recorded for different emotional expressions (sad, happy, neutral) and conditions (model age, face part and lateralization). The results revealed that participants looked at older faces for a shorter time and fixated on them less than on younger faces. The study also showed that when participants passively viewed the expressions, the eyes were the important area for identifying sadness and happiness, whereas both the eyes and the nose were important for identifying the neutral expression. The longest-fixated facial area was the eyes for both young and old models. Lastly, the hemispheric lateralization hypothesis of emotional face processing was supported.
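As a rough illustration of how such per-AOI metrics are derived from raw fixation data, the sketch below computes first fixation duration, total fixation duration and fixation count for rectangular areas of interest; the AOI coordinates and the fixation record format are hypothetical.

```python
# Hypothetical AOI rectangles (x0, y0, x1, y1) in screen pixels.
AOIS = {"eyes":  (180, 120, 460, 200),
        "nose":  (260, 200, 380, 300),
        "mouth": (240, 300, 400, 380)}

def aoi_metrics(fixations, aois=AOIS):
    """Per-AOI first fixation duration, total duration and count.

    fixations -- time-ordered list of (x, y, duration_ms) tuples
    """
    metrics = {name: {"first_ms": None, "total_ms": 0.0, "count": 0}
               for name in aois}
    for x, y, dur in fixations:
        for name, (x0, y0, x1, y1) in aois.items():
            if x0 <= x <= x1 and y0 <= y <= y1:
                m = metrics[name]
                if m["first_ms"] is None:   # first fixation landing here
                    m["first_ms"] = dur
                m["total_ms"] += dur
                m["count"] += 1
    return metrics
```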


2002 · Vol 14 (6) · pp. 615-624
Author(s): Hiroshi Kobayashi, Kohki Kikuchi, Miyako Tazaki, Yoshibumi Nakane, …

In response to the need for quantitative information in diagnosing psychiatric disorders, we have developed an automated interview with automated extraction of facial organs and acquisition of quantitative diagnostic information. We obtain quantitative data for diagnosis by analysing facial expressions during the automated interviews. We focus on the movements of the pupils and the head and on the correlation between them: we developed a method for the automated measurement of the time-sequential position of the pupil relative to the frontal view of the face. By calculating the correlation between pupil and head movement, we obtain quantitative information that may help diagnose, for example, whether a subject is schizophrenically inclined.
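As an illustration of the kind of time-series correlation described here, the sketch below computes the Pearson correlation between per-frame pupil and head positions. It assumes these positions have already been extracted from the interview video; it is not the authors' implementation.

```python
import numpy as np

def pupil_head_correlation(pupil_x, head_x):
    """Pearson correlation between pupil and head horizontal positions.

    pupil_x, head_x -- equally sampled time series (one value per video
    frame), e.g. pupil centre and face centre extracted per frame.
    """
    pupil_x = np.asarray(pupil_x, dtype=float)
    head_x = np.asarray(head_x, dtype=float)
    # np.corrcoef centres each series internally, so the result reflects
    # co-movement rather than absolute position on screen.
    return float(np.corrcoef(pupil_x, head_x)[0, 1])
```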


2009 · Vol 8 (3) · pp. 887-897
Author(s): Vishal Paika, Er. Pankaj Bhambri

The face is the feature that distinguishes a person, and facial appearance is vital for human recognition. Features such as the forehead, skin, eyes, ears, nose, cheeks, mouth, lips and teeth help us humans recognize a particular face among millions, even after a long span of time and despite large changes in appearance due to ageing, expression, viewing conditions and distractions such as disfigurement, scars, a beard or a hairstyle. A face is not merely a set of facial features but rather something meaningful in its form. In this paper, a system is designed to recognize faces based on these various facial features. Different edge detection techniques are used to reveal the outlines of the face, eyes, ears, nose, teeth and so on. Features are extracted as distances between important feature points. The resulting feature set is normalized and fed to artificial neural networks, which are trained for the recognition of facial images.
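A rough sketch of such a pipeline is shown below: Canny edge detection to outline facial features, pairwise distances between feature points as the feature vector, and a small neural network for classification. The landmark detection step and all parameter values are assumptions for illustration, not the paper's exact design.

```python
import cv2
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler

def edge_map(gray_face):
    """Outline the face and its organs with Canny edge detection."""
    return cv2.Canny(gray_face, 100, 200)

def distance_features(landmarks):
    """Pairwise distances between key feature points (eyes, nose, mouth...).

    landmarks -- (N, 2) array of feature-point coordinates located on the
    edge map (the landmark detection itself is assumed done elsewhere).
    """
    pts = np.asarray(landmarks, dtype=float)
    i, j = np.triu_indices(len(pts), k=1)
    return np.linalg.norm(pts[i] - pts[j], axis=1)

# Training sketch: X holds one distance vector per face image, y the
# person identities; features are normalized before the network sees them.
# X, y = ...  (built from a labelled face set)
# scaler = StandardScaler().fit(X)
# net = MLPClassifier(hidden_layer_sizes=(64,), max_iter=1000)
# net.fit(scaler.transform(X), y)
```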


2021 · pp. 003329412110184
Author(s): Paola Surcinelli, Federica Andrei, Ornella Montebarocci, Silvana Grandi

Aim of the research: The literature on emotion recognition from facial expressions shows significant differences in recognition ability depending on the stimulus presented. Affective information is not distributed uniformly across the face, and recent studies have shown the importance of the mouth and eye regions for correct recognition. However, previous studies mainly used facial expressions presented frontally, and those that used profile views employed between-subjects designs or children's faces as stimuli. The present research investigates differences in emotion recognition between faces presented in frontal and in profile view using a within-subjects experimental design. Method: The sample comprised 132 Italian university students (88 female; mean age = 24.27 years, SD = 5.89). Face stimuli displayed frontally and in profile were selected from the KDEF set. Two emotion-specific recognition accuracy scores, frontal and profile, were computed from the average of correct responses for each emotional expression. In addition, viewing times and response times (RT) were recorded. Results: Frontally presented expressions of fear, anger and sadness were recognized significantly better than the same emotions shown in profile, while no differences were found for the other emotions. Longer viewing times were also found when faces expressing fear and anger were presented in profile. In the present study, an impairment in recognition accuracy was observed only for those emotions that rely mostly on the eye regions.


2019 · Vol 35 (05) · pp. 525-533
Author(s): Evrim Gülbetekin, Seda Bayraktar, Özlenen Özkan, Hilmi Uysal, Ömer Özkan

Abstract: The authors tested face discrimination, face recognition, object discrimination, and object recognition in two face transplantation patients (FTPs) who had had facial injuries since infancy, a patient who had undergone facial surgery for a recent wound, and two control subjects. In Experiment 1, the authors showed them original faces and morphed forms of those faces and asked them to rate the similarity between the two. In Experiment 2, they showed old, new, and implicit faces and asked whether the participants recognized them. In Experiment 3, they showed original objects and morphed forms of those objects and asked for similarity ratings. In Experiment 4, they showed old, new, and implicit objects and asked whether the participants recognized them. Object discrimination and object recognition performance did not differ between the FTPs and the controls. However, the face discrimination performance of FTP2 and the face recognition performance of FTP1 were poorer than those of the controls. The authors therefore concluded that the structure of the face might affect face processing.


2018 · Vol 36 (6) · pp. 1027-1042
Author(s): Quan Lu, Jiyue Zhang, Jing Chen, Ji Li

Purpose: This paper examines the effect of domain knowledge on eye-tracking measures and predicts readers' domain knowledge from these measures in a navigational table of contents (N-TOC) system. Design/methodology/approach: A controlled experiment of three reading tasks was conducted in an N-TOC system with 24 postgraduates of Wuhan University. Fixation duration, fixation count and inter-scanning transitions were collected and calculated, and participants' domain knowledge was measured with pre-experiment questionnaires. Logistic regression analysis was used to build the prediction model, whose performance was evaluated against a baseline model. Findings: Novices spent significantly more time fixating on the text area than experts, owing to the difficulty of understanding its information. Total fixation duration on the text area (TFD_T) was a significant negative predictor of domain knowledge. The logistic regression model using eye-tracking measures outperformed the baseline model, with accuracy, precision and F (β = 1) scores of 0.71, 0.86 and 0.79, respectively. Originality/value: Little research has investigated the effect of domain knowledge on eye-tracking measures during reading or the prediction of domain knowledge from such measures; most studies focus on multimedia learning, and prediction of domain knowledge has been examined mainly in information search. This paper contributes to the literature on the effect of domain knowledge on eye-tracking measures during N-TOC reading and on predicting domain knowledge from them.
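A minimal sketch of such a prediction model is shown below, using scikit-learn's logistic regression. The data are synthetic stand-ins: the feature columns (e.g. TFD_T, fixation count, inter-scanning transitions) and labels are placeholders, not the study's data.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, precision_score, f1_score

rng = np.random.default_rng(0)
# Synthetic stand-in: one row per reader with three eye-tracking measures
# (TFD_T, fixation count, inter-scanning transitions); y: 1 = expert.
X = rng.normal(size=(24, 3))
y = (X[:, 0] < 0).astype(int)   # experts fixate the text area less

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                          random_state=0)
model = LogisticRegression().fit(X_tr, y_tr)
pred = model.predict(X_te)
print(accuracy_score(y_te, pred),
      precision_score(y_te, pred, zero_division=0),
      f1_score(y_te, pred, zero_division=0))
```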


2018 · Vol 9 (1) · pp. 60-77
Author(s): Souhir Sghaier, Wajdi Farhat, Chokri Souani

This manuscript presents an improved system that can detect and recognize a person in 3D space automatically and without interaction with the person's face. The system is based not only on quantum computation and measurements to extract the feature vectors in the characterization phase but also on a learning algorithm (SVM) to classify and recognize the person. The research presents an improved technique for automatic 3D face recognition that uses anthropometric proportions and measurements to detect and extract the region of interest that is unaffected by facial expression. The approach can handle incomplete and noisy images, reject non-facial areas automatically, and deal with holes in the meshed and textured 3D image. It is also stable under small translations and rotations of the face. All experimental tests were performed on two 3D face datasets, FRAV 3D and GAVAB. The results are promising: the approach is competitive with similar approaches in terms of accuracy, robustness and flexibility, achieving a high recognition rate of 95.35% for identification on faces with neutral and non-neutral expressions, 98.36% for authentication with GAVAB, and 100% on some galleries of the FRAV 3D dataset.
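The SVM classification stage could look roughly like the following scikit-learn sketch, here run on synthetic stand-in feature vectors; the upstream feature extraction (the quantum-based characterization of the expression-invariant region) is assumed to have already produced one vector per 3D scan.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
# Synthetic stand-in: 5 scans each for 4 people, 8 geometric features
# per scan; labels are person identities.
X = np.vstack([rng.normal(loc=i, scale=0.3, size=(5, 8)) for i in range(4)])
y = np.repeat(np.arange(4), 5)

# Standardize the features, then classify with an RBF-kernel SVM.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10, gamma="scale"))
clf.fit(X, y)
print(clf.predict(X[:3]))   # predicted identities of the first three scans
```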


Nowadays, one of the critical factors affecting the recognition performance of any face recognition system is partial occlusion. This paper addresses face recognition in the presence of sunglasses and scarf occlusion. The proposed approach detects the face region that is not occluded and then uses this region for recognition. Adaptive fuzzy C-means clustering is used to segment the occluded and non-occluded parts, and the Minimum Cost Sub-Block Matching Distance (MCSBMD) is used for recognition. The input face image is divided into a number of sub-blocks, each block is checked for occlusion, and MWLBP features are extracted only from the non-occluded blocks and used for classification. Experimental results show that our method gives promising results compared with conventional techniques.
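As an illustration of the clustering step, the sketch below implements a plain fuzzy C-means on pixel intensities in NumPy; the paper's adaptive variant, the per-block occlusion test and the MWLBP features are not reproduced here.

```python
import numpy as np

def fuzzy_cmeans(data, c=2, m=2.0, iters=100, tol=1e-5, seed=0):
    """Minimal fuzzy C-means on a 1-D feature vector (pixel intensities).

    Returns (centers, U), where U[i, k] is the membership of sample k in
    cluster i. Used here to separate occluded from non-occluded pixels.
    """
    rng = np.random.default_rng(seed)
    n = data.shape[0]
    U = rng.random((c, n))
    U /= U.sum(axis=0)                      # memberships sum to 1 per pixel
    for _ in range(iters):
        Um = U ** m
        centers = (Um @ data) / Um.sum(axis=1)
        dist = np.abs(data[None, :] - centers[:, None]) + 1e-12
        # Standard FCM membership update: u_ik ∝ d_ik^(-2/(m-1)).
        U_new = 1.0 / dist ** (2 / (m - 1))
        U_new /= U_new.sum(axis=0)
        if np.abs(U_new - U).max() < tol:
            break
        U = U_new
    return centers, U

# Example: dark sunglasses pixels cluster apart from skin pixels.
img = np.array([30, 35, 28, 180, 175, 190, 33, 185], dtype=float)
centers, U = fuzzy_cmeans(img)
occluded = U[np.argmin(centers)] > 0.5   # membership in the darker cluster
```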


2013 · Vol 6 · pp. 56-67
Author(s): Courtney K. Hsing, Alicia J. Hofelich Mohr, R. Brent Stansfield, Stephanie D. Preston

Alexithymia is a multifaceted personality construct related to deficits in the recognition and verbalization of emotions. It is uncertain what causes alexithymia or which stage of emotion processing is affected first. The current study was designed to determine whether trait alexithymia is associated with impaired early semantic decoding of facial emotion. Participants performed the Emostroop task, which varied the presentation time of faces depicting neutral, angry, or sad expressions before the classification of angry or sad adjectives. The Emostroop effect was replicated: responses were slower when the classified word was incongruent with the background facial emotion. Individuals high in alexithymia were slower overall across all trials, particularly when classifying sad adjectives, but did not differ on the basic Emostroop effect. Our results suggest that alexithymia does not stem from lower-level problems in detecting and categorizing others' facial emotions. Moreover, the impairment does not appear to extend uniformly across negative emotions and is not specific to angry or threatening stimuli as previously reported, at least during early processing. Almost in contrast to the expected impairment, individuals with high alexithymia and lower verbal IQ scores showed even more pronounced Emostroop effects, especially when the face was displayed longer. To better understand the nature of alexithymia, future research needs to further disentangle the precise phase of emotion processing and the forms of affect most affected in this relatively common condition.

