Analysis of observing and recognition profile facial images using eye tracking system

Author(s):  
Andrej Iskra ◽  
Helena Gabrijelčič Tomc ◽  

Facial images have been the subject of eye-tracking research for many years. However, most researchers concentrate on the frontal view of facial images; much less research has been done on faces shown at different angles or in profile. Yet in reality we often view faces from different angles, not just frontally. In our research we used profile presentations of facial images and analyzed memory and recognition as a function of the display time and dimensions of the facial images. Two tests were performed, an observation test and a recognition test, based on the well-known yes/no detection theory. We used four display times in the observation test (1, 2, 4 and 8 seconds) and two dimensions of facial images (640 × 480 and 1280 × 960 pixels). All facial images were taken from the standardized Minear & Park face database. We measured recognition success, expressed as the discrimination index A′, incorrect recognition (FA, false alarm), and a time-spatial measure based on fixation duration and saccade length. Eye tracking thus provides objective results on how facial images are viewed. The results showed that extending the display time of facial images improves recognition performance, with a logarithmic dependence, while incorrect recognition decreased. Both parameters are independent of the dimensions of the facial images, a finding that other researchers have also reported for frontal facial images. We also found that fixation duration and saccade length increased with display time. In all results we detected major changes at a display time of four seconds, which we interpret as the time at which subjects had viewed the whole face and their gaze returned to its center (in our case the eyes and mouth).
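The discrimination index A′ reported above combines the hit rate (correct recognitions of seen faces) and the false-alarm rate into a single chance-corrected score. The abstract does not give the formula, so the sketch below assumes the standard non-parametric form (Pollack & Norman, 1964), where 0.5 is chance and 1.0 is perfect discrimination:

```python
def a_prime(hit_rate, fa_rate):
    """Non-parametric discrimination index A' for a yes/no recognition test.

    0.5 = chance performance, 1.0 = perfect discrimination.
    Uses the standard formula for H >= F and its symmetric form otherwise.
    """
    h, f = hit_rate, fa_rate
    if h >= f:
        return 0.5 + ((h - f) * (1 + h - f)) / (4 * h * (1 - f))
    # symmetric form when false alarms exceed hits (below-chance responding)
    return 0.5 - ((f - h) * (1 + f - h)) / (4 * f * (1 - h))

# Hypothetical session: 18 hits out of 20 old faces, 4 false alarms out of 20 new faces
print(round(a_prime(18 / 20, 4 / 20), 3))  # 0.913
```

A longer display time would be expected to raise the hit rate and lower the false-alarm rate, which both push A′ upward.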

Author(s):  
Andrej Iskra ◽  

Facial images are an important element of nonverbal communication. Eye-tracking systems enable us to objectively measure and analyse how we look at facial images and thus to study the behaviour of observers. Different ways of looking at facial images influence the process of remembering faces and recognition performance. In the real world we encounter different representations of faces, especially when we view them from different angles, and memory and recognition performance differ when test subjects view a face frontally or in profile. We studied cross-observation and recognition, so we performed two tests. In the first test, subjects observed facial images shown in the frontal view and recognized them in the profile view; in the second test, the faces were observed in profile and recognized in the frontal view. The presentation time in the observation test was four seconds, which previous tests had found adequate for sufficient recognition. The results were analysed with the well-known time and spatial method based on fixations and saccades, and with a new area method using heatmaps of the eye-tracking results. We found that recognition success (correct and incorrect recognition) was better for the combination of frontal observation and profile recognition. These results were confirmed by measuring fixation duration and saccade length: more visible facial features resulted in shorter fixation durations and shorter saccade lengths, which led to better memorization. We also confirmed the observation and recognition results by area analysis, measuring the area, perimeter and circularity of the heatmaps. Here we found that larger heatmap areas and perimeters and smaller circularity corresponded to better memorization of facial images and therefore better recognition.
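The area method above reduces a heatmap to three numbers: area, perimeter and circularity. A minimal sketch, assuming the heatmap has been thresholded into a binary mask and using the standard circularity measure 4πA/P² (1.0 for a circle, smaller for elongated or scattered shapes); the grid-based perimeter estimate is an illustrative choice, not necessarily the paper's:

```python
import math

def circularity(area, perimeter):
    """4*pi*A / P**2: 1.0 for a perfect circle, smaller for irregular shapes."""
    return 4 * math.pi * area / perimeter ** 2

def mask_area_perimeter(mask):
    """Area = number of True cells; perimeter = exposed cell edges (4-connectivity)."""
    rows, cols = len(mask), len(mask[0])
    area = perim = 0
    for i in range(rows):
        for j in range(cols):
            if not mask[i][j]:
                continue
            area += 1
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ni, nj = i + di, j + dj
                if not (0 <= ni < rows and 0 <= nj < cols) or not mask[ni][nj]:
                    perim += 1
    return area, perim

# Toy thresholded heatmap: a 2x2 hot region inside a 4x4 grid
mask = [[False] * 4 for _ in range(4)]
for i in (1, 2):
    for j in (1, 2):
        mask[i][j] = True
print(round(circularity(*mask_area_perimeter(mask)), 3))  # 0.785 (= pi/4)
```

A wider, less compact scanning pattern grows the area and perimeter while lowering circularity, which is the direction the abstract associates with better memorization.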


2013 ◽  
Vol 6 (4) ◽  
Author(s):  
Banu Cangöz ◽  
Arif Altun ◽  
Petek Aşkar ◽  
Zeynel Baran ◽  
Sacide Güzin Mazman

The main objective of the study was to investigate the effects of model age, observer gender, and lateralization on visual screening patterns while looking at emotional facial expressions. Data were collected through eye-tracking methodology. The areas of interest were set to include the eyes, nose and mouth. The selected eye metrics were first fixation duration, fixation duration and fixation count. These eye-tracking metrics were recorded for different emotional expressions (sad, happy, neutral) and conditions (model age, part of face and lateralization). The results revealed that participants looked at the older faces for a shorter time and fixated on them less than on the younger faces. The study also showed that when participants were asked to passively look at the facial expressions, the eyes were the important areas in determining sadness and happiness, whereas the eyes and nose were important in determining a neutral expression. The longest-fixated face area was the eyes for both young and old models. Lastly, the hemispheric lateralization hypothesis regarding emotional face processing was supported.
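The three metrics named above (first fixation duration, total fixation duration, fixation count) are all computed per area of interest from the chronological fixation stream. A minimal sketch with hypothetical rectangular AOIs and made-up fixation data:

```python
def aoi_metrics(fixations, aois):
    """Per-AOI eye-tracking metrics from a chronological fixation list.

    fixations: [(x, y, duration_ms), ...] in recording order.
    aois: {name: (x0, y0, x1, y1)} axis-aligned rectangles.
    """
    metrics = {name: {"first_fixation_ms": None, "total_ms": 0, "count": 0}
               for name in aois}
    for x, y, dur in fixations:
        for name, (x0, y0, x1, y1) in aois.items():
            if x0 <= x <= x1 and y0 <= y <= y1:
                m = metrics[name]
                if m["first_fixation_ms"] is None:
                    m["first_fixation_ms"] = dur  # duration of first fixation in AOI
                m["total_ms"] += dur
                m["count"] += 1
    return metrics

# Hypothetical AOIs (pixels) and four fixations on a face image
aois = {"eyes": (100, 80, 300, 140), "nose": (160, 140, 240, 220),
        "mouth": (150, 220, 250, 280)}
fixations = [(200, 100, 310), (210, 180, 250), (190, 250, 220), (205, 110, 400)]
m = aoi_metrics(fixations, aois)
print(m["eyes"])  # {'first_fixation_ms': 310, 'total_ms': 710, 'count': 2}
```

Comparing these dictionaries across conditions (young vs. old model, sad vs. happy vs. neutral) yields exactly the contrasts the study reports.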


2002 ◽  
Vol 14 (6) ◽  
pp. 615-624
Author(s):  
Hiroshi Kobayashi ◽  
Kohki Kikuchi ◽  
Miyako Tazaki ◽  
Yoshibumi Nakane ◽  
...  

In response to the need for quantitative information in diagnosing psychiatric disorders, we have developed an automated interview, automated extraction of facial organs, and acquisition of quantitative diagnostic information. We attempt to obtain quantitative data for diagnosis by analyzing facial expressions during automated interviews. We focus on the movement of the pupils and head and the correlation between them; i.e., we develop a method for automated measurement of the time-sequential position of the pupil relative to the frontal view of the face. By calculating the correlation between these movements, we obtain quantitative information that enables us to assess, for example, whether a subject may be schizophrenically inclined.
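The correlation between the two time-sequential signals can be quantified in the usual way with a Pearson coefficient over frame-by-frame positions. A minimal sketch with invented sample data; the paper's exact correlation measure is not specified, so the plain Pearson form is assumed:

```python
def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

# Hypothetical frame-by-frame horizontal positions (pixels, mean-shifted)
# of the pupil centre and the head midline during one interview answer
pupil = [0.0, 1.2, 2.1, 2.9, 4.2, 5.0]
head = [0.0, 0.9, 2.2, 3.1, 3.8, 5.1]
print(pearson_r(pupil, head))  # close to 1: pupil and head move together
```

A high coefficient means gaze shifts are accompanied by head movement; a low one means the eyes move largely independently of the head, which is the kind of contrast the method turns into diagnostic input.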


Author(s):  
Paul A. Wetzel ◽  
Gretchen Krueger-Anderson ◽  
Christine Poprik ◽  
Peter Bascom

2009 ◽  
Vol 8 (3) ◽  
pp. 887-897
Author(s):  
Vishal Paika ◽  
Er. Pankaj Bhambri

The face is the feature that distinguishes one person from another, and facial appearance is vital for human recognition. A face has features such as the forehead, skin, eyes, ears, nose, cheeks, mouth, lips and teeth, which help us humans recognize a particular face among millions of faces even after a long span of time and despite large changes in appearance due to ageing, expression, viewing conditions and distractions such as disfigurement of the face, scars, a beard or a hairstyle. A face is not merely a set of facial features but rather something meaningful in its form. In this paper, a system is designed to recognize faces based on these various facial features. Different edge-detection techniques are used to reveal the outlines of the face, eyes, ears, nose, teeth, etc. The features are extracted as distances between important feature points. The resulting feature set is then normalized and fed to artificial neural networks, which are trained for recognition of facial images.
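The "distances between important feature points" step can be sketched as follows. The landmark names, the chosen point pairs, and the normalization by inter-ocular distance are illustrative assumptions (the abstract does not specify them); normalizing by a reference distance makes the vector invariant to image scale before it is fed to the network:

```python
import math

def feature_vector(landmarks):
    """Inter-feature distances normalised by inter-ocular distance.

    landmarks: {name: (x, y)} with at least both eyes present.
    Returns a scale-invariant list of distance ratios.
    """
    def dist(a, b):
        return math.hypot(landmarks[a][0] - landmarks[b][0],
                          landmarks[a][1] - landmarks[b][1])

    iod = dist("left_eye", "right_eye")  # scale reference
    pairs = [("left_eye", "nose"), ("right_eye", "nose"),
             ("nose", "mouth"), ("left_eye", "mouth"), ("right_eye", "mouth")]
    return [dist(a, b) / iod for a, b in pairs]

# Hypothetical landmark positions (pixels) located by edge detection
lm = {"left_eye": (120, 100), "right_eye": (200, 100),
      "nose": (160, 160), "mouth": (160, 210)}
print([round(x, 3) for x in feature_vector(lm)])
```

Because every distance is divided by the inter-ocular distance, photographing the same face at twice the resolution produces the identical vector, which is the property the normalization step is there to provide.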


2010 ◽  
Vol 36 (8) ◽  
pp. 1051-1061 ◽  
Author(s):  
Chuang ZHANG ◽  
Jian-Nan CHI ◽  
Zhao-Hui ZHANG ◽  
Zhi-Liang WANG

Author(s):  
Federico Cassioli ◽  
Laura Angioletti ◽  
Michela Balconi

Human–computer interaction (HCI) is particularly interesting because full-immersive technology may be approached differently by users, depending on the complexity of the interaction, users' personality traits, and the inclination of their motivational systems. This study therefore investigated the relationship between psychological factors and attention towards specific tech-interactions in a smart home system (SHS). The relation between personal psychological traits and eye-tracking metrics was investigated through self-report measures [locus of control (LoC), user experience (UX), behavioral inhibition system (BIS) and behavioral activation system (BAS)] and a wearable, wireless, near-infrared-illumination-based eye-tracking system, applied to an Italian sample (n = 19). Participants were asked to activate and interact with five tech-interaction areas of different levels of complexity (entrance, kitchen, living room, bathroom, and bedroom) in the SHS while their eye-gaze behavior was recorded. Data showed significant differences between a simpler interaction (entrance) and a more complex one (living room) in terms of number of fixations. Moreover, a longer time to first fixation was found in a multifaceted interaction (bathroom) compared to simpler ones (kitchen and living room). Additionally, in two interaction conditions (living room and bathroom), negative correlations were found between external LoC and fixation count, and between BAS reward responsiveness scores and fixation duration. The findings point to a two-way process in which both the complexity of the tech-interaction and subjects' personality traits impact the user's visual exploration behavior. This research contributes to understanding user responsiveness, adding first insights that may help create more human-centered technology.
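Time to first fixation, the metric that separated the bathroom interaction from the simpler ones, is simply the latency from recording onset to the first fixation that lands in the target area. A minimal sketch with invented event data (the area names follow the study's five interaction areas; the event format is an assumption):

```python
def time_to_first_fixation(fixation_events, area):
    """Latency (ms) from recording start to the first fixation in `area`.

    fixation_events: chronological [(onset_ms, area_name), ...].
    Returns None if the area was never fixated.
    """
    for onset, name in fixation_events:
        if name == area:
            return onset
    return None

# Hypothetical fixation stream from one participant's SHS session
events = [(180, "entrance"), (420, "kitchen"), (950, "living_room"),
          (1130, "kitchen"), (2400, "bathroom")]
print(time_to_first_fixation(events, "bathroom"))  # 2400
print(time_to_first_fixation(events, "kitchen"))   # 420
```

Averaging this latency per area across participants gives the per-condition comparison reported above, with the more complex interaction showing the longer latency.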

