Developmental changes in orienting towards faces: A behavioral and eye-tracking study

2019 ◽  
Vol 44 (2) ◽  
pp. 157-165
Author(s):  
Masahiro Hirai ◽  
Yukako Muramatsu ◽  
Miho Nakamura

Previous studies show that newborn infants and adults preferentially orient their attention toward human faces. However, the developmental course of visual attention captured by face stimuli remains unclear, especially when an explicit top-down process is involved. We capitalized on a visual search paradigm to assess how the relative strength of visual attention captured by a non-target face stimulus and explicit attentional control over a target stimulus evolve as search progresses, and how this process changes during development. Ninety children aged 5–14 years searched for a target within an array of distractors, which occasionally contained an upright face. To obtain a precise picture of developmental change, we measured: (1) manual responses, such as reaction time and accuracy; and (2) eye movements, such as the location of the first fixation, which reflects the attentional profile at the initial stage of search, and looking times, which reflect the attentional profile in its later period. Both reaction time and accuracy were affected by the presence of the target-unrelated face, and this interference effect was observed consistently across ages. Developmental changes were, however, captured by the first-fixation proportion, suggesting that initial attention was preferentially directed toward the target-unrelated face before 6.9 years of age. Furthermore, prior to 12.8 years of age, first fixations landed on face stimuli significantly more often than on object stimuli. In contrast, the looking-time proportion for face stimuli was significantly higher than that for objects across all ages. These findings suggest that development does not influence the later period of search within a trial, but that it does influence initial orienting, as indexed by the first fixation. Moreover, manual responses are tightly linked to eye-movement behavior.

2007 ◽  
Vol 97 (2) ◽  
pp. 1671-1683 ◽  
Author(s):  
K. M. Gothard ◽  
F. P. Battaglia ◽  
C. A. Erickson ◽  
K. M. Spitler ◽  
D. G. Amaral

The amygdala is purported to play an important role in face processing, yet the specificity of its activation to face stimuli and the relative contribution of identity and expression to its activation are unknown. In the current study, neural activity in the amygdala was recorded as monkeys passively viewed images of monkey faces, human faces, and objects on a computer monitor. Comparable proportions of neurons responded selectively to images from each category. Neural responses to monkey faces were further examined to determine whether face identity or facial expression drove the face-selective responses. The majority of these neurons (64%) responded to both identity and facial expression, suggesting that these parameters are processed jointly in the amygdala. Large fractions of neurons, however, showed purely identity-selective or expression-selective responses. Neurons were selective for a particular facial expression by either increasing or decreasing their firing rate compared with the firing rates elicited by the other expressions. Responses to appeasing faces were often marked by significant decreases in firing rates, whereas responses to threatening faces were strongly associated with increased firing rates. Thus, global activation in the amygdala may be greater in response to threatening faces than to neutral or appeasing faces.


1998 ◽  
Vol 10 (5) ◽  
pp. 615-622 ◽  
Author(s):  
Lisa A. Parr ◽  
Tara Dove ◽  
William D. Hopkins

Five chimpanzees were tested on their ability to discriminate faces and automobiles presented in both upright and inverted orientations. The face stimuli consisted of 30 black-and-white photographs, 10 each of unfamiliar chimpanzees (Pan troglodytes), brown capuchins (Cebus apella), and humans (Homo sapiens). Ten black-and-white photographs of automobiles were also used. The stimuli were presented in a sequential matching-to-sample (SMTS) format using a computerized joystick-testing apparatus. Subjects performed better on upright than on inverted stimuli in all classes, but the advantage was significant only for chimpanzee and human faces, not for capuchin monkey faces or automobiles. These data support previous studies in humans suggesting that the inversion effect occurs for stimuli for which subjects have developed expertise. Alternative explanations for the inversion effect based on the spatial frequency content of the stimuli are also discussed. These data are the first to provide evidence for the inversion effect across several classes of face stimuli in a great ape species.


1991 ◽  
Vol 3 (4) ◽  
pp. 322-328 ◽  
Author(s):  
Robert Rafal ◽  
Avishai Henik ◽  
Jean Smith

Evidence is presented that the phylogenetically older retinotectal pathway contributes to reflex orienting of visual attention in normal human subjects. The study exploited a lateralized neuroanatomic arrangement of retinotectal pathways that distinguishes them from those of the geniculostriate system; namely, more direct projections to the colliculus from the temporal hemifield. Subjects were tested under monocular viewing conditions and responded to the detection of a peripheral signal by making either a saccade to it or a choice reaction-time manual keypress. Attention was summoned by noninformative peripheral precues, and the benefits and costs of attention were calculated relative to a central precue condition. Both the benefits and costs of orienting attention were greater when attention was summoned by signals in the temporal hemifield. This temporal hemifield advantage was present for both saccade and manual responses. These findings converge with observations in patients with occipital and midbrain lesions to show that the phylogenetically older retinotectal pathway retains an important role in controlling visually guided behavior; and they demonstrate the usefulness of temporal-nasal hemifield asymmetries as a marker for investigating extrageniculate vision in humans.


2021 ◽  
Vol 15 ◽  
Author(s):  
Teresa Sollfrank ◽  
Oona Kohnen ◽  
Peter Hilfiker ◽  
Lorena C. Kegel ◽  
Hennric Jokeit ◽  
...  

This study examined whether the cortical processing of emotional faces is modulated by the computerization of face stimuli ("avatars") in a group of 25 healthy participants. Subjects passively viewed 128 static and dynamic facial expressions of female and male actors and their respective avatars in neutral or fearful conditions. Event-related potentials (ERPs), as well as alpha and theta event-related synchronization and desynchronization (ERD/ERS), were derived from the EEG recorded during the task. All ERP features, except for the very early N100, differed in their response to avatar and actor faces. Whereas the N170 showed differences only in the neutral avatar condition, later potentials (N300 and LPP) differed across both emotional conditions (neutral and fear) and the presented agents (actor and avatar). In addition, the avatar faces elicited significantly stronger theta and alpha oscillatory responses than the actor faces. Theta EEG frequencies in particular responded specifically to visual emotional stimulation and proved sensitive to the emotional content of the face, whereas the alpha frequency was modulated by all stimulus types. We conclude that computerized avatar faces affect both ERP components and ERD/ERS, evoking neural effects that differ from those elicited by real faces, even though the avatars were replicas of the human faces with similar expressive characteristics.
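The ERD/ERS measure referred to here is conventionally computed as the percent change in band power relative to a pre-stimulus baseline (Pfurtscheller's classic formulation). The sketch below illustrates that standard computation only; the function name, the band limits, and the baseline window are assumptions for illustration, not the study's actual parameters.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def erd_ers(epochs: np.ndarray, fs: float, band=(4.0, 8.0),
            baseline=(0.0, 0.5)) -> np.ndarray:
    """epochs: (n_trials, n_samples), each trial starting at the
    baseline window. Returns % power change over time; negative
    values indicate desynchronization (ERD), positive values ERS."""
    # Band-pass filter each trial, then square to get instantaneous power.
    nyq = fs / 2.0
    b, a = butter(4, [band[0] / nyq, band[1] / nyq], btype="bandpass")
    power = filtfilt(b, a, epochs, axis=1) ** 2
    avg = power.mean(axis=0)                 # average power across trials
    i0, i1 = int(baseline[0] * fs), int(baseline[1] * fs)
    ref = avg[i0:i1].mean()                  # mean power in baseline window
    return (avg - ref) / ref * 100.0         # percent change vs. baseline
```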


2002 ◽  
Vol 14 (2) ◽  
pp. 199-209 ◽  
Author(s):  
Michelle de Haan ◽  
Olivier Pascalis ◽  
Mark H. Johnson

Newborn infants respond preferentially to simple face-like patterns, raising the possibility that the face-specific regions identified in the adult cortex are functioning from birth. We sought to evaluate this hypothesis by characterizing the specificity of infants' electrocortical responses to faces in two ways: (1) comparing responses to faces of humans with those to faces of nonhuman primates; and (2) comparing responses to upright and inverted faces. Adults' face-responsive N170 event-related potential (ERP) component showed specificity to upright human faces that was not observable at any point in the ERPs of infants. A putative “infant N170” did show sensitivity to the species of the face, but the orientation of the face did not influence processing until a later stage. These findings suggest a process of gradual specialization of cortical face processing systems during postnatal development.


2021 ◽  
pp. 003329412110184
Author(s):  
Paola Surcinelli ◽  
Federica Andrei ◽  
Ornella Montebarocci ◽  
Silvana Grandi

Aim of the research
The literature on emotion recognition from facial expressions shows significant differences in recognition ability depending on the stimulus employed. Indeed, affective information is not distributed uniformly across the face, and recent studies have shown the importance of the mouth and eye regions for correct recognition. However, previous studies mainly used facial expressions presented frontally, and the studies that used profile-view expressions employed a between-subjects design or children's faces as stimuli. The present research investigates differences in emotion recognition between faces presented in frontal and in profile view using a within-subjects experimental design.

Method
The sample comprised 132 Italian university students (88 female; mean age = 24.27 years, SD = 5.89). Face stimuli displayed both frontally and in profile were selected from the KDEF set. Two emotion-specific recognition accuracy scores, frontal and profile, were computed from the average of correct responses for each emotional expression. In addition, viewing times and response times (RT) were recorded.

Results
Frontally presented facial expressions of fear, anger, and sadness were recognized significantly better than the same emotions presented in profile, while no differences were found in the recognition of the other emotions. Longer viewing times were also found when faces expressing fear and anger were presented in profile. In the present study, an impairment in recognition accuracy was observed only for those emotions that rely mostly on the eye regions.


2021 ◽  
pp. 095679762199666
Author(s):  
Sebastian Schindler ◽  
Maximilian Bruchmann ◽  
Claudia Krasowski ◽  
Robert Moeck ◽  
Thomas Straube

Our brains respond rapidly to human faces and can differentiate among many identities, retrieving rich semantic and emotional knowledge. Studies provide a mixed picture of how such information affects event-related potentials (ERPs). We systematically examined the effect of feature-based attention on ERP modulations to briefly presented faces of individuals associated with a crime. The tasks required participants (N = 40 adults) to discriminate the orientation of lines overlaid onto the face, the age of the face, or emotional information associated with the face. Negative faces amplified the N170 ERP component during all tasks, whereas the early posterior negativity (EPN) and late positive potential (LPP) components were increased only when the emotional information was attended to. These findings suggest that during early configural analysis (N170), evaluative information potentiates face processing regardless of feature-based attention. During intermediate, only partially resource-dependent processing stages (EPN) and late stages of elaborate stimulus processing (LPP), attention to the acquired emotional information is necessary for amplified processing of negatively evaluated faces.


Symmetry ◽  
2018 ◽  
Vol 10 (10) ◽  
pp. 442 ◽  
Author(s):  
Dongxue Liang ◽  
Kyoungju Park ◽  
Przemyslaw Krompiec

With the advent of deep learning methods, portrait video stylization has become more popular. In this paper, we present a robust method for automatically stylizing portrait videos that contain small human faces. By extending Mask R-CNN (Mask Regions with Convolutional Neural Network features) with a CNN branch that detects the contour landmarks of the face, we divide the input frame into three regions: the facial-features region, the inner-face region bounded by 36 face contour landmarks, and the outer-face region. We keep the facial-features region as it is and use two different stroke models to render the other two regions. During the non-photorealistic rendering (NPR) of the animation video, we combine deformable strokes with optical-flow estimation between adjacent frames to follow the underlying motion coherently. The experimental results demonstrate that our method not only effectively preserves the small and distinct facial features, but also follows the underlying motion coherently.
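A minimal sketch of one step of this per-frame pipeline, assuming OpenCV for the dense optical flow. `detect_face_regions` and `render_strokes` are hypothetical placeholders standing in for the extended Mask R-CNN and the two stroke models; the paper's actual implementation is not shown here.

```python
import cv2
import numpy as np

def stylize_frame(frame, prev_gray, prev_strokes,
                  detect_face_regions, render_strokes):
    """One step of the per-frame pipeline. `detect_face_regions` and
    `render_strokes` are injected placeholders for the extended
    Mask R-CNN and the two stroke models, respectively."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    flow = None
    if prev_gray is not None:
        # Dense optical flow between adjacent frames lets the
        # deformable strokes follow the underlying motion coherently.
        flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
    # Three boolean masks: facial features, inner face (bounded by the
    # 36 contour landmarks), and outer face.
    feat_mask, inner_mask, outer_mask = detect_face_regions(frame)
    out = frame.copy()
    inner = render_strokes(frame, inner_mask, flow, prev_strokes, "inner")
    outer = render_strokes(frame, outer_mask, flow, prev_strokes, "outer")
    out[inner_mask] = inner[inner_mask]   # stroke-rendered inner face
    out[outer_mask] = outer[outer_mask]   # stroke-rendered outer face
    out[feat_mask] = frame[feat_mask]     # facial features kept as-is
    return out, gray
```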


2022 ◽  
Vol 14 (1) ◽  
pp. 0-0

Attendance management becomes a tedious task for teachers when performed manually. This problem can be solved with an automatic attendance management system, but validation is one of the main issues in such systems. Biometrics are generally used in smart automatic attendance systems, and managing attendance via face recognition is among the more efficient biometric methods. Smart attendance based on instant face recognition is a practical solution for handling daily activities and maintaining a student attendance system. A face recognition-based attendance system uses face biometrics, drawing on high-resolution video and other technologies, to recognize students' faces. In this project, the system finds and recognizes human faces quickly and accurately in images or video captured by a surveillance camera, converting video frames into images so that each face can be matched against the attendance database.
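A minimal sketch of the capture-and-match loop described above. The open-source face_recognition library is an assumption (the abstract does not name an implementation), and `mark_present`, `known_encodings`, and `known_ids` are hypothetical names for the enrolled-student database interface.

```python
import cv2
import face_recognition

def run_attendance(video_source, known_encodings, known_ids, mark_present):
    """Scan a camera stream, encode each detected face, and compare it
    against the enrolled encodings; `mark_present` records a match."""
    cap = cv2.VideoCapture(video_source)      # surveillance camera feed
    while cap.isOpened():
        ok, frame = cap.read()                # one video frame -> image
        if not ok:
            break
        rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
        for enc in face_recognition.face_encodings(rgb):
            # Lower tolerance = stricter matching (0.6 is the default).
            matches = face_recognition.compare_faces(known_encodings, enc,
                                                     tolerance=0.6)
            if True in matches:
                mark_present(known_ids[matches.index(True)])
    cap.release()
```

In practice the tolerance threshold trades false accepts against false rejects, which speaks directly to the validation issue the abstract raises.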

