The time course of face representations during perception and working memory maintenance

Author(s):  
Gi-Yeul Bae

Abstract Successful social communication requires accurate perception and maintenance of invariant (face identity) and variant (facial expression) aspects of faces. While numerous studies have investigated how face identity and expression information are extracted from faces during perception, less is known about the temporal dynamics of this information during perception and working memory (WM) maintenance. To investigate how face identity and expression information evolve over time, I recorded EEG while participants performed a face WM task in which they remembered a face image and reported either the identity or the expression of the face after a short delay. Using multivariate ERP decoding analyses, I found that the two types of information exhibited dissociable temporal dynamics: Whereas face identity was decoded better than facial expression during perception, facial expression was decoded better than face identity during WM maintenance. Follow-up analyses suggested that this temporal dissociation was driven by differential maintenance mechanisms: Face identity information was maintained in a more ‘activity-silent’ manner than facial expression information, presumably because invariant face information does not need to be actively tracked in the task. Together, these results provide important insights into the temporal evolution of face information during perception and WM maintenance.
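The time-resolved decoding described above can be sketched as follows. This is a minimal illustration using a nearest-centroid classifier on simulated trial data; the abstract does not specify the classifier beyond "multivariate ERP decoding", so the method, fold count, and data shapes here are assumptions.

```python
import numpy as np

def timepoint_decoding(X, y, n_folds=3, rng=None):
    """Time-resolved decoding with a nearest-centroid classifier.
    X: (n_trials, n_channels, n_times) ERP data; y: (n_trials,) condition labels.
    Returns cross-validated decoding accuracy at each time point."""
    rng = np.random.default_rng(rng)
    n_trials, _, n_times = X.shape
    folds = np.array_split(rng.permutation(n_trials), n_folds)
    acc = np.zeros(n_times)
    for t in range(n_times):
        correct = total = 0
        for k in range(n_folds):
            test_idx = folds[k]
            train_idx = np.concatenate([folds[j] for j in range(n_folds) if j != k])
            # class centroids over training trials at this time point
            centroids = {c: X[train_idx][y[train_idx] == c, :, t].mean(axis=0)
                         for c in np.unique(y[train_idx])}
            for i in test_idx:
                # assign each test trial to the nearest class centroid
                pred = min(centroids, key=lambda c: np.linalg.norm(X[i, :, t] - centroids[c]))
                correct += (pred == y[i])
                total += 1
        acc[t] = correct / total
    return acc
```

Running this per time point yields an accuracy time course; comparing the identity and expression time courses against chance is what supports the perception-versus-maintenance dissociation reported above.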

2007 ◽  
Vol 97 (2) ◽  
pp. 1671-1683 ◽  
Author(s):  
K. M. Gothard ◽  
F. P. Battaglia ◽  
C. A. Erickson ◽  
K. M. Spitler ◽  
D. G. Amaral

The amygdala is purported to play an important role in face processing, yet the specificity of its activation to face stimuli and the relative contributions of identity and expression to its activation are unknown. In the current study, neural activity in the amygdala was recorded as monkeys passively viewed images of monkey faces, human faces, and objects on a computer monitor. Comparable proportions of neurons responded selectively to images from each category. Neural responses to monkey faces were further examined to determine whether face identity or facial expression drove the face-selective responses. The majority of these neurons (64%) responded to both identity and facial expression, suggesting that these parameters are processed jointly in the amygdala. Large fractions of neurons, however, showed purely identity-selective or expression-selective responses. Neurons were selective for a particular facial expression by either increasing or decreasing their firing rate relative to the firing rates elicited by the other expressions. Responses to appeasing faces were often marked by significant decreases in firing rate, whereas responses to threatening faces were strongly associated with increased firing rates. Thus, overall activation in the amygdala may be greater for threatening faces than for neutral or appeasing faces.


Algorithms ◽  
2019 ◽  
Vol 12 (11) ◽  
pp. 227 ◽  
Author(s):  
Yingying Wang ◽  
Yibin Li ◽  
Yong Song ◽  
Xuewen Rong

In recent years, with the development of artificial intelligence and human–computer interaction, more attention has been paid to the recognition and analysis of facial expressions. Despite considerable success, many problems remain unsolved because facial expressions are subtle and complex; facial expression recognition is therefore still a challenging problem. In most work, the entire face image is chosen as the input. In daily life, however, people can perceive others' current emotions from just a few facial components (such as the eyes, mouth, and nose), while other areas of the face (such as hair, skin tone, and ears) play a smaller role in conveying emotion. If the entire face image is the only input, feature extraction will carry unnecessary information and may miss important information. To solve this problem, this paper proposes a method that combines multiple sub-regions with the entire face image by weighting, capturing more of the important feature information that improves recognition accuracy. The proposed method was evaluated on four well-known, publicly available facial expression databases: JAFFE, CK+, FER2013 and SFEW. The new method showed better performance than most state-of-the-art methods.
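The weighting idea can be sketched as follows. The crop coordinates, weights, and feature extractor here are all hypothetical (the abstract does not give them); mean-pooled pixel blocks stand in for whatever learned features the paper actually uses.

```python
import numpy as np

# Hypothetical crop boxes (row_start, row_end, col_start, col_end) for a 48x48 face
REGIONS = {"eyes": (10, 22, 4, 44), "nose": (18, 34, 14, 34), "mouth": (30, 44, 8, 40)}
# Hypothetical weights: sub-regions and the whole face contribute unequally
WEIGHTS = {"eyes": 0.3, "nose": 0.1, "mouth": 0.3, "whole": 0.3}

def region_features(img):
    """Weighted feature vector combining sub-region and whole-face descriptors."""
    def pool(patch, grid=4):
        # mean-pool the patch on a grid x grid layout -> grid*grid features
        rows = np.array_split(patch, grid, axis=0)
        return np.array([b.mean() for r in rows for b in np.array_split(r, grid, axis=1)])
    feats = [WEIGHTS[name] * pool(img[r0:r1, c0:c1])
             for name, (r0, r1, c0, c1) in REGIONS.items()]
    feats.append(WEIGHTS["whole"] * pool(img))   # whole-face term keeps global context
    return np.concatenate(feats)
```

The concatenated vector would then feed a classifier; the weights let the eyes and mouth dominate while the whole-face term preserves context, which is the trade-off the abstract motivates.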


2014 ◽  
Vol 543-547 ◽  
pp. 2702-2705
Author(s):  
Hong Hai Liu ◽  
Xiang Hua Hou

In face images with complex backgrounds, the CbCr skin color region shifts under illumination changes. As a result, non-skin pixels whose luminance is below 80 are mistaken for skin pixels, and skin pixels whose luminance is above 230 are mistaken for non-skin pixels. To reduce these misjudgments, an improved nonlinear piecewise skin color model is proposed in this paper. First, the non-piecewise skin color model is analyzed; experimental results show that it produces obvious misjudgments in face detection. Then the nonlinear piecewise skin color model is analyzed and demonstrated mathematically. Extensive training results show that the nonlinear piecewise model has a better clustering distribution than the non-piecewise model. Finally, a face detection algorithm adopting the nonlinear piecewise skin color model is analyzed. The results show that this algorithm outperforms the one adopting the non-piecewise model, laying a good foundation for face image applications.
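The piecewise idea can be sketched in code. The mid-range CbCr box below is the classic fixed skin cluster; the adjustments outside the luminance range [80, 230] are illustrative assumptions, not the paper's actual boundary functions.

```python
def is_skin(y, cb, cr):
    """Piecewise skin-color test in YCbCr (all components in 0-255).
    A fixed CbCr box in the mid-luminance range; shifted/widened bounds
    at the extremes. Thresholds and slopes are illustrative only."""
    if 80 <= y <= 230:
        # well-lit pixels: classic fixed CbCr skin cluster
        return 77 <= cb <= 127 and 133 <= cr <= 173
    if y < 80:
        # dark pixels: shrink the cluster so shadows are not taken for skin
        shift = (80 - y) * 0.1
        return 77 + shift <= cb <= 127 - shift and 133 + shift <= cr <= 173 - shift
    # very bright pixels (y > 230): widen the cluster to recover washed-out skin
    margin = (y - 230) * 0.4
    return 77 - margin <= cb <= 127 + margin and 133 - margin <= cr <= 173 + margin
```

Making the CbCr boundaries a function of luminance is exactly what distinguishes the piecewise model from the fixed-box model that the paper shows to misjudge dark and bright pixels.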


2018 ◽  
Vol 6 (1) ◽  
pp. 54-70 ◽  
Author(s):  
Miriam Ruess ◽  
Roland Thomaschke ◽  
Andrea Kiesel

Stimuli elicited by one’s own actions (i.e., effects) are perceived as temporally earlier compared to stimuli not elicited by one’s own actions. This phenomenon is referred to as intentional binding (IB), and is commonly used as an implicit measure of sense of agency. Typically, IB is investigated with the so-called clock paradigm, in which participants are instructed to press a key (i.e., perform an action), which is followed by a tone (i.e., an effect), while presented with a rotating clock hand. Participants are then asked to estimate the position of the clock hand at tone onset. This time point estimate is compared to a baseline estimate where the tone is presented without any preceding action. In the present study, we investigated IB for effects occurring after relatively long delay durations (500 ms, 650 ms, 800 ms), while manipulating the temporal predictability of the delay duration. We observed an increase of IB for longer delay durations, whereas temporal predictability did not significantly influence the magnitude of IB. This extends previous findings obtained with the clock paradigm, which have shown an increase of IB for very short delay ranges (<250 ms), but a decrease for intermediate delay ranges up to delay durations of 650 ms. Our findings thus indicate rather complex temporal dynamics of IB that may follow a wave-shaped function.
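The IB measure implied by the clock paradigm is a simple difference of estimation errors between the action condition and the baseline condition. A minimal sketch (values in ms; function and variable names are hypothetical):

```python
def intentional_binding(operant_estimates, baseline_estimates, actual_onset):
    """Intentional binding magnitude in ms.
    operant_estimates: perceived tone-onset times when the tone follows a keypress.
    baseline_estimates: perceived onset times for tones with no preceding action.
    Positive values mean self-produced tones are judged earlier, i.e., bound
    toward the action."""
    op_err = sum(operant_estimates) / len(operant_estimates) - actual_onset
    base_err = sum(baseline_estimates) / len(baseline_estimates) - actual_onset
    return base_err - op_err
```

Comparing this quantity across the 500, 650, and 800 ms delay conditions is what yields the delay-duration effect reported above.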


2022 ◽  
Vol 13 ◽  
Author(s):  
Chiara F. Tagliabue ◽  
Greta Varesio ◽  
Veronica Mazza

Electroencephalography (EEG) studies investigating visuo-spatial working memory (vWM) in aging typically adopt an event-related potential (ERP) analysis approach that has shed light on age-related changes during item retention and retrieval. However, this approach does not fully enable a detailed description of the time course of the neural dynamics related to aging. The most frequent age-related changes in brain activity have been described by two influential models of neurocognitive aging, the Hemispheric Asymmetry Reduction in Older Adults (HAROLD) and the Posterior-Anterior Shift in Aging (PASA). These models posit that older adults tend to recruit additional brain areas (bilateral as predicted by HAROLD, anterior as predicted by PASA) when performing various cognitive tasks. We tested younger (N = 36) and older adults (N = 35) in a typical vWM task (delayed match-to-sample) in which participants retain items and then compare them to a sample. Through a data-driven whole-scalp EEG analysis we aimed to characterize the temporal dynamics of the age-related activations predicted by the two models, both across and within different stages of stimulus processing. Behaviorally, younger adults outperformed older adults. The EEG analysis showed that older adults engaged supplementary bilateral posterior and frontal sites when processing different levels of memory load, in line with both HAROLD- and PASA-like activations. Interestingly, these age-related supplementary activations developed dynamically over time. Indeed, they varied across different stages of stimulus processing, with HAROLD-like modulations mainly present during item retention, and PASA-like activity present during both retention and retrieval. Overall, the present results suggest that age-related neural changes are not a phenomenon indiscriminately present throughout all levels of cognitive processing.


1998 ◽  
Vol 9 (4) ◽  
pp. 270-276 ◽  
Author(s):  
Kari Edwards

Results of studies reported here indicate that humans are attuned to temporal cues in facial expressions of emotion. The experimental task required subjects to reproduce the actual progression of a target person's spontaneous expression (i.e., onset to offset) from a scrambled set of photographs. Each photograph depicted a segment of the expression that corresponded to approximately 67 ms in real time. Results of two experiments indicated that (a) individuals could detect extremely subtle dynamic cues in a facial expression and could utilize these cues to reproduce the proper temporal progression of the display at above-chance levels of accuracy; (b) women performed significantly better than men on the task designed to assess this ability; (c) individuals were most sensitive to the temporal characteristics of the early stages of an expression; and (d) accuracy was inversely related to the amount of time allotted for the task. The latter finding may reflect the relative involvement of (error-prone) cognitively mediated or strategic processes in what is normally a relatively automatic, nonconscious process.


2018 ◽  
Vol 11 (2) ◽  
pp. 16-33 ◽  
Author(s):  
A.V. Zhegallo

The study investigates the recognition of peripherally presented emotional facial expressions, with exposure times shorter than the latency of a saccade toward the presented image. The study showed that recognition under peripheral presentation reproduces the typical patterns of incorrect-response choices. Mutual confusion is common among the expressions of fear, anger, and surprise. When recognition conditions worsen, calmness and grief join this cluster of mutually confused expressions. The identification of happiness deserves special attention: it can be misidentified as other expressions, but other expressions are never recognized as happiness. Individual recognition accuracy varies from 0.29 to 0.80. A sufficient condition for high recognition accuracy was recognizing the expression with peripheral vision alone, without making a saccade toward the presented face image.


2014 ◽  
Vol 57 (2) ◽  
pp. 556-565 ◽  
Author(s):  
Nancy Tye-Murray ◽  
Sandra Hale ◽  
Brent Spehar ◽  
Joel Myerson ◽  
Mitchell S. Sommers

Purpose The study addressed three research questions: Does lipreading improve between the ages of 7 and 14 years? Does hearing loss affect the development of lipreading? How do individual differences in lipreading relate to other abilities? Method Forty children with normal hearing (NH) and 24 with hearing loss (HL) were tested using 4 lipreading instruments plus measures of perceptual, cognitive, and linguistic abilities. Results For both groups, lipreading performance improved with age on all 4 measures of lipreading, with the HL group performing better than the NH group. Scores from the 4 measures loaded strongly on a single principal component. Only age, hearing status, and visuospatial working memory were significant predictors of lipreading performance. Conclusions Results showed that children's lipreading ability is not fixed but rather improves between 7 and 14 years of age. The finding that children with HL lipread better than those with NH suggests experience plays an important role in the development of this ability. In addition to age and hearing status, visuospatial working memory predicts lipreading performance in children, just as it does in adults. Future research on the developmental time-course of lipreading could permit interventions and pedagogies to be targeted at periods in which improvement is most likely to occur.


2013 ◽  
Vol 13 (9) ◽  
pp. 416-416
Author(s):  
G. Kiani ◽  
J. Davies-Thompson ◽  
J. J. S. Barton

2016 ◽  
Vol 30 (4) ◽  
pp. 141-154 ◽  
Author(s):  
Kira Bailey ◽  
Gregory Mlynarczyk ◽  
Robert West

Abstract. Working memory supports our ability to maintain goal-relevant information that guides cognition in the face of distraction or competing tasks. The N-back task has been widely used in cognitive neuroscience to examine the functional neuroanatomy of working memory. Fewer studies have capitalized on the temporal resolution of event-related brain potentials (ERPs) to examine the time course of neural activity in the N-back task. The primary goal of the current study was to characterize slow wave activity observed in the response-to-stimulus interval in the N-back task that may be related to maintenance of information between trials in the task. In three experiments, we examined the effects of N-back load, interference, and response accuracy on the amplitude of the P3b following stimulus onset and slow wave activity elicited in the response-to-stimulus interval. Consistent with previous research, the amplitude of the P3b decreased as N-back load increased. Slow wave activity over the frontal and posterior regions of the scalp was sensitive to N-back load and was insensitive to interference or response accuracy. Together these findings lead to the suggestion that slow wave activity observed in the response-to-stimulus interval is related to the maintenance of information between trials in the 1-back task.
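For concreteness, the target structure of an N-back stream (which determines the maintenance demand the slow-wave findings concern) can be sketched as follows; this is a generic illustration, not the study's stimulus code.

```python
def nback_targets(stream, n):
    """Indices in a stimulus stream that are N-back targets,
    i.e., items matching the item presented n trials earlier.
    Higher n means more items must be maintained and updated each trial."""
    return [i for i in range(n, len(stream)) if stream[i] == stream[i - n]]
```

In a 1-back task each trial is compared only to its immediate predecessor, which is why between-trial maintenance in the response-to-stimulus interval, as indexed by the slow wave, is plausible there.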

