Neural Responses to Facial Expression and Face Identity in the Monkey Amygdala

2007 ◽  
Vol 97 (2) ◽  
pp. 1671-1683 ◽  
Author(s):  
K. M. Gothard ◽  
F. P. Battaglia ◽  
C. A. Erickson ◽  
K. M. Spitler ◽  
D. G. Amaral

The amygdala is purported to play an important role in face processing, yet the specificity of its activation to face stimuli and the relative contribution of identity and expression to its activation are unknown. In the current study, neural activity in the amygdala was recorded as monkeys passively viewed images of monkey faces, human faces, and objects on a computer monitor. Comparable proportions of neurons responded selectively to images from each category. Neural responses to monkey faces were further examined to determine whether face identity or facial expression drove the face-selective responses. The majority of these neurons (64%) responded both to identity and facial expression, suggesting that these parameters are processed jointly in the amygdala. Large fractions of neurons, however, showed pure identity-selective or expression-selective responses. Neurons were selective for a particular facial expression by either increasing or decreasing their firing rate compared with the firing rates elicited by the other expressions. Responses to appeasing faces were often marked by significant decreases of firing rates, whereas responses to threatening faces were strongly associated with increased firing rate. Thus global activation in the amygdala might be larger to threatening faces than to neutral or appeasing faces.
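The abstract's classification of neurons as identity-selective, expression-selective, or jointly selective can be illustrated with a two-way ANOVA on trial-by-trial firing rates, with face identity and facial expression as factors. The sketch below is only an illustration of that kind of analysis, not the authors' actual pipeline; the data frame and its values are hypothetical.

```python
# Sketch: classifying a single neuron as identity- and/or expression-selective
# with a two-way ANOVA on trial firing rates. Illustrative only; not the
# authors' exact method, and the data are made up.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

rng = np.random.default_rng(0)

# One row per stimulus presentation for one neuron
trials = pd.DataFrame({
    "identity":   ["monkeyA", "monkeyA", "monkeyB", "monkeyB"] * 10,
    "expression": ["threat", "appease", "threat", "appease"] * 10,
    "rate_hz":    np.tile([12.0, 4.5, 9.8, 5.1], 10) + rng.normal(0, 1.0, 40),
})

# Main effects of identity and expression, plus their interaction
model = ols("rate_hz ~ C(identity) * C(expression)", data=trials).fit()
anova = sm.stats.anova_lm(model, typ=2)
print(anova)

identity_selective   = anova.loc["C(identity)", "PR(>F)"] < 0.05
expression_selective = anova.loc["C(expression)", "PR(>F)"] < 0.05
print("identity-selective:", identity_selective,
      "expression-selective:", expression_selective)
```

A neuron with a significant main effect of only one factor would count as purely identity- or expression-selective; significant effects of both factors (or their interaction) would correspond to the joint coding the abstract describes for the majority of neurons.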

2020 ◽  
Author(s):  
Andrew Langbehn ◽  
Dasha Yermol ◽  
Fangyun Zhao ◽  
Christopher Thorstenson ◽  
Paula Niedenthal

According to the familiar axiom, the eyes are the window to the soul. However, wearing masks to prevent the spread of COVID-19 involves occluding a large portion of the face. Do the eyes carry all of the information we need to perceive each other’s emotions? We addressed this question in two studies. In the first, 162 Amazon Mechanical Turk (MTurk) workers saw videos of human faces displaying expressions of happiness, disgust, anger, and surprise that were fully visible or covered by N95, surgical, or cloth masks and rated the extent to which the expressions conveyed each of the four emotions. Across mask conditions, participants perceived significantly lower levels of the expressed (target) emotion and this was particularly true for expressions composed of greater facial action in the lower part of the faces. Furthermore, higher levels of other (non-target) emotions were perceived in masked compared to visible faces. In the second study, 60 MTurk workers rated the extent to which three types of smiles (reward, affiliation, and dominance smiles), either visible or masked, conveyed positive feelings, reassurance, and superiority. They reported that masked smiles communicated less of the target signal than visible faces, but not more of other possible signals. Political attitudes were not systematically associated with disruptions in the processing of facial expression caused by masking the face.


Perception ◽  
10.1068/p6291 ◽  
2009 ◽  
Vol 38 (5) ◽  
pp. 702-707
Author(s):  
Robert A Johnston ◽  
Eleanor Tomlinson ◽  
Chris Jones ◽  
Alan Weaden

The face-processing skills of people with schizophrenia were compared with those of a group of unimpaired individuals. Participants were asked to make speeded face-classification decisions to faces previously rated as being typical or distinctive. The schizophrenic group responded more slowly than the unimpaired group; however, both groups demonstrated the customary sensitivity to the distinctiveness of the face stimuli. Face-classification latencies made to typical faces were shorter than those made to distinctive faces. The implication of this finding with the schizophrenic group is discussed with reference to accounts of face-processing deficits attributed to these individuals.


2018 ◽  
Author(s):  
Adrienne Wood ◽  
Jared Martin ◽  
Martha W. Alibali ◽  
Paula Niedenthal

Recognition of affect expressed in the face is disrupted when the body expresses an incongruent affect. Existing research has documented such interference for universally recognizable bodily expressions. However, it remains unknown whether learned, conventional gestures can interfere with facial expression processing. Study 1 participants (N = 62) viewed videos of facial expressions accompanied by hand gestures and reported the valence of either the face or hand. Responses were slower and less accurate when the face-hand pairing was incongruent compared to congruent. We hypothesized that hand gestures might exert an even stronger influence on facial expression processing when other routes to understanding the meaning of a facial expression, such as with sensorimotor simulation, are disrupted. Participants in Study 2 (N = 127) completed the same task, but the facial mobility of some participants was restricted, which disrupted face processing in prior work. The hand-face congruency effect from Study 1 was replicated. The facial mobility manipulation affected males only, and it did not moderate the congruency effect. The present work suggests the affective meaning of conventional gestures is processed automatically and can interfere with face perception, but perceivers do not seem to rely more on gestures when sensorimotor face processing is disrupted.


2018 ◽  
Author(s):  
Ali Pournaghdali ◽  
Bennett L. Shwartz ◽  
Jason Scott Hays ◽  
Fabian Soto

The aim of this study was to explore whether a conservative response criterion is responsible for non-conscious perception of facial expressions during continuous flash suppression (CFS). We hypothesized that participants’ sensitivity in a detection task would be significantly lower than their sensitivity in a 2-alternative forced-choice (2AFC) task while using CFS. After rendering images of faces with different facial expressions (fearful vs. neutral) invisible for 500 milliseconds (ms) and 700 ms using CFS, participants judged the presence/absence of the faces in a detection task and the emotion of the faces in a 2AFC task. When we rendered the face stimuli invisible for 500 ms, participants discriminated the signal (fearful face) from noise (neutral face), showing higher sensitivity in the 2AFC task than in the detection task. When we rendered the face stimuli invisible for 700 ms, the difference between participants’ sensitivities in the two tasks disappeared.
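The comparison between the two tasks rests on signal detection theory: sensitivity in the yes/no detection task is computed from hit and false-alarm rates, while sensitivity in the 2AFC task can be derived from proportion correct. Below is a minimal sketch under the standard equal-variance Gaussian model; the example rates are made up for illustration and are not taken from the paper.

```python
# Sketch: comparing sensitivity (d') in a yes/no detection task vs. a 2AFC task
# under equal-variance Gaussian signal detection theory. Example hit/false-alarm
# rates and proportion correct are hypothetical.
from scipy.stats import norm

def dprime_detection(hit_rate, fa_rate):
    """d' for a yes/no detection task: z(hits) - z(false alarms)."""
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)

def dprime_2afc(prop_correct):
    """d' for a 2AFC task: sqrt(2) * z(proportion correct)."""
    return (2 ** 0.5) * norm.ppf(prop_correct)

# Hypothetical data for one participant at the 500 ms suppression duration
d_det  = dprime_detection(hit_rate=0.55, fa_rate=0.48)   # near-chance detection
d_2afc = dprime_2afc(prop_correct=0.68)                  # above-chance discrimination

print(f"detection d' = {d_det:.2f}, 2AFC d' = {d_2afc:.2f}")
# Higher 2AFC than detection sensitivity is the pattern reported at 500 ms.
```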


1998 ◽  
Vol 10 (5) ◽  
pp. 615-622 ◽  
Author(s):  
Lisa A. Parr ◽  
Tara Dove ◽  
William D. Hopkins

Five chimpanzees were tested on their ability to discriminate faces and automobiles presented in both their upright and inverted orientations. The face stimuli consisted of 30 black and white photographs, 10 each of unfamiliar chimpanzees (Pan troglodytes), brown capuchins (Cebus apella), and humans (Homo sapiens). Ten black and white photographs of automobiles were also used. The stimuli were presented in a sequential matching-to-sample (SMTS) format using a computerized joystick-testing apparatus. Subjects performed better on upright than inverted stimuli in all classes. Performance was significantly better on upright than inverted presentations of chimpanzee and human faces but not on capuchin monkey faces or automobiles. These data support previous studies in humans that suggest the inversion effect occurs for stimuli for which subjects have developed an expertise. Alternative explanations for the inversion effect based on the type of spatial frequency contained in the stimuli are also discussed. These data are the first to provide evidence for the inversion effect using several classes of face stimuli in a great ape species.


2019 ◽  
Vol 44 (2) ◽  
pp. 157-165
Author(s):  
Masahiro Hirai ◽  
Yukako Muramatsu ◽  
Miho Nakamura

Previous studies show that newborn infants and adults orient their attention preferentially toward human faces. However, the developmental changes of visual attention captured by face stimuli remain unclear, especially when an explicit top-down process is involved. We capitalized on a visual search paradigm to assess how the relative strength of visual attention captured by a non-target face stimulus and explicit attentional control on a target stimulus evolve as search progresses and how this process changes during development. Ninety children aged 5–14 years searched for a target within an array of distractors, which occasionally contained an upright face. To assess the precise picture of developmental changes, we measured: (1) manual responses, such as reaction time and accuracy; and (2) eye movements such as the location of the first fixation, which reflect the attentional profile at the initial stage, and looking times, which reflect the attentional profile at the later period of searching. Both reaction time and accuracy were affected by the presence of the target-unrelated face, though the interference effect was observed consistently across ages. However, developmental changes were captured by the first fixation proportion, suggesting that initial attention was preferentially directed towards the target-unrelated face before 6.9 years of age. Furthermore, prior to 12.8 years of age, the first fixation towards face stimuli was significantly more frequent than for object stimuli. In contrast, the looking time proportion for the face stimuli was significantly higher than that for the objects across all ages. These findings suggest that developmental changes do not influence the later search periods during a trial, but that they influence the initial orienting indexed by the first fixation. Moreover, the manual responses are tightly linked to eye movement behaviors.
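The two eye-movement measures described above reduce to simple proportions: how often the first fixation of a trial lands on the task-irrelevant face, and what share of total looking time it attracts. The sketch below shows one way to compute them from a fixation-level table; the data frame and column names are hypothetical, and the study's preprocessing is not reproduced here.

```python
# Sketch: computing the first-fixation proportion and looking-time proportion
# for the face distractor from a fixation-level table (hypothetical data).
import pandas as pd

# One row per fixation: trial, order within the trial, region of interest, duration
fixations = pd.DataFrame({
    "trial":    [1, 1, 1, 2, 2, 3, 3, 3],
    "order":    [1, 2, 3, 1, 2, 1, 2, 3],
    "roi":      ["face", "target", "object", "object", "target",
                 "face", "object", "target"],
    "duration": [180, 220, 150, 200, 240, 160, 170, 230],  # ms
})

# Proportion of trials whose first fixation lands on the face distractor
first_fix = fixations.sort_values("order").groupby("trial").first()
first_fix_face_prop = (first_fix["roi"] == "face").mean()

# Proportion of total looking time spent on the face distractor
looking = fixations.groupby("roi")["duration"].sum()
face_looking_prop = looking.get("face", 0) / looking.sum()

print(f"first-fixation-on-face proportion: {first_fix_face_prop:.2f}")
print(f"looking-time proportion on face:   {face_looking_prop:.2f}")
```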


Author(s):  
Narjes Soltani Dehaghani ◽  
Burkhard Maess ◽  
Reza Khosrowabadi ◽  
Mojtaba Zarei ◽  
...  

Faces are processed rapidly even though they convey an immense amount of information, which makes them special stimuli in psychophysiological experiments. Numerous studies have investigated the electrophysiological correlates of face processing, showing the existence of multiple event-related components. Nevertheless, how processing differs across its various levels is still controversial. In the present study, we used magnetoencephalography (MEG) to examine how face processing differs from object processing during perception and recognition, and we determined 95% confidence intervals for the onset and peak times of the effects we found. Our results confirm face-selectivity for the M170 component, but not always for the M100 component. Additionally, we observed a distinct timing pattern for the M170 component in perception and recognition, both at the onset and at the peak.
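The 95% confidence intervals for onset and peak times mentioned above can be obtained in several ways; one common approach is to bootstrap across trials, recomputing the effect's onset (first threshold crossing) and peak latency in each resample. The sketch below illustrates that approach on simulated data; it is an assumption for illustration, not necessarily the procedure used in the study.

```python
# Sketch: bootstrap 95% confidence intervals for the onset and peak latency of
# an evoked effect. Simulated data; not the study's actual pipeline.
import numpy as np

rng = np.random.default_rng(1)
times = np.arange(-0.1, 0.4, 0.001)            # seconds, 1 kHz sampling
n_trials = 200

# Simulated single-trial effect time courses: a component peaking near 170 ms
template = np.exp(-0.5 * ((times - 0.17) / 0.03) ** 2)
trials = template + rng.normal(0, 0.8, (n_trials, times.size))

def onset_and_peak(mean_tc, threshold=0.5):
    """Onset = first time the mean crosses `threshold`; peak = time of maximum."""
    above = np.flatnonzero(mean_tc > threshold)
    onset = times[above[0]] if above.size else np.nan
    return onset, times[np.argmax(mean_tc)]

boot_onsets, boot_peaks = [], []
for _ in range(2000):
    idx = rng.integers(0, n_trials, n_trials)   # resample trials with replacement
    onset, peak = onset_and_peak(trials[idx].mean(axis=0))
    boot_onsets.append(onset)
    boot_peaks.append(peak)

onset_ci = np.nanpercentile(boot_onsets, [2.5, 97.5])
peak_ci = np.percentile(boot_peaks, [2.5, 97.5])
print(f"onset 95% CI: {onset_ci[0]*1000:.0f}-{onset_ci[1]*1000:.0f} ms")
print(f"peak  95% CI: {peak_ci[0]*1000:.0f}-{peak_ci[1]*1000:.0f} ms")
```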


Author(s):  
S. Mary Hima Preethi ◽  
P. Sobha ◽  
P. Rajalakshmi Kamalini ◽  
K. Gowri Raghavendra Narayan

People have consistently been able to perceive and recognize faces and the emotions they express; computers can now do the same. We propose a model that detects human faces and classifies the facial expression as happy, angry, sad, neutral, surprised, disgusted, or fearful. The model is built using a convolutional neural network (CNN) and involves several stages. All of this is carried out using the fer2013 dataset available in the Kaggle repository. The accuracy and performance of the network are assessed using a confusion matrix. We applied cross-validation to determine the optimal hyperparameters and evaluated the performance of the trained models by examining their training histories.
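Below is a minimal sketch of the kind of CNN pipeline the abstract describes, written with Keras: fer2013 images are 48×48 grayscale with seven expression classes, and a confusion matrix summarizes classification performance. The layer sizes, training settings, and dummy data are illustrative assumptions, not the authors' exact architecture.

```python
# Sketch: a minimal CNN for fer2013-style facial expression classification
# (48x48 grayscale images, 7 classes: angry, disgust, fear, happy, sad,
# surprise, neutral). Layer sizes and settings are illustrative assumptions.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 7

def build_model():
    model = models.Sequential([
        layers.Input(shape=(48, 48, 1)),
        layers.Conv2D(32, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(),
        layers.Conv2D(128, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dropout(0.5),
        layers.Dense(NUM_CLASSES, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

# Dummy stand-in data with the fer2013 shape; the real dataset would be loaded
# from the Kaggle CSV (pixel strings reshaped to 48x48) instead.
x = np.random.rand(256, 48, 48, 1).astype("float32")
y = np.random.randint(0, NUM_CLASSES, size=256)

model = build_model()
history = model.fit(x, y, validation_split=0.2, epochs=2, batch_size=64)

# Confusion matrix on (here, the same dummy) data, as the abstract describes
preds = model.predict(x).argmax(axis=1)
cm = tf.math.confusion_matrix(y, preds, num_classes=NUM_CLASSES)
print(cm.numpy())
```

Cross-validation over hyperparameters, as mentioned in the abstract, would wrap build_model and model.fit in a loop over candidate settings and held-out folds.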


2019 ◽  
Author(s):  
Kelsey M. Tyssowski ◽  
Katherine C. Letai ◽  
Samuel D. Rendall ◽  
Anastasia Nizhnik ◽  
Jesse M. Gray

Despite dynamic inputs, neuronal circuits maintain relatively stable firing rates over long periods. This maintenance of firing rate, or firing rate homeostasis, is likely mediated by homeostatic mechanisms such as synaptic scaling and regulation of intrinsic excitability. Because some of these homeostatic mechanisms depend on transcription of activity-regulated genes, including Arc and Homer1a, we hypothesized that activity-regulated transcription would be required for firing rate homeostasis. Surprisingly, however, we found that cultured mouse cortical neurons grown on multi-electrode arrays homeostatically adapt their firing rates to persistent pharmacological stimulation even when activity-regulated transcription is disrupted. Specifically, we observed firing rate homeostasis in Arc knock-out neurons, as well as in knock-out neurons lacking the activity-regulated transcription factors AP1 and SRF. Firing rate homeostasis also occurred normally during acute pharmacological blockade of transcription. Thus, firing rate homeostasis in response to increased neuronal activity can occur in the absence of neuronal-activity-regulated transcription. SIGNIFICANCE STATEMENT: Neuronal circuits maintain relatively stable firing rates even in the face of dynamic circuit inputs. Understanding the molecular mechanisms that enable this firing rate homeostasis could potentially provide insight into neuronal diseases that present with an imbalance of excitation and inhibition. However, the molecular mechanisms underlying firing rate homeostasis are largely unknown. It has long been hypothesized that firing rate homeostasis relies upon neuronal activity-regulated transcription. For example, a 2012 review (PMID 22685679) proposed it, and a 2014 modeling approach established that transcription could theoretically both measure and control firing rate (PMID 24853940). Surprisingly, despite this prediction, we found that cortical neurons undergo firing rate homeostasis in the absence of activity-regulated transcription, indicating that firing rate homeostasis is controlled by non-transcriptional mechanisms.


Author(s):  
Lara Speer ◽  
Miriam Schuler ◽  
Julian Keil ◽  
James K. Moran ◽  
Pierre Pantazidis ◽  
...  

Child sexual abuse offences (CSOs) represent a severe ethical and socioeconomic burden for society. Juveniles with a sexual preference for prepubescent children (PP) commit a large percentage of CSOs, but have been widely neglected in neuroscience research. Aberrant neural responses to face stimuli have been observed in men with pedophilic interest. Thus far, it is unknown whether such aberrations exist already in PP. A passive face-viewing paradigm, including the presentation of child and adult faces, was deployed and high-density electroencephalography data were recorded. The study group comprised 25 PP and the control group involved 22 juveniles with age-adequate sexual preference. Attractiveness ratings and evoked brain responses were obtained for the face stimuli. An aberrant pattern of attractiveness ratings for child vs. adult faces was found in the PP group. Moreover, elevated occipital P1 amplitudes were observed for adult vs. child faces in both groups. At longer latency (340–426 ms), a stronger negative deflection to child vs. adult faces, which was source localized in higher visual, parietal and frontal regions, was specifically observed in the PP group. Our study provides evidence for enhanced neural processing of child face stimuli in PP, which might reflect elevated attention capture of face stimuli depicting members from the sexually preferred age group. This study expands our understanding of the neural foundations underlying sexual interest in prepubescent children and provides a promising path for the uncovering of objective biomarkers of sexual responsiveness to childlike body schemes in juveniles.

