The effects of spatial frequency on the decoding of emotional facial expressions

2021
Author(s):  
Jordan Wylie

The accurate detection of emotion is critical to effectively navigating our social lives. However, it is not clear how distinct types of visual information afford the accurate perception of others’ emotion states. Here, we sought to examine the influence of different spatial frequency visual information on emotion categorization, and whether distinct emotional dimensions (valence and arousal) are differentially influenced by specific spatial frequency content. Across one pilot and two experiments (N = 603), we tested whether emotional facial expressions that vary in valence, arousal, and motivational direction differ in accuracy of categorization as a function of low, intact, and high spatial frequency band information. Overall, we found a general decrease in categorization accuracy across the breadth of emotional expressions for filtered images, but no such decrease for the positive emotion, joy. Together, these results suggest that spatial frequency information influences perception of emotional expressions that differ in valence, arousal, and motivational direction.
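The low- and high-spatial-frequency manipulations described throughout these abstracts are typically implemented by filtering face images in the Fourier domain. A minimal sketch of that generic technique, assuming a square grayscale image and an illustrative cutoff (the actual cutoffs and filter shapes differ between the studies above):

```python
import numpy as np

def spatial_frequency_filter(image, cutoff, mode="low"):
    """Keep only low (or high) spatial frequencies of a grayscale image.

    image  : 2-D numpy array
    cutoff : radius in cycles per image; frequencies at or below it count as 'low'
    mode   : 'low' keeps frequencies <= cutoff, 'high' keeps the rest
    """
    h, w = image.shape
    fy = np.fft.fftfreq(h) * h          # vertical frequency, cycles per image
    fx = np.fft.fftfreq(w) * w          # horizontal frequency, cycles per image
    radius = np.sqrt(fy[:, None] ** 2 + fx[None, :] ** 2)
    mask = radius <= cutoff if mode == "low" else radius > cutoff
    spectrum = np.fft.fft2(image)
    return np.real(np.fft.ifft2(spectrum * mask))

# Example with a synthetic 64x64 'image'; the two complementary bands
# sum back to the original, since the masks partition the spectrum.
img = np.random.rand(64, 64)
lsf = spatial_frequency_filter(img, cutoff=8, mode="low")
hsf = spatial_frequency_filter(img, cutoff=8, mode="high")
assert np.allclose(lsf + hsf, img)
```

A hard circular mask like this rings visibly in practice; published studies usually use smooth (e.g. Gaussian or Butterworth) filters, but the low/high split is the same idea.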

2010
Vol 21 (7)
pp. 901-907
Author(s):
Maital Neta
Paul J. Whalen

Low-spatial-frequency (LSF) visual information is processed in an elemental fashion before a finer analysis of high-spatial-frequency information. Further, the amygdala is particularly responsive to LSF information contained within negative (e.g., fearful) facial expressions. In a separate line of research, it has been shown that surprised facial expressions are ambiguous in that they can be interpreted as either negatively or positively valenced. More negative interpretations of surprise are associated with increased ventral amygdala activity. In this report, we show that LSF presentations of surprised expressions bias their interpretation in a negative direction, suggesting that negative interpretations are first and fast during the resolution of ambiguous valence. We also examined the influence of subjects’ positivity-negativity bias on this effect.


Author(s):  
Shozo Tobimatsu

There are two major parallel pathways in humans: the parvocellular (P) and magnocellular (M) pathways. The former has excellent spatial resolution with color selectivity, while the latter shows excellent temporal resolution with high contrast sensitivity. Visual stimuli should be tailored to answer specific clinical and/or research questions. This chapter examines the neural mechanisms of face perception using event-related potentials (ERPs). Face stimuli of different spatial frequencies were used to investigate how low-spatial-frequency (LSF) and high-spatial-frequency (HSF) components of the face contribute to the identification and recognition of the face and facial expressions. The P100 component in the occipital area (Oz), the N170 in the posterior temporal region (T5/T6) and late components peaking at 270–390 ms (T5/T6) were analyzed. LSF enhanced P100, while N170 was augmented by HSF irrespective of facial expressions. This suggested that LSF is important for global processing of facial expressions, whereas HSF handles featural processing. There were significant amplitude differences between positive and negative LSF facial expressions in the early time windows of 270–310 ms. Subsequently, the amplitudes among negative HSF facial expressions differed significantly in the later time windows of 330–390 ms. Discrimination between positive and negative facial expressions thus precedes discrimination among different negative expressions in a sequential manner based on parallel visual channels. Interestingly, patients with schizophrenia showed decreased spatial frequency sensitivities for face processing. Taken together, spatially filtered face images are useful for exploring face perception and recognition.


2007
Vol 38 (10)
pp. 1475-1483
Author(s):
K. S. Kendler
L. J. Halberstadt
F. Butera
J. Myers
T. Bouchard
...  

Background: While the role of genetic factors in self-report measures of emotion has been frequently studied, we know little about the degree to which genetic factors influence emotional facial expressions. Method: Twenty-eight pairs of monozygotic (MZ) and dizygotic (DZ) twins from the Minnesota Study of Twins Reared Apart were shown three emotion-inducing films and their facial responses were recorded. These recordings were blindly scored by trained raters. Ranked correlations between twins were calculated, controlling for age and sex. Results: Twin pairs were significantly correlated for facial expressions of general positive emotions, happiness, surprise and anger, but not for general negative emotions, sadness, disgust, or average emotional intensity. MZ pairs (n = 18) were more correlated than DZ pairs (n = 10) for most but not all emotional expressions. Conclusions: Since these twin pairs had minimal contact with each other prior to testing, these results support significant genetic effects on the facial display of at least some human emotions in response to standardized stimuli. The small sample size resulted in estimated twin correlations with very wide confidence intervals.


2011
Vol 12 (1)
pp. 77-77
Author(s):
Sharpley Hsieh
Olivier Piguet
John R. Hodges

Introduction: Frontotemporal dementia (FTD) is a progressive neurodegenerative brain disease characterised clinically by abnormalities in behaviour, cognition and language. Two subgroups, behavioural-variant FTD (bvFTD) and semantic dementia (SD), also show impaired emotion recognition, particularly for negative emotions. This deficit has been demonstrated using visual stimuli such as facial expressions. Whether recognition of emotions conveyed through other modalities — for example, music — is also impaired has not been investigated. Methods: Patients with bvFTD, SD and Alzheimer's disease (AD), as well as healthy age-matched controls, labeled tunes according to the emotion conveyed (happy, sad, peaceful or scary). In addition, each tune was also rated along two orthogonal emotional dimensions: valence (pleasant/unpleasant) and arousal (stimulating/relaxing). Participants also undertook a facial emotion recognition test and other cognitive tests. Integrity of basic music detection (tone, tempo) was also examined. Results: Patient groups were matched for disease severity. Overall, patients did not differ from controls with regard to basic music processing or for the recognition of facial expressions. Ratings of valence and arousal were similar across groups. In contrast, SD patients were selectively impaired at recognising music conveying negative emotions (sad and scary). Patients with bvFTD did not differ from controls. Conclusion: Recognition of emotions in music appears to be selectively affected in some FTD subgroups more than others, a disturbance of emotion detection which appears to be modality specific. This finding suggests dissociation in the neural networks necessary for the processing of emotions depending on modality.


Perception
1997
Vol 26 (8)
pp. 1047-1058
Author(s):
Howard C Hughes
David M Aronchick
Michael D Nelson

It has previously been observed that low spatial frequencies (≤ 1.0 cycles deg⁻¹) tend to dominate high spatial frequencies (≥ 5.0 cycles deg⁻¹) in several types of visual-information-processing tasks. This earlier work employed reaction times as the primary performance measure and the present experiments address the possibility of low-frequency dominance by evaluating visually guided performance of a completely different response system: the control of slow-pursuit eye movements. Slow-pursuit gains (eye velocity/stimulus velocity) were obtained while observers attempted to track the motion of a sine-wave grating. The drifting gratings were presented on three types of background: a uniform background, a background consisting of a stationary grating, or a flickering background. Low-frequency dominance was evident over a wide range of velocities, in that a stationary high-frequency component produced little disruption in the pursuit of a drifting low spatial frequency, but a stationary low frequency interfered substantially with the tracking of a moving high spatial frequency. Pursuit was unaffected by temporal modulation of the background, suggesting that these effects are due to the spatial characteristics of the stationary grating. Similar asymmetries were observed with respect to the stability of fixation: active fixation was less stable in the presence of a drifting low frequency than in the presence of a drifting high frequency.


2016
Vol 29 (8)
pp. 749-771
Author(s):
Min Hooi Yong
Ted Ruffman

Dogs respond to human emotional expressions. However, it is unknown whether dogs can match emotional faces to voices in an intermodal matching task or whether they show preferences for looking at certain emotional facial expressions over others, similar to human infants. We presented 52 domestic dogs and 24 seven-month-old human infants with two different human emotional facial expressions of the same gender simultaneously, while listening to a human voice expressing an emotion that matched one of them. Consistent with most matching studies, neither dogs nor infants looked longer at the matching emotional stimuli, yet dogs and humans demonstrated an identical pattern of looking less at sad faces when paired with happy or angry faces (irrespective of the vocal stimulus), with no preference for happy versus angry faces. Discussion focuses on why dogs and infants might have an aversion to sad faces, or alternatively, heightened interest in angry and happy faces.


2015
Vol 45 (10)
pp. 2111-2122
Author(s):
W. Li
T. M. Lai
C. Bohon
S. K. Loo
D. McCurdy
...  

Background: Anorexia nervosa (AN) and body dysmorphic disorder (BDD) are characterized by distorted body image and are frequently co-morbid with each other, although their relationship remains little studied. While there is evidence of abnormalities in visual and visuospatial processing in both disorders, no study has directly compared the two. We used two complementary modalities – event-related potentials (ERPs) and functional magnetic resonance imaging (fMRI) – to test for abnormal activity associated with early visual signaling. Method: We acquired fMRI and ERP data in separate sessions from 15 unmedicated individuals in each of three groups (weight-restored AN, BDD, and healthy controls) while they viewed images of faces and houses of different spatial frequencies. We used joint independent component analyses to compare activity in visual systems. Results: AN and BDD groups demonstrated similar hypoactivity in early secondary visual processing regions and the dorsal visual stream when viewing low spatial frequency faces, linked to the N170 component, as well as in early secondary visual processing regions when viewing low spatial frequency houses, linked to the P100 component. Additionally, the BDD group exhibited hyperactivity in fusiform cortex when viewing high spatial frequency houses, linked to the N170 component. Greater activity in this component was associated with lower attractiveness ratings of faces. Conclusions: Results provide preliminary evidence of similar abnormal spatiotemporal activation in AN and BDD for configural/holistic information for appearance- and non-appearance-related stimuli. This suggests a common phenotype of abnormal early visual system functioning, which may contribute to perceptual distortions.


2020
Author(s):
Sjoerd Stuit
Timo Kootstra
David Terburg
Carlijn van den Boomen
Maarten van der Smagt
...  

Abstract: Emotional facial expressions are important visual communication signals that indicate a sender’s intent and emotional state to an observer. As such, it is not surprising that reactions to different expressions are thought to be automatic and independent of awareness. What is surprising is that studies show inconsistent results concerning such automatic reactions, particularly when using different face stimuli. We argue that automatic reactions to facial expressions can be better explained, and better understood, in terms of quantitative descriptions of their visual features rather than in terms of the semantic labels (e.g. angry) of the expressions. Here, we focused on overall spatial frequency (SF) and localized Histograms of Oriented Gradients (HOG) features. We used machine learning classification to reveal the SF and HOG features that are sufficient for classification of the first selected face out of two simultaneously presented faces. In other words, we show which visual features predict selection between two faces. Interestingly, the identified features serve as better predictors than the semantic labels of the expressions. We therefore propose that our modelling approach can further specify which visual features drive the behavioural effects related to emotional expressions, which can help solve the inconsistencies found in this line of research.
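The HOG features referred to above can be computed, in simplified form, with plain NumPy: the image is divided into cells, and each cell contributes a gradient-magnitude-weighted histogram of gradient orientations. A minimal sketch (the cell size, bin count, and synthetic input are illustrative choices, not the parameters used in the study):

```python
import numpy as np

def hog_descriptor(image, cell=8, bins=9):
    """Simplified Histogram of Oriented Gradients.

    Splits the image into `cell` x `cell` pixel cells and, for each cell,
    builds a gradient-magnitude-weighted histogram of unsigned gradient
    orientations (0-180 degrees). Returns a flat feature vector.
    """
    gy, gx = np.gradient(image.astype(float))         # pixel gradients
    magnitude = np.hypot(gx, gy)
    orientation = np.rad2deg(np.arctan2(gy, gx)) % 180.0
    h, w = image.shape
    features = []
    for y in range(0, h - cell + 1, cell):
        for x in range(0, w - cell + 1, cell):
            mag = magnitude[y:y + cell, x:x + cell]
            ori = orientation[y:y + cell, x:x + cell]
            hist, _ = np.histogram(ori, bins=bins, range=(0, 180), weights=mag)
            # normalize each cell so contrast changes matter less
            features.append(hist / (np.linalg.norm(hist) + 1e-6))
    return np.concatenate(features)

# Example: a 32x32 'face' with cell=8 yields 4x4 cells of 9 bins each,
# i.e. a 144-dimensional vector that could feed any standard classifier.
face = np.random.rand(32, 32)
vec = hog_descriptor(face)
assert vec.shape == (144,)
```

Full HOG implementations additionally normalize histograms over overlapping blocks of cells; this sketch omits that step for brevity.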



