ERP Responses to Facial Affect and Temperament Types in Eysenckian and Strelauvian Theories

2012 ◽  
Vol 33 (4) ◽  
pp. 212-226 ◽  
Author(s):  
Małgorzata Fajkowska ◽  
Anna Zagórska ◽  
Jan Strelau ◽  
Piotr Jaśkowski

Eysenck’s PEN and Strelau’s RTT theories are considered interrelated on the level of traits and translatable on the level of the four ancient temperament types. However, they refer to different ways of regulating stimulation: by the content (emotional and social) and by the formal (energetic and temporal) characteristics of activity, respectively. Thus, by indexing behavioral and cortical patterns of response, it was predicted that PEN- and RTT-relevant pairs of temperaments would be associated with specific attentional mechanisms. One week after administration of the FCB-TI and EPQ-R, 260 subjects performed the Emotional Go/No Go task while a 32-channel EEG was recorded. They were instructed to respond to threatening, sad, or friendly faces (depending on condition), but not to any other facial expression. A range of ERP components responsive to facial stimuli was investigated. According to behavioral and cortical patterns of response, PEN- and RTT-related pairs of temperament types were connected with effective functioning of the anterior and posterior attentional systems, respectively. On the behavioral level, significant differences in attentional processing of facial affect were registered in PEN sanguines versus RTT sanguines and PEN melancholics versus RTT melancholics, while on the cortical level significant differences were registered in PEN melancholics versus RTT melancholics and PEN phlegmatics versus RTT phlegmatics. Given these results, the theoretical relations between the PEN and RTT – with particular respect to the cognitive and cortical mechanisms underlying temperament types – are discussed.
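
The behavioral side of such a go/no-go task is conventionally summarized as hit and false-alarm rates, often combined into a sensitivity index d′. The study's actual scoring procedure is not described above, so the following is only a minimal sketch of that generic signal-detection summary; the trial counts are hypothetical:

```python
# Minimal sketch: hit rate, false-alarm rate, and d' for a go/no-go task.
# Trial counts are hypothetical illustrations, not data from the study.
from statistics import NormalDist

def go_nogo_dprime(hits, go_trials, false_alarms, nogo_trials):
    """d' with a simple correction so rates never hit exactly 0 or 1."""
    hit_rate = (hits + 0.5) / (go_trials + 1)
    fa_rate = (false_alarms + 0.5) / (nogo_trials + 1)
    z = NormalDist().inv_cdf  # inverse of the standard normal CDF
    return z(hit_rate) - z(fa_rate)

# Example: 45 hits on 50 go trials, 5 false alarms on 50 no-go trials.
print(f"d' = {go_nogo_dprime(45, 50, 5, 50):.2f}")  # d' = 2.48
```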

1986 ◽  
Vol 13 (2) ◽  
pp. 85-87 ◽  
Author(s):  
Blaine F. Peden ◽  
Gene D. Steinhauer

This article describes an exercise that teaches students about methodological issues involved in making reliable observations of behavior. After learning Ekman's (1972) Facial Affect Scoring Technique from a microcomputer program simulating expressions of emotion, students recorded the facial expression, gender, and age of people in natural settings, computed interobserver agreement scores, and submitted a laboratory report. This exercise generated much discussion about research methods, transferred skills from the classroom to a research setting, and illustrated our view that the microcomputer is a new tool that supplements, but does not replace, existing instructional techniques.
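
The interobserver agreement score the students computed is most commonly the percentage of observations on which two observers' codes match. The article's exact formula is not reproduced above, so this is a minimal sketch under that standard assumption, with hypothetical coding records:

```python
# Minimal sketch: percent interobserver agreement between two observers
# coding the same observations. The coding records are hypothetical.

def percent_agreement(observer_a, observer_b):
    """Return percent agreement between two equal-length coding records."""
    if len(observer_a) != len(observer_b):
        raise ValueError("records must cover the same observations")
    agreements = sum(a == b for a, b in zip(observer_a, observer_b))
    return 100.0 * agreements / len(observer_a)

# Example: two students coding the same ten faces.
obs_a = ["happy", "sad", "happy", "fear", "happy",
         "sad", "happy", "happy", "fear", "sad"]
obs_b = ["happy", "sad", "happy", "happy", "happy",
         "sad", "happy", "sad", "fear", "sad"]
print(f"Agreement: {percent_agreement(obs_a, obs_b):.1f}%")  # Agreement: 80.0%
```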


1985 ◽  
Vol 56 (2) ◽  
pp. 653-654 ◽  
Author(s):  
M. K. Mandal ◽  
S. Palchoudhury

An estimate of verbosity derived from a 2-min. free response to photographs depicting facial affect was examined in a 2 (depressed vs. control) × 3 (happy vs. sad vs. fear) × 2 (red vs. blue) factorial design. The 30 depressed patients produced significantly fewer comments than the 30 control subjects. The sad face elicited significantly larger vocabularies than the other facial emotions, while presenting the photographs in red or blue produced a nonsignificant effect.
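
For readers who want to mirror this 2 × 3 × 2 factorial analysis on their own data, a minimal sketch with statsmodels follows; the column names and input file are illustrative assumptions, not materials from the study:

```python
# Minimal sketch: a 2 (group) x 3 (emotion) x 2 (color) between-subjects
# ANOVA on a verbosity measure. File and column names are hypothetical.
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

# Assumed layout: one row per observation with columns
# group, emotion, color, verbosity.
df = pd.read_csv("verbosity_scores.csv")  # hypothetical data file

model = ols("verbosity ~ C(group) * C(emotion) * C(color)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))  # main effects and interactions
```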


2007 ◽  
Vol 21 (2) ◽  
pp. 100-108 ◽  
Author(s):  
Michela Balconi ◽  
Claudio Lucchiari

Abstract. In this study we analyze whether facial expression recognition is marked by specific event-related potential (ERP) correlates and whether conscious and unconscious processing of emotional facial stimuli are qualitatively different processes. ERPs elicited by supraliminal and subliminal (10 ms) stimuli were recorded while subjects viewed facial expressions of four emotions or neutral stimuli. Two ERP effects (N2 and P3) were analyzed in terms of their peak amplitude and latency variations. An emotional specificity was observed for the negative deflection N2, whereas P3 was not affected by the content of the stimulus (emotional or neutral). Unaware information processing proved to be quite similar to aware processing in terms of peak morphology but not of latency: a major result of this research was that unconscious stimulation produced a more delayed peak than conscious stimulation did. Also, a more posterior distribution of the ERP was found for N2 as a function of the emotional content of the stimulus. By contrast, cortical lateralization (right/left) was not correlated with conscious/unconscious stimulation. The functional significance of these results for subliminal perception and emotion recognition is discussed.
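
Peak amplitude and latency of components such as N2 and P3 are conventionally read off the averaged waveform within a component-specific search window. The sketch below shows that generic procedure only; the sampling rate, window bounds, and synthetic waveform are assumptions for illustration, not the study's parameters:

```python
# Minimal sketch: peak amplitude and latency of an ERP component within
# a search window. All numeric parameters here are illustrative.
import numpy as np

def peak_in_window(erp, times, t_min, t_max, polarity="negative"):
    """Return (amplitude, latency) of the extreme point in [t_min, t_max]."""
    mask = (times >= t_min) & (times <= t_max)
    segment = erp[mask]
    idx = segment.argmin() if polarity == "negative" else segment.argmax()
    return segment[idx], times[mask][idx]

# Example: a synthetic averaged waveform sampled at 500 Hz, -100..800 ms,
# with a fake N2-like negative deflection around 250 ms.
times = np.arange(-0.1, 0.8, 1 / 500)
erp = -3e-6 * np.exp(-((times - 0.25) ** 2) / (2 * 0.03**2))
amp, lat = peak_in_window(erp, times, 0.18, 0.32, polarity="negative")
print(f"N2 peak: {amp * 1e6:.2f} uV at {lat * 1000:.0f} ms")
```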


2019 ◽  
Vol 121 (4) ◽  
pp. 1368-1380 ◽  
Author(s):  
Donatas Jonikaitis ◽  
Saurabh Dhawan ◽  
Heiner Deubel

Motor responses are fundamentally spatial in their function and neural organization. However, studies of inhibitory motor control, focused on global stopping of all actions, have ignored whether inhibitory control can be exercised selectively for specific actions. We used a new approach to elicit and measure motor inhibition by asking human participants to either look at (select) or avoid looking at (inhibit) a location in space. We found that instructing a location to be avoided resulted in an inhibitory bias specific to that location. When compared with the facilitatory bias observed in the Look task, it differed significantly in both its spatiotemporal dynamics and its modulation of attentional processing. While action selection was evident in the oculomotor system and interacted with attentional processing, action inhibition was evident mainly in the oculomotor system. Our findings suggest that action inhibition is implemented by spatially specific mechanisms that are separate from action selection. NEW & NOTEWORTHY We show that cognitive control of saccadic responses evokes separable action selection and inhibition processes. Both action selection and inhibition are represented in the saccadic system, but only action selection interacts with the attentional system.


2021 ◽  
Author(s):  
Zoé Cayol ◽ 
Tatjana Nazir

The Facial Expression Intensity Test (FExIT) measures the level of perceived intensity of emotional cues in a given facial expression. The test consists of a series of faces taken from the NimStim set (Tottenham et al., 2009) whose expressions vary from a neutral expression to one of the six basic emotions, with ten levels of morphing. The FExIT is validated by means of an emotion-related ERP component (i.e., the early posterior negativity, EPN), which shows a systematic modulation of its amplitude with the level of expression intensity. The participant’s task in this test is to identify the expressed emotion among 8 options (i.e., the six basic emotions, a “neutral” and an “I don’t know” option). The task is not timed. The score of the FExIT is either the proportion of correctly identified emotions, or the proportion of the attribution of an emotion to the facial stimulus (i.e., the attribution of any emotion but not “neutral” or “I don’t know”). Given that the facial expression intensity varies continuously from low to high, the FExIT allows the determination and comparison of threshold levels for correct responses. The freely accessible set of the 700 facial stimuli for the test is divided into two equivalent face lists, which further allows for pretest/posttest experimental designs. The test takes approximately 25 min to complete and is simple to administer. The FExIT is thus a useful instrument for testing different experimental settings and populations.
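
Both FExIT scores described above (the proportion of correctly identified emotions, and a threshold level for correct responses) reduce to simple tallies over trials. A minimal sketch follows, under assumed trial fields and an assumed 70% accuracy criterion; the published test's exact criterion and data format are not stated above:

```python
# Minimal sketch: scoring FExIT-style responses and estimating the lowest
# morph intensity meeting an accuracy criterion. Trial fields and the
# 70% criterion are illustrative assumptions.
from collections import defaultdict

def proportion_correct(trials):
    """trials: list of dicts with 'target' and 'response' keys."""
    hits = sum(t["response"] == t["target"] for t in trials)
    return hits / len(trials)

def recognition_threshold(trials, criterion=0.70):
    """Lowest intensity level whose accuracy meets the criterion."""
    by_level = defaultdict(list)
    for t in trials:
        by_level[t["intensity"]].append(t["response"] == t["target"])
    for level in sorted(by_level):
        acc = sum(by_level[level]) / len(by_level[level])
        if acc >= criterion:
            return level
    return None  # criterion never reached

# Example: toy trials at three morph levels.
trials = [
    {"target": "fear", "response": "neutral", "intensity": 2},
    {"target": "fear", "response": "fear", "intensity": 6},
    {"target": "fear", "response": "fear", "intensity": 10},
]
print(proportion_correct(trials))     # 0.666...
print(recognition_threshold(trials))  # 6
```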


Salud Mental ◽  
2014 ◽  
Vol 37 (6) ◽  
pp. 455 ◽ 
Author(s):  
Iván Arango de Montis ◽  
Ana Fresán ◽  
Martin Brüne ◽  
Vida Ortega-Font ◽  
Javier Villanueva-Valle ◽  
...  

Facial emotion recognition in mental health professionals may be influenced by psychological state and attachment experiences. The aim of the present study was to examine the association of psychological symptoms and attachment styles with the ability of psychiatry residents to correctly identify facial expressions of emotion across three of the four years of their psychiatric training. The sample comprised 16 psychiatry residents at a specialized mental health center. Psychiatric symptoms were assessed with the SCL-90, and attachment styles with the Attachment Style Questionnaire (ASQ). Emotion recognition was assessed with the Pictures of Facial Affect (POFA). Throughout the psychiatric residency, the severity of psychiatric symptoms remained minimal in all participants. Fear was the least recognized emotion both at baseline and during the third year of residency, whereas the neutral expression was the best recognized at both time points. Significant changes over time were observed in the recognition of sadness and disgust. No significant associations were found between time and symptoms of anxiety and depression or attachment styles.


Author(s):  
Maja Pantic

The human face is involved in an impressive variety of different activities. It houses the majority of our sensory apparatus: eyes, ears, mouth, and nose, allowing the bearer to see, hear, taste, and smell. Apart from these biological functions, the human face provides a number of signals essential for interpersonal communication in our social life. The face houses the speech production apparatus and is used to identify other members of the species, to regulate the conversation by gazing or nodding, and to interpret what has been said by lip reading. It is our direct and naturally preeminent means of communicating and understanding somebody’s affective state and intentions on the basis of the shown facial expression (Lewis & Haviland-Jones, 2000). Personality, attractiveness, age, and gender can also be seen from someone’s face. Thus the face is a multisignal sender/receiver capable of tremendous flexibility and specificity. In general, the face conveys information via four kinds of signals listed in Table 1.

Automating the analysis of facial signals, especially rapid facial signals, would be highly beneficial for fields as diverse as security, behavioral science, medicine, communication, and education. In security contexts, facial expressions play a crucial role in establishing or detracting from credibility. In medicine, facial expressions are the direct means to identify when specific mental processes are occurring. In education, pupils’ facial expressions inform the teacher of the need to adjust the instructional message. As far as natural user interfaces between humans and computers (PCs/robots/machines) are concerned, facial expressions provide a way to communicate basic information about needs and demands to the machine. In fact, automatic analysis of rapid facial signals seems to have a natural place in various vision subsystems and vision-based interfaces (face-for-interface tools), including automated tools for gaze and focus of attention tracking, lip reading, bimodal speech processing, face/visual speech synthesis, face-based command issuing, and facial affect processing.

Where the user is looking (i.e., gaze tracking) can be effectively used to free computer users from the classic keyboard and mouse. Also, certain facial signals (e.g., a wink) can be associated with certain commands (e.g., a mouse click), offering an alternative to traditional keyboard and mouse commands. The human capability to “hear” in noisy environments by means of lip reading is the basis for bimodal (audiovisual) speech processing that can lead to the realization of robust speech-driven interfaces. To make a believable “talking head” (avatar) representing a real person, tracking the person’s facial signals and making the avatar mimic those using synthesized speech and facial expressions is compulsory. The human ability to read emotions from someone’s facial expressions is the basis of facial affect processing that can lead to expanding user interfaces with emotional communication and, in turn, to obtaining more flexible, adaptable, and natural affective interfaces between humans and machines. More specifically, the information about when the existing interaction/processing should be adapted, the importance of such an adaptation, and how the interaction/reasoning should be adapted involves information about how the user feels (e.g., confused, irritated, tired, interested).

Examples of affect-sensitive user interfaces are still rare, unfortunately, and include the systems of Lisetti and Nasoz (2002), Maat and Pantic (2006), and Kapoor, Burleson, and Picard (2007). It is this wide range of principal driving applications that has lent a special impetus to the research problem of automatic facial expression analysis and produced a surge of interest in this research topic.


2016 ◽  
Vol 9 (3) ◽  
pp. 280-292 ◽  
Author(s):  
Eva G. Krumhuber ◽  
Lina Skora ◽  
Dennis Küster ◽  
Linyun Fou

Temporal dynamics have been increasingly recognized as an important component of facial expressions. With the need for appropriate stimuli in research and application, a range of databases of dynamic facial stimuli has been developed. The present article reviews the existing corpora and describes the key dimensions and properties of the available sets. This includes a discussion of conceptual features in terms of thematic issues in dataset construction as well as practical features which are of applied interest to stimulus usage. To identify the most influential sets, we further examine their citation rates and usage frequencies in existing studies. General limitations and implications for emotion research are noted and future directions for stimulus generation are outlined.

