Responses of Single Neurons in Monkey Amygdala to Facial and Vocal Emotions

2007, Vol. 97(2), pp. 1379-1387
Author(s): Koji Kuraoka, Katsuki Nakamura

The face and voice can independently convey the same information about emotion. When we see an angry face or hear an angry voice, we can perceive a person's anger. In this sense, these two different sensory cues are interchangeable. However, it is still unclear whether the same group of neurons processes signals for facial and vocal emotions. We recorded neuronal activity in the amygdala of monkeys while the animals watched nine video clips of species-specific emotional expressions: three monkeys each showing three emotional expressions (aggressive threat, scream, and coo). Of the 227 amygdala neurons tested, 116 (51%) responded to at least one of the emotional expressions. These "monkey-responsive" neurons (i.e., neurons that responded to species-specific emotional expressions) preferred the scream to the other emotional expressions irrespective of the identity of the monkey. To determine which element was crucial to the neuronal responses, the activity of 79 monkey-responsive neurons was recorded while the facial or vocal element of a stimulus was presented alone. Although most neurons (61/79, 77%) responded strongly to the visual but not to the auditory element, about one fifth (16/79, 20%) maintained a good response when either the facial or the vocal element was presented. Moreover, these neurons maintained their stimulus-preference profiles under both facial and vocal conditions. These neurons were found in the central nucleus of the amygdala, the nucleus that receives inputs from the other amygdala nuclei and in turn sends outputs to other emotion-related brain areas. Such supramodal responses to emotion would be useful for generating appropriate responses to information about either facial or vocal emotion.
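A minimal sketch of how such a modality classification might be expressed in code, assuming hypothetical per-neuron firing rates for the face-only and voice-only conditions and an arbitrary response threshold (none of these names or cutoffs come from the paper):

```python
from dataclasses import dataclass

# Hypothetical per-neuron summary: baseline-corrected mean firing rates (spikes/s)
# for the face-only and voice-only presentations. The threshold is illustrative only.
@dataclass
class NeuronResponse:
    neuron_id: int
    face_rate: float   # response to the facial element alone
    voice_rate: float  # response to the vocal element alone

def classify_modality(n: NeuronResponse, threshold: float = 2.0) -> str:
    """Label a neuron by whether its response to each unimodal element
    exceeds an (assumed) threshold."""
    face_ok = n.face_rate >= threshold
    voice_ok = n.voice_rate >= threshold
    if face_ok and voice_ok:
        return "supramodal"      # responds to either the facial or the vocal element
    if face_ok:
        return "visual-only"
    if voice_ok:
        return "auditory-only"
    return "unresponsive"

neurons = [NeuronResponse(1, 8.5, 0.4), NeuronResponse(2, 6.1, 5.7)]
print({n.neuron_id: classify_modality(n) for n in neurons})
```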

2018
Author(s): Andras N. Zsido, Virag Ihasz, Annekathrin Schacht, Nikolett Arato, Orsolya Inhof, ...

Previous studies investigating the advantage of emotional expressions in visual processing in preschool children used only adult faces. However, children perceive facial expressions of emotion differently when they are displayed on adults' faces compared to children's faces. In the present study, preschoolers (N = 43, mean age = 5.65 years) and adults (N = 37, mean age = 21.8 years) had to find a target face displaying an emotional expression among eight neutral faces. The gender of the faces (boy vs. girl) was also manipulated. Happy faces were found fastest in both samples. Children detected the angry face faster than the fearful one, whereas adults showed the opposite pattern. However, an interaction in the adult sample suggests that this holds only for girls' faces; the difference was nonsignificant for boys' faces. In both samples, detection was faster with boys' faces than with girls' faces for all emotions. It is suggested that the happy face may have an advantage in visual processing because of its importance in social situations.
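As a rough illustration of the kind of analysis such a visual-search design implies, the sketch below aggregates median detection times by group and emotion (ignoring face gender for brevity); all column names and values are invented, not taken from the study:

```python
import pandas as pd

# Hypothetical trial-level detection times (ms), one row per correct detection.
trials = pd.DataFrame({
    "group":   ["child"] * 3 + ["adult"] * 3,
    "emotion": ["happy", "angry", "fearful"] * 2,
    "rt":      [710, 820, 900, 430, 560, 510],
})

# Median detection time per group and emotion; the study's key comparisons
# (happy fastest; angry vs. fearful reversing between groups) map onto
# contrasts of these cell medians.
summary = trials.groupby(["group", "emotion"])["rt"].median().unstack("emotion")
print(summary)
print("angry - fearful:", (summary["angry"] - summary["fearful"]).to_dict())
```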


Author(s): Trinh Le Ba Khanh, Soo-Hyung Kim, Gueesang Lee, Hyung-Jeong Yang, Eu-Tteum Baek

Emotion recognition is one of the most active fields in affective computing research. Recognizing emotions is an important task for facilitating communication between machines and humans. However, it remains very challenging, partly because of a lack of ethnically diverse databases. In particular, emotional expressions tend to differ considerably between Western and Eastern people, so diverse emotion databases are required for studying emotional expression. The majority of well-known emotion databases, however, focus on Western people, whose expressions exhibit different characteristics from those of Eastern people. In this study, we constructed a novel emotion dataset, the Korean Video Dataset for Emotion Recognition in the Wild (KVDERW), containing more than 1200 video clips collected from Korean movies, which approximate real-world conditions, with the goal of studying the emotions of Eastern people, particularly Koreans. Additionally, we developed a semi-automatic video emotion labelling tool that can be used to generate video clips and annotate the emotions in the clips.
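A minimal sketch of how clips and labels in such a dataset might be organized and loaded; the file layout, CSV columns, and label set are assumptions for illustration, not the published KVDERW format:

```python
import csv
from dataclasses import dataclass
from pathlib import Path

# Assumed label set; the actual dataset's categories may differ.
EMOTIONS = {"anger", "disgust", "fear", "happiness", "neutral", "sadness", "surprise"}

@dataclass
class Clip:
    path: Path      # path to the extracted video clip
    emotion: str    # annotated emotion label
    movie_id: str   # source movie the clip was cut from

def load_annotations(csv_path: Path, clip_dir: str) -> list[Clip]:
    """Read a simple annotation file with clip_file, emotion, movie_id columns."""
    clips = []
    with open(csv_path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            if row["emotion"] not in EMOTIONS:
                continue  # skip labels outside the assumed category set
            clips.append(Clip(Path(clip_dir) / row["clip_file"],
                              row["emotion"], row["movie_id"]))
    return clips

if __name__ == "__main__":
    ann = Path("kvderw_annotations.csv")  # hypothetical annotation file
    if ann.exists():
        dataset = load_annotations(ann, "clips/")
        print(len(dataset), "clips loaded")
```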


2018, Vol. 29(9), pp. 3590-3605
Author(s): Jodie Davies-Thompson, Giulia V Elli, Mohamed Rezk, Stefania Benetti, Markus van Ackeren, ...

The brain has separate specialized computational units to process faces and voices, located in occipital and temporal cortices. However, humans seamlessly integrate signals from the faces and voices of others for optimal social interaction. How are emotional expressions integrated in the brain when they are delivered by different sensory modalities (faces and voices)? In this study, we characterized the brain's response to faces, voices, and combined face–voice information (congruent, incongruent), which varied in expression (neutral, fearful). Using a whole-brain approach, we found that only the right posterior superior temporal sulcus (rpSTS) responded more to bimodal stimuli than to face or voice alone, but only when the stimuli contained emotional expression. Face- and voice-selective regions of interest, extracted from independent functional localizers, similarly revealed multisensory integration in the face-selective rpSTS only; furthermore, this was the only face-selective region that also responded significantly to voices. Dynamic causal modeling revealed that the rpSTS receives unidirectional information from the face-selective fusiform face area (FFA) and the voice-selective temporal voice area (TVA), with emotional expression affecting the connection strength. Our study supports a hierarchical model of face and voice integration, with convergence in the rpSTS, and suggests that such integration depends on the (emotional) salience of the stimuli.
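The connectivity claim can be illustrated with a toy bilinear model in the spirit of dynamic causal modeling, in which feed-forward connections from FFA and TVA drive the rpSTS and an "emotion" input scales their strength; all values below are invented for illustration and are not the fitted parameters from the paper:

```python
import numpy as np

# Regions: 0 = FFA, 1 = TVA, 2 = rpSTS.
# A: fixed (intrinsic) connectivity. Only feed-forward entries into rpSTS are
# nonzero, reflecting the unidirectional FFA->rpSTS and TVA->rpSTS finding.
A = np.array([[-1.0,  0.0,  0.0],
              [ 0.0, -1.0,  0.0],
              [ 0.4,  0.4, -1.0]])

# B: modulatory effect of the "emotional expression" input, strengthening the
# two feed-forward connections when the stimulus is emotional.
B_emotion = np.zeros((3, 3))
B_emotion[2, 0] = 0.3
B_emotion[2, 1] = 0.3

# C: driving inputs (face input drives FFA, voice input drives TVA).
C = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [0.0, 0.0]])

def simulate(u_face, u_voice, u_emotion, dt=0.01, steps=500):
    """Euler-integrate dz/dt = (A + u_emotion * B) z + C u for constant inputs."""
    z = np.zeros(3)
    for _ in range(steps):
        dz = (A + u_emotion * B_emotion) @ z + C @ np.array([u_face, u_voice])
        z = z + dt * dz
    return z

print("bimodal, neutral :", simulate(1, 1, 0))
print("bimodal, fearful :", simulate(1, 1, 1))  # larger rpSTS response
```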


Author(s): Eleonora Cannoni, Giuliana Pinto, Anna Silvia Bombi

This study was aimed at verifying whether children introduce emotional expressions into their drawings of human faces, whether a preferred expression exists, and whether children's pictorial choices change with increasing age. To this end, we examined the human figure drawings made by 160 boys and 160 girls, equally divided into four age groups: 6–7, 8–9, 10–11, and 12–13 years; mean ages in months (SD in parentheses) were 83.30 (6.54), 106.14 (7.16), 130.49 (8.26), and 155.40 (6.66). Drawings were collected with the Draw-a-Man test instructions, i.e. without mentioning an emotional characterization. In light of data from previous studies of emotion drawing on request, and of the literature on preferred emotional expressions, we expected that an emotion would be portrayed even by the younger participants and that the preferred emotion would be happiness. We also expected that, with the improving ability to take into account the appearance of both mouth and eyes, other expressions would appear besides the smiling face. Data were submitted to non-parametric tests to compare the frequencies of expressions (overall and by age) and the frequencies of visual cues (overall and by age and expression). The results confirmed that only a small number of faces were expressionless and that the most frequent emotion was happiness. However, with increasing age this representation gave way to a variety of basic emotions (sadness, fear, anger, surprise), whose representation may depend on the ability to modify the shapes of both eyes and mouth and on the changing communicative aims of the child.


Development, 1993, Vol. 119(1), pp. 41-48
Author(s): J.M. Brown, S.E. Wedden, G.H. Millburn, L.G. Robson, R.E. Hill, ...

Mouse mesenchyme was grafted into chick embryos to investigate the control of mesenchymal expression of Msx-1 in the developing limb and face. In situ hybridization, using species-specific probes, allows a comparison between Msx-1 expression in the graft and the host tissue. The results show that Msx-1 expression in both limb-to-limb and face-to-face grafts corresponds closely with the level of Msx-1 expression in the surrounding chick mesenchyme. Cells in grafts that end up within the host domain of Msx-1 express the gene irrespective of whether they were from normally expressing, or non-expressing, regions. Therefore Msx-1 expression in both the developing limb and the developing face appears to be position-dependent. Mesenchyme from each of the three major facial primordia behaved in the same way when grafted to the chick maxillary primordium. Reciprocal grafts between face and limb gave a different result: Msx-1 expression was activated when facial mesenchyme was grafted to the limb but not when limb mesenchyme was grafted to the face. This suggests either that there are quantitative or qualitative differences in two local signalling systems or that additional factors determine the responsiveness of the mesenchyme cells.


2016, Vol. 12(1), pp. 20150883
Author(s): Natalia Albuquerque, Kun Guo, Anna Wilkinson, Carine Savalli, Emma Otta, ...

The perception of emotional expressions allows animals to evaluate the social intentions and motivations of others. This usually takes place within species; however, for domestic dogs it might be advantageous to recognize the emotions of humans as well as those of other dogs. In this sense, combining visual and auditory cues to categorize others' emotions facilitates information processing and indicates high-level cognitive representations. Using a cross-modal preferential looking paradigm, we presented dogs with either human or dog faces with different emotional valences (happy/playful versus angry/aggressive), paired with a single vocalization from the same individual with either a positive or a negative valence, or with Brownian noise. Dogs looked significantly longer at the face whose expression was congruent with the valence of the vocalization, for both conspecifics and heterospecifics, an ability previously known only in humans. These results demonstrate that dogs can extract and integrate bimodal sensory emotional information and can discriminate between positive and negative emotions in both humans and dogs.
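A worked example of the simple preference measure such a paradigm yields: the proportion of looking time directed at the congruent face, computed per trial (the looking times below are invented, and this index is only one plausible way to score the paradigm):

```python
def congruence_index(t_congruent: float, t_incongruent: float) -> float:
    """Proportion of total looking time spent on the face that matches the
    valence of the vocalization; 0.5 means no preference."""
    total = t_congruent + t_incongruent
    return t_congruent / total if total > 0 else 0.5

# Invented looking times (seconds) for a few trials.
trials = [(3.2, 1.9), (2.8, 2.6), (4.1, 1.2)]
indices = [congruence_index(c, i) for c, i in trials]
print(indices, "mean =", sum(indices) / len(indices))
```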


2017, Vol. 4(3), pp. 160607
Author(s): Gustav Nilsonne, Sandra Tamm, Armita Golkar, Karolina Sörman, Katarina Howner, ...

Emotional mimicry and empathy are mechanisms underlying social interaction. Benzodiazepines have been proposed to inhibit empathy and promote antisocial behaviour. First, we aimed to investigate the effects of oxazepam on emotional mimicry and empathy for pain; second, we aimed to investigate the association of personality traits with emotional mimicry and empathy. Participants (n = 76) were randomized to 25 mg oxazepam or placebo. Emotional mimicry was examined using video clips with emotional expressions. Empathy was investigated by applying painful stimulation to the participant and to a confederate. We recorded self-rated experience, activity in the zygomaticus major and corrugator supercilii muscles, skin conductance, and heart rate. In the mimicry experiment, oxazepam inhibited corrugator activity. In the empathy experiment, oxazepam caused increased self-rated unpleasantness and skin conductance. However, oxazepam specifically inhibited neither emotional mimicry nor empathy for pain. Responses in both experiments were associated with self-rated empathic, psychopathic, and alexithymic traits. The present results do not support a specific effect of 25 mg oxazepam on emotional mimicry or empathy.


2017, Vol. 11(1), pp. 27-38
Author(s): Fausto Caruana

A common view in affective neuroscience considers emotions a multifaceted phenomenon constituted by independent affective and motor components. Such a dualistic connotation, obtained by rephrasing the classic Darwinian and Jamesian theories of emotion, leads to the assumption that emotional expression is controlled by motor centers in the anterior cingulate cortex, frontal operculum, and supplementary motor area, whereas emotional experience depends on interoceptive centers in the insula. Recent stimulation studies provide a different perspective. I will outline two sets of findings. First, affective experiences can also be elicited by stimulating motor centers. Second, emotional expressions can be elicited by stimulating interoceptive regions. Echoing the original pragmatist theories of emotion, I will make a case for the notion that emotional experience emerges from the integration of sensory and motor signals, encoded in the same functional network.

