emotional information
Recently Published Documents


TOTAL DOCUMENTS: 689 (FIVE YEARS: 216)
H-INDEX: 60 (FIVE YEARS: 5)

Author(s): Mei Li, Jiajun Zhang, Xiang Lu, Chengqing Zong

Emotional dialogue generation aims to produce responses whose content is relevant to the query and whose emotion is consistent with a given emotion tag. Previous work mainly incorporates emotion information into sequence-to-sequence or conditional variational auto-encoder (CVAE) models, usually using the given emotion tag as a conditional feature to influence the response generation process. However, an emotion tag used as a feature cannot guarantee emotion consistency between the response and the tag. In this article, we propose a novel Dual-View CVAE model that explicitly and jointly models content relevance and emotion consistency. The two views gather the emotional information and the content-relevant information, respectively, from the latent distribution of responses. We model the dual views jointly via VAE to obtain richer and complementary information. Extensive experiments on both English and Chinese emotional dialogue datasets demonstrate the effectiveness of the proposed Dual-View CVAE model, which significantly outperforms strong baseline models in both content relevance and emotion consistency.
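The dual-view latent step the abstract describes can be sketched roughly as follows. This is an illustrative outline only, assuming diagonal-Gaussian views; the function names, dimensions, and toy parameters are mine, not the paper's:

```python
# Minimal sketch of a dual-view latent step: sample an "emotion view" and a
# "content view", concatenate them into one latent code, and sum both KL
# regularizers. Pure stdlib; all names and sizes are illustrative.
import math, random

def kl_standard_normal(mu, logvar):
    """KL( N(mu, diag(exp(logvar))) || N(0, I) ), summed over dimensions."""
    return sum(0.5 * (math.exp(lv) + m * m - 1.0 - lv)
               for m, lv in zip(mu, logvar))

def sample(mu, logvar, rng):
    """Reparameterization trick: z = mu + sigma * eps, with eps ~ N(0, 1)."""
    return [m + math.exp(0.5 * lv) * rng.gauss(0.0, 1.0)
            for m, lv in zip(mu, logvar)]

def dual_view_latent(emo_mu, emo_logvar, con_mu, con_logvar, rng):
    """Sample each view, concatenate into a single latent code,
    and return the summed KL term of both views."""
    z_emotion = sample(emo_mu, emo_logvar, rng)
    z_content = sample(con_mu, con_logvar, rng)
    kl = (kl_standard_normal(emo_mu, emo_logvar)
          + kl_standard_normal(con_mu, con_logvar))
    return z_emotion + z_content, kl

rng = random.Random(0)
z, kl = dual_view_latent([0.0, 0.0], [0.0, 0.0], [0.5, -0.5], [0.0, 0.0], rng)
print(len(z), round(kl, 3))  # 4 0.25
```

In a full model, the concatenated latent code would feed the decoder, and the summed KL term would enter the CVAE objective alongside the reconstruction loss.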


Languages, 2022, Vol 7 (1), pp. 12
Author(s): Peiyao Chen, Ashley Chung-Fat-Yim, Viorica Marian

Emotion perception frequently involves the integration of visual and auditory information. During multisensory emotion perception, the attention devoted to each modality can be measured by calculating the difference between trials in which the facial expression and speech input exhibit the same emotion (congruent) and trials in which the facial expression and speech input exhibit different emotions (incongruent) to determine the modality that has the strongest influence. Previous cross-cultural studies have found that individuals from Western cultures are more distracted by information in the visual modality (i.e., visual interference), whereas individuals from Eastern cultures are more distracted by information in the auditory modality (i.e., auditory interference). These results suggest that culture shapes modality interference in multisensory emotion perception. It is unclear, however, how emotion perception is influenced by cultural immersion and exposure due to migration to a new country with distinct social norms. In the present study, we investigated how the amount of daily exposure to a new culture and the length of immersion impact multisensory emotion perception in Chinese-English bilinguals who moved from China to the United States. In an emotion recognition task, participants viewed facial expressions and heard emotional but meaningless speech either from their previous Eastern culture (i.e., Asian face-Mandarin speech) or from their new Western culture (i.e., Caucasian face-English speech) and were asked to identify the emotion from either the face or voice, while ignoring the other modality. Analyses of daily cultural exposure revealed that bilinguals with low daily exposure to the U.S. culture experienced greater interference from the auditory modality, whereas bilinguals with high daily exposure to the U.S. culture experienced greater interference from the visual modality. 
These results demonstrate that everyday exposure to new cultural norms increases the likelihood of showing a modality interference pattern that is more common in the new culture. Analyses of immersion duration revealed that bilinguals who spent more time in the United States were equally distracted by faces and voices, whereas bilinguals who spent less time in the United States experienced greater visual interference when evaluating emotional information from the West, possibly due to over-compensation when evaluating emotional information from the less familiar culture. These findings suggest that the amount of daily exposure to a new culture and length of cultural immersion influence multisensory emotion perception in bilingual immigrants. While increased daily exposure to the new culture aids with the adaptation to new cultural norms, increased length of cultural immersion leads to similar patterns in modality interference between the old and new cultures. We conclude that cultural experience shapes the way we perceive and evaluate the emotions of others.
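The congruency-difference measure described above can be made concrete with a small sketch; the trial format and numbers below are hypothetical:

```python
# Hypothetical illustration of the congruency-difference measure:
# interference in a modality = accuracy on congruent trials minus accuracy
# on incongruent trials when attending to that modality.
def accuracy(trials):
    return sum(t["correct"] for t in trials) / len(trials)

def modality_interference(trials, attended):
    """Larger value = more disruption from the to-be-ignored modality."""
    cong = [t for t in trials if t["attend"] == attended and t["congruent"]]
    incong = [t for t in trials if t["attend"] == attended and not t["congruent"]]
    return accuracy(cong) - accuracy(incong)

# Toy data: while attending to the face, incongruent voices hurt accuracy.
trials = (
    [{"attend": "face", "congruent": True,  "correct": 1}] * 9 +
    [{"attend": "face", "congruent": True,  "correct": 0}] * 1 +
    [{"attend": "face", "congruent": False, "correct": 1}] * 7 +
    [{"attend": "face", "congruent": False, "correct": 0}] * 3
)
print(round(modality_interference(trials, "face"), 3))  # 0.9 - 0.7 = 0.2
```

A positive score when attending to faces indicates auditory interference; the analogous score when attending to voices indicates visual interference.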


2022, Vol 12
Author(s): Marta F. Nudelman, Liana C. L. Portugal, Izabela Mocaiber, Isabel A. David, Beatriz S. Rodolpho, ...

Background: Evidence indicates that the processing of facial stimuli may be influenced by incidental factors, and these influences are particularly powerful when facial expressions are ambiguous, as with neutral faces. However, limited research has investigated whether emotional contextual information presented in a preceding, unrelated experiment can be carried over to another experiment and modulate neutral face processing. Objective: The present study aims to investigate whether an emotional text presented in a first experiment can generate negative emotion toward neutral faces in a second, unrelated experiment. Methods: Ninety-nine students (all women) were randomly assigned to read and evaluate either a negative text (negative context) or a neutral text (neutral context) in the first experiment. In the subsequent second experiment, the participants performed two tasks: (1) an attentional task in which neutral faces were presented as distractors and (2) a task involving the emotional judgment of neutral faces. Results: The results show that, compared to the neutral context, participants in the negative context rated more faces as negative. No significant result was found in the attentional task. Conclusion: Our study demonstrates that incidental emotional information available in a previous experiment can increase participants' propensity to interpret neutral faces as more negative when emotional information is directly evaluated. The present study therefore adds important evidence to the literature suggesting that our judgments and emotions are modulated by previously encountered information in an incidental, barely perceived way, much as occurs in everyday life.


Author(s): I.I. Kushakova

The article is devoted to a linguocultural analysis of the idiom madeleine de Proust in modern French. The analysis is based on lexicographic data, the component analysis method, and text semantic analysis. Using the method of linguocultural decoding, the author identifies emotional-sensual, ethical, and aesthetic information, as well as archetypal, mythological, religious, philosophical, and scientific information, which form the deep foundations of the meaning of the phraseological unit. In the unit madeleine de Proust, the basic types of information are religious and philosophical. The first is associated with the polysemantic inner form of the word madeleine: it is both a common noun naming a flour-based pastry and a proper name - Madeleine was the name of the girl credited with inventing the dish. The word is also associated with the images of the Magdalene as sinner and of St. Mary Magdalene, which is reflected in the phraseological system of the French language, and with the proper name Saint-Jacques, which denotes an object of the material world and accompanies the life of the writer and of the hero of the novel. The philosophical type of information is associated with Proust's reflections on signs and philosophy and with the concepts of voluntary and involuntary memory, all of which are driven by sensual-emotional information. Thanks to the writer's talent, his emotional and sensual experiences, and his stream of consciousness, the polysemous word madeleine expanded its meaning in the semantic space of the literary text and became a component of the new language unit madeleine de Proust.


2021, pp. 108705472110636
Author(s): Cassandra C. Schuthof, Indira Tendolkar, Maria Annemiek Bergman, Margit Klok, Rose M. Collard, ...

Objectives: Depression and ADHD often co-occur, and both are characterized by altered attentional processing. Differences and overlap in the profile of attention to emotional information may help explain the co-occurrence. We examined negative attention bias in ADHD as a neurocognitive marker for comorbid depression. Methods: Patients with depression (n = 63), ADHD (n = 43), or both ADHD and depression (n = 25), and non-psychiatric controls (n = 68) were compared on attention allocation toward emotional faces. The following eye-tracking indices were used: gaze duration, number of revisits, and location and duration of first fixation. Results: Controls revisited the happy faces more than the other facial expressions. Both the depression and the comorbid group showed significantly fewer revisits of the happy faces compared to the ADHD and the control group. Interestingly, after controlling for depressive symptoms, the groups no longer differed in the number of revisits. Conclusion: ADHD patients show a relative positive attention bias, while a negative attention bias in ADHD likely indicates (sub)clinical comorbid depression.
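The eye-tracking indices named in the Methods can be computed from an ordered fixation sequence; a minimal sketch, with a made-up fixation format of (AOI label, duration in ms) tuples:

```python
# Sketch of three eye-tracking indices for one area of interest (AOI):
# total gaze duration, number of revisits (re-entries after leaving),
# and the position/duration of the first fixation on the AOI.
def aoi_indices(fixations, aoi):
    """fixations: ordered list of (aoi_label, duration_ms) tuples."""
    gaze_duration = sum(d for a, d in fixations if a == aoi)
    # A revisit = entering the AOI again after having left it.
    entries, inside = 0, False
    for a, _ in fixations:
        if a == aoi and not inside:
            entries += 1
        inside = (a == aoi)
    revisits = max(entries - 1, 0)
    first = next(((i, d) for i, (a, d) in enumerate(fixations) if a == aoi), None)
    return {"gaze_duration": gaze_duration, "revisits": revisits,
            "first_fixation": first}

seq = [("happy", 200), ("sad", 150), ("happy", 300), ("happy", 100), ("sad", 250)]
print(aoi_indices(seq, "happy"))  # gaze_duration 600, revisits 1, first (0, 200)
```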


Author(s): Vanessa LoBue, Marissa Ogren

Emotion understanding facilitates the development of healthy social interactions. To develop emotion knowledge, infants and young children must learn to make inferences about people's dynamically changing facial and vocal expressions in the context of their everyday lives. Given that emotional information varies so widely, the emotional input that children receive might particularly shape their emotion understanding over time. This review explores how variation in children's received emotional input shapes their emotion understanding and their emotional behavior over the course of development. Variation in emotional input from caregivers shapes individual differences in infants’ emotion perception and understanding, as well as older children's emotional behavior. Finally, this work can inform policy and focus interventions designed to help infants and young children with social-emotional development.


2021, Vol 12
Author(s): Elisa Boelens, Marie-Lotte Van Beveren, Rudi De Raedt, Sandra Verbeken, Caroline Braet

Attentional deployment is currently considered one of the most central mechanisms in emotion regulation (ER), as it is assumed to be a crucial first step in the selection of emotional information. According to the broaden-and-build theory, positive emotions are associated with attentional broadening and negative emotions with attentional narrowing toward emotional information. Given that ER strategies relying on attentional deployment (i.e., rumination, cognitive reappraisal, and distraction) can influence positive and negative emotions by (re)directing one's attention, they may be associated with one's attentional scope. The current study investigated the association between the general (trait) use of three specific ER strategies and visual attentional breadth for positive, negative, and neutral information in a selected sample of 56 adolescents (M = 12.54, SD = 1.72; 49% girls) at risk for developing psychopathology. First, participants self-reported on their overall use of different ER strategies. Next, the previously validated Attentional Breadth Task (ABT) was used to measure visual attentional breadth toward emotional information. No evidence was found for a relationship between two of the ER strategies (i.e., cognitive reappraisal and rumination) and visual attentional breadth for neutral, positive, and negative emotional information. Surprisingly, distraction was associated with visual attentional narrowing, irrespective of the valence of the emotion. These unexpected results point to a multifaceted relationship between trait ER, distraction specifically, and visual attentional breadth for emotional information. Future research, especially in younger age groups, could elaborate further on this domain.


2021, Vol 7, pp. e786
Author(s): Vaibhav Bhat, Anita Yadav, Sonal Yadav, Dhivya Chandrasekaran, Vijay Mago

Emotion recognition in conversations is an important step in virtual chatbots that require opinion-based feedback, such as social media threads, online support, and many other applications. Current emotion recognition in conversation models face issues such as: (a) loss of contextual information between two dialogues of a conversation, (b) failure to give appropriate importance to significant tokens in each utterance, and (c) inability to pass on emotional information from previous utterances. The proposed Advanced Contextual Feature Extraction (AdCOFE) model addresses these issues by performing unique feature extraction using knowledge graphs, sentiment lexicons, and natural-language phrases at all levels (word and position embedding) of the utterances. Experiments on emotion recognition in conversation datasets show that AdCOFE is beneficial in capturing emotions in conversations.
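As a rough illustration (not the AdCOFE implementation), enriching token features at the word-and-position-embedding level with a sentiment-lexicon score might look like this; the tiny lexicon, embedding dimension, and empty word-vector table are placeholders:

```python
# Illustrative token-level feature extraction: word embedding plus sinusoidal
# position encoding, with a sentiment-lexicon score appended per token.
import math

SENTIMENT_LEXICON = {"happy": 1.0, "great": 0.8, "sad": -1.0, "awful": -0.9}

def position_encoding(pos, dim):
    """Standard sinusoidal position encoding for one position."""
    return [math.sin(pos / 10000 ** (2 * i / dim)) if i % 2 == 0
            else math.cos(pos / 10000 ** (2 * (i - 1) / dim))
            for i in range(dim)]

def token_features(tokens, word_vecs, dim=4):
    feats = []
    for pos, tok in enumerate(tokens):
        word = word_vecs.get(tok, [0.0] * dim)   # unknown words -> zero vector
        posv = position_encoding(pos, dim)
        senti = SENTIMENT_LEXICON.get(tok, 0.0)  # lexicon score, 0 if absent
        feats.append([w + p for w, p in zip(word, posv)] + [senti])
    return feats

feats = token_features(["i", "am", "happy"], {}, dim=4)
print(len(feats), len(feats[0]))  # 3 tokens, 4 + 1 features each
```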


2021
Author(s): Kelly Hewitt

Emotional stimuli capture our attention. The preferential processing of emotional information is an adaptive mechanism that, when relevant to our goals, highlights potentially important aspects of the environment. However, when emotional information is task-irrelevant, its presence in the environment can trigger involuntary shifts in attention that impair performance. One challenge in investigating emotional distraction in the lab is how to objectively measure the allocation of attention between different elements of the same stimulus display (e.g., between the task and the distractors). One neural measure that overcomes this issue is the steady-state visual evoked potential (SSVEP). An SSVEP is the response of the visual cortex to a flickering stimulus and can be used as a measure of attentional resource allocation (Norcia, Appelbaum, Ales, Cottereau, & Rossion, 2015). In the past, emotional distraction has been studied using spatially separated tasks and distractors. The current thesis presents two experiments using SSVEPs to investigate emotional distraction in a superimposed design. Experiment 1 aimed to conceptually replicate Hindi Attar and colleagues (2010), who developed an SSVEP emotional distraction paradigm to examine attentional resource allocation between background task-irrelevant emotional distractors and a foreground dot-motion task. Participants viewed a stimulus display of moving, flickering dots while positively or neutrally valenced distractors (or unidentifiable scrambles) were presented in the background of the task. SSVEPs were reduced in the presence of positive intact compared to neutral intact distractors, suggesting that the presentation of task-irrelevant emotional stimuli in the same spatial location as a foreground task initiates an involuntary shift of attention away from the task.
Unexpectedly, in both Experiments 1 and 2, valence differences were found in SSVEPs between positive and neutral scrambled images; this suggests that some perceptual differences between the stimulus sets (e.g., colour) contributed to the drop in SSVEP found for positive intact images. Importantly, the SSVEP analysis revealed significant valence × image-type interactions, demonstrating that the drop for positive images was stronger for intact than for scrambled image conditions and suggesting that a significant amount of the drop in SSVEP was driven by the difference in valence between the intact distractors. Behavioural results also provide evidence for emotional distraction: reduced hit rate in the presence of positive intact images compared to neutral intact images in Experiment 1, and reduced detection sensitivity and response criterion for positive intact images in Experiment 2. Overall, the current thesis supports the hypothesis that emotional information is more distracting than neutral information and provides a valuable starting point for examining emotion-attention interactions when the task and distractors share the same location. Future studies could use SSVEPs to examine neural processing differences between emotional and neutral scrambled images.
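How an SSVEP is quantified can be sketched in a few lines: the amplitude of the recorded signal at the stimulus flicker frequency, here extracted with a single-frequency Fourier projection on synthetic data (the sampling rate, flicker frequency, and amplitudes below are illustrative, not from the thesis):

```python
# Toy SSVEP quantification: project a signal onto sine and cosine at the
# flicker frequency and recover the amplitude of that frequency component.
import math

def amplitude_at(signal, freq, fs):
    """Amplitude of `signal` (list of samples) at `freq` Hz, sampled at `fs` Hz."""
    n = len(signal)
    re = sum(s * math.cos(2 * math.pi * freq * i / fs) for i, s in enumerate(signal))
    im = sum(s * math.sin(2 * math.pi * freq * i / fs) for i, s in enumerate(signal))
    return 2.0 * math.hypot(re, im) / n

fs, flicker = 250.0, 15.0           # sampling rate and flicker frequency (Hz)
t = [i / fs for i in range(500)]    # 2 s of synthetic data
signal = [1.5 * math.sin(2 * math.pi * flicker * x)   # 15 Hz "SSVEP" component
          + 0.3 * math.sin(2 * math.pi * 4 * x)       # unrelated 4 Hz activity
          for x in t]
print(round(amplitude_at(signal, flicker, fs), 2))  # 1.5
```

Attentional resource allocation is then inferred by comparing this amplitude across conditions, e.g., task dots flickering at one frequency with and without emotional distractors present.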

