Generating Emotional Sentences Through Sentiment and Emotion Word Masking-based BERT and GPT Pipeline Method

2021 ◽  
Vol 19 (9) ◽  
pp. 29-40
Author(s):  
Won-Min Lee ◽  
Byung-Won On
2008 ◽  
Author(s):  
Jeanette Altarriba ◽  
Dana M. Basnight-Brown

2021 ◽  
Vol 11 (5) ◽  
pp. 553
Author(s):  
Chenggang Wu ◽  
Juan Zhang ◽  
Zhen Yuan

To explore the affective priming effect of emotion-label words and emotion-laden words, the current study used unmasked (Experiment 1) and masked (Experiment 2) priming paradigms, with emotion-label words (e.g., sadness, anger) and emotion-laden words (e.g., death, gift) as primes, and examined how the two kinds of words acted upon the processing of the target words (all emotion-laden words). Participants were instructed to judge the valence of target words while their electroencephalogram was recorded. The behavioral and event-related potential (ERP) results showed that positive words produced a priming effect, whereas negative words inhibited target word processing (Experiment 1). In Experiment 2, the inhibitory effect of negative emotion-label words on emotion word recognition appeared in both behavioral and ERP results, suggesting that the modulation of emotion word type on emotion word processing can be observed even in the masked priming paradigm. The two experiments further support the necessity of defining emotion words from an emotion word type perspective. The implications of the findings are discussed. In particular, a clear understanding of emotion-label words and emotion-laden words can improve the effectiveness of emotional communication in clinical settings. Theoretically, the emotion word type perspective is still in its infancy and awaits further exploration.


2015 ◽  
Vol 6 ◽  
Author(s):  
Sara C. Sereno ◽  
Graham G. Scott ◽  
Bo Yao ◽  
Elske J. Thaden ◽  
Patrick J. O'Donnell

2013 ◽  
Vol 23 (1) ◽  
pp. 6-14
Author(s):  
Corrin G. Richels ◽  
Jessica Rogge

Purpose: Deficits in the ability to use emotion vocabulary may result in difficulties for adolescents who stutter (AWS) and may contribute to disfluencies and stuttering. In this project, we aimed to describe the emotion words used during conversational speech by AWS. Methods: Participants were 26 AWS between the ages of 12 years, 5 months and 15 years, 11 months (n = 4 females, n = 22 males). We drew personal narrative samples from the UCLASS database and used Linguistic Inquiry and Word Count (LIWC) software to count the emotion words in each sample. Results: The AWS produced significantly higher numbers of emotion words with a positive valence, and they tended to use the same few positive emotion words to the near exclusion of words with a negative valence. Conclusion: A lack of diversity in emotion vocabulary may make it difficult for AWS to engage in meaningful discourse about the negative aspects of being a person who stutters.
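LIWC performs this kind of analysis by matching each word in a transcript against category dictionaries (e.g., positive emotion, negative emotion) and reporting category counts, typically as a percentage of total words. The sketch below is only a rough illustration of such a lexicon-based tally, not the authors' actual LIWC analysis; the word lists are tiny made-up stand-ins for the proprietary LIWC dictionaries.

```python
from collections import Counter
import re

# Illustrative lexicons only; the real LIWC dictionaries are proprietary
# and far larger. These word lists are assumptions for the example.
POSITIVE = {"happy", "good", "love", "nice", "fun", "great"}
NEGATIVE = {"sad", "angry", "afraid", "bad", "hate", "worried"}

def emotion_word_counts(transcript: str) -> dict:
    """Count positive and negative emotion words in a speech sample."""
    tokens = re.findall(r"[a-z']+", transcript.lower())
    counts = Counter(
        "positive" if t in POSITIVE else "negative"
        for t in tokens
        if t in POSITIVE or t in NEGATIVE
    )
    total = len(tokens)
    return {
        "tokens": total,
        "positive": counts["positive"],
        "negative": counts["negative"],
        # LIWC-style output is usually expressed as a percentage of all words.
        "positive_pct": 100 * counts["positive"] / total if total else 0.0,
        "negative_pct": 100 * counts["negative"] / total if total else 0.0,
    }

if __name__ == "__main__":
    sample = "I was happy at school but sometimes I get worried about talking."
    print(emotion_word_counts(sample))
```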


2019 ◽  
Author(s):  
Rosabel Yu Ling Tay ◽  
Bee Chin Ng

2021 ◽  
pp. 026540752110511
Author(s):  
Stephanie J Wilson ◽  
Lisa M Jaremka ◽  
Christopher P Fagundes ◽  
Rebecca Andridge ◽  
Janice K Kiecolt-Glaser

According to extensive evidence, we-talk—couples’ use of first-person plural pronouns—predicts better relationship quality and well-being. However, prior work has not distinguished we-talk by its context, which varies widely across studies. Also, little is known about we-talk’s consistency over time. To assess the stability and correlates of we-talk in private versus conversational contexts, 43 married couples’ language was captured during a marital problem discussion and in each partner’s privately recorded thoughts before and after conflict. Participants were asked to describe any current thoughts and feelings in the baseline thought-listing and to focus on their reaction to the conflict itself in the post-conflict sample. Couples repeated this protocol at a second study visit, approximately 1 month later. We-talk in baseline and post-conflict thought-listings was largely uncorrelated with we-talk during conflict discussions, but each form of we-talk was consistent between the two study visits. Their correlates were also distinct: more we-talk during conflict was associated with less hostility during conflict, whereas more baseline we-talk predicted greater closeness in both partners, as well as lower vocally encoded arousal and more positive emotion word use in partners after conflict. These novel data reveal that we-talk can be meaningfully distinguished by its context—whether language is sampled from private thoughts or marital discussions, and whether the study procedure requests relationship talk. Taken together, these variants of we-talk may have unique implications for relationship function and well-being.


2021 ◽  
pp. 1-21
Author(s):  
Michael Vesker ◽  
Daniela Bahn ◽  
Christina Kauschke ◽  
Gudrun Schwarzer

Social interactions often require the simultaneous processing of emotions from facial expressions and speech. However, the development of the gaze behavior used for emotion recognition and the effects of speech perception on the visual encoding of facial expressions are less well understood. We therefore conducted a word-primed face categorization experiment, where participants from multiple age groups (six-year-olds, 12-year-olds, and adults) categorized target facial expressions as positive or negative after priming with valence-congruent or -incongruent auditory emotion words, or no words at all. We recorded our participants’ gaze behavior during this task using an eye-tracker, and analyzed the data with respect to the fixation time toward the eyes and mouth regions of faces, as well as the time until participants made the first fixation within those regions (time to first fixation, TTFF). We found that the six-year-olds showed significantly higher accuracy in categorizing congruently primed faces compared to the other conditions. The six-year-olds also showed faster response times, shorter total fixation durations, and faster TTFF measures in all primed trials, regardless of congruency, as compared to unprimed trials. We also found that while adults looked first, and longer, at the eyes as compared to the mouth regions of target faces, children did not exhibit this gaze behavior. Our results thus indicate that young children are more sensitive than adults or older children to auditory emotion word primes during the perception of emotional faces, and that the distribution of gaze across the regions of the face changes significantly from childhood to adulthood.


2020 ◽  
Vol 11 ◽  
Author(s):  
Kimihiro Nakamura ◽  
Tomoe Inomata ◽  
Akira Uno
