Textual Cues: Recently Published Documents

Total documents: 42 (last five years: 18)
H-index: 6 (last five years: 1)

2021 ◽  
Vol 1 (23) ◽  
pp. 7-20
Author(s):  
Joanna Bobin

This paper demonstrates how the language used by fictional dramatic characters contributes to their characterization, that is, how readers (audiences) perceive them based on inferences drawn from a variety of textual cues. These cues include explicit self- and other-presentation as well as implicit hints retrieved from conversation structure, aspects of turn-taking, and features of the character's language. Blanche DuBois and Stanley Kowalski from Tennessee Williams' play A Streetcar Named Desire are analyzed and characterized as polar opposites.


2021 ◽  
pp. 1-5
Author(s):  
Julene Abad Del Vecchio

This article draws attention to the presence of a previously unnoticed transliterated telestich (SOMATA) in the transformation of stones into bodies in the episode of Deucalion and Pyrrha in Ovid's Metamorphoses (1.406–11). Detection of the Greek intext, which befits the episode's amplified bilingual atmosphere, is encouraged by a number of textual cues. The article also suggests a ludic connection to Aratus' Phaenomena.
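
As a toy illustration of how a telestich is read (the final letters of consecutive verse lines spell a word, here Greek σώματα, "bodies", transliterated as SOMATA), the Python sketch below extracts line-final letters; the sample verses are invented placeholders, not Ovid's Latin.

```python
def telestich(lines):
    """Return the word spelled by the final letter of each verse line."""
    return "".join(line.rstrip(" .,;:!?")[-1] for line in lines)

# Invented placeholder lines (NOT Ovid's text) whose final letters spell SOMATA.
verses = [
    "the first line ends in an s",
    "the second line ends in an o",
    "the third line ends in an m",
    "the fourth line ends in an a",
    "the fifth line ends in a t",
    "the sixth line ends in an a",
]
print(telestich(verses).upper())  # -> SOMATA
```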


2021 ◽  
Author(s):  
Tao Zhang ◽  
Zhenhua Tan

With the development of social media and human-computer interaction, video has become one of the most common data formats. Emotion recognition systems, a research hotspot, serve people by perceiving their emotional state in videos. In recent years, many studies have tackled emotion recognition based on the three most common modalities in videos: face, speech, and text. Because review papers concentrating on these three modalities are lacking, this paper surveys the relevant studies of emotion recognition using facial, speech, and textual cues. Given how effectively deep learning techniques learn latent representations for emotion recognition, the paper focuses on deep-learning-based methods. We first introduce widely accepted emotion models to clarify how emotion is defined. We then review the state of the art in unimodal emotion recognition, covering facial expression recognition, speech emotion recognition, and textual emotion recognition. For multimodal emotion recognition, we summarize feature-level and decision-level fusion methods in detail. In addition, we outline the relevant benchmark datasets, the definitions of the evaluation metrics, and the performance of recent state-of-the-art methods, so that readers can gauge current research progress. Finally, we discuss potential research challenges and opportunities as a reference for enriching emotion recognition research.
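
To make the feature-level and decision-level fusion strategies contrasted in this survey concrete, here is a minimal PyTorch sketch; the embedding dimensions, the seven-way emotion label set, and the layer sizes are illustrative assumptions, not the architecture of any surveyed method.

```python
import torch
import torch.nn as nn

# Illustrative dimensions: assume each modality encoder (face, speech, text)
# has already produced a fixed-size embedding for one video clip.
FACE_DIM, SPEECH_DIM, TEXT_DIM, NUM_EMOTIONS = 256, 128, 300, 7

class FeatureLevelFusion(nn.Module):
    """Early fusion: concatenate modality features, then classify jointly."""
    def __init__(self):
        super().__init__()
        self.classifier = nn.Sequential(
            nn.Linear(FACE_DIM + SPEECH_DIM + TEXT_DIM, 128),
            nn.ReLU(),
            nn.Linear(128, NUM_EMOTIONS),
        )

    def forward(self, face, speech, text):
        return self.classifier(torch.cat([face, speech, text], dim=-1))

class DecisionLevelFusion(nn.Module):
    """Late fusion: classify each modality separately, then average the logits."""
    def __init__(self):
        super().__init__()
        self.face_head = nn.Linear(FACE_DIM, NUM_EMOTIONS)
        self.speech_head = nn.Linear(SPEECH_DIM, NUM_EMOTIONS)
        self.text_head = nn.Linear(TEXT_DIM, NUM_EMOTIONS)

    def forward(self, face, speech, text):
        logits = torch.stack([
            self.face_head(face),
            self.speech_head(speech),
            self.text_head(text),
        ])
        return logits.mean(dim=0)

# Usage on a batch of 4 clips with random embeddings: both return (4, 7) logits.
face, speech, text = (torch.randn(4, FACE_DIM), torch.randn(4, SPEECH_DIM),
                      torch.randn(4, TEXT_DIM))
print(FeatureLevelFusion()(face, speech, text).shape)
print(DecisionLevelFusion()(face, speech, text).shape)
```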


2021 ◽  
pp. 014544552098255
Author(s):  
Shereen Cohen ◽  
Robert Koegel ◽  
Lynn Kern Koegel ◽  
Erin Engstrom ◽  
Kurtis Young ◽  
...  

Many individuals with Autism Spectrum Disorder (ASD) experience challenges with social communication, including recognizing and responding to nonverbal cues. The purpose of this study was to assess the efficacy of self-management combined with textual cues for teaching adults with ASD to recognize and respond to nonverbal expressions of boredom and confusion during social conversation. A multiple baseline across participants design was used to assess the efficacy of this intervention for three participants. Results showed substantial gains across all participants in their recognition of and responsiveness to the targeted nonverbal cues. Moreover, this skill was maintained after the completion of the intervention and generalized to novel conversation partners and settings, with large effect sizes. The findings add to the literature on interventions for adults with ASD and further support the use of self-management and textual cues as effective intervention strategies for improving nonverbal communication.


Author(s):  
Giandomenico Di Domenico ◽  
Annamaria Tuan ◽  
Marco Visentin

In the wake of the COVID-19 pandemic, unprecedented amounts of fake news and hoaxes spread on social media. In particular, conspiracy theories about the effects of new technologies like 5G circulated, and misinformation tarnished the reputation of brands like Huawei. Language plays a crucial role in understanding the motivational determinants of social media users in sharing misinformation, as people extract meaning from information based on their discursive resources and their skill set. In this paper, we analyze textual and non-textual cues from a panel of 4923 tweets containing the hashtags #5G and #Huawei during the first week of May 2020, when several countries were still under lockdown measures, to determine whether a tweet is retweeted and, if so, how often. Overall, using traditional logistic regression and machine learning, we found that the textual and non-textual cues affect whether a tweet is retweeted and how many retweets it accumulates in different ways. In particular, the presence of misinformation plays an interesting role in spreading the tweet through the network. More importantly, the relative influence of the cues suggests that Twitter users actually read a tweet but do not necessarily understand or critically evaluate it before deciding to share it on the platform.
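
As a rough sketch of the kind of model described (a logistic regression predicting whether a tweet is retweeted from tweet-level cues), the following Python snippet uses scikit-learn on synthetic data; the feature names, labels, and data are invented for illustration and are not the paper's actual predictors or results.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Hypothetical tweet-level cues; all values below are synthetic.
rng = np.random.default_rng(0)
X = np.column_stack([
    rng.integers(0, 2, 1000),      # contains_misinformation (0/1)
    rng.integers(0, 2, 1000),      # has_url (0/1)
    rng.integers(0, 2, 1000),      # has_media (0/1)
    rng.normal(6, 2, 1000),        # log follower count
    rng.integers(10, 280, 1000),   # text length in characters
])
y = rng.integers(0, 2, 1000)       # retweeted at least once? (synthetic labels)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(model.coef_)                 # sign/size of each cue's association
print(model.score(X_test, y_test)) # held-out accuracy (near chance on random labels)
```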


2021 ◽  
pp. 009365022199531
Author(s):  
Tess van der Zanden ◽  
Maria B. J. Mos ◽  
Alexander P. Schouten ◽  
Emiel J. Krahmer

This study investigates how online dating profiles, consisting of both pictures and texts, are visually processed, and how both components affect impression formation. The attractiveness of the profile picture was varied systematically, and the texts either did or did not contain language errors. By collecting eye-tracking and perception data, we investigated whether picture attractiveness determines attention to the profile text and whether the text plays a secondary role. Eye-tracking results revealed that pictures are more likely to attract initial attention and that more attractive pictures receive more attention. Texts received attention regardless of the picture's attractiveness. Moreover, the perception data showed that both the pictorial and textual cues affect impression formation, but that they affect different dimensions of perceived attraction differently. Based on our results, a new multimodal information processing model is proposed, which suggests that pictures and texts are processed independently and lead to separate assessments of cue attractiveness before impression formation.
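
A minimal sketch of one typical step in such an eye-tracking analysis: assigning fixations to rectangular areas of interest (AOIs) and summing dwell time per profile component. The AOI coordinates and fixation records below are invented for illustration; the study's actual stimuli and measures may differ.

```python
from dataclasses import dataclass

@dataclass
class Fixation:
    x: float           # screen coordinates in pixels
    y: float
    duration_ms: float

# Hypothetical rectangular AOIs for a profile: (x_min, y_min, x_max, y_max).
AOIS = {
    "picture": (0, 0, 400, 400),
    "text": (0, 400, 400, 800),
}

def dwell_time(fixations, aois):
    """Sum fixation durations falling inside each area of interest."""
    totals = {name: 0.0 for name in aois}
    for f in fixations:
        for name, (x0, y0, x1, y1) in aois.items():
            if x0 <= f.x <= x1 and y0 <= f.y <= y1:
                totals[name] += f.duration_ms
    return totals

fixations = [Fixation(200, 150, 310), Fixation(180, 520, 250), Fixation(90, 610, 400)]
print(dwell_time(fixations, AOIS))  # {'picture': 310.0, 'text': 650.0}
```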


Author(s):  
Thangarajah Akilan ◽  
Amitha Thiagarajan ◽  
Bharathwaaj Venkatesan ◽  
Sowmiya Thirumeni ◽  
Sanjana Gurusamy Chandrasekaran

2020 ◽  
Vol 22 (10) ◽  
pp. 2684-2697
Author(s):  
Peilun Zhou ◽  
Tong Xu ◽  
Zhizhuo Yin ◽  
Dong Liu ◽  
Enhong Chen ◽  
...  
