Non-Verbal Means of Communication in the Representation of the Emotional State of Joy in Modern English Fictional Discourse

Author(s):  
Natalia Kyseliuk ◽  
Alla Hubina ◽  
Alla Martyniuk ◽  
Valentyna Tryndiuk

This article studies the non-verbal means of communication that designate the emotional state of joy in fictional discourse. The functional peculiarities of the non-verbal means designating the emotional state of joy are specified. It is shown that the register of non-verbal components indicating joy includes facial expressions, gestures, phonation, and the like, with the subsystems of kinesic and phonatory non-verbal components being dominant. Fictional discourse representing joy contains both verbal and non-verbal means of designating the emotional state, functioning either separately or in interaction. This interaction is based on the concept of emotional valence and is realized in repetition, contradiction, substitution, complementation, emphasis, and regulation. In an utterance, non-verbal means can occur in the initial, medial, and final positions. The most common models of the occurrence of non-verbal components denoting joy in utterances are identified.

Electronics ◽  
2021 ◽  
Vol 10 (9) ◽  
pp. 1051
Author(s):  
Si Jung Kim ◽  
Teemu H. Laine ◽  
Hae Jung Suk

Presence refers to the emotional state of users in which their motivation for thinking and acting arises from the perception of entities in a virtual world. The immersion level of users can vary when they interact with different media content, which may result in different levels of presence, especially in a virtual reality (VR) environment. This study investigates how user characteristics, such as gender, immersion level, and emotional valence in VR, relate to three elements of presence effects: attention, enjoyment, and memory. A VR story was created and used as an immersive stimulus in an experiment; it was presented through a head-mounted display (HMD) equipped with an eye tracker that collected the participants' eye gaze data during the experiment. A total of 53 university students (26 females, 27 males), aged 20 to 29 years (mean 23.8), participated in the experiment. A set of pre- and post-questionnaires served as a subjective measure to support the evidence of relationships between the presence effects and user characteristics. The results showed that user characteristics such as gender, immersion level, and emotional valence affected the participants' level of presence; however, there was no evidence that attention was associated with enjoyment or memory.
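
As a concrete illustration of how such eye gaze data can feed one of the presence elements, the sketch below computes a simple attention proxy: the fraction of gaze samples that fall inside an area of interest (AOI) on the display. The sample format, the AOI, and the dwell-fraction measure are illustrative assumptions, not the study's actual analysis pipeline.

```python
# Minimal sketch (not the authors' pipeline): estimating an "attention"
# proxy from HMD eye-gaze samples as the fraction of time spent inside
# an area of interest (AOI). Field names and the AOI are hypothetical.
from dataclasses import dataclass

@dataclass
class GazeSample:
    t: float   # timestamp in seconds
    x: float   # normalized horizontal gaze position, 0..1
    y: float   # normalized vertical gaze position, 0..1

def dwell_fraction(samples, aoi):
    """Fraction of gaze samples falling inside an AOI rectangle.

    aoi is (x_min, y_min, x_max, y_max) in normalized coordinates.
    """
    if not samples:
        return 0.0
    x0, y0, x1, y1 = aoi
    inside = sum(1 for s in samples if x0 <= s.x <= x1 and y0 <= s.y <= y1)
    return inside / len(samples)

# Example: gaze concentrated on a story character occupying the image center.
samples = [GazeSample(t=i * 0.01, x=0.5, y=0.5) for i in range(100)]
print(dwell_fraction(samples, aoi=(0.4, 0.4, 0.6, 0.6)))  # -> 1.0
```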


Author(s):  
Kamal Naina Soni

Abstract: Facial expressions play an important role in inferring an individual's emotional state. They help determine a person's current state and mood based on various features of the face, such as the eyes, cheeks, forehead, or the curve of the smile. A survey confirmed that people use music as a form of expression and often relate to a particular piece of music according to their emotions. Considering how music affects the human brain and body, our project extracts the user's facial expressions and features to determine the user's current mood. Once the emotion is detected, a playlist of songs suited to that mood is presented to the user. This can help lift the mood or simply calm the individual, and it saves the time spent searching for suitable songs, since the software plays music according to the detected emotion and can be used anywhere.

Keywords: Music, Emotion recognition, Categorization, Recommendations, Computer vision, Camera
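
A minimal sketch of the mood-to-playlist pipeline the abstract describes is given below. The emotion labels, the playlist contents, and the detect_emotion stub are hypothetical placeholders; a real implementation would substitute a trained facial-expression classifier for the stub.

```python
# Minimal sketch of the mood-to-playlist step. The emotion labels,
# playlist contents, and detect_emotion() stub are hypothetical; a real
# system would use a trained facial-expression classifier in its place.
import random

PLAYLISTS = {
    "happy": ["Upbeat Track A", "Upbeat Track B"],
    "sad": ["Calming Track A", "Calming Track B"],
    "neutral": ["Ambient Track A", "Ambient Track B"],
}

def detect_emotion(face_image) -> str:
    """Stand-in for a computer-vision emotion classifier."""
    return random.choice(list(PLAYLISTS))  # placeholder prediction

def recommend(face_image, n=1):
    """Detect the user's mood and pick n songs from the matching playlist."""
    mood = detect_emotion(face_image)
    return mood, random.sample(PLAYLISTS[mood], k=n)

mood, songs = recommend(face_image=None)
print(f"Detected mood: {mood}; suggested songs: {songs}")
```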


2019 ◽  
Vol 72 (12) ◽  
pp. 2833-2847 ◽  
Author(s):  
Jasmine Virhia ◽  
Sonja A Kotz ◽  
Patti Adank

Observing someone speak automatically triggers the cognitive and neural mechanisms required to produce speech, a phenomenon known as automatic imitation. Automatic imitation of speech can be measured using the Stimulus Response Compatibility (SRC) paradigm, which shows facilitated response times (RTs) when responding to a prompt (e.g., say aa) in the presence of a congruent distracter (a video of someone saying aa), compared with responding in the presence of an incongruent distracter (a video of someone saying oo). Current models of the relation between emotion and cognitive control suggest that automatic imitation can be modulated by varying stimulus-driven task aspects, that is, the distracter's emotional valence. It is unclear how the emotional state of the observer affects automatic imitation. The current study explored independent effects of the emotional valence of the distracter (Stimulus-driven Dependence) and of the observer's emotional state (State Dependence) on automatic imitation of speech. Participants completed an SRC paradigm for visual speech stimuli, producing a prompt superimposed over a neutral or emotional (happy or angry) distracter video. State Dependence was manipulated by asking participants to speak the prompt in a neutral or emotional (happy or angry) voice. Automatic imitation was facilitated for emotional prompts, but not for emotional distracters, implying a facilitating effect of State Dependence. The results are interpreted in the context of theories of automatic imitation and cognitive control, and we suggest that models of automatic imitation should be modified to accommodate state-dependent and stimulus-driven effects.
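
For concreteness, the SRC facilitation effect described above is conventionally quantified as the difference between mean RTs on incongruent and congruent trials. The sketch below shows that computation; the trial data are illustrative, not taken from the study.

```python
# Minimal sketch (assumed, not from the paper) of how the SRC congruency
# effect is usually quantified: mean response time on incongruent trials
# minus mean response time on congruent trials, per participant.
from statistics import mean

def congruency_effect(trials):
    """trials: iterable of (condition, rt_ms) with condition in
    {"congruent", "incongruent"}; a positive result indicates
    facilitation by congruent distracters."""
    congruent = [rt for cond, rt in trials if cond == "congruent"]
    incongruent = [rt for cond, rt in trials if cond == "incongruent"]
    return mean(incongruent) - mean(congruent)

trials = [("congruent", 512), ("congruent", 498),
          ("incongruent", 561), ("incongruent", 547)]
print(f"Congruency effect: {congruency_effect(trials):.1f} ms")  # 49.0 ms
```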


2012 ◽  
Vol 19 (1) ◽  
pp. 3-13
Author(s):  
Rafael A. M. Gonçalves ◽  
Diego R. Cueva ◽  
Marcos R. Pereira-Barretto ◽  
Fabio G. Cozman

2012 ◽  
Vol 29 (5) ◽  
pp. 533-541 ◽  
Author(s):  
Sylvain Clément ◽  
Audrey Tonini ◽  
Fatiha Khatir ◽  
Loris Schiaratura ◽  
Séverine Samson

In this study, we examined short- and longer-term effects of musical and cooking interventions on the emotional well-being of patients with severe Alzheimer's disease (AD). These two pleasurable activities (i.e., listening to music, tasting sweets) that were collectively performed (i.e., playing music together, collaborative preparation of a cake) were compared in two groups of matched patients with AD (N = 14). Each intervention lasted four weeks (two sessions per week), and their effects were regularly assessed for up to four weeks after the end of the intervention. We repeatedly evaluated the emotional state of both groups before, during, and after the intervention periods by analyzing discourse content and facial expressions from short filmed interviews, as well as caregivers' judgments of mood. The results reveal short-term benefits of both the music and cooking interventions on emotional state across all of these measures, but long-term benefits were only evident after the music intervention. These findings suggest that non-pharmacological approaches offer promising methods to improve the quality of life of patients with dementia, and that music stimulation is particularly effective at producing long-lasting effects on patients' emotional well-being.


Author(s):  
Jenni Anttonen ◽  
Veikko Surakka ◽  
Mikko Koivuluoma

The aim of the present paper was to study heart rate changes during video stimulation depicting two actors (one male, one female) producing dynamic facial expressions of happiness and sadness as well as a neutral expression. We measured ballistocardiographic emotion-related heart rate responses with an unobtrusive measurement device called the EMFi chair. Ratings of subjective responses to the video stimuli were also collected. The results showed that the video stimuli evoked significantly different ratings of emotional valence and arousal. Heart rate decelerated in response to all stimuli, and the deceleration was strongest during negative stimulation. Furthermore, stimuli from the male actor evoked significantly larger arousal ratings and heart rate responses than stimuli from the female actor. The results also showed differential responding between female and male participants. The present results support the hypothesis that heart rate decelerates in response to films depicting dynamic negative facial expressions, as well as the idea that the EMFi chair can be used to detect people's emotional responses while they interact with technology.
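
As a rough illustration of how such deceleration can be quantified, the sketch below compares mean heart rate during stimulation against a pre-stimulus baseline; the sample values and the simple baseline-difference measure are assumptions for illustration, not the authors' analysis.

```python
# Minimal sketch (assumptions, not the authors' analysis) of quantifying
# heart rate deceleration: mean heart rate during stimulation compared
# against a pre-stimulus baseline; negative values indicate deceleration.
from statistics import mean

def hr_change(baseline_bpm, stimulus_bpm):
    """Mean heart-rate change (bpm) from baseline to stimulation."""
    return mean(stimulus_bpm) - mean(baseline_bpm)

baseline = [72.1, 71.8, 72.4]          # bpm samples before the video
during_negative = [69.0, 68.5, 68.9]   # bpm samples during a sad-face clip
print(f"HR change: {hr_change(baseline, during_negative):+.2f} bpm")
```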


2019 ◽  
Vol 31 (2) ◽  
pp. 24-30
Author(s):  
Niarz J. Hussein

The present study was designed to measure both eye and nasal temperatures while stroking the animals' bodies, in order to determine positive emotional state in free-range Hamdani ewes. Twenty Hamdani ewes, aged 2–4 years, were used in this study. Focal sampling was used to collect data from both the nose and the eyes of the animals. A total of 1,680 temperature readings, an average of 84 per ewe, were collected from the twenty ewes throughout the study. Ewes were stroked at the forehead, withers, and neck for five minutes; temperature data were collected twice before, twice during, and twice after stroking for both the eyes and the nose. Results revealed a significant difference in eye temperature (P<0.01) as well as nasal temperature (P<0.05) between the three stages, with both eye and nasal temperatures decreasing over time. In addition, the mean eye and nasal temperatures across all measurement time points were highly correlated (r = 0.94). From this study it can be concluded that peripheral (eye and nose) temperatures offer a useful window on changes in emotional valence in ewes.
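
The correlation reported above can be computed directly from the paired mean temperatures; the sketch below shows the calculation with illustrative values in place of the study's data.

```python
# Minimal sketch reproducing the kind of correlation reported above
# (r = 0.94 between mean eye and nasal temperature across measurement
# points). The temperature values here are illustrative, not the
# study's data. Requires Python 3.10+ for statistics.correlation.
from statistics import correlation

eye_temp = [35.2, 35.0, 34.8, 34.6, 34.5, 34.3]     # degrees C, before/during/after
nasal_temp = [33.1, 32.9, 32.6, 32.5, 32.3, 32.2]   # degrees C, same time points
print(f"r = {correlation(eye_temp, nasal_temp):.2f}")
```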


Author(s):  
Intars Nikonovs ◽  
Juris Grants ◽  
Ivars Kravalis

The aim of the research was to evaluate emotional state before, after, and on the day after a ski hike. The distance was 24 km and the hike lasted 8 hours. To assess the ski hikers' emotions, three tests were conducted: before the ski hike, after the ski hike, and 16 hours after the hike. Emotional state was assessed by two different methods: one measured the dynamics of the participants' emotional state subjectively using a questionnaire, while the other analyzed the participants' facial expressions. The results showed an improved emotional state by both methods, although the facial-expression analysis showed a greater improvement in positive emotions on the day after the ski hike.


Author(s):  
Kostas Karpouzis ◽  
Athanasios Drosopoulos ◽  
Spiros Ioannou ◽  
Amaryllis Raouzaiou ◽  
Nicolas Tsapatsoulis ◽  
...  

Emotionally-aware Man-Machine Interaction (MMI) systems are presently at the forefront of interest in the computer vision and artificial intelligence communities, since they give less technology-aware people the opportunity to use computers more efficiently, overcoming fears and preconceptions. Most emotion-related facial and body gestures are considered to be universal, in the sense that they are recognized across different cultures; therefore, the introduction of an "emotional dictionary" that includes descriptions and perceived meanings of facial expressions and body gestures, so as to help infer the likely emotional state of a specific user, can enhance the affective nature of MMI applications (Picard, 2000).
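
A minimal sketch of how such an "emotional dictionary" might be realized as a data structure is given below; the cue names, emotions, and weights are invented for illustration and do not reflect any published inventory.

```python
# Minimal sketch of the "emotional dictionary" idea: a lookup from
# observed non-verbal cues to likely emotional states. The cue names
# and weights are illustrative assumptions, not the authors' inventory.
EMOTIONAL_DICTIONARY = {
    "smile": {"joy": 0.9, "surprise": 0.1},
    "frown": {"anger": 0.6, "sadness": 0.3},
    "raised_eyebrows": {"surprise": 0.7, "fear": 0.2},
}

def infer_emotion(cues):
    """Sum per-cue scores and return the most likely emotional state."""
    scores = {}
    for cue in cues:
        for emotion, weight in EMOTIONAL_DICTIONARY.get(cue, {}).items():
            scores[emotion] = scores.get(emotion, 0.0) + weight
    return max(scores, key=scores.get) if scores else "unknown"

print(infer_emotion(["smile", "raised_eyebrows"]))  # -> "joy"
```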

