Emotion Generation Model with Growth Functions for Robots

Author(s):  
Miho Harata ◽  
Masataka Tokumaru

In this paper, we propose an emotion model with growth functions for robots. Many emotion models for robots have been developed using neural networks (NNs), focusing on the functions of emotion recognition, control, and expression. A common problem with these models is that their emotion generation algorithms are overly simplified, and users readily lose interest in such simple systems. Most models attempt to generate complex emotional expressions, whereas no previous study has considered the “growth” of a robot. We therefore propose a growth model for emotions based on changes in the network structure of a self-organizing map. We also apply a multilayer perceptron NN to generate more sophisticated expressions of emotion using the growth functions. This model generated behavior similar to the concept of affective change described in genetic psychology. Our results showed that this emotion model is well suited to producing a robot with growth functions based on a psychological model.
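As a rough illustration of the growth idea (not the authors' implementation), the sketch below trains a one-dimensional self-organizing map and periodically "grows" it by inserting a node between the two most dissimilar neighbors, so the map can represent finer-grained emotional states over time. All class names, parameters, and the placeholder input vectors are hypothetical assumptions.

```python
import numpy as np

class GrowingEmotionSOM:
    """Minimal sketch (hypothetical) of a SOM whose lattice grows over time."""

    def __init__(self, n_nodes=4, dim=3, seed=0):
        rng = np.random.default_rng(seed)
        self.weights = rng.random((n_nodes, dim))  # each node = prototype emotion vector

    def winner(self, x):
        # Best-matching unit: node whose weight vector is closest to the input stimulus.
        return int(np.argmin(np.linalg.norm(self.weights - x, axis=1)))

    def train_step(self, x, lr=0.2, radius=1.0):
        w = self.winner(x)
        dist = np.abs(np.arange(len(self.weights)) - w)   # 1-D lattice distance
        h = np.exp(-(dist ** 2) / (2 * radius ** 2))      # neighborhood function
        self.weights += lr * h[:, None] * (x - self.weights)

    def grow(self):
        # "Growth": insert a new node between the two most dissimilar neighbors.
        gaps = np.linalg.norm(np.diff(self.weights, axis=0), axis=1)
        i = int(np.argmax(gaps))
        new_node = (self.weights[i] + self.weights[i + 1]) / 2
        self.weights = np.insert(self.weights, i + 1, new_node, axis=0)

# Usage: feed stimulus vectors and occasionally grow the map.
som = GrowingEmotionSOM()
for t in range(100):
    stimulus = np.random.rand(3)   # placeholder 3-D emotional stimulus
    som.train_step(stimulus)
    if t % 25 == 24:
        som.grow()
print(som.weights.shape)           # node count has grown from 4 to 8
```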

Author(s):  
Sheldon Schiffer

Video game non-player characters (NPCs) are a type of agent that often inherits emotion models and functions from ancestor virtual agents. Few emotion models have been designed explicitly for NPCs, and as a result NPC performances do not approach the expressive possibilities available to live-action performing actors or hand-crafted animated characters. Drawing on distinct perspectives on emotion generation from multiple fields within narratology and computational cognitive psychology, the architecture of NPC emotion systems can reflect the theories and practices of performing artists. This chapter argues that deploying virtual agent emotion models on NPCs can constrain the performative aesthetic properties of those NPCs. An actor-centric emotion model can accommodate actors' creative processes and may reveal which features of emotion model architectures are most useful for contemporary game production of photorealistic NPCs that achieve cinematic acting styles and robust narrative design.


2013 ◽  
Vol 461 ◽  
pp. 618-622
Author(s):  
Chuan Wan ◽  
Yan Tao Tian

Affective computing is an indispensable aspect of harmonious human-computer interaction and artificial intelligence, and enabling computers to generate emotions is one of its challenging tasks. Affective computing and artificial psychology are new research fields concerned with computers and emotions, and they share a key research topic: affective modeling. This paper introduces the basic affective elements and the representation of affect in a computer, and describes an emotion generation model for a multimodal virtual human. The relationships among emotion, mood, and personality are discussed, and the PAD emotion space is used to define both emotion and mood. We obtain the strength of each expression component through fuzzy recognition of facial expressions based on Ekman's six expression classes, and use this information as the signal that motivates emotion in an intensity-based affective model. Finally, a 3D virtual human head with facial expressions is designed to display the generated emotions. Experimental results demonstrate that the intensity-based emotion generation model works effectively and conforms to the basic principles of human emotion generation.
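To make the pipeline concrete, here is a minimal sketch of how fuzzy Ekman-expression intensities might be blended into a PAD (pleasure-arousal-dominance) point and biased by the current mood. The PAD coordinates, weights, and function names are illustrative assumptions, not values or interfaces from the paper.

```python
import numpy as np

# Illustrative (not from the paper) PAD coordinates for Ekman's six basic expressions.
EKMAN_TO_PAD = {
    "happiness": np.array([ 0.8,  0.5,  0.4]),
    "sadness":   np.array([-0.6, -0.3, -0.3]),
    "anger":     np.array([-0.5,  0.6,  0.3]),
    "fear":      np.array([-0.6,  0.6, -0.4]),
    "disgust":   np.array([-0.4,  0.2,  0.1]),
    "surprise":  np.array([ 0.2,  0.7,  0.0]),
}

def emotion_from_expression(intensities, mood=np.zeros(3), mood_weight=0.3):
    """Blend fuzzy expression intensities (0..1 per Ekman class) into a PAD point,
    then bias it by the current mood -- a sketch of an intensity-based model."""
    total = sum(intensities.values()) or 1.0
    pad = sum(w * EKMAN_TO_PAD[name] for name, w in intensities.items()) / total
    return (1 - mood_weight) * pad + mood_weight * mood

# Usage: a mostly-happy face with a trace of surprise, under a slightly negative mood.
print(emotion_from_expression({"happiness": 0.7, "surprise": 0.2},
                              mood=np.array([-0.1, 0.0, 0.0])))
```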


2021 ◽  
Vol 2021 ◽  
pp. 1-11
Author(s):  
Xinmei Zhang

Music is an indispensable part of our lives and studies and one of the most important forms of multimedia application. With the development of deep learning and neural networks in recent years, how to apply cutting-edge technology to the study of music has become a research hotspot. The music waveform is not only the main carrier of musical frequency content but also the basis of music feature extraction. This paper first designs a note-extraction method based on the fast Fourier transform of the audio signal under a self-organizing map (SOM) neural network, which can accurately extract note-level features such as amplitude, loudness, and period. Secondly, the audio is divided into bars by a sliding-window matching method, and the amplitude, loudness, and period of each bar are obtained from the behavior of the audio signal within that bar. Finally, the audio is segmented according to the music-theoretic similarity between adjacent bars, and the features of each segment are obtained. The traditional recurrent neural network (RNN) is improved, and the SOM neural network is used to recognize audio emotion features. The experimental results show that the proposed method based on the SOM neural network and big data can effectively extract and analyze music waveform features. Compared with previous studies, this paper proposes a new algorithm that extracts and analyzes sound waveform data more accurately and quickly, and it uses an SOM neural network to analyze the emotion model contained in music for the first time.
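As a hedged sketch of the per-bar feature extraction step, the code below splits a waveform into fixed-length windows and estimates amplitude, loudness, and dominant period with an FFT. The window length, feature definitions, and function name are assumptions for illustration, not the paper's exact method.

```python
import numpy as np

def bar_features(signal, sr, bar_len_s=2.0):
    """Split the waveform into fixed-length windows ("bars") and estimate
    amplitude, loudness, and dominant period of each via an FFT."""
    n = int(bar_len_s * sr)
    features = []
    for start in range(0, len(signal) - n + 1, n):
        frame = signal[start:start + n] * np.hanning(n)   # taper the window
        spectrum = np.abs(np.fft.rfft(frame))
        freqs = np.fft.rfftfreq(n, d=1.0 / sr)
        peak = freqs[np.argmax(spectrum[1:]) + 1]         # skip the DC bin
        features.append({
            "amplitude": float(np.max(np.abs(frame))),
            "loudness_db": float(20 * np.log10(np.sqrt(np.mean(frame ** 2)) + 1e-12)),
            "period_s": float(1.0 / peak) if peak > 0 else 0.0,
        })
    return features

# Usage: a 4-second 440 Hz tone sampled at 8 kHz gives two "bars",
# each with an estimated period close to 1/440 s.
sr = 8000
t = np.arange(0, 4, 1.0 / sr)
print(bar_features(0.5 * np.sin(2 * np.pi * 440 * t), sr)[0])
```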


2015 ◽  
Vol 27 (5) ◽  
pp. 563-570 ◽  
Author(s):  
Wisanu Jitviriya ◽  
Masato Koike ◽  
Eiji Hayashi

[Figure: Behavioral/emotional expression system]
In our research, we have focused on applying brain-inspired technology by developing a robot whose consciousness resembles that of a human being. The goal was to enhance intelligent behavior and emotion and to facilitate communication between human beings and robots. We sought to increase the robot's behavioral/emotional intelligence so that it could distinguish, adapt, and react to changes in its environment. In this paper, we present a behavioral/emotional expression system that operates automatically through two processes. The first is the classification of behavior and emotions by determining the winner node through Self-Organizing Map (SOM) learning. For the second, we propose a stochastic emotion model based on Markov theory in which the probabilities of emotional state transitions are updated with affective factors. Finally, we verified this model on a conscious behavior robot (Conbe-I) and confirmed the effectiveness of the proposed system with experimental results in a realistic environment.
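The stochastic part of such a system can be sketched as a small Markov chain whose transition probabilities are nudged by affective factors and then re-normalized. The state set, update rule, and learning rate below are illustrative assumptions, not the authors' model.

```python
import numpy as np

STATES = ["neutral", "happy", "sad", "angry"]   # illustrative state set

class MarkovEmotion:
    """Sketch of a stochastic emotion model: the next emotional state is sampled
    from a transition matrix whose rows are nudged by affective factors."""

    def __init__(self, seed=0):
        self.rng = np.random.default_rng(seed)
        # Start from a uniform transition matrix (an assumption, not the paper's values).
        self.P = np.full((len(STATES), len(STATES)), 1.0 / len(STATES))
        self.state = 0  # index into STATES

    def update(self, affective_factor):
        """affective_factor: vector over STATES; positive entries make the matching
        state more likely to be transitioned into from the current state."""
        row = self.P[self.state] + 0.1 * np.asarray(affective_factor)
        row = np.clip(row, 1e-6, None)
        self.P[self.state] = row / row.sum()    # keep the row a probability vector

    def step(self):
        self.state = int(self.rng.choice(len(STATES), p=self.P[self.state]))
        return STATES[self.state]

# Usage: repeated positive stimuli gradually bias transitions toward "happy".
model = MarkovEmotion()
for _ in range(20):
    model.update([0.0, 1.0, -0.2, -0.2])
    model.step()
print(STATES[model.state], model.P[model.state].round(2))
```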


2020 ◽  
Vol 4 (2) ◽  
pp. 59-69
Author(s):  
Leeveshkumar Pokhun ◽  
M Yasser Chuttur

Several studies have used different techniques to detect and identify emotions expressed in various text corpora. In this paper, we review the emotion models, emotion datasets, and corresponding techniques used for emotion analysis in past studies. We observe that researchers have used a wide variety of techniques to detect emotions in text and that there is currently no gold standard for which dataset or emotion model to use. Consequently, although the field of emotion analysis has gained much momentum in recent years, there appears to be little progress toward research findings that are useful in real-world applications. Based on our analysis and findings, we urge researchers to consider developing shared datasets, evaluation benchmarks, and a common platform for disseminating achievements in emotion analysis so that the field can advance further.


2011 ◽  
Vol 15 (2) ◽  
pp. 159-173 ◽  
Author(s):  
Jonna K. Vuoskoski ◽  
Tuomas Eerola

Most previous studies investigating music-induced emotions have applied emotion models developed in other fields to the domain of music. The aim of this study was to compare the applicability of music-specific and general emotion models – namely, the Geneva Emotional Music Scale (GEMS) and the discrete and dimensional emotion models – in the assessment of music-induced emotions. A related aim was to explore the role of individual-difference variables (such as personality and mood) in music-induced emotions and to discover whether some emotion models reflect these individual differences more strongly than others. One hundred and forty-eight participants listened to 16 film music excerpts and rated the emotional responses the excerpts evoked. Intraclass correlations and Cronbach alphas revealed that the overall consistency of ratings was highest for the dimensional model. The dimensional model also outperformed the other two models in discriminating between music excerpts, and principal component analysis revealed that 89.9% of the variance in the mean ratings of all the scales (across all three models) was accounted for by two principal components that could be labelled valence and arousal. Personality-related differences were most pronounced for the discrete emotion model. Personality, mood, and the emotion model used were also associated with the intensity of experienced emotions. Implications for future music and emotion studies are discussed concerning the selection of an appropriate emotion model when measuring music-induced emotions.
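For readers unfamiliar with the analysis, the sketch below shows how rating consistency (Cronbach's alpha) and a two-component structure can be computed. The data are purely synthetic and the array shapes only mirror the study's design; nothing here reproduces the study's actual ratings or results.

```python
import numpy as np

def cronbach_alpha(ratings):
    """ratings: (raters x items) matrix; standard Cronbach's alpha formula."""
    item_vars = ratings.var(axis=0, ddof=1)
    total_var = ratings.sum(axis=1).var(ddof=1)
    k = ratings.shape[1]
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Simulated (synthetic) data: 148 listeners rate 16 excerpts on one scale.
rng = np.random.default_rng(0)
listener_bias = rng.normal(size=(148, 1))              # shared rater tendency
ratings = listener_bias + 0.3 * rng.normal(size=(148, 16))
print("alpha:", round(cronbach_alpha(ratings), 2))

# PCA on mean ratings of several scales: the first two components play the role
# of the valence and arousal dimensions reported in the study.
scales = rng.normal(size=(16, 10))                     # 16 excerpts x 10 rating scales
centered = scales - scales.mean(axis=0)
_, s, _ = np.linalg.svd(centered, full_matrices=False)
explained = (s ** 2) / (s ** 2).sum()
print("variance explained by first two PCs:", explained[:2].round(2))
```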

