Functionalist Emotion Model in NARS

Author(s): Xiang Li, Patrick Hammer, Pei Wang, Hongling Xie

2021, Vol 8 (1)
Author(s): Aaron Frederick Bulagang, James Mountstephens, Jason Teo

Abstract

Background: Emotion prediction is a method that recognizes human emotion from a subject's physiological data. The problem in question is the limited use of heart rate (HR) as the prediction feature with common classifiers such as Support Vector Machine (SVM), K-Nearest Neighbor (KNN), and Random Forest (RF). This paper investigates whether HR signals can be used to classify four emotion classes, based on Russell's emotion model, in a virtual reality (VR) environment using machine learning.

Method: An experiment was conducted using the Empatica E4 wristband to acquire each participant's HR, a VR headset as the display device on which participants viewed 360° emotional videos, and the Empatica E4 real-time application to extract and process the recorded heart rate during the experiment.

Findings: For intra-subject classification, all three classifiers (SVM, KNN, and RF) reached 100% as the highest accuracy, while inter-subject classification achieved 46.7% for SVM, 42.9% for KNN, and 43.3% for RF.

Conclusion: The results demonstrate the potential of SVM, KNN, and RF classifiers using HR as a feature to predict four distinct emotion classes in a VR environment. Potential applications include interactive gaming, affective entertainment, and VR health rehabilitation.
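The intra-subject pipeline described above (HR-derived features, a standard classifier, four Russell-quadrant labels) can be sketched on synthetic data. A minimal from-scratch KNN stands in for the three classifiers here; the feature choices (mean HR and HR variability), the class coding, and all numbers below are illustrative assumptions, not the study's data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed per-class feature distributions (mean HR in bpm, HR variability)
# for the four Russell quadrants -- synthetic, not the study's dataset.
classes = {"happy": (95, 8), "angry": (100, 12), "sad": (70, 4), "relaxed": (65, 6)}

X, y = [], []
for idx, (hr_mu, var_mu) in enumerate(classes.values()):
    for _ in range(50):
        X.append([rng.normal(hr_mu, 5), abs(rng.normal(var_mu, 2))])
        y.append(idx)
X, y = np.array(X), np.array(y)

# Shuffle and split 70/30 (intra-subject style: train and test from one pool).
order = rng.permutation(len(y))
split = int(0.7 * len(y))
train, test = order[:split], order[split:]

def knn_predict(X_tr, y_tr, X_te, k=5):
    """Majority vote among the k nearest training samples (Euclidean)."""
    preds = []
    for x in X_te:
        nearest = y_tr[np.argsort(np.linalg.norm(X_tr - x, axis=1))[:k]]
        preds.append(np.bincount(nearest).argmax())
    return np.array(preds)

acc = (knn_predict(X[train], y[train], X[test]) == y[test]).mean()
print(f"KNN accuracy on synthetic HR features: {acc:.2f}")
```

With well-separated synthetic classes the intra-subject accuracy is high, echoing the gap the paper reports between intra-subject (up to 100%) and inter-subject (~43-47%) performance, where train and test come from different people.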


2016, Vol 33 (4), pp. 472-492
Author(s): Yading Song, Simon Dixon, Marcus T. Pearce, Andrea R. Halpern

Music both conveys and evokes emotions, and although both phenomena are widely studied, the difference between them is often neglected. This study examines the difference between perceived and induced emotion for Western popular music using both categorical and dimensional models of emotion, and the influence of individual listener differences on emotion judgments. A total of 80 musical excerpts were randomly selected from an established dataset of 2,904 popular songs, each tagged with one of the four words "happy," "sad," "angry," or "relaxed" on the Last.FM website. Participants listened to the excerpts and rated perceived and induced emotion on both the categorical and dimensional models, and the reliability of the emotion tags was evaluated by participants' agreement with the corresponding labels. In addition, the Goldsmiths Musical Sophistication Index (Gold-MSI) was used to assess participants' musical expertise and engagement. As expected, regardless of the emotion model used, music evoked emotions similar to the emotional quality perceived in it. Moreover, emotion tags predicted music emotion judgments. However, age, gender, and three Gold-MSI factors (importance, emotion, and music training) predicted neither listeners' responses nor their agreement with the tags.
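One simple way to score tag reliability as "participants' agreement with the corresponding labels" is the fraction of categorical ratings that match each excerpt's Last.FM tag. The ratings below are invented for illustration; the study's actual data and scoring procedure may differ.

```python
# Hypothetical ratings: excerpt id -> categorical ratings from participants.
ratings = {
    "ex1": ["happy", "happy", "sad", "happy"],
    "ex2": ["angry", "relaxed", "angry", "angry"],
}
tags = {"ex1": "happy", "ex2": "angry"}  # Last.FM tag per excerpt (invented)

def tag_agreement(ratings, tags):
    """Fraction of ratings matching the excerpt's tag, per excerpt."""
    return {ex: sum(r == tags[ex] for r in rs) / len(rs)
            for ex, rs in ratings.items()}

print(tag_agreement(ratings, tags))  # {'ex1': 0.75, 'ex2': 0.75}
```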


2015, Vol 2015, pp. 1-4
Author(s): Kuo-Kuang Fan, Shuh-Yeuan Deng, Chung-Ho Su, Fu-Yuan Cheng

Emotions have a very important impact on humans' beliefs, motivations, actions, and physical states. Emotion prediction and its application in intelligent systems can improve the interaction between humans and machines. Current research in artificial emotion focuses on how to measure, calculate, or compute it. However, the transfer of emotion is often too complicated to represent full emotion states and their changes. This paper combines the emotional dimension model with the theory of variable fuzzy sets to present a predictive artificial emotion model and gives an illustrative example. The study shows that any raw input data can be computed with a variable fuzzy set, which provides a mathematical method for representing quantitative change, gradual qualitative change, and mutated qualitative change in emotion. The framework improves calculation methods and mechanisms, bringing the model closer to real emotional change.
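A minimal sketch of the kind of machinery variable fuzzy set theory provides: a linear relative difference degree D(x) in [-1, 1] and the relative membership degree mu = (1 + D) / 2. This is a simplified one-sided form with exponent beta = 1; the interval endpoints and the arousal example are invented for illustration and are not the paper's model.

```python
def relative_difference(x, a, M, c):
    """Linear relative difference degree D(x) in [-1, 1]:
    attracting interval [a, M] gives D >= 0 (D = 1 at x = M),
    repelling range [c, a) gives D < 0 (D = -1 at x = c)."""
    if a <= x <= M:
        return (x - a) / (M - a)
    if c <= x < a:
        return -(x - a) / (c - a)
    raise ValueError("x outside [c, M]")

def membership(x, a, M, c):
    """Relative membership degree mu = (1 + D) / 2, so mu is in [0, 1]."""
    return (1 + relative_difference(x, a, M, c)) / 2

# Hypothetical example: membership of an arousal score 0.7 in a
# "high arousal" set with a = 0.5, M = 1.0, c = 0.0.
print(round(membership(0.7, a=0.5, M=1.0, c=0.0), 3))  # 0.7
```

The continuous sweep of mu from 0 (at c) through 0.5 (at the boundary a) to 1 (at M) is what lets the model express quantitative change, gradual qualitative change, and, at the boundary crossing, mutated qualitative change.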


Author(s): Long Qin, Zhen-Hua Ling, Yi-Jian Wu, Bu-Fan Zhang, Ren-Hua Wang

Author(s): Sheldon Schiffer

Video game non-player characters (NPCs) are a type of agent that often inherits emotion models and functions from ancestor virtual agents. Few emotion models have been designed explicitly for NPCs, and those in use therefore do not approach the expressive possibilities available to live-action performing actors or hand-crafted animated characters. Drawing on distinct perspectives on emotion generation from multiple fields within narratology and computational cognitive psychology, the architecture of NPC emotion systems can reflect the theories and practices of performing artists. This chapter argues that deploying virtual agent emotion models on NPCs can constrain NPCs' performative aesthetic properties. An actor-centric emotion model can accommodate actors' creative processes and may reveal which architectural features of emotion models are most useful for contemporary production of photorealistic NPCs that achieve cinematic acting styles and robust narrative design.
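A purely hypothetical sketch of what an "actor-centric" emotion state might expose: alongside the usual appraisal axes, it carries performance-facing fields (a scene objective, expressive channels) that a generic virtual-agent model typically omits. All names and the mapping below are invented for illustration; the chapter does not specify this interface.

```python
from dataclasses import dataclass, field

@dataclass
class ActorCentricEmotionState:
    valence: float = 0.0   # appraisal axis, -1 (negative) .. 1 (positive)
    arousal: float = 0.0   # appraisal axis, 0 (calm) .. 1 (excited)
    objective: str = ""    # the character's scene objective (an acting term)
    expressive_channels: dict = field(default_factory=dict)  # gesture, prosody, ...

    def perform(self) -> dict:
        """Map internal state to expressive cues an actor or animator could read."""
        intensity = abs(self.valence) * self.arousal
        return {"intensity": round(intensity, 3), **self.expressive_channels}

npc = ActorCentricEmotionState(valence=-0.8, arousal=0.9, objective="intimidate",
                               expressive_channels={"gesture": "lean_in"})
print(npc.perform())  # {'intensity': 0.72, 'gesture': 'lean_in'}
```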

