Emotional Models

2020 ◽  
Vol 11 (2) ◽  
pp. 1-18
Author(s):  
Rana Fathalla

Emotion modeling has gained attention for almost two decades now, driven by the rapid growth of affective computing (AC), which aims to enable devices and computers to detect and respond to the end-user's emotions. Despite considerable effort and numerous attempts to build different models of emotion, emotion modeling remains an art, with little consistency or clarity about what the term actually means. This review unpacks the vagueness of the term 'emotion modeling' by discussing its various types and categories: computational models, comprising emotion generation and emotion effects, and emotion representation models, comprising categorical, dimensional, and componential models. The review also covers the applications associated with each type of emotion model: artificial intelligence and robotics architectures and human-computer interaction applications for computational models, and emotion classification and affect-aware applications, such as video games and tutoring systems, for emotion representation models.
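To make the representation categories concrete, the following is a minimal sketch (our own illustration, not from the review) of how a categorical model and a dimensional valence-arousal model might be encoded; the label set, the quadrant mapping, and its thresholds are assumptions.

```python
from dataclasses import dataclass
from enum import Enum


class BasicEmotion(Enum):
    # Categorical model: a small set of discrete emotion labels.
    JOY = "joy"
    CONTENTMENT = "contentment"
    ANGER = "anger"
    SADNESS = "sadness"


@dataclass
class DimensionalState:
    # Dimensional model: a point on continuous valence-arousal axes.
    valence: float  # unpleasant (-1.0) to pleasant (+1.0)
    arousal: float  # calm (-1.0) to activated (+1.0)


def to_category(state: DimensionalState) -> BasicEmotion:
    """Crude quadrant mapping from the valence-arousal plane to a
    discrete label (illustrative thresholds only)."""
    if state.valence >= 0:
        return BasicEmotion.JOY if state.arousal >= 0 else BasicEmotion.CONTENTMENT
    return BasicEmotion.ANGER if state.arousal >= 0 else BasicEmotion.SADNESS


print(to_category(DimensionalState(valence=0.7, arousal=0.5)))  # BasicEmotion.JOY
```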

2013 ◽  
Vol 859 ◽  
pp. 602-607
Author(s):  
Nan Xiang ◽  
Li Li Yang

Affective computing has been widely used in computer engineering and application fields. Emotion generation is an important research component of affective computing, and a great deal of work has gone into generating lifelike emotional reactions and behaviors. The OCC model is the most commonly used emotion model and can be integrated with other components to generate the emotion states of virtual humans. However …
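For readers unfamiliar with the OCC (Ortony, Clore, and Collins) model, the following is a minimal sketch of the kind of event appraisal it formalizes: an event's desirability relative to the agent's goals selects a well-being emotion and sets its intensity. The function name, the [-1, 1] scale, and the two-label output are our own assumptions, not from this paper.

```python
def appraise_event(desirability: float) -> tuple[str, float]:
    """OCC-style well-being appraisal (illustrative sketch).

    desirability: appraised desirability of an event with respect to
    the agent's goals, in [-1, 1]. Returns (emotion, intensity).
    """
    if desirability >= 0.0:
        return "joy", desirability
    return "distress", -desirability


print(appraise_event(0.8))   # ('joy', 0.8)
print(appraise_event(-0.3))  # ('distress', 0.3)
```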


Author(s):  
Nik Thompson ◽  
Tanya Jane McGill

This chapter discusses the domain of affective computing and reviews the area of affective tutoring systems: e-learning applications that possess the ability to detect and appropriately respond to the affective state of the learner. A significant proportion of human communication is non-verbal or implicit, and the communication of affective state provides valuable context and insights. Computers are for all intents and purposes blind to this form of communication, creating what has been described as an “affective gap.” Affective computing aims to eliminate this gap and to foster the development of a new generation of computer interfaces that emulate a more natural human-human interaction paradigm. The domain of learning is considered to be of particular note due to the complex interplay between emotions and learning. This is discussed in this chapter along with the need for new theories of learning that incorporate affect. Next, the more commonly applicable means for inferring affective state are identified and discussed. These can be broadly categorized into methods that involve the user’s input and methods that acquire the information independent of any user input. This latter category is of interest as these approaches have the potential for more natural and unobtrusive implementation, and it includes techniques such as analysis of vocal patterns, facial expressions, and physiological state. The chapter concludes with a review of prominent affective tutoring systems in current research and promotes future directions for e-learning that capitalize on the strengths of affective computing.


2021 ◽  
Author(s):  
Intissar Khalifa ◽  
Ridha Ejbali ◽  
Raimondo Schettini ◽  
Mourad Zaied

Affective computing is a key research topic in artificial intelligence with applications in psychology and machine systems. It consists of the estimation and measurement of human emotions. A person's body language is one of the most significant sources of information during a job interview, and it reflects a deep psychological state that is often missing from other data sources. In our work, we combine the two tasks of pose estimation and emotion classification for emotional body gesture recognition, proposing a deep multi-stage architecture that is able to deal with both tasks. Our deep pose decoding method detects and tracks the candidate's skeleton in a video using a combination of a depthwise convolutional network and a detection-based method for 2D pose reconstruction. Moreover, we propose a representation technique based on the superposition of skeletons that generates, for each video sequence, a single image synthesizing the different poses of the subject. We call this image the 'history pose image', and it is used as input to a convolutional neural network model based on the Visual Geometry Group (VGG) architecture. We demonstrate the effectiveness of our method in comparison with state-of-the-art methods on the standard Common Objects in Context (COCO) keypoint dataset and the Face and Body gesture video database.
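The following is a minimal sketch of the 'history pose image' idea: the 2D skeletons from all frames of a sequence are drawn onto a single canvas. The COCO-style limb list, the brightness-by-time encoding, and all names are our own assumptions, not the authors' implementation.

```python
import cv2  # OpenCV, used here only to rasterize skeleton limbs
import numpy as np

# Hypothetical limb list: pairs of keypoint indices in a 17-point
# COCO-style skeleton (arms, shoulders, and legs).
LIMBS = [(5, 7), (7, 9), (6, 8), (8, 10), (5, 6),
         (11, 13), (13, 15), (12, 14), (14, 16)]


def history_pose_image(pose_sequence: np.ndarray, size=(224, 224)) -> np.ndarray:
    """Superpose the skeletons of all frames into one grayscale image.

    pose_sequence: array of shape (T, 17, 2), keypoints normalized to
    [0, 1]. Later frames are drawn brighter so that the single image
    also hints at temporal order (an assumption on our part).
    """
    h, w = size
    canvas = np.zeros((h, w), dtype=np.uint8)
    t_total = len(pose_sequence)
    for t, pose in enumerate(pose_sequence):
        intensity = int(255 * (t + 1) / t_total)
        for i, j in LIMBS:
            p1 = (int(pose[i, 0] * w), int(pose[i, 1] * h))
            p2 = (int(pose[j, 0] * w), int(pose[j, 1] * h))
            cv2.line(canvas, p1, p2, intensity, thickness=2)
    return canvas  # replicated to 3 channels before feeding a VGG-style CNN


demo = history_pose_image(np.random.default_rng(0).random((16, 17, 2)))
print(demo.shape, demo.max())  # (224, 224) 255
```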


Sensors ◽  
2021 ◽  
Vol 21 (15) ◽  
pp. 5135
Author(s):  
Ngoc-Dau Mai ◽  
Boon-Giin Lee ◽  
Wan-Young Chung

In this research, we develop an affective computing method based on machine learning for emotion recognition using a wireless protocol and a custom-designed wearable electroencephalography (EEG) device. The system collects EEG signals using an eight-electrode placement on the scalp: two of these electrodes are placed in the frontal lobe, and the other six are placed in the temporal lobe. We performed experiments on eight subjects while they watched emotive videos. Six entropy measures were employed to extract suitable features from the EEG signals. Next, we evaluated our proposed models using three popular classifiers for emotion classification: a support vector machine (SVM), a multi-layer perceptron (MLP), and a one-dimensional convolutional neural network (1D-CNN); both subject-dependent and subject-independent strategies were used. Our experimental results showed that the highest average accuracies achieved in the subject-dependent and subject-independent cases were 85.81% and 78.52%, respectively; these accuracies were achieved using a combination of the sample entropy measure and the 1D-CNN. Moreover, through electrode selection, our study identifies the T8 position (above the right ear) in the temporal lobe as the most critical channel among the proposed measurement positions for emotion classification. Our results demonstrate the feasibility and efficiency of the proposed EEG-based affective computing method for emotion recognition in real-world applications.
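As an illustration of the feature-extraction step, the following is a minimal sketch of sample entropy computed per channel and fed to one of the three classifiers; the m and r defaults and the synthetic data are assumptions, and the SVM stands in for the paper's best-performing 1D-CNN for brevity.

```python
import numpy as np
from sklearn.svm import SVC


def sample_entropy(x: np.ndarray, m: int = 2, r: float | None = None) -> float:
    """Naive O(n^2) sample entropy of a 1-D signal.

    r defaults to 0.2 * std(x), a common choice (an assumption here).
    """
    x = np.asarray(x, dtype=float)
    if r is None:
        r = 0.2 * x.std()
    n = len(x)

    def matches(length: int) -> int:
        # Count template pairs whose Chebyshev distance is within r.
        templates = np.array([x[i:i + length] for i in range(n - length)])
        count = 0
        for i in range(len(templates) - 1):
            dist = np.max(np.abs(templates[i + 1:] - templates[i]), axis=1)
            count += int(np.sum(dist <= r))
        return count

    b, a = matches(m), matches(m + 1)
    return -np.log(a / b) if a > 0 and b > 0 else float("inf")


def epoch_features(epoch: np.ndarray) -> np.ndarray:
    """One sample-entropy value per channel; epoch: (channels, samples)."""
    return np.array([sample_entropy(ch) for ch in epoch])


# Synthetic stand-in for real EEG epochs: 20 epochs, 8 channels, 512 samples.
rng = np.random.default_rng(0)
X = np.stack([epoch_features(rng.standard_normal((8, 512))) for _ in range(20)])
y = rng.integers(0, 2, size=20)  # placeholder binary emotion labels
clf = SVC(kernel="rbf").fit(X, y)
print(clf.score(X, y))
```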


Sensors ◽  
2021 ◽  
Vol 21 (19) ◽  
pp. 6438
Author(s):  
Chiara Filippini ◽  
David Perpetuini ◽  
Daniela Cardone ◽  
Arcangelo Merla

An intriguing challenge in the human–robot interaction field is the prospect of endowing robots with emotional intelligence to make the interaction more genuine, intuitive, and natural. A crucial aspect in achieving this goal is the robot's capability to infer and interpret human emotions. Thanks to its design and open programming platform, the NAO humanoid robot is one of the most widely used agents for human interaction. As in person-to-person communication, facial expressions are the privileged channel for recognizing the interlocutor's emotional expressions. Although NAO is equipped with a facial expression recognition module, specific use cases may require additional features and affective computing capabilities that are not currently available. This study proposes a highly accurate convolutional-neural-network-based facial expression recognition model that further enhances the NAO robot's awareness of human facial expressions and provides the robot with the capability to detect an interlocutor's arousal level. Indeed, the model tested during human–robot interactions was 91% and 90% accurate in recognizing happy and sad facial expressions, respectively; 75% accurate in recognizing surprised and scared expressions; and less accurate in recognizing neutral and angry expressions. Finally, the model was successfully integrated into the NAO SDK, allowing for high-performing facial expression classification with an inference time of 0.34 ± 0.04 s.
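The following is a minimal sketch of the inference side of such a pipeline, assuming a trained CNN that takes 48x48 grayscale face crops, saved as 'fer_cnn.h5' (a hypothetical file and label order), and a generic OpenCV face detector; on the robot itself, frames would come from NAO's camera through its SDK rather than from OpenCV.

```python
import cv2
import numpy as np
import tensorflow as tf

# Hypothetical artifacts: the trained model file and the label order
# are assumptions, not the authors' released assets.
LABELS = ["angry", "scared", "happy", "sad", "surprised", "neutral"]
model = tf.keras.models.load_model("fer_cnn.h5")
face_detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")


def classify_expression(frame_bgr: np.ndarray):
    """Detect the largest face in a frame and classify its expression."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    faces = face_detector.detectMultiScale(gray, scaleFactor=1.3, minNeighbors=5)
    if len(faces) == 0:
        return None
    x, y, w, h = max(faces, key=lambda f: f[2] * f[3])  # largest detection
    crop = cv2.resize(gray[y:y + h, x:x + w], (48, 48)).astype(np.float32) / 255.0
    probs = model.predict(crop[None, :, :, None], verbose=0)[0]
    return LABELS[int(np.argmax(probs))], float(np.max(probs))
```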


2021 ◽  
Vol 335 ◽  
pp. 04001
Author(s):  
Didar Dadebayev ◽  
Goh Wei Wei ◽  
Tan Ee Xion

Emotion recognition, as a branch of affective computing, has attracted great attention in recent decades, as it can enable more natural brain-computer interface systems. Electroencephalography (EEG) has proven to be an effective modality for emotion recognition, with which user affective states can be tracked and recorded, especially for primitive emotional events such as arousal and valence. Although brain signals have been shown to correlate with emotional states, the effectiveness of proposed models remains somewhat limited. The challenge is improving accuracy, and appropriate extraction of valuable features may be key to success. This study proposes a framework that incorporates fractal dimension features and a recursive feature elimination approach to enhance the accuracy of EEG-based emotion recognition. Fractal dimension and spectrum-based features will be extracted and used for more accurate emotional state recognition. Recursive Feature Elimination will be used as the feature selection method, whereas the classification of emotions will be performed by the Support Vector Machine (SVM) algorithm. The proposed framework will be tested on a widely used public database, and the results are expected to demonstrate higher accuracy and robustness compared to other studies. The primary contribution of this study is the improvement of EEG-based emotion classification accuracy. A potential limitation is how general the results can be, as different EEG datasets might yield different results for the same framework. Therefore, experimenting with different EEG datasets and testing alternative feature selection schemes would be very interesting future work.
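To make the proposed feature and selection steps concrete, the following is a minimal sketch of a Higuchi fractal dimension feature combined with scikit-learn's Recursive Feature Elimination around a linear SVM; the k_max default and the synthetic data are assumptions.

```python
import numpy as np
from sklearn.feature_selection import RFE
from sklearn.svm import SVC


def higuchi_fd(x: np.ndarray, k_max: int = 10) -> float:
    """Higuchi fractal dimension of a 1-D signal."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    mean_lengths = []
    for k in range(1, k_max + 1):
        lengths = []
        for m in range(k):
            idx = np.arange(m, n, k)
            if len(idx) < 2:
                continue
            # Normalized curve length at scale k, offset m.
            lengths.append(np.sum(np.abs(np.diff(x[idx]))) * (n - 1)
                           / ((len(idx) - 1) * k * k))
        mean_lengths.append(np.mean(lengths))
    # The fractal dimension is the slope of log L(k) vs. log(1/k).
    ks = np.arange(1, k_max + 1)
    slope, _ = np.polyfit(np.log(1.0 / ks), np.log(mean_lengths), 1)
    return float(slope)


# Synthetic stand-in for EEG trials: 40 trials, 4 channels, 256 samples;
# one fractal-dimension feature per channel (spectral features omitted).
rng = np.random.default_rng(1)
trials = rng.standard_normal((40, 4, 256))
X = np.array([[higuchi_fd(ch) for ch in trial] for trial in trials])
y = rng.integers(0, 2, size=40)  # placeholder arousal/valence labels

# A linear kernel exposes coef_, which RFE needs for ranking features.
selector = RFE(SVC(kernel="linear"), n_features_to_select=2).fit(X, y)
print(selector.support_)  # boolean mask of the retained features
```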


Author(s):  
Sheldon Schiffer

Video game non-player characters (NPCs) are a type of agent that often inherits emotion models and functions from ancestor virtual agents. Few emotion models have been designed explicitly for NPCs, and as a result NPC expressivity does not approach the possibilities available to live-action performing actors or hand-crafted animated characters. With distinct perspectives on emotion generation from multiple fields within narratology and computational cognitive psychology, the architecture of NPC emotion systems can reflect the theories and practices of performing artists. This chapter argues that deploying virtual agent emotion models on NPCs can constrain the performative aesthetic properties of NPCs. An actor-centric emotion model can accommodate actors' creative processes and may reveal which features emotion model architectures should have to be most useful for contemporary game production of photorealistic NPCs that achieve cinematic acting styles and robust narrative design.


Author(s):  
Ephraim Nissan

Multimedia tools exist for visualizing argumentation. The most advanced aspects of computational argument modeling lie in the models and tools upstream of the visualization tools; the latter are an interface. Computer models of argumentation come in three categories: logic-based (highly theoretical), probabilistic, and pragmatic ad hoc treatments. Theoretical formalisms of argumentation were developed by logicists within artificial intelligence (and were implemented and often can be reused outside the original applications), or are rooted in philosophers' work. We cite some such work, but focus on tools that support argumentation visually. Argumentation arises in a wide spectrum of everyday situations, including professional ones. Computational models of argumentation have found application in tutoring systems, tools for marshalling legal evidence, and models of multiagent communication. Intelligent systems and other computer tools potentially stand to benefit as well. Multimedia are applied to argumentation (in visualization tools) and are also a promising field of application (in tutoring systems). The design of networks could also benefit, if communication is modeled using multiagent technology.


2015 ◽  
Vol 6 (2) ◽  
pp. 35-56 ◽  
Author(s):  
Shikha Jain ◽  
Krishna Asawa

Extensive studies have established the existence of a close interaction between emotion and cognition, with a remarkable influence of emotion on all sorts of cognitive processes. Consequently, technologies that emulate intelligent human behavior cannot be considered completely intelligent without incorporating the influence of an emotional component in their rational reasoning processes. Recently, several researchers have started working in the field of emotion modeling to cater to the needs of interactive computer applications that demand human-like interaction with the computer. However, in the absence of structured guidelines, the most challenging task for a researcher is to understand and select the most appropriate definitions, theories, and processes governing human psychology when designing the intended model. The objective of the present article is to review the background and the studies necessary for designing an emotion model for a machine so that it can generate appropriate synthetic emotions while interacting with external environmental factors.


Author(s):  
William A. Janvier ◽  
Claude Ghaoui

HCI-related subjects need to be considered to make e-learning more effective; examples of such subjects are psychology, sociology, cognitive science, ergonomics, computer science, software engineering, users, design, usability evaluation, learning styles, teaching styles, communication preference, personality types, and neurolinguistic programming language patterns. This article discusses the way some components of HI can be introduced to increase the effectiveness of e-learning by using an intuitive interactive e-learning tool that incorporates communication preference (CP), specific learning styles (LS), neurolinguistic programming (NLP) language patterns, and subliminal text messaging. The article starts by looking at the current state of distance learning tools (DLTs), intelligent tutoring systems (ITS), and "the way we learn". It then discusses HI and shows how this was implemented to enhance the learning experience.

