Creating Synthetic Emotions through Technological and Robotic Advancements
Latest Publications

Published by IGI Global · ISBN 9781466615953, 9781466615960 · 9 documents · H-index 1

Author(s):  
Neha Khetrapal

This paper discusses the implications of the embodied approach for understanding emotional processing in autism and the consequent application of this approach to robotics. In this pursuit, the author contrasts the embodied approach with the traditional amodal approach in cognitive science and highlights the gaps in understanding. Important issues concerning intentionality, intelligence, and autonomy are also raised. The paper advocates a better integration of disciplines to advance the understanding of emotional processing in autism and to deploy cognitive robotics for the purpose of developing the embodied approach further.


Author(s):  
Joanna J. Bryson
Emmanuel Tanguy

Human intelligence requires decades of full-time training before it can be reliably utilised in modern economies. In contrast, AI agents must be made reliable yet interesting in relatively short order. Realistic emotion representations are one way to ensure that even relatively simple specifications of agent behaviour will be expressed with engaging variation, and that social and temporal contexts can be tracked and responded to appropriately. We describe a representation system for maintaining an interacting set of durative states to replicate emotional control. Our model, the Dynamic Emotion Representation (DER), integrates emotional responses and keeps track of emotion intensities changing over time. The developer can specify an interacting network of emotional states with appropriate onsets, sustains and decays. The levels of these states can be used as input for action selection, including emotional expression. We present both a general representational framework and a specific instance of a DER network constructed for a virtual character. The character’s DER uses three types of emotional state, classified by duration timescale, in keeping with current emotion theory. We demonstrate the system with a virtual actor, and also demonstrate how even a simplified version of this representation can improve goal arbitration in autonomous agents.
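The abstract describes durative emotional states with onsets, sustains and decays whose levels feed action selection. Purely as an illustration of that idea, and not as code from the DER itself, the following minimal sketch (with invented names such as EmotionState and DERNetwork, and assumed exponential decay dynamics and arbitrary parameter values) shows one way such states could be maintained and read out:

```python
import math

class EmotionState:
    """One durative state in a DER-style network (illustrative sketch only).

    An impulse sets a target intensity; the state ramps toward it during the
    onset phase, holds it for the sustain window, then decays exponentially.
    """
    def __init__(self, name, onset, sustain, decay):
        self.name, self.onset, self.sustain, self.decay = name, onset, sustain, decay
        self.intensity = 0.0
        self._target = 0.0
        self._since_impulse = float("inf")

    def impulse(self, strength):
        self._target = min(1.0, self.intensity + strength)
        self._since_impulse = 0.0

    def step(self, dt):
        self._since_impulse += dt
        if self._since_impulse <= self.onset:
            # ramp toward the target during the onset phase
            self.intensity += (self._target - self.intensity) * min(1.0, dt / self.onset)
        elif self._since_impulse > self.onset + self.sustain:
            # exponential decay once the sustain window has passed
            self.intensity *= math.exp(-dt / self.decay)

class DERNetwork:
    """Holds several states; their levels can feed action selection."""
    def __init__(self, states):
        self.states = {s.name: s for s in states}

    def stimulate(self, name, strength):
        self.states[name].impulse(strength)

    def step(self, dt):
        for s in self.states.values():
            s.step(dt)

    def levels(self):
        return {n: round(s.intensity, 3) for n, s in self.states.items()}

# Three states on different duration timescales, echoing the abstract's
# classification by timescale (all values are assumptions).
net = DERNetwork([EmotionState("startle", onset=1.0, sustain=2.0, decay=2.0),
                  EmotionState("mood", onset=2.0, sustain=60.0, decay=300.0),
                  EmotionState("temperament", onset=10.0, sustain=600.0, decay=3600.0)])
net.stimulate("startle", 0.8)
for _ in range(10):
    net.step(0.5)
print(net.levels())   # an action-selection layer could read these levels
```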


Author(s):  
Seiji Inokuchi

This paper gives a historical review of Kansei-based media technologies in Japan. Kansei is a Japanese word, the meaning of which covers sensibility, sentiment, emotion, and feeling. Kansei research started in the field of music, because music is the most acceptable of the arts to computer science. In the 1990s, the applications of Kansei machine vision became widespread in many industrial fields, including electronic production, automobile manufacture, steel-making, the chemical industry, the food industry, and office appliances, among others. Kansei technologies are also applied to human interface systems, including the field of brain science, for human communication.


Author(s):  
Daniel S. Levine
Leonid I. Perlovsky

Theories of cognitive processes such as decision making and creative problem solving long neglected the contributions of emotion or affect, favoring analyses based on the use of deliberative rules to optimize performance. Since the 1990s, emotion has increasingly been incorporated into theories of these cognitive processes. Some theorists have in fact posited a “dual-systems approach” to understanding decision making and high-level cognition: one system is fast, emotional, and intuitive, while the other is slow, rational, and deliberative. However, our understanding of the relevant brain regions indicates that emotional and rational processes are deeply intertwined, with each exerting major influences on the functioning of the other. Also presented in this paper are neural network modeling principles that may capture the interrelationships of emotion and cognition. The authors also review evidence that humans, and possibly other mammals, possess a “knowledge instinct,” a drive to make sense of the environment that typically incorporates a strong affective component in the form of aesthetic fulfillment or dissatisfaction.


Author(s):  
Hatice Gunes
Maja Pantic

Recognition and analysis of human emotions have attracted a lot of interest in the past two decades and have been researched extensively in neuroscience, psychology, cognitive sciences, and computer sciences. Most of the past research in machine analysis of human emotion has focused on recognition of prototypic expressions of six basic emotions based on data that has been posed on demand and acquired in laboratory settings. More recently, there has been a shift toward recognition of affective displays recorded in naturalistic settings as driven by real world applications. This shift in affective computing research is aimed toward subtle, continuous, and context-specific interpretations of affective displays recorded in real-world settings and toward combining multiple modalities for analysis and recognition of human emotion. Accordingly, this paper explores recent advances in dimensional and continuous affect modelling, sensing, and automatic recognition from visual, audio, tactile, and brain-wave modalities.
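The paper surveys dimensional and continuous affect recognition across modalities. As a toy illustration of what that involves, and not a method taken from the survey, the sketch below fuses hypothetical per-modality valence-arousal estimates frame by frame; the function names, fusion weights, and example values are all assumptions:

```python
from typing import Dict, Tuple

# Hypothetical per-modality predictions: each modality yields a continuous
# (valence, arousal) estimate per frame, both in [-1, 1].
Frame = Dict[str, Tuple[float, float]]

def fuse_frame(predictions: Frame, weights: Dict[str, float]) -> Tuple[float, float]:
    """Weighted late fusion of dimensional affect estimates (illustrative only)."""
    total = sum(weights[m] for m in predictions)
    valence = sum(weights[m] * v for m, (v, _) in predictions.items()) / total
    arousal = sum(weights[m] * a for m, (_, a) in predictions.items()) / total
    return valence, arousal

# Example stream: the tactile channel is missing in the second frame, so
# fusion falls back to the modalities that are present.
stream = [
    {"visual": (0.4, 0.1), "audio": (0.2, 0.5), "tactile": (0.3, 0.2)},
    {"visual": (0.5, 0.2), "audio": (0.1, 0.6)},
]
weights = {"visual": 0.5, "audio": 0.3, "tactile": 0.2}

for frame in stream:
    print(fuse_frame(frame, weights))
```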


Author(s):  
Karla Parussel

It is hypothesized here that two classes of emotions exist: driving and satisfying emotions. Driving emotions significantly increase the internal activity of the brain and result in the agent seeking to minimize its emotional state by performing actions that it would not otherwise do. Satisfying emotions decrease internal activity and encourage the agent to continue its current behavior to maintain its emotional state. It is theorized that neuromodulators act as simple yet high impact signals to either agitate or calm specific neural networks. This results in what we can define as either driving or satisfying emotions. The plausibility of this hypothesis is tested in this paper using feed-forward networks of leaky integrate-and-fire neurons.
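As a rough illustration of the kind of test bed described, and not the author's implementation, the sketch below shows a leaky integrate-and-fire neuron whose input is scaled by a neuromodulatory gain, so that a high gain plays the role of a "driving" signal (agitating the unit) and a low gain a "satisfying" one (calming it); all parameter values are assumptions:

```python
import random

class LIFNeuron:
    """Leaky integrate-and-fire neuron with a neuromodulatory gain (illustrative).

    The gain multiplies incoming current: gain > 1 stands in for a 'driving'
    neuromodulator (more spikes), gain < 1 for a 'satisfying' one (fewer spikes).
    """
    def __init__(self, tau=20.0, threshold=1.0, v_reset=0.0):
        self.tau, self.threshold, self.v_reset = tau, threshold, v_reset
        self.v = 0.0

    def step(self, input_current, gain=1.0, dt=1.0):
        # leaky integration of the (modulated) input current
        self.v += dt * (-self.v / self.tau + gain * input_current)
        if self.v >= self.threshold:
            self.v = self.v_reset
            return 1   # spike
        return 0

def spike_count(gain, steps=1000, seed=0):
    rng = random.Random(seed)
    neuron = LIFNeuron()
    return sum(neuron.step(rng.uniform(0.0, 0.1), gain) for _ in range(steps))

# Same input statistics, different neuromodulatory gain:
print("driving (gain=2.0):   ", spike_count(2.0))
print("baseline (gain=1.0):  ", spike_count(1.0))
print("satisfying (gain=0.5):", spike_count(0.5))
```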


Author(s):  
Tatsuya Nomura
Kazuma Saeki

A psychological experiment was conducted to investigate the effects of polite behaviors expressed by robots in Japan, using a small humanoid robot that performed four types of behavior accompanied by voiced task instructions. The results suggested that subjects who experienced the robot’s “deep bowing” motion perceived it as more extroverted than those who experienced its “just standing” motion, and that subjects who experienced the “lying” motion felt the robot was less polite than those who experienced the other motions. Female subjects who perceived the robot as more extroverted responded faster to its task instruction, although no such trend was found among the male subjects. However, the male subjects who did not perform the task felt the robot was less polite than both the male subjects who performed the task and the female subjects who did not perform the task.


Author(s):  
Joost Broekens

Affective computing has proven to be a viable field of research comprising a large number of multidisciplinary researchers, resulting in work that is widely published. The majority of this work consists of emotion recognition technology, computational modeling of the causal factors of emotion, and emotion expression in virtual characters and robots. A smaller part is concerned with modeling the effects of emotion on cognition and behavior, formal modeling of cognitive appraisal theory, and models of emergent emotions. Part of the motivation for affective computing as a field is to better understand emotion through computational modeling. In psychology, a critical and neglected aspect of having emotions is the experience of emotion: what the content of an emotional episode looks like, how that content changes over time, and when we call the episode emotional. Few modeling efforts in affective computing have these topics as a primary focus. The launch of a journal on synthetic emotions should motivate research initiatives in this direction, and such research should have a measurable impact on emotion research in psychology. In this paper, I show that a good way to do so is to investigate the psychological core of what an emotion is: an experience. I present ideas on how computational modeling of emotion can help to better understand the experience of emotion, and provide evidence that several computational models of emotion already address the issue.


Author(s):  
Jordi Vallverdú
Huma Shah
David Casacuberta

The Chatterbox Challenge is an annual web-based contest for artificial conversational systems (ACE). The 2010 instantiation was the tenth consecutive contest, held between March and June in the 60th year after the publication of Alan Turing’s influential disquisition ‘Computing Machinery and Intelligence’. Loosely based on Turing’s viva voce interrogator-hidden witness imitation game, a thought experiment to ascertain a machine’s capacity to respond satisfactorily to unrestricted questions, the contest provides a platform for technology comparison and evaluation. This paper provides an insight into the emotion content of the entries since the 2005 Chatterbox Challenge. The authors find that these synthetic textual systems, none of which is backed by academic or industry funding, have on the whole advanced little beyond Eliza in expressing emotion in dialogue, more than half a century after Weizenbaum’s natural language understanding experiment. This may be a failure on the part of the academic AI community, which has ignored the Turing test as an engineering challenge.

