“Affective Computing” Is Not an Oxymoron

Author(s):  
Rosalind W. Picard ◽  
Adolfo Plasencia

In this dialogue, the scientist Rosalind W. Picard of the MIT Media Lab begins by explaining why the expression "affective computing" is not an oxymoron, and describes how her laboratory is trying to bridge the gap between information systems and human emotions. She details how they are attempting to give computers and digital machines better abilities to "see" the emotions of their users, and outlines what a machine would have to be like to pass a Turing test for emotions. Picard goes on to describe why emotion is part of all communication, even when the communication itself does not explicitly contain emotion, arguing that consciousness also involves feelings that cannot be expressed, and that emotional experience is an essential part of the normal functioning of the conscious system. Later she outlines her research in affective computing, in which her group measured signals using a sensor that responds to certain human emotions or feelings, and explains how technology can become a kind of 'affective prosthesis' to help disabled people, and people with difficulties, understand and handle emotions.

2013 ◽  
Vol 3 (3) ◽  
pp. 62-75
Author(s):  
Hao-Chiang Koong Lin ◽  
Cong Jie Sun ◽  
Bei Ni Su ◽  
Zu An Lin

All kinds of arts can be represented in digital forms, and one of them is sound art, including orally transmitted ballads, classical music, religious music, popular music, and emerging computer music. Recently, affective computing has drawn a lot of attention in the academic field; it has two parts: physiology and psychology. Through a variety of sensing devices, the authors can capture behaviors that express feelings and emotions, so they may not only identify but also understand human emotions. This work focuses on exploring and producing a MAX/MSP computer program that generates emotional music automatically. It can also recognize the emotion identified when users play MIDI instruments and create visual effects. The authors hope to achieve two major goals: (1) producing a performance of art combining dynamic visuals and auditory tones; (2) making computers understand human emotions and interact through music by affective computing. The results of this study are as follows: (1) the authors design a corresponding mechanism between musical tone and human emotion recognition; (2) the authors develop a combination of affective computing and an automatic music generator; (3) the authors design a music system that can be used with MIDI instruments and incorporated with other music effects to add musicality; (4) the authors assess and complete the emotion-discrimination mechanism so that mood music can be fed back accurately. The authors make computers simulate (even have) human emotion and obtain a relevant basis for more accurate sound feedback. The authors use the System Usability Scale to analyze and discuss the usability of the system. The average score of each item is clearly higher than the midpoint score (four points) for the overall response and the musical performance of the "auto mood music generator".
Average scores exceed five points in each part of the Interaction and Satisfaction Scale. Subjects are willing to accept this interactive work, which shows that the work is usable and has potential for further development.
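The emotion-to-music correspondence described above can be sketched in code. The following is an illustrative mapping only, written in Python for readability; the actual system is a MAX/MSP patch, and its specific mapping rules are not given in the abstract. The valence/arousal inputs, ranges, and heuristics below are assumptions.

```python
# Hypothetical sketch of an emotion-to-music parameter mapping.
# The abstract's MAX/MSP system is not described in this detail;
# the heuristics here (mode from valence, tempo/dynamics from
# arousal) are common conventions, assumed for illustration.

def emotion_to_music(valence, arousal):
    """Map an emotion in valence/arousal space to simple musical parameters.

    valence: negative = unpleasant, positive = pleasant
    arousal: 0.0 = calm ... 1.0 = excited
    """
    # Major mode for positive valence, minor for negative.
    mode = "major" if valence >= 0 else "minor"
    # Faster tempo for higher arousal: 60-180 BPM.
    clamped = max(0.0, min(1.0, arousal))
    tempo_bpm = 60 + int(120 * clamped)
    # Louder dynamics for higher arousal: MIDI velocity 40-120.
    velocity = 40 + int(80 * clamped)
    return {"mode": mode, "tempo_bpm": tempo_bpm, "velocity": velocity}

print(emotion_to_music(0.8, 0.9))   # happy/excited: major, fast, loud
print(emotion_to_music(-0.6, 0.2))  # sad/calm: minor, slow, soft
```

A real generator would feed these parameters into note selection and synthesis; the point of the sketch is only the shape of the emotion-to-parameter mapping.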


2007 ◽  
Vol 40 (2) ◽  
pp. 543-544
Author(s):  
Ann Ward

Political Emotions: Aristotle and the Symphony of Reason and Emotion, Marlene K. Sokolon, DeKalb: Northern Illinois University Press, 2006, pp. 217. Marlene K. Sokolon has provided an intellectually stimulating and highly original work on Aristotle's understanding of the emotions, mainly as presented in his treatise the Art of Rhetoric. The central thesis of Sokolon's book manifests itself in her analysis of the emotion of anger. According to Sokolon, for Aristotle anger is the paradigmatic human emotion, defined as the desire for revenge for a dishonourable and undeserved public insult against oneself or those one loves. Of this desire for revenge, Sokolon argues that “for Aristotle, unique human anger is not ‘at’ something, but more properly ‘with’ what some other person did or intends to do. Anger and the other political emotions are certain kinds of judgments or perceptions about sociopolitical circumstances. Anger judges specific kinds of events with an acknowledged political, or what we now call ‘cultural,’ meaning” (p. 55). Thus, Sokolon argues that for Aristotle the emotional experience of anger occurs in social and political contexts where there are evaluations of worth in situations involving relations of power. But if anger is the paradigmatic human emotion, this means that anger is not simply representative of various political emotions, but illustrates that human emotion as such is an essentially political phenomenon. Sokolon's thesis, therefore, is that for Aristotle, “man is by nature a political animal” not simply because he possesses reason, the apparent claim of the Politics, but also because he experiences emotions.


2013 ◽  
Vol 465-466 ◽  
pp. 682-687
Author(s):  
Fairul Azni Jafar ◽  
Nurhidayu Abdullah ◽  
Noraidah Blar ◽  
M.N. Muhammad ◽  
Anuar Mohamed Kassim

For humans and robots to interact in an effective and intuitive manner, robots must obtain information about the human emotional state in response to the robot's actions. This matters because robots are now widespread in the manufacturing industry and play a big role in the emergence of automated manufacturing technology. Consequently, we believe it is necessary to investigate how humans feel about this situation; if robots can understand those human emotions, collaboration with humans can be much better. To investigate these emotions, we applied a kansei survey method based on kansei engineering technology. We asked a number of participants to take part in our experiment, sharing the same environment in which a robot is working on some tasks. The participants answer the survey questions based on how they feel about working together with a moving robot. The overall goal is, in fact, to predict which area in the vicinity of the robot the human is heading to, especially in terms of human feelings, so that by understanding how humans feel about working with robots, we can perhaps create a better working environment. This paper describes our findings about how humans feel when collaborating with robot(s).


Author(s):  
Tom Adi

A new theory of emotions is derived from the semantics of the language of emotions. The sound structures of 36 Old Arabic word roots that express specific emotions are converted into abstract models. By substitution from two tables, the abstract models are converted into concrete theories about the nature of the specific emotions that are likely to be validated. Theories confirmed by the author's own emotional experience (self-reports), and by previously corroborated theories, are considered corroborated. These theories about specific emotions are woven together into an integrated theory of all emotions. The theory models emotions and emotional mechanisms, dimensions, and polarities in ways amenable to affective computing. The findings are supported by clinical psychology. Old Arabic is chosen because its words, sounds, and meanings are consistent and have not changed for at least 1,400 years. The theory can be expanded by incorporating additional emotional word roots from Arabic and other alphabetical languages.


Author(s):  
Annamaria Curatola ◽  
Felice Corona ◽  
Carmelo Francesco Meduri ◽  
Carla Cozzarelli

This experience of psycho-emotional education is part of more extensive international research based on the hypothesis that "emotional experience", if inserted into the daily conduct of the school curriculum, especially in nursery school, represents an excellent training opportunity, since it fosters the learners' best perception of the self, thus strengthening their expressive and communicative attitudes. On the basis of Social and Emotional Learning (SEL) principles, and inspired by a previous experience carried out by the Department of Human Science for training, this experimental project has been put into practice by some nursery schools in RC, providing very interesting data in confirmation of the hypothesis. A study has also been developed on affective computing and cognitive computing, pursuing a new perspective that goes beyond the traditional vision of what is defined as artificial intelligence and analyzes intelligence and often-neglected aspects of perception, with a methodological approach that considers emotional processes to be as important as cognitive ones.


Author(s):  
N Korsten ◽  
JG Taylor

In order to achieve ‘affective computing’ it is necessary to know what is being computed. That is, in order to compute with what would pass for human emotions, it is necessary to have a computational basis for the emotions themselves. What does it mean, quantitatively, if a human is sad or angry? How is this affective state computed in their brain? It is this question, on the very core of the computational nature of human emotions, that is addressed in this chapter. A proposal will be made as to this computational basis, grounded in the well-established approach to emotions as arising from a given human being's appraisal of a situation or event.


Author(s):  
Dylan Evans

The most recent discipline to have entered the debate on emotion is artificial intelligence. Since the early 1990s, computer scientists have become increasingly interested in building systems and devices that can recognize and simulate human emotions, and workers in robotics are already making some progress in this area. ‘The computer that cried’ discusses recent developments in affective computing and speculates on where it will lead. Will we succeed in building robots that have feelings just like we do? What might be the consequences of such technology? We may find that building artificial life forms with emotions—either virtual agents in a simulated world or real physical robots—helps us to understand more about our own emotions.


Author(s):  
Ting-Mei Li ◽  
Han-Chieh Chao ◽  
Jianming Zhang

Brain wave emotion analysis is currently the most novel method of emotion analysis. With the progress of brain science, it has been found that human emotions are produced by the brain, and as a result many brain-wave emotion applications have appeared. However, brain wave emotion analysis is made more difficult by the complexity of human emotion. Many researchers have used different classification methods and proposed approaches for classifying brain wave emotions. In this paper, we survey the existing methods of brain wave emotion classification and describe the various classification methods.
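As a minimal illustration of the kind of pipeline such classification methods share (a sketch under assumptions, not the method of any specific paper surveyed): extract band-power features from an EEG epoch, then classify with a simple learner. The band choices and the nearest-centroid classifier here are illustrative placeholders for the varied features and classifiers the survey covers.

```python
# Illustrative EEG emotion-classification pipeline: band-power
# features + nearest-centroid classifier. The surveyed papers use
# many different features and classifiers; this only shows the shape.
import numpy as np

def band_power(signal, fs, low, high):
    """Average spectral power of `signal` within [low, high) Hz."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    power = np.abs(np.fft.rfft(signal)) ** 2
    mask = (freqs >= low) & (freqs < high)
    return power[mask].mean()

def features(signal, fs=128):
    """Classic EEG bands often used as emotion features."""
    bands = [(4, 8), (8, 13), (13, 30), (30, 45)]  # theta, alpha, beta, gamma
    return [band_power(signal, fs, lo, hi) for lo, hi in bands]

def train(epochs, labels, fs=128):
    """One feature centroid per emotion label."""
    feats = np.array([features(e, fs) for e in epochs])
    labels = np.array(labels)
    return {lab: feats[labels == lab].mean(axis=0) for lab in set(labels)}

def predict(model, epoch, fs=128):
    """Assign the label whose centroid is nearest in feature space."""
    f = np.array(features(epoch, fs))
    return min(model, key=lambda lab: np.linalg.norm(f - model[lab]))
```

For example, training on alpha-dominant (calm) versus beta-dominant (excited) synthetic epochs lets `predict` separate new epochs of either kind; real systems replace the centroid step with SVMs, deep networks, and the other classifiers the survey describes.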


2017 ◽  
Vol 21 (02) ◽  
pp. 453-474
Author(s):  
Subhash Kumar

Media Lab Asia (MLA) was established in 2001 as a collaboration between the MIT Media Lab and the Department of Electronics & Information Technology (DEITY), Government of India. It works on a paradigm of collaborative research, from lab to land, developing and operationalizing technologies that bridge the gap by educating, equipping, and empowering the common man. MLA works in four sectors: livelihood, healthcare, empowerment of the disabled, and education. MLA has been successful in collaborating with Research & Development (R&D) organizations, government institutions, Non-Governmental Organizations (NGOs), academia, and industry; its long list of collaborators includes 59 partner agencies. MLA's role, however, has been confined to providing funding to the partner agencies, which develop, test, and launch the products and projects. The projects reach major states in India. eGalla, Chic, mDhanwanthari, and Sehat-Saathi are some of the projects developed by MLA and its collaborators. eGalla is retail management software, and Chic was developed to simplify traditional craft for livelihood generation. mDhanwanthari and Sehat-Saathi provide healthcare to rural communities. MLA has developed 75 projects since its inception. The projects have reached their beneficiaries but lack scale and commercialization. The parameters of success for MLA include the potential for commercializing the products or projects and a self-sustaining mechanism for their impact. As a Section 25 company it has few obligations for commercial success; a self-sustaining mechanism, however, was critical. DEITY, the parent organization, has sought external support to develop a new business model to overcome this limitation.


2019 ◽  
Vol 9 (1) ◽  
pp. 308-317 ◽  
Author(s):  
Franziska Hirt ◽  
Egon Werlen ◽  
Ivan Moser ◽  
Per Bergamin

Measuring emotions non-intrusively via affective computing provides a promising source of information for adaptive learning and intelligent tutoring systems. Using non-intrusive, simultaneous measures of emotions, such systems could steadily adapt to students' emotional states. One drawback, however, is the lack of evidence on how such modern measures of emotions relate to traditional self-reports. The aim of this study was to compare a prominent area of affective computing, facial emotion recognition, to students' self-reports of interest, boredom, and valence. We analyzed different types of aggregation of the simultaneous facial emotion recognition estimates and compared them to self-reports collected after reading a text. Analyses of 103 students revealed no relationship between the aggregated facial emotion recognition estimates of the software FaceReader and the self-reports. Irrespective of the type of aggregation, neither the epistemic emotions (i.e., boredom and interest) nor the estimates of valence predicted the respective self-report measure. We conclude that assumptions about the subjective experience of emotions cannot necessarily be transferred to other emotional components, such as those estimated by affective computing. We advise waiting for more comprehensive evidence on the predictive validity of facial emotion recognition for learning before relying on it in educational practice.
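The general shape of such an analysis can be sketched as follows: collapse each student's stream of per-frame emotion estimates to a single number, then relate the per-student values to self-reports. This is an assumed, generic sketch, not the study's actual pipeline; FaceReader's real output format and the study's statistical models differ, and the aggregation options shown are illustrative.

```python
# Sketch of relating aggregated per-frame emotion estimates to
# self-reports (illustrative; not the study's actual analysis).
import statistics

def aggregate(frame_estimates, how="mean"):
    """Collapse a sequence of per-frame emotion estimates to one value."""
    if how == "mean":
        return statistics.mean(frame_estimates)
    if how == "max":
        return max(frame_estimates)   # peak intensity
    if how == "last":
        return frame_estimates[-1]    # end-of-reading state
    raise ValueError(f"unknown aggregation: {how}")

def pearson_r(xs, ys):
    """Pearson correlation between aggregated estimates and self-reports."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Usage: one trace of per-frame estimates per student, one self-report each.
traces = [[0.1, 0.2, 0.4], [0.6, 0.5, 0.7], [0.3, 0.3, 0.2]]
reports = [2.0, 5.0, 3.0]  # hypothetical self-report scale values
r = pearson_r([aggregate(tr, "mean") for tr in traces], reports)
```

The study's null result corresponds to such correlations staying near zero regardless of which `how` is chosen.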

