Lifelike Pedagogical Agents and Affective Computing: An Exploratory Synthesis

Author(s):
Clark Elliott ◽
Jeff Rickel ◽
James Lester


AI Magazine ◽
2018 ◽
Vol 39 (2) ◽
pp. 33-44 ◽
Author(s):
W. Lewis Johnson ◽
James C. Lester

Back in the 1990s we started work on pedagogical agents, a new user interface paradigm for interactive learning environments. Pedagogical agents are autonomous characters that inhabit learning environments and can engage with learners in rich, face-to-face interactions. Building on this work, in 2000 we, together with our colleague Jeff Rickel, published an article on pedagogical agents that surveyed this new paradigm and discussed its potential. We made the case that pedagogical agents that interact with learners in natural, lifelike ways can help learning environments achieve improved learning outcomes. This article has been widely cited, and was a winner of the 2017 IFAAMAS Award for Influential Papers in Autonomous Agents and Multiagent Systems (IFAAMAS, 2017). On the occasion of receiving the IFAAMAS award, and after twenty years of work on pedagogical agents, we decided to take another look at the future of the field. We’ll start by revisiting our predictions for pedagogical agents back in 2000, and examine which of those predictions panned out. Then, informed by what we have learned since then, we will take another look at emerging trends and the future of pedagogical agents. Advances in natural language dialogue, affective computing, machine learning, virtual environments, and robotics are making possible even more lifelike and effective pedagogical agents, with potentially profound effects on the way people learn.



2021 ◽  
Author(s):  
Intissar Khalifa ◽  
Ridha Ejbali ◽  
Raimondo Schettini ◽  
Mourad Zaied

Abstract: Affective computing is a key research topic in artificial intelligence at the intersection of psychology and machine intelligence. It consists of the estimation and measurement of human emotions. A person’s body language is one of the most significant sources of information during a job interview, and it reflects a deep psychological state that is often missing from other data sources. In our work, we combine the two tasks of pose estimation and emotion classification for emotional body gesture recognition, and propose a deep multi-stage architecture that is able to deal with both tasks. Our deep pose decoding method detects and tracks the candidate’s skeleton in a video using a combination of a depthwise convolutional network and a detection-based method for 2D pose reconstruction. Moreover, we propose a representation technique based on the superposition of skeletons, generating for each video sequence a single image that synthesizes the different poses of the subject. We call this image the ‘history pose image’, and it is used as input to a convolutional neural network model based on the Visual Geometry Group (VGG) architecture. We demonstrate the effectiveness of our method in comparison with other state-of-the-art methods on the standard Common Objects in Context (COCO) keypoint dataset and the Face and Body gesture video database.
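
The pipeline this abstract outlines reduces to two steps: superimpose the per-frame skeletons of a video into a single 'history pose image', then classify that image with a VGG-style convolutional network. The sketch below illustrates those two steps; it is not the authors' implementation, and the COCO-style joint indexing, canvas size, frame-ordered intensity, emotion count, and PyTorch/torchvision usage are all assumptions made for illustration.

import numpy as np
import torch
import torch.nn as nn
from torchvision.models import vgg16

# A few skeleton edges in COCO keypoint indexing (arms and torso) -- assumed format.
SKELETON_EDGES = [(5, 7), (7, 9), (6, 8), (8, 10),
                  (5, 6), (11, 12), (5, 11), (6, 12)]

def history_pose_image(keypoints_per_frame, size=224):
    """Superimpose the skeletons of all frames onto one grayscale canvas.

    keypoints_per_frame: list of (17, 2) arrays of normalized (x, y) joints,
    one array per video frame (hypothetical format).
    """
    canvas = np.zeros((size, size), dtype=np.float32)
    for t, joints in enumerate(keypoints_per_frame):
        intensity = (t + 1) / len(keypoints_per_frame)  # later frames drawn brighter
        for a, b in SKELETON_EDGES:
            xa, ya = (joints[a] * (size - 1)).astype(int)
            xb, yb = (joints[b] * (size - 1)).astype(int)
            # crude line rasterization by sampling points along the segment
            for s in np.linspace(0.0, 1.0, num=size):
                x = int(round(xa + s * (xb - xa)))
                y = int(round(ya + s * (yb - ya)))
                canvas[y, x] = max(canvas[y, x], intensity)
    return canvas

class EmotionFromPoseHistory(nn.Module):
    """VGG-16 backbone with a small classification head (emotion count assumed)."""
    def __init__(self, num_emotions=6):
        super().__init__()
        self.backbone = vgg16(weights=None)
        self.backbone.classifier[6] = nn.Linear(4096, num_emotions)

    def forward(self, x):          # x: (batch, 1, size, size) history pose images
        x = x.repeat(1, 3, 1, 1)   # VGG expects three input channels
        return self.backbone(x)

# Usage sketch, with random keypoints standing in for a tracked skeleton.
frames = [np.random.rand(17, 2) for _ in range(30)]
img = torch.from_numpy(history_pose_image(frames)).unsqueeze(0).unsqueeze(0)
logits = EmotionFromPoseHistory()(img)  # shape: (1, num_emotions)

Drawing later frames with higher intensity is one way to keep temporal order visible in a single image; the abstract does not specify how the superposition is weighted, so that choice is illustrative only.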


Sensors ◽  
2021 ◽  
Vol 21 (12) ◽  
pp. 4222
Author(s):  
Shushi Namba ◽  
Wataru Sato ◽  
Masaki Osumi ◽  
Koh Shimokawa

In the field of affective computing, achieving accurate automatic detection of facial movements is an important issue, and great progress has already been made. However, a systematic evaluation of these systems on dynamic facial databases remains an unmet need. This study compared the performance of three systems (FaceReader, OpenFace, and AFARtoolbox) that detect facial movements corresponding to action units (AUs) derived from the Facial Action Coding System. All three systems detected the presence of AUs in the dynamic facial database at above-chance levels. Moreover, OpenFace and AFAR yielded higher area under the receiver operating characteristic curve (AUROC) values than FaceReader. In addition, several confusion biases between facial components (e.g., AU12 and AU14) were observed for each automated AU detection system, and the static mode was superior to the dynamic mode for analyzing the posed facial database. These findings characterize the prediction patterns of each system and provide guidance for research on facial expressions.
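
Concretely, the comparison described here amounts to computing, for each action unit, the area under the ROC curve from frame-level ground-truth presence labels and a detection system's continuous AU scores. Below is a minimal sketch of that per-AU evaluation, not the study's actual pipeline; the CSV layout, column names, and AU list are assumptions for illustration.

import pandas as pd
from sklearn.metrics import roc_auc_score

# A handful of AUs from the Facial Action Coding System (list chosen for illustration).
ACTION_UNITS = ["AU01", "AU02", "AU04", "AU06", "AU12", "AU14", "AU15", "AU17"]

def per_au_auc(truth_csv, scores_csv):
    """Return {AU: AUROC} for one detection system.

    truth_csv:  frame-level 0/1 presence labels, one column per AU (assumed layout).
    scores_csv: frame-level continuous detector scores, same columns (assumed layout).
    """
    truth = pd.read_csv(truth_csv)
    scores = pd.read_csv(scores_csv)
    results = {}
    for au in ACTION_UNITS:
        # AUROC is undefined if the AU never (or always) occurs in the ground truth.
        if truth[au].nunique() < 2:
            continue
        results[au] = roc_auc_score(truth[au], scores[au])
    return results

# Usage sketch: score each system's output against the same ground truth
# (file names are hypothetical).
# for system in ("FaceReader", "OpenFace", "AFAR"):
#     print(system, per_au_auc("ground_truth.csv", f"{system}_scores.csv"))

Comparing systems then reduces to comparing these per-AU AUROC values, alongside inspecting which AUs each system tends to confuse (e.g., AU12 versus AU14).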

