Affective Tutoring System for Better Learning

Author(s):  
Abdolhossein Sarrafzadeh ◽  
Samuel T.V. Alexander ◽  
Jamshid Shanbehzadeh

Intelligent tutoring systems (ITS) are still not as effective as one-on-one human tutoring. The next generation of intelligent tutors is expected to take into account the emotional state of students. This paper presents research on the development of an Affective Tutoring System (ATS). The system, called “Easy with Eve”, adapts to students via a lifelike animated agent who can detect student emotion through facial expression analysis and can display emotion herself. Eve’s adaptations are guided by a case-based method for adapting to student states; this method uses data generated by an observational study of human tutors. This paper presents an analysis of the facial expressions of students engaged in learning with human tutors, and shows how a facial expression recognition system, a lifelike agent, and a case-based system built on this analysis have been integrated to develop an ATS for mathematics.
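The case-based adaptation described above can be sketched as a nearest-neighbour lookup over recorded tutor cases. The state features, case data, and similarity weights below are hypothetical placeholders, not the paper's actual representation:

```python
import numpy as np

# Hypothetical tutoring cases: (student state, tutor action) pairs.
# Each state is [detected_emotion, answer_correct, response_time_s],
# with emotion coded as 0=neutral, 1=confused, 2=frustrated, 3=happy.
cases = [
    (np.array([1, 0, 30.0]), "give_hint"),
    (np.array([2, 0, 45.0]), "simplify_problem"),
    (np.array([3, 1, 10.0]), "praise_and_advance"),
    (np.array([0, 1, 20.0]), "next_problem"),
]

def retrieve_action(state, cases, weights=np.array([1.0, 1.0, 0.02])):
    """Return the tutor action of the most similar stored case
    (weighted Euclidean distance over the state features)."""
    dists = [np.sqrt(np.sum(weights * (state - s) ** 2)) for s, _ in cases]
    return cases[int(np.argmin(dists))][1]

# A confused student who answered incorrectly after 35 s:
action = retrieve_action(np.array([1, 0, 35.0]), cases)
print(action)  # -> give_hint
```

A real system would also adapt the retrieved case to the current situation; here retrieval alone illustrates the lookup step.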

Author(s):  
Yi Ji ◽  
Khalid Idrissi

This paper proposes an automatic facial expression recognition system that uses new methods in both face detection and feature extraction. Considering that facial expressions involve a small set of muscles and limited ranges of motion, the system recognizes expressions from these changes in video sequences. First, the differences between neutral and emotional states are detected, and faces are automatically located from the changing facial organs. Then, LBP features are extracted and AdaBoost is used to find the most important features for each expression on the essential facial parts. Finally, an SVM with a polynomial kernel is used to classify the expressions. The method is evaluated on the JAFFE and MMI databases, and its performance is better than that of other automatic or manually annotated systems.
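The LBP → AdaBoost → polynomial-kernel SVM pipeline can be sketched with a hand-rolled 8-neighbour LBP and scikit-learn; the synthetic patches and the number of selected features are placeholders, not the paper's settings:

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.svm import SVC

def lbp_histogram(img):
    """Basic 8-neighbour LBP code per interior pixel, as a normalized 256-bin histogram."""
    rows, cols = img.shape
    center = img[1:-1, 1:-1]
    codes = np.zeros(center.shape, dtype=np.uint8)
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(offsets):
        neighbour = img[1 + dy:rows - 1 + dy, 1 + dx:cols - 1 + dx]
        codes |= (neighbour >= center).astype(np.uint8) << bit
    hist = np.bincount(codes.ravel(), minlength=256).astype(float)
    return hist / hist.sum()

rng = np.random.default_rng(0)
# Synthetic stand-in for face patches of two expression classes:
# class 1 patches get extra horizontal texture, changing the LBP statistics.
X, y = [], []
for label in (0, 1):
    for _ in range(40):
        img = rng.integers(0, 256, size=(24, 24)).astype(np.int32)
        if label == 1:
            img[::2, :] += 120  # stripes alter neighbour-vs-centre comparisons
        X.append(lbp_histogram(img))
        y.append(label)
X, y = np.array(X), np.array(y)

# AdaBoost over decision stumps ranks the histogram bins ...
booster = AdaBoostClassifier(n_estimators=50, random_state=0).fit(X, y)
top = np.argsort(booster.feature_importances_)[-20:]  # keep 20 strongest bins

# ... and a polynomial-kernel SVM classifies on the selected features.
svm = SVC(kernel="poly", degree=2).fit(X[:, top], y)
print(svm.score(X[:, top], y))
```

In the paper the boosting step runs per expression and per facial part; the global variant above only illustrates the selection-then-classification idea.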


2008 ◽  
Vol 381-382 ◽  
pp. 375-378
Author(s):  
K.T. Song ◽  
M.J. Han ◽  
F.Y. Chang ◽  
S.H. Chang

The capability of recognizing human facial expressions plays an important role in the development of advanced human-robot interaction. By recognizing facial expressions, a robot can interact with a user in a more natural and friendly manner. In this paper, we propose a facial expression recognition system based on an embedded image processing platform that classifies different facial expressions on-line in real time. A low-cost embedded vision system has been designed and realized for robotic applications using a CMOS image sensor and a digital signal processor (DSP). The current design acquires thirty 640x480 image frames per second (30 fps). The proposed emotion recognition algorithm has been successfully implemented on this real-time vision system. Experimental results on a pet robot show that the robot can interact with a person in a responsive manner. The developed image processing platform accelerates recognition to 25 recognitions per second, with an average on-line recognition rate of 74.4% for five facial expressions.


Author(s):  
M. Sultan Zia ◽  
Majid Hussain ◽  
M. Arfan Jaffar

Facial expression recognition is a crucial task in pattern recognition, and it becomes even more crucial when cross-cultural emotions are encountered. Various studies have shown that not all facial expressions are innate and universal; many are learned and culture-dependent. Existing facial expression recognition methods train and test on the same dataset and demonstrate high accuracy in recognition, but their performance degrades drastically when expression images are taken from different cultures. Moreover, many existing facial expression patterns cannot be generated and used as training data in a single training session. A facial expression recognition system can maintain high accuracy and robustness globally, and over a longer period, if it possesses the ability to learn incrementally. We propose a facial expression recognition system that learns incrementally, together with a novel classification algorithm for multinomial classification problems; it is an efficient classifier and can be a good choice as a base classifier in real-time applications. We use Local Binary Pattern (LBP) features to represent the expression space. The performance of the system is tested on static images from six databases containing expressions from various cultures. The experiments using incremental learning classification demonstrate promising results.


Human emotions are mental states of feeling that arise spontaneously rather than through cognitive effort. Some of the basic emotions are happiness, anger, neutrality, sadness, and surprise. These internal feelings are reflected on the face as facial expressions. This paper presents a novel methodology for facial expression analysis that aids the development of a facial expression recognition system capable of classifying five basic emotions in real time. The recognition of facial expressions is important because of its applications in many domains, such as artificial intelligence, security, and robotics. Many different approaches can be used for Facial Expression Recognition (FER), but the technique best suited to automated FER is the Convolutional Neural Network (CNN). Thus, a novel CNN architecture is proposed, and a combination of multiple datasets, namely FER2013, FER+, JAFFE, and CK+, is used for training and testing. This helps to improve accuracy and to develop a robust real-time system. The proposed methodology yields good results, and the obtained accuracy may encourage and support researchers in building better models for automated facial expression recognition systems.
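The core CNN computation can be illustrated with a toy NumPy forward pass (one convolution, ReLU, max-pooling, and a softmax layer over five emotion classes); the 48x48 input size, filter shapes, and random weights are illustrative assumptions, not the proposed architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

def conv2d(x, k):
    """Valid 2-D convolution (cross-correlation, as in most CNN libraries)."""
    kh, kw = k.shape
    oh, ow = x.shape[0] - kh + 1, x.shape[1] - kw + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out

def maxpool(x, s=2):
    """Non-overlapping s-by-s max-pooling."""
    h, w = (x.shape[0] // s) * s, (x.shape[1] // s) * s
    return x[:h, :w].reshape(h // s, s, w // s, s).max(axis=(1, 3))

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

face = rng.random((48, 48))            # grey-scale face, FER2013-style 48x48
kernel = rng.standard_normal((3, 3))   # one learned 3x3 filter (random here)
W = rng.standard_normal((5, 23 * 23)) * 0.01  # dense layer onto 5 emotions

feat = maxpool(np.maximum(conv2d(face, kernel), 0.0))  # conv -> ReLU -> pool
probs = softmax(W @ feat.ravel())
print(probs.shape, probs.sum())        # (5,) probabilities summing to 1
```

A trained network stacks several such conv/pool stages and learns the kernels and dense weights by backpropagation; the single stage above only shows the data flow.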


Author(s):  
Mahima Agrawal ◽  
Shubangi. D. Giripunje ◽  
P. R. Bajaj

This paper presents an efficient method for recognizing facial expressions in video. The work proposes a highly efficient facial expression recognition system using PCA optimized by a genetic algorithm. Reduced computational time and comparable recognition accuracy are the benchmarks of this work. Video sequences contain more information than still images and exhibit much more activity during expression actions, hence they are a current research subject. We use PCA, a statistical method, to reduce dimensionality and extract features, with covariance analysis generating the eigen-components of the images. The eigen-components used as feature input are then optimized by a genetic algorithm to reduce the computational cost.
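The PCA-plus-genetic-algorithm idea can be sketched as follows: PCA produces eigen-components, and a simple GA searches for a small subset of them that keeps a classifier accurate. The GA settings, fitness function, and synthetic data are illustrative assumptions, not the paper's configuration:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)

# Synthetic stand-in for vectorised face frames of two expression classes.
X = np.vstack([rng.normal(m, 1.0, size=(60, 40)) for m in (0.0, 1.0)])
y = np.repeat([0, 1], 60)

Z = PCA(n_components=16).fit_transform(X)   # eigen-component features

def fitness(mask):
    """Accuracy of a 1-NN classifier restricted to the masked components,
    minus a small penalty per component to reward cheap feature sets."""
    if not mask.any():
        return 0.0
    knn = KNeighborsClassifier(n_neighbors=1).fit(Z[::2][:, mask], y[::2])
    return knn.score(Z[1::2][:, mask], y[1::2]) - 0.01 * mask.sum()

pop = rng.random((20, 16)) < 0.5            # population of binary component masks
for _ in range(15):                          # a few GA generations
    scores = np.array([fitness(m) for m in pop])
    parents = pop[np.argsort(scores)[-10:]]  # truncation selection
    children = []
    for _ in range(10):
        a, b = parents[rng.integers(10)], parents[rng.integers(10)]
        cut = rng.integers(1, 16)
        child = np.concatenate([a[:cut], b[cut:]])     # one-point crossover
        child ^= rng.random(16) < 0.05                 # bit-flip mutation
        children.append(child)
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(m) for m in pop])]
print(best.sum(), "components selected")
```

The penalty term is what steers the GA toward fewer eigen-components, which is the source of the reduced computation cost the abstract describes.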


Webology ◽  
2020 ◽  
Vol 17 (2) ◽  
pp. 804-816
Author(s):  
Elaf J. Al Taee ◽  
Qasim Mohammed Jasim

A facial expression is a visual impression of a person's situation, emotions, cognitive activity, personality, intention, and psychopathology; it plays an active and vital role in the exchange of information and communication between people. In machines and robots dedicated to communicating with humans, facial expression recognition plays an important and vital role in reading what a person implies, especially in the field of health, so research in this field advances communication with robots. This topic has been discussed extensively, and the progress of deep learning and the use of the Convolutional Neural Network (CNN) in image processing, where it has widely proven its efficiency, has led to the use of CNNs for the recognition of facial expressions. An automatic Facial Expression Recognition (FER) system must perform detection and location of faces in a cluttered scene, feature extraction, and classification. In this research, a CNN is used to perform FER. The target is to label each facial image with one of the seven facial emotion categories of the JAFFE database: sad, happy, fear, surprise, anger, disgust, and neutral. We trained CNNs of different depths using gray-scale images from the JAFFE database. The accuracy of the proposed system was 100%.


Author(s):  
Priya Saha ◽  
Debotosh Bhattacharjee ◽  
Barin Kumar De ◽  
Mita Nasipuri

There are many research works on facial expression analysis and recognition in both the visible and thermal spectra, and several facial expression databases have been designed in both modalities. However, little attention has been given to analyzing blended facial expressions in the thermal infrared spectrum. In this paper, we introduce a Visual-Thermal Blended Facial Expression Database (VTBE) that contains visual and thermal face images with both basic and blended facial expressions. The database contains 12 posed blended facial expressions and six spontaneous basic facial expressions in both modalities. We propose the Deformed Thermal Facial Area (DTFA) in thermal expressive face images and present an analysis that differentiates between basic and blended expressions using the DTFA. We then propose the fusion of the DTFA with the Deformed Visual Facial Area (DVFA), combining features from both modalities, and conduct experiments on this new database. To show the effectiveness of our approach, we compare our method with state-of-the-art methods on the USTC-NVIE database. Experimental results reveal that our approach is superior to state-of-the-art methods.
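Feature-level fusion of the two modalities, in the spirit of combining DTFA and DVFA features, can be sketched as concatenating per-modality feature vectors before classification; the histogram descriptor and synthetic data below are placeholders for the paper's actual features:

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

def region_histogram(img, bins=16):
    """Toy per-modality descriptor: intensity histogram of a deformed facial area."""
    hist = np.histogram(img, bins=bins, range=(0.0, 1.0))[0].astype(float)
    return hist / hist.sum()

X, y = [], []
for label in (0, 1):                    # 0 = basic, 1 = blended expression
    for _ in range(30):
        visual = rng.random((32, 32)) ** (1.0 + label)       # class shifts the stats
        thermal = rng.random((32, 32)) ** (1.5 - 0.5 * label)
        fused = np.concatenate([region_histogram(visual),
                                region_histogram(thermal)])  # feature-level fusion
        X.append(fused)
        y.append(label)
X, y = np.array(X), np.array(y)

clf = SVC(kernel="linear").fit(X, y)
print(clf.score(X, y))
```

Concatenation lets the classifier weigh evidence from whichever modality separates a given expression pair better, which is the rationale for fusing DTFA and DVFA features.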


Author(s):  
Shubhrata Gupta ◽  
Keshri Verma ◽  
Nazil Perveen

Facial expression is one of the most powerful, natural, and immediate means by which human beings communicate emotion and regulate interpersonal behaviour. In this paper we present a novel approach to facial expression detection using a decision tree. Because facial expression information is mostly concentrated in a few informative regions, the mouth, eye, and eyebrow regions are segmented from the facial expression images first. Using these templates we compute 30 facial characteristic points (FCPs). These points describe the position and shape of the three organs above and yield the parameters that are input to the decision tree for recognizing different facial expressions.
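The FCP-to-decision-tree pipeline can be sketched with a handful of geometric parameters derived from hypothetical characteristic points; the point layout, parameters, and synthetic data stand in for the paper's 30 FCPs:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)

def parameters(fcps):
    """Shape parameters from (x, y) characteristic points: mouth width,
    mouth opening, eye opening, and eyebrow-to-eye distance."""
    mouth_l, mouth_r, mouth_t, mouth_b, eye_t, eye_b, brow = fcps
    return np.array([
        np.linalg.norm(mouth_r - mouth_l),   # mouth width
        np.linalg.norm(mouth_b - mouth_t),   # mouth opening
        np.linalg.norm(eye_b - eye_t),       # eye opening
        np.linalg.norm(eye_t - brow),        # eyebrow raise
    ])

def sample_face(expr):
    """Synthetic FCPs: 'surprise' (1) opens the mouth, widens the eyes,
    and raises the brow relative to 'neutral' (0)."""
    base = np.array([[-2.0, 0.0], [2.0, 0.0], [0.0, 0.5], [0.0, -0.5],
                     [0.0, 4.0], [0.0, 3.5], [0.0, 5.0]])
    if expr == 1:
        base = base + np.array([[0, 0], [0, 0], [0, 0.8], [0, -0.8],
                                [0, 0.4], [0, -0.4], [0, 1.0]])
    return base + rng.normal(0, 0.3, size=(7, 2))

X, y = [], []
for expr in (0, 1):
    for _ in range(50):
        X.append(parameters(sample_face(expr)))
        y.append(expr)

tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
print(tree.score(X, y))
```

A learned tree of this form is readable: each split is a threshold on one geometric parameter (e.g. "mouth opening > t"), which is why decision trees pair naturally with FCP-based features.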

