Effects of Pedagogical Agents on Students’ Mathematics Performance: A Comparison Between Two Versions

2017 ◽  
Vol 56 (5) ◽  
pp. 701-722 ◽  
Author(s):  
Rex P. Bringula ◽  
Ian Clement O. Fosgate ◽  
Neil Peter R. Garcia ◽  
Josf Luinico M. Yorobe

This experimental study investigated the effects of two versions of a pedagogical agent, named the personal instructing agent (PIA), on the mathematics performance of students. The first version exhibits synthetic facial expressions, while the second does not (i.e., it keeps a neutral facial expression). Two groups of students with the same level of prior knowledge in mathematics used the two versions of PIA. The first group, the facial group, used a PIA that provided textual feedback together with facial expressions (happy, sad, surprised, and neutral). The second group, the nonfacial group, used the same software, except that its PIA exhibited only a neutral facial expression. The mathematics scores of students in the facial group improved significantly compared with those of students in the nonfacial group, and the posttest scores of the facial group were significantly higher than those of the nonfacial group. The study thus showed that a PIA exhibiting synthetic facial expressions improved students' mathematics learning. It is concluded that a pedagogical agent's synthetic facial expressions and textual feedback can be used to help students learn to solve mathematics problems. Limitations and recommendations are also presented.

Author(s):  
Casey Frechette ◽  
Roxana Moreno

We examined how the presence and nonverbal communication of an animated pedagogical agent affect students' perceptions and learning. College students learned about astronomy either without an agent's image or with an agent under one of the following conditions: a static agent (S), an agent with deictic movements (D), an agent with facial expressions (E), or an agent with both deictic movements and facial expressions (DE). Group S outperformed group E on a comprehension test, but no other differences were found in students' learning or perceptions. The results show that the presence of the studied agent, regardless of its nonverbal abilities, did not produce at least a moderate effect size on learning or perceptions. Further, a static version of the agent was preferable to one with facial expressions alone.


2020 ◽  
Author(s):  
Jonathan Yi ◽  
Philip Pärnamets ◽  
Andreas Olsson

Responding appropriately to others' facial expressions is key to successful social functioning. Despite the large body of work on face perception and spontaneous responses to static faces, little is known about responses to faces in dynamic, naturalistic situations, and no study has investigated how goal-directed responses to faces are influenced by learning during dyadic interactions. To experimentally model such situations, we developed a novel method based on online integration of electromyography (EMG) signals from the participants' face (corrugator supercilii and zygomaticus major) during facial expression exchange with dynamic faces displaying happy and angry facial expressions. Fifty-eight participants learned by trial and error to avoid receiving aversive stimulation by either reciprocating (congruent condition) or responding opposite (incongruent condition) to the expression of the target face. Our results validated our method, showing that participants learned to optimize their facial behavior, and replicated earlier findings of faster and more accurate responses in congruent versus incongruent conditions. Moreover, participants performed better on trials with smiling faces than with frowning faces, suggesting that it may be easier to adapt facial responses to positively associated expressions. Finally, we applied drift diffusion and reinforcement learning models to provide a mechanistic explanation for our findings, which helped clarify the decision-making processes underlying our experimental manipulation. Our results introduce a new method for studying learning and decision-making in facial expression exchange, in which facial expression selection must be gradually adapted to both social and non-social reinforcements.
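As a rough illustration of the kind of reinforcement learning model mentioned above, the sketch below implements a simple prediction-error (Rescorla-Wagner style) update with softmax action selection for a task in which a face cue is met by either a reciprocating or an opposite response. The contingency, parameter values, and variable names are assumptions for illustration, not the authors' model.

```python
# Minimal sketch (not the authors' code): prediction-error learning with
# softmax action selection. On each trial a face cue (happy or angry) is
# shown and the agent picks "reciprocate" or "oppose" to avoid an aversive
# outcome. The cue-action contingency below is hypothetical.
import numpy as np

rng = np.random.default_rng(0)
alpha, beta = 0.2, 5.0            # learning rate, softmax inverse temperature
cues = ["happy", "angry"]
actions = ["reciprocate", "oppose"]
Q = {c: np.zeros(len(actions)) for c in cues}

def correct_action(cue):
    # Hypothetical contingency: reciprocate happy faces, oppose angry faces.
    return 0 if cue == "happy" else 1

for trial in range(200):
    cue = cues[trial % 2]
    p = np.exp(beta * Q[cue]) / np.exp(beta * Q[cue]).sum()   # softmax policy
    a = rng.choice(len(actions), p=p)
    reward = 1.0 if a == correct_action(cue) else 0.0          # 0 = aversive outcome
    Q[cue][a] += alpha * (reward - Q[cue][a])                   # prediction-error update

print({c: np.round(q, 2) for c, q in Q.items()})
```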


2020 ◽  
Author(s):  
Joshua W Maxwell ◽  
Eric Ruthruff ◽  
Michael Joseph

Are facial expressions of emotion processed automatically? Some authors have not found this to be the case (Tomasik et al., 2009). Here we revisited the question with a novel experimental logic: the backward correspondence effect (BCE). In three dual-task studies, participants first categorized a sound (Task 1) and then indicated the location of a target face (Task 2). In Experiment 1, Task 2 required participants to search for one facial expression of emotion (angry or happy). We observed positive BCEs, indicating that facial expressions of emotion bypassed the central attentional bottleneck and thus were processed in a capacity-free, automatic manner. In Experiment 2, we replicated this effect but found that morphed emotional expressions (as used by Tomasik et al.) were not processed automatically. In Experiment 3, we observed similar BCEs for another type of face processing previously shown to be capacity-free: identification of familiar faces (Jung et al., 2013). We conclude that facial expressions of emotion are identified automatically when sufficiently unambiguous.


2021 ◽  
pp. 174702182199299
Author(s):  
Mohamad El Haj ◽  
Emin Altintas ◽  
Ahmed A Moustafa ◽  
Abdel Halim Boudoukha

Future thinking, the ability to project oneself forward in time to pre-experience an event, is intimately associated with emotions. We investigated whether emotional future thinking can activate emotional facial expressions. We invited 43 participants to imagine future scenarios cued by the words “happy,” “sad,” and “city.” Future thinking was video recorded and analysed with facial analysis software to classify participants’ facial expressions (happy, sad, angry, surprised, scared, disgusted, or neutral) as neutral or emotional. The analysis demonstrated higher levels of happy facial expressions during future thinking cued by the word “happy” than by “sad” or “city.” In contrast, higher levels of sad facial expressions were observed during future thinking cued by the word “sad” than by “happy” or “city.” Higher levels of neutral facial expressions were observed during future thinking cued by the word “city” than by “happy” or “sad.” In all three conditions, neutral facial expressions were frequent compared with happy and sad facial expressions. Together, these results suggest that emotional future thinking, at least for scenarios cued by “happy” and “sad,” triggers the corresponding facial expression. Our study provides an original physiological window into the subjective emotional experience during future thinking.


2021 ◽  
Vol 11 (4) ◽  
pp. 1428
Author(s):  
Haopeng Wu ◽  
Zhiying Lu ◽  
Jianfeng Zhang ◽  
Xin Li ◽  
Mingyue Zhao ◽  
...  

This paper addresses the problem of facial expression recognition (FER), focusing on subtle facial movements. Traditional methods often suffer from overfitting or incomplete information due to insufficient data and manual feature selection. Our proposed network, called the Multi-features Cooperative Deep Convolutional Network (MC-DCN), instead attends both to the overall features of the face and to changes in its key parts. The first stage processes the video data: the ensemble of regression trees (ERT) method extracts the overall contour of the face, and an attention model selects the parts of the face that are most responsive to expressions. The combination of these two steps yields an image referred to as the local feature map. The video data are then fed into MC-DCN, which contains parallel sub-networks. While the overall spatiotemporal characteristics of facial expressions are obtained from the image sequence, the selected key parts allow the network to better learn the changes in facial expressions brought about by subtle facial movements. By combining local and global features, the proposed method acquires more information and achieves better performance. Experimental results show that MC-DCN achieves recognition rates of 95%, 78.6%, and 78.3% on the SAVEE, MMI, and edited GEMEP datasets, respectively.
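As a loose illustration of the two-branch idea described above (a global face branch combined with a local feature map of attention-selected key parts), the following PyTorch sketch shows parallel sub-networks whose outputs are fused before classification. The layer sizes, fusion strategy, and input shapes are assumptions and do not reproduce MC-DCN.

```python
# Illustrative sketch only: a two-branch network in the spirit of MC-DCN,
# one branch for the global face image and one for a local feature map of
# attention-selected key parts; outputs are concatenated and classified.
import torch
import torch.nn as nn

class Branch(nn.Module):
    def __init__(self, in_ch):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 32, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )

    def forward(self, x):
        return self.net(x).flatten(1)           # (batch, 64)

class MCDCNSketch(nn.Module):
    def __init__(self, num_classes=7):
        super().__init__()
        self.global_branch = Branch(3)          # whole-face frames
        self.local_branch = Branch(1)           # attention-selected key-part map
        self.classifier = nn.Linear(64 + 64, num_classes)

    def forward(self, global_img, local_map):
        fused = torch.cat([self.global_branch(global_img),
                           self.local_branch(local_map)], dim=1)
        return self.classifier(fused)

model = MCDCNSketch()
logits = model(torch.randn(4, 3, 96, 96), torch.randn(4, 1, 96, 96))
print(logits.shape)   # torch.Size([4, 7])
```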


Sensors ◽  
2021 ◽  
Vol 21 (6) ◽  
pp. 2003 ◽  
Author(s):  
Xiaoliang Zhu ◽  
Shihao Ye ◽  
Liang Zhao ◽  
Zhicheng Dai

The AFEW (Acted Facial Expressions in the Wild) dataset, used in a sub-challenge of EmotiW (the Emotion Recognition in the Wild challenge), is a popular benchmark for emotion recognition under various constraints, including uneven illumination, head deflection, and varying facial posture. In this paper, we propose a facial expression recognition cascade network comprising spatial feature extraction, hybrid attention, and temporal feature extraction. First, faces are detected in each frame of a video sequence, and the corresponding face ROI (region of interest) is extracted to obtain the face images; the face images in each frame are then aligned based on the positions of the facial feature points. Second, the aligned face images are input to a residual neural network to extract the spatial features of the corresponding facial expressions, and the spatial features are passed to the hybrid attention module to obtain fused facial expression features. Finally, the fused features are input to a gated recurrent unit to extract the temporal features of facial expressions, and the temporal features are passed to a fully connected layer to classify the facial expressions. Experiments on the CK+ (Extended Cohn-Kanade), Oulu-CASIA (Institute of Automation, Chinese Academy of Sciences), and AFEW datasets yielded recognition accuracies of 98.46%, 87.31%, and 53.44%, respectively. The proposed method thus achieves performance comparable to state-of-the-art methods while improving on them by more than 2% on the AFEW dataset, demonstrating strong facial expression recognition in natural environments.
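A hedged sketch of the cascade described above follows: per-frame spatial features from a convolutional backbone, attention-weighted fusion across frames, a gated recurrent unit for temporal features, and a fully connected classifier. The backbone, attention form, and dimensions are placeholders rather than the paper's architecture.

```python
# Sketch of a spatial-CNN -> attention -> GRU -> FC cascade; all sizes are
# illustrative and the backbone stands in for a residual network.
import torch
import torch.nn as nn

class CascadeSketch(nn.Module):
    def __init__(self, feat_dim=128, num_classes=7):
        super().__init__()
        self.spatial = nn.Sequential(               # stand-in for a residual backbone
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
            nn.Flatten(), nn.Linear(64, feat_dim),
        )
        self.attn = nn.Linear(feat_dim, 1)           # simple per-frame attention weights
        self.gru = nn.GRU(feat_dim, feat_dim, batch_first=True)
        self.fc = nn.Linear(feat_dim, num_classes)

    def forward(self, frames):                       # frames: (batch, T, 3, H, W)
        b, t = frames.shape[:2]
        feats = self.spatial(frames.flatten(0, 1)).view(b, t, -1)
        weights = torch.softmax(self.attn(feats), dim=1)
        feats = feats * weights                      # attention-weighted frame features
        out, _ = self.gru(feats)
        return self.fc(out[:, -1])                   # classify from the last hidden state

model = CascadeSketch()
print(model(torch.randn(2, 8, 3, 64, 64)).shape)     # torch.Size([2, 7])
```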


2022 ◽  
Vol 29 (2) ◽  
pp. 1-59
Author(s):  
Joni Salminen ◽  
Sercan Şengün ◽  
João M. Santos ◽  
Soon-Gyo Jung ◽  
Bernard Jansen

There has been little research into whether a persona's picture should portray a happy or unhappy individual. We report a user experiment with 235 participants testing the effects of happy and unhappy image styles on user perceptions, engagement, and the personality traits attributed to personas, using a mixed-methods analysis. Results indicate that participants' perceptions of a persona's realism and pain-point severity increase with the use of unhappy pictures. In contrast, personas with happy pictures are perceived as more extroverted, agreeable, open, conscientious, and emotionally stable. Participants' proposed design ideas also scored higher on lexical empathy for happy personas. There were further significant differences along gender and ethnic lines in both empathy and perceptions of pain points. The implication is that the facial expression in a persona profile can affect the perceptions of those employing the persona, so persona designers should align facial expressions with the task for which the personas will be employed. Generally, unhappy images emphasize realism and pain-point severity, while happy images invoke positive perceptions.


2021 ◽  
Vol 8 (5) ◽  
pp. 949
Author(s):  
Fitra A. Bachtiar ◽  
Muhammad Wafi

Human-machine interaction, and facial behavior in particular, is increasingly considered as a means of user personalization. A combination of feature extraction and a classification method can enable a machine to recognize facial expressions, but it is not yet clear which base classification method is most appropriate. This study compares three classification methods for facial expression recognition. The JAFFE dataset is used, comprising 213 facial images showing seven facial expressions: anger, disgust, fear, happy, neutral, sadness, and surprised. Facial landmarks are used as the facial features. The classification models compared are ELM, SVM, and k-NN. The best parameter values for each model are searched using 80% of the data with 5-fold cross-validation, and the models are then tested on the remaining 20% and evaluated on accuracy, F1 score, and computation time. The best parameters are 40 hidden neurons for ELM, a parameter value of 10^5 with 200 iterations for SVM, and k = 3 neighbors for k-NN. With these parameters, ELM is the best of the three classification models, achieving an accuracy of 0.76 and an F1 score of 0.76 with a computation time of 6.97 × 10^-3 seconds.
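For a concrete picture of the evaluation protocol described above (80/20 split, 5-fold cross-validated parameter search, accuracy and F1 evaluation), the sketch below uses scikit-learn's SVM and k-NN on synthetic stand-ins for landmark features; ELM is not part of scikit-learn and is omitted here.

```python
# Rough sketch of the comparison protocol only; data are random placeholders
# for facial-landmark vectors, and the parameter grids are illustrative.
import numpy as np
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import accuracy_score, f1_score

rng = np.random.default_rng(0)
X = rng.normal(size=(213, 136))          # placeholder for 68 (x, y) landmark coordinates
y = rng.integers(0, 7, size=213)         # 7 expression classes

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

searches = {
    "SVM": GridSearchCV(SVC(), {"C": [1, 10, 100]}, cv=5),
    "k-NN": GridSearchCV(KNeighborsClassifier(), {"n_neighbors": [3, 5, 7]}, cv=5),
}
for name, search in searches.items():
    search.fit(X_tr, y_tr)                           # 5-fold CV parameter search
    pred = search.predict(X_te)
    print(name, search.best_params_,
          round(accuracy_score(y_te, pred), 2),
          round(f1_score(y_te, pred, average="macro"), 2))
```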


2021 ◽  
Vol 2021 ◽  
pp. 1-17
Author(s):  
Yusra Khalid Bhatti ◽  
Afshan Jamil ◽  
Nudrat Nida ◽  
Muhammad Haroon Yousaf ◽  
Serestina Viriri ◽  
...  

Classroom communication involves teachers' behavior and students' responses. Extensive research has analysed students' facial expressions, but the impact of instructors' facial expressions remains an unexplored area of research. Facial expression recognition has the potential to predict the impact of a teacher's emotions in a classroom environment, and intelligent assessment of instructor behavior during lecture delivery could both improve the learning environment and save the time and resources spent on manual assessment strategies. To address the issue of manual assessment, we propose an instructor facial expression recognition approach for the classroom using a feedforward learning model. First, the face is detected in the acquired lecture videos and key frames are selected, discarding redundant frames for effective high-level feature extraction. Then, deep features are extracted using multiple convolutional neural networks with parameter tuning and fed to a classifier. For fast learning and good generalization, a regularized extreme learning machine (RELM) classifier is employed, which classifies five different instructor expressions within the classroom. Experiments are conducted on a newly created dataset of instructors' facial expressions in classroom environments plus three benchmark facial datasets: Cohn-Kanade, the Japanese Female Facial Expression (JAFFE) dataset, and the Facial Expression Recognition 2013 (FER2013) dataset. The proposed method is compared with state-of-the-art techniques, traditional classifiers, and convolutional neural models, and the results indicate significant gains in accuracy, F1-score, and recall.
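The regularized extreme learning machine used as the classifier above admits a compact closed-form training step: random input weights, a nonlinear hidden layer, and a ridge-regularized least-squares solution for the output weights. The following numpy sketch illustrates that step with synthetic stand-ins for the deep CNN features; the dimensions and regularization constant are illustrative, not taken from the paper.

```python
# Sketch of a regularized ELM classifier: random hidden layer plus a
# closed-form ridge solution for the output weights. Data are synthetic.
import numpy as np

rng = np.random.default_rng(0)
n, d, hidden, classes, C = 300, 512, 200, 5, 1.0

X = rng.normal(size=(n, d))                       # stand-in for deep CNN features
y = rng.integers(0, classes, size=n)
T = np.eye(classes)[y]                            # one-hot targets

W = rng.normal(size=(d, hidden))                  # random input weights (not trained)
b = rng.normal(size=hidden)
H = np.tanh(X @ W + b)                            # hidden-layer activations

# Ridge-regularized least squares: beta = (H^T H + I/C)^-1 H^T T
beta = np.linalg.solve(H.T @ H + np.eye(hidden) / C, H.T @ T)

pred = np.argmax(np.tanh(X @ W + b) @ beta, axis=1)
print("training accuracy:", round((pred == y).mean(), 2))
```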


PLoS ONE ◽  
2021 ◽  
Vol 16 (6) ◽  
pp. e0253378
Author(s):  
Svenja Zempelin ◽  
Karolina Sejunaite ◽  
Claudia Lanza ◽  
Matthias W. Riepe

Film clips are an established means of inducing or intensifying mood states in young persons; fewer studies address mood induction in old persons. Analysis of facial expression provides an opportunity to substantiate subjective mood states with a psychophysiological variable. We investigated healthy young (YA; n = 29; age 24.4 ± 2.3) and old (OA; n = 28; age 69.2 ± 7.4) participants. Subjects were exposed to film segments validated in young adults to induce four basic emotions (anger, disgust, happiness, sadness). We analysed subjective mood states with a 7-step Likert scale and facial expressions with an automated facial expression analysis system (FaceReader™ 7.0, Noldus Information Technology b.v.), for both the four target emotions and concomitant emotions. Mood expressivity was analysed with the Berkeley Expressivity Questionnaire (BEQ) and the Short Suggestibility Scale (SSS). Subjective mood intensified for all target emotions in the whole group and in both the YA and OA subgroups. Facial expressions of mood intensified in the whole group for all target emotions except sadness. Induction of happiness was associated with a decrease of sadness in both subjective and objective assessment. Induction of sadness was observed in subjective assessment and was accompanied by a decrease of happiness in both subjective and objective assessment. Regression analysis showed that pre-exposure facial expressions and personality factors (BEQ, SSS) were associated with the intensity of facial expression upon mood induction. We conclude that mood induction is successful regardless of age. Analysis of facial expressions complements the self-assessment of mood and may serve to objectify mood change. The concordance between self-assessed mood change and facial expression is modulated by personality factors.

