Autonomous learning of foreign language based on facial emotion recognition and cloud computing

2020 ◽  
pp. 1-12
Author(s):  
Yan Gong ◽  
Sha Rina

Due to the limitations of the after-class learning environment and the lack of guidance, students’ autonomous learning of foreign languages after class is often ineffective. To improve its efficiency, this paper builds a foreign language self-learning system based on a facial emotion recognition algorithm and a cloud computing platform. The system uses emotion recognition to identify students’ states and guide their autonomous learning, and relies on the cloud computing platform to improve operating efficiency. In addition, it performs facial emotion matching tailored to the needs of autonomous learning, builds the corresponding functional modules according to those requirements, and adopts a three-level network structure to balance detection performance against real-time performance. To verify the system, experiments measured the accuracy of emotion recognition during autonomous study and the improvement in students’ English after autonomous learning. The results show that the foreign language autonomous learning system constructed in this paper performs well.
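The core loop the abstract describes can be sketched as a mapping from a recognized emotional state to a tutoring action. The emotion labels and intervention rules below are illustrative assumptions, not the paper's actual policy:

```python
# Hypothetical sketch: steer an autonomous-learning session from a
# detected facial emotion. Labels and rules are assumed for illustration.

# Mapping from a detected emotional state to a tutoring action.
INTERVENTIONS = {
    "frustrated": "offer a hint and lower the exercise difficulty",
    "bored": "switch to a more challenging or varied exercise",
    "confused": "replay the explanation with additional examples",
    "engaged": "continue with the current exercise plan",
}

def guide_learner(emotion: str) -> str:
    """Return the tutoring action for a detected emotion.

    Unknown emotions fall back to continuing the session unchanged.
    """
    return INTERVENTIONS.get(emotion, "continue with the current exercise plan")

print(guide_learner("frustrated"))  # offer a hint and lower the exercise difficulty
```

In the paper's architecture, the emotion label would come from the FER model running on the cloud platform rather than being passed in directly.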

Author(s):  
Zhengxu Lian ◽  
Yingjie Guo ◽  
Xinyu Cao ◽  
Wendi Li

A wearable device system was proposed in the present work to assist users with facial emotion recognition disorders. The proposed system comprehensively analyzes the user’s own stress status, the emotions of people nearby, and the surrounding environment. It consists of a multi-dimensional physiological signal acquisition module, an image acquisition and transmission module, a user interface on the user’s mobile terminal, and a cloud database for data storage. Moreover, a deep learning based multi-modal physiological signal stress recognition algorithm and a facial emotion recognition algorithm were designed and implemented in the system. Both algorithms were tested on publicly available data sets, and the experimental results showed that they realize the expected functions of the system.


2021 ◽  
Vol 14 (1) ◽  
pp. 5
Author(s):  
Peter A. Gloor ◽  
Andrea Fronzetti Colladon ◽  
Erkin Altuntas ◽  
Cengiz Cetinkaya ◽  
Maximilian F. Kaiser ◽  
...  

Can we really “read the mind in the eyes”? Moreover, can AI assist us in this task? This paper answers these two questions by introducing a machine learning system that predicts personality characteristics of individuals on the basis of their face. It does so by tracking the emotional response of the individual’s face through facial emotion recognition (FER) while they watch a series of 15 short videos of different genres. To calibrate the system, we invited 85 people to watch the videos while their emotional responses were analyzed through their facial expressions. At the same time, these individuals also took four well-validated surveys of personality characteristics and moral values: the revised NEO FFI personality inventory, the Haidt moral foundations test, the Schwartz personal value system, and the domain-specific risk-taking scale (DOSPERT). We found that the personality characteristics and moral values of an individual can be predicted from their emotional response to the videos as shown in their face, with an accuracy of up to 86% using gradient-boosted trees. We also found that different personality characteristics are better predicted by different videos; in other words, no single video provides accurate predictions for all personality characteristics, but the response to the mix of different videos allows for accurate prediction.
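The prediction setup can be sketched with gradient-boosted trees on per-video emotion-response features. The data below are synthetic, and the feature layout (15 videos × 7 emotion intensities per person) and the binary trait label are assumptions for illustration, not the study's actual encoding:

```python
# Sketch: gradient-boosted trees mapping per-video facial-emotion
# responses to a binary personality label (synthetic data).
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_people, n_videos, n_emotions = 85, 15, 7

# One row per person: mean emotion intensities for each of the 15 videos.
X = rng.random((n_people, n_videos * n_emotions))
# Synthetic "trait": driven by the responses to the first video only,
# mimicking the finding that specific videos predict specific traits.
y = (X[:, :n_emotions].mean(axis=1) > 0.5).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)
print(f"held-out accuracy: {model.score(X_te, y_te):.2f}")
```

With real data, the per-trait accuracy would depend on which videos are included, consistent with the paper's observation that the mix of videos matters.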


Symmetry ◽  
2021 ◽  
Vol 13 (2) ◽  
pp. 228
Author(s):  
Yuanchang Zhong ◽  
Lili Sun ◽  
Chenhao Ge ◽  
Huilian Fan

As we all know, emotions can be expressed in many ways. Among them, facial emotion recognition, which is widely used in human–computer interaction, psychoanalysis of mental patients, multimedia retrieval, and other fields, is still a challenging task. Although convolutional neural networks have achieved great success in facial emotion recognition, there is still room for improvement in effective feature extraction and recognition accuracy. A large body of literature shows that the histogram of oriented gradients (HOG) can effectively extract facial features, and that ensemble methods can improve the accuracy and robustness of an algorithm. Therefore, this paper proposes a new algorithm, HOG-ESRs, which extends traditional ensembles to ensembles with shared representations (ESRs), effectively reducing the residual generalization error, and then combines HOG features with ESRs. Experimental results on the FER2013 dataset show that the new algorithm not only effectively extracts features and reduces the residual generalization error but also improves the accuracy and robustness of the algorithm. The application of HOG-ESRs to facial emotion recognition also helps to address the symmetry of edge detection and the shortcomings of related methods under outdoor lighting.
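The HOG descriptor that feeds the ensemble can be illustrated with a simplified numpy-only sketch (one unsigned-orientation histogram per 8×8 cell, no block normalization — the full pipeline would normalize over overlapping blocks):

```python
# Simplified HOG feature extraction: orientation histograms over
# non-overlapping cells of a grayscale image.
import numpy as np

def hog_features(img: np.ndarray, cell: int = 8, bins: int = 9) -> np.ndarray:
    """Concatenated, L2-normalized orientation histograms per cell."""
    gy, gx = np.gradient(img.astype(float))       # image gradients
    mag = np.hypot(gx, gy)                        # gradient magnitude
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180.0  # unsigned orientation
    h, w = img.shape
    feats = []
    for i in range(0, h - cell + 1, cell):
        for j in range(0, w - cell + 1, cell):
            m = mag[i:i + cell, j:j + cell].ravel()
            a = ang[i:i + cell, j:j + cell].ravel()
            # Magnitude-weighted histogram of orientations in this cell.
            hist, _ = np.histogram(a, bins=bins, range=(0, 180), weights=m)
            feats.append(hist)
    v = np.concatenate(feats)
    n = np.linalg.norm(v)
    return v / n if n > 0 else v

img = np.zeros((48, 48))
img[:, 24:] = 1.0  # a vertical edge
print(hog_features(img).shape)  # (324,) = 6x6 cells x 9 bins
```

In HOG-ESRs, descriptors of this kind would be combined with the shared-representation ensemble; libraries such as scikit-image provide a production-grade `hog` implementation with block normalization.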


2017 ◽  
Author(s):  
Olga Krestinskaya ◽  
Alex Pappachen James

Recognition of human emotions from imaging templates is useful in a wide variety of human–computer interaction and intelligent systems applications. However, automatic recognition of facial expressions using image template matching suffers from the natural variability of facial features and recording conditions. Despite the progress achieved in facial emotion recognition in recent years, an effective and computationally simple feature selection and classification technique for emotion recognition remains an open problem. In this paper, we propose an efficient and straightforward facial emotion recognition algorithm that reduces inter-class pixel mismatch during classification. The proposed method applies pixel normalization to remove intensity offsets, followed by a Min-Max metric in a nearest neighbor classifier that is capable of suppressing feature outliers. The results indicate an improvement in recognition performance from 92.85% to 98.57% for the proposed Min-Max classification method when tested on the JAFFE database. The proposed emotion recognition technique outperforms existing template matching methods.
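The classification step can be sketched as follows, assuming the Min-Max metric is the elementwise min-over-max ratio of two intensity-normalized images (the exact normalization and metric form in the paper may differ):

```python
# Sketch of pixel normalization + Min-Max nearest-neighbour matching.
import numpy as np

def normalize(img: np.ndarray) -> np.ndarray:
    """Rescale pixel intensities to [0, 1] to remove intensity offsets."""
    img = img.astype(float)
    lo, hi = img.min(), img.max()
    return (img - lo) / (hi - lo) if hi > lo else np.zeros_like(img)

def min_max_similarity(a: np.ndarray, b: np.ndarray, eps: float = 1e-8) -> float:
    """Mean of elementwise min/max ratios; large outlier pixels only
    damp their own term, so outliers are suppressed rather than dominant."""
    lo = np.minimum(a, b)
    hi = np.maximum(a, b)
    return float(np.mean((lo + eps) / (hi + eps)))

def classify(test_img, templates, labels):
    """Nearest-neighbour label under the Min-Max similarity."""
    t = normalize(test_img)
    scores = [min_max_similarity(t, normalize(tpl)) for tpl in templates]
    return labels[int(np.argmax(scores))]

rng = np.random.default_rng(1)
happy, sad = rng.random((8, 8)), rng.random((8, 8))
probe = happy + 0.2  # same pattern, shifted intensity offset
print(classify(probe, [happy, sad], ["happy", "sad"]))  # happy
```

The offset-shifted probe matches its source template because normalization removes the intensity offset before the Min-Max comparison.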


2013 ◽  
Vol 61 (1) ◽  
pp. 7-15 ◽  
Author(s):  
Daniel Dittrich ◽  
Gregor Domes ◽  
Susi Loebel ◽  
Christoph Berger ◽  
Carsten Spitzer ◽  
...  

The present study examines the hypothesis of an alexithymia-associated deficit in recognizing emotional facial expressions in a clinical population. In addition, hypotheses concerning the relevance of specific emotion qualities and of gender differences are tested. 68 psychiatric inpatients and outpatients (44 women and 24 men) were assessed with the Toronto Alexithymia Scale (TAS-20), the Montgomery-Åsberg Depression Rating Scale (MADRS), the Symptom Checklist (SCL-90-R), and the Emotional Expression Multimorph Task (EEMT). The stimuli of the face recognition paradigm were facial expressions of basic emotions according to Ekman and Friesen, arranged into sequences of gradually increasing expression intensity. Using multiple regression analysis, we examined the association between TAS-20 score and facial emotion recognition (FER). While no significant association between TAS-20 score and FER emerged for the total sample or the male subsample, in the female subsample the TAS-20 score significantly predicted the total number of errors (β = .38, t = 2.055, p < 0.05) as well as errors in recognizing the emotions anger and disgust (anger: β = .40, t = 2.240, p < 0.05; disgust: β = .41, t = 2.214, p < 0.05). The TAS-20 score explained 13.3% of the variance for angry faces and 19.7% for disgusted faces. There was no association between alexithymia and the time at which participants stopped the emotional sequences to give their rating (response latency). The results support the existence of an alexithymia-associated deficit in recognizing emotional facial expressions in female participants in a heterogeneous clinical sample.
This deficit could at least partly account for the difficulties that highly alexithymic individuals experience in social interactions, and could thus explain a predisposition to psychiatric and psychosomatic disorders.
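The standardized β coefficients reported above come from regression on z-scored variables. A minimal single-predictor illustration with synthetic numbers (the study itself used multiple regression with covariates, so this is a simplification):

```python
# Illustration: standardized regression coefficient for one predictor.
import numpy as np

def standardized_beta(x: np.ndarray, y: np.ndarray) -> float:
    """Standardized OLS slope for a single predictor (equals Pearson's r)."""
    zx = (x - x.mean()) / x.std()
    zy = (y - y.mean()) / y.std()
    # OLS through the origin on z-scores: beta = sum(zx*zy) / sum(zx**2)
    return float(zx @ zy / (zx @ zx))

# Perfectly linear synthetic data: TAS-20 scores vs. recognition errors.
tas = np.arange(40.0, 80.0, 4.0)
errors = 0.5 * tas - 10.0
print(round(standardized_beta(tas, errors), 3))  # 1.0
```

With real, noisy data the standardized β falls between -1 and 1, and β² for a single predictor equals the proportion of variance explained, which is how figures like the 13.3% and 19.7% above relate to their coefficients.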


2017 ◽  
Vol 32 (8) ◽  
pp. 698-709 ◽  
Author(s):  
Ryan Sutcliffe ◽  
Peter G. Rendell ◽  
Julie D. Henry ◽  
Phoebe E. Bailey ◽  
Ted Ruffman

2020 ◽  
Vol 35 (2) ◽  
pp. 295-315 ◽  
Author(s):  
Grace S. Hayes ◽  
Skye N. McLennan ◽  
Julie D. Henry ◽  
Louise H. Phillips ◽  
Gill Terrett ◽  
...  
