Emotion-Age-Gender-Nationality Based Intention Understanding in Human–Robot Interaction Using Two-Layer Fuzzy Support Vector Regression

2015 ◽  
Vol 7 (5) ◽  
pp. 709-729 ◽  
Author(s):  
Lue-Feng Chen ◽  
Zhen-Tao Liu ◽  
Min Wu ◽  
Min Ding ◽  
Fang-Yan Dong ◽  
...


2019 ◽  
Vol 30 (1) ◽  
pp. 7-8
Author(s):  
Dora Maria Ballesteros

Artificial intelligence (AI) is an interdisciplinary subject in science and engineering that makes it possible for machines to learn from data. AI applications include prediction, recommendation, classification and recognition, object detection, natural language processing, and autonomous systems, among others. The topics of the articles in this special issue include deep learning applied to medicine [1, 3], support vector machines applied to ecosystems [2], human-robot interaction [4], clustering for the identification of anomalous patterns in communication networks [5], expert systems for the simulation of natural disaster scenarios [6], real-time artificial intelligence algorithms [7], and big data analytics for natural disasters [8].


Author(s):  
Zhen-Tao Liu ◽  
Si-Han Li ◽  
Wei-Hua Cao ◽  
Dan-Yun Li ◽  
Man Hao ◽  
...  

The efficiency of facial expression recognition (FER) is important for human-robot interaction. Detection of the facial region, extraction of discriminative facial expression features, and identification of facial expression categories all affect recognition accuracy and time efficiency. An FER framework is proposed in which 2D Gabor filters and local binary patterns (LBP) are combined to extract discriminative features from salient facial patches, and an extreme learning machine (ELM) is adopted to identify facial expression categories. The combination of 2D Gabor and LBP not only describes multiscale and multidirectional textural features, but also captures small local details. FER with ELM and a support vector machine (SVM) is evaluated on the Japanese Female Facial Expression database and the extended Cohn-Kanade database; both ELM and SVM achieve an accuracy of more than 85%, and ELM is computationally more efficient than SVM. The proposed framework has been used in a multimodal emotional communication based human-robot interaction system, in which FER completed within 2 seconds enables real-time interaction.
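A minimal sketch of the feature pipeline described above, using scikit-image for the 2D Gabor and LBP descriptors and scikit-learn for classification. The SVM baseline mentioned in the abstract stands in for ELM, which has no standard library implementation; the filter parameters, feature statistics, and dataset handling are illustrative assumptions rather than the authors' settings.

```python
# Hedged sketch of a Gabor + LBP feature pipeline with an SVM classifier.
# Parameters and patch handling are assumptions, not the paper's configuration.
import numpy as np
from skimage.filters import gabor
from skimage.feature import local_binary_pattern
from sklearn.svm import SVC

def gabor_lbp_features(face_patch, frequencies=(0.1, 0.2), n_orientations=4):
    """Concatenate multiscale/multidirectional Gabor statistics with an LBP histogram."""
    feats = []
    for f in frequencies:
        for k in range(n_orientations):
            theta = k * np.pi / n_orientations
            real, imag = gabor(face_patch, frequency=f, theta=theta)
            mag = np.hypot(real, imag)
            feats.extend([mag.mean(), mag.std()])   # coarse textural statistics
    lbp = local_binary_pattern(face_patch, P=8, R=1, method="uniform")
    hist, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)
    feats.extend(hist)                              # small local details
    return np.asarray(feats)

# X_patches: grayscale salient facial patches, y: expression labels (e.g., from JAFFE)
# clf = SVC(kernel="rbf").fit([gabor_lbp_features(p) for p in X_patches], y)
```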


Electronics ◽  
2020 ◽  
Vol 9 (11) ◽  
pp. 1761
Author(s):  
Martina Szabóová ◽  
Martin Sarnovský ◽  
Viera Maslej Krešňáková ◽  
Kristína Machová

This paper connects two large research areas, namely sentiment analysis and human–robot interaction. Emotion analysis, as a subfield of sentiment analysis, explores text data and, based on the characteristics of the text and generally known emotional models, evaluates which emotion is expressed in it. The analysis of emotions in human–robot interaction aims to evaluate the emotional state of the human and, on this basis, to decide how the robot should adapt its behavior. There are several approaches and algorithms for detecting emotions in text data. We applied a combined method that pairs a dictionary-based approach with machine learning algorithms. Because of the ambiguity and subjectivity of labeling emotions, more than one emotion could be assigned to a sentence; thus, we were dealing with a multi-label problem. Based on this framing of the problem, we performed experiments with Naive Bayes, Support Vector Machine, and Neural Network classifiers. The classification results were subsequently used in human–robot experiments. Despite the lower accuracy of emotion classification, we demonstrated the importance of expressing emotions through gestures based on the words spoken.
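The multi-label setup described above can be sketched with scikit-learn's one-vs-rest wrapper. The sketch below is an assumption-based illustration, not the authors' pipeline: the emotion labels, TF-IDF features, and toy sentences are placeholders, and the dictionary component of the combined method is omitted.

```python
# Hedged sketch of multi-label emotion classification with Naive Bayes and a linear SVM.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.multiclass import OneVsRestClassifier
from sklearn.naive_bayes import MultinomialNB
from sklearn.svm import LinearSVC
from sklearn.preprocessing import MultiLabelBinarizer
from sklearn.pipeline import make_pipeline

# Toy data: a sentence may carry more than one emotion (multi-label).
sentences = ["I can't believe you did that, I'm thrilled!",
             "This is terrible news and it scares me."]
labels = [["surprise", "joy"], ["sadness", "fear"]]

mlb = MultiLabelBinarizer()
Y = mlb.fit_transform(labels)                      # binary indicator matrix

nb_model = make_pipeline(TfidfVectorizer(), OneVsRestClassifier(MultinomialNB()))
svm_model = make_pipeline(TfidfVectorizer(), OneVsRestClassifier(LinearSVC()))

nb_model.fit(sentences, Y)
print(mlb.inverse_transform(nb_model.predict(["What wonderful and shocking news!"])))
```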


2021 ◽  
Vol 11 (16) ◽  
pp. 7426
Author(s):  
Furong Deng ◽  
Yu Zhou ◽  
Sifan Song ◽  
Zijian Jiang ◽  
Lifu Chen ◽  
...  

Gaze-following, which follows a person's gaze to estimate what object is being observed, is an effective way to understand intention in human–robot interaction. Most existing methods require the person and the object to appear in the same image. Due to the limited field of view of the camera, these methods are often not applicable in practice. To address this problem, we propose a gaze-following method that utilizes a geometric map for better estimation. With the help of the map, this method is competitive for cross-frame estimation. Building on this method, we propose a novel gaze-based image captioning system, which has been studied here for the first time. Our experiments demonstrate that the system follows the gaze and describes objects accurately. We believe that this system is well suited for rehabilitation training for autistic children, elderly-care service robots, and other applications.
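As a rough illustration of the core idea (not the authors' implementation), the sketch below casts an estimated 3D gaze ray from the head position into a geometric map of object positions and returns the mapped object best aligned with that ray, which also works when the object lies outside the current camera frame. The head position, gaze direction, object map, and angular threshold are all assumed inputs.

```python
# Hedged sketch: pick the mapped object whose direction is closest to the gaze ray.
import numpy as np

def follow_gaze(head_pos, gaze_dir, object_map, max_angle_deg=15.0):
    """Return the name of the mapped object best aligned with the gaze ray, or None."""
    gaze_dir = gaze_dir / np.linalg.norm(gaze_dir)
    best_name, best_angle = None, np.deg2rad(max_angle_deg)
    for name, obj_pos in object_map.items():
        to_obj = np.asarray(obj_pos) - head_pos
        cos_a = np.dot(gaze_dir, to_obj / np.linalg.norm(to_obj))
        angle = np.arccos(np.clip(cos_a, -1.0, 1.0))
        if angle < best_angle:                     # smaller angle = better alignment
            best_name, best_angle = name, angle
    return best_name

# Example: objects registered in the map even if not visible in the current frame.
objects = {"cup": (1.0, 0.2, 0.8), "book": (0.5, -1.0, 0.7)}
print(follow_gaze(np.array([0.0, 0.0, 1.6]), np.array([0.9, 0.2, -0.7]), objects))
```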

