Deep Learning and Sentiment Analysis for Human-Robot Interaction

Author(s):
Mattia Atzeni, Diego Reforgiato Recupero
Author(s):
Soo-Han Kang, Ji-Hyeong Han

Abstract: Robot vision provides the most important information to robots so that they can read the context and interact with human partners successfully. Moreover, the best way to let humans recognize the robot's visual understanding during human-robot interaction (HRI) is for the robot to explain its understanding in natural language. In this paper, we propose a new approach to interpret robot vision from an egocentric standpoint and to generate descriptions that explain egocentric videos, particularly for HRI. Because robot vision is equivalent to egocentric video on the robot's side, it contains as much egocentric-view information as exocentric-view information. We therefore propose a new dataset, referred to as the global, action, and interaction (GAI) dataset, which consists of egocentric video clips paired with natural-language GAI descriptions representing both egocentric and exocentric information. An encoder-decoder-based deep learning model is trained on the GAI dataset, and its performance on description generation is evaluated. We also conduct experiments in real environments to verify whether the GAI dataset and the trained deep learning model can improve a robot vision system.
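The abstract does not include the model itself, but the encoder-decoder set-up it describes can be sketched roughly as follows. This is a minimal illustration in PyTorch; the layer sizes, vocabulary size, and the use of precomputed per-frame CNN features are assumptions for the sketch, not the authors' actual GAI architecture.

```python
# Minimal sketch of an encoder-decoder video-description model of the kind
# the abstract describes. All dimensions and the feature extractor are
# illustrative assumptions, not the authors' published configuration.
import torch
import torch.nn as nn

class VideoCaptioner(nn.Module):
    def __init__(self, feat_dim=2048, hidden_dim=512, vocab_size=5000, embed_dim=256):
        super().__init__()
        # Encoder: summarize per-frame CNN features (assumed precomputed,
        # e.g. with a pretrained ResNet) into a single hidden state.
        self.encoder = nn.LSTM(feat_dim, hidden_dim, batch_first=True)
        # Decoder: generate the description token by token.
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.decoder = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, frame_feats, captions):
        # frame_feats: (batch, n_frames, feat_dim); captions: (batch, seq_len)
        _, (h, c) = self.encoder(frame_feats)   # encode the whole clip
        dec_in = self.embed(captions)           # teacher forcing at train time
        dec_out, _ = self.decoder(dec_in, (h, c))
        return self.out(dec_out)                # (batch, seq_len, vocab_size)

# Toy usage with random tensors standing in for GAI-style clips and captions.
model = VideoCaptioner()
feats = torch.randn(4, 16, 2048)        # 4 clips, 16 frames each
caps = torch.randint(0, 5000, (4, 12))  # 12-token descriptions
logits = model(feats, caps)
print(logits.shape)  # torch.Size([4, 12, 5000])
```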


Electronics, 2020, Vol. 9 (11), p. 1761
Author(s):
Martina Szabóová, Martin Sarnovský, Viera Maslej Krešňáková, Kristína Machová

This paper connects two large research areas, namely sentiment analysis and human–robot interaction. Emotion analysis, a subfield of sentiment analysis, explores text data and, based on the characteristics of the text and generally accepted emotional models, determines which emotion it expresses. Emotion analysis in human–robot interaction aims to evaluate the emotional state of the human and, on this basis, to decide how the robot should adapt its behavior. There are several approaches and algorithms for detecting emotions in text data. We decided to combine a dictionary-based approach with machine learning algorithms. Because labeling emotions is ambiguous and subjective, more than one emotion could be assigned to a single sentence; thus, we were dealing with a multi-label problem. Based on this analysis of the problem, we performed experiments with Naive Bayes, Support Vector Machine, and Neural Network classifiers. The classification results were subsequently used in human–robot experiments. Despite the lower accuracy of emotion classification, we demonstrated the importance of expressing emotion through gestures based on the words we speak.
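As a rough illustration of the multi-label formulation described above, the following scikit-learn sketch trains one binary classifier per emotion, using a linear SVM (one of the classifier families the paper mentions) over TF-IDF features. The toy sentences and emotion labels are invented placeholders, not the paper's dataset.

```python
# Sketch of a multi-label emotion classifier: each sentence may carry
# several emotions, so one binary classifier is fitted per label.
# The corpus and label set below are illustrative assumptions only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.multiclass import OneVsRestClassifier
from sklearn.preprocessing import MultiLabelBinarizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

texts = [
    "I am so happy to see you!",
    "This is terrifying and I hate it.",
    "What a wonderful surprise!",
    "I can't believe you did that, it makes me furious.",
]
labels = [{"joy"}, {"fear", "anger"}, {"joy", "surprise"}, {"anger", "surprise"}]

# Turn the label sets into a binary indicator matrix (multi-label target).
mlb = MultiLabelBinarizer()
y = mlb.fit_transform(labels)

# One linear SVM per emotion, trained on TF-IDF features.
clf = make_pipeline(TfidfVectorizer(), OneVsRestClassifier(LinearSVC()))
clf.fit(texts, y)

# A sentence can come back with zero, one, or several predicted emotions.
pred = clf.predict(["You scared me, but what a surprise!"])
print(mlb.inverse_transform(pred))
```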


2019, Vol. 14 (1), pp. 22-30
Author(s):
Dongkeon Park, Kyeong-Min Kang, Jin-Woo Bae, Ji-Hyeong Han

2020, Vol. 53 (5), pp. 750-755
Author(s):
Lei Shi, Cosmin Copot, Steve Vanlanduit
