Object Learning with Natural Language in a Distributed Intelligent System: A Case Study of Human-Robot Interaction

Author(s): Stefan Heinrich, Pascal Folleher, Peer Springstübe, Erik Strahl, Johannes Twiefel, ...


2020 · Vol 10 (17) · pp. 5757
Author(s): Elena Laudante, Alessandro Greco, Mario Caterino, Marcello Fera

In current industrial systems, automation is a key factor in manufacturing performance with respect to working times, accuracy of operations, and quality. In particular, introducing a robotic system into the working area should guarantee improvements such as reduced risks for human operators, better quality results, and faster production processes. In this context, human action still remains necessary to carry out part of the subtasks, as in the case of composites assembly processes. This study presents a case study on the reorganization of the working activity carried out in a workstation in which a composite fuselage panel is assembled, in order to demonstrate, by means of a simulation tool, that some of the advantages listed above can also be achieved in the aerospace industry. In particular, an entire working process for composite fuselage panel assembly is simulated and analyzed to verify the applicability and effectiveness of human-robot interaction (HRI), focusing on working times and ergonomics while respecting the constraints imposed by the ISO 10218 and ISO/TS 15066 standards. Results show the effectiveness of HRI both in terms of assembly performance, by reducing working times, and in terms of ergonomics, for which the simulation reports a very low risk index.


Author(s): Laura Fiorini, Raffaele Limosani, Raffaele Esposito, Alessandro Manzi, Alessandra Moschetti, ...

Author(s): Soo-Han Kang, Ji-Hyeong Han

Abstract: Robot vision provides the most important information to robots so that they can read the context and interact with human partners successfully. Moreover, to allow humans to recognize the robot's visual understanding during human-robot interaction (HRI), the best way is for the robot to provide an explanation of its understanding in natural language. In this paper, we propose a new approach to interpret robot vision from an egocentric standpoint and generate descriptions that explain egocentric videos, particularly for HRI. Because robot vision corresponds to egocentric video from the robot's side, it contains egocentric-view as well as exocentric-view information. Thus, we propose a new dataset, referred to as the global, action, and interaction (GAI) dataset, which consists of egocentric video clips and GAI descriptions in natural language representing both egocentric and exocentric information. An encoder-decoder-based deep learning model is trained on the GAI dataset, and its performance on description-generation assessments is evaluated. We also conduct experiments in actual environments to verify whether the GAI dataset and the trained deep learning model can improve a robot vision system.
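The encoder-decoder pipeline summarized above can be illustrated at a toy level: an encoder pools per-frame features into a single clip embedding, and a decoder greedily emits words conditioned on that embedding. This is a minimal sketch only; the vocabulary, weights, class names, and the nearest-embedding decoding rule are all illustrative assumptions and do not reproduce the authors' GAI model.

```python
import numpy as np

def encode_clip(frames):
    """Toy encoder: mean-pool per-frame feature vectors (T, d) into one clip embedding (d,)."""
    return np.mean(frames, axis=0)

class ToyDecoder:
    """Toy greedy decoder: at each step, emit the vocabulary word whose
    embedding is most aligned with the current context vector."""
    def __init__(self, vocab, embeddings):
        self.vocab = vocab            # list of words, including "<eos>"
        self.embeddings = embeddings  # array of shape (len(vocab), d)

    def decode(self, context, max_len=5):
        words = []
        for _ in range(max_len):
            scores = self.embeddings @ context     # similarity to each word
            idx = int(np.argmax(scores))
            word = self.vocab[idx]
            if word == "<eos>":
                break
            words.append(word)
            # toy recurrence: move the context toward the emitted word
            context = 0.5 * context + 0.5 * self.embeddings[idx]
        return " ".join(words)
```

In a real system the mean-pooling encoder would be a convolutional or transformer video encoder and the decoder a trained recurrent or transformer language model; the greedy nearest-embedding step stands in for the decoder's learned softmax over the vocabulary.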


2013 · pp. 257-280
Author(s): Wenjie Yan, Elena Torta, David van der Pol, Nils Meins, Cornelius Weber, ...

This chapter presents an overview of a typical Ambient Assisted Living (AAL) scenario in which a robot navigates to a person to convey information. Indoor robot navigation is a challenging task due to the complexity of real-home environments and the need for online learning abilities to adjust to dynamic conditions. A comparison between systems with different sensor types shows that vision-based systems promise good performance and a wide scope of usage at reasonable cost. Moreover, vision-based systems can perform different tasks simultaneously by applying different algorithms to the input data stream, thus enhancing the flexibility of the system. The authors introduce the state of the art of several computer vision methods for realizing indoor robotic navigation to a person and human-robot interaction. A case study has been conducted in which a robot, as part of an AAL system, navigates to a person and interacts with her. The authors evaluate this test case and give an outlook on the potential of learning robot vision in ambient homes.

