Industrielle Mensch-Roboter-Interaktion in KMU/Industrial human-robot collaboration in SMEs – SMEs underestimate the potential of human-robot collaboration

2020 ◽  
Vol 110 (03) ◽  
pp. 146-150
Author(s):  
Marco Baumgartner ◽  
Tobias Kopp ◽  
Steffen Kinkel

According to experts, industrial human-robot interaction (HRI) is particularly suitable for the specific production conditions of small and medium-sized enterprises (SMEs). Nevertheless, HRI solutions are currently found mainly in large companies. An empirical survey of 81 representatives of German industrial companies suggests that this is not just due to barriers in implementing collaborative robots. Rather, SMEs seem to systematically underestimate the potential of HRI solutions.

Author(s):  
Roberta Etzi ◽  
Siyuan Huang ◽  
Giulia Wally Scurati ◽  
Shilei Lyu ◽  
Francesco Ferrise ◽  
...  

The use of collaborative robots in the manufacturing industry has spread widely over the last decade. To be efficient, human-robot collaboration needs to be properly designed, also taking into account the operator's psychophysiological reactions. Virtual Reality can be used as a tool to simulate human-robot collaboration safely and cheaply. Here, we present a virtual collaborative platform in which the human operator and a simulated robot coordinate their actions to accomplish a simple assembly task. In this study, the robot moved at a slower or a faster velocity in order to assess the effect of its speed on the human's responses. Ten participants tested this application using an Oculus Rift head-mounted display; ARTracking cameras and a Kinect system were used to track the operator's right-arm movements and hand gestures, respectively. Performance, user experience, and physiological responses were recorded. The results showed that while the participants' performance and evaluations varied as a function of the robot's velocity, no differences were found in the physiological responses. Taken together, these data highlight the relevance of the kinematic aspects of a robot's motion within human-robot collaboration and provide valuable insights for further developing our virtual human-machine interactive platform.


2019 ◽  
Vol 38 (6) ◽  
pp. 747-765 ◽  
Author(s):  
Federica Ferraguti ◽  
Chiara Talignani Landi ◽  
Lorenzo Sabattini ◽  
Marcello Bonfè ◽  
Cesare Fantuzzi ◽  
...  

Admittance control allows a desired dynamic behavior to be reproduced on a non-backdrivable manipulator and it has been widely used for interaction control and, in particular, for human–robot collaboration. Nevertheless, stability problems arise when the environment (e.g. the human) the robot is interacting with becomes too stiff. In this paper, we investigate the stability issues related to a change of stiffness of the human arm during the interaction with an admittance-controlled robot. We propose a novel method for detecting the rise of instability and a passivity-preserving strategy for restoring a stable behavior. The results of the paper are validated on two robotic setups and with 50 users performing two tasks that emulate industrial operations.
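The admittance control scheme the abstract builds on maps a measured interaction force to a commanded motion through virtual mass-damper dynamics. The sketch below is a minimal 1-DoF illustration of that idea; the gains and the constant-force scenario are our own illustrative choices, not taken from the paper, and the paper's instability-detection and passivity-preserving strategies are not modelled here.

```python
# Minimal 1-DoF admittance controller sketch: the virtual dynamics
#   M * a + D * v = f_ext
# turn a measured interaction force into a commanded velocity/position.
# Gains (M, D) and the force profile are illustrative assumptions only.

def simulate_admittance(forces, M=2.0, D=10.0, dt=0.01):
    """Integrate the virtual mass-damper dynamics over force samples."""
    v, x = 0.0, 0.0
    positions, velocities = [], []
    for f in forces:
        a = (f - D * v) / M      # virtual inertia and damping
        v += a * dt              # explicit Euler integration
        x += v * dt
        velocities.append(v)
        positions.append(x)
    return positions, velocities

# A constant 5 N push: the commanded velocity settles toward f/D = 0.5 m/s.
pos, vel = simulate_admittance([5.0] * 2000)
print(round(vel[-1], 3))
```

A stiff environment (e.g. a co-contracted human arm) effectively adds a large position-dependent force to `f`, which is where the stability issues the paper analyses arise.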


2021 ◽  
Vol 8 ◽  
Author(s):  
Sebastian Zörner ◽  
Emy Arts ◽  
Brenda Vasiljevic ◽  
Ankit Srivastava ◽  
Florian Schmalzl ◽  
...  

As robots become more advanced and capable, developing trust is an important factor in human-robot interaction and cooperation. However, as multiple environmental and social factors can influence trust, more elaborate scenarios and methods are needed to measure human-robot trust. A widely used measurement of trust in social science is the investment game. In this study, we propose a scaled-up, immersive, science-fiction Human-Robot Interaction (HRI) scenario built upon the investment game, which adapts it to measure human-robot trust while providing intrinsic motivation for human-robot collaboration. For this purpose, we utilize two Neuro-Inspired COmpanion (NICO) robots and a projected scenery. We investigate the applicability of our space-mission experiment design to measure trust and the impact of non-verbal communication. We observe a correlation of 0.43 (p=0.02) between self-assessed trust and trust measured from the game, and a positive impact of non-verbal communication on trust (p=0.0008) and on robot perception in terms of anthropomorphism (p=0.007) and animacy (p=0.00002). We conclude that our scenario is an appropriate method to measure trust in human-robot interaction and to study how non-verbal communication influences a human's trust in robots.
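The reported r = 0.43 validation compares questionnaire trust scores against the behavioural trust signal from the investment game. A minimal sketch of that analysis step, with invented data (the paper's raw scores are not given here):

```python
# Sketch of the trust-validation analysis: Pearson correlation between
# self-assessed trust scores and the fraction invested in the game.
# The sample data are hypothetical; the paper reports r = 0.43 (p = 0.02).

import statistics

def pearson_r(xs, ys):
    """Sample Pearson correlation coefficient."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

self_assessed = [3.1, 4.5, 2.0, 4.9, 3.8, 2.6]   # e.g. Likert-scale means
invested      = [0.4, 0.7, 0.2, 0.9, 0.5, 0.3]   # fraction of credits given
print(round(pearson_r(self_assessed, invested), 2))
```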


Procedia CIRP ◽  
2016 ◽  
Vol 44 ◽  
pp. 275-280 ◽  
Author(s):  
C. Thomas ◽  
L. Stankiewicz ◽  
A. Grötsch ◽  
S. Wischniewski ◽  
J. Deuse ◽  
...  

ACTA IMEKO ◽  
2020 ◽  
Vol 9 (4) ◽  
pp. 80
Author(s):  
Castrese Di Marino ◽  
Andrea Rega ◽  
Ferdinando Vitolo ◽  
Stanislao Patalano ◽  
Antonio Lanzotti

<p class="Abstract">This paper deals with collaborative robotics by highlighting the main issues linked to the interaction between humans and robots. A critical study of the standards in force on human–robot interaction and the current principles on workplace design for human–robot collaboration (HRC) are presented. The paper focuses on an anthropocentric paradigm in which the human becomes the core of the workplace in combination with the robot, and it presents a basis for designing workplaces through two key concepts: (i) the introduction of human and robot spaces as elementary spaces and (ii) the dynamic variations of the elementary spaces in shape, size and position. According to this paradigm, the limitations of a safety-based approach, introduced by the standards, are overcome by positioning the human and the robot inside the workplace and managing their interaction through the elementary spaces. The introduced concepts, in combination with the safety prescriptions, have been organised by means of a multi-level graph for driving the HRC design phase. The collaborative workplace is separated into sublevels. The main elements of a collaborative workplace are identified and their relationships presented by means of digraphs.  </p>


2021 ◽  
Vol 24 (4) ◽  
pp. 180-199
Author(s):  
R. R. Galin ◽  
V. V. Serebrennyj ◽  
G. K. Tevyashov ◽  
A. A. Shiroky

Purpose of research: to identify solvable tasks for increasing the effectiveness of collaborative interaction between people and robots in ergatic, or collaborative, robotic systems. Methods. To achieve this goal, a comprehensive analysis of works published in highly rated peer-reviewed open-access scientific publications was carried out. § 1 describes the main terms and concepts of collaborative robotics and their current understanding in the research community, the structure of workspaces in the human-robot interaction zone, and the criteria for assigning a robot to the class of collaborative robots. § 2 describes the criteria for safe interaction of a person and a robot in a shared workspace. § 3 describes various grounds for classifying human-robot interactions in collaborative robotic systems. Results. A significant part of the published work on collaborative robotics is devoted to organising safe human-robot interaction; less attention is paid to improving the effectiveness of such interaction. A current task in improving the efficiency of collaborative robotic systems is the identification of problems that have already been solved in other areas, in particular in the management of organizational systems. § 4 discusses the applicability of the term "team" to collaborative robots in a collaborative robotic system. § 5 proposes a formal statement of the problem of optimal work distribution in a team of collaborative robots, analogous to the problem of heterogeneous team formation in the theory of organizational systems management. Conclusions. The proposed problem statement shows that results obtained for mathematical models of team formation and functioning can be used to control collaborative robotic systems in order to increase the efficiency of human-robot interaction. A promising direction is to continue adapting models and governance mechanisms from the theory of organizational systems management and the methodology of integrated activities.
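The work-distribution problem posed in § 5 is, in its simplest form, an assignment problem: allocate tasks to robots so that total cost is minimal. A minimal brute-force sketch, with an invented cost matrix (the paper's formal setting is richer than this):

```python
# Sketch of optimal work distribution in a team of collaborative robots:
# assign one task per robot so that total cost (e.g. execution time) is
# minimal. Brute force over permutations is viable only for small teams;
# the cost matrix entries are hypothetical task times in seconds.

from itertools import permutations

def optimal_assignment(cost):
    """cost[i][j] = cost of robot i performing task j (square matrix).

    Returns (assignment, total) where assignment[i] is the task of robot i.
    """
    n = len(cost)
    best_perm, best_cost = None, float("inf")
    for perm in permutations(range(n)):
        total = sum(cost[i][perm[i]] for i in range(n))
        if total < best_cost:
            best_perm, best_cost = perm, total
    return best_perm, best_cost

times = [[4, 2, 8],
         [4, 3, 7],
         [3, 1, 6]]
assignment, total = optimal_assignment(times)
print(assignment, total)
```

For larger teams the same problem is solved in polynomial time by the Hungarian algorithm (e.g. SciPy's `linear_sum_assignment`).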


Author(s):  
Kimberly A. Pollard ◽  
Stephanie M. Lukin ◽  
Matthew Marge ◽  
Ashley Foots ◽  
Susan G. Hill

Industry, military, and academia are showing increasing interest in collaborative human-robot teaming in a variety of task contexts. Designing effective user interfaces for human-robot interaction is an ongoing challenge, and a variety of single- and multiple-modality interfaces have been explored. Our work develops a bi-directional natural language interface for remote human-robot collaboration in physically situated tasks. Combined with a visual interface and audio cueing, the natural language interface is intended to provide a naturalistic user experience that requires little training. Building the language portion of this interface requires first understanding how potential users would speak to the robot. In this paper, we describe our elicitation of minimally constrained robot-directed language, observations about the users' language behavior, and future directions for constructing an automated robotic system that can accommodate these language needs.


Author(s):  
Scott A. Green ◽  
Mark Billinghurst ◽  
XiaoQi Chen ◽  
J. Geoffrey Chase

Future space exploration will demand the cultivation of human-robotic systems; however, little attention has been paid to the development of human-robot teams. Current methods for autonomous plan creation are often complex and difficult to use, so a system is needed that enables humans and robotic systems to collaborate naturally and effectively. Effective collaboration takes place when the participants are able to communicate in a natural and effective manner. Grounding (the common understanding between conversational participants), shared spatial referencing, and situational awareness are crucial components of communication and collaboration. This paper briefly reviews the fields of human-robot interaction and Augmented Reality (AR), the overlaying of computer graphics onto a view of the real world. The strengths of AR, and how they might enable more effective human-robot collaboration, are discussed. We then describe an architecture we have developed that uses AR as a means for real-time understanding of the shared spatial scene. This architecture enables grounding and enhances situational awareness, thus laying the necessary groundwork for natural and effective human-robot collaboration.


2020 ◽  
Vol 17 (3) ◽  
pp. 172988142092529
Author(s):  
Junhao Xiao ◽  
Pan Wang ◽  
Huimin Lu ◽  
Hui Zhang

Human–robot interaction is a vital part of human–robot collaborative space exploration: it bridges the high-level decision-making and path-planning intelligence of the human and the accurate sensing and modelling ability of the robot. However, most conventional human–robot interaction approaches rely on video streams for the operator to understand the robot's surroundings, which lacks situational awareness and leaves the operator stressed and fatigued. This research aims to improve efficiency and promote a natural level of interaction for human–robot collaboration. We present a human–robot interaction method based on real-time mapping and online virtual reality visualization, implemented and verified for rescue robotics. On the robot side, a dense point cloud map is built in real time by tightly coupled LiDAR-IMU fusion; the resulting map is further transformed into a three-dimensional normal distributions transform (NDT) representation. Wireless communication is employed to transmit the three-dimensional NDT map to the remote control station incrementally. At the remote control station, the received map is rendered in virtual reality using parameterized ellipsoid cells. The operator controls the robot in three modes. In complex areas, the operator can use interactive devices to give low-level motion commands. In less unstructured regions, the operator can specify a path or even a target point; the robot then follows the path or navigates to the target point autonomously. In other words, these two modes rely more on the robot's autonomy. By virtue of the virtual reality visualization, the operator gains a more comprehensive understanding of the space to be explored, so that the high-level decision-making and path-planning intelligence of the human and the accurate sensing and modelling ability of the robot can be well integrated as a whole. 
Although the method is proposed for rescue robots, it can also be used in other out-of-sight, teleoperation-based human–robot collaboration systems, including but not limited to manufacturing, space, undersea, surgery, agriculture and military operations.
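The NDT representation mentioned above summarises each voxel of the point cloud by the mean and covariance of the points it contains, which is what the control station can render as an ellipsoid. A minimal sketch of that conversion step; the voxel size and minimum point count are our illustrative choices, not the paper's parameters:

```python
# Sketch of the 3D-NDT map step: group map points into voxels and summarise
# each voxel by the sample mean and covariance of its points. Downstream,
# each (mean, covariance) pair can be rendered as a parameterized ellipsoid.
# Voxel size and the minimum point count are illustrative assumptions.

from collections import defaultdict

def ndt_cells(points, voxel=0.5, min_points=3):
    """Return {voxel_index: (mean, covariance)} for sufficiently full voxels."""
    cells = defaultdict(list)
    for p in points:
        idx = tuple(int(c // voxel) for c in p)
        cells[idx].append(p)
    result = {}
    for idx, pts in cells.items():
        if len(pts) < min_points:
            continue  # too few samples for a meaningful distribution
        n = len(pts)
        mean = [sum(p[d] for p in pts) / n for d in range(3)]
        cov = [[sum((p[r] - mean[r]) * (p[c] - mean[c]) for p in pts) / (n - 1)
                for c in range(3)] for r in range(3)]
        result[idx] = (mean, cov)
    return result

scan = [(0.1, 0.1, 0.1), (0.2, 0.1, 0.1), (0.1, 0.2, 0.15), (2.0, 2.0, 2.0)]
print(sorted(ndt_cells(scan)))   # the isolated point's voxel is dropped
```

Transmitting these per-voxel statistics instead of raw points is what makes the incremental wireless map updates described above cheap.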

