Spatial Programming for Industrial Robots: Efficient, Effective and User-Optimised through Natural Communication and Augmented Reality

2014 ◽  
Vol 1018 ◽  
pp. 39-46 ◽  
Author(s):  
Jens Lambrecht ◽  
Jörg Krüger

In this paper, we present a robot programming system that takes into account natural communication and process-integrated simulation, as well as a unified robot control layer and an interface towards the Digital Factory for program transmission. We chose an integrative approach including markerless gesture recognition and a mobile Augmented Reality simulation on common handheld devices, e.g. smartphones or tablet PCs. The user is enabled to draw poses and trajectories into the workspace of the robot, supported by simultaneous visual feedback in Augmented Reality. In addition, the user can adapt the robot program by gestural manipulation of poses and trajectories. Within a task-oriented implementation of the robot program, a pick-and-place task was realised following the programming-by-demonstration principle. With the help of a user study, we evaluate programming duration, programming errors and subjective assessment compared with Teach-In and Offline Programming. The analysis of the results shows a significant reduction of programming duration as well as a reduction of programming errors compared with Teach-In. Furthermore, most participants favour the spatial programming system.
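Gesture-drawn trajectories are typically too jittery and unevenly sampled to execute directly, so a system like the one described would resample them into evenly spaced waypoints before generating robot poses. A minimal sketch of such arc-length resampling (not the authors' implementation; the function name, spacing, and sample stroke are illustrative):

```python
import numpy as np

def resample_trajectory(points, spacing):
    """Resample a gesture-drawn 3D trajectory into evenly spaced waypoints.

    points  : (n, 3) raw positions drawn by the user in the robot workspace.
    spacing : desired distance between consecutive robot waypoints (metres).
    """
    points = np.asarray(points, dtype=float)
    seg = np.linalg.norm(np.diff(points, axis=0), axis=1)
    s = np.concatenate([[0.0], np.cumsum(seg)])      # arc length at each raw point
    targets = np.arange(0.0, s[-1] + 1e-9, spacing)  # evenly spaced arc lengths
    # Linearly interpolate each coordinate over arc length.
    return np.column_stack([np.interp(targets, s, points[:, k]) for k in range(3)])

# A jittery hand-drawn stroke from (0,0,0) towards (0.4,0,0), resampled every 0.1 m.
raw = [[0, 0, 0], [0.11, 0.01, 0], [0.19, -0.01, 0], [0.31, 0, 0], [0.4, 0, 0]]
waypoints = resample_trajectory(raw, 0.1)
print(len(waypoints))  # -> 5 evenly spaced poses along the stroke
```

In a real system each waypoint would additionally carry an orientation, e.g. taken from the handheld device's pose at draw time.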

2021 ◽  
Vol 11 (13) ◽  
pp. 6047
Author(s):  
Soheil Rezaee ◽  
Abolghasem Sadeghi-Niaraki ◽  
Maryam Shakeri ◽  
Soo-Mi Choi

A lack of required data resources is one of the challenges to the adoption of Augmented Reality (AR) for providing the right services to users, whereas the amount of spatial information produced by people is increasing daily. This research aims to design a personalized AR-based tourist system that retrieves big data according to the users' demographic contexts in order to enrich the AR data source in tourism. The research is conducted in two main steps. First, the type of tourist attraction the user is interested in is predicted from the user's demographic contexts, which include age, gender, and education level, using a machine learning method. Second, the relevant data for the user are extracted from the big data by considering time, distance, popularity, and the neighborhood of the tourist places, using the VIKOR and SWARA decision-making methods. The results show about 6% better performance of the decision tree in predicting the type of tourist attraction when compared to the SVM method. In addition, the results of the user study of the system show the overall satisfaction of the participants in terms of ease of use (about 55%) and in terms of the system's usefulness (about 56%).
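The VIKOR step can be sketched compactly: alternatives are scored per criterion, weighted distances to the per-criterion ideal are aggregated, and the compromise index Q ranks the alternatives. A toy sketch, not the paper's implementation; the criteria values and weights below are invented (weights such as these might come from SWARA):

```python
import numpy as np

def vikor_rank(scores, weights, v=0.5):
    """Rank alternatives with the VIKOR method.

    scores  : (n_alternatives, n_criteria) matrix, higher = better.
    weights : criterion weights summing to 1.
    v       : trade-off between group utility and individual regret.
    Returns Q values; lower Q = better compromise ranking.
    """
    scores = np.asarray(scores, dtype=float)
    best = scores.max(axis=0)   # ideal value per criterion
    worst = scores.min(axis=0)  # anti-ideal value per criterion
    # Weighted normalized distance to the ideal solution.
    d = weights * (best - scores) / (best - worst)
    s = d.sum(axis=1)           # group utility per alternative
    r = d.max(axis=1)           # individual regret per alternative
    return (v * (s - s.min()) / (s.max() - s.min())
            + (1 - v) * (r - r.min()) / (r.max() - r.min()))

# Three hypothetical attractions scored on time, distance, and popularity.
scores = [[0.9, 0.2, 0.8],
          [0.5, 0.9, 0.6],
          [0.1, 0.4, 0.3]]
weights = np.array([0.5, 0.3, 0.2])
q = vikor_rank(scores, weights)
print(q.argmin())  # -> 1, the best compromise alternative
```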


2020 ◽  
Vol 4 (4) ◽  
pp. 78
Author(s):  
Andoni Rivera Pinto ◽  
Johan Kildal ◽  
Elena Lazkano

In the context of industrial production, a worker who wants to program a robot using the hand-guidance technique requires that the robot be available for programming and not in operation, which means that production with that robot is stopped during that time. A way around this constraint is to perform the same manual guidance steps on a holographic representation of the digital twin of the robot, using augmented reality technologies. However, this approach suffers from the lack of tangibility of the visual holograms that the user tries to grab. We present an interface in which some of that tangibility is provided through ultrasound-based mid-air haptic actuation. We report a user study evaluating the impact that such haptic feedback has on a pick-and-place task involving the wrist of a holographic robot arm; we found its presence to be beneficial.


Author(s):  
Francesco Laera ◽  
Vito M. Manghisi ◽  
Alessandro Evangelista ◽  
Mario Massimo Foglia ◽  
Michele Fiorentino

Author(s):  
Vladimir Kuts ◽  
Tauno Otto ◽  
Yevhen Bondarenko ◽  
Fei Yu

Abstract An industrial Digital Twin (DT) is a precise virtual representation of the manufacturing environment and mainly consists of a system-level simulation that combines both the manufacturing processes and parametric models of the product. As one of the pillars of the Industry 4.0 paradigm, DTs are widely integrated into existing factories, enhancing the concept of the virtual factory. From a research perspective, experiments on the Internet of Things, data acquisition, cybersecurity, telemetry synchronization with physical factories, etc. are being executed in these virtual simulations. Moreover, new ways of interacting with, overseeing and learning from the factory are being developed with the assistance of Virtual Reality (VR) and Augmented Reality (AR) technologies, which are already widespread on the consumer market. VR is already widely used in existing commercial software packages and toolboxes to provide students, teachers, operators, engineers, production managers, and researchers with an immersive way of interacting with the factory while the manufacturing simulation is running. This gives a better understanding and more in-depth knowledge of the actual manufacturing processes without directly accessing them. However, this virtual-presence experience is limited to a single person, and it does not enable additional functionalities for the simulations, such as re-planning or even re-programming of the physical factory over an online connection using VR or AR interfaces. The main aim of this paper is to enhance existing DTs, fully synchronized with the physical world, with a multi-user experience, enabling factory operators to work with and re-program real machinery from remote locations in a more intuitive way, thinking about the final aim rather than about the process itself.
Moreover, being developed on the real-time platform Unity3D, this multiplayer solution offers opportunities for training and educational purposes and connects people across remote locations of the world. The use-cases exploit industrial robots placed in the Industrial Virtual and Augmented Reality Laboratory environment of Tallinn University of Technology and a mobile robot solution developed through a collaboration between the University of Southern Denmark and a Danish company. Experiments on re-programming the physical heavy industrial robots are performed over a connection between Estonia and Denmark. Furthermore, the mobile robot solution is demonstrated in a virtual warehouse environment. The developed methods and environments, together with the collected data, will enable us to widen the use-cases to non-manufacturing scenarios, i.e., the smart city and smart healthcare domains, for the creation of a set of new interfaces and multiplayer experiences.
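Telemetry synchronization between a physical robot and its remote twin replicas reduces, at its core, to timestamped state messages applied last-write-wins at every connected site. A hypothetical minimal sketch of that mechanism (not the Unity3D implementation described in the paper; message format and class names are invented):

```python
import json
import time
from dataclasses import dataclass, field

@dataclass
class TwinState:
    """Last-write-wins joint-state replica shared by all connected clients."""
    joints: list = field(default_factory=lambda: [0.0] * 6)
    stamp: float = 0.0

    def apply(self, msg: str) -> bool:
        """Apply a telemetry message; ignore stale (out-of-order) updates."""
        data = json.loads(msg)
        if data["stamp"] <= self.stamp:
            return False
        self.joints, self.stamp = data["joints"], data["stamp"]
        return True

def telemetry_msg(joints, stamp=None) -> str:
    """Serialize a joint-angle sample as it might travel between sites."""
    return json.dumps({"joints": joints, "stamp": stamp or time.time()})

twin = TwinState()
twin.apply(telemetry_msg([0.1, 0.2, 0.0, 0.0, 1.5, 0.0], stamp=2.0))
late = twin.apply(telemetry_msg([9.9] * 6, stamp=1.0))  # arrives out of order
print(twin.joints[4], late)  # the stale sample is discarded
```

The same apply-if-newer rule works in both directions, so a VR client's re-programming commands and the physical robot's telemetry can share one message path.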


Author(s):  
Prabha Selvaraj ◽  
Sumathi Doraikannan ◽  
Anantha Raman Rathinam ◽  
Balachandrudu K. E.

Today, technology evolves in two different directions: creating a new technology to meet a requirement and solve a problem, or solving it with existing technology. This chapter discusses augmented reality in detail, its use in the real world, and its application domains, such as medicine, education, health, gaming, tourism, film and entertainment, and architecture and development. Many think that AR is only for smartphones, but there are other ways to enhance our perception of the world: augmented realities can be presented on an extensive range of displays, monitors, screens, handheld devices, or glasses. This chapter provides information about the key components of AR devices, gives a view of the different types of AR, and shows how the technology can be adapted for multiple purposes based on the required type of view.


Author(s):  
Hosei Matsuoka

This chapter presents a method of aerial acoustic communication in which data is modulated using OFDM (Orthogonal Frequency Division Multiplexing) and embedded in regular audio material without significantly degrading the quality of the original sound. It can provide data transmission at several hundred bps, which is much higher than is possible with other audio data-hiding techniques. The proposed method replaces the high-frequency band of the audio signal with OFDM carriers, each of which is power-controlled according to the spectrum envelope of the original audio signal. The implemented system enables the transmission of short text messages from loudspeakers to mobile handheld devices at a distance of around 3 m. The chapter also provides the subjective assessment results for audio clips embedded with OFDM signals.
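The embedding scheme described, high-band spectral bins replaced by carriers power-controlled to the original envelope, can be illustrated with a toy single-frame BPSK variant. This is a simplification of the chapter's method, not its implementation; the frame size, band position, and modulation are assumptions:

```python
import numpy as np

def embed_ofdm(frame, bits, band_start):
    """Embed one OFDM-style symbol into the high band of an audio frame.

    Replaces the FFT bins above `band_start` with BPSK-modulated carriers
    whose magnitudes follow the original spectral envelope, keeping the
    embedded signal perceptually close to the original.
    """
    spec = np.fft.rfft(frame)
    band = slice(band_start, band_start + len(bits))
    envelope = np.abs(spec[band])          # original per-bin magnitude
    symbols = 2 * np.asarray(bits) - 1     # bits {0,1} -> BPSK {-1,+1}
    spec[band] = envelope * symbols        # power-controlled carriers
    return np.fft.irfft(spec, n=len(frame))

def extract_ofdm(frame, n_bits, band_start):
    """Recover the bits from the sign of the real part of each carrier."""
    spec = np.fft.rfft(frame)
    return (spec[band_start:band_start + n_bits].real > 0).astype(int)

rng = np.random.default_rng(0)
frame = rng.standard_normal(512)           # stand-in for one audio frame
bits = [1, 0, 1, 1, 0, 0, 1, 0]
carrier_frame = embed_ofdm(frame, bits, band_start=180)
print(extract_ofdm(carrier_frame, 8, 180).tolist())  # -> [1, 0, 1, 1, 0, 0, 1, 0]
```

A real acoustic channel would additionally need synchronization, guard intervals, and error-correction coding; the round trip above only demonstrates the envelope-shaped carrier idea over a clean channel.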


2019 ◽  
Vol 9 (23) ◽  
pp. 5123 ◽  
Author(s):  
Diego Vaquero-Melchor ◽  
Ana M. Bernardos

Nowadays, Augmented-Reality (AR) head-mounted displays (HMDs) deliver an immersive visualization of virtual contents, but the available means of interaction, mainly based on gesture and/or voice, are still limited and clearly lack the realism and expressivity of traditional physical means. In this sense, the integration of haptics within AR may help to deliver an enriched experience, while facilitating the performance of specific actions, such as repositioning or resizing tasks, that are still dependent on the user's skills. In this direction, this paper describes a flexible architecture designed to deploy haptically enabled AR applications for both mobile and wearable visualization devices. The haptic feedback may be generated through a variety of devices (e.g., wearable, graspable, or mid-air ones), and the architecture facilitates handling the specificity of each. For this reason, the paper discusses how to generate a haptic representation of a 3D digital object depending on the application and the target device. Additionally, the paper includes an analysis of practical, relevant issues that arise when setting up a system to work with specific devices like HMDs (e.g., HoloLens) and mid-air haptic devices (e.g., Ultrahaptics), such as the alignment between the real world and the virtual one. The applicability of the architecture is demonstrated through the implementation of two applications: (a) Form Inspector and (b) Simon Game, built for HoloLens and iOS mobile phones for visualization and for UHK for mid-air haptics delivery. These applications were used to explore, with nine users, the efficiency, meaningfulness, and usefulness of mid-air haptics for form perception, object resizing, and push interaction tasks.
Results show that, although mobile interaction is preferred when that option is available, haptics turn out to be more meaningful than users initially expect in identifying shapes and in contributing to the execution of resizing tasks. Moreover, this preliminary user study reveals some design issues when working with haptic AR. For example, users may expect a tailored interface metaphor, not necessarily inspired by natural interaction. This was the case with our proposal of virtual pressable buttons, built to mimic real buttons by using haptics, but interpreted differently by the study participants.
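Generating a haptic representation of a 3D digital object for a mid-air device amounts to mapping the hand position to a focal point on the virtual surface plus an actuation strength. A hypothetical sketch for the simplest shape, a sphere (the contact model, margin, and names are invented, not the paper's):

```python
import numpy as np

def sphere_contact(finger, center, radius, margin=0.01):
    """Haptic proxy for a virtual sphere.

    When the fingertip is within `margin` of the surface, return the
    focal point (closest surface point) and a strength in [0, 1] that
    peaks exactly on the surface; otherwise return None (no actuation).
    """
    finger = np.asarray(finger, dtype=float)
    center = np.asarray(center, dtype=float)
    d = np.linalg.norm(finger - center)
    gap = abs(d - radius)                 # distance from the surface
    if gap > margin or d == 0.0:
        return None                       # too far from the surface (or degenerate)
    focal = center + (finger - center) * (radius / d)  # project onto the sphere
    return focal, 1.0 - gap / margin

# Fingertip 5 mm outside a 10 cm sphere at the origin: actuate at half strength.
hit = sphere_contact([0.105, 0.0, 0.0], [0.0, 0.0, 0.0], 0.10, margin=0.01)
```

An arbitrary mesh would replace the analytic projection with a closest-point query, and the device-specific renderer would then drive the ultrasound array toward `focal` at the returned strength.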

