Digitale Assistenz in der Additiven Fertigung

2021 ◽  
Vol 116 (10) ◽  
pp. 701-706
Author(s):  
Maximilian Vogt ◽  
Mauritz Möller ◽  
Claus Emmelmann

Abstract Augmented-reality-based digital assistance systems are already used successfully in industrial contexts to support workers in manual tasks. Their use in additive manufacturing, however, has not yet been investigated comprehensively. Building on classical technology management, a methodical procedure was developed to adapt application-specific use cases for additive manufacturing.

2015 ◽  
Vol 1 (1) ◽  
pp. 534-537 ◽  
Author(s):  
T. Mentler ◽  
C. Wolters ◽  
M. Herczeg

Abstract In the healthcare domain, head-mounted displays (HMDs) with augmented reality (AR) modalities have been reconsidered for application as a result of commercially available products and the need to use computers in mobile contexts. Within a user-centered design approach, interviews were conducted with physicians, nursing staff and members of emergency medical services. Additionally, practitioners were involved in evaluating two different head-mounted displays. Based on these measures, use cases and usability considerations regarding interaction design and information visualization were derived and are described in this contribution.


Author(s):  
Vladimir Kuts ◽  
Tauno Otto ◽  
Yevhen Bondarenko ◽  
Fei Yu

Abstract Industrial Digital Twins (DTs) are precise virtual representations of the manufacturing environment and mainly consist of system-level simulations that combine manufacturing processes with parametric models of the product. As one of the pillars of the Industry 4.0 paradigm, DTs are widely integrated into existing factories, advancing the concept of the virtual factory. From a research perspective, experiments on the Internet of Things, data acquisition, cybersecurity, telemetry synchronization with physical factories, etc., are executed in these virtual simulations. Moreover, new ways of interacting with, overseeing, and learning from such simulations are being developed with the assistance of Virtual Reality (VR) and Augmented Reality (AR) technologies, which are already widespread on the consumer market. VR is already used in existing commercial software packages and toolboxes to provide students, teachers, operators, engineers, production managers, and researchers with an immersive way of interacting with the factory while the manufacturing simulation is running. This gives a better understanding and more in-depth knowledge of the actual manufacturing processes without requiring direct access to them. However, this virtual-presence experience is limited to a single person and does not enable additional functionalities for the simulations, such as re-planning or even re-programming of the physical factory over an online connection via VR or AR interfaces. The main aim of the related research paper is to extend existing DTs that are fully synchronized with the physical world with a multi-user experience, enabling factory operators to work with and re-program real machinery from remote locations in a more intuitive way, thinking about the final aim rather than about the process itself. Moreover, being developed on the real-time platform Unity3D, this multiplayer solution offers opportunities for training and educational purposes and connects people across remote locations of the world. The use cases exploit industrial robots placed in the Industrial Virtual and Augmented Reality Laboratory environment of Tallinn University of Technology and a mobile robot solution developed in a collaboration between the University of Southern Denmark and a Danish company. Experiments are performed on the connection between Estonia and Denmark while carrying out reprogramming tasks on the physical heavy industrial robots. Furthermore, the mobile robot solution is demonstrated in a virtual warehouse environment. The developed methods and environments, together with the collected data, will enable us to widen the use cases to non-manufacturing scenarios, i.e., smart city and smart healthcare domains, for the creation of a set of new interfaces and multiplayer experiences.
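The core idea of keeping a digital twin "fully synchronized" with its physical counterpart can be illustrated with a small, language-neutral sketch. The abstract names Unity3D as the actual platform; the Python below is only a hypothetical illustration of one-way telemetry mirroring, and all names (TelemetryMessage, VirtualRobot, sync_loop) are assumptions, not taken from the paper.

```python
# Hypothetical sketch: mirror joint telemetry from a physical robot into its
# digital twin over a simple message channel. Names and message format are
# illustrative only; the paper's implementation uses Unity3D and real robots.
import time
from dataclasses import dataclass, field
from queue import Queue, Empty
from typing import List


@dataclass
class TelemetryMessage:
    """One telemetry sample streamed from the physical robot."""
    timestamp: float
    joint_angles: List[float]  # one angle per robot joint, in degrees


@dataclass
class VirtualRobot:
    """Digital-twin counterpart that mirrors the physical robot's joint state."""
    joint_angles: List[float] = field(default_factory=lambda: [0.0] * 6)
    last_update: float = 0.0

    def apply(self, msg: TelemetryMessage) -> None:
        # Ignore out-of-order samples so the twin never moves backwards in time.
        if msg.timestamp >= self.last_update:
            self.joint_angles = list(msg.joint_angles)
            self.last_update = msg.timestamp


def sync_loop(channel: Queue, twin: VirtualRobot, duration_s: float = 1.0) -> None:
    """Drain telemetry from the channel and keep the twin up to date."""
    deadline = time.time() + duration_s
    while time.time() < deadline:
        try:
            msg = channel.get(timeout=0.1)
        except Empty:
            continue
        twin.apply(msg)


if __name__ == "__main__":
    channel: Queue = Queue()
    twin = VirtualRobot()
    # Simulate two telemetry samples arriving from the physical robot.
    channel.put(TelemetryMessage(time.time(), [10, 20, 30, 0, 0, 0]))
    channel.put(TelemetryMessage(time.time() + 0.01, [12, 22, 31, 0, 0, 0]))
    sync_loop(channel, twin, duration_s=0.5)
    print("Twin joint angles:", twin.joint_angles)
```

A multi-user setup as described in the abstract would additionally broadcast the twin's state to every connected client, but that networking layer is omitted here.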


2011 ◽  
pp. 409-431 ◽  
Author(s):  
Tia Jackson ◽  
Frank Angermann ◽  
Peter Meier

Author(s):  
Rafael Radkowski ◽  
Jarid Ingebrand

This paper examines the fidelity of a commodity range camera for assembly inspection in use cases such as augmented reality-based assembly assistance. The objective of inspection is to determine whether a part is present and correctly aligned. In our scenario, inspection takes place shortly after the mechanic has assembled the part, which we denote as on-the-fly inspection. Our approach is based on object tracking and a subsequent discrepancy analysis. Object tracking determines the presence, position, and orientation of parts. The discrepancy analysis then determines whether the parts are correctly aligned. In comparison to a naive position and orientation difference approach, the discrepancy analysis incorporates the dimensions of the parts, which increases the alignment fidelity. To assess this, an experiment was conducted to determine the accuracy range. The results indicate sufficient accuracy for larger parts and a noticeable improvement compared to the naive approach.
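The contrast between a naive pose-difference check and a dimension-aware discrepancy check can be sketched as follows. This is only an illustration of the general principle under assumed thresholds and an assumed scaling rule (relative tolerance on the largest part extent); it is not the formulation used in the paper.

```python
# Hypothetical sketch: fixed-tolerance alignment check vs. a check whose
# positional tolerance scales with the part's dimensions. Thresholds and the
# scaling rule are assumptions for illustration only.
import numpy as np


def naive_alignment_ok(pos_err_mm: float, angle_err_deg: float,
                       pos_tol_mm: float = 5.0, angle_tol_deg: float = 3.0) -> bool:
    """Fixed tolerances, independent of the part being inspected."""
    return pos_err_mm <= pos_tol_mm and angle_err_deg <= angle_tol_deg


def dimension_aware_alignment_ok(pos_err_mm: float, angle_err_deg: float,
                                 part_extent_mm: np.ndarray,
                                 rel_pos_tol: float = 0.02,
                                 angle_tol_deg: float = 3.0) -> bool:
    """Positional tolerance scales with the part's largest dimension,
    so large parts are not rejected by tolerances tuned for small ones."""
    pos_tol_mm = rel_pos_tol * float(np.max(part_extent_mm))
    return pos_err_mm <= pos_tol_mm and angle_err_deg <= angle_tol_deg


if __name__ == "__main__":
    # A 400 x 200 x 50 mm part tracked with 6 mm positional error.
    extent = np.array([400.0, 200.0, 50.0])
    print("naive:          ", naive_alignment_ok(6.0, 2.0))                      # False
    print("dimension-aware:", dimension_aware_alignment_ok(6.0, 2.0, extent))    # True
```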


2021 ◽  
Author(s):  
David Harborth ◽  
Katharina Kümpers

Abstract Nowadays, digitalization has an immense impact on the job landscape. This technological revolution creates new industries and professions, promises greater efficiency and improves the quality of working life. However, emerging technologies such as robotics and artificial intelligence (AI) are reducing human intervention, thus advancing automation and eliminating thousands of jobs and entire occupational profiles. To prepare employees for the changing demands of work, adequate and timely training of the workforce and real-time support of workers in new positions is necessary. Therefore, it is investigated whether user-oriented technologies such as augmented reality (AR) and virtual reality (VR) can be applied "on-the-job" for such training and support—also known as intelligence augmentation (IA). To address this problem, this work synthesizes the results of a systematic literature review as well as a practically oriented search on augmented reality and virtual reality use cases within the IA context. A total of 150 papers and use cases are analyzed to identify suitable areas of application in which it is possible to enhance employees' capabilities. The results of both the theoretical and the practical work show that VR is primarily used to train employees without prior knowledge, whereas AR is used to expand the scope of competence of individuals in their field of expertise while on the job. Based on these results, a framework is derived which provides practitioners with guidelines as to how AR or VR can support workers at their job so that they can keep up with anticipated skill demands. Furthermore, it shows for which application areas AR or VR can provide workers with sufficient training to learn new job tasks. In this way, this research provides practical recommendations to accompany the imminent disruptions caused by AI and similar technologies and to alleviate the associated negative effects on the German labor market.


2017 ◽  
Vol 107 (03) ◽  
pp. 108-112
Author(s):  
M. Schneider ◽  
D. Stricker

Although many use cases for Augmented Reality (AR) exist in the industrial sector, the technology has so far not been able to establish itself in products; it mostly serves as an eye-catcher at fairs and other marketing events. To change this situation, the research work at hand tries to eliminate two of the obstacles responsible for this by introducing a new architecture for web-based offloading of AR computations.
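The offloading idea is that the device only captures frames and renders overlays, while heavy computations such as pose estimation run on a remote server. The sketch below is a hypothetical thin client under assumed names: the endpoint URL, payload format, and response fields are placeholders, not the architecture described in the paper.

```python
# Hypothetical thin-client sketch of web-based offloading of AR computations.
# Endpoint, payload format, and field names are illustrative assumptions.
from typing import Optional

import requests

OFFLOAD_ENDPOINT = "https://ar-offload.example.com/estimate_pose"  # placeholder URL


def request_pose(jpeg_frame: bytes, timeout_s: float = 0.2) -> Optional[dict]:
    """Send one camera frame to the server and return the estimated camera pose.

    Returns None if the server is unreachable or too slow, so the client can
    fall back to the last known pose instead of blocking the render loop.
    """
    try:
        response = requests.post(
            OFFLOAD_ENDPOINT,
            files={"frame": ("frame.jpg", jpeg_frame, "image/jpeg")},
            timeout=timeout_s,
        )
        response.raise_for_status()
        return response.json()  # e.g. {"position": [...], "rotation": [...]}
    except requests.RequestException:
        return None


def render_overlay(pose: Optional[dict]) -> None:
    """Placeholder for the on-device rendering step."""
    if pose is None:
        print("No fresh pose; reusing last known pose.")
    else:
        print("Rendering overlay at", pose["position"], pose["rotation"])
```

The tight timeout reflects the central design constraint of such an architecture: the round trip to the server must stay well below the frame budget, or the client has to fall back to local tracking.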


Procedia CIRP ◽  
2020 ◽  
Vol 91 ◽  
pp. 93-100
Author(s):  
Daniel Röltgen ◽  
Roman Dumitrescu

2019 ◽  
Vol 3 (1) ◽  
pp. 19 ◽  
Author(s):  
Sylvia Rothe ◽  
Daniel Buschek ◽  
Heinrich Hußmann

In Cinematic Virtual Reality (CVR), the viewer of an omnidirectional movie can freely choose the viewing direction while watching. Therefore, traditional filmmaking techniques for guiding the viewers' attention cannot be adapted directly to CVR. Practices such as panning or changing the frame are no longer defined by the filmmaker; rather, it is the viewer who decides where to look. In some stories, it is necessary to show certain details to the viewer which should not be missed. At the same time, the viewer's freedom to look around in the scene should not be destroyed. Therefore, techniques are needed which guide the spectator's attention to visual information in the scene. Attention guiding also has the potential to improve the general viewing experience, since viewers will be less afraid of missing something when watching an omnidirectional movie to which attention-guiding techniques have been applied. In recent years, there has been considerable research on attention guiding in images, movies, virtual reality, augmented reality and also in CVR. We classify these methods and offer a taxonomy for attention-guiding methods. Discussing their different characteristics, we elaborate on the advantages and disadvantages, give recommendations for use cases, and apply the taxonomy to several examples of guiding methods.
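A building block that many of the guiding methods in such a taxonomy share is deciding, per frame, whether the target of interest is already within the viewer's field of view. The following sketch illustrates only that generic decision step under assumed values (the field-of-view half angle and the form of the cue are not taken from the paper).

```python
# Minimal sketch: measure the angular offset between the viewer's gaze
# direction and a target of interest, and trigger a guiding cue only when the
# target lies outside the field of view. Threshold and cue are assumptions.
import math
from typing import Sequence


def angular_offset_deg(view_dir: Sequence[float], target_dir: Sequence[float]) -> float:
    """Angle in degrees between two 3D direction vectors."""
    dot = sum(v * t for v, t in zip(view_dir, target_dir))
    norm = math.sqrt(sum(v * v for v in view_dir)) * math.sqrt(sum(t * t for t in target_dir))
    cos_angle = max(-1.0, min(1.0, dot / norm))
    return math.degrees(math.acos(cos_angle))


def maybe_guide(view_dir, target_dir, fov_half_angle_deg: float = 45.0) -> str:
    """Decide whether a guiding cue is needed for the current frame."""
    offset = angular_offset_deg(view_dir, target_dir)
    if offset <= fov_half_angle_deg:
        return "target visible, no cue"
    return f"target {offset:.0f} degrees off-axis, show guiding cue"


if __name__ == "__main__":
    looking_forward = (0.0, 0.0, 1.0)
    target_behind = (0.0, 0.0, -1.0)
    print(maybe_guide(looking_forward, looking_forward))  # no cue
    print(maybe_guide(looking_forward, target_behind))    # cue needed
```

Which cue is then shown (diegetic, subtle visual, auditory, forced rotation, etc.) is exactly what the taxonomy discussed in the abstract differentiates.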


Author(s):  
Davide Calandra ◽  
Alberto Cannavò ◽  
Fabrizio Lamberti

Abstract Augmented reality (AR) has a number of applications in industry, but remote assistance represents one of the most prominent and widely studied use cases. However, although the set of functionalities supporting the communication between remote experts and on-site operators has grown over time, the way in which remote assistance is delivered has not yet evolved to unleash the full potential of AR technology. The expert typically guides the operator step-by-step, and essentially uses AR-based hints to visually support voice instructions. With this approach, skilled human resources may go under-utilized, as the time an expert invests in the assistance corresponds to the time needed by the operator to execute the requested operations. The goal of this work is to introduce a new approach to remote assistance that takes advantage of AR functionalities separately proposed in academic works and commercial products to re-organize the guidance workflow, with the aim of increasing the operator's autonomy and thus optimizing the use of the expert's time. An AR-powered remote assistance platform able to support the devised approach is also presented. By means of a user study, this approach was compared to traditional step-by-step guidance, with the aim of estimating how much of AR's potential is still unexploited. Results showed that the new approach can reduce the expert's time investment, allowing the operator to autonomously complete the assigned tasks in a time comparable to step-by-step guidance, with a negligible need for further support.
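The workflow shift the abstract describes, from synchronous step-by-step guidance to operator autonomy with on-demand expert involvement, can be expressed as a small data model. The sketch below is a hypothetical illustration only; class and field names are assumptions and do not reflect the platform presented in the paper.

```python
# Hypothetical sketch: the expert prepares AR-annotated task instructions up
# front; the operator works through them autonomously and only explicit help
# requests consume the expert's time. Names are illustrative assumptions.
from dataclasses import dataclass, field
from typing import List


@dataclass
class TaskInstruction:
    description: str
    ar_hint: str          # e.g. an anchored arrow or a highlighted part
    done: bool = False


@dataclass
class AssistanceSession:
    tasks: List[TaskInstruction]
    help_requests: List[str] = field(default_factory=list)

    def operator_complete(self, index: int) -> None:
        """Operator finishes a task autonomously, guided only by the AR hint."""
        self.tasks[index].done = True

    def operator_request_help(self, index: int, question: str) -> None:
        """Only this call requires the expert's attention."""
        self.help_requests.append(f"task {index}: {question}")

    def expert_time_needed(self) -> int:
        """Expert attention scales with help requests, not with task count."""
        return len(self.help_requests)


if __name__ == "__main__":
    session = AssistanceSession(tasks=[
        TaskInstruction("Mount bracket", "arrow anchored at bracket hole"),
        TaskInstruction("Tighten bolts", "highlight on the four bolts"),
    ])
    session.operator_complete(0)
    session.operator_request_help(1, "Which torque setting?")
    print("Help requests consuming expert time:", session.expert_time_needed())
```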

