ARFAT - THE AUGMENTED REALITY FORMWORK ASSEMBLY TRAINING

Author(s):  
Jerzy Roslon ◽  
Aleksander Nicał ◽  
Pawel Nowak


Author(s):  
Sven Kreft ◽  
Jürgen Gausemeier ◽  
Carsten Matysczok

Today, ubiquitously available information is an increasingly important success factor for industrial enterprises. Mobile Computing allows users to manually access information independent of their current location. A complementary technology in this context is Wearable Computing: it supports mobile activities by automatically (context-sensitively) gathering relevant information and presenting it to the user. Within the wearIT@work project, several Wearable Computing applications have been developed to demonstrate the overall benefit and maturity of this technology. However, these Wearable Computing applications display information only as simple text or video. In contrast, Augmented Reality (AR) uses interactive 3D objects to facilitate the user's understanding of complex tasks. Combining both technologies to exploit their particular capabilities seems promising, not least because, at a general level, their underlying base technologies differ only slightly. In this paper, we propose a systematic approach to enhancing Wearable Computing applications with Augmented Reality functionality, standardizing and simplifying the necessary decision-making and development processes. The approach has been applied to an existing Wearable Computing application in the field of automotive assembly training. Following the proposed phases, we arrived at an economically reasonable concept for a Wearable Augmented Reality system that facilitates the trainee's understanding of complex assembly tasks.


2011 ◽  
Vol 121-126 ◽  
pp. 4300-4304
Author(s):  
Yu Chuan Liu ◽  
Ming Cong

This paper proposes an interactive augmented reality (AR) architecture for online computer-supported collaborative work (CSCW). Current CSCW applications of AR focus on multiple users meeting to collaborate in the same physical environment. The proposed architecture instead enables collaboration over a computer network without the need for face-to-face communication. The system architecture and operation scenario are first constructed, and an AR application for machine assembly training is then studied. The proposed interactive AR system architecture can provide new experiences for users, and both the effectiveness and the efficiency of online training applications can be significantly improved.


2021 ◽  
Vol 11 (21) ◽  
pp. 9789
Author(s):  
Jiaqi Dong ◽  
Zeyang Xia ◽  
Qunfei Zhao

Augmented reality assisted assembly training (ARAAT) is an effective and affordable technique for labor training in the automobile and electronics industries. In general, most ARAAT tasks are carried out through real-time hand operations. In this paper, we propose a dynamic gesture recognition and prediction algorithm that evaluates the standard compliance and achievement of the hand operations for a given task in ARAAT. We consider that a given task can be decomposed into a series of hand operations, and each hand operation into several continuous actions. Each action is then associated with a standard gesture based on the practical assembly task, so that the standard compliance and achievement of the actions within an operation can be identified and predicted from the sequence of gestures rather than from performance over the whole task. Based on practical industrial assembly, we specified five typical tasks, three typical operations, and six standard actions. We used Zernike moments combined with a histogram of oriented gradients (HOG) and linear-interpolation motion trajectories to represent the 2D static and 3D dynamic features of the standard gestures, respectively, and chose a directional pulse-coupled neural network as the classifier to recognize the gestures. In addition, we defined an action unit to reduce the feature dimensionality and the computational cost. During gesture recognition, we iteratively optimized the gesture boundaries by calculating the score probability density distribution, reducing the interference of invalid gestures and improving precision. The proposed algorithm was evaluated on four datasets, and the experimental results show increased recognition accuracy and reduced computational cost.
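The hierarchical decomposition described above (task → operations → actions, each action associated with a standard gesture) can be sketched as follows. This is a minimal illustration only: the task names, action names, and gesture labels are hypothetical, and the paper's gesture classifier is replaced here by a pre-recognized label stream, so only the scoring of operations against standard gestures is shown.

```python
# Hedged sketch of the task decomposition in the abstract: a task is a
# sequence of hand operations, each operation a sequence of actions, and
# each action maps to a standard gesture. All concrete names below are
# hypothetical illustrations, not the paper's actual dataset.

STANDARD_GESTURE = {          # action -> expected standard gesture label
    "grasp": "G1", "align": "G2", "insert": "G3",
    "rotate": "G4", "press": "G5", "release": "G6",
}

TASK = {                      # task decomposed into operations -> actions
    "mount_bracket": {
        "pick_part":  ["grasp"],
        "place_part": ["align", "insert", "press"],
        "fasten":     ["rotate", "release"],
    }
}

def score_operation(actions, recognized):
    """Fraction of actions whose recognized gesture matches the standard."""
    expected = [STANDARD_GESTURE[a] for a in actions]
    hits = sum(e == r for e, r in zip(expected, recognized))
    return hits / len(expected)

def evaluate_task(task_name, recognized_stream):
    """Walk the operation sequence, consuming recognized gestures in
    order, and report a per-operation achievement score."""
    scores, i = {}, 0
    for op, actions in TASK[task_name].items():
        segment = recognized_stream[i:i + len(actions)]
        scores[op] = score_operation(actions, segment)
        i += len(actions)
    return scores

# Example: one mis-recognized gesture ("G9") inside 'place_part'.
stream = ["G1", "G2", "G9", "G5", "G4", "G6"]
print(evaluate_task("mount_bracket", stream))
# {'pick_part': 1.0, 'place_part': 0.6666666666666666, 'fasten': 1.0}
```

Scoring per operation rather than over the whole task mirrors the abstract's point that achievement can be identified from gesture sequences inside each operation, so a single mis-recognized gesture lowers only the affected operation's score.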


ASHA Leader ◽  
2013 ◽  
Vol 18 (9) ◽  
pp. 14-14

Amp Up Your Treatment With Augmented Reality


2003 ◽  
Vol 15 (2) ◽  
pp. 141-156 ◽  
Author(s):  
Ève Coste-Manière ◽  
Louai Adhami ◽  
Fabien Mourgues ◽  
Alain Carpentier
