Gesture Recognition in Augmented Reality Assisted Assembly Training

2019, Vol. 1176, pp. 032030
Author(s): Jiaqi Dong, Zisheng Tang, Qunfei Zhao

2021, Vol. 11 (21), pp. 9789
Author(s): Jiaqi Dong, Zeyang Xia, Qunfei Zhao

Augmented reality assisted assembly training (ARAAT) is an effective and affordable technique for labor training in the automobile and electronics industries. In general, most ARAAT tasks are conducted through real-time hand operations. In this paper, we propose a dynamic gesture recognition and prediction algorithm that evaluates the standard and achievement of the hand operations for a given task in ARAAT. We consider that a given task can be decomposed into a series of hand operations, and each hand operation into several continuous actions. Each action is then associated with a standard gesture based on the practical assembly task, so that the standard and achievement of the actions included in the operations can be identified and predicted from the sequences of gestures rather than from performance over the whole task. Based on practical industrial assembly, we specified five typical tasks, three typical operations, and six standard actions. We used Zernike moments combined with histograms of oriented gradients (HOG) to represent the 2D static features of standard gestures, and linearly interpolated motion trajectories to represent their 3D dynamic features; we chose a directional pulse-coupled neural network as the classifier to recognize the gestures. In addition, we defined an action unit to reduce feature dimensionality and computational cost. During gesture recognition, we iteratively optimized the gesture boundaries by calculating the score probability density distribution, which reduces interference from invalid gestures and improves precision. The proposed algorithm was evaluated on four datasets; the experimental results show that it increases recognition accuracy and reduces computational cost.
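As a concrete illustration of the 2D static feature described in this abstract, the sketch below fuses Zernike moments with a HOG descriptor for a segmented hand patch. This is a minimal sketch only: the library choices (mahotas, scikit-image), the patch size, and the plain concatenation are assumptions, not the authors' implementation, and the directional pulse-coupled neural network classifier is not reproduced here.

```python
# Minimal sketch: 2D static gesture descriptor from Zernike moments + HOG.
# Library choices and all parameters are assumptions, not the paper's code.
import numpy as np
import mahotas.features
from skimage.feature import hog

def static_gesture_descriptor(hand_patch: np.ndarray) -> np.ndarray:
    """hand_patch: 2D grayscale image of the segmented hand, e.g. 64x64."""
    # Zernike moments give a rotation-invariant description of the hand shape.
    radius = min(hand_patch.shape) // 2
    zernike = mahotas.features.zernike_moments(hand_patch, radius, degree=8)
    # HOG captures local gradient-orientation statistics of the same patch.
    hog_vec = hog(hand_patch, orientations=9, pixels_per_cell=(8, 8),
                  cells_per_block=(2, 2), feature_vector=True)
    # Concatenate both cues into one static feature vector for the classifier.
    return np.concatenate([zernike, hog_vec])
```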


Author(s):  
Zeenat S. AlKassim ◽  
Nader Mohamed

In this chapter, the authors discuss a unique technology known as Sixth Sense Technology, highlighting its future opportunities for integrating the digital world with the real world. Challenges in implementing such technologies are also discussed, along with a review of the different possible implementation approaches. This review is performed by exploring inventions in areas related to Sixth Sense Technology, namely augmented reality (AR), computer vision, image processing, gesture recognition, and artificial intelligence, and then categorizing and comparing them. Lastly, recommendations are discussed for improving this unique technology, which has the potential to create a new trend in human-computer interaction (HCI) in the coming years.


2012, Vol. 7 (1), pp. 468-472
Author(s): Yimin Chen, Qiming Li, Chen Huang, Congli Ye, Yun Li, ...

Author(s): Jerzy Roslon, Aleksander Nicał, Pawel Nowak

2021
Author(s): Christian Giesser, Christian Gibas, Armin Gruenewald, Tanja Joan Eiler, Vanessa Schmuecker, ...

Author(s): Rafael Radkowski, Christian Stritzke

This paper presents a comparison between 2D and 3D interaction techniques for Augmented Reality (AR) applications. The interaction techniques are based on hand gestures and a computer-vision-based hand gesture recognition system. We compared 2D and 3D gestures for interaction in an AR application. The 3D recognition system is based on a video camera that provides a depth image in addition to each 2D color image, so spatial interactions become possible. Our major question during this work was: do depth images and 3D interaction techniques improve interaction with AR applications and with virtual 3D objects? We therefore tested and compared the hand gesture recognition systems. The results show two things: first, depth images facilitate more robust hand recognition and gesture identification; second, they strongly indicate that 3D hand gesture interaction techniques are more intuitive than 2D hand gesture interaction techniques. In summary, the results emphasize that depth images improve hand gesture interaction for AR applications.
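To make the robustness argument concrete, the sketch below shows the kind of depth-based hand segmentation such a system could use: with a per-pixel depth image, the hand can be isolated by a simple distance threshold, largely independent of lighting and skin color. This is an illustrative assumption, not the segmentation method used in the paper; the function name and the near/far bounds are hypothetical.

```python
# Illustrative sketch (not the paper's method): segmenting the hand from an
# RGB-D depth image by thresholding the interaction volume in front of the
# camera. The near/far bounds are assumed values.
import numpy as np

def segment_hand_depth(depth_mm: np.ndarray,
                       near: float = 300.0, far: float = 700.0) -> np.ndarray:
    """Return a binary mask of pixels whose depth lies in (near, far) mm."""
    # Depth thresholding is largely invariant to lighting and skin tone,
    # which is one reason depth images yield more robust hand recognition
    # than purely color-based 2D segmentation.
    return (depth_mm > near) & (depth_mm < far)
```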


Author(s): Sven Kreft, Jürgen Gausemeier, Carsten Matysczok

Today, ubiquitously available information is an increasing success factor for industrial enterprises. Mobile Computing allows users to access information manually, independent of their current location. An additional technology in this context is Wearable Computing, which supports mobile activities by automatically (context-sensitively) gathering relevant information and presenting it to the user. Within the wearIT@work project, several Wearable Computing applications have been developed to demonstrate the overall benefit and maturity of this technology. However, these Wearable Computing applications display information as simple text or video. In contrast, Augmented Reality (AR) uses interactive 3D objects to facilitate the user's understanding of complex tasks. Combining both technologies to exploit their particular capabilities seems promising, not least because their underlying technologies differ very little at a general level. In this paper, we propose a systematic approach to enhancing Wearable Computing applications with Augmented Reality functionality, standardizing and simplifying the necessary decision-making and development processes. The approach has been applied to an existing Wearable Computing application in the field of automotive assembly training. Following the proposed phases resulted in an economically reasonable concept for a Wearable Augmented Reality system that facilitates the trainee's understanding of complex assembly tasks.

