Comparing Real-Time Human Motion Capture System Using Inertial Sensors with Microsoft Kinect

Author(s):  
Chengkai Xiang ◽  
Hui Huang Hsu ◽  
Wu Yuin Hwang ◽  
Jianhua Ma
Author(s):  
Xiangyang Li ◽  
Zhili Zhang ◽  
Feng Liang ◽  
Qinhe Gao ◽  
Lilong Tan

Aiming at the human–computer interaction control (HCIC) requirements of multiple operators in collaborative virtual maintenance (CVM), real-time motion capture and simulation driving of multiple operators with an optical human motion capture system (HMCS) are proposed. The detailed realization process of real-time motion capture and data-driven animation of virtual operators in the CVM environment is presented to enable natural, online interactive operations. To ensure that virtual operators interact cooperatively and in an orderly manner with the input operations of the actual operators, a collaborative HCIC model is established according to the planning, allocation, and decision-making of different maintenance tasks, as well as the human–computer interaction and collaborative operation features among multiple maintenance trainees in the CVM process. Finally, experimental results validate the effectiveness and practicability of the proposed methods, models, strategies, and mechanisms.
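The abstract above describes driving virtual operators from streamed optical capture data. As a rough illustration only, the Python sketch below shows one way per-frame joint orientations could be retargeted onto an avatar's bones; the names MocapFrame, VirtualOperator, and JOINT_MAP are assumptions for this sketch, not the paper's API.

```python
# Minimal sketch (not the paper's implementation): applying one optical
# motion-capture frame to a virtual operator's skeleton.
from dataclasses import dataclass
from typing import Dict, Tuple

Quaternion = Tuple[float, float, float, float]  # (w, x, y, z)

# Assumed mapping from capture-system joint names to avatar bone names.
JOINT_MAP: Dict[str, str] = {
    "LeftShoulder": "upperarm_l",
    "LeftElbow": "lowerarm_l",
    "LeftWrist": "hand_l",
}

@dataclass
class MocapFrame:
    timestamp: float
    rotations: Dict[str, Quaternion]  # per-joint orientation from the optical HMCS

class VirtualOperator:
    """Placeholder avatar: stores the latest bone orientations for rendering."""
    def __init__(self) -> None:
        self.bone_rotation: Dict[str, Quaternion] = {}

    def set_bone(self, bone: str, q: Quaternion) -> None:
        self.bone_rotation[bone] = q

def drive_operator(frame: MocapFrame, operator: VirtualOperator) -> None:
    """Apply one capture frame to the virtual operator (retargeting details omitted)."""
    for joint, bone in JOINT_MAP.items():
        if joint in frame.rotations:
            operator.set_bone(bone, frame.rotations[joint])

if __name__ == "__main__":
    op = VirtualOperator()
    demo = MocapFrame(0.0, {"LeftElbow": (1.0, 0.0, 0.0, 0.0)})
    drive_operator(demo, op)
    print(op.bone_rotation)
```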


2013 ◽  
Vol 650 ◽  
pp. 518-522
Author(s):  
Juan Xiao

The main characteristics of recent human motion capture systems are analyzed first. On this basis, a new multi-user wireless human motion capture system for aerobics based on MEMS sensors is proposed. The design of its framework and core technical solutions, including large-scale data acquisition, multi-hop wireless sensor networking, and high-frequency real-time transmission, is put forward. Finally, three-dimensional real-time reconstructions produced by the multi-user aerobics wireless motion capture system are shown.
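To make the transmission side concrete, here is a minimal Python sketch of one possible binary frame layout for MEMS IMU samples and a single multi-hop relay step; the field layout and hop-budget scheme are illustrative assumptions, not details taken from the paper.

```python
# Illustrative sketch of packing one MEMS IMU sample for high-frequency
# wireless transmission, plus a trivial relay step a multi-hop node might run.
import struct
from typing import Optional

# Assumed layout: node id (uint8), hop budget (uint8), timestamp in ms (uint32),
# then 3-axis accelerometer and 3-axis gyroscope readings (float32 each).
FRAME_FMT = "<BBI3f3f"
FRAME_SIZE = struct.calcsize(FRAME_FMT)   # 30 bytes per sample

def pack_sample(node_id: int, hop_budget: int, timestamp_ms: int,
                accel, gyro) -> bytes:
    """Pack one IMU sample into a fixed-size binary frame."""
    return struct.pack(FRAME_FMT, node_id, hop_budget, timestamp_ms, *accel, *gyro)

def relay(frame: bytes) -> Optional[bytes]:
    """One multi-hop relay step: forward with a decremented hop budget, or drop."""
    fields = list(struct.unpack(FRAME_FMT, frame))
    if fields[1] == 0:
        return None                        # hop budget exhausted, drop the frame
    fields[1] -= 1
    return struct.pack(FRAME_FMT, *fields)

if __name__ == "__main__":
    f = pack_sample(3, 2, 1000, (0.0, 0.0, 9.81), (0.01, 0.0, 0.0))
    print(FRAME_SIZE, relay(f) is not None)
```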


2021 ◽  
Vol 2021 ◽  
pp. 1-12
Author(s):  
Yifei Wang ◽  
Yongsheng Wang

The purpose of this study is to solve the problems of multiple targets, poor accuracy, and the inability to obtain displacement information in motion capture. Based on fused target positioning and inertial attitude sensing technology, Unity3D is employed to create 3D scenes and 3D human body models that read raw data from inertial sensors in real time. An attitude fusion algorithm processes the raw data in real time to generate quaternions, and a human motion capture system based on inertial sensors is designed to record the complete movement information of the capture target. Results demonstrate that the developed system can accurately capture multiple moving targets and achieves a recognition rate of 75%–100%. The maximum error of the system using the fused positioning algorithm is 10 cm, a reduction of 71.24% compared with the system without fusion. The movements of different body parts are analyzed through example data; the recognition rate for "wave," "crossover," "pick things up," "walk," and "squat down" reaches 100%. Hence, the proposed multi-person motion capture system combining target positioning and inertial attitude sensing provides better performance. The results are of great significance for promoting industries such as animation, medical care, games, and sports training.
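The abstract mentions generating orientation quaternions from raw inertial data in real time but does not detail the fusion algorithm, so the following Python sketch uses a standard complementary (Mahony-style) filter as a stand-in: gyroscope rates are integrated into the orientation quaternion and the accelerometer's gravity direction supplies a small tilt correction. It is an illustration of the general technique, not the authors' implementation.

```python
# Complementary-filter sketch: quaternion attitude from raw gyro + accel data.
import math

def quat_mul(a, b):
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return (aw*bw - ax*bx - ay*by - az*bz,
            aw*bx + ax*bw + ay*bz - az*by,
            aw*by - ax*bz + ay*bw + az*bx,
            aw*bz + ax*by - ay*bx + az*bw)

def normalize(q):
    n = math.sqrt(sum(c*c for c in q))
    return tuple(c / n for c in q)

def rotate_to_sensor(q, v):
    """Rotate a world-frame vector into the sensor frame using the conjugate of q."""
    qc = (q[0], -q[1], -q[2], -q[3])
    r = quat_mul(quat_mul(qc, (0.0, v[0], v[1], v[2])), q)
    return r[1], r[2], r[3]

def update(q, gyro, accel, dt, kp=0.5):
    """One fusion step: gyro integration plus accelerometer tilt correction."""
    gx, gy, gz = gyro                       # angular rate, rad/s, sensor frame
    norm_a = math.sqrt(sum(c*c for c in accel))
    if norm_a > 1e-6:
        ax, ay, az = (c / norm_a for c in accel)
        vx, vy, vz = rotate_to_sensor(q, (0.0, 0.0, 1.0))  # predicted gravity
        # correction = measured x predicted gravity direction (cross product)
        gx += kp * (ay*vz - az*vy)
        gy += kp * (az*vx - ax*vz)
        gz += kp * (ax*vy - ay*vx)
    dq = (1.0, 0.5*gx*dt, 0.5*gy*dt, 0.5*gz*dt)  # small-angle quaternion increment
    return normalize(quat_mul(q, dq))

if __name__ == "__main__":
    q = (1.0, 0.0, 0.0, 0.0)
    for _ in range(100):                    # 1 s at 100 Hz with a stationary sensor
        q = update(q, (0.0, 0.0, 0.0), (0.0, 0.0, 9.81), 0.01)
    print(q)
```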


Author(s):  
Muhammad Yahya ◽  
Jawad Ali Shah ◽  
Arif Warsi ◽  
Kushsairy Kadir ◽  
Sheroz Khan ◽  
...  

The use of motion capture has increased over the last decade across a varied spectrum of applications such as film special effects, game and robot control, rehabilitation systems, and animation. Current human motion capture techniques rely on markers, structured environments, and high-resolution cameras in a dedicated setting. Because of rapid movement, elbow angle estimation is regarded as one of the most difficult problems in human motion capture. In this paper, we take elbow angle estimation as our research subject and propose a novel, markerless, and cost-effective solution that uses an RGB camera to estimate the elbow angle in real time using part affinity fields. We recruited five participants (height 168 ± 8 cm; mass 61 ± 17 kg) to perform a cup-to-mouth movement while the angle was measured by both the RGB camera and a Microsoft Kinect. The experimental results show that the markerless, cost-effective RGB camera has median RMS errors of 3.06° and 0.95° in the sagittal and coronal planes, respectively, compared with the Microsoft Kinect.
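For readers who want the geometric step spelled out, the sketch below shows how an elbow angle can be computed once a part-affinity-field pose estimator has returned 2D shoulder, elbow, and wrist keypoints; the helper name elbow_angle_deg and the sample pixel coordinates are assumptions for illustration, not the paper's code.

```python
# Elbow angle as the angle at the elbow between the upper-arm and forearm vectors.
import math

def elbow_angle_deg(shoulder, elbow, wrist):
    """Angle (degrees) between the elbow->shoulder and elbow->wrist vectors."""
    ux, uy = shoulder[0] - elbow[0], shoulder[1] - elbow[1]
    vx, vy = wrist[0] - elbow[0], wrist[1] - elbow[1]
    cos_t = (ux*vx + uy*vy) / (math.hypot(ux, uy) * math.hypot(vx, vy))
    cos_t = max(-1.0, min(1.0, cos_t))     # guard against rounding error
    return math.degrees(math.acos(cos_t))

if __name__ == "__main__":
    # Keypoints in image pixels (illustrative values only).
    print(round(elbow_angle_deg((100, 100), (150, 160), (210, 150)), 1))
```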


Author(s):  
Jie Li ◽  
Zhe-long Wang ◽  
Hongyu Zhao ◽  
Raffaele Gravina ◽  
Giancarlo Fortino ◽  
...  
