Research on Multiperson Motion Capture System Combining Target Positioning and Inertial Attitude Sensing Technology

2021 ◽  
Vol 2021 ◽  
pp. 1-12
Author(s):  
Yifei Wang ◽  
Yongsheng Wang

This study addresses three problems in motion capture: handling multiple targets, poor accuracy, and the inability to obtain displacement information. Building on fused target positioning and inertial attitude sensing technology, Unity3D is employed to create 3D scenes and 3D human body models and to read raw data from the inertial sensors in real time. An attitude-fusion algorithm then processes the raw data in real time to generate an orientation quaternion, and a human motion capture system based on inertial sensors is designed to record the complete movement information of each capture target. Results demonstrate that the developed system can accurately capture multiple moving targets, with a recognition rate of 75%–100%. The maximum error of the system using the fusion target positioning algorithm is 10 cm, a 71.24% reduction compared with the system without it. The movements of different body parts are analyzed through example data; the recognition rate for “wave,” “crossover,” “pick things up,” “walk,” and “squat down” reaches 100%. Hence, the proposed multiperson motion capture system combining target positioning and inertial attitude sensing provides better performance, and the results are of significance for industries such as animation, medical care, games, and sports training.
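
The abstract does not specify the fusion algorithm, so the following is a minimal sketch rather than the authors' method: a complementary filter in Python, assuming gyroscope rates in rad/s and an accelerometer whose reading is dominated by gravity, fusing raw inertial samples into the kind of orientation quaternion such a system would feed to its Unity3D body model.

```python
import numpy as np

def quat_mul(q, r):
    """Hamilton product of two quaternions (w, x, y, z)."""
    w1, x1, y1, z1 = q
    w2, x2, y2, z2 = r
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

def fuse_step(q, gyro, accel, dt, alpha=0.98):
    """One complementary-filter update (illustrative, not the paper's algorithm).

    q     -- current orientation quaternion (w, x, y, z)
    gyro  -- angular rate in rad/s, 3-vector
    accel -- accelerometer reading; only its direction (gravity) is used
    dt    -- sample period in seconds
    alpha -- gyro weight; closer to 1 trusts the gyro more
    """
    # Integrate the gyroscope: dq/dt = 0.5 * q (x) (0, wx, wy, wz)
    omega = np.array([0.0, *gyro])
    q_gyro = q + 0.5 * dt * quat_mul(q, omega)
    q_gyro /= np.linalg.norm(q_gyro)

    # Tilt from gravity (one common convention; axes depend on sensor mounting)
    ax, ay, az = accel / np.linalg.norm(accel)
    roll, pitch = np.arctan2(ay, az), np.arcsin(-ax)
    qx = np.array([np.cos(roll / 2), np.sin(roll / 2), 0.0, 0.0])
    qy = np.array([np.cos(pitch / 2), 0.0, np.sin(pitch / 2), 0.0])
    q_acc = quat_mul(qx, qy)  # yaw is unobservable from gravity alone

    # Blend and renormalize; a real system would correct yaw with a magnetometer
    q_out = alpha * q_gyro + (1 - alpha) * q_acc
    return q_out / np.linalg.norm(q_out)

# One 100 Hz update with the sensor held level and slowly yawing
q = np.array([1.0, 0.0, 0.0, 0.0])
q = fuse_step(q, gyro=np.array([0.0, 0.0, 0.1]),
              accel=np.array([0.0, 0.0, 9.81]), dt=0.01)
```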

Author(s):  
Xiangyang Li ◽  
Zhili Zhang ◽  
Feng Liang ◽  
Qinhe Gao ◽  
Lilong Tan

To meet the human–computer interaction control (HCIC) requirements of multiple operators in collaborative virtual maintenance (CVM), real-time motion capture and simulation driving of multiple operators with an optical human motion capture system (HMCS) are proposed. The realization process of real-time motion capture and data-driven animation of virtual operators in the CVM environment is presented in detail, enabling natural, online interactive operations. To keep the virtual operators' interactions cooperative and orderly with respect to the actual operators' inputs, a collaborative HCIC model is established according to the planning, allocation, and decision-making of different maintenance tasks, as well as the human–computer interaction and collaborative operation features among the maintenance trainees in the CVM process. Finally, an experimental implementation validates the effectiveness and practicability of the proposed methods, models, strategies, and mechanisms.
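
The abstract stays at the architecture level; as a purely illustrative sketch (all names hypothetical, not the authors' implementation), the per-frame core of driving a virtual operator can be pictured as copying captured joint rotations onto a skeleton hierarchy, with unmatched joints holding their last pose to ride out brief marker occlusions.

```python
from dataclasses import dataclass, field

@dataclass
class Joint:
    name: str
    children: list = field(default_factory=list)
    rotation: tuple = (1.0, 0.0, 0.0, 0.0)  # local quaternion (w, x, y, z)

def apply_frame(root: Joint, captured: dict):
    """Copy captured local rotations onto the virtual skeleton.

    captured -- {bone_name: (w, x, y, z)} from the optical HMCS stream;
    joints absent from this frame keep their previous pose, which masks
    brief marker occlusions.
    """
    stack = [root]
    while stack:
        joint = stack.pop()
        if joint.name in captured:
            joint.rotation = captured[joint.name]
        stack.extend(joint.children)

# One frame of (hypothetical) streamed data driving a two-bone arm
arm = Joint("shoulder", children=[Joint("elbow")])
apply_frame(arm, {"shoulder": (0.92, 0.0, 0.38, 0.0),
                  "elbow": (1.0, 0.0, 0.0, 0.0)})
```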


2013 ◽  
Vol 650 ◽  
pp. 518-522
Author(s):  
Juan Xiao

The main characteristics of recent human motion capture systems are analyzed first. On that basis, a new MEMS-based multi-user wireless human motion capture system for aerobics is proposed. The design of its framework and its core technical solutions, including large-scale data acquisition, multi-hop wireless sensor networking, and high-frequency real-time transmission, are put forward. Finally, three-dimensional real-time reconstructions produced by the multi-user aerobics wireless motion capture system are shown.
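
The paper does not publish its wire protocol; assuming one UDP-style datagram per sample, a minimal Python sketch of high-frequency real-time transmission might look as follows, with a sequence number so the receiver can detect drops instead of stalling. The packet layout, port, and address are invented for illustration.

```python
import socket
import struct
import time

# Hypothetical wire format for one MEMS sample: node id, sequence number,
# timestamp (ns), 3-axis accel and 3-axis gyro as float32, little-endian.
SAMPLE_FMT = "<HIq6f"   # 2 + 4 + 8 + 24 = 38 bytes per sample

def send_sample(sock, addr, node_id, seq, accel, gyro):
    """Serialize and transmit one sample; UDP keeps latency low, and the
    sequence number lets the receiver detect loss rather than block."""
    packet = struct.pack(SAMPLE_FMT, node_id, seq,
                         time.monotonic_ns(), *accel, *gyro)
    sock.sendto(packet, addr)

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
send_sample(sock, ("192.0.2.1", 5005), node_id=3, seq=0,
            accel=(0.01, -0.02, 9.81), gyro=(0.0, 0.1, 0.0))
```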


2020 ◽  
Author(s):  
Lan-Da Van ◽  
Ling-Yan Zhang ◽  
Chun-Hao Chang ◽  
Kit-Lun Tong ◽  
Kun-Ru Wu ◽  
...  

Abstract Drones have been applied to a wide range of security and surveillance applications recently. With drones, the Internet of Things is extending to 3D space. An interesting question is: Can we conduct person identification (PID) in a drone view? Traditional PID technologies such as RFID and fingerprint/iris/face recognition have their limitations or require close contact to specific devices. Hence, these traditional technologies cannot be easily deployed to drones due to the dynamic change of view angle and height. In this work, we demonstrate how to retrieve IoT data from users’ wearables and correctly tag them on the human objects captured by a drone camera to identify and track ground human objects. First, we retrieve human objects from videos and conduct coordinate transformation to handle the change of drone positions. Second, a fusion algorithm is applied to measure the correlation of video data and inertial data based on the extracted human motion features. Finally, we can couple human objects with their wearable IoT devices, achieving our goal of tagging wearable device data (such as personal profiles) on human objects in a drone view. Our experimental evaluation shows a recognition rate of 98.9%. To the best of our knowledge, this is the first work integrating videos from drone cameras and IoT data from inertial sensors.


2021 ◽  
Vol 1 (1) ◽  
Author(s):  
Lan-Da Van ◽  
Ling-Yan Zhang ◽  
Chun-Hao Chang ◽  
Kit-Lun Tong ◽  
Kun-Ru Wu ◽  
...  

Abstract Drones have been applied to a wide range of security and surveillance applications recently. With drones, the Internet of Things is extending to 3D space. An interesting question is: Can we conduct person identification (PID) in a drone view? Traditional PID technologies such as RFID and fingerprint/iris/face recognition have their limitations or require close contact to specific devices. Hence, these traditional technologies cannot be easily deployed to drones due to the dynamic change of view angle and height. In this work, we demonstrate how to retrieve IoT data from users’ wearables and correctly tag them on the human objects captured by a drone camera to identify and track ground human objects. First, we retrieve human objects from videos and conduct coordinate transformation to handle the change of drone positions. Second, a fusion algorithm is applied to measure the correlation of video data and inertial data based on the extracted human motion features. Finally, we can couple human objects with their wearable IoT devices, achieving our goal of tagging wearable device data (such as personal profiles) on human objects in a drone view. Our experimental evaluation shows a recognition rate of 99.5% for varying walking paths, and 98.6% when the drone’s camera angle is within 37°. To the best of our knowledge, this is the first work integrating videos from drone cameras and IoT data from inertial sensors.
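
The published fusion algorithm is not reproduced here; the sketch below shows one standard way to couple video tracks with wearables, under the assumption that both streams have already been reduced to equal-length, time-aligned 1-D motion features (for example, per-frame ground speed from the drone video and acceleration magnitude from each IMU resampled to the frame rate): score every track/device pair by correlation and solve the assignment with the Hungarian method. Names and feature choices are illustrative, not the authors' implementation.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def normalized_corr(a, b):
    """Pearson correlation of two equal-length motion-feature series."""
    a = (a - a.mean()) / (a.std() + 1e-9)
    b = (b - b.mean()) / (b.std() + 1e-9)
    return float(np.mean(a * b))

def match_tracks_to_devices(video_feats, imu_feats):
    """video_feats -- {track_id: 1-D array} of per-frame motion features
    from detected persons; imu_feats -- {device_id: 1-D array} of inertial
    features aligned to the same timeline.
    Returns {track_id: device_id} maximizing total correlation."""
    tracks, devices = list(video_feats), list(imu_feats)
    cost = np.zeros((len(tracks), len(devices)))
    for i, t in enumerate(tracks):
        for j, d in enumerate(devices):
            # Negate: the assignment solver minimizes total cost
            cost[i, j] = -normalized_corr(video_feats[t], imu_feats[d])
    rows, cols = linear_sum_assignment(cost)  # Hungarian assignment
    return {tracks[i]: devices[j] for i, j in zip(rows, cols)}
```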

