pose tracking
Recently Published Documents

TOTAL DOCUMENTS: 465 (five years: 137)
H-INDEX: 24 (five years: 5)

2022 ◽  
Author(s):  
Qingtang Zhu ◽  
Jingyuan Fan ◽  
Fanbin Gu ◽  
Lulu Lv ◽  
Zhejin Zhang ◽  
...  

Abstract
Background: Range of motion (ROM) measurements are essential for diagnosing and evaluating upper extremity conditions. Clinical goniometry is the most commonly used method, but it is time-consuming and skill-demanding. Recent advances in human pose tracking algorithms suggest the potential for automatic angle measurement from RGB images, providing an attractive alternative for remote assessment. However, the reliability of this approach has not been fully established. The purpose of this study was to evaluate whether the algorithm's results are as reliable as those of human raters for upper limb movements.
Methods: Thirty healthy young adults (20 males, 10 females) participated in this study. Participants were asked to perform a six-motion task comprising movements of the shoulder, elbow, and wrist. Images of the movements were captured with a commercial digital camera. Each movement was measured by a pose tracking algorithm and compared with the surgeons' measurements. The mean differences between the two measurements were compared, Pearson correlation coefficients were used to determine the relationship, and reliability was investigated with intra-class correlation coefficients.
Results: Comparing the algorithm-based method with manual measurement, the mean differences were less than 3 degrees for 5 of the 6 motions (shoulder abduction: 0.51; shoulder elevation: 2.87; elbow flexion: 0.38; elbow extension: 0.65; wrist extension: 0.78), the exception being wrist flexion. All intra-class correlation coefficients were larger than 0.60, and the Pearson coefficients also showed high correlations between the two measurements (p < 0.001).
Conclusions: Our results indicate that pose estimation is a reliable method for measuring shoulder and elbow angles, supporting the use of RGB images for measuring joint ROM. They also suggest that patients could assess their own ROM from photos taken with a digital camera.
Trial registration: This study was registered in the Clinical Trials Center of The First Affiliated Hospital, Sun Yat-sen University (2021-387).
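To make the measurement idea concrete, the sketch below shows how a joint angle can be derived from 2D keypoints produced by a generic pose estimator. The `joint_angle` helper, the example pixel coordinates, and the flexion convention are illustrative assumptions, not the pipeline used in the study.

```python
import numpy as np

def joint_angle(a, b, c):
    """Angle at keypoint b (degrees) formed by segments b->a and b->c.

    a, b, c are 2D keypoints in image coordinates, e.g. shoulder,
    elbow and wrist as returned by an off-the-shelf pose estimator.
    """
    a, b, c = np.asarray(a, float), np.asarray(b, float), np.asarray(c, float)
    v1, v2 = a - b, c - b
    cos_angle = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0)))

# Hypothetical keypoints (pixels) for one frame: shoulder, elbow, wrist.
shoulder, elbow, wrist = (320, 180), (330, 300), (420, 360)
# Report flexion as deviation from a fully extended (180-degree) arm.
print(f"elbow flexion ~ {180 - joint_angle(shoulder, elbow, wrist):.1f} deg")
```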


2022 ◽  
Vol 24 (1) ◽  
Author(s):  
Xudong Guo ◽  
Shengnan Li ◽  
Youguo Hao ◽  
Zhongyu Luo ◽  
Xiangci Yan

2022 ◽  
Vol 8 ◽  
Author(s):  
Elsa J. Harris ◽  
I-Hung Khoo ◽  
Emel Demircan

We performed an electronic database search of works published from 2012 to mid-2021 that focus on human gait studies and apply machine learning techniques. We identified six key applications of machine learning using gait data: 1) gait analysis, where analysis techniques and certain biomechanical factors are improved by artificial intelligence algorithms; 2) health and wellness, with applications in gait monitoring for abnormal gait detection, recognition of human activities, fall detection, and sports performance; 3) human pose tracking, using one-person or multi-person tracking and localization systems such as OpenPose and Simultaneous Localization and Mapping (SLAM); 4) gait-based biometrics, with applications in person identification, authentication, and re-identification as well as gender and age recognition; 5) "smart gait" applications, ranging from smart socks, shoes, and other wearables to smart homes and smart retail stores that incorporate continuous monitoring and control systems; and 6) animation, which reconstructs human motion using gait data, simulation, and machine learning techniques. Our goal is to provide a single broad-based survey of the applications of machine learning technology in gait analysis and to identify areas of potential future study and growth. We discuss the machine learning techniques that have been used, with a focus on the tasks they perform, the problems they attempt to solve, and the trade-offs they navigate.
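As a simplified illustration of the "health and wellness" use case above (abnormal gait detection), the sketch below trains a classifier on hand-crafted gait features. The feature names and the synthetic data are assumptions for demonstration; the surveyed studies use far richer sensor and video pipelines.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-in for extracted gait features:
# [stride_time_s, cadence_steps_per_min, accel_variance]
normal = rng.normal([1.1, 110, 0.8], [0.05, 5, 0.1], size=(200, 3))
abnormal = rng.normal([1.4, 90, 1.6], [0.10, 8, 0.3], size=(200, 3))
X = np.vstack([normal, abnormal])
y = np.array([0] * 200 + [1] * 200)  # 0 = normal gait, 1 = abnormal gait

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)
print("held-out accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```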


Author(s):  
Suibin Huang ◽  
Kun Yang ◽  
Hua Xiao ◽  
Peng Han ◽  
Jian Qiu ◽  
...  

Author(s):  
Andreas Blank ◽  
Engin Karlidag ◽  
Lukas Zikeli ◽  
Maximilian Metzner ◽  
Jörg Franke

Abstract
Concurrent with the rise of autonomous robots, teleoperation is gaining importance in industrial applications. This includes human-robot cooperation during complex or harmful operations and remote intervention. A key element of teleoperation is the ability to translate operator inputs into robot movements, so providing different motion control types is decisive given the variety of tasks to be expected. For a wide range of use cases, a high degree of interoperability with a variety of robot systems is required. In addition, the control input should support up-to-date human-machine interfaces. To address these challenges, we present a middleware for teleoperation of industrial robots that is adaptive with regard to motion control types. The middleware relies on an open-source robot meta-operating system and standardized communication. Evaluation is performed on defined tasks with different articulated robots, and performance and determinacy are quantified. An implementation sample of the method is available at: https://github.com/FAU-FAPS/adaptive_motion_control.
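The "robot meta-operating system" the abstract refers to is the kind of middleware ROS provides. Below is a minimal sketch of relaying operator velocity commands to a robot over standard ROS topics; the node name, topic names, and gains are assumptions for illustration and are not taken from the authors' middleware.

```python
#!/usr/bin/env python
# Minimal sketch: relay operator joystick input to a robot velocity topic.
# Node/topic names and scaling gains are illustrative assumptions.
import rospy
from geometry_msgs.msg import Twist
from sensor_msgs.msg import Joy

class TeleopRelay:
    def __init__(self):
        self.pub = rospy.Publisher("cmd_vel", Twist, queue_size=10)
        rospy.Subscriber("joy", Joy, self.on_joy)

    def on_joy(self, msg):
        cmd = Twist()
        cmd.linear.x = 0.5 * msg.axes[1]   # forward/backward from left stick
        cmd.angular.z = 1.0 * msg.axes[0]  # yaw rate from left stick
        self.pub.publish(cmd)

if __name__ == "__main__":
    rospy.init_node("teleop_relay")
    TeleopRelay()
    rospy.spin()
```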


Author(s):  
Yili Ren ◽  
Zi Wang ◽  
Sheng Tan ◽  
Yingying Chen ◽  
Jie Yang

WiFi human sensing has become increasingly attractive for enabling emerging human-computer interaction applications. The corresponding techniques have gradually evolved from classifying multiple activity types to more fine-grained tracking of 3D human poses. However, existing WiFi-based 3D human pose tracking is limited to a set of predefined activities. In this work, we present Winect, a 3D human pose tracking system for free-form activity using commodity WiFi devices. Our system tracks free-form activity by estimating a 3D skeleton pose that consists of a set of joints of the human body. In particular, we combine signal separation and joint movement modeling to achieve free-form activity tracking. Our system first identifies the moving limbs by leveraging the two-dimensional angle of arrival of the signals reflected off the human body and separates the entangled signals for each limb. Then, it tracks each limb and constructs a 3D skeleton of the body by modeling the inherent relationship between the movements of the limb and the corresponding joints. Our evaluation results show that Winect is environment-independent and achieves centimeter-level accuracy for free-form activity tracking in various challenging environments, including non-line-of-sight (NLoS) scenarios.
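For intuition on the angle-of-arrival step mentioned above, the sketch below estimates an arrival angle from the phase difference between two receive antennas of a WiFi device. The carrier frequency, antenna spacing, and measured phase value are illustrative assumptions, and this one-dimensional estimate is a simplification of the two-dimensional AoA estimation Winect actually performs.

```python
import numpy as np

# Illustrative constants: 5 GHz WiFi carrier, half-wavelength antenna spacing.
C = 3e8                      # speed of light (m/s)
FREQ = 5.18e9                # carrier frequency (Hz)
WAVELENGTH = C / FREQ        # ~5.8 cm
SPACING = WAVELENGTH / 2     # distance between adjacent antennas (m)

def aoa_from_phase(delta_phi):
    """Angle of arrival (degrees) from the phase difference (radians)
    measured between two adjacent antennas of a linear array."""
    sin_theta = delta_phi * WAVELENGTH / (2 * np.pi * SPACING)
    return np.degrees(np.arcsin(np.clip(sin_theta, -1.0, 1.0)))

# Hypothetical measured phase difference for a reflection off a moving limb.
print(f"estimated AoA: {aoa_from_phase(0.9):.1f} degrees")
```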

