A Body Model Server for Human Motion Capture and Representation

1996 ◽  
Vol 5 (4) ◽  
pp. 381-392 ◽  
Author(s):  
Joshua Bers

This paper presents a body model server (BMS) that provides real-time access to the position and posture of a person's torso, arms, hands, head, and eyes. It can be accessed by clients over a network. The BMS is designed to function as a device-independent data layer between the sensing devices and client applications that require real-time human motion data, such as animation control. It can provide clients with accurate information at up to 40 Hz. For data collection, the model uses four magnetic position/orientation sensors, two data-gloves, and an eye-tracker. The BMS combines the data streams from the sensors and transforms them into snapshots of the user's upper-body pose. A geometric model made up of joints and segments structures the input. Posture of the body is represented by joint angles. Two unique characteristics of our approach are the use of the implicit, geometric constraints of the sensed body to simplify the computation of the unmeasured joint angles, and the use of time-stamped data that allow synchronization with other data streams, e.g., speech input. This paper describes the architecture of the BMS, including the management of multiple input devices, the representation and computation of the position and joint angle data, and the client-server interface.
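As a generic illustration of the time-stamped merging idea, each sensor stream can be sampled at its reading nearest to a common snapshot time. This is a minimal sketch, not the BMS implementation; all names here are hypothetical:

```python
from bisect import bisect_left

def nearest_sample(timestamps, t):
    """Return the index of the sample whose timestamp is closest to t."""
    i = bisect_left(timestamps, t)
    if i == 0:
        return 0
    if i == len(timestamps):
        return len(timestamps) - 1
    # pick whichever neighbor is closer in time
    return i if timestamps[i] - t < t - timestamps[i - 1] else i - 1

def merge_snapshot(streams, t):
    """Build one pose snapshot at time t from several time-stamped streams.

    streams: dict name -> (sorted timestamps, samples); returns dict name -> sample.
    """
    return {name: samples[nearest_sample(ts, t)]
            for name, (ts, samples) in streams.items()}
```

The same timestamp lookup would let a client align the pose stream with, say, a speech-input stream.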

Author(s):  
Hyun-Jung Kwon ◽  
Hyun-Joon Chung ◽  
Yujiang Xiang

The objective of this study was to develop a discomfort function for incorporating a high degree-of-freedom (DOF) upper body model during walking. A multi-objective optimization (MOO) method was formulated by minimizing dynamic effort and the discomfort function simultaneously. The discomfort function is defined as the sum of the squares of the deviations of the joint angles from their neutral positions. The dynamic effort is the sum of the squared joint torques. To investigate the efficacy of the proposed MOO method, a backward walking simulation was conducted. By minimizing both dynamic effort and the discomfort function, a 3D whole-body model with a high-DOF upper body was demonstrated successfully for walking.


Sensor Review ◽  
2017 ◽  
Vol 37 (1) ◽  
pp. 101-109 ◽  
Author(s):  
Ye Chen ◽  
Zhelong Wang

Purpose: Existing studies on human activity recognition using inertial sensors mainly address single activities. However, human activities are often concurrent: a person could be walking while brushing their teeth, or lying down while making a call. The purpose of this paper is to explore an effective way to recognize concurrent activities.
Design/methodology/approach: Concurrent activities usually involve behaviors from different parts of the body, dominated mainly by the lower limbs and upper body. For this reason, a hierarchical method based on artificial neural networks (ANNs) is proposed to classify them. At the lower level, the state of the lower limbs to which a concurrent activity belongs is first recognized by one ANN using simple features. The upper-level system then distinguishes between upper-limb movements and infers the specific concurrent activity using features processed by principal component analysis.
Findings: An experiment was conducted to collect realistic data from five sensor nodes placed on subjects' wrist, arm, thigh, ankle and chest. Experimental results indicate that the proposed hierarchical method can distinguish between 14 concurrent activities with a high classification rate of 92.6 per cent, significantly outperforming the single-level recognition method.
Practical implications: In the future, this research may play an important role in areas such as daily behavior monitoring, smart assisted living, postoperative rehabilitation and eldercare support.
Originality/value: To provide more accurate information on people's behaviors, human concurrent activities are discussed and effectively recognized using a hierarchical method.


Robotica ◽  
2001 ◽  
Vol 19 (6) ◽  
pp. 601-610 ◽  
Author(s):  
Jihong Lee ◽  
Insoo Ha

In this paper we propose a set of techniques for real-time motion capture of the human body. The proposed motion capture system is based on low-cost accelerometers and identifies the body configuration by extracting gravity-related terms from the sensor data. One sensor unit is composed of three accelerometers arranged orthogonally to each other and can identify the two rotation angles of a joint with two degrees of freedom. A geometric fusion technique is applied to cope with the uncertainty of the sensor data. A practical calibration technique is also proposed to handle errors in aligning the sensing axes with the coordinate axes. For the case where motion acceleration is not negligible compared with gravitational acceleration, a compensation technique that extracts the gravity component from the sensor data is proposed. Experimental results are included, both for the individual techniques and for full human motion capture rendered with graphics.
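The gravity-based angle extraction can be illustrated with the standard tilt equations. This is a minimal sketch assuming a static sensor and negligible motion acceleration; it omits the paper's fusion, calibration, and compensation steps:

```python
import math

def tilt_from_gravity(ax, ay, az):
    """Estimate roll and pitch (radians) from a static 3-axis accelerometer reading.

    Assumes the only sensed acceleration is gravity; a common convention is
    roll about x and pitch about y, with z pointing up at rest.
    """
    roll = math.atan2(ay, az)
    pitch = math.atan2(-ax, math.hypot(ay, az))
    return roll, pitch
```

A yaw angle cannot be recovered this way, since rotation about the gravity vector leaves the accelerometer reading unchanged; hence the two (not three) angles per sensor unit quoted above.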


2021 ◽  
Author(s):  
Patrick Slade ◽  
Ayman Habib ◽  
Jennifer L. Hicks ◽  
Scott L. Delp

Abstract Analyzing human motion is essential for diagnosing movement disorders and guiding rehabilitation interventions for conditions such as osteoarthritis, stroke, and Parkinson’s disease. Optical motion capture systems are the current standard for estimating kinematics but require expensive equipment located in a predefined space. While wearable sensor systems can estimate kinematics in any environment, existing systems are generally less accurate than optical motion capture. Further, many wearable sensor systems require a computer in close proximity and rely on proprietary software, making it difficult for researchers to reproduce experimental findings. Here, we present OpenSenseRT, an open-source and wearable system that estimates upper and lower extremity kinematics in real time by using inertial measurement units and a portable microcontroller. We compared the OpenSenseRT system to optical motion capture and found an average RMSE of 4.4 degrees across 5 lower-limb joint angles during three minutes of walking (n = 5) and an average RMSE of 5.6 degrees across 8 upper extremity joint angles during a Fugl-Meyer task (n = 5). The open-source software and hardware are scalable, tracking between 1 and 14 body segments, with one sensor per segment. Kinematics are estimated in real time using a musculoskeletal model and inverse kinematics solver. The computation frequency depends on the number of tracked segments but is sufficient for real-time measurement for many tasks of interest; for example, the system can track up to 7 segments at 30 Hz in real time. The system uses off-the-shelf parts costing approximately $100 USD plus $20 for each tracked segment. The OpenSenseRT system is accurate, low-cost, and simple to replicate, enabling movement analysis in labs, clinics, homes, and free-living settings.
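The reported accuracy figures are RMSE values over joint-angle trajectories. For reference, the metric itself (not the OpenSenseRT pipeline) is simply:

```python
import math

def rmse(estimated, reference):
    """Root-mean-square error between two equal-length joint-angle series (degrees)."""
    assert len(estimated) == len(reference) and estimated
    return math.sqrt(sum((e - r) ** 2 for e, r in zip(estimated, reference))
                     / len(estimated))
```

Per the abstract, this would be computed per joint against the optical-capture trajectory and then averaged across joints.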


Author(s):  
Derek Lura ◽  
Stephanie Carey ◽  
Rajiv Dubey

This paper details an automated process for creating a robotic model of a subject’s upper body from motion analysis data of the subject performing simple range of motion (RoM) tasks. The upper body model was created by calculating subject-specific kinematics using functional joint center (FJC) methods, which makes the model highly accurate. The subjects’ kinematics were then used to find robotic parameters, allowing the robotic model to be computed directly from motion analysis data. The RoM tasks provide the joint motion necessary to ensure the accuracy of the FJC method. Model creation was tested on five healthy adult male subjects, with data collected using an eight-camera Vicon© (Oxford, UK) motion analysis system. Common anthropometric measures were also taken manually for comparison with the FJC kinematic measures calculated from marker position data. The algorithms successfully generated models for each subject based on the recorded RoM task data. The generated model parameters were analyzed against the manual measures to determine the correlations. Methods for replacing model parameters extracted from the motion analysis data with hand measurements are presented. The accuracy of the model-generating algorithm was tested by reconstructing motion using the parameters and joint angles extracted from the RoM task data, the correlated manual measurements, and height-based correlations from the literature. Error was defined as the average difference between the recorded and reconstructed positions and orientations of the hand. For all of the tested subjects, the model generated from the RoM task data showed the least average error over the tested trials. The position errors differed significantly, with the FJC-generated model being the most accurate, followed by the correlated measurement data, and finally the height-based calculations.
No difference was found between the end-effector orientations of the generated models. The models developed in this study will be used with additional subject tasks in order to better predict human motion.


2017 ◽  
Vol 14 (01) ◽  
pp. 1650025
Author(s):  
Hyun-Jung Kwon ◽  
Hyun-Joon Chung ◽  
Yujiang Xiang

To predict the 3D walking pattern of a human, a detailed upper body model that includes the spine, shoulders, and neck must be built, which is challenging because of the coupling among the degrees of freedom (DOF) in these body sections. The objective of this study was to develop a discomfort function for incorporating a high-DOF upper body model during walking. A multi-objective optimization (MOO) method was formulated by minimizing dynamic effort (DE) and the discomfort function simultaneously. The discomfort function is defined as the sum of the squares of the deviations of the joint angles from their neutral positions. The neutral position is defined as a relaxed human posture without actively applied external forces. The DE is the sum of the squared joint torques. To illustrate the capability of including a high-DOF upper body, backward walking is used as an example. By minimizing both DE and the discomfort function, a 3D whole-body model with a high-DOF upper body was simulated successfully for walking. The proposed MOO formulation is a promising human performance measure for predicting human motion with a high-DOF upper body over its full range of motion, as demonstrated by simulating backward walking, lifting, and ingress motions.
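The two objective terms as defined above can be written down directly. This is a minimal sketch; the weighted scalarization and the weight w are assumptions for illustration, since the abstract does not state how the two objectives are combined:

```python
def discomfort(joint_angles, neutral_angles):
    """Sum of squared deviations of joint angles from their neutral positions."""
    return sum((q - qn) ** 2 for q, qn in zip(joint_angles, neutral_angles))

def dynamic_effort(joint_torques):
    """Sum of squared joint torques."""
    return sum(tau ** 2 for tau in joint_torques)

def moo_cost(joint_angles, neutral_angles, joint_torques, w=0.5):
    """Hypothetical scalarized multi-objective cost; the weight w is an assumption."""
    return (w * dynamic_effort(joint_torques)
            + (1 - w) * discomfort(joint_angles, neutral_angles))
```

In the actual formulation these terms would be integrated over the gait cycle and minimized subject to the walking dynamics constraints.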


2012 ◽  
Vol 09 (02) ◽  
pp. 1250010 ◽  
Author(s):  
Keng Peng Tee ◽ 
Rui Yan ◽ 
Yuanwei Chua ◽ 
Zhiyong Huang ◽ 
Haizhou Li

A method of computing humanoid robot joint angles from human motion data is presented in this paper. The proposed method groups the motors of an upper-body humanoid robot into pan-tilt and spherical modules, solves the inverse kinematics (IK) problem for each module, and finally resolves the coordinate transformations among the modules to yield the full solution. Scaling of the obtained joint angles and velocities is performed to ensure that their limits are satisfied and their smoothness preserved. To address the robustness-accuracy tradeoff when handling kinematic singularities, we design an adaptive regularization parameter that is active only when the robot is operating near a singular configuration. This adaptive algorithm is provably robust and is simple and fast to compute. Simulation on a seven-degree-of-freedom (DOF) robot arm shows that tracking accuracy is slightly reduced in a neighborhood of a singularity to gain robustness, but high accuracy is recovered outside this neighborhood. Experimentation on a 17-DOF upper-body humanoid robot confirms that user-demonstrated gestures are closely imitated by the robot. The proposed method outperforms state-of-the-art Jacobian-based IK in terms of overall imitation accuracy, while guaranteeing robust and smoothly scaled trajectories. It is ideally suited for applications such as humanoid robot teleoperation or programming by demonstration.
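The adaptive-regularization idea resembles damped least squares with singularity-dependent damping. Below is a minimal planar two-link sketch under that reading; the damping schedule, thresholds, and singularity measure are illustrative assumptions, not the paper's algorithm:

```python
import math

def jacobian_2link(q1, q2, l1=1.0, l2=1.0):
    """Analytic Jacobian of a planar 2-link arm (link lengths are illustrative)."""
    s1, c1 = math.sin(q1), math.cos(q1)
    s12, c12 = math.sin(q1 + q2), math.cos(q1 + q2)
    return [[-l1 * s1 - l2 * s12, -l2 * s12],
            [ l1 * c1 + l2 * c12,  l2 * c12]]

def adaptive_dls_step(J, dx, eps=0.1, lam_max=0.05):
    """One damped-least-squares IK step, dq = J^T (J J^T + lam^2 I)^-1 dx.

    The damping lam is nonzero only near a singularity, judged here by a small
    |det J| (eps and lam_max are illustrative values).
    """
    a, b = J[0]
    c, d = J[1]
    det = a * d - b * c                       # proxy for distance to singularity
    lam = 0.0 if abs(det) >= eps else lam_max * (1.0 - (abs(det) / eps) ** 2)
    # Form the 2x2 symmetric matrix J J^T + lam^2 I and solve it directly.
    m00 = a * a + b * b + lam ** 2
    m01 = a * c + b * d
    m11 = c * c + d * d + lam ** 2
    dm = m00 * m11 - m01 * m01
    y0 = ( m11 * dx[0] - m01 * dx[1]) / dm    # y = (J J^T + lam^2 I)^-1 dx
    y1 = (-m01 * dx[0] + m00 * dx[1]) / dm
    return [a * y0 + c * y1, b * y0 + d * y1]  # dq = J^T y
```

Far from a singularity lam is zero and the step reduces to the exact least-squares solution, matching the abstract's observation that full accuracy is recovered outside the singular neighborhood.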


Author(s):  
Lakshmi Praneetha

Nowadays, data streams are massive and fast-changing. Applications of data streams range from basic scientific ones to critical business and financial ones. In the online phase, useful information is abstracted from the stream and represented in the form of micro-clusters; in the offline phase, micro-clusters are merged to form macro-clusters. The DBSTREAM technique captures the density between micro-clusters by means of a shared density graph in the online phase. The density data in this graph is then used during reclustering to improve the formation of clusters, but DBSTREAM is slow at handling corrupted data points. In this paper, an early pruning algorithm is applied before pre-processing of the information, and a Bloom filter is used to recognize corrupted records. Our experiments on real-time datasets show that this approach improves the efficiency of macro-clusters by 90% and generates more micro-clusters within a short time.
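A Bloom filter offers a constant-time membership test with no false negatives, which is what makes flagging corrupted records cheap before pre-processing. This generic sketch shows the idea; it is not the paper's implementation, and the sizing parameters are illustrative:

```python
import hashlib

class BloomFilter:
    """Simple Bloom filter: no false negatives, tunable false-positive rate."""

    def __init__(self, size=1024, hashes=3):
        self.size = size
        self.hashes = hashes
        self.bits = 0  # bit array packed into one int

    def _positions(self, item):
        # Derive k bit positions from salted SHA-256 digests of the item.
        for i in range(self.hashes):
            digest = hashlib.sha256(f"{i}:{item}".encode()).hexdigest()
            yield int(digest, 16) % self.size

    def add(self, item):
        for p in self._positions(item):
            self.bits |= 1 << p

    def __contains__(self, item):
        return all(self.bits >> p & 1 for p in self._positions(item))
```

An incoming record identifier found in the filter can be pruned early, at the cost of a small, tunable false-positive rate.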


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Myoung Hoon Jung ◽  
Kak Namkoong ◽  
Yeolho Lee ◽  
Young Jun Koh ◽  
Kunsun Eom ◽  
...  

Abstract Bioelectrical impedance analysis (BIA) is used to analyze human body composition by applying a small alternating current through the body and measuring the impedance. The smaller the electrode of a BIA device, the larger the impedance measurement error due to the contact resistance between the electrode and human skin. Therefore, most commercial BIA devices utilize electrodes that are large enough (i.e., 4 × 1400 mm²) to counteract the contact resistance effect. We propose a novel method of compensating for contact resistance by performing 4-point and 2-point measurements alternately such that body impedance can be accurately estimated even with considerably smaller electrodes (outer electrodes: 68 mm²; inner electrodes: 128 mm²). Additionally, we report the use of a wrist-wearable BIA device with single-finger contact measurement and clinical test results from 203 participants at Seoul St. Mary’s Hospital. The correlation coefficient and standard error of estimate of percentage body fat were 0.899 and 3.76%, respectively, in comparison with dual-energy X-ray absorptiometry. This result exceeds the performance level of a commercial upper-body portable body fat analyzer (Omron HBF-306). With a measurement time of 7 s, this sensor technology is expected to open a new possibility for wearable bioelectrical impedance analyzers in obesity management.
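The alternating 4-point/2-point idea can be illustrated with a toy circuit model. This is an illustration only, not the published compensation algorithm: it assumes equal contact resistances at all electrodes, a 2-point reading equal to body impedance plus two contact resistances, and a 4-point reading attenuated by the divider that the contact resistance forms with the amplifier input impedance z_in:

```python
def compensate_body_impedance(z4_measured, z2_measured, z_in=1e7):
    """Toy contact-resistance compensation from alternating 4-point/2-point readings.

    z4_measured: 4-point reading, assumed attenuated by the divider r_contact/z_in.
    z2_measured: 2-point reading, assumed to include two contact resistances in series.
    """
    # Rough contact-resistance estimate: the 2-point reading exceeds the
    # 4-point one by approximately two contact resistances.
    r_contact = (z2_measured - z4_measured) / 2.0
    # Undo the divider attenuation of the 4-point reading.
    return z4_measured * (z_in + r_contact) / z_in
```

Under this toy model the correction recovers the true body impedance to within a small residual, which shrinks as z_in grows relative to the contact resistance.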


Author(s):  
Jahwan Koo ◽  
Nawab Muhammad Faseeh Qureshi ◽  
Isma Farah Siddiqui ◽  
Asad Abbas ◽  
Ali Kashif Bashir

Abstract Real-time data streaming fetches live sensory segments of a dataset in a heterogeneous distributed computing environment. This process assembles data chunks at a rapid encapsulation rate through a streaming technique that bundles sensor segments into multiple micro-batches and extracts them into a repository. Recently, the acquisition process has been enhanced with an additional feature for exchanging IoT devices’ datasets, comprised of two components: (i) sensory data and (ii) metadata. The body of sensory data includes record information, and the metadata part consists of logs, heterogeneous events, and routing path tables for transmitting micro-batch streams into the repository. The real-time acquisition procedure uses a Directed Acyclic Graph (DAG) to extract live query outcomes from in-place micro-batches through MapReduce stages and returns a result set. However, several bottlenecks affect performance during execution: (i) formation of homogeneous micro-batches only, (ii) complexity of dataset diversification, (iii) processing of heterogeneous data tuples, and (iv) linear DAG workflows only. As a result, it incurs large processing latency and the additional cost of extracting event-enabled IoT datasets. Thus, a Spark cluster that processes Resilient Distributed Datasets (RDDs) at a fast pace in random-access memory (RAM) falls short of the expected robustness when processing IoT streams in a distributed computing environment. This paper presents an IoT-enabled Directed Acyclic Graph (I-DAG) technique that labels micro-batches at the stage of building a stream event and arranges stream elements with event labels. In the next step, heterogeneous stream events are processed through the I-DAG workflow, whose non-linear DAG operation extracts query results in a Spark cluster. The performance evaluation shows that I-DAG resolves the homogeneous stream-event issue and provides an effective solution for heterogeneous stream events over IoT-enabled datasets in Spark clusters.
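The labeling step can be illustrated generically: incoming heterogeneous stream elements are tagged by event type so that each type can be routed through its own workflow branch. This is a minimal sketch with hypothetical field names; the actual I-DAG operates on Spark micro-batches:

```python
from collections import defaultdict

def label_microbatch(events):
    """Group a micro-batch of heterogeneous stream events by an event label.

    Each event is a dict with a 'type' key (e.g. a sensory record vs. metadata);
    the grouping mirrors the idea of labeling stream elements before routing
    them through separate workflow branches.
    """
    batches = defaultdict(list)
    for event in events:
        batches[event["type"]].append(event)
    return dict(batches)
```

Each labeled group can then be handed to a different branch of a non-linear workflow rather than forcing all tuples through one linear pipeline.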

