Mathematical function data model analysis and synthesis system based on short-term human movement

2021 ◽  
Vol 0 (0) ◽  
Author(s):  
Xuemei An ◽  
Rui Yang ◽  
Daniyal M. Alghazzawi ◽  
Nympha Rita Joseph

Abstract The paper proposes a data model analysis algorithm for human motion function based on short-term behaviour. The algorithm uses a functional data analysis (FDA) method to perform Fourier fitting on the motion data and extracts an approximate single-period segment from the fit. Finally, the algorithm depicts the internal variation of the motion in a low-dimensional space. The study found that the characteristic motion data produced by the algorithm is smooth, and the accompanying case analysis verifies the algorithm's effectiveness.
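The abstract does not give the fitting procedure in detail; a minimal sketch of the Fourier-fitting step it describes is a least-squares fit of a truncated Fourier series to periodic motion samples (the harmonic count, period, and signal below are illustrative assumptions):

```python
import numpy as np

def fourier_fit(t, y, n_harmonics, period):
    """Least-squares fit of a truncated Fourier series to periodic samples."""
    omega = 2 * np.pi / period
    # Design matrix: constant term plus a cos/sin pair for each harmonic.
    cols = [np.ones_like(t)]
    for k in range(1, n_harmonics + 1):
        cols.append(np.cos(k * omega * t))
        cols.append(np.sin(k * omega * t))
    A = np.column_stack(cols)
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef, A @ coef  # coefficients and the smooth fitted curve

# Example: a noisy single-harmonic "joint angle" signal over two periods.
rng = np.random.default_rng(0)
t = np.linspace(0, 2.0, 200)
y = 0.5 + 1.2 * np.cos(2 * np.pi * t) + 0.05 * rng.standard_normal(t.size)
coef, y_fit = fourier_fit(t, y, n_harmonics=3, period=1.0)
```

Because the fitted curve is a finite sum of smooth basis functions, the extracted single-period data is smooth by construction, which matches the property the abstract reports.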

2021 ◽  
Vol 0 (0) ◽  
Author(s):  
Yunlong Ma ◽  
Sanaa Sharaf ◽  
Basel Jamal Ali

Abstract The article proposes a human motion capture method based on operational data. The paper first uses a body-worn sensor system to perform functional processing on the captured periodic motion data, and then extracts the data sequence for each motion. Thereafter, vector-based classification is carried out according to the characteristics of the periodic data. Experiments show that the functional data analysis (FDA) algorithm proposed in the paper can accurately identify human motion behaviour, with a recognition rate of up to 98.9% on automatically collected data. The authors therefore conclude that the functional analysis algorithm for human body data achieves higher recognition accuracy than a traditional optical capture system, and that it merits further research and discussion.
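The paper's exact feature set and classifier are not given in the abstract; the sketch below illustrates the general idea of classifying periodic wearable-sensor data from simple periodicity features, using a nearest-centroid rule (the feature choices and toy signals are assumptions, not the paper's method):

```python
import math

def features(signal):
    """Simple periodic-signal features: mean, RMS amplitude, zero-crossing rate."""
    n = len(signal)
    mean = sum(signal) / n
    centered = [x - mean for x in signal]
    rms = math.sqrt(sum(x * x for x in centered) / n)
    crossings = sum(1 for a, b in zip(centered, centered[1:]) if a * b < 0)
    return (mean, rms, crossings / (n - 1))

def nearest_centroid(train, query_signal):
    """Classify a raw signal by Euclidean distance to per-class feature centroids."""
    centroids = {}
    for label, vecs in train.items():
        dim = len(vecs[0])
        centroids[label] = [sum(v[i] for v in vecs) / len(vecs) for i in range(dim)]
    q = features(query_signal)
    return min(centroids,
               key=lambda c: sum((q[i] - centroids[c][i]) ** 2 for i in range(len(q))))

# Toy example: "walk" oscillates faster and with larger amplitude than "sway".
walk = [math.sin(2 * math.pi * 4 * i / 100) for i in range(100)]
sway = [0.2 * math.sin(2 * math.pi * 1 * i / 100) for i in range(100)]
train = {"walk": [features(walk)], "sway": [features(sway)]}
query = [math.sin(2 * math.pi * 4 * i / 100 + 0.3) for i in range(100)]
label = nearest_centroid(train, query)  # phase-shifted walk-like signal
```

A phase-shifted copy of the walking signal keeps the same amplitude and crossing-rate features, so it lands near the "walk" centroid.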


Author(s):  
Diana Mateus ◽  
Christian Wachinger ◽  
Selen Atasoy ◽  
Loren Schwarz ◽  
Nassir Navab

Computer-aided diagnosis is often confronted with processing and analyzing high-dimensional data. One alternative for dealing with such data is dimensionality reduction. This chapter focuses on manifold learning methods that create low-dimensional data representations adapted to a given application. From pairwise non-linear relations between neighboring data points, manifold learning algorithms first approximate, with a graph, the low-dimensional manifold on which the data lies; they then find a non-linear map that embeds this graph into a low-dimensional space. Since the explicit pairwise relations and the neighborhood system can be designed according to the application, manifold learning methods are very flexible and allow easy incorporation of domain knowledge. The authors describe the assumptions and design elements that are crucial to building successful low-dimensional data representations with manifold learning for a variety of applications. In particular, they discuss examples in visualization, clustering, classification, registration, and human-motion modeling.
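The two-stage recipe described above (neighborhood graph, then a non-linear embedding) can be sketched with one concrete instance, Laplacian eigenmaps; the chapter surveys several such methods, so this is an illustrative choice rather than the authors' specific algorithm:

```python
import numpy as np

def laplacian_eigenmap(X, k=5, dim=2):
    """Embed points into `dim` dimensions via a kNN graph Laplacian:
    1) build a symmetric k-nearest-neighbour adjacency from pairwise distances;
    2) take the smallest non-trivial eigenvectors of the graph Laplacian."""
    n = X.shape[0]
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.zeros((n, n))
    for i in range(n):
        nbrs = np.argsort(d2[i])[1:k + 1]  # skip the point itself
        W[i, nbrs] = 1.0
    W = np.maximum(W, W.T)                 # symmetrize the neighbourhood graph
    L = np.diag(W.sum(1)) - W              # unnormalized graph Laplacian
    vals, vecs = np.linalg.eigh(L)
    return vecs[:, 1:dim + 1]              # drop the constant eigenvector

# Example: a noisy circle embedded in 3-D; the map recovers its 2-D structure.
rng = np.random.default_rng(1)
theta = np.linspace(0, 2 * np.pi, 60, endpoint=False)
X = np.column_stack([np.cos(theta), np.sin(theta),
                     0.01 * rng.standard_normal(60)])
Y = laplacian_eigenmap(X, k=4, dim=2)
```

The application-specific flexibility the chapter emphasizes enters exactly where this sketch hard-codes choices: the distance function, the neighborhood rule `k`, and the edge weights `W`.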


2009 ◽  
Vol 06 (02) ◽  
pp. 265-289 ◽  
Author(s):  
BEHZAD DARIUSH ◽  
MICHAEL GIENGER ◽  
ARJUN ARUMBAKKAM ◽  
YOUDING ZHU ◽  
BING JIAN ◽  
...  

Transferring motion from a human demonstrator to a humanoid robot is an important step toward developing robots that are easily programmable and that can replicate or learn from observed human motion. The so-called motion retargeting problem has been well studied, and several off-line solutions exist based on optimization approaches that rely on pre-recorded human motion data collected from a marker-based motion capture system. From the perspective of human-robot interaction, there is growing interest in online motion transfer, particularly without using markers. Such requirements have placed stringent demands on retargeting algorithms and limited the potential use of off-line and pre-recorded methods. To address these limitations, we present an online task-space control-theoretic retargeting formulation that generates robot joint motions adhering to the robot's joint limit constraints, joint velocity constraints, and self-collision constraints. The inputs to the proposed method are low-dimensional normalized human motion descriptors, detected and tracked using a vision-based key-point detection and tracking algorithm. The proposed vision algorithm does not rely on markers placed on anatomical landmarks, nor does it require special instrumentation or calibration. The current implementation requires a depth image sequence, collected from a single time-of-flight imaging device. The feasibility of the proposed approach is shown by means of online experimental results on the Honda humanoid robot — ASIMO.
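A generic stand-in for the kind of constrained task-space step such a retargeting controller takes is a damped-least-squares update with joint-velocity and joint-limit clamping; the damping value, limits, and two-link arm below are illustrative assumptions, not the paper's actual controller:

```python
import numpy as np

def retarget_step(q, J, dx, q_min, q_max, dq_max, damping=1e-2):
    """One damped-least-squares task-space step with joint-velocity and
    joint-position clamping (self-collision handling omitted for brevity)."""
    JJt = J @ J.T + damping * np.eye(J.shape[0])
    dq = J.T @ np.linalg.solve(JJt, dx)        # map the task-space step to joints
    dq = np.clip(dq, -dq_max, dq_max)          # enforce joint velocity limits
    return np.clip(q + dq, q_min, q_max)       # enforce joint position limits

# Example: a 2-link planar arm tracking a small end-effector displacement.
q = np.array([0.3, 0.4])
l1 = l2 = 1.0
J = np.array([[-l1 * np.sin(q[0]) - l2 * np.sin(q[0] + q[1]), -l2 * np.sin(q[0] + q[1])],
              [ l1 * np.cos(q[0]) + l2 * np.cos(q[0] + q[1]),  l2 * np.cos(q[0] + q[1])]])
q_min, q_max = np.array([-1.0, 0.0]), np.array([1.0, 2.0])
q_new = retarget_step(q, J, dx=np.array([0.05, 0.0]),
                      q_min=q_min, q_max=q_max, dq_max=0.1)
```

Clamping after the least-squares solve is the simplest way to guarantee the constraints hold every cycle, which is what makes such a step usable online at fixed rate.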


2015 ◽  
Vol 2015 ◽  
pp. 1-21
Author(s):  
Wanyi Li ◽  
Jifeng Sun

This paper proposes a novel algorithm called low dimensional space incremental learning (LDSIL) to estimate 3D human motion from the silhouettes of multiview images of human motion. The proposed algorithm takes advantage of stochastic extremum memory adaptive searching (SEMAS) and an incremental probabilistic dimension reduction model (IPDRM) to collect new high-dimensional data samples. These samples can be selected to update the mapping from the low-dimensional space to the high-dimensional space, so that incremental learning can be achieved and human motion estimated from a small number of samples. Compared with three traditional algorithms, the proposed algorithm performs well in disambiguating silhouettes, overcoming transient occlusion, and reducing estimation error.
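The SEMAS and IPDRM components are specific to the paper and are not reproduced here; the sketch below only illustrates the generic core idea of incrementally updating a low-to-high-dimensional mapping as new samples arrive, using accumulated sufficient statistics for a linear map (a deliberate simplification of the paper's probabilistic model):

```python
import numpy as np

class IncrementalLinearMap:
    """Incrementally refit a linear map from low-dim z to high-dim x.
    New samples update running sums, so the map improves without
    retraining from scratch (generic illustration, not IPDRM)."""
    def __init__(self, z_dim, x_dim, ridge=1e-6):
        self.A = np.zeros((z_dim, z_dim))   # running sum of z z^T
        self.B = np.zeros((z_dim, x_dim))   # running sum of z x^T
        self.ridge = ridge

    def update(self, z, x):
        self.A += np.outer(z, z)
        self.B += np.outer(z, x)

    def predict(self, z):
        W = np.linalg.solve(self.A + self.ridge * np.eye(self.A.shape[0]), self.B)
        return z @ W

# Example: recover x = z @ M one sample at a time.
rng = np.random.default_rng(2)
M = rng.standard_normal((3, 10))
model = IncrementalLinearMap(3, 10)
for _ in range(200):
    z = rng.standard_normal(3)
    model.update(z, z @ M)
z_test = rng.standard_normal(3)
err = np.abs(model.predict(z_test) - z_test @ M).max()
```

The sample-selection step the abstract mentions would decide which (z, x) pairs are worth feeding to `update`, which is where the incremental scheme saves samples.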


2021 ◽  
Vol 2021 ◽  
pp. 1-9
Author(s):  
Xue Li

With the comprehensive development of national fitness, people of all ages in China have joined the ranks of fitness. To deepen the understanding of human movement, many studies have designed software or hardware to analyze the human movement state. However, the recognition efficiency of existing systems and platforms is low and their reconstruction ability is poor, so a recognition information processing system based on an LSTM recurrent neural network under deep learning is proposed to collect and recognize human motion data. The system realizes the collection, processing, recognition, storage, and display of human motion data through a three-layer human motion recognition information processing architecture, and introduces an LSTM recurrent neural network to optimize the recognition efficiency of the system, simplify the recognition process, and reduce the data loss caused by dimension reduction. Finally, a known dataset is used to train the model, and the performance and application effect of the system are analyzed on actual motion states. The final results show that the LSTM recurrent neural network outperforms the traditional algorithm, with accuracy reaching 0.980, and the confusion matrix shows that the system's recognition of human motion can score up to 85 points. The tests show that the system can recognize and process human movement data well, which has great application significance for future physical education and daily physical exercise.
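The abstract does not specify the network's architecture or dimensions; as background, one forward step of a standard LSTM cell (the building block the system relies on) can be sketched as follows, with the input and hidden sizes chosen purely for illustration:

```python
import numpy as np

def lstm_step(x, h, c, W, U, b):
    """One forward step of a standard LSTM cell: gated update of the
    cell state c and hidden state h from input frame x."""
    def sigmoid(a):
        return 1.0 / (1.0 + np.exp(-a))
    z = W @ x + U @ h + b                  # stacked gate pre-activations
    H = h.size
    i = sigmoid(z[:H])                     # input gate
    f = sigmoid(z[H:2 * H])                # forget gate
    o = sigmoid(z[2 * H:3 * H])            # output gate
    g = np.tanh(z[3 * H:])                 # candidate cell state
    c_new = f * c + i * g
    h_new = o * np.tanh(c_new)
    return h_new, c_new

# Run a short sequence of 6-D "sensor frames" through a 4-unit cell.
rng = np.random.default_rng(3)
X_DIM, H_DIM = 6, 4
W = rng.standard_normal((4 * H_DIM, X_DIM)) * 0.1
U = rng.standard_normal((4 * H_DIM, H_DIM)) * 0.1
b = np.zeros(4 * H_DIM)
h = c = np.zeros(H_DIM)
for frame in rng.standard_normal((10, X_DIM)):
    h, c = lstm_step(frame, h, c, W, U, b)
```

The forget gate `f` is what lets the cell carry information across many frames of a motion sequence, which is why LSTMs suit this kind of temporal recognition task.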


Author(s):  
Therdsak Tangkuampien ◽  
David Suter

A marker-less motion capture system based on machine learning is proposed and tested. Pose information is inferred from images captured by multiple (as few as two) synchronized cameras. The central concept is what we call Kernel Subspace Mapping (KSM). The images-to-pose learning could be done with large numbers of images of a wide variety of people (with the ground-truth poses accurately known). Of course, obtaining the ground-truth poses can be problematic; here we choose to use synthetic data, both for learning and for at least some of the testing. The system needs to generalize well to novel inputs: unseen poses (not in the training database) and unseen actors. For the learning we use a generic, relatively low-fidelity computer-graphics model, and for testing we sometimes use a more accurate model (made to resemble the first author). What makes machine learning viable for human motion capture is that a high percentage of human motion is coordinated. Indeed, it is now relatively well known that there is large redundancy in the set of possible images of a human (these images form some sort of relatively smooth low-dimensional manifold in the huge-dimensional space of all possible images) and in the set of pose angles (again, a low-dimensional, smooth sub-manifold of the moderately high-dimensional space of all possible joint angles). KSM is based on the Kernel PCA (KPCA) algorithm, which is costly. We show that the Greedy Kernel PCA (GKPCA) algorithm can be used to speed up KSM with relatively minor modifications. At the core, then, are two KPCAs (or two GKPCAs): one for learning the pose manifold and one for learning the image manifold. A modification of Locally Linear Embedding (LLE) then bridges the pose and image manifolds.
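The KPCA building block at the core of KSM can be sketched directly; the RBF kernel, its bandwidth, and the random data below are illustrative assumptions, and the greedy GKPCA speed-up is omitted for clarity:

```python
import numpy as np

def kernel_pca(X, n_components=2, gamma=1.0):
    """Plain RBF-kernel PCA: form the Gram matrix, double-center it,
    and project onto its leading eigenvectors."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-gamma * d2)                      # RBF Gram matrix
    n = K.shape[0]
    one = np.full((n, n), 1.0 / n)
    Kc = K - one @ K - K @ one + one @ K @ one   # center in feature space
    vals, vecs = np.linalg.eigh(Kc)
    idx = np.argsort(vals)[::-1][:n_components]  # largest eigenvalues first
    # Scale each component by the square root of its eigenvalue.
    return vecs[:, idx] * np.sqrt(np.maximum(vals[idx], 0))

rng = np.random.default_rng(4)
X = rng.standard_normal((50, 8))      # stand-in for image or pose features
Z = kernel_pca(X, n_components=3, gamma=0.1)
```

The cost the abstract mentions is visible here: the Gram matrix and its eigendecomposition scale with the number of training samples, which is exactly what a greedy subset method like GKPCA attacks.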


Author(s):  
P Loslever

This paper discusses two main problems of human motion data: their uncertainty and their analysis. For the first point, a simulation method is proposed to assess the error. This approach is applied to a joint angle computed from the positions of points obtained through a three-dimensional video-computer system. For the second point, a multivariate methodology based on appropriate data coding and the correspondence factor analysis method is proposed. Its outcomes allow the relations within the time windows of the variable set, the distances within the observation set, and the correspondences between these two sets to be shown graphically. To illustrate this approach, two examples are considered: the analysis of the low back-pelvis angle in an ergonomic study of sitting posture, and the analysis of joint angles during gait.
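The error-assessment idea, simulating how marker position noise propagates into a computed joint angle, can be sketched with a Monte Carlo loop; the 2-D marker layout, noise level, and trial count below are illustrative assumptions:

```python
import math
import random
import statistics

def joint_angle(a, b, c):
    """Angle (degrees) at point b formed by segments b->a and b->c."""
    v1 = (a[0] - b[0], a[1] - b[1])
    v2 = (c[0] - b[0], c[1] - b[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    return math.degrees(math.acos(dot / (math.hypot(*v1) * math.hypot(*v2))))

def simulate_angle_error(a, b, c, sigma, trials=5000, seed=0):
    """Propagate isotropic Gaussian marker noise through the angle computation."""
    rng = random.Random(seed)
    def jitter(p):
        return (p[0] + rng.gauss(0, sigma), p[1] + rng.gauss(0, sigma))
    angles = [joint_angle(jitter(a), jitter(b), jitter(c)) for _ in range(trials)]
    return statistics.mean(angles), statistics.stdev(angles)

# Knee-like configuration: true angle 90 degrees, 2 mm marker noise,
# segment lengths of 300 mm.
mean_deg, sd_deg = simulate_angle_error((0, 0), (0, 300), (300, 300), sigma=2.0)
```

The standard deviation returned by the simulation is the kind of uncertainty estimate the paper proposes to attach to measured joint angles before any further analysis.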


Author(s):  
Seungmoon Song ◽  
Łukasz Kidziński ◽  
Xue Bin Peng ◽  
Carmichael Ong ◽  
Jennifer Hicks ◽  
...  

Abstract Modeling human motor control and predicting how humans will move in novel environments is a grand scientific challenge. Researchers in the fields of biomechanics and motor control have proposed and evaluated motor control models via neuromechanical simulations, which produce physically correct motions of a musculoskeletal model. Typically, researchers have developed control models that encode physiologically plausible motor control hypotheses and compared the resulting simulation behaviors to measurable human motion data. While such plausible control models were able to simulate and explain many basic locomotion behaviors (e.g. walking, running, and climbing stairs), modeling higher-layer controls (e.g. processing environment cues, planning long-term motion strategies, and coordinating basic motor skills to navigate in dynamic and complex environments) remains a challenge. Recent advances in deep reinforcement learning lay a foundation for modeling these complex control processes and controlling a diverse repertoire of human movement; however, reinforcement learning has rarely been applied in neuromechanical simulation to model human control. In this paper, we review the current state of neuromechanical simulations, along with the fundamentals of reinforcement learning, as it applies to human locomotion. We also present a scientific competition and accompanying software platform, which we have organized to accelerate the use of reinforcement learning in neuromechanical simulations. This "Learn to Move" competition was an official competition at the NeurIPS conference from 2017 to 2019 and attracted over 1300 teams from around the world. Top teams adapted state-of-the-art deep reinforcement learning techniques and produced motions, such as quick turning and walk-to-stand transitions, that had not been demonstrated before in neuromechanical simulations without utilizing reference motion data.
We close with a discussion of future opportunities at the intersection of human movement simulation and reinforcement learning, and of our plans to extend the Learn to Move competition to further facilitate interdisciplinary collaboration in modeling human motor control for biomechanics and rehabilitation research.




2011 ◽  
Vol 131 (3) ◽  
pp. 267-274 ◽  
Author(s):  
Noboru Tsunashima ◽  
Yuki Yokokura ◽  
Seiichiro Katsura
