Real-Time Prediction of Joint Forces by Motion Capture and Machine Learning

Sensors ◽  
2020 ◽  
Vol 20 (23) ◽  
pp. 6933
Author(s):  
Georgios Giarmatzis ◽  
Evangelia I. Zacharaki ◽  
Konstantinos Moustakas

Conventional biomechanical modelling approaches involve solving large systems of equations that encode the complex mathematical representation of human motion and skeletal structure. To improve stability and computational speed, which are common bottlenecks in current approaches, we apply machine learning to train surrogate models that predict, in near real time, previously calculated medial and lateral knee contact forces (KCFs) of 54 young and elderly participants during treadmill walking at speeds of 3 to 7 km/h. Predictions are obtained by fusing optical motion capture and musculoskeletal-modeling-derived kinematic and force variables into regression models using artificial neural networks (ANNs) and support vector regression (SVR). Training schemes included either data from all subjects (LeaveTrialsOut) or only from a portion of them (LeaveSubjectsOut), with and without ground reaction forces (GRFs) in the dataset. Results identify ANNs as the best-performing predictor of KCFs, both in terms of Pearson R (0.89–0.98 for LeaveTrialsOut and 0.45–0.85 for LeaveSubjectsOut) and percentage normalized root mean square error (0.67–2.35 for LeaveTrialsOut and 1.6–5.39 for LeaveSubjectsOut). When GRFs were omitted from the dataset, no substantial decrease in prediction power was observed for either model. Our findings showcase the strength of ANNs in simultaneously predicting multi-component KCFs during walking at different speeds, even in the absence of GRFs, which is particularly applicable to real-time applications that use knee loading conditions to guide and treat patients.
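As an illustration of the surrogate-modelling setup described above, the following Python sketch contrasts trial-wise (LeaveTrialsOut) and subject-wise (LeaveSubjectsOut) cross-validation for an ANN regressor; the feature layout, network size, and synthetic data are assumptions, not the authors' pipeline.

```python
# Hypothetical sketch: trial-wise vs. subject-wise evaluation of an ANN
# surrogate for knee contact forces. Shapes and feature names are assumptions.
import numpy as np
from scipy.stats import pearsonr
from sklearn.model_selection import KFold, GroupKFold
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(5400, 30))        # e.g. joint angles, moments, (optionally) GRFs
y = rng.normal(size=(5400, 2))         # medial and lateral KCF targets
subjects = rng.integers(0, 54, 5400)   # subject ID per sample

def evaluate(splits):
    r_vals, nrmse_vals = [], []
    for train, test in splits:
        model = make_pipeline(StandardScaler(),
                              MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=1000))
        model.fit(X[train], y[train])
        pred = model.predict(X[test])
        for k in range(y.shape[1]):
            r_vals.append(pearsonr(y[test, k], pred[:, k])[0])
            rmse = np.sqrt(np.mean((y[test, k] - pred[:, k]) ** 2))
            nrmse_vals.append(100 * rmse / (y[test, k].max() - y[test, k].min()))
    return np.mean(r_vals), np.mean(nrmse_vals)

# "LeaveTrialsOut": trials of every subject may appear in both train and test sets.
print(evaluate(KFold(n_splits=5, shuffle=True, random_state=0).split(X)))
# "LeaveSubjectsOut": entire subjects are held out, a harder generalization test.
print(evaluate(GroupKFold(n_splits=5).split(X, y, groups=subjects)))
```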

2021 ◽  
Author(s):  
Patrick Slade ◽  
Ayman Habib ◽  
Jennifer L. Hicks ◽  
Scott L. Delp

Abstract: Analyzing human motion is essential for diagnosing movement disorders and guiding rehabilitation interventions for conditions such as osteoarthritis, stroke, and Parkinson’s disease. Optical motion capture systems are the current standard for estimating kinematics but require expensive equipment located in a predefined space. While wearable sensor systems can estimate kinematics in any environment, existing systems are generally less accurate than optical motion capture. Further, many wearable sensor systems require a computer in close proximity and rely on proprietary software, making it difficult for researchers to reproduce experimental findings. Here, we present OpenSenseRT, an open-source and wearable system that estimates upper and lower extremity kinematics in real time by using inertial measurement units and a portable microcontroller. We compared the OpenSenseRT system to optical motion capture and found an average RMSE of 4.4 degrees across 5 lower-limb joint angles during three minutes of walking (n = 5) and an average RMSE of 5.6 degrees across 8 upper extremity joint angles during a Fugl-Meyer task (n = 5). The open-source software and hardware are scalable, tracking between 1 and 14 body segments with one sensor per segment. Kinematics are estimated in real time using a musculoskeletal model and an inverse kinematics solver. The computation frequency depends on the number of tracked segments but is sufficient for real-time measurement of many tasks of interest; for example, the system can track up to 7 segments at 30 Hz in real time. The system uses off-the-shelf parts costing approximately $100 USD plus $20 for each tracked segment. The OpenSenseRT system is accurate, low-cost, and simple to replicate, enabling movement analysis in labs, clinics, homes, and free-living settings.
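The validation metric reported above can be reproduced in a few lines; the sketch below computes per-joint and average RMSE between IMU-estimated and optical joint-angle trajectories, with synthetic signals standing in for real recordings.

```python
# Illustrative only: per-joint RMSE between IMU-estimated and optical-mocap
# joint angles, the metric used in the validation above. Data are synthetic.
import numpy as np

def joint_angle_rmse(theta_imu, theta_mocap):
    """Both arrays are (n_samples, n_joints) joint-angle trajectories in degrees."""
    return np.sqrt(np.mean((theta_imu - theta_mocap) ** 2, axis=0))

# e.g. three minutes of walking sampled at 30 Hz, 5 lower-limb joint angles
t = np.linspace(0, 180, 180 * 30)
mocap = 20 * np.sin(2 * np.pi * 0.9 * t)[:, None] + np.zeros((1, 5))
imu = mocap + np.random.default_rng(1).normal(0, 4.4, mocap.shape)

print(joint_angle_rmse(imu, mocap))         # roughly 4.4 degrees per joint
print(joint_angle_rmse(imu, mocap).mean())  # average across joints
```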


2020 ◽  
Vol 7 (7) ◽  
pp. 2103
Author(s):  
Yoshihisa Matsunaga ◽  
Ryoichi Nakamura

Background: Abdominal cavity irrigation is a less invasive approach to surgery than gas insufflation. Minimally invasive surgery improves patients’ quality of life; however, it demands greater skill from surgeons. This study therefore aimed to reduce that burden by assisting and automating the hemostatic procedure, a highly frequent task, taking advantage of the clarity of endoscopic images and the ability to continuously observe bleeding points in the liquid. We aimed to construct a method for detecting organs, bleeding sites, and hemostasis regions. Methods: We developed a method to perform real-time detection based on machine learning using laparoscopic videos. Our training dataset was prepared from three experiments in pigs. A linear support vector machine was applied using new color feature descriptors. To verify the accuracy of the classifier, we performed five-fold cross-validation. Classification processing time was measured to verify the real-time property. Furthermore, we visualized the time-series class changes of the surgical field during the hemostatic procedure. Results: The accuracy of our classifier was 98.3%, and the processing time was low enough for real-time use. Furthermore, the completion of the hemostatic procedure could be indicated quantitatively based on changes in the bleeding region due to ablation and in the hemostasis regions due to tissue coagulation. Conclusions: Classification of organs, bleeding sites, and hemostasis regions was useful for assisting and automating the hemostatic procedure in the liquid. Our method can be adapted to further hemostatic procedures.
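A minimal sketch of the classification step as described: a linear SVM trained on simple colour descriptors and scored with five-fold cross-validation. The per-channel histogram descriptor and the synthetic patches below are assumptions, not the authors' exact features.

```python
# Rough sketch, assuming image patches labelled as organ / bleeding site /
# hemostasis region; the colour descriptor here is a simple stand-in.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import LinearSVC

def color_descriptor(patch, bins=16):
    # per-channel intensity histogram, L1-normalised (an assumed simple descriptor)
    hist = [np.histogram(patch[..., c], bins=bins, range=(0, 256))[0] for c in range(3)]
    feat = np.concatenate(hist).astype(float)
    return feat / (feat.sum() + 1e-8)

rng = np.random.default_rng(0)
patches = rng.integers(0, 255, size=(300, 32, 32, 3), dtype=np.uint8)  # placeholder patches
X = np.stack([color_descriptor(p) for p in patches])
y = rng.integers(0, 3, size=300)                                       # three classes

# five-fold cross-validation of the linear SVM, as in the evaluation above
scores = cross_val_score(LinearSVC(max_iter=10000), X, y, cv=5)
print(scores.mean())
```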


2021 ◽  
Vol 2021 ◽  
pp. 1-9
Author(s):  
Fei Tan ◽  
Xiaoqing Xie

Human motion recognition based on inertial sensors is a new research direction in the field of pattern recognition. Inertial sensors placed on the surface of the human body provide signals that undergo preprocessing, feature extraction, and feature selection; finally, human actions are classified and recognized from the extracted features. There are many kinds of swing movements in table tennis, and accurately identifying these movement modes is of great significance for swing analysis. With the development of artificial intelligence technology, human movement recognition has made many breakthroughs in recent years, from machine learning to deep learning and from wearable sensors to visual sensors. However, there is not much work on movement recognition for table tennis, and the methods used are still mainly rooted in traditional machine learning. Therefore, this paper uses an acceleration sensor as a motion recording device for table tennis swings and explores the three-axis acceleration data of four common swing motions. Traditional machine learning algorithms (decision tree, random forest, and support vector machine) are used to classify the swing motions, and a classification algorithm based on the idea of ensemble learning is designed. Experimental results show that the ensemble learning algorithm developed in this paper outperforms the traditional machine learning algorithms, with an average recognition accuracy of 91%.
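A hedged sketch of an ensemble of the kind described: decision tree, random forest, and SVM combined by voting over statistical features of windowed tri-axial acceleration. The feature set, window length, and synthetic swings are placeholders, not the paper's design.

```python
# Assumed setup: each swing is a short window of tri-axial acceleration,
# summarised by simple statistics and classified by a voting ensemble.
import numpy as np
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

def window_features(acc):
    """acc: (n_samples, 3) accelerations of one swing; simple statistical features."""
    return np.concatenate([acc.mean(0), acc.std(0), acc.min(0), acc.max(0)])

rng = np.random.default_rng(0)
swings = rng.normal(size=(400, 100, 3))          # 400 swings, 100 samples each (synthetic)
X = np.stack([window_features(s) for s in swings])
y = rng.integers(0, 4, 400)                      # four common swing motions

ensemble = VotingClassifier([
    ("dt", DecisionTreeClassifier(max_depth=8)),
    ("rf", RandomForestClassifier(n_estimators=100)),
    ("svm", SVC(kernel="rbf", probability=True)),
], voting="soft")

print(cross_val_score(ensemble, X, y, cv=5).mean())
```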


1999 ◽  
Vol 8 (2) ◽  
pp. 187-203 ◽  
Author(s):  
Tom Molet ◽  
Ronan Boulic ◽  
Daniel Thalmann

Motion-capture techniques are rarely based on orientation measurements, for two main reasons: (1) optical motion-capture systems are designed to track object positions rather than orientations (which can be deduced from several trackers), and (2) well-known animation techniques, like inverse kinematics or geometric algorithms, require position targets constantly but orientation inputs only occasionally. We propose a complete human motion-capture technique based essentially on orientation measurements. Position measurement is used only to recover the global position of the performer. This method allows fast tracking of human gestures for interactive applications as well as high-rate recording. Several motion-capture optimizations, including the multijoint technique, improve posture realism. This work is well suited to magnetic-based systems, which rely more on orientation registration (in our environment) than on position measurements, which require difficult system calibration.
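The core idea, recovering each joint's local rotation from the measured orientations of its adjacent segments while a single position reading places the performer in the world, can be sketched as follows; this is not the authors' implementation, and the segment names are illustrative.

```python
# Orientation-driven posture sketch: joint rotation from two segment orientations.
import numpy as np
from scipy.spatial.transform import Rotation as R

def joint_rotation(parent_world, child_world):
    """Local joint rotation from the measured world orientations of two segments."""
    return parent_world.inv() * child_world

# Example: thigh and shank orientations as measured by two orientation trackers
thigh = R.from_euler("xyz", [10, 0, 5], degrees=True)
shank = R.from_euler("xyz", [55, 0, 5], degrees=True)
knee = joint_rotation(thigh, shank)
print(knee.as_euler("xyz", degrees=True))   # ~45 degrees of knee flexion

root_position = np.array([0.1, 0.9, 0.0])   # the only position measurement used
```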


Author(s):  
Zhi Zhang ◽  
Dagang Wang ◽  
Jianxiu Qiu ◽  
Jinxin Zhu ◽  
Tingli Wang

Abstract: The Global Precipitation Measurement (GPM) mission provides satellite precipitation products with unprecedented spatio-temporal resolution and spatial coverage. However, its near-real-time (NRT) product still suffers from low accuracy. This study aims to improve the early run of the Integrated Multi-satellitE Retrievals for GPM (IMERG) by using four machine learning approaches, i.e., support vector machine (SVM), random forest (RF), artificial neural network (ANN), and Extreme Gradient Boosting (XGB). Cloud properties are selected as predictors in addition to the original IMERG estimate in these approaches. All four approaches show similar improvement, with a 53%-60% reduction in root-mean-square error (RMSE) compared with the original IMERG in a humid area, the Dongjiang River Basin (DJR) in southeastern China. The improvements are even greater in a semi-arid area, the Fenhe River Basin (FHR) in central China, where the RMSE reduction ranges from 63% to 66%. The products generated by the machine learning methods perform similarly to, or even outperform, the final run of IMERG. Feature importance analysis, a technique for evaluating input features based on how useful they are in predicting a target variable, indicates that cloud height and brightness temperature are the most useful information for improving satellite precipitation products, followed by atmospheric reflectivity and surface temperature. This study shows that a more accurate NRT precipitation product can be produced by combining machine learning approaches with cloud information, which is important for hydrological applications that require NRT precipitation information, such as flood monitoring.
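A minimal sketch of the assumed workflow: a random forest (one of the four approaches named above) corrects the near-real-time IMERG estimate using cloud properties as additional predictors, and its feature importances are then inspected. Variable names and the synthetic data are placeholders.

```python
# Assumed correction workflow: NRT IMERG + cloud properties -> gauge-like target.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5000
df = pd.DataFrame({
    "imerg_early": rng.gamma(2, 2, n),        # original NRT estimate
    "cloud_top_height": rng.normal(8, 2, n),
    "brightness_temp": rng.normal(230, 15, n),
    "reflectivity": rng.normal(20, 5, n),
    "surface_temp": rng.normal(290, 8, n),
})
gauge = df["imerg_early"] * 0.8 + 0.3 * df["cloud_top_height"] + rng.normal(0, 1, n)

X_tr, X_te, y_tr, y_te = train_test_split(df, gauge, test_size=0.2, random_state=0)
rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)

rmse_raw = np.sqrt(mean_squared_error(y_te, X_te["imerg_early"]))
rmse_ml = np.sqrt(mean_squared_error(y_te, rf.predict(X_te)))
print(f"RMSE original IMERG: {rmse_raw:.2f}, corrected: {rmse_ml:.2f}")
print(dict(zip(df.columns, rf.feature_importances_)))   # feature importance analysis
```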


2021 ◽  
Author(s):  
S. H. Al Gharbi ◽  
A. A. Al-Majed ◽  
A. Abdulraheem ◽  
S. Patil ◽  
S. M. Elkatatny

Abstract: Due to the high demand for energy, oil and gas companies have started to drill wells in remote areas and unconventional environments. This has raised the complexity of drilling operations, which were already challenging and complex. To adapt, drilling companies expanded their use of the real-time operation center (RTOC) concept, in which real-time drilling data are transmitted from remote sites to company headquarters. In an RTOC, groups of subject-matter experts monitor drilling operations live and provide real-time advice to improve them. With the increase in drilling operations, processing the volume of generated data is beyond human capability, limiting the RTOC’s impact on certain components of drilling operations. To overcome this limitation, artificial intelligence and machine learning (AI/ML) technologies were introduced to monitor and analyze the real-time drilling data, discover hidden patterns, and provide fast decision-support responses. AI/ML technologies are data-driven, and their output quality depends on the quality of the input data: if the input data are good, the generated output will be good; if not, it will be poor. Unfortunately, due to the harsh environments of drilling sites and the transmission setups, not all of the drilling data are good, which negatively affects the AI/ML results. The objective of this paper is to utilize AI/ML technologies to improve the quality of real-time drilling data. A large real-time drilling dataset, consisting of over 150,000 raw data points, was fed into Artificial Neural Network (ANN), Support Vector Machine (SVM), and Decision Tree (DT) models. The models were trained on data points labelled as valid or invalid. Confusion matrices were used to evaluate the different AI/ML models, including different internal architectures. Despite its slower training, the ANN achieved the best result, with an accuracy of 78%, compared to 73% and 41% for DT and SVM, respectively. The paper concludes by presenting a process for using AI technology to improve real-time drilling data quality. To the authors’ knowledge, based on the literature in the public domain, this paper is one of the first to compare multiple AI/ML techniques for quality improvement of real-time drilling data. The paper provides a guide for improving the quality of real-time drilling data.
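The model comparison described above might look roughly like the following sketch, in which ANN, SVM, and decision-tree classifiers are trained to flag invalid records and evaluated with confusion matrices; the features, labels, and sample size here are synthetic placeholders (the study used more than 150,000 real data points).

```python
# Sketch only: valid/invalid classification of real-time drilling records.
import numpy as np
from sklearn.metrics import accuracy_score, confusion_matrix
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
n = 20_000                                    # subsampled for the sketch
X = rng.normal(size=(n, 8))                   # e.g. WOB, RPM, torque, flow rate, ...
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(0, 1, n) > 0).astype(int)  # valid / not valid

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

models = {
    "ANN": make_pipeline(StandardScaler(), MLPClassifier((32, 16), max_iter=300)),
    "SVM": make_pipeline(StandardScaler(), SVC()),
    "DT": DecisionTreeClassifier(max_depth=10),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    pred = model.predict(X_te)
    print(name, accuracy_score(y_te, pred))
    print(confusion_matrix(y_te, pred))
```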


Diagnostics ◽  
2020 ◽  
Vol 10 (6) ◽  
pp. 426
Author(s):  
I. Concepción Aranda-Valera ◽  
Antonio Cuesta-Vargas ◽  
Juan L. Garrido-Castro ◽  
Philip V. Gardiner ◽  
Clementina López-Medina ◽  
...  

Portable inertial measurement units (IMUs) are beginning to be used in human motion analysis. These devices can be useful for the evaluation of spinal mobility in individuals with axial spondyloarthritis (axSpA). The objectives of this study were to assess (a) concurrent criterion validity in individuals with axSpA, by comparing spinal mobility measured by an IMU sensor-based system against optical motion capture as the reference standard; (b) discriminant validity, by comparing mobility with that of healthy volunteers; and (c) construct validity, by comparing mobility results with relevant outcome measures. A total of 70 participants with axSpA and 20 healthy controls were included. Individuals with axSpA completed function and activity questionnaires, and their mobility was measured using conventional metrology for axSpA, an optical motion capture system, and an IMU sensor-based system. The UCOASMI, a metrology index based on measures obtained by motion capture, and the IUCOASMI, the same index using IMU measures, were also calculated. Descriptive and inferential analyses were conducted to show the relationships between outcome measures. There was excellent agreement (ICC > 0.90) between the two systems and a significant correlation between the IUCOASMI and conventional metrology (r = 0.91), activity (r = 0.40), function (r = 0.62), quality of life (r = 0.55), and structural change (r = 0.76). This study demonstrates the validity of an IMU-based system for evaluating spinal mobility in axSpA. Such systems are more feasible than optical motion capture systems, and they could be useful in clinical practice.
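The agreement and correlation statistics reported above can be computed as in the sketch below, which uses pingouin's intraclass correlation and SciPy's Pearson correlation on synthetic stand-in data; the data and the conventional-metrology placeholder are assumptions.

```python
# Agreement sketch: ICC between IMU and optical measurements, plus Pearson r
# against a placeholder conventional-metrology score. All data are synthetic.
import numpy as np
import pandas as pd
import pingouin as pg
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
n = 70
optical = rng.normal(50, 10, n)                 # mobility from optical motion capture
imu = optical + rng.normal(0, 2, n)             # same construct from the IMU system

long = pd.DataFrame({
    "subject": np.tile(np.arange(n), 2),
    "rater": ["optical"] * n + ["imu"] * n,
    "score": np.concatenate([optical, imu]),
})
icc = pg.intraclass_corr(long, targets="subject", raters="rater", ratings="score")
print(icc[["Type", "ICC"]])                     # ICC > 0.90 indicates excellent agreement

conventional = 0.9 * imu + rng.normal(0, 3, n)  # placeholder conventional metrology score
print(pearsonr(imu, conventional))
```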

