Motion Inference Using Sparse Inertial Sensors, Self-Supervised Learning, and a New Dataset of Unscripted Human Motion

Sensors ◽  
2020 ◽  
Vol 20 (21) ◽  
pp. 6330
Author(s):  
Jack H. Geissinger ◽  
Alan T. Asbeck

In recent years, wearable sensors have become common, with possible applications in biomechanical monitoring, sports and fitness training, rehabilitation, assistive devices, or human-computer interaction. Our goal was to achieve accurate kinematics estimates using a small number of sensors. To accomplish this, we introduced a new dataset (the Virginia Tech Natural Motion Dataset) of full-body human motion capture using XSens MVN Link that contains more than 40 h of unscripted daily life motion in the open world. Using this dataset, we conducted self-supervised machine learning to do kinematics inference: we predicted the complete kinematics of the upper body or full body using a reduced set of sensors (3 or 4 for the upper body, 5 or 6 for the full body). We used several sequence-to-sequence (Seq2Seq) and Transformer models for motion inference. We compared the results using four different machine learning models and four different configurations of sensor placements. Our models produced mean angular errors of 10–15 degrees for both the upper body and full body, as well as worst-case errors of less than 30 degrees. The dataset and our machine learning code are freely available.
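As a toy illustration of the sparse-to-full inference problem (not the paper's Seq2Seq or Transformer models, which operate on sequences), the sketch below fits a per-frame ridge regression from a reduced set of simulated sensor orientations to a full-body pose. The latent pose generator, dimensions, and noise level are all hypothetical.

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

# Synthetic stand-in data: 6 sparse sensors x 3 orientation angles -> 22 joints x 3 angles.
n_frames, n_sparse, n_full = 2000, 6 * 3, 22 * 3
latent = rng.normal(size=(n_frames, 8))            # shared hidden "pose state"
W_sparse = rng.normal(size=(8, n_sparse))
W_full = rng.normal(size=(8, n_full))
X = latent @ W_sparse + 0.1 * rng.normal(size=(n_frames, n_sparse))  # sparse readings
Y = latent @ W_full                                                  # full-body pose

# Train a per-frame regressor from sparse readings to the full pose, then
# evaluate mean absolute error on held-out frames.
model = Ridge(alpha=1.0).fit(X[:1500], Y[:1500])
pred = model.predict(X[1500:])
mean_abs_err = np.abs(pred - Y[1500:]).mean()
print(f"mean absolute joint-angle error: {mean_abs_err:.3f}")
```

A sequence model replaces the per-frame regressor with one that also conditions on past frames; the input/output shapes stay the same.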

Sensors ◽  
2020 ◽  
Vol 20 (6) ◽  
pp. 1557 ◽  
Author(s):  
Ilaria Conforti ◽  
Ilaria Mileti ◽  
Zaccaria Del Prete ◽  
Eduardo Palermo

Ergonomics evaluation through real-time measurement of biomechanical parameters has great potential to reduce non-fatal occupational injuries, such as work-related musculoskeletal disorders. Maintaining a correct posture avoids high stress on the back and the lower extremities, while an incorrect posture increases spinal stress. Here, we propose a solution for the recognition of postural patterns through wearable sensors and machine-learning algorithms fed with kinematic data. Twenty-six healthy subjects equipped with eight wireless inertial measurement units (IMUs) performed manual material handling tasks, such as lifting and releasing small loads, with two postural patterns: correct and incorrect. Kinematic parameters, such as the range of motion of the lower-limb and lumbosacral joints, along with the displacement of the trunk with respect to the pelvis, were estimated from IMU measurements through a biomechanical model. Statistical differences were found for all kinematic parameters between the correct and the incorrect postures (p < 0.01). Moreover, as the weight of the load in the lifting task increased, changes in hip and trunk kinematics were observed (p < 0.01). To automatically identify the two postures, a supervised machine-learning algorithm, a support vector machine, was trained, and an accuracy of 99.4% (specificity of 100%) was reached by using the measurements of all kinematic parameters as features. Meanwhile, an accuracy of 76.9% (specificity of 76.9%) was reached by using only the kinematic parameters related to the trunk segment.
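A minimal sketch of the classification step, assuming entirely synthetic kinematic features (the feature means, spreads, and subject counts below are invented, not the study's data): an RBF-kernel support vector machine separates correct from incorrect lifts.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)

# Hypothetical per-lift kinematic features: hip ROM, knee ROM (deg),
# and trunk displacement w.r.t. the pelvis (deg).
def simulate_lifts(n, correct):
    hip_rom = rng.normal(90 if correct else 45, 8, n)     # deep squat vs stoop
    knee_rom = rng.normal(85 if correct else 30, 8, n)
    trunk_disp = rng.normal(10 if correct else 40, 5, n)  # trunk flexion
    return np.column_stack([hip_rom, knee_rom, trunk_disp])

X = np.vstack([simulate_lifts(100, True), simulate_lifts(100, False)])
y = np.array([1] * 100 + [0] * 100)  # 1 = correct posture
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                          random_state=0, stratify=y)

clf = SVC(kernel="rbf", gamma="scale").fit(X_tr, y_tr)
acc = clf.score(X_te, y_te)
print(f"posture classification accuracy: {acc:.3f}")
```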



Proceedings ◽  
2018 ◽  
Vol 2 (19) ◽  
pp. 1238 ◽  
Author(s):  
Irvin López-Nava ◽  
Angélica Muñoz-Meléndez

Action recognition is important for various applications, such as ambient intelligence, smart devices, and healthcare. Automatic recognition of human actions in daily living environments, mainly using wearable sensors, is still an open research problem in the field of pervasive computing. This research focuses on extracting a set of features related to human motion, in particular the motion of the upper and lower limbs, in order to recognize actions in daily living environments using time-series of joint orientations. Ten actions were performed by five test subjects in their homes: cooking, doing housework, eating, grooming, mouth care, ascending stairs, descending stairs, sitting, standing, and walking. The joint angles of the right upper limb and the left lower limb were estimated using information from five wearable inertial sensors placed on the back, right upper arm, right forearm, left thigh, and left leg. The set of features was used to build classifiers using three inference algorithms: Naive Bayes, K-Nearest Neighbours, and AdaBoost. The average F-measure of the three classifiers over the ten actions, using the proposed feature set, was 0.806 (σ = 0.163).
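A sketch of the three-classifier comparison, with the averaging done via the macro F-measure. The feature matrix below is a synthetic stand-in generated by scikit-learn, not the study's joint-orientation features; dimensions and sample counts are invented.

```python
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.ensemble import AdaBoostClassifier
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split
from sklearn.datasets import make_classification

# Synthetic stand-in for joint-orientation features of ten daily-living actions.
X, y = make_classification(n_samples=600, n_features=20, n_informative=12,
                           n_classes=10, n_clusters_per_class=1, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                          random_state=0, stratify=y)

# The same three inference algorithms used in the study.
scores = {}
for clf in (GaussianNB(), KNeighborsClassifier(5),
            AdaBoostClassifier(random_state=0)):
    pred = clf.fit(X_tr, y_tr).predict(X_te)
    scores[type(clf).__name__] = f1_score(y_te, pred, average="macro")

for name, f in scores.items():
    print(f"{name}: macro F-measure = {f:.3f}")
```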


IoT ◽  
2020 ◽  
Vol 1 (2) ◽  
pp. 360-381
Author(s):  
Matthew T. O. Worsey ◽  
Hugo G. Espinosa ◽  
Jonathan B. Shepherd ◽  
David V. Thiel

Machine learning is a powerful tool for data classification and has been used to classify movement data recorded by wearable inertial sensors in general living and sports. Inertial sensors can provide valuable biofeedback in combat sports such as boxing; however, the use of such technology has not had a global uptake. If simple inertial sensor configurations can be used to automatically classify strike type, then cumbersome tasks such as video labelling can be bypassed and the foundation for automated workload monitoring of combat sport athletes is set. This investigation evaluates the classification performance of six different supervised machine learning models (tuned and untuned) when using two simple inertial sensor configurations (configuration 1, inertial sensors worn on both wrists; configuration 2, inertial sensors worn on both wrists and the third thoracic vertebra [T3]). When trained on one athlete, strike prediction accuracy was good using both configurations (sensor configuration 1 mean overall accuracy: 0.90 ± 0.12; sensor configuration 2 mean overall accuracy: 0.87 ± 0.09). There was no statistically significant difference in prediction accuracy between the two configurations or between tuned and untuned models (p > 0.05). Moreover, there was no statistically significant difference in computational training time between tuned and untuned models (p > 0.05). For sensor configuration 1, a support vector machine (SVM) model with a Gaussian RBF kernel performed the best (accuracy = 0.96); for sensor configuration 2, a multi-layered perceptron neural network (MLP-NN) model performed the best (accuracy = 0.98). Wearable inertial sensors can be used to accurately classify strike type in boxing pad work, meaning that cumbersome tasks such as video and notational analysis can be bypassed. Additionally, automated workload and performance monitoring of athletes throughout a training camp is possible.
Future investigations will evaluate the performance of this algorithm on a greater sample size and test the influence of impact window-size on prediction accuracy. Additionally, supervised machine learning models should be trained on data collected during sparring to see if high accuracy holds in a competition setting. This can help move closer towards automatic scoring in boxing.
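Since the impact window size is flagged as a factor for future work, the numpy sketch below shows one plausible way strike windows might be segmented from a wrist accelerometer stream before classification. The sample rate, threshold, refractory period, and spike shape are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
fs = 100  # Hz, assumed sample rate

# Synthetic wrist accelerometer magnitude: ~1 g baseline with 3 strike impacts.
acc = rng.normal(1.0, 0.1, 10 * fs)
for t in (2.0, 5.0, 8.0):
    i = int(t * fs)
    acc[i:i + 5] += np.linspace(8, 0, 5)  # short high-g spike per strike

def extract_impact_windows(signal, threshold=4.0, half_width=20, refractory=50):
    """Return fixed-size windows centred on supra-threshold impacts.

    The refractory period stops one strike from being detected twice.
    """
    windows, last = [], -refractory
    for i, v in enumerate(signal):
        if v > threshold and i - last >= refractory:
            lo, hi = i - half_width, i + half_width
            if lo >= 0 and hi <= len(signal):
                windows.append(signal[lo:hi])
                last = i
    return np.array(windows)

wins = extract_impact_windows(acc)
print(f"detected {len(wins)} impact windows of length {wins.shape[1]}")
```

Each extracted window (or features computed from it) would then be fed to the strike-type classifier; `half_width` is the knob whose effect on accuracy the authors propose to test.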


2020 ◽  
Vol 16 (6) ◽  
pp. 155014772091156 ◽  
Author(s):  
Asif Iqbal ◽  
Farman Ullah ◽  
Hafeez Anwar ◽  
Ata Ur Rehman ◽  
Kiran Shah ◽  
...  

We propose wearable-sensor-based human physical activity recognition, extended to an Internet-of-Things (IoT) platform built on a web-based application that integrates wearable sensors, smartphones, and activity recognition. To this end, a smartphone collects the data from the wearable sensors and sends it to a server for processing and recognition of the physical activity. We collect a novel data set of 13 physical activities performed both indoors and outdoors. The participants are of both genders, and their number varies per activity. During these activities, the wearable sensors measure various body parameters via accelerometers, gyroscopes, magnetometers, and pressure and temperature sensors. These measurements and their statistics are then represented as feature vectors that are used to train and test supervised machine learning algorithms (classifiers) for activity recognition. On the given data set, we evaluate a number of widely known classifiers, such as random forests and support vector machines, using the WEKA machine learning suite. Using the default settings of these classifiers in WEKA, we attain a highest overall classification accuracy of 90%, a recognition rate that is encouraging, reliable, and effective for use in the proposed platform.
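A minimal sketch of the feature-vector step described above: per-channel summary statistics computed over one window of multi-channel sensor data and concatenated. The window length, channel count, and the particular statistics chosen are assumptions, not the paper's exact feature set.

```python
import numpy as np

rng = np.random.default_rng(3)

# One window of a hypothetical wearable recording:
# 128 samples x 9 channels (3-axis accelerometer, gyroscope, magnetometer).
window = rng.normal(size=(128, 9))

def statistical_features(w):
    """Per-channel summary statistics, concatenated into one feature vector."""
    feats = [w.mean(axis=0), w.std(axis=0), w.min(axis=0),
             w.max(axis=0), np.median(w, axis=0)]
    return np.concatenate(feats)

vec = statistical_features(window)
print(f"feature vector length: {vec.size}")  # 5 statistics x 9 channels = 45
```

One such vector per window, with an activity label, is what a classifier like a random forest is trained on.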


Biosensors ◽  
2021 ◽  
Vol 11 (8) ◽  
pp. 284
Author(s):  
José Antonio Santoyo-Ramón ◽  
Eduardo Casilari ◽  
José Manuel Cano-García

In recent years, the popularity of wearable devices has fostered the investigation of automatic fall detection systems based on the analysis of the signals captured by transportable inertial sensors. Due to the complexity and variety of human movements, the detection algorithms that offer the best performance when discriminating falls from conventional Activities of Daily Living (ADLs) are those built on machine learning and deep learning mechanisms. In this regard, supervised machine learning binary classification methods have been massively employed by the related literature. However, the learning phase of these algorithms requires mobility patterns caused by falls, which are very difficult to obtain in realistic application scenarios. An interesting alternative is offered by One-Class Classifiers (OCCs), which can be exclusively trained and configured with movement traces of a single type (ADLs). In this paper, a systematic study of the performance of various typical OCCs (for diverse sets of input features and hyperparameters) is performed when applied to nine public repositories of falls and ADLs. The results show the potential of these classifiers, which are capable of achieving performance metrics very similar to those of supervised algorithms (with specificity and sensitivity values higher than 95%). However, the study warns of the need to have a wide variety of types of ADLs when training OCCs, since activities with a high degree of mobility can significantly increase the frequency of false alarms (ADLs identified as falls) if not considered in the data subsets used for training.
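The defining trait of the OCC approach is that only ADL traces are seen at training time, with falls flagged as outliers. The sketch below illustrates this with a one-class SVM on two hypothetical features (peak acceleration and post-event stillness); the feature choice, distributions, and hyperparameters are invented, and the study evaluated several OCC types, not only this one.

```python
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(4)

# Hypothetical 2-D features per movement: [peak acceleration (g), stillness score].
adl = rng.normal([1.5, 0.2], 0.3, size=(300, 2))    # daily activities
falls = rng.normal([6.0, 0.9], 0.5, size=(40, 2))   # falls: high peak, long stillness

# Train ONLY on ADLs; no fall data is available at training time.
occ = OneClassSVM(kernel="rbf", gamma="scale", nu=0.05).fit(adl[:250])

adl_pred = occ.predict(adl[250:])   # +1 = inlier (ADL)
fall_pred = occ.predict(falls)      # -1 = outlier (fall)
specificity = (adl_pred == 1).mean()
sensitivity = (fall_pred == -1).mean()
print(f"specificity={specificity:.2f}  sensitivity={sensitivity:.2f}")
```

The study's warning maps directly onto this sketch: if high-mobility ADLs are missing from the training set, they land outside the learned boundary and lower the specificity.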


Author(s):  
Lejla Batina ◽  
Milena Djukanovic ◽  
Annelie Heuser ◽  
Stjepan Picek

Side-channel attacks (SCAs) are powerful attacks based on information obtained from the implementation of cryptographic devices. Profiling side-channel attacks have received a lot of attention in recent years because this type of attack defines the worst-case security assumptions. The SCA community realized that the same approach is actually used in other domains in the form of supervised machine learning. Consequently, some researchers started experimenting with different machine learning techniques and evaluating their effectiveness in the SCA context. More recently, we are witnessing an increase in the use of deep learning techniques in the SCA community, with strong first results in side-channel analyses, even in the presence of countermeasures. In this chapter, we consider the evolution of profiling attacks, and subsequently we discuss the impact they have made in the data preprocessing, feature engineering, and classification phases. We also speculate on future directions and the best-case consequences for the security of small devices.
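To make the "profiling attack as supervised learning" parallel concrete, here is a deliberately simplified sketch: a classifier is profiled on a device with known intermediates, then attack-phase traces are combined by summing per-class log-likelihoods. The single-point Hamming-weight leakage model, noise level, and trace counts are toy assumptions; real attacks use many trace samples and target key-dependent intermediates.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(5)

HW = np.array([bin(v).count("1") for v in range(256)])  # Hamming-weight table

def leak(values, noise=1.0):
    """One-point 'trace': Hamming-weight leakage of an intermediate plus noise."""
    return HW[values] + rng.normal(0, noise, size=values.shape)

# Profiling phase: on a device with known intermediates, learn the leakage model.
profile_vals = rng.integers(0, 256, 5000)
X_prof = leak(profile_vals).reshape(-1, 1)
clf = LogisticRegression(max_iter=1000).fit(X_prof, HW[profile_vals])

# Attack phase: the target intermediate is fixed (here value 42, Hamming weight 3);
# sum log-likelihoods over many traces to recover its leakage class.
X_att = leak(np.full(200, 42)).reshape(-1, 1)
log_lik = np.log(clf.predict_proba(X_att) + 1e-12).sum(axis=0)
best = clf.classes_[log_lik.argmax()]
print(f"recovered Hamming-weight class: {best}")
```

The profiling phase plays the role of supervised training; the log-likelihood accumulation is what distinguishes an attack from plain per-trace classification.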


2010 ◽  
Vol 2 (2) ◽  
Author(s):  
Tao Liu ◽  
Yoshio Inoue ◽  
Kyoko Shibata

In conventional imitation control, optical tracking devices have been widely adopted to capture human motion and control robots in a laboratory environment. Wearable sensors are attracting extensive interest in the development of a lower-cost human-robot control system without constraints from stationary motion analysis devices. We propose an ambulatory human motion analysis system based on small inertial sensors to measure body segment orientations in real time. A new imitation control method was developed and applied to a biped robot using data of human joint angles obtained from a wearable sensor system. An experimental study was carried out to verify the method of synchronous imitation control for a biped robot. By comparing the results obtained from direct imitation control with an improved method based on a training algorithm, which includes a personal motion pattern, we found that the accuracy of imitation control was markedly improved: the average errors of the x-, y-, and z-axis displacements relative to leg length were 12%, 8%, and 4%, respectively. Experimental results support the feasibility of the proposed control method.
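The contrast between direct imitation and a trained mapping can be sketched as follows. The affine per-joint calibration below is only a stand-in for the paper's personal-motion-pattern training, and the angle trajectory and robot target are synthetic.

```python
import numpy as np

rng = np.random.default_rng(6)

# Hypothetical human knee-angle stream (deg) from a wearable IMU.
t = np.linspace(0, 4, 400)
human = 30 + 25 * np.sin(2 * np.pi * t) + rng.normal(0, 1, t.size)

# Direct imitation: command the robot joint with the raw human angle.
direct_cmd = human

# "Trained" imitation: fit a per-joint gain and offset to the robot's achievable
# range, a simplified stand-in for the personal-motion-pattern training step.
robot_target = 0.6 * human + 5          # what the robot should actually track
A = np.column_stack([human, np.ones_like(human)])
gain, offset = np.linalg.lstsq(A, robot_target, rcond=None)[0]
trained_cmd = gain * human + offset

err_direct = np.abs(direct_cmd - robot_target).mean()
err_trained = np.abs(trained_cmd - robot_target).mean()
print(f"mean error  direct: {err_direct:.2f} deg  trained: {err_trained:.2f} deg")
```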


Sensors ◽  
2018 ◽  
Vol 18 (9) ◽  
pp. 3109 ◽  
Author(s):  
Chi Vu ◽  
Jooyong Kim

Wearable sensors for human physiological monitoring have attracted tremendous interest from researchers in recent years. However, most of the research involved simple trials without any significant analytical algorithms. This study provides a way of recognizing human motion by combining textile stretch sensors, based on single-walled carbon nanotubes (SWCNTs) and spandex fabric (PET/SP), with machine learning algorithms in a realistic application. In this study, the performance of the system is evaluated by the identification rate and accuracy of the standardized motions. This research aims to provide a realistic motion-sensing wearable product without unnecessarily heavy and uncomfortable electronic devices.


2020 ◽  
Vol 2020 ◽  
pp. 1-18 ◽  
Author(s):  
Juri Taborri ◽  
Justin Keogh ◽  
Anton Kos ◽  
Alessandro Santuz ◽  
Anton Umek ◽  
...  

In the last few decades, a number of technological developments have advanced the spread of wearable sensors for the assessment of human motion. These sensors have also been developed to assess athletes’ performance, providing useful guidelines for coaching, as well as for injury prevention. These sensors capture key performance outcomes as well as more detailed kinematic, kinetic, and electromyographic data that provide insight into how the performance was achieved. From this perspective, inertial sensors, force sensors, and electromyography appear to be the most appropriate wearable sensors to use. Several studies have been conducted to verify the feasibility of using wearable sensors for sport applications, using both commercially available and customized sensors. The present study seeks to provide an overview of sport biomechanics applications found in the recent literature using wearable sensors, highlighting information related to the sensors used and the analysis methods. From the literature review results, it appears that inertial sensors are the most widespread sensors for assessing athletes’ performance; however, there also exist applications for force sensors and electromyography in this context. The main sport assessed in the studies was running, even though the range of sports examined was quite broad. The provided overview can be useful for researchers, athletes, and coaches to understand the technologies currently available for sport performance assessment.

