Artificial Intelligence in Psychomotor Learning: Modeling Human Motion from Inertial Sensor Data

2019 ◽  
Vol 28 (04) ◽  
pp. 1940006 ◽  
Author(s):  
Olga C. Santos

Recent trends in educational technology focus on designing systems that can support students while learning complex psychomotor skills, such as those required when practicing sports and martial arts, dancing or playing a musical instrument. In this context, artificial intelligence can be key to personalizing the development of these psychomotor skills by enabling the provision of effective feedback when the instructor is not present, or by scaling up to a larger pool of students the feedback that an instructor would typically provide one-on-one. This paper presents the modeling of human motion data gathered with inertial sensors, aimed at offering personalized support to students when learning complex psychomotor skills. In particular, when comparing learner data with those of an expert during the psychomotor learning process, artificial intelligence algorithms make it possible to: (i) recognize specific motion learning units and (ii) assess learning performance in a motion unit. However, this field still seems to be emerging: a systematic review returned hardly any results on modeling complex human activities measured with inertial sensors using artificial intelligence techniques.

2020 ◽  
Author(s):  
Timo von Marcard

This thesis explores approaches to capture human motions with a small number of sensors. In the first part of this thesis, an approach is presented that reconstructs the body pose from only six inertial sensors. Instead of relying on pre-recorded motion databases, a global optimization problem is solved to maximize the consistency of measurements and model over an entire recording sequence. The second part of this thesis deals with a hybrid approach to fuse visual information from a single hand-held camera with inertial sensor data. First, a discrete optimization problem is solved to automatically associate people detections in the video with inertial sensor data. Then, a global optimization problem is formulated to combine visual and inertial information. The proposed approach enables capturing of multiple interacting people and works even if many more people are visible in the camera image. In addition, systematic inertial sensor errors can be compensated, leading to a substantial in...


10.2196/13961 ◽  
2020 ◽  
Vol 22 (4) ◽  
pp. e13961
Author(s):  
Kim Sarah Sczuka ◽  
Lars Schwickert ◽  
Clemens Becker ◽  
Jochen Klenk

Background Falls are a common health problem, which in the worst cases can lead to death. To develop reliable fall detection algorithms as well as suitable prevention interventions, it is important to understand circumstances and characteristics of real-world fall events. Although falls are common, they are seldom observed, and reports are often biased. Wearable inertial sensors provide an objective approach to capture real-world fall signals. However, it is difficult to directly derive visualization and interpretation of body movements from the fall signals, and corresponding video data is rarely available. Objective The re-enactment method uses available information from inertial sensors to simulate fall events, replicate the data, validate the simulation, and thereby enable a more precise description of the fall event. The aim of this paper is to describe this method and demonstrate the validity of the re-enactment approach. Methods Real-world fall data, measured by inertial sensors attached to the lower back, were selected from the Fall Repository for the Design of Smart and Self-Adaptive Environments Prolonging Independent Living (FARSEEING) database. We focused on well-described fall events such as stumbling to be re-enacted under safe conditions in a laboratory setting. For the purposes of exemplification, we selected the acceleration signal of one fall event to establish a detailed simulation protocol based on identified postures and trunk movement sequences. The subsequent re-enactment experiments were recorded with comparable inertial sensor configurations as well as synchronized video cameras to analyze the movement behavior in detail. The re-enacted sensor signals were then compared with the real-world signals to adapt the protocol and repeat the re-enactment method if necessary. 
The similarity between the simulated and the real-world fall signals was analyzed with a dynamic time warping algorithm, which enables the comparison of two temporal sequences varying in speed and timing. Results A fall example from the FARSEEING database was used to show the feasibility of producing a similar sensor signal with the re-enactment method. Although fall events were heterogeneous concerning chronological sequence and curve progression, it was possible to reproduce a good approximation of the motion of a person’s center of mass during fall events based on the available sensor information. Conclusions Re-enactment is a promising method to understand and visualize the biomechanics of inertial sensor-recorded real-world falls when performed in a suitable setup, especially if video data is not available.
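The dynamic time warping comparison described above can be illustrated with a minimal sketch. This is not the authors' implementation; it is a plain NumPy version of the classic DTW recurrence, showing how a re-enacted signal that matches the real fall's shape but differs in speed still aligns closely:

```python
import numpy as np

def dtw_distance(a, b):
    """Dynamic time warping distance between two 1-D signals.

    Allows comparison of sequences that vary in speed and timing,
    as used to match re-enacted and real-world fall signals.
    """
    n, m = len(a), len(b)
    # cost[i, j] = minimal cumulative cost aligning a[:i] with b[:j]
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j],      # insertion
                                 cost[i, j - 1],      # deletion
                                 cost[i - 1, j - 1])  # match
    return cost[n, m]

# A slowed-down copy of a signal still aligns closely under DTW.
real = np.sin(np.linspace(0, 2 * np.pi, 50))
reenacted = np.sin(np.linspace(0, 2 * np.pi, 80))  # same shape, different speed
print(dtw_distance(real, reenacted) < dtw_distance(real, np.zeros(80)))  # True
```

The quadratic cost matrix makes this sketch suitable only for short signal segments; production use would typically add a warping-window constraint.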


Sensors ◽  
2019 ◽  
Vol 19 (19) ◽  
pp. 4119 ◽  
Author(s):  
Alexander Diete ◽  
Heiner Stuckenschmidt

In the field of pervasive computing, wearable devices have been widely used for recognizing human activities. One important area in this research is the recognition of activities of daily living, where inertial sensors and interaction sensors (like RFID tags with scanners) are especially popular choices as data sources. Using interaction sensors, however, has one drawback: they may not differentiate between a proper interaction and the simple touching of an object. A positive signal from an interaction sensor is not necessarily caused by a performed activity, e.g., when an object is only touched but no interaction occurs afterwards. There are, however, many scenarios, like medicine intake, that rely heavily on correctly recognized activities. In our work, we aim to address this limitation and present a multimodal egocentric activity recognition approach. Our solution relies on object detection that recognizes activity-critical objects in a frame. As it is infeasible to always expect a high-quality camera view, we enrich the vision features with inertial sensor data that monitors the user's arm movement. In this way we try to overcome the drawbacks of each respective sensor. We present our results of combining inertial and video features to recognize human activities in different types of scenarios, where we achieve an F1-measure of up to 79.6%.
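The fusion idea and the F1-measure can be sketched as follows. The abstract does not specify the fusion scheme, so the score-averaging ("late fusion") step and the example scores below are illustrative assumptions; only the F1 computation itself is standard:

```python
import numpy as np

def f1_measure(y_true, y_pred, positive=1):
    """Standard F1-measure: harmonic mean of precision and recall."""
    tp = np.sum((y_pred == positive) & (y_true == positive))
    fp = np.sum((y_pred == positive) & (y_true != positive))
    fn = np.sum((y_pred != positive) & (y_true == positive))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

# Illustrative late fusion: average per-modality confidences, then threshold.
vision_scores = np.array([0.9, 0.2, 0.6, 0.4])    # object-detection confidence
inertial_scores = np.array([0.8, 0.3, 0.7, 0.1])  # arm-movement confidence
fused = (vision_scores + inertial_scores) / 2
y_pred = (fused >= 0.5).astype(int)
y_true = np.array([1, 0, 1, 0])
print(f1_measure(y_true, y_pred))  # 1.0 on this toy data
```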


Author(s):  
Edgar Charry ◽  
Daniel T.H. Lai

The use of inertial sensors to measure human movement has recently gained momentum with the advent of low-cost micro-electro-mechanical systems (MEMS) technology. These sensors comprise accelerometers and gyroscopes, which measure accelerations and angular velocities respectively. Secondary quantities such as displacement can be obtained by integration of these quantities, a method which presents challenging issues due to the problem of accumulative sensor errors. This chapter investigates the spectral evaluation of individual sensor errors and looks at the effectiveness of minimizing these errors using static digital filters. The primary focus is on the derivation of foot displacement data from inertial sensor measurements. The importance of foot, and in particular toe, displacement measurements is evident in the context of tripping and falling, which are serious health concerns for the elderly. The Minimum Toe Clearance (MTC) is an important gait variable for falls-risk prediction and assessment, and is therefore the measurement variable of interest. A brief sketch of current devices employing accelerometers and gyroscopes is presented, highlighting the problems and difficulties reported in the literature in achieving good precision. These have been mainly due to the presence of sensor errors and the error-accumulating process employed in obtaining displacement measurements. The investigation first proceeds to identify the location of these sensor errors in the frequency domain using the Fast Fourier Transform (FFT) on raw inertial sensor data. The frequency content of velocity and displacement measurements obtained from integrating the inertial data using a well-known strap-down method is then explored. These investigations revealed that large sensor errors occur mainly in the low-frequency spectrum, while white noise exists across the whole spectrum.
The efficacy of employing a band-pass filter to remove a large portion of these errors, and the effect on the derived displacements, is then elaborated on. The cross-correlation of the FFT power spectra from a highly accurate optical measurement system and the processed sensor data is used as a metric to evaluate the performance of the band-pass filter at several stages of the processing pipeline. The motivation is that a more fundamental method would require less computational demand and could lead to more efficient implementations in low-power systems with limited resources, so that a portable sensor-based motion measurement system would provide a good degree of measurement accuracy.
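The band-pass idea (remove low-frequency drift and out-of-band noise, keep the gait band) can be sketched with a simple FFT-domain filter. This is not the chapter's actual filter design; the pass band, sampling rate, and synthetic drift signal below are illustrative assumptions:

```python
import numpy as np

def bandpass_fft(signal, fs, f_lo, f_hi):
    """Zero-phase band-pass filter via the FFT: zero out frequency bins
    outside [f_lo, f_hi] to suppress low-frequency sensor drift and
    out-of-band noise before integration."""
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    spectrum[(freqs < f_lo) | (freqs > f_hi)] = 0.0
    return np.fft.irfft(spectrum, n=len(signal))

fs = 100.0                                 # Hz, assumed sampling rate
t = np.arange(0, 10, 1 / fs)               # 10 s of data
gait = np.sin(2 * np.pi * 1.5 * t)         # ~1.5 Hz gait component
drift = 2 * np.sin(2 * np.pi * 0.1 * t)    # slow 0.1 Hz sensor drift
filtered = bandpass_fft(gait + drift, fs, f_lo=0.5, f_hi=10.0)
# The 0.1 Hz drift falls below the pass band and is removed,
# while the 1.5 Hz gait component passes through unchanged.
print(np.max(np.abs(filtered - gait)) < 1e-6)  # True
```

A static IIR/FIR band-pass would behave similarly for streaming data; the FFT version here is chosen only for brevity.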


2011 ◽  
Vol 44 (1) ◽  
pp. 7396-7401
Author(s):  
Takuma Akiduki ◽  
Zhong Zhang ◽  
Takashi Imamura ◽  
Tetsuo Miyake

Sensors ◽  
2021 ◽  
Vol 21 (14) ◽  
pp. 4675
Author(s):  
Daniel Gis ◽  
Nils Büscher ◽  
Christian Haubelt

Due to upcoming higher integration levels of microprocessors, the market for inertial sensors has changed in the last few years. Smart inertial sensors are becoming more and more important. This type of sensor offers the benefit of implementing sensor-processing tasks directly on the sensor hardware. Software development on such sensors is quite challenging. In this article, we propose an approach for using prerecorded sensor data during the development process to test and evaluate the functionality and timing of the sensor firmware in a repeatable and reproducible way on the actual hardware. Our proposed Sensor-in-the-Loop architecture enables the developer to inject sensor data directly into the sensor hardware in real time during the debugging process. As timing becomes more critical in future smart sensor applications, we investigate the timing and jitter behavior of our approach. The implemented approach can inject data from three 3-DOF sensors at 1.6 kHz. Furthermore, the jitter of our proposed sampling method is at least three times lower than when using real sensor data. To prove the statistical significance of our experiments, we use a Gage R&R analysis, extended by an assessment of the confidence intervals of our data.
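The core idea of replacing live hardware reads with prerecorded data can be sketched on the host side. This is not the article's debugger-based injection mechanism (which runs on the sensor hardware itself); the class name and sample format are illustrative assumptions showing why replayed data is repeatable and jitter-free:

```python
class RecordedSensor:
    """Replays prerecorded IMU samples in place of live hardware reads,
    so sensor-processing code can be tested repeatably. A simplified
    host-side analogue of sensor-in-the-loop data injection."""

    def __init__(self, samples, rate_hz):
        self._period = 1.0 / rate_hz
        self._it = iter(enumerate(samples))

    def read(self):
        """Return (timestamp, sample). Timestamps are jitter-free by
        construction: they come from the nominal sample rate, not from
        when the hardware happened to deliver the sample."""
        i, sample = next(self._it)
        return i * self._period, sample

# Three accelerometer samples (ax, ay, az), replayed at 1.6 kHz.
recording = [(0.0, 0.0, 9.81), (0.1, 0.0, 9.80), (0.0, 0.1, 9.79)]
sensor = RecordedSensor(recording, rate_hz=1600)
for _ in range(3):
    t, s = sensor.read()
    print(f"{t * 1e3:.4f} ms -> {s}")
```

Running the same test twice yields bit-identical inputs and timestamps, which is exactly the repeatability property the article exploits for firmware evaluation.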


2020 ◽  
Vol 9 (1) ◽  
pp. 238-246
Author(s):  
Gan Wei Nie ◽  
Nurul Fathiah Ghazali ◽  
Norazman Shahar ◽  
Muhammad Amir As'ari

This paper proposes stair walking detection via a Long Short-Term Memory (LSTM) network to prevent stair fall events by alerting a caregiver for assistance as soon as possible. Tri-axial accelerometer and gyroscope data for five activities of daily living (ADLs), including stair walking, were collected from 20 subjects wearing inertial sensors on the left heel, right heel, chest, left wrist and right wrist. Several parameters, namely window size, sensor deployment, number of hidden cell units and LSTM architecture, were varied to find an optimized LSTM model for stair walking detection. As a result, the best model for detecting stair walking events, achieving 95.6% testing accuracy, is a double-layered LSTM with 250 hidden cell units fed with data from all sensor locations using a window size of 2 seconds. The results also show that a similar detection model fed with single-sensor data still achieves good performance, above 83.2%. It should be possible, therefore, to integrate the proposed detection model into fall prevention, especially among patients or the elderly, to help alert the caregiver when a stair walking event occurs.
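The 2-second windowing step that feeds the LSTM can be sketched as follows. The sampling rate and overlap below are illustrative assumptions (the abstract specifies only the window length), and the channel count stands in for one tri-axial accelerometer plus one tri-axial gyroscope:

```python
import numpy as np

def make_windows(stream, fs, window_s, overlap=0.5):
    """Segment a (samples x channels) sensor stream into fixed-length
    overlapping windows, e.g. the 2 s windows used as LSTM input."""
    win = int(window_s * fs)
    step = max(1, int(win * (1 - overlap)))
    return np.stack([stream[i:i + win]
                     for i in range(0, len(stream) - win + 1, step)])

fs = 50                               # Hz, assumed sampling rate
stream = np.random.randn(fs * 10, 6)  # 10 s of 3-axis accel + 3-axis gyro
windows = make_windows(stream, fs, window_s=2.0)
print(windows.shape)  # (9, 100, 6): nine 2 s windows with 50% overlap
```

Each window (here 100 timesteps x 6 channels) becomes one sequence fed to the LSTM, whose output labels the window as stair walking or not.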


2012 ◽  
Vol 19 (1) ◽  
pp. 141-150 ◽  
Author(s):  
Paweł Pełczyński ◽  
Bartosz Ostrowski ◽  
Dariusz Rzeszotarski

Motion Vector Estimation of a Stereovision Camera with Inertial Sensors

The aim of the presented work was the development of a tracking algorithm for a stereoscopic camera setup equipped with an additional inertial sensor. The input of the algorithm consists of the image sequence and the angular velocity and linear acceleration vectors measured by the inertial sensor. The main assumption of the project was the fusion of data streams from both sources to obtain more accurate ego-motion estimation. An electronic module for recording the inertial sensor data was built. Inertial measurements allowed a coarse estimation of the image motion field, which reduced the search range of standard image-based methods. Continuous tracking of the camera motion has been achieved (including moments of image information loss). Results of the presented study are being implemented in a currently developed obstacle avoidance system for visually impaired pedestrians.
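The way gyroscope data coarsely predicts the image motion field can be sketched with the standard small-angle approximation for a rotating camera. This is not the paper's algorithm; the sign convention, axis assignment, and numbers are illustrative assumptions:

```python
import numpy as np

def predicted_pixel_shift(omega, dt, focal_px):
    """Coarse image-motion prediction from gyroscope data: for a pure
    camera rotation, a small yaw/pitch rotation shifts image features by
    roughly focal_length_px * angle (small-angle approximation). Such a
    prediction narrows the search range of image-based tracking."""
    wx, wy, wz = omega       # rad/s (pitch, yaw, roll rates; assumed axes)
    dx = focal_px * wy * dt  # yaw rotation -> horizontal pixel shift
    dy = focal_px * wx * dt  # pitch rotation -> vertical pixel shift
    return dx, dy

# Camera yawing at 0.2 rad/s, 30 fps, 800 px focal length:
dx, dy = predicted_pixel_shift((0.0, 0.2, 0.0), dt=1 / 30, focal_px=800)
print(round(dx, 2), round(dy, 2))  # 5.33 0.0
```

Searching for image correspondences only in a small neighborhood around the predicted shift, instead of over the whole frame, is what makes tracking survive brief losses of image information.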


2021 ◽  
Vol 2021 ◽  
pp. 1-12
Author(s):  
Zhesen Chu ◽  
Min Li

This article analyzes methods for reading data from inertial sensors. We introduce how to create a 3D scene and a 3D human body model, how to drive the model with inertial sensors, and how to capture the movement of the lower limbs when only a small number of inertial sensor nodes are used. The main problem to be solved in continuous human motion recognition with wearable inertial sensors is the modeling of time series, so this paper chooses an LSTM network, which can handle time series, as the main frame. To reduce the gradient vanishing and gradient explosion problems in deep LSTM networks, the network structure is adjusted based on the residual learning idea. A data acquisition method using a single inertial sensor fixed to the bottom of a badminton racket is proposed, along with a window segmentation method for the real-time motion data stream that combines a sliding window with an action window. Features are extracted from the intercepted motion data and reduced in dimensionality. An improved Deep Residual LSTM model is designed to identify six common swing movements. The first-level recognition algorithm uses the C4.5 decision tree algorithm to recognize the athlete's gripping style, and the second-level recognition algorithm uses the random forest algorithm to recognize the swing movement. Simulation experiments confirmed that the proposed improved Deep Residual LSTM algorithm achieves an accuracy above 90.0% for the recognition of the six common swing movements.
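The action-window part of the segmentation can be sketched with a simple energy threshold: mark samples where short-term signal energy is high (a swing is happening) and cut out each contiguous active run. This is a simplified stand-in for the paper's combined sliding-window/action-window method; the smoothing length and threshold are illustrative assumptions:

```python
import numpy as np

def action_windows(stream, fs, thresh, smooth_s=0.05):
    """Cut individual action windows out of a continuous 1-D motion
    stream by thresholding short-term signal energy, a simplified
    version of action-window segmentation for swing detection."""
    k = max(1, int(smooth_s * fs))
    # Short-term energy: moving average of the squared signal.
    energy = np.convolve(stream ** 2, np.ones(k) / k, mode="same")
    active = energy > thresh
    segments, start = [], None
    for i, a in enumerate(active):
        if a and start is None:
            start = i                       # action begins
        elif not a and start is not None:
            segments.append((start, i))     # action ends
            start = None
    if start is not None:
        segments.append((start, len(active)))
    return segments

fs = 100
stream = np.zeros(300)
stream[50:80] = 2.0    # first swing burst
stream[180:220] = 2.0  # second swing burst
print(action_windows(stream, fs, thresh=0.5))  # two segments, one per burst
```

Each extracted segment would then go through feature extraction, dimensionality reduction, and the two-level classifier described above.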


Author(s):  
Umama Ahmed ◽  
Olcay Sahin ◽  
Mecit Cetin

For the past few years, several studies have focused on identifying a vehicle's trajectory from smartphone data. However, these studies predominantly used GPS coordinate information for that purpose. Considering the known limitations of GPS, such as connectivity issues in urban canyons and underpasses, low localization precision, and high smartphone power consumption while GPS is in use, this paper focuses on developing alternative methods for identifying a vehicle's trajectory at an intersection from sensor data other than GPS, to minimize GPS dependency. In particular, accelerometer and gyroscope data collected with smartphone inertial sensors and speed data collected with an onboard diagnostics device are used to develop algorithms for maneuver identification (i.e., left turn, right turn, and through movement) and trip direction identification at an intersection. In addition, techniques for noise removal and orientation correction of raw inertial sensor data are described. The effectiveness of the method for trajectory identification is assessed with collected field data. Results demonstrate that the developed method is effective in identifying a vehicle's trajectory at an intersection. Overall, this research shows the feasibility of using alternative sensor data for trajectory identification, thus eliminating the need for continuous GPS connectivity.
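The maneuver-identification idea can be sketched by integrating the gyroscope yaw rate into a total heading change over the intersection approach. This is not the paper's algorithm; the sign convention and the 45-degree threshold are illustrative assumptions:

```python
import numpy as np

def classify_maneuver(yaw_rate, dt, turn_thresh_deg=45.0):
    """Classify an intersection maneuver from gyroscope yaw rate by
    integrating it into a total heading change: a large positive change
    indicates a left turn, a large negative change a right turn, and a
    small change a through movement. (Sign convention and threshold are
    illustrative assumptions, not the paper's parameters.)"""
    heading_change = np.degrees(np.sum(yaw_rate) * dt)  # integrate rad/s
    if heading_change > turn_thresh_deg:
        return "left"
    if heading_change < -turn_thresh_deg:
        return "right"
    return "through"

dt = 0.01                                 # 100 Hz gyroscope
left_turn = np.full(500, np.radians(18))  # 18 deg/s for 5 s -> +90 degrees
print(classify_maneuver(left_turn, dt))      # left
print(classify_maneuver(-left_turn, dt))     # right
print(classify_maneuver(np.zeros(500), dt))  # through
```

In practice the raw yaw rate would first pass through the noise-removal and orientation-correction steps the paper describes, so that the integrated heading reflects the vehicle frame rather than the phone's arbitrary orientation.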

