Adaptive Robust Processing of Inertial Sensor Signals

Author(s):  
A.V. Chernodarov ◽  
P.S. Gorshkov ◽  
A.P. Patrikeev
Sensors ◽  
2020 ◽  
Vol 20 (17) ◽  
pp. 4946
Author(s):  
Nicolas Lemieux ◽  
Rita Noumeir

In the domain of human action recognition, existing works mainly focus on RGB, depth, skeleton, and infrared data. While these modalities have the benefit of being non-invasive, they can only be used within limited setups, are prone to issues such as occlusion, and often require substantial computational resources. In this work, we address human action recognition through inertial sensor signals, which have many practical applications in fields such as sports analysis and human-machine interfaces. For that purpose, we propose a new learning framework built around a 1D-CNN architecture, which we validated by achieving very competitive results on the publicly available UTD-MHAD dataset. Moreover, the proposed method provides some answers to two of the greatest challenges currently faced by action recognition algorithms: (1) the recognition of high-level activities and (2) the reduction of their computational cost so that they become accessible to embedded devices. Finally, this paper also investigates the traceability of the features throughout the proposed framework, both in time and duration, as we believe this could play an important role in future work in making the solution more intelligible, hardware-friendly, and accurate.
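As a rough illustration of the core operation in such a 1D-CNN framework, the sketch below slides a bank of 1D convolution kernels along the time axis of a multichannel inertial window (here six channels, as with a 3-axis accelerometer plus a 3-axis gyroscope) and pools the resulting feature maps into a fixed-length descriptor. The window length, kernel size, and filter count are illustrative assumptions, not the paper's architecture.

```python
import numpy as np

def conv1d_multichannel(x, w, b):
    """Valid-mode 1D convolution over the time axis, followed by ReLU.

    x : (channels, time)        multichannel inertial window
    w : (filters, channels, k)  convolution kernels
    b : (filters,)              biases
    returns (filters, time - k + 1) feature maps
    """
    n_f, n_c, k = w.shape
    t_out = x.shape[1] - k + 1
    out = np.empty((n_f, t_out))
    for f in range(n_f):
        for t in range(t_out):
            out[f, t] = np.sum(w[f] * x[:, t:t + k]) + b[f]
    return np.maximum(out, 0.0)  # ReLU non-linearity

rng = np.random.default_rng(0)
window = rng.standard_normal((6, 128))          # 3-axis accel + 3-axis gyro, 128 samples
kernels = rng.standard_normal((16, 6, 9)) * 0.1  # 16 filters, kernel size 9
feats = conv1d_multichannel(window, kernels, np.zeros(16))
pooled = feats.mean(axis=1)                      # global average pooling -> 16-dim descriptor
print(pooled.shape)                              # (16,)
```

In a full model, several such layers would be stacked and the pooled descriptor fed to a classifier head; the small kernel/filter counts above are chosen only to keep the sketch readable.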


2018 ◽  
Vol 11 (5) ◽  
pp. 25-36
Author(s):  
Byeong Jeong Kim ◽  
Jae-il Jung ◽  
Seop Hyeong Park


Sensors ◽  
2021 ◽  
Vol 21 (21) ◽  
pp. 7331
Author(s):  
Patrick Blauberger ◽  
Alexander Horsch ◽  
Martin Lames

This study describes a method for extracting the stride parameter ground contact time (GCT) from inertial sensor signals in sprinting. Five elite athletes were equipped with inertial measurement units (IMU) on their ankles and performed 34 maximal 50 m and 100 m sprints. The GCT of each step was estimated based on features of the recorded IMU signals. Additionally, a photo-electric measurement system covered a 50 m corridor of the track to generate ground truth data. This corridor was placed interchangeably at the first and the last 50 m of the track. In total, 863 of 889 steps (97.08%) were detected correctly. On average, ground truth data were underestimated by 3.55 ms. The root mean square error of GCT was 7.97 ms. Error analyses showed that GCT at the beginning and the end of the sprint was classified with smaller errors. For single runs, the visualization of step-by-step GCT was demonstrated as a new diagnostic instrument for sprint running. The results show the high potential of IMUs to provide the temporal parameter GCT for elite-level athletes.
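The paper derives GCT from features of the recorded IMU signals; those exact features are not reproduced here. As a simplified, hypothetical illustration of the underlying idea, a magnitude threshold can segment contact phases, and each contiguous run of above-threshold samples then maps to a duration in milliseconds. The sampling rate, threshold, and synthetic signal below are all assumptions for the sketch.

```python
import numpy as np

FS = 1000  # Hz, assumed IMU sampling rate

def ground_contact_times(signal, threshold, fs=FS):
    """Return the GCT in ms of each contiguous run where |signal| > threshold."""
    contact = np.abs(signal) > threshold
    edges = np.diff(contact.astype(int))      # +1 at contact start, -1 at contact end
    starts = np.where(edges == 1)[0] + 1
    ends = np.where(edges == -1)[0] + 1
    if contact[0]:                             # run begins at the first sample
        starts = np.r_[0, starts]
    if contact[-1]:                            # run is still open at the last sample
        ends = np.r_[ends, contact.size]
    return (ends - starts) / fs * 1000.0       # samples -> milliseconds

# synthetic ankle signal with two stance phases of 100 ms and 120 ms
sig = np.zeros(2000)
sig[300:400] = 5.0    # 100 samples at 1 kHz -> 100 ms
sig[900:1020] = 5.0   # 120 samples -> 120 ms
gct = ground_contact_times(sig, threshold=2.0)
print(gct)  # [100. 120.]
```

Real ankle-mounted IMU data would of course need filtering and more robust event detection (e.g. distinct foot-strike and toe-off features) before such a segmentation is reliable.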


Sensors ◽  
2020 ◽  
Vol 20 (13) ◽  
pp. 3803
Author(s):  
Abayomi Otebolaku ◽  
Timibloudi Enamamu ◽  
Ali Alfoudi ◽  
Augustine Ikpehai ◽  
Jims Marchang ◽  
...  

With the widespread use of embedded sensing capabilities of mobile devices, there has been unprecedented development of context-aware solutions. This allows the proliferation of various intelligent applications, such as those for remote health and lifestyle monitoring, intelligent personalized services, etc. However, activity context recognition based on multivariate time series signals obtained from mobile devices in unconstrained conditions is naturally prone to class imbalance problems. This means that recognition models tend to predict the classes with the largest number of samples whilst ignoring the classes with the fewest samples, resulting in poor generalization. To address this problem, we propose augmenting the time series signals from inertial sensors with signals from ambient sensing to train deep convolutional neural network (DCNN) models. DCNNs provide the characteristics that capture local dependency and scale invariance of these combined sensor signals. Consequently, we developed a DCNN model using only inertial sensor signals and then developed another model that combined signals from both inertial and ambient sensors, aiming to investigate the class imbalance problem by improving the performance of the recognition model. Evaluation and analysis of the proposed system using data with imbalanced classes show that the system achieved better recognition accuracy when data from inertial sensors are combined with those from ambient sensors, such as environmental noise level and illumination, with an overall improvement of 5.3% accuracy.
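One straightforward way to feed both streams to a DCNN, assuming the ambient signals have been resampled to the inertial sampling rate, is to stack them as additional input channels. The sketch below shows only this fusion step; the channel counts and window length are illustrative assumptions, not the authors' exact pipeline.

```python
import numpy as np

def fuse_windows(inertial, ambient):
    """Stack inertial and ambient streams as extra DCNN input channels.

    inertial : (n_windows, 6, T)  e.g. 3-axis accelerometer + 3-axis gyroscope
    ambient  : (n_windows, 2, T)  e.g. noise level and illumination,
                                  resampled to the inertial rate
    returns  : (n_windows, 8, T)  combined model input
    """
    assert inertial.shape[0] == ambient.shape[0], "same number of windows"
    assert inertial.shape[2] == ambient.shape[2], "same window length"
    return np.concatenate([inertial, ambient], axis=1)

rng = np.random.default_rng(1)
inertial = rng.standard_normal((32, 6, 100))  # 32 windows, 6 channels, 100 samples
ambient = rng.standard_normal((32, 2, 100))   # 2 ambient channels per window
x = fuse_windows(inertial, ambient)
print(x.shape)  # (32, 8, 100)
```

Channel stacking lets the first convolutional layer learn cross-modal filters directly; an alternative design would process each modality in a separate branch and merge the feature maps later.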


2016 ◽  
Author(s):  
Daniel Paczesny ◽  
Adrian Ratajczyk ◽  
Zbigniew M. Wawrzyniak ◽  
Grzegorz Tarapata

Author(s):  
Ioana-Raluca Edu ◽  
Teodor Lucian Grigorie ◽  
Felix-Constantin Adochiei ◽  
Constantin Rotaru ◽  
Nicolae Jula
