Deep Sensing: Inertial and Ambient Sensing for Activity Context Recognition Using Deep Convolutional Neural Networks

Sensors ◽  
2020 ◽  
Vol 20 (13) ◽  
pp. 3803 ◽  
Author(s):  
Abayomi Otebolaku ◽  
Timibloudi Enamamu ◽  
Ali Alfoudi ◽  
Augustine Ikpehai ◽  
Jims Marchang ◽  
...  

With the widespread use of embedded sensing capabilities in mobile devices, there has been unprecedented development of context-aware solutions. This has allowed the proliferation of various intelligent applications, such as those for remote health and lifestyle monitoring, intelligent personalized services, etc. However, activity context recognition based on multivariate time series signals obtained from mobile devices in unconstrained conditions is naturally prone to class imbalance problems. This means that recognition models tend to predict the classes with the most samples whilst ignoring those with the fewest, resulting in poor generalization. To address this problem, we propose augmenting the time series signals from inertial sensors with signals from ambient sensing to train Deep Convolutional Neural Network (DCNN) models. DCNNs capture the local dependency and scale invariance of these combined sensor signals. Consequently, we developed a DCNN model using only inertial sensor signals and then developed another model that combined signals from both inertial and ambient sensors, aiming to address the class imbalance problem by improving the performance of the recognition model. Evaluation and analysis of the proposed system using data with imbalanced classes show that the system achieved better recognition accuracy when data from inertial sensors are combined with those from ambient sensors, such as environmental noise level and illumination, with an overall accuracy improvement of 5.3%.
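
As a rough illustration of the kind of model the abstract describes, the sketch below stacks inertial and ambient readings as channels of a 1D convolutional network. The channel layout, window length, filter sizes, and class count are assumptions for illustration only, not the authors' configuration.

```python
# Hypothetical sketch: a 1D DCNN over windows of combined inertial + ambient channels.
# Channel layout, window length, and class count are assumptions, not the paper's.
import torch
import torch.nn as nn

N_INERTIAL = 6      # assumed: 3-axis accelerometer + 3-axis gyroscope
N_AMBIENT = 2       # assumed: noise level + illumination
WINDOW = 128        # assumed samples per window
N_CLASSES = 6       # assumed activity classes

class ActivityDCNN(nn.Module):
    def __init__(self, in_channels=N_INERTIAL + N_AMBIENT, n_classes=N_CLASSES):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(in_channels, 32, kernel_size=5, padding=2),  # local dependency
            nn.ReLU(),
            nn.MaxPool1d(2),                                       # scale invariance
            nn.Conv1d(32, 64, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.MaxPool1d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * (WINDOW // 4), 128),
            nn.ReLU(),
            nn.Linear(128, n_classes),
        )

    def forward(self, x):           # x: (batch, channels, time)
        return self.classifier(self.features(x))

# Usage: a batch of 8 windows with inertial + ambient channels stacked.
logits = ActivityDCNN()(torch.randn(8, N_INERTIAL + N_AMBIENT, WINDOW))
```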


Sensors ◽  
2021 ◽  
Vol 21 (6) ◽  
pp. 1950
Author(s):  
David Gualda ◽  
María Carmen Pérez-Rubio ◽  
Jesús Ureña ◽  
Sergio Pérez-Bachiller ◽  
José Manuel Villadangos ◽  
...  

Indoor positioning remains a challenge and, despite much research and development carried out in the last decade, there is still no standard comparable to the Global Navigation Satellite Systems (GNSS) used outdoors. This paper presents an indoor positioning system called LOCATE-US with adjustable granularity for use with commercial mobile devices, such as smartphones or tablets. LOCATE-US is privacy-oriented and allows every device to compute its own position by fusing ultrasonic measurements, inertial sensor measurements, and map information. Ultrasonic Local Positioning Systems (U-LPS) based on encoded signals are placed in critical zones that require an accuracy below a few decimeters to correct the accumulated drift errors of the inertial measurements. These systems are well suited to working at room level, as walls confine the acoustic waves inside. To avoid audible artifacts, the U-LPS emission is set at 41.67 kHz, and an ultrasonic acquisition module with reduced dimensions is attached to the mobile device through the USB port to capture signals. Processing in the mobile device involves an improved Time Differences of Arrival (TDOA) estimation that is fused with the measurements from an external inertial sensor to obtain real-time location and trajectory display at a 10 Hz rate. Graph-matching has also been included, considering available prior knowledge about the navigation scenario. This kind of device is an adequate platform for Location-Based Services (LBS), enabling applications such as augmented reality, guiding applications, or people monitoring and assistance. The system architecture can easily incorporate new sensors in the future, such as UWB, RFID, or others.
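
The TDOA step can be pictured with the hedged sketch below, which solves a 2D multilateration problem by nonlinear least squares. The beacon layout, speed of sound, and solver choice are assumptions; this is not the LOCATE-US implementation.

```python
# Minimal sketch of TDOA-based 2D multilateration, not the LOCATE-US implementation.
# Beacon layout, speed of sound, and the least-squares solver are assumptions.
import numpy as np
from scipy.optimize import least_squares

C = 343.0                               # speed of sound (m/s), assumed constant
beacons = np.array([[0.0, 0.0],         # hypothetical U-LPS beacon positions (m)
                    [4.0, 0.0],
                    [4.0, 3.0],
                    [0.0, 3.0]])

def tdoa_residuals(p, tdoas, ref=0):
    """Residuals between measured and predicted range differences w.r.t. beacon `ref`."""
    ranges = np.linalg.norm(beacons - p, axis=1)
    predicted = np.delete(ranges - ranges[ref], ref)
    return predicted - C * tdoas

# Simulate TDOAs for a receiver at (1.5, 1.0) m and solve from a coarse initial guess.
true_pos = np.array([1.5, 1.0])
r = np.linalg.norm(beacons - true_pos, axis=1)
tdoas = np.delete(r - r[0], 0) / C
est = least_squares(tdoa_residuals, x0=np.array([2.0, 1.5]), args=(tdoas,))
print("estimated position:", est.x)
```

In a full system, an estimate like this would then be fused with the inertial dead-reckoning stream and map constraints rather than used on its own.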


2021 ◽  
Vol 2021 ◽  
pp. 1-12
Author(s):  
Zhesen Chu ◽  
Min Li

This article analyzes methods for reading data from inertial sensors. We introduce how to create a 3D scene and a 3D human body model and how to drive the model with inertial sensors, capturing lower-limb movement with a small number of sensor nodes. The main problem to be solved in continuous human motion recognition with wearable inertial sensors is the modeling of time series, so this paper chooses the LSTM network, which can handle time series, as the main framework. To reduce the vanishing and exploding gradient problems in deep LSTM networks, the network structure is adjusted based on the residual learning idea. A data acquisition method using a single inertial sensor fixed to the bottom of a badminton racket is proposed, together with a window segmentation method that combines a sliding window and an action window over the real-time motion data stream. We extracted features from the intercepted motion data and performed dimensionality reduction. An improved Deep Residual LSTM model is designed to identify six common swing movements. The first-level recognition algorithm uses the C4.5 decision tree to recognize the athlete's grip style, and the second-level recognition algorithm uses the random forest algorithm to recognize the swing movement. Simulation experiments confirmed that the proposed improved Deep Residual LSTM algorithm achieves an accuracy of over 90.0% for the recognition of the six common swing movements.
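
To make the residual-learning idea concrete, the sketch below adds an identity shortcut around each LSTM layer of a small classifier. Layer sizes, window length, and class labels are illustrative assumptions rather than the paper's improved Deep Residual LSTM.

```python
# Hedged sketch of a residual LSTM stack for windowed swing classification;
# layer sizes, window length, and class labels are illustrative assumptions.
import torch
import torch.nn as nn

class ResidualLSTMBlock(nn.Module):
    """One LSTM layer whose output is added back to its input (identity shortcut)."""
    def __init__(self, hidden):
        super().__init__()
        self.lstm = nn.LSTM(hidden, hidden, batch_first=True)

    def forward(self, x):              # x: (batch, time, hidden)
        out, _ = self.lstm(x)
        return out + x                 # residual connection eases gradient flow

class SwingClassifier(nn.Module):
    def __init__(self, n_features=6, hidden=64, n_blocks=3, n_classes=6):
        super().__init__()
        self.proj = nn.Linear(n_features, hidden)
        self.blocks = nn.Sequential(*[ResidualLSTMBlock(hidden) for _ in range(n_blocks)])
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):              # x: (batch, time, n_features)
        h = self.blocks(self.proj(x))
        return self.head(h[:, -1])     # classify from the final time step

logits = SwingClassifier()(torch.randn(4, 100, 6))   # 4 windows, 100 samples, 6 IMU channels
```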


Sensors ◽  
2021 ◽  
Vol 21 (21) ◽  
pp. 7331
Author(s):  
Patrick Blauberger ◽  
Alexander Horsch ◽  
Martin Lames

This study describes a method for extracting the stride parameter ground contact time (GCT) from inertial sensor signals in sprinting. Five elite athletes were equipped with inertial measurement units (IMU) on their ankles and performed 34 maximal 50 m and 100 m sprints. The GCT of each step was estimated based on features of the recorded IMU signals. Additionally, a photo-electric measurement system covered a 50 m corridor of the track to generate ground truth data. This corridor was placed interchangeably at the first and the last 50 m of the track. In total, 863 of 889 steps (97.08%) were detected correctly. On average, ground truth data were underestimated by 3.55 ms. The root mean square error of GCT was 7.97 ms. Error analyses showed that GCT at the beginning and the end of the sprint was estimated with smaller errors. For single runs, the visualization of step-by-step GCT was demonstrated as a new diagnostic instrument for sprint running. The results show the high potential of IMUs to provide the temporal parameter GCT for elite-level athletes.
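
A generic way to approximate GCT from an ankle IMU is sketched below using a simple amplitude threshold on the acceleration magnitude; this is an assumed baseline approach, not the feature set used in the study, and the threshold would need tuning per athlete and sensor.

```python
# Generic threshold-based sketch of estimating ground contact time (GCT) from an
# ankle IMU acceleration-magnitude signal; not the feature set used in the study.
import numpy as np

def estimate_gct(acc_magnitude, fs, threshold=3.0 * 9.81):
    """Return GCT estimates (ms) as the durations where |a| stays above a threshold.

    acc_magnitude : 1-D array of acceleration magnitude (m/s^2)
    fs            : sampling rate (Hz)
    threshold     : assumed contact threshold; would need tuning per athlete/sensor
    """
    above = acc_magnitude > threshold
    edges = np.diff(above.astype(int))
    starts = np.flatnonzero(edges == 1) + 1     # rising edges: initial contact
    ends = np.flatnonzero(edges == -1) + 1      # falling edges: toe-off
    ends = ends[ends > starts[0]] if starts.size else ends
    n = min(len(starts), len(ends))
    return (ends[:n] - starts[:n]) / fs * 1000.0

# Example with synthetic data at 1000 Hz: two 100 ms "contacts".
fs = 1000
sig = np.full(2000, 9.81)
sig[300:400] = sig[1300:1400] = 50.0
print(estimate_gct(sig, fs))                    # ~[100. 100.] ms
```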


2021 ◽  
Vol 8 (1) ◽  
Author(s):  
Ive Weygers ◽  
Manon Kok ◽  
Thomas Seel ◽  
Darshan Shah ◽  
Orçun Taylan ◽  
...  

Skin-attached inertial sensors are increasingly used for kinematic analysis. However, their ability to measure outside the lab can only be exploited after correctly aligning the sensor axes with the underlying anatomical axes. Emerging model-based inertial-sensor-to-bone alignment methods relate inertial measurements to a model of the joint to overcome calibration movements and sensor placement assumptions. It is unclear how well such alignment methods can identify the anatomical axes. Any misalignment results in kinematic cross-talk errors, which makes model validation and the interpretation of the resulting kinematic measurements challenging. This study provides an anatomically correct ground-truth reference dataset from dynamic motions on a cadaver. In contrast with existing references, this enables a true model evaluation that overcomes influences from soft-tissue artifacts, orientation errors, and manual palpation errors. The dataset comprises extensive dynamic movements recorded with multimodal measurements, including trajectories of optical and virtual (via computed tomography) anatomical markers, reference kinematics, inertial measurements, transformation matrices, and visualization tools. The dataset can be used either as a ground-truth reference or to advance research in inertial-sensor-to-bone alignment.
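
The cross-talk effect of misalignment can be pictured with the small sketch below: the same pure joint rotation is expressed through a perfect and through a slightly wrong sensor-to-bone rotation. The 10-degree error is an arbitrary assumption for illustration, not a value from the dataset.

```python
# Illustrative sketch of why sensor-to-bone misalignment causes kinematic cross-talk:
# the same joint rotation is expressed through two different alignment matrices.
# The rotation angle below is an arbitrary assumption, not a value from the dataset.
import numpy as np

def rot_z(deg):
    a = np.radians(deg)
    return np.array([[np.cos(a), -np.sin(a), 0.0],
                     [np.sin(a),  np.cos(a), 0.0],
                     [0.0,        0.0,       1.0]])

# Gyroscope reading during pure flexion: rotation about the sensor x-axis only.
omega_sensor = np.array([1.0, 0.0, 0.0])        # rad/s

R_true = np.eye(3)                              # perfectly aligned sensor axes
R_misaligned = rot_z(10.0)                      # 10 deg alignment error (assumed)

print("aligned:   ", R_true @ omega_sensor)       # all angular rate on one axis
print("misaligned:", R_misaligned @ omega_sensor) # part leaks onto a second axis
```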


Sensors ◽  
2021 ◽  
Vol 21 (12) ◽  
pp. 4033
Author(s):  
Peng Ren ◽  
Fatemeh Elyasi ◽  
Roberto Manduchi

Pedestrian tracking systems implemented in regular smartphones may provide a convenient mechanism for wayfinding and backtracking for people who are blind. However, virtually all existing studies only considered sighted participants, whose gait pattern may be different from that of blind walkers using a long cane or a dog guide. In this contribution, we present a comparative assessment of several algorithms using inertial sensors for pedestrian tracking, as applied to data from WeAllWalk, the only published inertial sensor dataset collected indoors from blind walkers. We consider two situations of interest. In the first situation, a map of the building is not available, in which case we assume that users walk in a network of corridors intersecting at 45° or 90°. We propose a new two-stage turn detector that, combined with an LSTM-based step counter, can robustly reconstruct the path traversed. We compare this with RoNIN, a state-of-the-art algorithm based on deep learning. In the second situation, a map is available, which provides a strong prior on the possible trajectories. For this situation, we experiment with particle filtering, with an additional clustering stage based on mean shift. Our results highlight the importance of training and testing inertial odometry systems for assisted navigation with data from blind walkers.
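
To make the corridor-network assumption concrete, here is a minimal dead-reckoning sketch that combines step events with turns snapped to 45-degree multiples. The step length and event list are illustrative assumptions; this is neither the paper's two-stage detector nor the RoNIN baseline.

```python
# Minimal sketch of corridor-constrained dead reckoning: steps plus turns quantized
# to 45-degree multiples. Step length and the event list are illustrative assumptions;
# this is not the two-stage detector or the RoNIN baseline from the paper.
import numpy as np

STEP_LENGTH = 0.7           # assumed average stride (m)

def reconstruct_path(events, heading_deg=0.0):
    """events: sequence of ('step',) or ('turn', angle_deg) tuples."""
    pos = np.zeros(2)
    path = [pos.copy()]
    for ev in events:
        if ev[0] == 'turn':
            # snap the detected turn to the nearest multiple of 45 degrees
            heading_deg += round(ev[1] / 45.0) * 45.0
        else:
            h = np.radians(heading_deg)
            pos = pos + STEP_LENGTH * np.array([np.cos(h), np.sin(h)])
            path.append(pos.copy())
    return np.array(path)

walk = [('step',)] * 5 + [('turn', 93.0)] + [('step',)] * 3   # noisy ~90 deg turn
print(reconstruct_path(walk)[-1])                             # end point of the path
```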


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Andrew P. Creagh ◽  
Florian Lipsmeier ◽  
Michael Lindemann ◽  
Maarten De Vos

The emergence of digital technologies such as smartphones in healthcare applications has demonstrated the possibility of developing rich, continuous, and objective measures of multiple sclerosis (MS) disability that can be administered remotely and out-of-clinic. Deep Convolutional Neural Networks (DCNN) may capture a richer representation of healthy and MS-related ambulatory characteristics from raw smartphone-based inertial sensor data than standard feature-based methodologies. To overcome the typical limitations associated with remotely generated health data, such as low subject numbers, sparsity, and heterogeneous data, a transfer learning (TL) model built from similar large open-source datasets was proposed. Our TL framework leveraged the ambulatory information learned on human activity recognition (HAR) tasks collected from wearable smartphone sensor data. It was demonstrated that fine-tuning TL DCNN HAR models towards MS disease recognition tasks outperformed previous Support Vector Machine (SVM) feature-based methods, as well as DCNN models trained end-to-end, by upwards of 8–15%. A lack of transparency in "black-box" deep networks remains one of the largest stumbling blocks to the wider acceptance of deep learning for clinical applications. Ensuing work therefore aimed to visualise DCNN decisions attributed by relevance heatmaps using Layer-Wise Relevance Propagation (LRP). Through the LRP framework, the patterns captured from smartphone-based inertial sensor data that were reflective of those who are healthy versus people with MS (PwMS) could begin to be established and understood. Interpretations suggested that cadence-based measures, gait speed, and ambulation-related signal perturbations were distinct characteristics that distinguished MS disability from healthy participants. Robust and interpretable outcomes, generated from high-frequency out-of-clinic assessments, could greatly augment the current in-clinic assessment picture for PwMS, inform better disease management techniques, and enable the development of better therapeutic interventions.
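
The transfer-learning step can be sketched as below: reuse a convolutional feature extractor pretrained on a HAR task and fine-tune only a new classification head for a two-class (healthy vs. PwMS) problem. The architecture, checkpoint file name, and layer sizes are assumptions, not the study's exact model.

```python
# Hedged sketch of the transfer-learning step: reuse convolutional layers pretrained
# on a HAR task and fine-tune a new head for a two-class (healthy vs. PwMS) problem.
# The architecture and checkpoint name are assumptions, not the study's exact model.
import torch
import torch.nn as nn

har_backbone = nn.Sequential(               # stand-in for a HAR-pretrained feature extractor
    nn.Conv1d(6, 32, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool1d(2),
    nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(), nn.AdaptiveAvgPool1d(1),
    nn.Flatten(),
)
# In practice the backbone weights would be loaded from the HAR pretraining run, e.g.:
# har_backbone.load_state_dict(torch.load("har_pretrained.pt"))   # hypothetical file

for p in har_backbone.parameters():         # freeze pretrained layers
    p.requires_grad = False

model = nn.Sequential(har_backbone, nn.Linear(64, 2))   # new task-specific head

optimizer = torch.optim.Adam(
    (p for p in model.parameters() if p.requires_grad), lr=1e-3
)
loss = nn.CrossEntropyLoss()(model(torch.randn(8, 6, 128)), torch.randint(0, 2, (8,)))
loss.backward()
optimizer.step()                            # only the new head is updated
```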


Sensors ◽  
2021 ◽  
Vol 21 (15) ◽  
pp. 5167
Author(s):  
Nicky Baker ◽  
Claire Gough ◽  
Susan J. Gordon

Compared to laboratory equipment, inertial sensors are inexpensive and portable, permitting postural sway and balance to be measured in any setting. This systematic review investigated the inter-sensor and test-retest reliability, as well as the concurrent and discriminant validity, of inertial sensors for measuring static and dynamic balance in healthy adults. Medline, PubMed, Embase, Scopus, CINAHL, and Web of Science were searched to January 2021. Nineteen studies met the inclusion criteria. Meta-analysis was possible for reliability studies only, and it found that inertial sensors are reliable for measuring static standing with eyes open. A synthesis of the included studies shows moderate to good reliability for dynamic balance. Concurrent validity is moderate for both static and dynamic balance. Sensors discriminate older from younger adults by the amplitude of mediolateral sway, gait velocity, step length, and turn speed. Fallers are discriminated from non-fallers by sensor measures during walking, stepping, and sit-to-stand. The accuracy of this discrimination could not be determined conclusively. Using inertial sensors to measure postural sway in healthy adults provides real-time data collected in the natural environment and enables discrimination between fallers and non-fallers. The ability of inertial sensors to identify differences in postural sway components related to altered performance in clinical tests can inform targeted interventions for the prevention of falls and near falls.
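
One common way to summarise mediolateral sway amplitude from a trunk-worn accelerometer is the RMS of the ML axis after removing the slow postural component; the sketch below is an illustrative assumption for that single metric, not a method endorsed by the review.

```python
# Illustrative sketch only: RMS mediolateral sway amplitude from a trunk-worn
# accelerometer. The review covers many sensor metrics; this specific computation
# is an assumption for illustration.
import numpy as np

def ml_sway_rms(acc_ml, fs, highpass_s=10.0):
    """RMS mediolateral sway after removing the slow (gravity/posture) component."""
    # crude drift removal: subtract a moving average over `highpass_s` seconds
    win = int(highpass_s * fs)
    kernel = np.ones(win) / win
    baseline = np.convolve(acc_ml, kernel, mode='same')
    sway = acc_ml - baseline
    return np.sqrt(np.mean(sway ** 2))

fs = 100
t = np.arange(0, 30, 1 / fs)
acc_ml = 0.05 * np.sin(2 * np.pi * 0.3 * t) + 0.01 * np.random.randn(t.size)
print(f"ML sway RMS: {ml_sway_rms(acc_ml, fs):.4f} m/s^2")
```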

