DeblurExpandNet: image motion deblurring network aided by inertial sensor data

Author(s):  
Shuang Zhang ◽  
Ada Zhen ◽  
Robert L. Stevenson
2012 ◽  
Vol 19 (1) ◽  
pp. 141-150 ◽  
Author(s):  
Paweł Pełczyński ◽  
Bartosz Ostrowski ◽  
Dariusz Rzeszotarski

Motion Vector Estimation of a Stereovision Camera with Inertial Sensors

The aim of the presented work was the development of a tracking algorithm for a stereoscopic camera setup equipped with an additional inertial sensor. The input of the algorithm consists of the image sequence together with the angular velocity and linear acceleration vectors measured by the inertial sensor. The main assumption of the project was the fusion of data streams from both sources to obtain more accurate ego-motion estimation. An electronic module for recording the inertial sensor data was built. Inertial measurements allowed a coarse estimation of the image motion field, which reduced the search range needed by standard image-based methods. Continuous tracking of the camera motion was achieved, including during moments of image information loss. Results of the presented study are being implemented in a currently developed obstacle avoidance system for visually impaired pedestrians.
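The coarse inertial prior described above can be sketched briefly. Assuming a pinhole camera model, pure rotational motion between frames, and one common axis convention (the function name, calibration values, and sample numbers below are hypothetical, not from the paper), gyroscope readings predict an approximate pixel displacement, which can then center a much smaller block-matching search window:

```python
import numpy as np

def rotational_flow(point_px, principal_px, focal_px, omega, dt):
    """Predict the image-plane displacement of one pixel caused by a small
    camera rotation omega*dt (rad), using the first-order rotational
    optical-flow equations for a pinhole camera. Translation is ignored,
    which is acceptable for a coarse search-range prior."""
    x = point_px[0] - principal_px[0]
    y = point_px[1] - principal_px[1]
    wx, wy, wz = np.asarray(omega) * dt
    du = (x * y / focal_px) * wx - (focal_px + x**2 / focal_px) * wy + y * wz
    dv = (focal_px + y**2 / focal_px) * wx - (x * y / focal_px) * wy - x * wz
    return du, dv

# Center a small block-matching window on the predicted shift instead of
# scanning a large neighborhood around the previous feature position.
du, dv = rotational_flow((420, 310), (320, 240), focal_px=700.0,
                         omega=(0.02, -0.05, 0.01), dt=1 / 30)
search_center = (420 + du, 310 + dv)
```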


2020 ◽  
Vol 53 (2) ◽  
pp. 15990-15997
Author(s):  
Felix Laufer ◽  
Michael Lorenz ◽  
Bertram Taetz ◽  
Gabriele Bleser

2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Andrew P. Creagh ◽  
Florian Lipsmeier ◽  
Michael Lindemann ◽  
Maarten De Vos

Abstract

The emergence of digital technologies such as smartphones in healthcare applications has demonstrated the possibility of developing rich, continuous, and objective measures of multiple sclerosis (MS) disability that can be administered remotely and out-of-clinic. Deep Convolutional Neural Networks (DCNN) may capture a richer representation of healthy and MS-related ambulatory characteristics from raw smartphone-based inertial sensor data than standard feature-based methodologies. To overcome the typical limitations associated with remotely generated health data, such as low subject numbers, sparsity, and heterogeneous data, a transfer learning (TL) model from similar large open-source datasets was proposed. Our TL framework leveraged the ambulatory information learned on human activity recognition (HAR) tasks collected from wearable smartphone sensor data. It was demonstrated that fine-tuning TL DCNN HAR models towards MS disease recognition tasks outperformed previous Support Vector Machine (SVM) feature-based methods, as well as DCNN models trained end-to-end, by upwards of 8–15%. A lack of transparency of “black-box” deep networks remains one of the largest stumbling blocks to the wider acceptance of deep learning for clinical applications. Ensuing work therefore aimed to visualise DCNN decisions as relevance heatmaps using Layer-Wise Relevance Propagation (LRP). Through the LRP framework, the patterns captured from smartphone-based inertial sensor data that are reflective of healthy subjects versus people with MS (PwMS) could begin to be established and understood. Interpretations suggested that cadence-based measures, gait speed, and ambulation-related signal perturbations were distinct characteristics that distinguished people with MS from healthy participants. Robust and interpretable outcomes, generated from high-frequency out-of-clinic assessments, could greatly augment the current in-clinic assessment picture for PwMS, inform better disease management techniques, and enable the development of better therapeutic interventions.
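As an illustration of the transfer-learning step described above, the following sketch (PyTorch; the backbone architecture, layer sizes, and names are assumptions for illustration, not the authors' exact network) reuses a 1D CNN pre-trained on a HAR task and attaches a fresh classification head for the MS-vs-healthy task, optionally freezing the early convolutional layers:

```python
import torch.nn as nn

class ConvHARBackbone(nn.Module):
    """Hypothetical 1D-CNN feature extractor over raw tri-axial inertial
    windows, standing in for a network pre-trained on a HAR dataset."""
    def __init__(self, in_channels=3, feat_dim=128):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(in_channels, 32, kernel_size=9, padding=4), nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=9, padding=4), nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(64, feat_dim, kernel_size=9, padding=4), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )

    def forward(self, x):              # x: (batch, 3, samples)
        return self.features(x).squeeze(-1)

def build_finetune_model(pretrained, n_classes=2, freeze_early=True):
    """Transfer learning: keep the HAR-learned features, replace the head
    with a fresh classifier for the disease-recognition task."""
    if freeze_early:                   # keep low-level motion filters fixed
        for p in list(pretrained.features.parameters())[:4]:
            p.requires_grad = False
    return nn.Sequential(pretrained, nn.Linear(128, n_classes))
```

Fine-tuning would then proceed with a standard classification loss at a reduced learning rate, so that the ambulatory features learned on HAR data are adapted rather than overwritten.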


2021 ◽  
Vol 185 ◽  
pp. 282-291
Author(s):  
Nizam U. Ahamed ◽  
Kellen T. Krajewski ◽  
Camille C. Johnson ◽  
Adam J. Sterczala ◽  
Julie P. Greeves ◽  
...  

Sensors ◽  
2021 ◽  
Vol 21 (7) ◽  
pp. 2480
Author(s):  
Isidoro Ruiz-García ◽  
Ismael Navarro-Marchal ◽  
Javier Ocaña-Wilhelmi ◽  
Alberto J. Palma ◽  
Pablo J. Gómez-López ◽  
...  

In skiing, it is important to know how the skier accelerates and inclines the skis during a turn in order to avoid injuries and improve technique. The purpose of this pilot study with three participants was to develop and evaluate a compact, wireless, and low-cost system for detecting the inclination and acceleration of skis in the field, based on inertial measurement units (IMU). To that end, a commercial IMU board was placed on each ski behind the skier's boot. Using an attitude and heading reference system (AHRS) algorithm included in the sensor board, the orientation and attitude of the skis (roll, pitch, and yaw) were obtained by IMU sensor data fusion. Results demonstrate that the proposed IMU-based system can provide reliable, low-drift data for up to 11 min of continuous use in the worst case. Inertial angle data from the IMU-based system were compared with data collected by a video-based 3D-kinematic reference system to evaluate its operation in terms of data correlation and system performance. Correlation coefficients between 0.889 (roll) and 0.991 (yaw) were obtained. Mean biases from −1.13° (roll) to 0.44° (yaw) and 95% limits of agreement from 2.87° (yaw) to 6.27° (roll) were calculated for the 1-min trials. Although low mean biases were achieved, some limitations arose in the system's precision for pitch and roll estimation, which could be due to the low sampling rate allowed by the sensor data fusion algorithm and the initial zeroing of the gyroscope.
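The study relies on the AHRS algorithm shipped with the commercial sensor board. As a minimal sketch of the kind of accelerometer/gyroscope fusion involved, the generic complementary filter below (the blending constant and function name are assumptions, not the board's proprietary algorithm) estimates roll and pitch:

```python
import numpy as np

def complementary_roll_pitch(acc, gyro, dt, alpha=0.98):
    """Fuse gyro integration (smooth but drifting) with accelerometer tilt
    (noisy but drift-free). acc, gyro: (N, 3) arrays in m/s^2 and rad/s;
    returns roll and pitch in radians."""
    roll = np.zeros(len(acc))
    pitch = np.zeros(len(acc))
    for k in range(1, len(acc)):
        # Tilt angles from the gravity direction seen by the accelerometer
        acc_roll = np.arctan2(acc[k, 1], acc[k, 2])
        acc_pitch = np.arctan2(-acc[k, 0], np.hypot(acc[k, 1], acc[k, 2]))
        # Blend the gyro-propagated angle with the accelerometer estimate
        roll[k] = alpha * (roll[k - 1] + gyro[k, 0] * dt) + (1 - alpha) * acc_roll
        pitch[k] = alpha * (pitch[k - 1] + gyro[k, 1] * dt) + (1 - alpha) * acc_pitch
    return roll, pitch
```

A higher alpha trusts the gyroscope more between accelerometer corrections; tuning it trades responsiveness against drift.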


2011 ◽  
Vol 467-469 ◽  
pp. 108-113
Author(s):  
Xin Yu Li ◽  
Dong Yi Chen

Accurate tracking for augmented reality applications is a challenging task. Multi-sensor hybrid tracking generally provides more stable results than visual tracking alone. This paper presents a new tightly-coupled hybrid tracking approach that combines a vision-based system with an inertial sensor. Measurement data from the two sources are synchronized based on multi-frequency sampling theory, and a strong tracking filter (STF) is used to smooth the sensor data and estimate position and orientation. By adding a time-varying fading factor that adaptively adjusts the filter's prediction error covariance, the method improves tracking performance for fast-moving targets. Experimental results show the efficiency and robustness of the proposed approach.
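The fading-factor idea can be sketched with a simplified fading-memory Kalman step (the heuristic used to compute the factor below is a common simplification for illustration, not necessarily the paper's exact STF formulation):

```python
import numpy as np

def stf_step(x, P, z, F, H, Q, R):
    """One predict/update step of a fading-memory ('strong tracking') filter.
    A fading factor lam >= 1 inflates the predicted covariance when the
    innovation grows, so the filter weights fresh measurements more heavily
    and keeps up with fast-moving targets."""
    x_pred = F @ x
    innov = z - H @ x_pred                      # measurement residual
    S0 = H @ (F @ P @ F.T + Q) @ H.T + R        # expected innovation covariance
    # Heuristic fading factor: observed vs. expected innovation energy
    lam = max(1.0, float(innov @ innov) / float(np.trace(S0)))
    P_pred = lam * (F @ P @ F.T) + Q            # fading inflates the prediction
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_new = x_pred + K @ innov
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new
```

With lam fixed at 1 this reduces to an ordinary Kalman filter; the adaptive factor only intervenes when the prediction model falls behind the target's actual motion.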


10.2196/13961 ◽  
2020 ◽  
Vol 22 (4) ◽  
pp. e13961
Author(s):  
Kim Sarah Sczuka ◽  
Lars Schwickert ◽  
Clemens Becker ◽  
Jochen Klenk

Background: Falls are a common health problem, which in the worst cases can lead to death. To develop reliable fall detection algorithms as well as suitable prevention interventions, it is important to understand the circumstances and characteristics of real-world fall events. Although falls are common, they are seldom observed, and reports are often biased. Wearable inertial sensors provide an objective approach to capturing real-world fall signals. However, it is difficult to derive a visualization and interpretation of body movements directly from the fall signals, and corresponding video data are rarely available.

Objective: The re-enactment method uses available information from inertial sensors to simulate fall events, replicate the data, validate the simulation, and thereby enable a more precise description of the fall event. The aim of this paper is to describe this method and demonstrate the validity of the re-enactment approach.

Methods: Real-world fall data, measured by inertial sensors attached to the lower back, were selected from the Fall Repository for the Design of Smart and Self-Adaptive Environments Prolonging Independent Living (FARSEEING) database. We focused on well-described fall events, such as stumbling, to be re-enacted under safe conditions in a laboratory setting. For the purposes of exemplification, we selected the acceleration signal of one fall event to establish a detailed simulation protocol based on identified postures and trunk movement sequences. The subsequent re-enactment experiments were recorded with comparable inertial sensor configurations as well as synchronized video cameras to analyze the movement behavior in detail. The re-enacted sensor signals were then compared with the real-world signals to adapt the protocol and repeat the re-enactment method if necessary. The similarity between the simulated and the real-world fall signals was analyzed with a dynamic time warping algorithm, which enables the comparison of two temporal sequences that vary in speed and timing.

Results: A fall example from the FARSEEING database was used to show the feasibility of producing a similar sensor signal with the re-enactment method. Although fall events were heterogeneous with respect to chronological sequence and curve progression, it was possible to reproduce a good approximation of the motion of a person's center of mass during fall events based on the available sensor information.

Conclusions: Re-enactment is a promising method to understand and visualize the biomechanics of inertial sensor-recorded real-world falls when performed in a suitable setup, especially if video data are not available.
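The signal comparison in the validation step can be illustrated with a textbook dynamic time warping implementation (the function and variable names are illustrative, not the authors' exact pipeline):

```python
import numpy as np

def dtw_distance(a, b):
    """Dynamic time warping between two 1-D signals, e.g. the real-world
    and re-enacted acceleration magnitudes. DTW tolerates differences in
    speed and timing, which is why it suits comparing a re-enacted fall
    against the original recording."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j],       # insertion
                                 D[i, j - 1],       # deletion
                                 D[i - 1, j - 1])   # match
    return D[n, m]
```

A lower distance after a re-enactment iteration indicates that the adapted protocol reproduces the original fall signal more closely.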

