DEEPLIO: DEEP LIDAR INERTIAL SENSOR FUSION FOR ODOMETRY ESTIMATION

Author(s):  
A. Javanmard-Gh. ◽  
D. Iwaszczuk ◽  
S. Roth

Abstract. Having a good estimate of the position and orientation of a mobile agent is essential for many application domains, such as robotics, autonomous driving, and virtual and augmented reality. In particular, when using LiDAR and IMU sensors as inputs, most existing methods still rely on classical filter-based fusion to achieve this task. In this work, we propose DeepLIO, a modular, end-to-end learning-based fusion framework for odometry estimation using LiDAR and IMU sensors. For this task, our network learns an appropriate fusion function by considering the different modalities of its input latent feature vectors. We also formulate a loss function that combines both global and local pose information over an input sequence to improve the accuracy of the network predictions. Furthermore, we design three sub-networks with different modules and architectures derived from DeepLIO to analyze the effect of each sensory input on the task of odometry estimation. Experiments on the benchmark dataset demonstrate that DeepLIO outperforms existing learning-based and model-based methods in orientation estimation, with only a marginal difference in position accuracy.
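
A minimal PyTorch sketch of the idea behind a combined local/global pose loss, assuming predicted and ground-truth frame-to-frame poses are given as (N, 3) translations and (N, 4) unit quaternions per sequence. The weights `alpha`/`beta` and the simplified trajectory accumulation are illustrative assumptions, not DeepLIO's exact formulation:

```python
import torch

def quat_angle_error(q_pred, q_gt):
    # Angular distance between unit quaternions: 2 * acos(|<q1, q2>|).
    dot = (q_pred * q_gt).sum(dim=-1).abs().clamp(max=1.0)
    return 2.0 * torch.acos(dot)

def combined_pose_loss(t_pred, q_pred, t_gt, q_gt, alpha=1.0, beta=10.0):
    # Local term: frame-to-frame relative translation and rotation errors.
    loss_local = (t_pred - t_gt).norm(dim=-1).mean() \
               + beta * quat_angle_error(q_pred, q_gt).mean()
    # Global term: accumulate relative translations over the sequence and
    # penalize drift of the integrated trajectory (rotation chaining is
    # omitted for brevity; a full version would compose quaternions too).
    traj_pred = torch.cumsum(t_pred, dim=0)
    traj_gt = torch.cumsum(t_gt, dim=0)
    loss_global = (traj_pred - traj_gt).norm(dim=-1).mean()
    return loss_local + alpha * loss_global
```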

2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Lei Yan ◽  
Qun Hao ◽  
Jie Cao ◽  
Rizvi Saad ◽  
Kun Li ◽  
...  

Abstract. Image fusion integrates information from multiple images of the same scene to generate a more informative composite image suitable for human and computer vision perception. Methods based on multiscale decomposition are among the most commonly used fusion methods. In this study, a new fusion framework based on the octave Gaussian pyramid principle is proposed. In comparison with conventional multiscale decomposition, the proposed octave Gaussian pyramid framework retrieves more information by decomposing an image into two scale spaces (octave and interval spaces). Unlike traditional multiscale decomposition with one set of detail and base layers, the proposed method decomposes an image into multiple sets of detail and base layers, efficiently retaining both high- and low-frequency information from the original image. A qualitative and quantitative comparison with five existing methods on publicly available image databases demonstrates that the proposed method produces better visual effects and scores highest in objective evaluation.
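
A minimal NumPy/OpenCV sketch of decomposing an image into multiple sets of detail and base layers across octaves (downsampled scale space) and intervals (blur levels within an octave), in the spirit of an octave Gaussian pyramid. The sigma schedule, layer counts, and fusion rule mentioned in the closing comment are assumptions, not the paper's exact design:

```python
import cv2
import numpy as np

def octave_decompose(img, n_octaves=3, n_intervals=3, sigma0=1.6):
    img = img.astype(np.float32)
    octaves = []
    current = img
    for o in range(n_octaves):
        # Interval space: blur with increasing sigma inside the octave.
        blurred = [cv2.GaussianBlur(current, (0, 0),
                                    sigma0 * (2 ** (i / n_intervals)))
                   for i in range(n_intervals + 1)]
        # Detail layers: differences of adjacent blur levels.
        details = [blurred[i] - blurred[i + 1] for i in range(n_intervals)]
        # Base layer: the most blurred image of this octave.
        octaves.append({"details": details, "base": blurred[-1]})
        # Octave space: downsample by a factor of two for the next octave.
        current = cv2.pyrDown(blurred[-1])
    return octaves
```

Fusion would then combine corresponding layers of two source images (for instance, max-absolute selection for detail layers and averaging for base layers) and reconstruct by upsampling and summation.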


2021 ◽  
Vol 32 (4) ◽  
Author(s):  
Luigi D’Alfonso ◽  
Emanuele Garone ◽  
Pietro Muraca ◽  
Paolo Pugliese

Abstract. In this work, we address the problem of estimating the relative position and orientation of a camera and an object when both are equipped with inertial measurement units (IMUs) and the object exhibits a set of n landmark points with known coordinates (the so-called pose estimation or PnP problem). We present two algorithms that, by fusing the information provided by the camera and the IMUs, solve the PnP problem with good accuracy. These algorithms use only the measurements given by the IMUs' inclinometers, as magnetometers usually give inaccurate estimates of the Earth's magnetic field vector. The effectiveness of the proposed methods is assessed by numerical simulations and experimental tests, and the results are compared with the most recent methods proposed in the literature.
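
A minimal sketch of the core idea of inclinometer-aided PnP: if roll and pitch of both camera and object are known from gravity measurements, the relative rotation is reduced to a single unknown yaw angle about the vertical, and the pose can be found by a small least-squares problem over yaw and translation. The function names, the pinhole model, and the frame conventions are illustrative assumptions, not the paper's algorithms:

```python
import numpy as np
from scipy.optimize import least_squares

def rot_z(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def residuals(params, R_cam_tilt, R_obj_tilt, pts_obj, pts_img, K):
    yaw, t = params[0], params[1:4]
    # Relative rotation: camera tilt (from its inclinometer), one unknown
    # yaw about gravity, and object tilt (from its inclinometer).
    R = R_cam_tilt.T @ rot_z(yaw) @ R_obj_tilt
    p_cam = (R @ pts_obj.T).T + t          # landmarks in the camera frame
    proj = (K @ p_cam.T).T
    proj = proj[:, :2] / proj[:, 2:3]      # pinhole projection
    return (proj - pts_img).ravel()        # reprojection residuals

# Hypothetical usage, given tilt matrices, landmarks, pixels, intrinsics:
# sol = least_squares(residuals, x0=np.zeros(4),
#                     args=(R_cam_tilt, R_obj_tilt, pts_obj, pts_img, K))
```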


Author(s):  
Sara Santos ◽  
Duarte Folgado ◽  
João Rodrigues ◽  
Nafiseh Mollaei ◽  
Carlos Fujão ◽  
...  

2006 ◽  
Vol 03 (03) ◽  
pp. 247-258 ◽  
Author(s):  
GANG SONG ◽  
SHUXIANG GUO

We propose a novel self-assisted rehabilitation system for the upper limbs of stroke patients. The system mainly comprises two haptic devices (PHANTOM Omni), an advanced inertial sensor (MTx), and a computer. The inertial sensor captures the real-time orientation of one of the operator's hands, while the haptic devices capture the real-time positions of both hands and generate appropriate forces acting on them. We have built a virtual force model to obtain the accurate magnitude and orientation of these forces: as the position and orientation of the operator's hands change, the magnitude and orientation of the forces change accordingly. The operator manipulates the styluses of the two haptic devices to control the position and orientation of a virtual object m so that it tracks a virtual object m′, which moves and rotates randomly in four degrees of freedom (DOF). In this way, the system is expected to improve the agility and strength of the operator's hands. Furthermore, one hand can assist the other during rehabilitation, which gives the system its self-assistance character. The advantages of high safety, compactness, and self-assistance make the system suitable for home rehabilitation.
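
A minimal sketch of a virtual force model of the kind the abstract describes: the force rendered on each hand grows with the error between the controlled object m and the target m′, here modeled as a spring-damper with a safety clamp. The gains and the clamp limit are illustrative assumptions, not the paper's model:

```python
import numpy as np

def virtual_force(p_m, p_target, v_m, k_p=50.0, k_d=5.0, f_max=3.0):
    # Spring pulls the hand toward the target; damper suppresses oscillation.
    f = k_p * (p_target - p_m) - k_d * v_m
    n = np.linalg.norm(f)
    if n > f_max:
        # Clamp the magnitude for safety on the haptic device.
        f = f * (f_max / n)
    return f
```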


Sensors ◽  
2019 ◽  
Vol 19 (1) ◽  
pp. 208 ◽  
Author(s):  
Christina Salchow-Hömmen ◽  
Leonie Callies ◽  
Daniel Laidig ◽  
Markus Valtin ◽  
Thomas Schauer ◽  
...  

Objective real-time assessment of hand motion is crucial in many clinical applications including technically-assisted physical rehabilitation of the upper extremity. We propose an inertial-sensor-based hand motion tracking system and a set of dual-quaternion-based methods for estimation of finger segment orientations and fingertip positions. The proposed system addresses the specific requirements of clinical applications in two ways: (1) In contrast to glove-based approaches, the proposed solution maintains the sense of touch. (2) In contrast to previous work, the proposed methods avoid the use of complex calibration procedures, which means that they are suitable for patients with severe motor impairment of the hand. To overcome the limited significance of validation in lab environments with homogeneous magnetic fields, we validate the proposed system using functional hand motions in the presence of severe magnetic disturbances as they appear in realistic clinical settings. We show that standard sensor fusion methods that rely on magnetometer readings may perform well in perfect laboratory environments but can lead to more than 15 cm root-mean-square error for the fingertip distances in realistic environments, while our advanced method yields root-mean-square errors below 2 cm for all performed motions.
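
A minimal NumPy sketch of dual-quaternion forward kinematics for a chain of finger segments: each segment contributes a joint rotation (e.g., from its inertial sensor, expressed relative to the previous segment) followed by a fixed offset along the segment. The segment lengths, frame conventions, and quaternion layout are illustrative assumptions, not the paper's calibrated model:

```python
import numpy as np

def qmul(a, b):
    # Hamilton product; quaternions stored as [w, x, y, z].
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return np.array([w1*w2 - x1*x2 - y1*y2 - z1*z2,
                     w1*x2 + x1*w2 + y1*z2 - z1*y2,
                     w1*y2 - x1*z2 + y1*w2 + z1*x2,
                     w1*z2 + x1*y2 - y1*x2 + z1*w2])

def dq_from_rt(q, t):
    # Dual quaternion (real, dual) for "rotate by q, then translate by t".
    dual = 0.5 * qmul(np.array([0.0, *t]), q)
    return q, dual

def dq_mul(d1, d2):
    r1, e1 = d1
    r2, e2 = d2
    return qmul(r1, r2), qmul(r1, e2) + qmul(e1, r2)

def fingertip_position(segment_quats, segment_lengths):
    identity = np.array([1.0, 0.0, 0.0, 0.0])
    dq = (identity, np.zeros(4))                          # identity pose
    for q, L in zip(segment_quats, segment_lengths):
        rot = dq_from_rt(q, np.zeros(3))                  # joint rotation
        off = dq_from_rt(identity, np.array([L, 0.0, 0.0]))  # segment offset
        dq = dq_mul(dq, dq_mul(rot, off))
    real, dual = dq
    # Translation is recovered as 2 * dual * conj(real).
    conj = real * np.array([1.0, -1.0, -1.0, -1.0])
    return 2.0 * qmul(dual, conj)[1:]
```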


Sensors ◽  
2020 ◽  
Vol 20 (10) ◽  
pp. 2870 ◽  
Author(s):  
Shaorong Xie ◽  
Chao Pan ◽  
Yaxin Peng ◽  
Ke Liu ◽  
Shihui Ying

In the field of autonomous driving, carriers are equipped with a variety of sensors, including cameras and LiDARs. However, cameras suffer from problems of illumination and occlusion, while LiDARs encounter motion distortion, degenerate environments, and limited ranging distance. Therefore, fusing the information from these two sensors deserves to be explored. In this paper, we propose a fusion network which robustly captures both image and point cloud descriptors to solve the place recognition problem. Our contributions can be summarized as follows: (1) applying a trimmed strategy in the point cloud global feature aggregation to improve recognition performance, (2) building a compact fusion framework which captures robust representations of both the image and the 3D point cloud, and (3) learning a proper metric to describe the similarity of our fused global feature. Experiments on the KITTI and KAIST datasets show that the proposed fused descriptor is more robust and discriminative than single-sensor descriptors.
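
A minimal PyTorch sketch of one plausible reading of a "trimmed" global feature aggregation: before pooling per-point local descriptors into a global descriptor, a fixed fraction of points farthest from the current feature mean is discarded, suppressing outlier points. The trimming criterion, ratio, and max-pooling are assumptions, not necessarily the paper's exact strategy:

```python
import torch

def trimmed_global_feature(point_feats, trim_ratio=0.1):
    # point_feats: (N, D) per-point local descriptors.
    mean = point_feats.mean(dim=0, keepdim=True)
    dist = (point_feats - mean).norm(dim=1)        # distance to feature mean
    n_keep = int(point_feats.shape[0] * (1.0 - trim_ratio))
    keep = dist.argsort()[:n_keep]                 # drop the farthest points
    return point_feats[keep].max(dim=0).values     # max-pool the remainder
```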


Sensors ◽  
2021 ◽  
Vol 21 (12) ◽  
pp. 4042 ◽
Author(s):  
Xiaqing Ding ◽  
Fuzhang Han ◽  
Tong Yang ◽  
Yue Wang ◽  
Rong Xiong

Global localization is a fundamental ability for mobile robots. Considering the limitations of any single type of sensor, fusing measurements from multiple sensors with complementary properties is a valuable task to study. In this paper, we propose a decoupled optimization-based framework for global–local sensor fusion, which fuses intermittent 3D global positions and high-frequency 6D odometry poses to infer 6D global localization results in real time. The fusion process is formulated as estimating the relative transformation between the global and local reference coordinates, the translational extrinsic calibration, and the scale of the local pose estimator. We validate the full observability of the system under general movements, and further analyze the degenerate movement patterns under which some system states become unobservable. A degeneration-aware sensor fusion method is designed which detects the degenerate directions before optimization and adds constraints specifically along these directions to mitigate the effect of noise. The proposed degeneration-aware global–local sensor fusion method is validated on both simulation and real-world datasets with different sensor configurations, and shows its effectiveness in terms of accuracy and robustness compared with other decoupled sensor fusion methods for global localization.
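
A minimal NumPy sketch of degeneration-aware optimization in the spirit of the abstract: eigen-decompose the Gauss-Newton information matrix JᵀJ, treat directions with small eigenvalues as unobservable, and add a prior constraint only along those directions. The threshold, the prior weight, and the function names are assumptions, not the paper's formulation:

```python
import numpy as np

def degeneration_aware_step(J, r, x, x_prior, eig_thresh=1e-6, w_prior=1e3):
    # Standard Gauss-Newton quantities for min ||r(x)||^2.
    H = J.T @ J
    g = J.T @ r
    vals, vecs = np.linalg.eigh(H)
    # Eigenvectors with eigenvalues below the threshold span the
    # degenerate (unobservable) subspace.
    V_deg = vecs[:, vals < eig_thresh]
    P = V_deg @ V_deg.T                 # projector onto that subspace
    # Constrain the solution toward the prior only along degenerate
    # directions; well-observed directions are left untouched.
    H = H + w_prior * P
    g = g + w_prior * P @ (x - x_prior)
    return x - np.linalg.solve(H, g)
```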


2001 ◽  
Author(s):  
Jane Xiaojing Yuan ◽  
Fernando Figueroa

Abstract. The objective of sensor fusion is to synergistically combine different sources of sensory information into one representational format to provide a more complete and precise interpretation of the system. A generic sensor fusion framework based on a highly autonomous sensor (HAS) model is presented. The framework provides the freedom to choose different data fusion methods and combine them to achieve better performance. In the context of HASs, this paper describes a hierarchical decentralized sensor-fusion approach based on a qualitative theory to interpret measurements, and on qualitative procedures to reason and make decisions based on these interpretations. In this manner, heuristic fusion methods are applied at a high qualitative level as well as at a numerical level when necessary. This approach implements intuitive (effective) methods to monitor, diagnose, and compensate processes/systems and their sensors.
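
A minimal sketch of the hierarchical idea behind a highly autonomous sensor: each sensor first interprets its own reading qualitatively, and the numeric fusion layer then weights sensors by that self-assessment. The status labels, thresholds, and weights are illustrative assumptions, not the paper's qualitative theory:

```python
from dataclasses import dataclass

@dataclass
class HASReading:
    value: float
    status: str  # qualitative self-assessment: "OK", "SUSPECT", or "FAILED"

def interpret(value, expected, tol):
    # Qualitative layer: classify the measurement before numeric fusion.
    dev = abs(value - expected)
    if dev < tol:
        return HASReading(value, "OK")
    if dev < 3 * tol:
        return HASReading(value, "SUSPECT")
    return HASReading(value, "FAILED")

def fuse(readings):
    # Numeric layer: weighted average that down-weights suspect sensors
    # and excludes failed ones entirely.
    weights = {"OK": 1.0, "SUSPECT": 0.25, "FAILED": 0.0}
    num = sum(weights[r.status] * r.value for r in readings)
    den = sum(weights[r.status] for r in readings)
    return num / den if den > 0 else None
```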

