The UMA-VI dataset: Visual–inertial odometry in low-textured and dynamic illumination environments

2020 · Vol 39 (9) · pp. 1052-1060
Author(s): David Zuñiga-Noël, Alberto Jaenal, Ruben Gomez-Ojeda, Javier Gonzalez-Jimenez

This article presents a visual–inertial dataset gathered in indoor and outdoor scenarios with a handheld custom sensor rig, for over 80 min in total. The dataset contains hardware-synchronized data from a commercial stereo camera (Bumblebee®2), a custom stereo rig, and an inertial measurement unit. The most distinctive feature of this dataset is the strong presence of low-textured environments and scenes with dynamic illumination, which are recurrent corner cases of visual odometry and simultaneous localization and mapping (SLAM) methods. The dataset comprises 32 sequences and is provided with ground-truth poses at the beginning and the end of each sequence, thus allowing the accumulated drift to be measured in each case. We provide an evaluation of five state-of-the-art visual and visual–inertial methods on a subset of the dataset. We also make available open-source tools for evaluation purposes, as well as the intrinsic and extrinsic calibration parameters of all sensors in the rig. The dataset is available for download at http://mapir.uma.es/work/uma-visual-inertial-dataset
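
Since ground truth is only available at the two ends of each sequence, drift evaluation reduces to comparing the estimated end-to-end motion against the ground-truth one. A minimal sketch of such a check (our own illustration using 4×4 homogeneous pose matrices, not the authors' released evaluation tools):

```python
import numpy as np

def accumulated_drift(T_gt_start, T_gt_end, T_est_start, T_est_end):
    """Compare the estimated end-to-end motion of a sequence against the
    ground-truth one (all poses given as 4x4 homogeneous matrices)."""
    rel_gt = np.linalg.inv(T_gt_start) @ T_gt_end     # true start-to-end motion
    rel_est = np.linalg.inv(T_est_start) @ T_est_end  # estimated motion
    err = np.linalg.inv(rel_gt) @ rel_est             # residual pose error
    trans_drift = np.linalg.norm(err[:3, 3])          # meters
    cos_angle = np.clip((np.trace(err[:3, :3]) - 1.0) / 2.0, -1.0, 1.0)
    rot_drift = np.degrees(np.arccos(cos_angle))      # degrees
    return trans_drift, rot_drift
```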

2018 · Vol 51 (9-10) · pp. 488-497
Author(s): Zheng Fang, Tao Yu, Qian Wang, Chao Wang, Siyuan Chen

Background: Acceleration and angular velocity sensors are commonly used to measure gait parameters. A representative application calculates the limb-segment dip angle. The rotation angle is typically deduced with a conventional Kalman filter, which relies on two empirically chosen parameters. We improved this conventional method by introducing an ant colony algorithm to find the optimal parameter combination instead of assigning the parameters empirically. Method: A servo motor carrying an inertial measurement unit was used to simulate human limb-segment motion according to programmed rotation angles, which served as the ground truth. To minimize the bias between the ground truth and the calculated result, the ant colony algorithm was employed to search the two-dimensional parameter space for the optimal Kalman filter parameter combination. Results: In the motor experiment, the sum of squared angle errors was only 1.9305 rad², much better than the 6.7723 rad² obtained with the conventional method. The optimal parameter combination was then used in a human experiment involving a basketball player. Video frames covering a whole gait cycle are shown together with the corresponding deduced thigh dip-angle curve. Conclusion: Optimizing the Kalman filter parameters with the ant colony algorithm reduces the angle errors deduced from inertial measurement unit data. The subject experiment verified the feasibility and performance of this method.
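
As a rough illustration of the approach, the sketch below couples a scalar Kalman filter for the angle with a toy ant-colony-style search over a discretized two-dimensional parameter grid. It assumes the two tuned parameters are the process- and measurement-noise variances (q, r); the pheromone constants, grids, and error metric are illustrative, not taken from the paper:

```python
import numpy as np

def kalman_angle(meas, q, r):
    """Scalar Kalman filter for an angle signal, with process variance q
    and measurement variance r (the two parameters being tuned)."""
    x, p, out = meas[0], 1.0, []
    for z in meas:
        p += q                    # predict
        k = p / (p + r)           # Kalman gain
        x += k * (z - x)          # update with measurement z
        p *= 1.0 - k
        out.append(x)
    return np.array(out)

def ant_colony_search(meas, truth, q_grid, r_grid,
                      n_ants=20, n_iter=50, rho=0.1, seed=0):
    """Toy ant colony optimization over a discretized (q, r) grid:
    pheromone-weighted sampling, evaporation at rate rho, and deposits
    proportional to 1 / (sum of squared angle errors)."""
    rng = np.random.default_rng(seed)
    tau = np.ones((len(q_grid), len(r_grid)))        # pheromone levels
    best, best_err = None, np.inf
    for _ in range(n_iter):
        picks = rng.choice(tau.size, size=n_ants, p=(tau / tau.sum()).ravel())
        tau *= 1.0 - rho                             # evaporation
        for flat in picks:
            i, j = np.unravel_index(flat, tau.shape)
            err = np.sum((kalman_angle(meas, q_grid[i], r_grid[j]) - truth) ** 2)
            tau[i, j] += 1.0 / (1e-9 + err)          # deposit pheromone
            if err < best_err:
                best, best_err = (q_grid[i], r_grid[j]), err
    return best, best_err

# e.g., search log-spaced grids:
# best, err = ant_colony_search(meas, truth,
#                               np.logspace(-6, 0, 30), np.logspace(-4, 2, 30))
```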


2019 · Article ID 027836491985336
Author(s): Zheng Huai, Guoquan Huang

In this paper, we propose a novel robocentric formulation of the visual–inertial navigation system (VINS) within a sliding-window filtering framework and design an efficient, lightweight, robocentric visual–inertial odometry (R-VIO) algorithm for consistent motion tracking, even in challenging environments, using only a monocular camera and a six-axis inertial measurement unit (IMU). The key idea is to deliberately reformulate the VINS with respect to a moving local frame, rather than a fixed global frame of reference as in the standard world-centric VINS, in order to obtain relative motion estimates of higher accuracy for updating the global pose. As an immediate advantage of this robocentric formulation, the proposed R-VIO can start from an arbitrary pose, without the need to align the initial orientation with the global gravitational direction. More importantly, we analytically show that the linearized robocentric VINS does not suffer from the observability mismatch that has been identified in the literature as the main cause of estimation inconsistency in the standard world-centric counterparts. Furthermore, we investigate in depth the special motions that degrade performance in the world-centric formulation and show that such degenerate cases are easily compensated for by the proposed robocentric formulation, without resorting to additional sensors, thus leading to better robustness. The proposed R-VIO algorithm has been extensively validated through both Monte Carlo simulations and real-world experiments with different sensing platforms navigating in different environments, and shown to achieve better (or at least competitive) performance than the state-of-the-art VINS in terms of consistency, accuracy, and efficiency.
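
The composition step at the heart of the robocentric formulation can be pictured as follows: the filter estimates the relative transform between successive local frames, and the global pose is maintained outside the filter by composing these relative estimates. A minimal sketch with homogeneous 4×4 matrices (illustrative only, not the R-VIO implementation):

```python
import numpy as np

# The global pose lives outside the filter state; it can start from an
# arbitrary pose since no up-front gravity alignment is required.
T_global = np.eye(4)  # pose of the current local frame in the world

def compose(T_global, T_rel):
    """Fold the filter's estimate of the relative motion T_rel
    (previous local frame -> new local frame) into the global pose."""
    return T_global @ T_rel

# After each sliding-window update that yields a relative estimate T_rel:
#     T_global = compose(T_global, T_rel)
```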


2019 · Vol 38 (14) · pp. 1549-1559
Author(s): Maxime Ferrera, Vincent Creuze, Julien Moras, Pauline Trouvé-Peloux

We present a new dataset, dedicated to the development of simultaneous localization and mapping methods for underwater vehicles navigating close to the seabed. The data sequences composing this dataset were recorded in three different environments: a harbor at a depth of a few meters, a first archeological site at a depth of 270 meters, and a second site at a depth of 380 meters. Data acquisition was performed using remotely operated vehicles equipped with a monocular monochromatic camera, a low-cost inertial measurement unit, a pressure sensor, and a computing unit, all embedded in a single enclosure. The sensors' measurements were recorded synchronously on the computing unit, and 17 sequences were created from the acquired data. These sequences are made available in the form of ROS bags and as raw data. For each sequence, a trajectory has also been computed offline using a structure-from-motion library to allow comparison with real-time localization methods. With the release of this dataset, we wish to provide data that are difficult to acquire and to encourage the development of vision-based localization methods dedicated to the underwater environment. The dataset can be downloaded from: http://www.lirmm.fr/aqualoc/
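
For reference, sequences distributed as ROS bags can be iterated with the standard rosbag Python API; the topic names below are hypothetical placeholders, not taken from the dataset documentation:

```python
import rosbag  # ROS1 Python API

# Hypothetical topic names; check the AQUALOC documentation for the real ones.
CAM_TOPIC = '/camera/image_raw'
IMU_TOPIC = '/imu/data'
PRESSURE_TOPIC = '/pressure'

with rosbag.Bag('sequence_01.bag') as bag:
    for topic, msg, t in bag.read_messages(
            topics=[CAM_TOPIC, IMU_TOPIC, PRESSURE_TOPIC]):
        if topic == IMU_TOPIC:
            gyro = msg.angular_velocity      # rad/s
            accel = msg.linear_acceleration  # m/s^2
        elif topic == CAM_TOPIC:
            stamp = msg.header.stamp         # convert image with cv_bridge
```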


2021 · Vol 18 (2) · Article ID 172988142199992
Author(s): Ping Jiang, Liang Chen, Hang Guo, Min Yu, Jian Xiong

Simultaneous localization and mapping (SLAM) is a core technology for realizing intelligent autonomous mobile robots. To address the low positioning accuracy of Lidar (light detection and ranging) SLAM under nonlinear, non-Gaussian noise, this article presents a mobile robot SLAM method that combines Lidar and an inertial measurement unit in a multi-sensor integrated system and uses a rank Kalman filter to estimate the robot's trajectory from the inertial measurement unit and Lidar observations. The rank Kalman filter is structurally similar to Gaussian deterministic point-sampling filters, but it does not require the assumption of a Gaussian distribution: the sampling points and their weights are computed entirely from the correlation principle of rank statistics, making it suitable for nonlinear, non-Gaussian systems. In repeated experiments on small-scale arc trajectories, compared with the Lidar-only SLAM algorithm, the new algorithm reduces the mean error of the indoor mobile robot in the X direction from 0.0928 m to 0.0451 m, an improvement of 46.39%, and the mean error in the Y direction from 0.0772 m to 0.0405 m, an improvement of 48.40%. Compared with the extended Kalman filter fusion algorithm, the new algorithm reduces the mean error in the X direction from 0.0597 m to 0.0451 m, an improvement of 24.46%, and the mean error in the Y direction from 0.0537 m to 0.0405 m, an improvement of 24.58%. Finally, on a large-scale rectangular trajectory, the rank Kalman filter improves accuracy by 23.84% and 25.26% in the X and Y directions, respectively, compared with the extended Kalman filter, verifying that the accuracy of the proposed algorithm is improved.
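
For orientation, a generic Lidar-IMU fusion loop with the same predict/update structure is sketched below for a planar robot state [x, y, theta]. Note this uses a standard EKF update as a stand-in: the paper's rank Kalman filter replaces the Gaussian assumption with sampling points and weights derived from rank statistics, which is not reproduced here.

```python
import numpy as np

def predict(x, P, u, dt, Q):
    """EKF prediction for a planar state [x, y, theta] driven by
    IMU-derived motion u = (v, omega) over time step dt."""
    px, py, th = x
    x_new = np.array([px + u[0] * dt * np.cos(th),
                      py + u[0] * dt * np.sin(th),
                      th + u[1] * dt])
    F = np.array([[1.0, 0.0, -u[0] * dt * np.sin(th)],
                  [0.0, 1.0,  u[0] * dt * np.cos(th)],
                  [0.0, 0.0,  1.0]])
    return x_new, F @ P @ F.T + Q

def update(x, P, z, R):
    """EKF update with a Lidar scan-matching pose observation z = (x, y, theta)."""
    H = np.eye(3)
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
    return x + K @ (z - H @ x), (np.eye(3) - K @ H) @ P
```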


2018 · Vol 37 (1) · pp. 13-20
Author(s): Martin Miller, Soon-Jo Chung, Seth Hutchinson

We present a dataset collected from a canoe along the Sangamon River in Illinois. The canoe was equipped with a stereo camera, an inertial measurement unit (IMU), and a global positioning system (GPS) device, which provide visual data suitable for stereo or monocular applications, inertial measurements, and position data for ground truth. We recorded a canoe trip up and down the river for 44 minutes covering a 2.7 km round trip. The dataset adds to those previously recorded in unstructured environments and is unique in that it is recorded on a river, which provides its own set of challenges and constraints that are described in this paper. The dataset is stored on the Illinois Data Bank and can be accessed at: https://doi.org/10.13012/B2IDB-9342111_V1 .


Sensors · 2020 · Vol 20 (7) · pp. 1846
Author(s): Lei He, Zhe Jin, Zhenhai Gao

Simultaneous localization and mapping has become a basic requirement for most autonomous mobile robots. However, LiDAR scans suffer from skewing caused by high-acceleration motion, which reduces precision in the subsequent mapping or classification process. In this study, we improve the quality of mapping results by de-skewing the LiDAR scan. By integrating high-sampling-frequency IMU (inertial measurement unit) measurements and establishing a motion equation over time, we obtain the pose of every point in the scan's frame. All points in the scan are then corrected and transformed into the frame of the first point. We expand the optimization range from the current scan to a local window of point clouds, which not only considers the motion of the LiDAR but also takes advantage of neighboring LiDAR scans. Finally, we validate the performance of our algorithm in indoor and outdoor experiments, comparing the mapping results before and after de-skewing. Experimental results show that our method smooths the scan skewing on each channel and improves mapping accuracy.
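
A per-point de-skewing step of this kind can be sketched as follows, assuming a constant-velocity motion model between the sweep's start and end poses (which would come from IMU integration) and per-point timestamps normalized over the sweep; the data layout and interpolation scheme are our assumptions, not the paper's exact method:

```python
import numpy as np
from scipy.spatial.transform import Rotation, Slerp

def deskew_scan(points, times, R_start, t_start, R_end, t_end):
    """De-skew one LiDAR sweep: interpolate the sensor pose at each point's
    capture time, then map every point into the frame of the first point.

    points: (N, 3) array; times: (N,) normalized to [0, 1] over the sweep;
    R_*: scipy Rotation, t_*: (3,) position, i.e. sensor poses at the start
    and end of the sweep (e.g. from IMU integration), in a common frame."""
    key_rots = Rotation.from_quat([R_start.as_quat(), R_end.as_quat()])
    slerp = Slerp([0.0, 1.0], key_rots)               # rotation interpolation
    out = np.empty_like(points)
    for i, (p, s) in enumerate(zip(points, times)):
        R_i = slerp(s)                                # orientation at point time
        t_i = (1.0 - s) * t_start + s * t_end         # interpolated position
        p_common = R_i.apply(p) + t_i                 # point into common frame
        out[i] = R_start.inv().apply(p_common - t_start)  # frame of first point
    return out
```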

