Efficient RGB-D data processing for feature-based self-localization of mobile robots

Author(s):  
Marek Kraft ◽  
Michał Nowicki ◽  
Rudi Penne ◽  
Adam Schmidt ◽  
Piotr Skrzypczyński

The problem of position and orientation estimation for an active vision sensor that moves with respect to the full six degrees of freedom is considered. The proposed approach is based on point features extracted from RGB-D data. This work focuses on efficient point feature extraction algorithms and on methods for the management of a set of features in a single RGB-D data frame. While the fast, RGB-D-based visual odometry system described in this paper builds upon our previous results in its general architecture, the important novel elements introduced here are aimed at improving the precision and robustness of the motion estimate computed from the matching point features of two RGB-D frames. Moreover, we demonstrate that the visual odometry system can serve as the front-end for a pose-based simultaneous localization and mapping solution. The proposed solutions are tested on publicly available data sets to ensure that the results are scientifically verifiable. The experimental results demonstrate gains due to the improved feature extraction and management mechanisms, while the performance of the whole navigation system compares favorably to results known from the literature.
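The core step the abstract describes, computing a motion estimate from matched point features of two RGB-D frames, is commonly cast as a least-squares rigid-body alignment. Below is a minimal sketch using the SVD-based (Kabsch) method; this is a generic illustration under that assumption, not the authors' implementation, which adds feature management and robustness mechanisms on top:

```python
import numpy as np

def estimate_rigid_transform(src, dst):
    """Least-squares rigid transform (R, t) mapping src onto dst.

    src, dst: (N, 3) arrays of matched 3D points from two RGB-D frames.
    Uses the SVD-based Kabsch method; a robust pipeline would wrap this
    in RANSAC to reject feature mismatches.
    """
    c_src = src.mean(axis=0)
    c_dst = dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)     # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    # Force a proper rotation (det R = +1, no reflection).
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = c_dst - R @ c_src
    return R, t
```

Given the estimated (R, t) per frame pair, chaining the transforms yields the visual odometry trajectory.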

Author(s):  
M. Peter ◽  
S. R. U. N. Jafri ◽  
G. Vosselman

Indoor mobile laser scanning (IMLS) based on the Simultaneous Localization and Mapping (SLAM) principle has proven to be the preferred method to acquire data of indoor environments at a large scale. In previous work, we proposed a backpack IMLS system containing three 2D laser scanners and a corresponding SLAM approach. The feature-based SLAM approach solves all six degrees of freedom simultaneously and builds on the association of lines to planes. Because of the iterative character of the SLAM process, the quality and reliability of the segmentation of linear segments in the scanlines plays a crucial role in the quality of the derived poses and, consequently, the point clouds. The orientations of the lines resulting from the segmentation can be influenced negatively by narrow objects which are nearly coplanar with walls (e.g. doors), which will cause the line to be tilted if those objects are not detected as separate segments. State-of-the-art methods from the robotics domain, like Iterative End Point Fit and Line Tracking, were found not to handle such situations well. Thus, we describe a novel segmentation method based on the comparison of a range of residuals to a range of thresholds. For the definition of the thresholds we employ the fact that the expected value for the average of residuals of n points with respect to the line is σ/√n. Our method, as shown by the experiments and the comparison to other methods, is able to deliver more accurate results than the two approaches it was tested against.
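The threshold test can be sketched as follows. Here the candidate line is taken through the segment's endpoints (as in Iterative End Point Fit) and the signed residuals of the points are averaged; this is an illustrative reading of the abstract's σ/√n criterion, not the paper's exact formulation:

```python
import numpy as np

def accept_segment(points, sigma, k=3.0):
    """Test whether a scanline segment is a single linear piece.

    points: (n, 2) array of consecutive 2D scan points.
    sigma:  assumed standard deviation of the per-point range noise.

    Residuals are signed distances to the line through the segment's
    endpoints. Per the abstract, the expected average residual of n
    points with respect to the line is sigma / sqrt(n), so the mean is
    compared to k * sigma / sqrt(n). A nearly coplanar object (e.g. a
    door) biases the residuals to one side, pushing the mean past the
    threshold, whereas pure noise averages out.
    """
    points = np.asarray(points, dtype=float)
    n = len(points)
    p0, p1 = points[0], points[-1]
    direction = p1 - p0
    normal = np.array([-direction[1], direction[0]])
    normal /= np.linalg.norm(normal)
    residuals = (points - p0) @ normal        # signed distances
    return abs(residuals.mean()) <= k * sigma / np.sqrt(n)
```

A segment failing the test would be split and the halves re-tested, so a door nearly coplanar with a wall ends up as its own segment instead of tilting the wall line.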


2012 ◽  
Vol 2012 ◽  
pp. 1-13 ◽  
Author(s):  
David Valiente García ◽  
Lorenzo Fernández Rojo ◽  
Arturo Gil Aparicio ◽  
Luis Payá Castelló ◽  
Oscar Reinoso García

In the field of mobile autonomous robots, visual odometry entails the retrieval of a motion transformation between two consecutive poses of the robot by means of a camera sensor alone. Visual odometry provides essential information for trajectory estimation in problems such as localization and SLAM (Simultaneous Localization and Mapping). In this work we present a motion estimation method based on a single omnidirectional camera. We exploited the maximized horizontal field of view provided by this camera, which allows us to encode large scene information into the same image. The estimation of the motion transformation between two poses is computed incrementally, since only the processing of two consecutive omnidirectional images is required. In particular, we exploited the versatility of the information gathered by omnidirectional images to implement both an appearance-based and a feature-based method to obtain visual odometry results. We carried out a set of experiments in real indoor environments to test the validity and suitability of both methods. The data used in the experiments consist of large sets of omnidirectional images captured along the robot's trajectory in three different real scenarios. Experimental results demonstrate the accuracy of the estimations and the capability of both methods to work in real time.
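The incremental character of the estimate, where each new pose is obtained by composing the previous pose with the relative transformation recovered from two consecutive images, can be sketched for the planar case; the square trajectory below is a hypothetical example, not data from the paper:

```python
import numpy as np

def compose(pose, delta):
    """Compose a planar pose (x, y, theta) with a relative motion
    (dx, dy, dtheta) expressed in the previous pose's frame."""
    x, y, th = pose
    dx, dy, dth = delta
    return (x + dx * np.cos(th) - dy * np.sin(th),
            y + dx * np.sin(th) + dy * np.cos(th),
            th + dth)

# Visual odometry yields one relative motion per image pair; chaining
# the increments recovers the trajectory. Example: a 1 m square path.
trajectory = [(0.0, 0.0, 0.0)]
for delta in [(1.0, 0.0, np.pi / 2)] * 4:
    trajectory.append(compose(trajectory[-1], delta))
```

Because each increment is estimated independently, errors accumulate along the chain, which is why such front-ends are often paired with a SLAM back-end.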


Author(s):  
JAMES L. CROWLEY ◽  
PHILIPPE BOBET ◽  
MOUAFAK MESRABI

This paper describes a layered control system for a binocular stereo head. It begins with a discussion of the principles of layered control. It then describes the mechanical configuration of a binocular camera head with six degrees of freedom. A device-level controller is presented which permits an active vision system to command the position of a binocular gaze point in the scene. The final section describes the design of perceptual actions which exploit this device-level controller.


2017 ◽  
Vol 14 (5) ◽  
pp. 172988141773566 ◽  
Author(s):  
Lifeng An ◽  
Xinyu Zhang ◽  
Hongbo Gao ◽  
Yuchao Liu

Visual odometry plays an important role in urban autonomous driving cars. Feature-based visual odometry methods sample candidates randomly from all available feature points, while alignment-based visual odometry methods take all pixels into account. These methods hold the assumption that the quantitative majority of candidate visual cues represents the true motion. But in real urban traffic scenes, this assumption can be broken by large numbers of dynamic traffic participants. Big trucks or buses may occupy the main image parts of a front-view monocular camera and result in a wrong visual odometry estimate. Finding available visual cues that represent the real motion is the most important and hardest step for visual odometry in dynamic environments. Semantic attributes of pixels can be considered a more reasonable factor for candidate selection in that case. This article analyzed the availability of all visual cues with the help of pixel-level semantic information and proposed a new visual odometry method that combines feature-based and alignment-based visual odometry methods in one optimization pipeline. The proposed method was compared with three open-source visual odometry algorithms on the KITTI benchmark data sets and our own data set. Experimental results confirmed that the new approach provides effective improvements in both accuracy and robustness in complex dynamic scenes.
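The candidate-selection idea, discarding visual cues that fall on pixels labelled as dynamic traffic participants before estimating motion, can be sketched as a simple semantic filter. The class names and lookup interface below are illustrative assumptions, not the paper's actual API:

```python
# Classes assumed static in urban scenes (illustrative list).
STATIC_CLASSES = {"road", "building", "vegetation", "pole", "traffic sign"}

def filter_static_features(features, semantic_map):
    """Keep only feature points that land on static scene classes.

    features:     iterable of (u, v) pixel coordinates.
    semantic_map: 2D structure where semantic_map[v][u] is the
                  pixel-level class label (e.g. from a segmentation
                  network).
    Points on dynamic participants (trucks, buses, cars, ...) are
    dropped so they cannot corrupt the motion estimate.
    """
    return [(u, v) for (u, v) in features
            if semantic_map[v][u] in STATIC_CLASSES]
```

After filtering, both the feature-based and the alignment-based terms of the optimization operate only on cues presumed to move with the static world.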


2020 ◽  
pp. 67-73
Author(s):  
N.D. Yusubov ◽  
G.M. Abbasova

The accuracy of two-tool machining on automatic lathes is analyzed. Full-factor models of distortions and scattering fields of the performed dimensions, taking into account the flexibility of the technological system in six degrees of freedom, i.e. angular displacements in the technological system, were used in the research. Possibilities of design and control of a two-tool adjustment are considered.

Keywords: turning processing, cutting mode, two-tool setup, full-factor model, accuracy, angular displacement, control, calculation


Sensors ◽  
2021 ◽  
Vol 21 (11) ◽  
pp. 3740
Author(s):  
Olafur Oddbjornsson ◽  
Panos Kloukinas ◽  
Tansu Gokce ◽  
Kate Bourne ◽  
Tony Horseman ◽  
...  

This paper presents the design, development and evaluation of a unique non-contact instrumentation system that can accurately measure the interface displacement between two rigid components in six degrees of freedom. The system was developed to allow measurement of the relative displacements between interfaces within a stacked column of brick-like components, with an accuracy of 0.05 mm and 0.1 degrees. The columns comprised up to 14 components, with each component being a scale model of a graphite brick within an Advanced Gas-cooled Reactor core. A set of 585 of these columns makes up the Multi Layer Array, which was designed to investigate the response of the reactor core to seismic inputs, with excitation levels up to 1 g from 0 to 100 Hz. The nature of the application required a compact and robust design capable of accurately recording fully coupled motion in all six degrees of freedom during dynamic testing. The novel design implemented 12 Hall effect sensors with a calibration procedure based on system identification techniques. The measurement uncertainty was ±0.050 mm for displacement and ±0.052 degrees for rotation, and the system can tolerate loss of data from two sensors with the uncertainty increasing to only 0.061 mm in translation and 0.088 degrees in rotation. The system has been deployed in a research programme that has enabled EDF to present seismic safety cases to the Office for Nuclear Regulation, resulting in life extension approvals for several reactors. The measurement system developed could be readily applied to other situations where the imposed level of stress at the interface causes negligible material strain, and accurate non-contact six-degree-of-freedom interface measurement is required.
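The redundancy argument, twelve sensor channels over-determining six degrees of freedom so that the system tolerates the loss of two sensors, can be illustrated with a linearized calibration model. The linear map is a simplifying assumption for illustration only; the abstract does not spell out the actual system-identification procedure:

```python
import numpy as np

def calibrate(poses, readings):
    """Fit a linear sensor model: readings ~= poses @ M.T.

    poses:    (m, 6) reference displacements from calibration runs.
    readings: (m, 12) corresponding Hall-sensor outputs.
    Returns the sensor matrix M with shape (12, 6).
    """
    Mt, *_ = np.linalg.lstsq(poses, readings, rcond=None)
    return Mt.T

def recover_pose(reading, M, active=None):
    """Least-squares 6-DOF pose from one 12-channel reading.

    `active` masks out failed sensors; because 12 channels
    over-determine the 6 unknowns, dropping two rows still leaves
    a well-posed least-squares problem.
    """
    if active is None:
        active = np.ones(M.shape[0], dtype=bool)
    pose, *_ = np.linalg.lstsq(M[active], reading[active], rcond=None)
    return pose
```

With all twelve channels the least-squares solution also averages down sensor noise, which is consistent with the graceful degradation the abstract reports when two channels are lost.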

