Multi-sensor localization - Visual Odometry as a low cost proprioceptive sensor

Author(s):  
Adrien Bak ◽  
Dominique Gruyer ◽  
Samia Bouchafa ◽  
Didier Aubert
2017 ◽  
Vol 14 (03) ◽  
pp. 1750006
Author(s):  
Xin Wang ◽  
Pieter Jonker

Using active vision to perceive their surroundings instead of passively receiving information, humans develop the ability to explore unknown environments. Research on humanoid robot active vision already has half a century of history; it covers comprehensive research areas and many studies have been carried out. Nowadays, the trend is to use a stereo setup or a Kinect with neck movements to realize active vision. However, human perception combines eye and neck movements. This paper presents an advanced active vision system that works in a similar way to human vision. The main contributions are: a set of controllers that mimic eye and neck movements, including saccade, pursuit, vestibulo-ocular reflex and vergence eye movements; an adaptive selection mechanism that automatically chooses an optimal tracking algorithm based on the properties of objects; and a novel Multimodal Visual Odometry Perception method that combines stereopsis and convergence, enabling robots to perform both precise action in action space and scene exploration in personal space. Experimental results demonstrate the effectiveness and robustness of the system. Moreover, the system meets real-time constraints with low-cost cameras and motors, providing an affordable solution for industrial applications.
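The vergence movements mentioned above relate camera convergence to the depth of the fixated object. As a minimal sketch (not the paper's implementation), assuming a symmetric stereo head where both cameras rotate inward by half the total vergence angle, the fixation depth follows from simple triangulation:

```python
import math

def vergence_depth(baseline_m: float, vergence_rad: float) -> float:
    """Depth of the fixation point for a symmetric stereo head.

    Both cameras rotate inward by half the total vergence angle, so the
    optical axes intersect at depth (baseline / 2) / tan(vergence / 2).
    """
    return (baseline_m / 2.0) / math.tan(vergence_rad / 2.0)

def vergence_for_depth(baseline_m: float, depth_m: float) -> float:
    """Inverse mapping: total vergence angle needed to fixate at a depth."""
    return 2.0 * math.atan((baseline_m / 2.0) / depth_m)

# A 10 cm baseline fixating at 1 m needs roughly 5.7 degrees of vergence;
# the two mappings are exact inverses of each other.
theta = vergence_for_depth(0.10, 1.0)
assert abs(vergence_depth(0.10, theta) - 1.0) < 1e-9
```

A vergence controller can servo the cameras until the tracked object sits at zero disparity, then read depth from the commanded angle.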


2020 ◽  
Vol 58 (1) ◽  
pp. 57-75
Author(s):  
Mario Kučić ◽  
Marko Valčić

Typically, ships are designed for open-sea navigation, and thus research on autonomous ships mostly targets that particular area. This paper explores the possibility of using low-cost sensors for localization inside a small navigation area. The localization system is based on technology used for developing autonomous cars. Its main part is visual odometry using stereo cameras, fused with Inertial Measurement Unit (IMU) data through Kalman and particle filters, to obtain decimetre-level accuracy inside a basin under different surface conditions. The visual odometry uses cropped frames from the stereo cameras and the Good Features to Track algorithm to extract features, obtaining a depth for each feature that is then used to estimate the ship model's movement. Experimental results showed that the proposed system can localize itself with decimetre accuracy, implying that ships could realistically use visual odometry for autonomous navigation on narrow waterways, which can have a significant impact on future transportation.
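The per-feature depths that drive the motion estimate come from stereo triangulation. A minimal sketch, assuming a rectified stereo pair with focal length in pixels and horizontal disparity (function and variable names are illustrative, not from the paper):

```python
def stereo_depths(f_px, baseline_m, left_pts, right_pts):
    """Triangulate depth for matched features in a rectified stereo pair.

    Disparity is the horizontal shift of a feature between the left and
    right image; depth follows from z = f * B / d. Features with
    non-positive disparity are dropped as mismatches.
    """
    depths = []
    for (xl, yl), (xr, yr) in zip(left_pts, right_pts):
        d = xl - xr                      # disparity in pixels
        if d > 0:
            depths.append(f_px * baseline_m / d)
    return depths

# A feature shifted by 20 px, with f = 700 px and a 0.12 m baseline,
# triangulates to about 4.2 m.
print(stereo_depths(700.0, 0.12, [(320.0, 240.0)], [(300.0, 240.0)]))
```

In practice the feature pairs would come from a tracker such as OpenCV's `goodFeaturesToTrack` followed by matching between the cropped frames.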


Author(s):  
B. Leroux ◽  
J. Cali ◽  
J. Verdun ◽  
L. Morel ◽  
H. He

Airborne LiDAR systems require Direct Georeferencing (DG) in order to compute the coordinates of a surveyed point in the mapping frame. A UAV platform is no exception, but its payload has to be lighter than that installed on board a conventional aircraft, so manufacturers need alternatives to heavy sensors and navigation systems. For georeferencing these data, a possible solution is to replace the Inertial Measurement Unit (IMU) with a camera and record the optical flow. The frames are then processed by photogrammetry to extract the External Orientation Parameters (EOP) and, therefore, the path of the camera. The major advantages of this method, called Visual Odometry (VO), are its low cost, the absence of IMU-induced drift, and the option of using Ground Control Points (GCPs) as in airborne photogrammetry surveys. In this paper we present a test bench designed to assess the reliability and accuracy of the attitude estimated from VO outputs. The test bench consists of a trolley carrying a GNSS receiver, an IMU sensor and a camera. The LiDAR is replaced by a tacheometer in order to survey control points whose coordinates are already known. We have also developed a methodology, applied to this test bench, for calibrating the external parameters and computing the surveyed point coordinates. Several tests revealed a difference of about 2–3 centimetres between the measured control point coordinates and the known ones.
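Direct georeferencing combines the GNSS position, the platform attitude, the antenna-to-sensor lever arm, and the range measurement into mapping-frame coordinates. A minimal sketch under simplifying assumptions (attitude reduced to yaw only; names are illustrative, not the paper's calibration model):

```python
import math

def rot_z(yaw):
    """Rotation matrix for a yaw angle about the vertical axis."""
    c, s = math.cos(yaw), math.sin(yaw)
    return [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]

def mat_vec(m, v):
    return [sum(m[i][j] * v[j] for j in range(3)) for i in range(3)]

def georeference(p_gnss, yaw, lever_arm, r_sensor):
    """Direct georeferencing of one range measurement.

    The mapping-frame coordinates of the surveyed point are the GNSS
    antenna position plus the attitude-rotated sum of the lever arm
    (antenna-to-sensor offset) and the sensor-frame range vector.
    """
    offset = [la + rs for la, rs in zip(lever_arm, r_sensor)]
    rotated = mat_vec(rot_z(yaw), offset)
    return [p + r for p, r in zip(p_gnss, rotated)]

# Platform at the origin heading 90 degrees, sensor ranging 2 m ahead:
# the point lands roughly 2.1 m to the side, 0.2 m below the antenna.
print(georeference([0.0, 0.0, 0.0], math.pi / 2, [0.1, 0.0, -0.2], [2.0, 0.0, 0.0]))
```

The VO pipeline described above supplies the attitude (here collapsed to `yaw`); errors in that attitude propagate directly into the surveyed coordinates, which is exactly what the test bench quantifies.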


Author(s):  
M. M. Nawaf ◽  
J.-M. Boï ◽  
D. Merad ◽  
J.-P. Royer ◽  
P. Drap

This paper details the hardware and software design and realization of a hand-held embedded stereo system for underwater imaging. The designed system runs most image processing techniques smoothly in real time. The developed functions provide direct visual feedback on the quality of the captured images, which helps the operator take appropriate action regarding movement speed and lighting conditions. The proposed functionalities can easily be customized or upgraded, and new functions can be added thanks to the supported libraries. Furthermore, by connecting the system to a more powerful computer, real-time visual odometry can be run on the captured images to obtain live navigation and a site coverage map. We use a visual odometry method adapted to systems with low computational resources and long autonomy. The system was tested in a real context and showed its robustness and promising prospects.
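One common way to give the operator feedback on image quality is a sharpness metric: the variance of the Laplacian drops when a frame is motion-blurred or poorly lit. A minimal sketch of that idea (the paper does not specify its metric, so this is an assumed illustration):

```python
def sharpness_score(img):
    """Variance of a 4-neighbour Laplacian over a grayscale image.

    Low variance indicates blur (e.g. moving the rig too fast under
    water), prompting the operator to slow down or adjust the lighting.
    `img` is a list of rows of pixel intensities in [0, 255].
    """
    h, w = len(img), len(img[0])
    responses = []
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            lap = (img[y-1][x] + img[y+1][x] + img[y][x-1] + img[y][x+1]
                   - 4 * img[y][x])
            responses.append(lap)
    mean = sum(responses) / len(responses)
    return sum((r - mean) ** 2 for r in responses) / len(responses)

flat = [[128] * 8 for _ in range(8)]                            # no detail
edges = [[0 if x < 4 else 255 for x in range(8)] for _ in range(8)]
assert sharpness_score(flat) == 0.0
assert sharpness_score(edges) > sharpness_score(flat)
```

On the embedded device this would run per frame, lighting an on-screen indicator when the score falls below a calibrated threshold.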


2018 ◽  
Vol 06 (04) ◽  
pp. 221-230
Author(s):  
Dayang Nur Salmi Dharmiza Awang Salleh ◽  
Emmanuel Seignez

Accurate localization is a key component of intelligent vehicle navigation. With rapid development, especially in urban areas, the growing number of high-rise buildings creates urban canyons, and road networks have become more complex. These factors degrade vehicle navigation performance, particularly when the Global Positioning System (GPS) signal is poor. It is therefore essential to develop a perceptive localization system to overcome this problem. This paper proposes a localization approach that exploits the advantages of Visual Odometry (VO) in low-cost data fusion to reduce vehicle localization error and improve its response rate in path selection. The data are sourced from a camera as visual sensor, a low-cost GPS receiver and the free digital map from OpenStreetMap. These data are fused by a particle filter (PF), where our method estimates a curvature similarity score between the VO trajectory curve and candidate ways extracted from the map. We evaluate the robustness of the proposed approach against three types of GPS error: random noise, biased noise and GPS signal loss, in an instance of ambiguous road decision. Our results show that the method detects and selects the correct path simultaneously, contributing to swift path planning.
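The curvature similarity idea can be sketched by comparing the turning-angle profiles of the VO trajectory and each candidate way; the exact score used in the paper is not given, so the formulation below is an assumed illustration:

```python
import math

def turning_angles(path):
    """Signed heading change at each interior vertex of a 2D polyline."""
    angles = []
    for i in range(1, len(path) - 1):
        h0 = math.atan2(path[i][1] - path[i-1][1], path[i][0] - path[i-1][0])
        h1 = math.atan2(path[i+1][1] - path[i][1], path[i+1][0] - path[i][0])
        d = h1 - h0
        angles.append(math.atan2(math.sin(d), math.cos(d)))  # wrap to [-pi, pi]
    return angles

def curvature_similarity(vo_path, way):
    """Score in (0, 1]; 1 means identical turning profiles.

    Compares the VO trajectory's turning angles with a candidate way
    from the map; the candidate with the highest score is the road the
    vehicle most likely followed.
    """
    a, b = turning_angles(vo_path), turning_angles(way)
    n = min(len(a), len(b))
    err = sum(abs(x - y) for x, y in zip(a[:n], b[:n])) / n
    return 1.0 / (1.0 + err)

straight = [(0, 0), (1, 0), (2, 0), (3, 0)]
left_turn = [(0, 0), (1, 0), (2, 1), (2, 2)]
vo = [(0, 0), (1.0, 0.05), (2.0, 1.0), (2.1, 2.0)]   # noisy left turn
assert curvature_similarity(vo, left_turn) > curvature_similarity(vo, straight)
```

Inside the particle filter, such a score would weight the particles assigned to each candidate way at an ambiguous junction.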


Author(s):  
Roman Lesjak ◽  
Agnes Rita Koller ◽  
Manfred Klopschitz ◽  
Ulrike Kleb ◽  
Gordana Djuras ◽  
...  
Keyword(s):  
Low Cost ◽  

2019 ◽  
Vol 11 (18) ◽  
pp. 2139
Author(s):  
Ke Wang ◽  
Xin Huang ◽  
JunLan Chen ◽  
Chuan Cao ◽  
Zhoubing Xiong ◽  
...  

We present a novel low-cost visual odometry method for estimating the ego-motion (self-motion) of ground vehicles by detecting the changes that motion induces on the images. Unlike traditional localization methods that use differential global positioning system (GPS), a precise inertial measurement unit (IMU) or 3D Lidar, the proposed method leverages only data from inexpensive forward and backward onboard cameras. Starting with spatial-temporal synchronization, the scale factor of the backward monocular visual odometry is estimated by MSE optimization over a sliding window. Then, for trajectory estimation, an improved two-layer Kalman filter is proposed, consisting of an orientation fusion step and a position fusion step. In the orientation fusion step, we use the trajectory error space, represented by unit quaternions, as the state of the filter. The resulting system enables high-accuracy, low-cost ego-pose estimation and is robust to camera module degradation, automatically reducing the confidence of a failed sensor in the fusion pipeline. It can therefore operate in the presence of complex and highly dynamic motion, such as entering and exiting tunnels, texture-less scenes, illumination changes, bumpy roads, and even the failure of one of the cameras. The experiments carried out in this paper show that our algorithm achieves the best performance on the evaluation indexes of average error in distance (AED), average error in X direction (AEX), average error in Y direction (AEY) and root mean square error (RMSE) compared with other state-of-the-art algorithms, indicating that the output of our approach is superior to other methods.
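Estimating a monocular scale factor by MSE minimization over a sliding window has a closed-form solution: for metric steps m_i (from the forward odometry) and up-to-scale steps u_i (from the backward monocular odometry), the scale minimizing Σ(m_i − s·u_i)² is s = Σm_i·u_i / Σu_i². A minimal sketch under that assumption (window size and names are illustrative, not from the paper):

```python
from collections import deque

class ScaleEstimator:
    """Sliding-window least-squares scale for monocular VO.

    Per frame we receive a metric step length and the corresponding
    up-to-scale monocular step length; the scale minimizing the squared
    error over the window is sum(m * u) / sum(u * u).
    """
    def __init__(self, window=20):
        self.pairs = deque(maxlen=window)

    def update(self, metric_step, unscaled_step):
        self.pairs.append((metric_step, unscaled_step))
        num = sum(m * u for m, u in self.pairs)
        den = sum(u * u for _, u in self.pairs)
        return num / den if den > 0 else 1.0

# Monocular steps consistently half the metric ones -> scale converges to 2.
est = ScaleEstimator()
for m, u in [(1.0, 0.5), (2.0, 1.0), (0.9, 0.45)]:
    scale = est.update(m, u)
assert abs(scale - 2.0) < 1e-9
```

The sliding window lets the estimate adapt when the scale drifts, while the closed form keeps the per-frame cost negligible.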


Author(s):  
Erliang Yao ◽  
Hexin Zhang ◽  
Haitao Song ◽  
Guoliang Zhang

Purpose: To realize stable and precise localization in dynamic environments, the authors propose a fast and robust visual odometry (VO) approach with a low-cost Inertial Measurement Unit (IMU). Design/methodology/approach: The proposed VO incorporates the direct method into the indirect method to track features and optimize the camera pose. It initializes the positions of tracked pixels with IMU information, and the tracked pixels are refined by minimizing photometric errors. Due to the small convergence radius of the indirect method, dynamic pixels are rejected. Subsequently, the camera pose is optimized by minimizing reprojection errors. Frames with little dynamic information are selected as keyframes. Finally, local bundle adjustment refines the poses of the keyframes and the positions of 3D points. Findings: The proposed VO approach is evaluated experimentally in dynamic environments with various motion types; it achieves more accurate and stable localization than the conventional approach and also works well in environments with motion blur. Originality/value: The proposed approach fuses the indirect method and the direct method with IMU information, which significantly improves localization in dynamic environments.
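The reprojection error minimized in the pose optimization step can be sketched with a simple pinhole model; this is a generic illustration of the quantity involved, not the paper's solver:

```python
def project(point_cam, f, cx, cy):
    """Pinhole projection of a camera-frame 3D point to pixel coordinates."""
    X, Y, Z = point_cam
    return (f * X / Z + cx, f * Y / Z + cy)

def reprojection_error(points_cam, observations, f, cx, cy):
    """Mean squared reprojection error over a set of tracked features.

    This is the objective the camera-pose optimization (and later the
    local bundle adjustment) drives toward zero; points whose individual
    error stays large are candidates for rejection as dynamic pixels.
    """
    total = 0.0
    for p, (u, v) in zip(points_cam, observations):
        pu, pv = project(p, f, cx, cy)
        total += (pu - u) ** 2 + (pv - v) ** 2
    return total / len(points_cam)

pts = [(0.5, -0.2, 4.0), (-1.0, 0.1, 6.0)]
obs = [project(p, 500.0, 320.0, 240.0) for p in pts]
assert reprojection_error(pts, obs, 500.0, 320.0, 240.0) == 0.0
```

In the full pipeline the optimizer varies the camera pose (which transforms the points into the camera frame) rather than the points themselves.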

