Visual Odometry and Place Recognition Fusion for Vehicle Position Tracking in Urban Environments

Sensors ◽  
2018 ◽  
Vol 18 (4) ◽  
pp. 939 ◽  
Author(s):  
Safa Ouerghi ◽  
Rémi Boutteau ◽  
Xavier Savatier ◽  
Fethi Tlili

Sensors ◽  
2020 ◽  
Vol 20 (15) ◽  
pp. 4126 ◽  
Author(s):  
Taeklim Kim ◽  
Tae-Hyoung Park

Detection and distance measurement using sensors are not always accurate. Sensor fusion compensates for this shortcoming by reducing inaccuracies. This study therefore proposes an extended Kalman filter (EKF) that reflects the distance characteristics of lidar and radar sensors. The characteristics of the lidar and radar over distance were analyzed, and a reliability function was designed to extend the Kalman filter to reflect these distance characteristics. The accuracy of position estimation was improved by identifying the sensor errors as a function of distance. Experiments were conducted using real vehicles, and the method was compared against sensor fusion based on a fuzzy filter, an adaptive measurement-noise filter, and a standard Kalman filter. The experimental results showed that the proposed method produced accurate distance estimations.
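The core idea, a filter whose measurement-noise covariance is scaled by a distance-dependent reliability term, can be sketched as follows. This is a minimal simplified linear Kalman filter, not the paper's EKF, and the `lidar_noise` and `radar_noise` functions are illustrative assumptions rather than the reliability functions fitted in the study.

```python
# Sketch of a Kalman filter whose measurement noise grows with target
# distance. The noise models below are hypothetical, not the paper's fits.
import numpy as np

def lidar_noise(d):
    # Assumed: lidar variance grows slowly with range d (metres).
    return 0.05 + 0.002 * d

def radar_noise(d):
    # Assumed: radar is noisier overall, degrading slightly with range.
    return 0.25 + 0.001 * d

class DistanceAwareKF:
    """Constant-velocity filter over state x = [position, velocity]."""
    def __init__(self, dt=0.1):
        self.x = np.zeros(2)                         # state estimate
        self.P = np.eye(2)                           # state covariance
        self.F = np.array([[1.0, dt], [0.0, 1.0]])   # motion model
        self.Q = np.eye(2) * 1e-3                    # process noise
        self.H = np.array([[1.0, 0.0]])              # position is measured

    def predict(self):
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q

    def update(self, z, noise_fn):
        # Scale measurement noise by the current estimated distance,
        # so less reliable long-range returns are down-weighted.
        R = np.array([[noise_fn(abs(self.x[0]))]])
        y = z - self.H @ self.x                      # innovation
        S = self.H @ self.P @ self.H.T + R
        K = self.P @ self.H.T @ np.linalg.inv(S)     # Kalman gain
        self.x = self.x + (K @ y).ravel()
        self.P = (np.eye(2) - K @ self.H) @ self.P

kf = DistanceAwareKF()
for z_lidar, z_radar in [(10.1, 10.4), (10.6, 10.9), (11.2, 11.1)]:
    kf.predict()
    kf.update(np.array([z_lidar]), lidar_noise)      # lidar update
    kf.update(np.array([z_radar]), radar_noise)      # radar update
print(kf.x)  # fused [position, velocity] estimate
```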


2019 ◽  
Vol 26 (3) ◽  
pp. 243-256 ◽  
Author(s):  
Patrice Delmas ◽  
Trevor Gee

2016 ◽  
Vol 9 (1) ◽  
pp. 82-94 ◽  
Author(s):  
Rahul Gautam ◽  
Harsh Jain ◽  
Mayank Poply ◽  
Rajkumar Jain ◽  
Mukul Anand ◽  
...  

Abstract This paper addresses the problem of indoor localization using visual place recognition, visual odometry, and experience-based localization with a camera. Our main motivation is that, just as a human can recall its past experience, a robot should be able to use its recorded visual memory to determine its location. To date, experience-based localization has been applied in constrained environments such as outdoor roads, where the robot revisits the same set of locations on every traversal. This paper adapts the same technique to wide-open maps such as halls, where the robot is not constrained to specific locations. When the robot is turned on in a room, it first performs visual place recognition using a histogram of oriented gradients (HOG) and a support vector machine (SVM) to predict which room it is in. It then scans its surroundings and uses a nearest-neighbor search over the robot's experience, coupled with visual odometry, for localization. We present the results of our approach tested on a dynamic environment comprising three rooms. The dataset consists of approximately 5000 monocular and 5000 depth images.
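The two stages described above can be illustrated with a short sketch. This is not the authors' pipeline: the descriptor settings, the choice of a linear SVM, the stored pose format, and the synthetic placeholder data are all assumptions, and the visual-odometry refinement the paper couples with the nearest-neighbor match is omitted.

```python
# Sketch of the two stages: HOG + SVM to predict the current room, then
# a nearest-neighbor search over stored "experience" descriptors to
# coarsely localize within it. All data here is synthetic placeholder input.
import numpy as np
from skimage.feature import hog
from sklearn.svm import LinearSVC
from sklearn.neighbors import NearestNeighbors

def describe(image):
    # HOG descriptor of a grayscale frame (resized upstream in practice).
    return hog(image, orientations=9, pixels_per_cell=(16, 16),
               cells_per_block=(2, 2))

# --- Stage 1: room-level place recognition (HOG + linear SVM) ---
rng = np.random.default_rng(0)
train_imgs = rng.random((30, 128, 128))   # placeholder frames
room_labels = np.repeat([0, 1, 2], 10)    # three rooms, ten frames each
X = np.array([describe(im) for im in train_imgs])
room_clf = LinearSVC().fit(X, room_labels)

# --- Stage 2: within-room localization via experience matching ---
# Each stored experience pairs a descriptor with the pose at which it
# was recorded; visual odometry would refine the estimate between matches.
experience_desc = X[:10]                  # experiences for room 0
experience_pose = rng.random((10, 3))     # placeholder (x, y, yaw) poses
nn = NearestNeighbors(n_neighbors=1).fit(experience_desc)

query = describe(rng.random((128, 128)))  # frame seen at wake-up
room = room_clf.predict([query])[0]       # which room am I in?
_, idx = nn.kneighbors([query])           # closest stored experience
pose = experience_pose[idx[0, 0]]         # coarse pose estimate
print(room, pose)
```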

