Robot Localization Problem
Recently Published Documents


TOTAL DOCUMENTS: 10 (five years: 2)

H-INDEX: 3 (five years: 0)

2021. Author(s): O. Arda Vanli, Clark N. Taylor

Abstract: This paper presents a method to estimate the covariances of the inputs in a factor-graph formulation for localization under non-line-of-sight conditions. A general solution, based on covariance estimation and M-estimators in linear regression, is presented; it is shown to give unbiased estimators of multiple variances and to be robust against outliers. An iteratively re-weighted least squares (IRLS) algorithm is proposed to jointly compute the proposed variance estimators and the state estimates in the non-linear factor-graph optimization. The efficacy of the method is illustrated in a simulation study of a robot localization problem under various process models, measurement models, and measurement-outlier scenarios. A case study involving Global Positioning System (GPS) based localization in an urban environment, with data containing multipath problems, demonstrates the application of the proposed technique.
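The abstract builds on the generic IRLS idea: alternate between solving a weighted least-squares problem and re-computing robust weights from the residuals so that outliers are downweighted. A minimal NumPy sketch of that idea on a plain linear regression follows; the Huber weight function, the synthetic data, and all parameter values here are illustrative assumptions, not the paper's factor-graph estimator.

```python
import numpy as np

def irls(A, b, delta=1.0, iters=50, tol=1e-8):
    """Iteratively re-weighted least squares with Huber-style weights.

    Residuals smaller than `delta` keep full weight 1; larger residuals
    are downweighted by delta/|r|, which bounds an outlier's influence.
    """
    x = np.linalg.lstsq(A, b, rcond=None)[0]      # ordinary LS start
    for _ in range(iters):
        r = b - A @ x
        w = np.where(np.abs(r) <= delta, 1.0, delta / np.abs(r))
        W = np.diag(w)
        x_new = np.linalg.solve(A.T @ W @ A, A.T @ W @ b)
        if np.linalg.norm(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Fit y = a*x + c to data corrupted by one gross (NLOS-style) outlier.
rng = np.random.default_rng(0)
xs = np.linspace(0.0, 10.0, 30)
ys = 2.0 * xs + 1.0 + 0.05 * rng.standard_normal(30)
ys[5] += 50.0                                     # the outlier
A = np.column_stack([xs, np.ones_like(xs)])
a, c = irls(A, ys)                                # slope, intercept
```

Despite the single 50-unit outlier, the recovered slope and intercept stay close to the true values 2 and 1, whereas an ordinary least-squares fit would be visibly skewed.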


Sensors, 2021, Vol. 21(5), p. 1772. Author(s): Gengyu Ge, Yi Zhang, Qin Jiang, Wei Wang

Localization, i.e., estimating the position and orientation of a robot in an asymmetrical environment, has been solved by various 2D laser rangefinder simultaneous localization and mapping (SLAM) approaches. Laser-based SLAM generates an occupancy grid map, and the popular Monte Carlo Localization (MCL) method then spreads particles on that map and computes the robot's pose with a probabilistic algorithm. However, this can fail, especially in symmetrical environments, because the landmarks or features may be insufficient to determine the robot's orientation; the position may also be ambiguous if the robot is not at the geometric center. This paper presents a novel, visual-feature-assisted approach to the robot localization problem in a symmetrical environment: laser range measurements estimate the robot's position, while visual features determine its orientation. First, we convert the raw laser range scan into coordinate data and compute the geometric center. Second, we compute the distance from the geometric center to every scan endpoint and find the longest such distances. We then compare those distances, fit lines, extract corner points, and measure the distance between adjacent corner points to decide whether the environment is symmetrical. Finally, if the environment is symmetrical, visual features based on the ORB keypoint detector and descriptor are added to the system to determine the robot's orientation. Experimental results show that our approach successfully determines the robot's position in a symmetrical environment, where ordinary MCL and its extensions consistently fail.
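The first two steps of the pipeline (scan-to-coordinates conversion, geometric center, and center-to-endpoint distances) can be sketched in a few lines of NumPy. The square-room scan below is synthetic and the functions are a simplified reading of the abstract, not the authors' implementation; the corner fitting and ORB stages are omitted.

```python
import numpy as np

def scan_to_points(ranges, angles):
    """Convert raw laser ranges and bearings to Cartesian endpoints."""
    return np.column_stack([ranges * np.cos(angles),
                            ranges * np.sin(angles)])

def center_distances(points):
    """Geometric center of the endpoints and each endpoint's distance
    to it; comparing the longest of these distances is the first cue
    for detecting a symmetrical room."""
    center = points.mean(axis=0)
    dists = np.linalg.norm(points - center, axis=1)
    return center, dists

# A square room (half-width 1) seen from its middle: every wall looks
# the same, so the longest center-to-endpoint distances (the corners)
# come out equal -- exactly the ambiguity the visual features resolve.
angles = np.linspace(-np.pi, np.pi, 360, endpoint=False)
ranges = 1.0 / np.maximum(np.abs(np.cos(angles)), np.abs(np.sin(angles)))
pts = scan_to_points(ranges, angles)
center, dists = center_distances(pts)
```

For this scan the center lands at the origin, the shortest center-to-endpoint distance is 1 (wall midpoints), and the longest is sqrt(2) at all four corners, so range data alone cannot disambiguate the robot's heading.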


2010, Vol. 29(3-4), pp. 235-251. Author(s): Mauro Boccadoro, Francesco Martinelli, Stefano Pagnottelli

2009, Vol. 39(2), pp. 461-490. Author(s): Sven Koenig, Joseph S. B. Mitchell, Apurva Mudgal, Craig Tovey

2007, Vol. 24(3), pp. 267-283. Author(s): Arnaud Clérentin, Mélanie Delafosse, Laurent Delahoche, Bruno Marhic, Anne-Marie Jolly-Desodt

2004, Vol. 37(8), pp. 394-399. Author(s): Mélanie Delafosse, Arnaud Clerentin, Laurent Delahoche, Eric Brassart, Bruno Marhic

1997, Vol. 26(4), pp. 1120-1138. Author(s): Leonidas J. Guibas, Rajeev Motwani, Prabhakar Raghavan
