perceptual aliasing: Recently Published Documents

TOTAL DOCUMENTS: 25 (five years: 8)
H-INDEX: 4 (five years: 2)
2020 ◽  
Vol 10 (19) ◽  
pp. 6829
Author(s):  
Song Xu ◽  
Huaidong Zhou ◽  
Wusheng Chou

Conventional approaches to global localization and navigation mainly rely on metric maps to provide precise geometric coordinates, which may cause large-scale structural ambiguity and lack semantic information about the environment. This paper presents a scalable vision-based topological mapping and navigation method that enables a mobile robot to work robustly and flexibly in large-scale environments. For vision-based topological navigation, an image-based Monte Carlo localization method is presented that realizes global topological localization through image retrieval, in which fine-tuned local region features from an object detection convolutional neural network (CNN) are adopted for image matching. The combination of image retrieval and Monte Carlo localization gives the robot the ability to effectively avoid perceptual aliasing. Additionally, we propose an effective visual localization method that simultaneously employs the global and local CNN features of images to construct a discriminative representation of the environment, which makes the navigation system more robust to occlusion, translation, and illumination changes. Extensive experimental results demonstrate that ERF-IMCS exhibits great performance in the robustness and efficiency of navigation.
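The image-retrieval-based Monte Carlo scheme described in this abstract can be sketched as a particle filter over the nodes of a topological graph, with retrieval similarity playing the role of the measurement likelihood. The graph, the similarity scores, and all function names below are illustrative stand-ins, not the paper's implementation.

```python
import random

# Illustrative topological map: nodes are places, edges are traversable links.
GRAPH = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}

def retrieval_likelihood(node, query_scores):
    # Stand-in for CNN-feature image retrieval: similarity of the current
    # query image to the keyframes stored at this node.
    return query_scores.get(node, 1e-3)

def mcl_step(particles, query_scores, rng):
    """One predict-update-resample cycle over graph nodes."""
    # Predict: each particle either stays put or moves to a random neighbour.
    moved = [rng.choice(GRAPH[p] + [p]) for p in particles]
    # Update: weight each particle by the retrieval similarity at its node.
    weights = [retrieval_likelihood(p, query_scores) for p in moved]
    # Resample particles in proportion to their weights.
    return rng.choices(moved, weights=weights, k=len(moved))

rng = random.Random(0)
particles = [0, 1, 2, 3] * 25           # uniform prior over the four nodes
scores = {2: 0.9, 1: 0.1}               # query image strongly matches node 2
for _ in range(5):
    particles = mcl_step(particles, scores, rng)
# After a few updates the particle set concentrates on node 2, which is how
# the filter disambiguates places that look alike from a single image.
```

Because the belief is carried by many particles, a single aliased image match cannot teleport the estimate; only repeated, consistent retrieval evidence moves the mass.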


2020 ◽  
Vol 39 (10-11) ◽  
pp. 1201-1221
Author(s):  
Yulun Tian ◽  
Katherine Liu ◽  
Kyel Ok ◽  
Loc Tran ◽  
Danette Allen ◽  
...  

We present a multi-robot system for GPS-denied search and rescue under the forest canopy. Forests are particularly challenging environments for collaborative exploration and mapping, largely due to severe perceptual aliasing, which hinders reliable loop closure detection for mutual localization and map fusion. Our proposed system features unmanned aerial vehicles (UAVs) that perform onboard sensing, estimation, and planning. When communication is available, each UAV transmits compressed tree-based submaps to a central ground station for collaborative simultaneous localization and mapping (CSLAM). To overcome high measurement noise and perceptual aliasing, we use the local configuration of a group of trees as a distinctive feature for robust loop closure detection. Furthermore, we propose a novel procedure based on cycle-consistent multiway matching to recover from incorrect pairwise data associations. The returned global data association is guaranteed to be cycle consistent, and is shown to improve both precision and recall compared with the input pairwise associations. The proposed multi-UAV system is validated both in simulation and during real-world collaborative exploration missions at NASA Langley Research Center.
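The cycle-consistency idea can be illustrated with a minimal check: composing pairwise landmark associations around a loop of robots must return the identity map, otherwise at least one pairwise association is wrong. The dict-based matching format below is an assumption for illustration, not the paper's multiway matching algorithm.

```python
def compose(f, g):
    # Compose two partial matchings given as dicts: returns g after f.
    return {a: g[b] for a, b in f.items() if b in g}

def is_cycle_consistent(matches, cycle):
    # matches[(i, j)] maps landmark ids of robot i to landmark ids of robot j.
    # Walk the cycle (e.g. 0 -> 1 -> 2 -> 0), composing associations; a
    # consistent loop must map every landmark back to itself.
    acc = {a: a for a in matches[(cycle[0], cycle[1])]}
    for i, j in zip(cycle, cycle[1:] + cycle[:1]):
        acc = compose(acc, matches[(i, j)])
    return all(a == b for a, b in acc.items())

# Hypothetical associations between three robots' tree landmarks.
good = {(0, 1): {0: 10, 1: 11}, (1, 2): {10: 20, 11: 21}, (2, 0): {20: 0, 21: 1}}
bad  = {(0, 1): {0: 10, 1: 11}, (1, 2): {10: 20, 11: 21}, (2, 0): {20: 1, 21: 0}}
```

A multiway matcher goes further than this boolean check by repairing the associations so every cycle closes, but the invariant it enforces is exactly the one tested here.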


2020 ◽  
pp. 027836492091048
Author(s):  
Mathieu Nowakowski ◽  
Cyril Joly ◽  
Sébastien Dalibard ◽  
Nicolas Garcia ◽  
Fabien Moutarde

This article introduces an indoor topological localization algorithm that uses vision and Wi-Fi signals. Its main contribution is a novel way of merging data from these sensors. The designed system does not require knowledge of the building plan or the positions of the Wi-Fi access points. By making the Wi-Fi signature suited to the FABMAP algorithm, this work develops an early fusion framework that solves the global localization and kidnapped robot problems. The resulting algorithm was tested and compared with FABMAP visual localization on data acquired by a Pepper robot in three different environments: an office building, a middle school, and a private apartment. Numerous runs with different robots were carried out over several months, for a total covered distance of 6.4 km. Constraints were applied during acquisition so that the experiments fit real use cases of Pepper robots. Without any tuning, our early fusion framework outperforms visual localization in all testing situations, and by a significant margin in environments where vision faces problems such as moving objects or perceptual aliasing. In such conditions, 90.6% of estimated localizations are less than 5 m from ground truth with our early fusion framework, compared with 77.6% for visual localization. Furthermore, compared with other classical fusion strategies, the early fusion framework produces the best localization results because, in all tested situations, it improves visual localization without degrading it where Wi-Fi signals carry little information.
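The early fusion idea of making the Wi-Fi signature "suited to FABMAP" can be sketched as quantizing each access point's signal strength into discrete words that are appended to the visual bag-of-words before place matching, so both modalities pass through the same appearance model. The RSSI thresholds, word format, and function names below are hypothetical.

```python
def wifi_signature_words(rssi, n_levels=4, rssi_min=-90.0, rssi_max=-30.0):
    # Quantize each access point's RSSI (in dBm) into one of n_levels bins
    # and emit a discrete "word" per AP, analogous to a visual vocabulary word.
    words = []
    step = (rssi_max - rssi_min) / n_levels
    for ap, level in rssi.items():
        idx = min(n_levels - 1, max(0, int((level - rssi_min) // step)))
        words.append(f"wifi:{ap}:{idx}")
    return words

def fused_bow(visual_words, rssi):
    # Early fusion: one combined bag-of-words fed to the place recognizer.
    return visual_words + wifi_signature_words(rssi)
```

Because fusion happens before matching, places that look identical to the camera still produce different fused signatures whenever their Wi-Fi environments differ, which is precisely what defeats visual perceptual aliasing.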


Sensors ◽  
2019 ◽  
Vol 19 (15) ◽  
pp. 3331 ◽  
Author(s):  
Li ◽  
Meng ◽  
Xie ◽  
Zhang ◽  
Huang ◽  
...  

In real-world robotic navigation, ambiguous environments containing symmetrical or featureless areas may cause perceptual aliasing of external sensors. As a result, uncorrected localization errors accumulate during the localization process, making it difficult to localize a robot in such situations. Using an ambiguity grid map (AGM), we address this problem by proposing a novel probabilistic localization method, referred to as AGM-based adaptive Monte Carlo localization. The AGM can evaluate environmental ambiguity via an average ambiguity error and estimate the possible localization error at a given pose. Benefiting from the constructed AGM, our localization method is derived from an improved dynamic Bayes network to reason about the robot's pose as well as the accumulated localization error. Moreover, a portal motion model is presented to achieve more reliable pose prediction without a time-consuming implementation, so that the accumulated localization error can be corrected immediately when the robot moves through an ambiguous area. Simulation and real-world experiments demonstrate that the proposed method improves localization reliability while maintaining efficiency in ambiguous environments.
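A minimal sketch of how an ambiguity grid map might feed a Monte Carlo filter: each cell stores an ambiguity score, and the motion model's noise is inflated where ambiguity is high, so the filter explicitly represents the faster error accumulation inside featureless areas. The grid values, the noise scaling factor, and the function names are assumptions for illustration.

```python
import random
from statistics import pstdev

# Toy ambiguity grid map (AGM): each cell stores an ambiguity score in [0, 1];
# the right half plays the role of a featureless corridor (high ambiguity).
AGM = [
    [0.1, 0.1, 0.8, 0.8],
    [0.1, 0.1, 0.8, 0.8],
]

def ambiguity_at(x, y):
    return AGM[int(y)][int(x)]

def predict(particle, dx, dy, rng, base_sigma=0.05):
    # Motion update whose noise grows with local ambiguity, so pose
    # uncertainty accumulates faster inside ambiguous areas.
    x, y = particle
    sigma = base_sigma * (1.0 + 4.0 * ambiguity_at(x, y))
    return (x + dx + rng.gauss(0.0, sigma), y + dy + rng.gauss(0.0, sigma))

rng = random.Random(1)
# Sample the prediction noise at a clear cell and at an ambiguous cell.
spread_clear = pstdev(predict((0.5, 0.5), 0.0, 0.0, rng)[0] for _ in range(2000))
spread_ambig = pstdev(predict((2.5, 0.5), 0.0, 0.0, rng)[0] for _ in range(2000))
```

Keeping the extra spread in the belief (instead of letting the filter become overconfident) is what allows the error to be corrected as soon as the robot exits the ambiguous area and distinctive measurements return.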


2019 ◽  
Vol 4 (2) ◽  
pp. 1232-1239 ◽  
Author(s):  
Pierre-Yves Lajoie ◽  
Siyi Hu ◽  
Giovanni Beltrame ◽  
Luca Carlone

2015 ◽  
Vol 27 (3) ◽  
pp. 293-304 ◽  
Author(s):  
Kousuke Inoue ◽  
Tamio Arai ◽  
Jun Ota ◽  
...  

[Figure: State representation]
In this paper, we propose a method by which an agent can autonomously construct a state representation that achieves state identification with a sufficient Markovian property. The agent does this using a continuous, multi-dimensional observation space in partially observable environments. To deal with the non-Markovian property of the environment, a decision-tree-structured state representation based on past observations and actions is used. This representation is gradually segmented to achieve appropriate state distinction. Because the agent's observation space is not segmented in advance, the agent has to determine the cause of its state representation's insufficiency: (1) insufficient observation-space segmentation, or (2) perceptual aliasing. In the proposed method, the cause is determined through a statistical analysis of past experiences, and the method of state segmentation is decided based on this cause. Results of simulations in two-dimensional grid environments, and of experiments with a real mobile robot navigating in a two-dimensional continuous workspace, show that an agent can successfully acquire navigation behaviors with many hidden states.
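The cause-determination step, deciding whether poor state distinction stems from an under-segmented observation space or from perceptual aliasing, can be sketched as comparing how much each candidate split reduces outcome variance over past experiences. The data format and the simple variance criterion below are illustrative stand-ins for the paper's statistical analysis.

```python
from statistics import pvariance

def best_split(experiences):
    # experiences: list of (prev_action, observation, outcome) tuples.
    # Compare two candidate causes of insufficient state distinction:
    #  - splitting the observation space (under-segmentation), vs.
    #  - splitting on history, here the previous action (perceptual aliasing).
    def within_group_variance(key):
        groups = {}
        for exp in experiences:
            groups.setdefault(key(exp), []).append(exp[2])
        n = len(experiences)
        return sum(len(g) / n * pvariance(g) if len(g) > 1 else 0.0
                   for g in groups.values())

    obs_var = within_group_variance(lambda e: e[1] > 0.5)  # halve observation space
    hist_var = within_group_variance(lambda e: e[0])       # split on previous action
    # Prefer the split that better explains the outcomes (lower residual variance).
    return "observation" if obs_var < hist_var else "history"
```

If outcomes vary with the observation value inside a cell, refining the observation segmentation helps; if identical observations lead to different outcomes depending on history, only a memory-based (history) split can resolve the hidden state.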

