Deep SIMBAD: Active Landmark-based Self-localization Using Similarity-based Scene Descriptor

2021
Author(s):
Kanji Tanaka

Landmark-based robot self-localization has attracted recent research interest as an efficient, maintenance-free approach to visual place recognition (VPR) across domains (e.g., times of day, weather conditions, seasons). However, landmark-based self-localization can be an ill-posed problem for a passive observer (e.g., under manual robot control), as many viewpoints may not provide an effective landmark view. Here, we consider the active self-localization task performed by an active observer and present a novel reinforcement-learning (RL)-based next-best-view (NBV) planner. Our contributions are summarized as follows. (1) SIMBAD-based VPR: We present a landmark-ranking-based compact scene descriptor by introducing a deep-learning extension of similarity-based pattern recognition (SIMBAD). (2) VPR-to-NBV knowledge transfer: We tackle the challenge of RL under uncertainty (i.e., active self-localization) by transferring the VPR's state recognition ability to NBV planning. (3) NNQL-based NBV: We treat the available VPR as an experience database by adapting a nearest-neighbor-based approximation of Q-learning (NNQL). The result is an extremely compact data structure that compresses both the VPR and NBV modules into a single incremental inverted index. Experiments on the public NCLT dataset validate the effectiveness of the proposed approach.
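The nearest-neighbor approximation of Q-learning (NNQL) named in contribution (3) can be illustrated with a minimal sketch: Q-values are read off an experience database by averaging the k nearest stored experiences, rather than from a parametric model. All class and parameter names below are hypothetical, not from the paper's code.

```python
import numpy as np

class NNQL:
    """Minimal nearest-neighbor Q-learning sketch over an experience database."""

    def __init__(self, k=3, alpha=0.5, gamma=0.9):
        self.k, self.alpha, self.gamma = k, alpha, gamma
        self.states, self.actions, self.qvals = [], [], []

    def q(self, state, action):
        """Estimate Q(s, a) by averaging the k nearest stored experiences."""
        idx = [i for i, a in enumerate(self.actions) if a == action]
        if not idx:
            return 0.0
        dists = [np.linalg.norm(state - self.states[i]) for i in idx]
        nearest = sorted(zip(dists, idx))[: self.k]
        return float(np.mean([self.qvals[i] for _, i in nearest]))

    def update(self, state, action, reward, next_state, action_space):
        """Standard Q-learning backup, stored as a new experience tuple."""
        target = reward + self.gamma * max(self.q(next_state, a) for a in action_space)
        old = self.q(state, action)
        self.states.append(np.asarray(state, dtype=float))
        self.actions.append(action)
        self.qvals.append(old + self.alpha * (target - old))
```

Because every update simply appends an experience tuple, the database grows incrementally, which is consistent with the abstract's single incremental inverted index compressing both modules.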

2021
Author(s):
Kanji Tanaka

Training a next-best-view (NBV) planner for active cross-domain self-localization is an important and challenging problem. Unlike typical in-domain settings, the planner can no longer assume that the environment state is constant, but must treat it as a high-dimensional component of the state variable. This study is motivated by the ability of recent visual place recognition (VPR) techniques to recognize such a high-dimensional environment state in the presence of domain shifts. Thus, we wish to transfer the state recognition ability from VPR to NBV. However, such VPR-to-NBV knowledge transfer is a non-trivial issue for which no known solution exists. Here, we propose to use a reciprocal rank feature, derived from the field of transfer learning, as the dark knowledge to transfer. Specifically, our approach is based on two observations: (1) the environment state can be compactly represented by a local map descriptor, which is compatible with the typical input formats (e.g., image, point cloud, graph) of VPR systems, and (2) an arbitrary VPR system (e.g., Bayes filter, image retrieval, deep neural network) can be modeled as a ranking function. Experiments with nearest-neighbor Q-learning (NNQL) show that our approach can obtain a practical NBV planner even under severe domain shifts.
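Observation (2) above, treating any VPR system as a ranking function, suggests a simple form for the reciprocal rank feature: convert the system's raw match scores into the reciprocal of each database entry's rank. The function below is an illustrative sketch of that idea, not the paper's implementation.

```python
import numpy as np

def reciprocal_rank_feature(scores):
    """Map raw VPR scores (higher = better match) to reciprocal ranks.

    scores: 1-D array of similarity scores, one per map entry.
    Returns a vector where the top-ranked entry gets 1.0, the second 1/2, etc.
    """
    scores = np.asarray(scores, dtype=float)
    order = np.argsort(-scores)                       # descending: best match first
    rrf = np.empty_like(scores)
    rrf[order] = 1.0 / (np.arange(len(scores)) + 1.0)  # rank -> reciprocal rank
    return rrf
```

A feature of this form is invariant to the scale of the underlying scores, which is what makes it usable as "dark knowledge" across heterogeneous VPR back-ends (Bayes filter, image retrieval, DNN).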


2020
Vol 30 (3)
pp. 117-124
Author(s):
Georg Kunert
Thorsten Pawlette
Sven Hartmann

2021
Vol 2021
pp. 1-14
Author(s):
Huiqin Li
Yanling Li
Chuan He
Hui Zhang
Jianwei Zhan

Radar working state recognition is the basis of cognitive electronic countermeasures. To address the difficulty that traditional supervised recognition techniques have in obtaining prior information and processing incremental signal data streams, an unsupervised, incremental recognition method is proposed. The method builds its recognition model on a backpropagation (BP) neural network. First, the particle swarm optimization (PSO) algorithm is used to optimize the preference parameter and damping factor of affinity propagation (AP) clustering. The PSO-AP algorithm then clusters unlabeled samples to obtain the best initial clustering results, which are fed as training samples into the BP neural network to train the recognition model, realizing unsupervised recognition. Second, an incremental AP (IAP) algorithm based on the K-nearest neighbor (KNN) idea partitions the incremental samples by computing the closeness between samples. The incremental samples are added to the BP recognition model as a new known state to complete the model update, realizing incremental recognition. Simulation experiments on three types of radar data sets show that the recognition accuracy of the proposed model can exceed 83%, verifying the feasibility and effectiveness of the method. In addition, compared with the AP and K-means algorithms, the improved AP method improves purity, Rand index (RI), and F-measure by 59.4%, 17.6%, and 53.5%, respectively, and its running time is at least 34.8% shorter than that of the AP algorithm. The time needed to process incremental data is greatly reduced, and clustering efficiency is improved. The experimental results show that this method can quickly and accurately identify radar working states, supporting the adaptability and timeliness of cognitive electronic countermeasures.
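The KNN-based incremental step can be sketched as follows: each new unlabeled sample is assigned to an existing cluster when it is close enough to that cluster's exemplar, and otherwise seeds a new "unknown state" that the BP model can later absorb. The distance threshold and metric here are hypothetical choices, standing in for the paper's closeness measure.

```python
import numpy as np

def assign_incremental(sample, exemplars, labels, threshold=1.0):
    """Return the state label for `sample`; append a new exemplar if it is novel.

    exemplars: list of cluster-center vectors found by (PSO-tuned) AP clustering.
    labels:    list of integer state labels, parallel to `exemplars`.
    """
    sample = np.asarray(sample, dtype=float)
    dists = [np.linalg.norm(sample - np.asarray(e)) for e in exemplars]
    if dists and min(dists) <= threshold:
        return labels[int(np.argmin(dists))]      # close to a known working state
    new_label = max(labels, default=-1) + 1       # treat as a new working state
    exemplars.append(sample)
    labels.append(new_label)
    return new_label
```

Because novel samples only extend the exemplar list rather than triggering a full re-clustering, the per-sample cost stays low, which matches the abstract's reported reduction in incremental processing time.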


2016
Vol 9 (1)
pp. 82-94
Author(s):
Rahul Gautam
Harsh Jain
Mayank Poply
Rajkumar Jain
Mukul Anand
...  

Abstract This paper solves the problem of localization in indoor environments using visual place recognition, visual odometry, and experience-based localization with a camera. Our main motivation is that, just as a human can recall past experience, a robot should be able to use its recorded visual memory to determine its location. To date, experience-based localization has been used in constrained environments such as outdoor roads, where the robot is restricted to the same set of locations on every visit. This paper adapts the same technology to wide-open maps such as halls, where the robot is not constrained to specific locations. When the robot is turned on in a room, it first applies visual place recognition, using a histogram of oriented gradients and a support vector machine, to predict which room it is in. It then scans its surroundings and uses a nearest-neighbor search of the robot's experience, coupled with visual odometry, for localization. We present the results of testing our approach in a dynamic environment comprising three rooms. The dataset consists of approximately 5000 monocular and 5000 depth images.
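The experience-based step described above can be sketched minimally: the robot's recorded experiences are (pose, image-descriptor) pairs, and the current view's descriptor is matched by nearest-neighbor search; visual odometry would then refine the returned pose. The plain vector descriptor below is a hypothetical stand-in for the HOG features the paper uses.

```python
import numpy as np

def localize(query_desc, experiences):
    """Nearest-neighbor lookup over recorded experiences.

    experiences: list of (pose, descriptor) pairs from earlier traversals.
    Returns the pose whose stored descriptor is closest to `query_desc`.
    """
    best_pose, best_dist = None, float("inf")
    for pose, desc in experiences:
        d = np.linalg.norm(np.asarray(query_desc) - np.asarray(desc))
        if d < best_dist:
            best_pose, best_dist = pose, d
    return best_pose
```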


2020
Author(s):
Kanji Tanaka

This paper addresses the problem of active visual place recognition (VPR) from the novel perspective of long-term autonomy. In our approach, a next-best-view (NBV) planner plans an optimal action-observation sequence to maximize the expected cost-performance for a visual route classification task. A difficulty arises from the fact that the NBV planner is trained and tested in different domains (times of day, weather conditions, and seasons). Existing NBV methods may be confused and degraded by such domain shifts, and adapting them to a new domain requires significant effort. We address this issue with a novel deep convolutional neural network (DNN)-based NBV planner that requires no such adaptation. Our main contributions are summarized as follows. (1) We present a novel domain-invariant NBV planner specifically tailored to DNN-based VPR. (2) We formulate active VPR as a POMDP and present a feasible solution to its inherent intractability; specifically, the probability distribution vector (PDV) output by the available DNN is used as a domain-invariant observation model without retraining. (3) We verify the efficacy of the proposed approach through challenging cross-season VPR experiments, confirming that it clearly outperforms previous single-view and multi-view VPR methods in terms of VPR accuracy and/or action-observation cost.
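Contribution (2) implies a Bayes-filter-style belief update in which the frozen DNN's probability distribution vector (PDV) plays the role of the observation likelihood. The sketch below illustrates that idea under a hypothetical transition model T; it is an assumption-laden simplification, not the paper's solver.

```python
import numpy as np

def belief_update(belief, pdv, T):
    """One POMDP-style belief update using the PDV as the observation model.

    belief: prior probability over places (1-D array).
    pdv:    softmax output of the frozen VPR DNN for the current observation.
    T:      transition matrix, T[s_prev, s] = P(s | s_prev, action).
    """
    predicted = T.T @ belief      # prediction step under the chosen action
    posterior = pdv * predicted   # PDV used directly as observation likelihood
    return posterior / posterior.sum()
```

Because the PDV is consumed as-is, the DNN never needs retraining when the domain shifts, which is the sense in which the observation model is domain-invariant.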

