DT-Loc: Monocular Visual Localization on HD Vector Map Using Distance Transforms of 2D Semantic Detections

Author(s): Chi Zhang, Hao Liu, Hao Li, Kun Guo, Kuiyuan Yang, ...
2019, Vol 9 (4), pp. 642
Author(s): Xu Xi, Xinchang Zhang, Weidong Liang, Qinchuan Xin, Pengcheng Zhang

Digital watermarking is important for the copyright protection of electronic data, but embedding watermarks into vector maps can easily degrade map precision. Zero-watermarking, a method that does not embed watermarks into the map itself, avoids altering the vector map but often lacks robustness. This study proposes a dual zero-watermarking scheme that improves watermark robustness for two-dimensional (2D) vector maps. The proposed scheme first separates the feature vertices and non-feature vertices of the vector map with the Douglas-Peucker algorithm, then constructs a Delaunay Triangulation Mesh (DTM) to form a topological feature sequence from the feature vertices, as well as a Singular Value Decomposition (SVD) matrix to form an intrinsic feature sequence from the non-feature vertices. Next, zero-watermarks are obtained by performing an exclusive disjunction (XOR) between these feature sequences and the watermark image encrypted with the Arnold scrambling algorithm. Experimental results show that synthesizing both feature and non-feature information improves the watermark capacity. Exploiting the complementary information between feature and non-feature vertices considerably improves the overall robustness of the watermarking scheme. The proposed dual zero-watermarking scheme combines the advantages of the individual watermarking schemes and is robust against attacks such as geometric, vertex, and object attacks.
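To make the XOR construction concrete, below is a minimal Python sketch, assuming an 8x8 binary watermark image, a standard Arnold cat map for scrambling, and a pre-computed binary feature sequence. The function names and toy inputs are placeholders for illustration only; in the paper the feature sequences are derived from the DTM topology and the SVD of the map vertices, not generated randomly.

```python
import numpy as np

def arnold_scramble(img, iterations=5):
    """Scramble a square binary watermark image with the Arnold cat map:
    (x, y) -> ((x + y) mod N, (x + 2y) mod N), applied `iterations` times."""
    n = img.shape[0]
    out = img.copy()
    for _ in range(iterations):
        scrambled = np.empty_like(out)
        for x in range(n):
            for y in range(n):
                scrambled[(x + y) % n, (x + 2 * y) % n] = out[x, y]
        out = scrambled
    return out

def zero_watermark(feature_bits, watermark_img, iterations=5):
    """Build a zero-watermark by XOR-ing a binary feature sequence
    (in the paper, derived from the map's vertices) with the scrambled watermark."""
    scrambled = arnold_scramble(watermark_img, iterations)
    feature = feature_bits.reshape(scrambled.shape)
    return np.bitwise_xor(feature, scrambled)

# Toy usage (placeholder data): 8x8 binary watermark and a pseudo feature sequence.
rng = np.random.default_rng(0)
wm = rng.integers(0, 2, size=(8, 8), dtype=np.uint8)
feat = rng.integers(0, 2, size=64, dtype=np.uint8)
zw = zero_watermark(feat, wm)
# Verification side: XOR the stored zero-watermark with the feature sequence
# recomputed from the suspect map, then invert the Arnold scramble to recover
# the watermark image for comparison.
```

Because nothing is embedded, the zero-watermark is registered with a third party and the map data itself is untouched, which is what preserves map precision.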


2021, Vol 7 (2), pp. 20
Author(s): Carlos Lassance, Yasir Latif, Ravi Garg, Vincent Gripon, Ian Reid

Vision-based localization is the problem of inferring the pose of the camera given a single image. One commonly used approach relies on image retrieval, in which the query image is compared against a database of localized support examples and its pose is inferred from the retrieved items. This assumes that images taken from the same place contain the same landmarks and thus have similar feature representations. These representations can be learned to be robust to variations in capture conditions such as time of day or weather. In this work, we introduce a framework that enhances the performance of such retrieval-based localization methods. It takes into account additional available information, such as GPS coordinates or temporal proximity during image acquisition. More precisely, our method constructs a graph from this additional information and uses it to improve the reliability of the retrieval process by filtering the feature representations of support and/or query images. We show that the proposed method significantly improves localization accuracy on two large-scale datasets, as well as mean average precision in classical image retrieval scenarios.
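As an illustration of the graph-filtering idea, the sketch below builds a proximity graph from GPS positions and smooths image descriptors over their graph neighbours with a simple one-step low-pass filter. The function names, the radius threshold, and the blending formula are assumptions made for demonstration; the paper's actual graph construction and filter may differ.

```python
import numpy as np

def build_proximity_graph(gps, radius=10.0):
    """Adjacency matrix linking images whose GPS positions lie within `radius` metres."""
    dists = np.linalg.norm(gps[:, None, :] - gps[None, :, :], axis=-1)
    adj = (dists < radius).astype(float)
    np.fill_diagonal(adj, 0.0)  # no self-loops
    return adj

def graph_filter(features, adj, alpha=0.5):
    """Low-pass graph filtering: blend each descriptor with the mean of its neighbours."""
    deg = adj.sum(axis=1, keepdims=True)
    deg[deg == 0] = 1.0  # isolated nodes keep their original descriptor
    neighbour_mean = adj @ features / deg
    return (1 - alpha) * features + alpha * neighbour_mean

# Toy usage (placeholder data): 100 support images with 128-D descriptors and 2-D positions.
rng = np.random.default_rng(0)
feats = rng.normal(size=(100, 128))
gps = rng.uniform(0, 50, size=(100, 2))
adj = build_proximity_graph(gps, radius=10.0)
filtered = graph_filter(feats, adj)
# Retrieval then proceeds by nearest-neighbour search between a (possibly filtered)
# query descriptor and the filtered support descriptors.
```

The design intuition is that descriptors of images taken at nearby positions or times should agree, so averaging over graph neighbours suppresses appearance noise (lighting, weather) before retrieval.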

