Cross-Domain Co-Occurring Feature for Visible-Infrared Image Matching

IEEE Access ◽  
2018 ◽  
Vol 6 ◽  
pp. 17681-17698 ◽  
Author(s):  
Jing Li ◽  
Congcong Li ◽  
Tao Yang ◽  
Zhaoyang Lu
2019 ◽  
Vol 11 (19) ◽  
pp. 2243 ◽  
Author(s):  
Weiquan Liu ◽  
Cheng Wang ◽  
Xuesheng Bian ◽  
Shuting Chen ◽  
Wei Li ◽  
...  

Establishing the spatial relationship between 2D images captured by real cameras and 3D models of the environment (2D and 3D space) is one way to achieve virtual–real registration for Augmented Reality (AR) in outdoor environments. In this paper, we propose to match the 2D images captured by real cameras against images rendered from the 3D image-based point cloud, thereby indirectly establishing the spatial relationship between 2D and 3D space. We call these two kinds of images cross-domain images, because their imaging mechanisms and nature are quite different. Unlike real camera images, the images rendered from the 3D image-based point cloud are inevitably contaminated by distortion, blur, and occlusions, which makes matching them with handcrafted descriptors or existing feature-learning neural networks very challenging. Thus, we first propose a novel end-to-end network, AE-GAN-Net, consisting of two AutoEncoders (AEs) with Generative Adversarial Network (GAN) embedding, to learn invariant feature descriptors for cross-domain image matching. Second, a domain-consistent loss function, which balances image content against the consistency of feature descriptors for cross-domain image pairs, is introduced to optimize AE-GAN-Net. AE-GAN-Net effectively captures domain-specific information and embeds it into the learned feature descriptors, making them robust against image distortion and variations in viewpoint, spatial resolution, rotation, and scale. Experimental results show that AE-GAN-Net achieves state-of-the-art performance for image patch retrieval on a cross-domain image patch dataset built from real camera images and images rendered from the 3D image-based point cloud. Finally, by evaluating virtual–real registration for AR on a campus using the cross-domain image matching results, we demonstrate the feasibility of applying the proposed virtual–real registration to AR in outdoor environments.
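To make the two-autoencoder idea concrete, here is a minimal PyTorch sketch of a loss in the spirit of the abstract's domain-consistent loss: each domain gets its own autoencoder whose bottleneck acts as the descriptor, and training balances per-domain reconstruction (image content) against descriptor agreement for matched cross-domain pairs. The module and function names (PatchAutoEncoder, domain_consistent_loss), the 64×64 patch size, and the weighting are illustrative assumptions, and the paper's GAN discriminators are omitted for brevity; this is not the authors' implementation.

```python
# Sketch only: hypothetical names and sizes; the paper's GAN embedding
# (adversarial discriminators on the decoded images) is omitted here.
import torch
import torch.nn as nn
import torch.nn.functional as F

class PatchAutoEncoder(nn.Module):
    """Per-domain autoencoder; the bottleneck vector is the descriptor."""
    def __init__(self, desc_dim=128):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),   # 64 -> 32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),  # 32 -> 16
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, desc_dim),
        )
        self.decoder = nn.Sequential(
            nn.Linear(desc_dim, 64 * 16 * 16), nn.ReLU(),
            nn.Unflatten(1, (64, 16, 16)),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        desc = F.normalize(self.encoder(x), dim=1)  # unit-length descriptor
        return desc, self.decoder(desc)

def domain_consistent_loss(ae_real, ae_rendered, real, rendered, alpha=1.0):
    """Content (reconstruction) term per domain + cross-domain descriptor
    consistency term for matched patch pairs."""
    d_r, rec_r = ae_real(real)
    d_s, rec_s = ae_rendered(rendered)
    content = F.mse_loss(rec_r, real) + F.mse_loss(rec_s, rendered)
    consistency = F.mse_loss(d_r, d_s)  # matched pairs should agree
    return content + alpha * consistency

# Usage sketch on random 64x64 patch pairs:
ae_real, ae_rendered = PatchAutoEncoder(), PatchAutoEncoder()
real = torch.rand(8, 3, 64, 64)
rendered = torch.rand(8, 3, 64, 64)
loss = domain_consistent_loss(ae_real, ae_rendered, real, rendered)
loss.backward()
```

At retrieval time, only the two encoders would be kept: each patch maps to its bottleneck descriptor, and cross-domain matching reduces to nearest-neighbor search in that shared descriptor space.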


2011 ◽  
Vol 30 (6) ◽  
pp. 1-10 ◽  
Author(s):  
Abhinav Shrivastava ◽  
Tomasz Malisiewicz ◽  
Abhinav Gupta ◽  
Alexei A. Efros

2019 ◽  
Vol 127 (11-12) ◽  
pp. 1738-1750 ◽  
Author(s):  
Bailey Kong ◽  
James Supančič ◽  
Deva Ramanan ◽  
Charless C. Fowlkes

IEEE Access ◽  
2017 ◽  
Vol 5 ◽  
pp. 23190-23203 ◽  
Author(s):  
Jing Li ◽  
Congcong Li ◽  
Tao Yang ◽  
Zhaoyang Lu

2021 ◽  
Vol 13 (22) ◽  
pp. 4618 ◽  
Author(s):  
Xupei Zhang ◽  
Zhanzhuang He ◽  
Zhong Ma ◽  
Zhongxi Wang ◽  
Li Wang

Local feature extraction is a crucial technology for image-matching navigation of an unmanned aerial vehicle (UAV), where the aim is to accurately and robustly match a real-time image against a geo-referenced image to obtain position updates for the UAV. This is a challenging task because inconsistent image capture conditions lead to extreme appearance changes, especially given the different imaging principles of infrared and RGB images. In addition, the sparsity and labeling complexity of existing public datasets have hindered the development of learning-based methods in this area. This paper proposes a novel learned local feature extraction method that uses features extracted by a deep neural network to find correspondences between a satellite RGB reference image and a real-time infrared image. First, we propose a single convolutional neural network that simultaneously extracts dense local features and their corresponding descriptors. This network combines the advantages of a highly repeatable local feature detector and highly reliable local feature descriptors to match the reference image and the real-time image under extreme appearance changes. Second, to make full use of the sparse dataset, an iterative training scheme is proposed that automatically generates high-quality corresponding features for training: in each iteration, dense correspondences are extracted automatically, and geometric constraints are applied to progressively improve their quality. With these improvements, the proposed method achieves state-of-the-art performance on matching infrared aerial (UAV-captured) images to satellite reference images, with 4–6% improvements in precision, recall, and F1-score over competing methods. Moreover, application experiments demonstrate its potential and effectiveness for UAV localization in navigation and for trajectory reconstruction.
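As a rough illustration of the joint detector/descriptor design the abstract describes, the sketch below shows a single CNN with a shared backbone and two heads: a per-pixel keypoint score map and dense unit-normalized descriptors, with keypoints selected by max-pool non-maximum suppression. PyTorch, the layer sizes, the thresholds, and the names DetectDescribeNet and extract_keypoints are all assumptions for illustration, not the authors' network.

```python
# Sketch only: a single CNN that jointly outputs a detection map and
# dense descriptors; architecture and thresholds are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DetectDescribeNet(nn.Module):
    def __init__(self, desc_dim=128):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(),
        )
        self.det_head = nn.Conv2d(128, 1, 1)          # per-pixel keypoint score
        self.desc_head = nn.Conv2d(128, desc_dim, 1)  # per-pixel descriptor

    def forward(self, image):
        feat = self.backbone(image)
        scores = torch.sigmoid(self.det_head(feat))      # (B, 1, H, W)
        desc = F.normalize(self.desc_head(feat), dim=1)  # (B, D, H, W)
        return scores, desc

def extract_keypoints(scores, desc, threshold=0.5, nms_radius=4):
    """Keep local score maxima above a threshold; read off their descriptors."""
    # Non-maximum suppression via max pooling: a pixel survives only if it
    # equals the maximum score within its (2r+1) x (2r+1) neighborhood.
    pooled = F.max_pool2d(scores, kernel_size=2 * nms_radius + 1,
                          stride=1, padding=nms_radius)
    keep = (scores == pooled) & (scores > threshold)
    b, _, ys, xs = keep.nonzero(as_tuple=True)
    keypoints = torch.stack([xs, ys], dim=1)  # (N, 2) as (x, y) pixel coords
    return keypoints, desc[b, :, ys, xs]      # (N, 2), (N, D)

# Usage sketch on one RGB reference tile (an infrared input would be a
# 1-channel image replicated to 3 channels, an assumption here):
net = DetectDescribeNet()
image = torch.rand(1, 3, 256, 256)
scores, dense_desc = net(image)
keypoints, descriptors = extract_keypoints(scores, dense_desc)
print(keypoints.shape, descriptors.shape)
```

Matching the two images then amounts to mutual nearest-neighbor search between the descriptor sets from the infrared and RGB inputs, with geometric verification of the resulting correspondences, as in the iterative scheme the abstract outlines.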


2019 ◽  
Vol 127 ◽  
pp. 3-10 ◽  
Author(s):  
Hui Chen ◽  
Nan Xue ◽  
Yipeng Zhang ◽  
Qikai Lu ◽  
Gui-Song Xia
