Cross-Domain Image Matching with Deep Feature Maps

2019 ◽  
Vol 127 (11-12) ◽  
pp. 1738-1750 ◽  
Author(s):  
Bailey Kong ◽  
James Supančič ◽  
Deva Ramanan ◽  
Charless C. Fowlkes


Author(s):  
M. Chen ◽  
Y. Zhao ◽  
T. Fang ◽  
Q. Zhu ◽  
S. Yan ◽  
...  

Abstract. Image matching is a fundamental issue in multimodal image fusion. Most recent research focuses only on the non-linear radiometric distortion between coarsely registered multimodal images. The global geometric distortion between images must be eliminated using prior information (e.g. direct geo-referencing information and ground sample distance) before these methods can find correspondences. However, such prior information is not always available or accurate enough; in that case, users have to select ground control points manually to register the images before these methods can work, and otherwise the methods fail. To overcome this problem, we propose a robust deep learning-based multimodal image matching method that handles geometric and non-linear radiometric distortion simultaneously by exploiting deep feature maps. We observe that some deep feature maps have similar grayscale distributions, so correspondences can be found on these maps with traditional geometric-distortion-robust matching methods even when significant non-linear radiometric differences exist between the original images. We can therefore focus only on geometric distortion when matching deep feature maps, and only on non-linear radiometric distortion when measuring patch similarity. The experimental results demonstrate that the proposed method outperforms state-of-the-art matching methods on multimodal images with both geometric and non-linear radiometric distortion.
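The core idea above, that some deep feature maps of multimodal images share similar grayscale distributions, so a classical geometric-distortion-robust matcher can be run on the maps themselves, could be sketched roughly as follows. This is a minimal illustration rather than the authors' pipeline: the VGG-16 backbone, the hand-picked feature channel, and the SIFT + RANSAC matcher are all assumptions, since the abstract does not specify them.

```python
# Hypothetical sketch: match multimodal images via deep feature maps.
# Assumptions (not from the paper): VGG-16 backbone, a single hand-picked
# channel, and a SIFT + RANSAC matcher on the rendered feature maps.
import cv2
import numpy as np
import torch
import torchvision.models as models
import torchvision.transforms as T

vgg = models.vgg16(weights=models.VGG16_Weights.DEFAULT).features[:16].eval()
to_tensor = T.Compose([T.ToTensor(),
                       T.Normalize(mean=[0.485, 0.456, 0.406],
                                   std=[0.229, 0.224, 0.225])])

def feature_map_gray(img_bgr, channel=12):
    """Render one deep feature channel as an 8-bit grayscale image."""
    rgb = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2RGB)
    with torch.no_grad():
        fmap = vgg(to_tensor(rgb).unsqueeze(0))[0, channel].numpy()
    fmap = cv2.resize(fmap, (img_bgr.shape[1], img_bgr.shape[0]))
    return cv2.normalize(fmap, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)

def match_multimodal(img_a, img_b):
    """Estimate a homography between two modalities from their feature maps."""
    sift = cv2.SIFT_create()
    ka, da = sift.detectAndCompute(feature_map_gray(img_a), None)
    kb, db = sift.detectAndCompute(feature_map_gray(img_b), None)
    pairs = cv2.BFMatcher(cv2.NORM_L2).knnMatch(da, db, k=2)
    good = [p[0] for p in pairs
            if len(p) == 2 and p[0].distance < 0.8 * p[1].distance]  # Lowe ratio test
    src = np.float32([ka[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kb[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    # RANSAC handles the geometric distortion (needs at least 4 good matches)
    H, inliers = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    return H, int(inliers.sum()) if inliers is not None else 0
```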


2021 ◽  
Vol 13 (2) ◽  
pp. 328
Author(s):  
Wenkai Liang ◽  
Yan Wu ◽  
Ming Li ◽  
Yice Cao ◽  
Xin Hu

The classification of high-resolution (HR) synthetic aperture radar (SAR) images is of great importance for SAR scene interpretation and application. However, the presence of intricate spatial structural patterns and the complex statistical nature of SAR data make classification a challenging task, especially when labeled SAR data are limited. This paper proposes a novel HR SAR image classification method using a multi-scale deep feature fusion network and a covariance pooling manifold network (MFFN-CPMN). MFFN-CPMN combines the advantages of local spatial features and global statistical properties and considers multi-feature information fusion of SAR images in representation learning. First, we propose a Gabor-filtering-based multi-scale feature fusion network (MFFN), a deep convolutional neural network (CNN), to capture spatial patterns and obtain discriminative features of SAR images. To make full use of the large amount of unlabeled data, the weights of each MFFN layer are optimized by an unsupervised denoising dual-sparse encoder. Moreover, the feature fusion strategy in MFFN effectively exploits complementary information between different levels and different scales. Second, we utilize a covariance pooling manifold network to further extract the global second-order statistics of SAR images over the fused feature maps, yielding a covariance descriptor that is more discriminative across various land covers. Experimental results on four HR SAR images demonstrate the effectiveness of the proposed method, which achieves promising results compared with other related algorithms.
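A minimal sketch of the covariance pooling step described above: the global second-order statistics are computed over the fused feature maps as a channel covariance matrix. The log-Euclidean mapping and the upper-triangle vectorization here are assumptions made for illustration; the paper's covariance pooling manifold network is not reproduced.

```python
# Hypothetical sketch of covariance pooling over fused feature maps.
# Assumption (not from the paper): a log-Euclidean mapping flattens the SPD
# covariance descriptor before it is fed to a classifier.
import numpy as np

def covariance_pooling(feature_maps, eps=1e-5):
    """Global second-order pooling of a C x H x W stack of fused feature maps.

    Returns the upper triangle of log(Cov + eps*I) as a vector descriptor.
    """
    c, h, w = feature_maps.shape
    x = feature_maps.reshape(c, h * w)                 # each row: one channel's responses
    x = x - x.mean(axis=1, keepdims=True)              # center per channel
    cov = (x @ x.T) / (h * w - 1)                      # C x C channel covariance
    cov += eps * np.eye(c)                             # regularize to keep it SPD
    vals, vecs = np.linalg.eigh(cov)                   # symmetric eigendecomposition
    log_cov = vecs @ np.diag(np.log(vals)) @ vecs.T    # matrix logarithm
    iu = np.triu_indices(c)
    return log_cov[iu]                                 # compact descriptor for a classifier

# Example: 64 fused channels over a 32 x 32 patch
desc = covariance_pooling(np.random.randn(64, 32, 32).astype(np.float32))
print(desc.shape)   # (2080,) = 64 * 65 / 2
```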


2019 ◽  
Vol 11 (19) ◽  
pp. 2243 ◽  
Author(s):  
Weiquan Liu ◽  
Cheng Wang ◽  
Xuesheng Bian ◽  
Shuting Chen ◽  
Wei Li ◽  
...  

Establishing the spatial relationship between 2D images captured by real cameras and 3D models of the environment (2D and 3D space) is one way to achieve virtual–real registration for Augmented Reality (AR) in outdoor environments. In this paper, we propose to match the 2D images captured by real cameras with images rendered from the 3D image-based point cloud to indirectly establish the spatial relationship between 2D and 3D space. We refer to these two kinds of images as cross-domain images because their imaging mechanisms and nature are quite different. Unlike real camera images, the images rendered from the 3D image-based point cloud are inevitably contaminated by image distortion, blurred resolution, and obstructions, which makes image matching with handcrafted descriptors or existing feature-learning neural networks very challenging. Thus, we first propose a novel end-to-end network, AE-GAN-Net, consisting of two AutoEncoders (AEs) with Generative Adversarial Network (GAN) embedding, to learn invariant feature descriptors for cross-domain image matching. Second, a domain-consistent loss function, which balances image content and the consistency of feature descriptors for cross-domain image pairs, is introduced to optimize AE-GAN-Net. AE-GAN-Net effectively captures domain-specific information, which is embedded into the learned feature descriptors, making them robust against image distortion and variations in viewpoint, spatial resolution, rotation, and scaling. Experimental results show that AE-GAN-Net achieves state-of-the-art performance for image patch retrieval on a cross-domain image patch dataset built from real camera images and images rendered from the 3D image-based point cloud. Finally, by evaluating virtual–real registration for AR on a campus using the cross-domain image matching results, we demonstrate the feasibility of applying the proposed virtual–real registration to AR in outdoor environments.
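A rough sketch of the dual-autoencoder idea with a domain-consistent loss, as described above. This is not the authors' AE-GAN-Net: the GAN embedding is omitted, and the patch size, network widths, cosine consistency term, and weight lambda_c are assumptions made for illustration only.

```python
# Hypothetical sketch: two patch autoencoders (one per domain) trained with a
# loss that balances reconstruction of image content against consistency of
# the bottleneck descriptors for corresponding cross-domain patch pairs.
# The GAN embedding from the paper is omitted; lambda_c is an assumed weight.
import torch
import torch.nn as nn

class PatchAE(nn.Module):
    def __init__(self, dim=128):
        super().__init__()
        self.enc = nn.Sequential(                       # 1 x 64 x 64 patch -> descriptor
            nn.Conv2d(1, 32, 4, 2, 1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, 2, 1), nn.ReLU(),
            nn.Conv2d(64, 128, 4, 2, 1), nn.ReLU(),
            nn.Flatten(), nn.Linear(128 * 8 * 8, dim))
        self.dec = nn.Sequential(                       # descriptor -> reconstructed patch
            nn.Linear(dim, 128 * 8 * 8), nn.ReLU(),
            nn.Unflatten(1, (128, 8, 8)),
            nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, 2, 1), nn.ReLU(),
            nn.ConvTranspose2d(32, 1, 4, 2, 1), nn.Sigmoid())

    def forward(self, x):
        z = nn.functional.normalize(self.enc(x), dim=1)  # unit-length descriptor
        return z, self.dec(z)

def domain_consistent_loss(real, rendered, ae_real, ae_rend, lambda_c=1.0):
    """Balance image content (reconstruction) against descriptor consistency."""
    z_r, rec_r = ae_real(real)
    z_s, rec_s = ae_rend(rendered)
    content = nn.functional.mse_loss(rec_r, real) + nn.functional.mse_loss(rec_s, rendered)
    consistency = (1.0 - nn.functional.cosine_similarity(z_r, z_s, dim=1)).mean()
    return content + lambda_c * consistency

# Example forward pass on a batch of corresponding 64 x 64 patch pairs
ae_real, ae_rend = PatchAE(), PatchAE()
real, rendered = torch.rand(8, 1, 64, 64), torch.rand(8, 1, 64, 64)
loss = domain_consistent_loss(real, rendered, ae_real, ae_rend)
```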


Author(s):  
Archith John Bency ◽  
Heesung Kwon ◽  
Hyungtae Lee ◽  
S. Karthikeyan ◽  
B. S. Manjunath

2011 ◽  
Vol 30 (6) ◽  
pp. 1-10 ◽  
Author(s):  
Abhinav Shrivastava ◽  
Tomasz Malisiewicz ◽  
Abhinav Gupta ◽  
Alexei A. Efros

2021 ◽  
Author(s):  
Mohit Dandekar ◽  
Narinder Singh Punn ◽  
Sanjay Kumar Sonbhadra ◽  
Sonali Agarwal ◽  
Rage Uday Kiran
