Data-driven visual similarity for cross-domain image matching

2011 ◽  
Vol 30 (6) ◽  
pp. 1-10 ◽  
Author(s):  
Abhinav Shrivastava ◽  
Tomasz Malisiewicz ◽  
Abhinav Gupta ◽  
Alexei A. Efros

2019 ◽  
Vol 11 (19) ◽  
pp. 2243 ◽  
Author(s):  
Weiquan Liu ◽  
Cheng Wang ◽  
Xuesheng Bian ◽  
Shuting Chen ◽  
Wei Li ◽  
...  

Establishing the spatial relationship between 2D images captured by real cameras and 3D models of the environment (2D and 3D space) is one way to achieve virtual–real registration for Augmented Reality (AR) in outdoor environments. In this paper, we propose to match 2D images captured by real cameras against images rendered from a 3D image-based point cloud, thereby indirectly establishing the spatial relationship between 2D and 3D space. We refer to these two kinds of images as cross-domain images because their imaging mechanisms and natures are quite different. Unlike real camera images, however, images rendered from the 3D image-based point cloud are inevitably contaminated by distortion, blur, and obstructions, which makes matching them with handcrafted descriptors or existing feature-learning neural networks very challenging. We therefore first propose a novel end-to-end network, AE-GAN-Net, consisting of two AutoEncoders (AEs) with Generative Adversarial Network (GAN) embedding, to learn invariant feature descriptors for cross-domain image matching. Second, a domain-consistent loss function, which balances image content against the consistency of feature descriptors for cross-domain image pairs, is introduced to optimize AE-GAN-Net. AE-GAN-Net effectively captures domain-specific information and embeds it into the learned feature descriptors, making them robust against image distortion and against variations in viewpoint, spatial resolution, rotation, and scaling. Experimental results show that AE-GAN-Net achieves state-of-the-art performance for image patch retrieval on a cross-domain image patch dataset built from real camera images and images rendered from the 3D image-based point cloud. Finally, by evaluating virtual–real registration for AR on a campus using the cross-domain image matching results, we demonstrate the feasibility of applying the proposed virtual–real registration to AR in outdoor environments.
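The abstract describes a domain-consistent loss that balances per-domain image content against cross-domain consistency of the learned descriptors. A minimal PyTorch-style sketch of such a loss is shown below; the function name, the L1/MSE choices, and the alpha/beta weighting are illustrative assumptions, not the paper's published implementation.

```python
import torch
import torch.nn.functional as F

# Illustrative sketch only: AE-GAN-Net's exact loss is defined in the paper;
# the term choices and weights below are assumptions for demonstration.

def domain_consistent_loss(real_recon, real_img,
                           rend_recon, rend_img,
                           desc_real, desc_rend,
                           alpha=1.0, beta=1.0):
    """Balance image content (per-domain reconstruction by each AE)
    against cross-domain consistency of the feature descriptors."""
    # Content term: each AutoEncoder should reconstruct its own domain.
    content = F.l1_loss(real_recon, real_img) + F.l1_loss(rend_recon, rend_img)
    # Consistency term: descriptors of a matching real/rendered pair
    # should coincide in the shared embedding space.
    consistency = F.mse_loss(desc_real, desc_rend)
    return alpha * content + beta * consistency
```

Under this formulation, the adversarial (GAN) component would be trained alongside the two AEs, while this loss keeps the two domains' descriptors aligned for matching pairs.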


Author(s):  
Jose A. Rodriguez-Serrano ◽  
Harsimrat Sandhawalia ◽  
Raja Bala ◽  
Florent Perronnin ◽  
Craig Saunders

Author(s):  
Conrad S. Tucker ◽  
Sung Woo Kang

The Bisociative Design framework proposed in this work aims to quantify hidden, previously unknown design synergies and insights across seemingly unrelated product domains. Despite the overabundance of data characterizing the digital age, designers still face tremendous challenges in transforming data into knowledge throughout the design process. Data-driven methodologies play a significant role in product design, ranging from customer preference modeling to detailed engineering design. However, existing data-driven methodologies employed in the design community generate mathematical models from data relating to a specific domain and are therefore constrained in their ability to discover novel design insights beyond that domain (i.e., cross-domain knowledge). The proposed Bisociative Design framework overcomes these limitations by decomposing design artifacts into form patterns, function patterns, and behavior patterns, and then evaluating potential cross-domain design insights through a proposed multidimensional Bisociative Design metric, sketched below. A hybrid marine model spanning multiple domains (capable of both flight and marine navigation) is used as a case study to demonstrate the framework and to explain how associations and novel design models can be generated through the discovery of hidden, previously unknown patterns across multiple, unrelated domains.
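To make the decomposition concrete, a toy Python sketch of such a cross-domain comparison follows. The paper defines its own multidimensional Bisociative Design metric; the vector encoding, weights, and cosine-similarity aggregation here are assumptions chosen only to show the general shape of scoring two artifacts decomposed into form, function, and behavior patterns.

```python
import numpy as np

# Illustrative sketch only: not the paper's metric. Each artifact is a dict
# mapping the three pattern types to numeric feature vectors (an assumption).

def bisociative_score(artifact_a, artifact_b, weights=(1.0, 1.0, 1.0)):
    """Score cross-domain similarity of two design artifacts, each
    decomposed into form, function, and behavior pattern vectors."""
    total = 0.0
    for key, w in zip(("form", "function", "behavior"), weights):
        a = np.asarray(artifact_a[key], dtype=float)
        b = np.asarray(artifact_b[key], dtype=float)
        # Weighted cosine similarity between the two pattern vectors.
        total += w * float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return total / sum(weights)

# Hypothetical example: an aircraft and a marine vessel whose function and
# behavior patterns (lift, propulsion) overlap more than their forms do.
aircraft = {"form": [0.9, 0.1], "function": [0.8, 0.6], "behavior": [0.7, 0.2]}
vessel   = {"form": [0.3, 0.8], "function": [0.7, 0.5], "behavior": [0.6, 0.4]}
print(bisociative_score(aircraft, vessel))
```

A high score on the function and behavior dimensions despite dissimilar forms is exactly the kind of hidden cross-domain association the framework aims to surface.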


IEEE Access ◽  
2018 ◽  
Vol 6 ◽  
pp. 17681-17698 ◽  
Author(s):  
Jing Li ◽  
Congcong Li ◽  
Tao Yang ◽  
Zhaoyang Lu

2020 ◽  
Vol 79 (43-44) ◽  
pp. 32807-32831 ◽
Author(s):  
Meenakshi Choudhary ◽  
Vivek Tiwari ◽  
U Venkanna
