Edge-Guided Cross-Domain Learning with Shape Regression for Sketch-Based Image Retrieval

IEEE Access, 2019, Vol. 7, pp. 32393-32399
Author(s): Yuxin Song, Jianjun Lei, Bo Peng, Kaifu Zheng, Bolan Yang, ...

Author(s): Jianjun Lei, Kaifu Zheng, Hua Zhang, Xiaochun Cao, Nam Ling, ...

2021, Vol. 2021, pp. 1-14
Author(s): Haopeng Lei, Simin Chen, Mingwen Wang, Xiangjian He, Wenjing Jia, ...

Due to the rise of e-commerce platforms, online shopping has become a trend. However, the current mainstream retrieval methods are still limited to using text or exemplar images as input, and for huge commodity databases, quickly finding products of interest remains a long-standing unsolved problem. Different from traditional text-based and exemplar-based image retrieval techniques, sketch-based image retrieval (SBIR) provides a more intuitive and natural way for users to specify their search needs. Because of the large cross-domain discrepancy between free-hand sketches and fashion images, retrieving fashion images by sketches is a highly challenging task. In this work, we propose a new algorithm for sketch-based fashion image retrieval based on cross-domain transformation. In our approach, the sketch and photo are first transformed into the same domain. Then, the sketch-domain similarity and the photo-domain similarity are calculated separately and fused to improve the retrieval accuracy of fashion images. Moreover, existing fashion image datasets mostly contain photos only and rarely contain sketch-photo pairs. We therefore contribute a fine-grained sketch-based fashion image retrieval dataset, which includes 36,074 sketch-photo pairs. Specifically, when retrieving on our fashion image dataset, our model ranks the correct match at top-1 with an accuracy of 96.6%, 92.1%, 91.0%, and 90.5% for clothes, pants, skirts, and shoes, respectively. Extensive experiments conducted on our dataset and two fine-grained instance-level datasets, i.e., QMUL-shoes and QMUL-chairs, show that our model achieves better performance than existing methods.
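The similarity-fusion step described in this abstract lends itself to a short illustration. Below is a minimal sketch, in PyTorch, of fusing sketch-domain and photo-domain similarities to rank gallery images; the function name, the use of cosine similarity, and the fusion weight `alpha` are illustrative assumptions, not the paper's actual implementation.

```python
# A minimal sketch of two-domain similarity fusion for SBIR ranking.
# Assumes embeddings have already been produced by (hypothetical) encoders
# that map both the sketch query and the gallery photos into each domain.
import torch
import torch.nn.functional as F

def fused_similarity(query_sketch_emb, gallery_sketch_embs,
                     query_photo_emb, gallery_photo_embs, alpha=0.5):
    """Fuse sketch-domain and photo-domain cosine similarities.

    query_sketch_emb:    (d,)   query embedded in the sketch domain
    gallery_sketch_embs: (N, d) gallery images transformed to the sketch domain
    query_photo_emb:     (d,)   query transformed to the photo domain
    gallery_photo_embs:  (N, d) gallery images embedded in the photo domain
    alpha: fusion weight between the two domain similarities (assumed value).
    """
    sim_sketch = F.cosine_similarity(query_sketch_emb.unsqueeze(0),
                                     gallery_sketch_embs, dim=1)
    sim_photo = F.cosine_similarity(query_photo_emb.unsqueeze(0),
                                    gallery_photo_embs, dim=1)
    return alpha * sim_sketch + (1.0 - alpha) * sim_photo

# Ranking: higher fused score first; top-1 is the predicted match.
# scores = fused_similarity(...)
# ranking = torch.argsort(scores, descending=True)
```

Under this scheme, a gallery image's fused score determines its rank, and the top-1 accuracies reported above count how often the true match receives the highest fused score.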


2020, Vol. 34 (07), pp. 11386-11393
Author(s): Shuang Li, Chi Liu, Qiuxia Lin, Binhui Xie, Zhengming Ding, ...

Tremendous research efforts have been made to advance deep domain adaptation (DA) by seeking domain-invariant features. Most existing deep DA models focus only on aligning the feature representations of task-specific layers across domains while adopting a fully shared convolutional architecture for the source and target. However, we argue that such strongly shared convolutional layers might be harmful for domain-specific feature learning when the source and target data distributions differ to a large extent. In this paper, we relax the shared-convnet assumption made by previous DA methods and propose a Domain Conditioned Adaptation Network (DCAN), which aims to excite distinct convolutional channels with a domain-conditioned channel attention mechanism. As a result, the critical low-level domain-dependent knowledge can be explored appropriately. To the best of our knowledge, this is the first work to explore domain-wise convolutional channel activation for deep DA networks. Moreover, to effectively align high-level feature distributions across the two domains, we further deploy domain-conditioned feature correction blocks after the task-specific layers, which explicitly correct the domain discrepancy. Extensive experiments on three cross-domain benchmarks demonstrate that the proposed approach outperforms existing methods by a large margin, especially on very tough cross-domain learning tasks.
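To make the channel-attention idea concrete, here is a minimal PyTorch sketch of a domain-conditioned channel gate in the spirit of a squeeze-and-excitation block with one excitation branch per domain, so that source and target can activate different convolutional channels. The module name, the reduction ratio, and the two-branch layout are illustrative assumptions, not DCAN's exact design.

```python
# A minimal sketch of domain-conditioned channel attention (assumed design).
import torch
import torch.nn as nn

class DomainConditionedChannelAttention(nn.Module):
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)  # squeeze: global average pooling
        def excitation():
            return nn.Sequential(
                nn.Linear(channels, channels // reduction),
                nn.ReLU(inplace=True),
                nn.Linear(channels // reduction, channels),
                nn.Sigmoid(),
            )
        # One excitation branch per domain (0 = source, 1 = target),
        # so each domain learns its own channel-activation pattern.
        self.branches = nn.ModuleList([excitation(), excitation()])

    def forward(self, x, domain):
        b, c, _, _ = x.shape
        s = self.pool(x).view(b, c)       # (B, C) channel descriptor
        w = self.branches[domain](s)      # domain-specific channel gates
        return x * w.view(b, c, 1, 1)     # re-weight the channels

# Usage: att = DomainConditionedChannelAttention(256)
# y_src = att(source_features, domain=0)
# y_tgt = att(target_features, domain=1)
```

Routing only the excitation branch by domain keeps the convolutional backbone shared while still letting low-level, domain-dependent channels fire differently per domain, which is the relaxation of the shared-convnet assumption the abstract argues for.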


2014, Vol. 22 (4), pp. 395-404
Author(s): Weizhi Nie, Anan Liu, Zhongyang Wang, Yuting Su

2009, Vol. 13 (3), pp. 236-253
Author(s): Depin Chen, Yan Xiong, Jun Yan, Gui-Rong Xue, Gang Wang, ...
