Cross-Domain Similarity Learning for Face Recognition in Unseen Domains

Author(s):  
Masoud Faraki ◽  
Xiang Yu ◽  
Yi-Hsuan Tsai ◽  
Yumin Suh ◽  
Manmohan Chandraker
2021 ◽  
Vol 2021 ◽  
pp. 1-14
Author(s):  
Haopeng Lei ◽  
Simin Chen ◽  
Mingwen Wang ◽  
Xiangjian He ◽  
Wenjing Jia ◽  
...  

Due to the rise of e-commerce platforms, online shopping has become a trend. However, the current mainstream retrieval methods are still limited to using text or exemplar images as input, and for huge commodity databases it remains a long-standing unsolved problem for users to quickly find the products they are interested in. Different from traditional text-based and exemplar-based image retrieval techniques, sketch-based image retrieval (SBIR) provides a more intuitive and natural way for users to specify their search needs. Because of the large cross-domain discrepancy between free-hand sketches and fashion images, retrieving fashion images by sketches is a significantly challenging task. In this work, we propose a new algorithm for sketch-based fashion image retrieval based on cross-domain transformation. In our approach, the sketch and the photo are first transformed into the same domain. Then, the sketch-domain similarity and the photo-domain similarity are computed separately and fused to improve the retrieval accuracy of fashion images. Moreover, existing fashion image datasets mostly contain photos only and rarely contain sketch-photo pairs; we therefore contribute a fine-grained sketch-based fashion image retrieval dataset that includes 36,074 sketch-photo pairs. Specifically, when retrieving on our Fashion Image dataset, our model ranks the correct match at top-1 with an accuracy of 96.6%, 92.1%, 91.0%, and 90.5% for clothes, pants, skirts, and shoes, respectively. Extensive experiments conducted on our dataset and on two fine-grained instance-level datasets, i.e., QMUL-shoes and QMUL-chairs, show that our model achieves better performance than other existing methods.
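The fused-similarity retrieval step described in the abstract can be sketched as follows. Every concrete choice here is an illustrative assumption rather than the paper's actual implementation: cosine similarity as the per-domain metric, an equal-weight fusion (`alpha = 0.5`), toy low-dimensional embeddings, and all function names are hypothetical.

```python
# Hedged sketch of cross-domain similarity fusion for sketch-based retrieval.
# Assumed: the query sketch and each gallery photo have already been
# transformed into both a shared sketch domain and a shared photo domain,
# yielding one embedding per domain per item.
from math import sqrt

def cosine(a, b):
    """Cosine similarity between two equal-length embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def fused_similarity(q_sketch, q_photo, g_sketch, g_photo, alpha=0.5):
    """Fuse the similarity measured in the sketch domain with the
    similarity measured in the photo domain (alpha is an assumed
    equal weighting, not a value from the paper)."""
    return alpha * cosine(q_sketch, g_sketch) + (1 - alpha) * cosine(q_photo, g_photo)

def rank_gallery(query, gallery, alpha=0.5):
    """Return gallery indices sorted by fused similarity, best match first."""
    scores = [fused_similarity(query["sketch"], query["photo"],
                               g["sketch"], g["photo"], alpha)
              for g in gallery]
    return sorted(range(len(gallery)), key=lambda i: scores[i], reverse=True)
```

With toy 2-D embeddings, a gallery item that is close to the query in both domains is ranked ahead of one that is dissimilar in both, which is the intended effect of fusing the two domain similarities.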


2018 ◽  
Vol 9 ◽  
Author(s):  
Anna Krasotkina ◽  
Antonia Götz ◽  
Barbara Höhle ◽  
Gudrun Schwarzer

IEEE Access ◽  
2020 ◽  
Vol 8 ◽  
pp. 50452-50464 ◽  
Author(s):  
Han Byeol Bae ◽  
Taejae Jeon ◽  
Yongju Lee ◽  
Sungjun Jang ◽  
Sangyoun Lee

2020 ◽  
Vol 28 (10) ◽  
pp. 2311-2322
Author(s):  
Yue MING ◽  
Shao-Ying WANG ◽  
Chun-Xiao FAN ◽  
Jiang-Wan ZHOU

IEEE Access ◽  
2021 ◽  
pp. 1-1
Author(s):  
Wenhui Dong ◽  
Peishu Qu ◽  
Chunsheng Liu ◽  
Yanke Tang ◽  
Ning Gai

Author(s):  
Pradeep Kumar Singh ◽  
Pijush Kanti Dutta Pramanik ◽  
Samriddhi Mishra ◽  
Anand Nayyar ◽  
Divyanshu Shukla ◽  
...  

1988 ◽  
Vol 40 (3) ◽  
pp. 561-580 ◽  
Author(s):  
Andrew W. Young ◽  
Deborah Hellawell ◽  
Edward H. F. De Haan

Cross-domain semantic priming of person recognition (from face primes to name targets at 500 ms SOA) is investigated in normal subjects and a brain-injured patient (PH) with a very severe impairment of overt face recognition ability. Experiment 1 demonstrates equivalent semantic priming effects for normal subjects from face primes to name targets (cross-domain priming) and from name primes to name targets (within-domain priming). Experiment 2 demonstrates cross-domain semantic priming effects from face primes that PH cannot recognize overtly. Experiment 3 shows that cross-domain semantic priming effects can be found for normal subjects when target names are repeated across all conditions. This (repeated targets) method is then used in Experiment 4 to establish that PH shows semantic priming equivalent to that of normal subjects, both from face primes which he is very poor at identifying overtly and from name primes which he can identify overtly. These findings demonstrate that automatic aspects of face recognition can remain intact even when all sense of overt recognition has been lost.

