Large-scale visual search and similarity for e-commerce

Author(s):  
Gaurav Anand ◽  
Siyun Wang ◽  
Karl Ni
2018 ◽  
Vol 20 (10) ◽  
pp. 2774-2787 ◽  
Author(s):  
Feng Gao ◽  
Xinfeng Zhang ◽  
Yicheng Huang ◽  
Yong Luo ◽  
Xiaoming Li ◽  
...  

2013 ◽  
Vol 461 ◽  
pp. 792-800
Author(s):  
Bo Zhao ◽  
Hongwei Zhao ◽  
Pingping Liu ◽  
Guihe Qin

We describe a novel mobile visual search system based on the saliency mechanism and sparse coding principle of the human visual system (HVS). In the feature extraction step, we first divide an image into different regions using the saliency extraction algorithm. Then, scale-invariant feature transform (SIFT) descriptors are extracted in all regions, while regional identities are preserved according to their saliency levels. Following the sparse coding principle of the HVS, we adopt a local neighbor-preserving hash function to establish a binary sparse expression of the SIFT features. In the searching step, the nearest neighbors matched to the hashing codes are processed according to their saliency levels. Matching scores of images in the database are derived from the matching of hashing codes. Subsequently, the matching scores of all levels are weighted by their degrees of saliency to obtain the initial set of results. To further ensure matching accuracy, we propose an optimized retrieval scheme based on global texture information. We conduct extensive experiments on an actual mobile platform using the large-scale Corel-1000 dataset. The results show that the proposed method outperforms state-of-the-art algorithms in accuracy rate, with no significant increase in the running time of feature extraction and retrieval.
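
As a concrete illustration of this pipeline, here is a minimal sketch in Python with OpenCV (assuming the contrib saliency module is installed). The spectral-residual saliency model, the random-hyperplane hash, and the linear level weighting are illustrative stand-ins, not the paper's exact saliency extraction algorithm, neighbor-preserving hash function, or weighting scheme.

```python
# Sketch: saliency-level SIFT extraction + binary hashing + weighted matching.
# The saliency model, hash, and weights below are illustrative assumptions.
import cv2
import numpy as np

N_LEVELS = 3      # number of saliency levels an image is divided into
CODE_BITS = 64    # length of the binary code per SIFT descriptor

rng = np.random.default_rng(0)
# Random hyperplanes: sign(W @ d) roughly preserves cosine neighbors,
# a generic LSH stand-in for the paper's neighbor-preserving hash.
W = rng.standard_normal((CODE_BITS, 128))

def extract_features(image_bgr):
    """Return {saliency_level: binary codes} for one image."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    sal = cv2.saliency.StaticSaliencySpectralResidual_create()
    ok, sal_map = sal.computeSaliency(image_bgr)
    assert ok
    # Quantize the saliency map into N_LEVELS regions.
    levels = np.minimum((sal_map * N_LEVELS).astype(np.uint8), N_LEVELS - 1)
    sift = cv2.SIFT_create()
    codes = {}
    for lvl in range(N_LEVELS):
        mask = (levels == lvl).astype(np.uint8) * 255
        kps, desc = sift.detectAndCompute(gray, mask)
        if desc is None:
            continue
        # Binary sparse expression: one CODE_BITS-bit code per descriptor.
        codes[lvl] = desc @ W.T > 0   # bool array, shape (n, CODE_BITS)
    return codes

def match_score(query_codes, db_codes, max_hamming=8):
    """Saliency-weighted matching score between two images' code sets."""
    score = 0.0
    for lvl, q in query_codes.items():
        d = db_codes.get(lvl)
        if d is None:
            continue
        # Hamming distance between every query/database code pair.
        ham = (q[:, None, :] != d[None, :, :]).sum(axis=2)
        matched = (ham.min(axis=1) <= max_hamming).sum()
        weight = (lvl + 1) / N_LEVELS   # more salient levels weigh more
        score += weight * matched
    return score
```

Database images would be ranked by `match_score` against the query to form the initial result set; the paper's final step, re-ranking by global texture information, would then be applied on top of these candidates.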


2017 ◽  
Vol 17 (10) ◽  
pp. 926
Author(s):  
Chia-Ling Li ◽  
M. Aivar ◽  
Matthew Tong ◽  
Mary Hayhoe

2018 ◽  
Vol 8 (1) ◽  
Author(s):  
Chia-Ling Li ◽  
M. Pilar Aivar ◽  
Matthew H. Tong ◽  
Mary M. Hayhoe

2012 ◽  
Vol 12 (9) ◽  
pp. 731-731
Author(s):  
G. Solman ◽  
D. Smilek

2018 ◽  
Author(s):  
Marcella Frătescu ◽  
Dirk Van Moorselaar ◽  
Sebastiaan Mathôt

Stimuli that resemble the content of visual working memory (VWM) capture attention. However, theories disagree on how many VWM items can bias attention simultaneously. The multiple-state account posits a distinction between template and accessory VWM items, such that only a single template item biases attention. In contrast, homogeneous-state accounts posit that all VWM items bias attention. Recently, Van Moorselaar et al. (2014) and Hollingworth and Beck (2016) tested these accounts but obtained seemingly contradictory results. Van Moorselaar et al. (2014) found that a distractor in a visual-search task captured attention more when it matched the content of VWM (memory-driven capture). Crucially, memory-driven capture disappeared when more than one item was held in VWM, in line with the multiple-state account. In contrast, Hollingworth and Beck (2016) found memory-driven capture even when multiple items were kept in VWM, in line with a homogeneous-state account. Considering these mixed results, we replicated both studies with a larger sample and found that all key results are reliable. It is unclear to what extent these divergent results are due to paradigm differences between the studies. We conclude that it is crucial to our understanding of VWM to determine the boundary conditions under which memory-driven capture occurs.
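
To make the paradigm's logic concrete, here is a minimal sketch (in Python with pandas) of how memory-driven capture is typically quantified in such studies; the data frame and its column names are hypothetical, not the format of either original study. The multiple-state account predicts this effect falls to near zero once memory load exceeds one item, while homogeneous-state accounts predict it persists at higher loads.

```python
# Sketch: quantifying memory-driven capture as the RT cost of a
# VWM-matching distractor. Column names are hypothetical assumptions.
import pandas as pd

def capture_effect(trials: pd.DataFrame) -> pd.Series:
    """Given correct trials with columns 'memory_load' (items held in VWM),
    'distractor_matches_vwm' (bool), and 'rt' (reaction time, ms), return
    the per-load capture effect: mean RT on matching-distractor trials
    minus mean RT on non-matching trials (positive = capture)."""
    means = trials.groupby(["memory_load", "distractor_matches_vwm"])["rt"].mean()
    matching = means.xs(True, level="distractor_matches_vwm")
    nonmatching = means.xs(False, level="distractor_matches_vwm")
    return matching - nonmatching
```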

