Stochastic Non-linear Hashing for Near-Duplicate Video Retrieval using Deep Feature applicable to Large-scale Datasets

2020, Vol. 14(5). Author(s): Ling Shen, Richang Hong, Yanbin Hao

2012, Vol. 14(4), pp. 1220-1233. Author(s): Xiangmin Zhou, Lei Chen, Xiaofang Zhou

2013, Vol. 15(8), pp. 1997-2008. Author(s): Jingkuan Song, Yi Yang, Zi Huang, Heng Tao Shen, Jiebo Luo

2014, Vol. 74(23), pp. 10515-10534. Author(s): Hanli Wang, Fengkuangtian Zhu, Bo Xiao, Lei Wang, Yu-Gang Jiang

2017, Vol. 19(1), pp. 1-14. Author(s): Yanbin Hao, Tingting Mu, Richang Hong, Meng Wang, Ning An, et al.

2020, Vol. 32(10), pp. 1951-1965. Author(s): Xiushan Nie, Weizhen Jing, Chaoran Cui, Chen Jason Zhang, Lei Zhu, et al.

2020, Vol. 2020, pp. 1-8. Author(s): Chen Zhang, Bin Hu, Yucong Suo, Zhiqiang Zou, Yimu Ji

In this paper, we study the challenge of image-to-video retrieval, which uses a query image to search for relevant frames in a large collection of videos. A novel framework based on convolutional neural networks (CNNs) is proposed to perform large-scale video retrieval with low storage cost and high search efficiency. Our framework consists of a key-frame extraction algorithm and a feature aggregation strategy. Specifically, the key-frame extraction algorithm exploits clustering to remove redundant frames from the video data, greatly reducing storage cost. The feature aggregation strategy encodes deep local convolutional features with average pooling and is followed by a coarse-to-fine search scheme, which enables rapid retrieval over a large-scale video database. Extensive experiments on two publicly available datasets demonstrate that the proposed method achieves superior efficiency and accuracy compared with other state-of-the-art visual search methods.
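To make the pipeline concrete, the following is a minimal sketch of the three ingredients described above: key-frame selection by clustering, average pooling of local convolutional features, and a coarse-to-fine search. It assumes frame-level CNN feature maps have already been extracted, and uses k-means from scikit-learn with cosine similarity as an illustrative stand-in; function names such as select_key_frames and coarse_to_fine_search are placeholders, not the authors' implementation.

```python
# Illustrative sketch (not the paper's exact code): key-frame selection by
# k-means clustering over frame descriptors, average pooling of deep local
# convolutional features, and a coarse-to-fine retrieval step.
import numpy as np
from sklearn.cluster import KMeans


def average_pool(conv_features: np.ndarray) -> np.ndarray:
    """Encode one frame's local conv features (H, W, C) into a C-dim descriptor
    by average pooling, then L2-normalise so dot products are cosine similarities."""
    desc = conv_features.reshape(-1, conv_features.shape[-1]).mean(axis=0)
    return desc / (np.linalg.norm(desc) + 1e-12)


def select_key_frames(frame_descriptors: np.ndarray, n_clusters: int) -> list[int]:
    """Cluster frame descriptors and keep the frame closest to each centroid,
    discarding redundant near-duplicate frames."""
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(frame_descriptors)
    key_frames = []
    for c in range(n_clusters):
        members = np.where(km.labels_ == c)[0]
        dists = np.linalg.norm(frame_descriptors[members] - km.cluster_centers_[c], axis=1)
        key_frames.append(int(members[np.argmin(dists)]))
    return sorted(key_frames)


def coarse_to_fine_search(query: np.ndarray,
                          video_descriptors: dict[str, np.ndarray],
                          frame_descriptors: dict[str, np.ndarray],
                          top_videos: int = 10):
    """Coarse stage: rank videos by a pooled video-level descriptor.
    Fine stage: rank individual key frames only within the short-listed videos."""
    coarse = sorted(video_descriptors,
                    key=lambda v: -float(query @ video_descriptors[v]))[:top_videos]
    hits = []
    for v in coarse:
        sims = frame_descriptors[v] @ query          # cosine similarity (unit vectors)
        best = int(np.argmax(sims))
        hits.append((v, best, float(sims[best])))
    return sorted(hits, key=lambda h: -h[2])


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Stand-in for CNN feature maps of 100 frames from one video (7x7x512 conv maps).
    frames = rng.standard_normal((100, 7, 7, 512)).astype(np.float32)
    descs = np.stack([average_pool(f) for f in frames])
    keys = select_key_frames(descs, n_clusters=8)

    video_db = {"vid0": descs[keys]}                        # key-frame descriptors per video
    video_level = {v: f.mean(axis=0) / np.linalg.norm(f.mean(axis=0))
                   for v, f in video_db.items()}            # pooled video-level descriptor
    query = descs[3]                                        # pretend query-image descriptor
    print("key frames:", keys)
    print(coarse_to_fine_search(query, video_level, video_db, top_videos=1))
```

Storing only the pooled descriptors of the selected key frames, rather than every frame's full feature map, is what keeps the index small enough for the coarse video-level pass to remain fast at scale.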

