Learning to Hash
Recently Published Documents


TOTAL DOCUMENTS: 36 (five years: 19)
H-INDEX: 12 (five years: 1)

Information (2021), Vol. 12 (7), pp. 285
Author(s): Wenjing Yang, Liejun Wang, Shuli Cheng, Yongming Li, Anyu Du

Recently, deep learning to hash has been widely applied to image retrieval because of its low storage cost and fast query speed. However, existing hashing methods that use a convolutional neural network (CNN) to extract image semantic features suffer from insufficient and imbalanced feature extraction: the extracted features lack contextual information and relevance to one another. Furthermore, relaxing the hash code during optimization inevitably introduces quantization error. To address these problems, this paper proposes deep hashing with improved dual attention for image retrieval (DHIDA), whose main contributions are as follows: (1) it introduces an improved dual attention mechanism (IDA), built on a pre-trained ResNet18 backbone and consisting of a position attention module and a channel attention module, to extract image feature information; (2) when computing the spatial and channel attention matrices, the average and maximum values of the columns of the feature-map matrix are integrated to strengthen the feature representation and fully exploit the features at each position; and (3) to reduce quantization error, a new piecewise function is designed to directly guide the discrete binary codes. Experiments on CIFAR-10, NUS-WIDE and ImageNet-100 show that DHIDA achieves better performance.
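
A minimal sketch of the two mechanisms the abstract describes: a channel-attention block that fuses average- and max-pooled statistics of the feature map, and a piecewise surrogate for the sign function. The module structure, reduction ratio and the hard-tanh-style piecewise function are illustrative assumptions, not the paper's exact formulations.

```python
# Illustrative PyTorch sketch, NOT the authors' implementation.
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Channel attention integrating average and max statistics of the
    feature map, in the spirit of the improved dual attention (the
    reduction ratio is an assumption)."""
    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        avg = self.mlp(x.mean(dim=(2, 3)))   # average over spatial positions
        mx = self.mlp(x.amax(dim=(2, 3)))    # maximum over spatial positions
        weights = torch.sigmoid(avg + mx).view(b, c, 1, 1)
        return x * weights                   # reweight each channel

def piecewise_hash(z: torch.Tensor) -> torch.Tensor:
    """Hard-tanh-style piecewise surrogate for sign(z): identity on [-1, 1],
    saturating at +/-1, so relaxed codes stay close to binary values."""
    return torch.clamp(z, min=-1.0, max=1.0)
```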


2021
Author(s): Zhuangping Qi, Lei Liu, Huijie Liu, Li Li, Hua Gao

Author(s): Zhanxuan Hu, Shuzheng Hao, Feiping Nie, Rong Wang, Xuelong Li
Keyword(s):

2020, Vol. 58 (10), pp. 7331-7345
Author(s): Peng Li, Lirong Han, Xuanwen Tao, Xiaoyu Zhang, Christos Grecos, ...

2020, Vol. 1631, pp. 012029
Author(s): Shiyuan Fang, Jianfeng Wang, Cheng Yang, Pengpeng Tong

Author(s): Lixin Fan, Kam Woh Ng, Ce Ju, Tianyu Zhang, Chee Seng Chan

This paper proposes a novel deep polarized network (DPN) for learning to hash, in which each channel of the network output is pushed far away from zero by a differentiable bit-wise hinge-like loss, dubbed the polarization loss. Reformulated within the generic Hamming Distance Metric Learning framework [Norouzi et al., 2012], the proposed polarization loss removes the need to prepare pairwise labels for (dis-)similar items, yet it strictly upper-bounds pairwise Hamming-distance-based losses. The intrinsic connection between pairwise and pointwise label information disclosed in this paper brings two methodological improvements: (a) the differentiable polarization loss can be employed directly, with no large deviation from the target Hamming-distance-based loss; and (b) the subtask of assigning binary codes becomes extremely simple: even random codes assigned to each class suffice to yield state-of-the-art performance, as demonstrated on the CIFAR10, NUS-WIDE and ImageNet100 datasets.
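
A minimal sketch of a bit-wise hinge-like polarization loss in the spirit of the above; the margin value and mean reduction are illustrative assumptions, not necessarily the paper's settings.

```python
# Illustrative sketch of a polarization-style loss; margin and reduction
# are assumptions, not necessarily the paper's exact settings.
import torch

def polarization_loss(outputs: torch.Tensor,
                      target_bits: torch.Tensor,
                      margin: float = 1.0) -> torch.Tensor:
    """outputs: (batch, n_bits) real-valued network outputs.
    target_bits: (batch, n_bits) with entries in {-1, +1}, e.g. the binary
    code assigned to each sample's class (per the paper, random class codes
    already suffice).
    Each output channel is pushed past the margin on the side of its target
    bit, i.e. far away from zero."""
    return torch.clamp(margin - outputs * target_bits, min=0.0).mean()

# At retrieval time the binary code is simply the sign of the outputs:
# codes = torch.sign(outputs)
```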


Author(s): Qiaoyu Tan, Ninghao Liu, Xing Zhao, Hongxia Yang, Jingren Zhou, ...

2020, Vol. 2020, pp. 1-11
Author(s): Yanduo Ren, Jiangbo Qian, Yihong Dong, Yu Xin, Huahui Chen

Nearest neighbour search (NNS) is at the core of large-scale data retrieval, and learning to hash addresses it effectively by representing high-dimensional data as compact binary codes. However, existing learning to hash methods need long bit encodings to ensure query accuracy, and long encodings incur a large storage cost, which severely restricts their use in big-data applications. An asymmetric learning to hash with variable bit encoding algorithm (AVBH) is proposed to solve this problem. AVBH uses two types of hash mapping functions to encode the dataset and the query set into bit strings of different lengths. For the dataset, the frequencies of the hash codes obtained after random Fourier feature encoding are analysed statistically: frequently occurring hash codes are assigned longer representations, so that the dense regions they cover can still be discriminated, while rarely occurring codes are assigned shorter ones. Each query point is quantized to a long-bit hash code and compared against data-point codes concatenated to the same length. Experiments on public datasets show that the proposed algorithm effectively reduces storage cost while improving query accuracy.
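
A toy sketch of the frequency-driven variable-length assignment described above; the code lengths and frequency threshold are hypothetical placeholders, not the paper's exact scheme.

```python
# Toy sketch of variable bit-length assignment by code frequency;
# all lengths and thresholds below are illustrative assumptions.
from collections import Counter

def assign_code_lengths(base_codes, long_bits=64, short_bits=16,
                        freq_threshold=100):
    """base_codes: iterable of hash-code strings produced for the dataset
    (e.g. after random Fourier feature encoding).
    Frequent codes receive a longer bit budget so the dense regions they
    cover can still be discriminated; rare codes receive a shorter one."""
    freq = Counter(base_codes)
    return {code: (long_bits if count >= freq_threshold else short_bits)
            for code, count in freq.items()}

# Queries would then be quantized at long_bits and compared against data
# codes concatenated up to the same length (asymmetric comparison).
```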


Author(s): Zhiyong Su, Liang Yao, Jialin Mei, Lang Zhou, Weiqing Li
