Similarity Preserving
Recently Published Documents


TOTAL DOCUMENTS

108
(FIVE YEARS 46)

H-INDEX

14
(FIVE YEARS 6)

2021 ◽  
Author(s):  
Peng-Fei Zhang ◽  
Pengfei Zhao ◽  
Xin Luo ◽  
Xin-Shun Xu

PLoS ONE ◽  
2021 ◽  
Vol 16 (9) ◽  
pp. e0257130
Author(s):  
Yang Li ◽  
Yuqing Sun ◽  
Nana Zhu

In recent years, text sentiment analysis has attracted wide attention and has driven the rise of stance detection research. The goal of stance detection is to determine the author's stance (favor or against) toward a specific target or proposition in a text. Pre-trained language models such as BERT have been shown to perform well on this task. However, in many real-world scenarios they are computationally expensive, and such heavy models are difficult to deploy with limited resources. To improve efficiency while preserving performance, we propose a knowledge distillation model, BERTtoCNN, which combines the classic distillation loss and a similarity-preserving loss in a joint knowledge distillation framework. On the one hand, BERTtoCNN provides an efficient distillation process to train a compact 'student' CNN from a much larger 'teacher' language model, BERT. On the other hand, guided by the similarity-preserving loss, the student network is trained so that input pairs with similar (dissimilar) activations in the teacher network also have similar (dissimilar) activations in the student network. We conduct experiments on open Chinese and English stance detection datasets. The experimental results show that our model clearly outperforms competitive baseline methods.
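The similarity-preserving loss described above can be sketched in a few lines. This is a minimal NumPy illustration of the general idea (matching the row-normalized batch similarity matrices of teacher and student activations, in the style of Tung and Mori's similarity-preserving distillation); the function name and normalization details are illustrative assumptions, not the paper's exact implementation:

```python
import numpy as np

def similarity_preserving_loss(teacher_feats, student_feats):
    """Match the batch-wise pairwise similarity structure of teacher
    and student activations. Both inputs are (batch, dim) matrices;
    dims may differ between teacher and student."""
    def normalized_similarity(feats):
        g = feats @ feats.T                          # (b, b) pairwise similarities
        norms = np.linalg.norm(g, axis=1, keepdims=True)
        return g / np.maximum(norms, 1e-12)          # row-normalize
    b = teacher_feats.shape[0]
    g_t = normalized_similarity(teacher_feats)
    g_s = normalized_similarity(student_feats)
    # Frobenius-norm gap between the two similarity matrices
    return float(np.sum((g_t - g_s) ** 2)) / (b * b)
```

The key property is that the loss compares only the *relational* structure within a batch, so the teacher and student feature dimensions need not match.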


2021 ◽  
Vol 15 (3) ◽  
pp. 1-22
Author(s):  
Zheng Zhang ◽  
Xiaofeng Zhu ◽  
Guangming Lu ◽  
Yudong Zhang

Semantic hashing enables computation- and memory-efficient image retrieval by learning similarity-preserving binary representations. Most existing hashing methods focus on preserving piecewise class information or pairwise correlations of samples in the learned binary codes, but fail to capture the triplet-level ordinal structure during similarity preservation. In this article, we propose a novel Probability Ordinal-preserving Semantic Hashing (POSH) framework, which for the first time defines the ordinal-preserving hashing concept under a non-parametric Bayesian theory. Specifically, we derive the entire learning framework of ordinal similarity-preserving hashing from maximum a posteriori estimation, where the probabilistic ordinal similarity preservation, probabilistic quantization function, and probabilistic semantic-preserving function are jointly integrated into one unified learning framework. In particular, the proposed triplet-ordering correlation preservation scheme can effectively improve the interpretability of the learned hash codes under an economical anchor-induced asymmetric graph learning model. Moreover, a sparsity-guided selective quantization function is designed to minimize the loss incurred by the space transformation, and a regressive semantic function is explored to promote the flexibility of the formulated semantics in hash code learning. The final joint learning objective concurrently preserves the ordinal locality of the original data and exploits the potential of the semantics to produce discriminative hash codes. Importantly, an efficient alternating optimization algorithm with a strict convergence proof is developed to solve the resulting objective problem. Extensive experiments on several large-scale datasets validate the superiority of the proposed method over state-of-the-art hashing-based retrieval methods.
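To make "similarity-preserving binary representations" concrete, here is a toy baseline, not the POSH method: random-hyperplane hashing, where each bit is the sign of a random projection, so inputs with high cosine similarity tend to agree on more bits. All names and parameters below are illustrative assumptions:

```python
import numpy as np

def binary_codes(features, n_bits, seed=0):
    """Toy similarity-preserving hashing (random-hyperplane LSH):
    each bit is the sign of one random projection of the input."""
    rng = np.random.default_rng(seed)
    proj = rng.standard_normal((features.shape[1], n_bits))
    return (features @ proj > 0).astype(np.uint8)   # (n, n_bits) in {0, 1}

def hamming(code_a, code_b):
    """Number of disagreeing bits; retrieval ranks by this distance."""
    return int(np.count_nonzero(code_a != code_b))
```

Methods such as POSH go further by *learning* the projections so that triplet orderings (query closer to one item than another) survive the binarization, rather than relying on random hyperplanes.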


2021 ◽  
Vol 116 ◽  
pp. 275-290
Author(s):  
Chaobo He ◽  
Hai Liu ◽  
Yong Tang ◽  
Shuangyin Liu ◽  
Xiang Fei ◽  
...  
