Study on Product Name Disambiguation Method Based on Fusion Feature Similarity

Author(s): Xiuli Ning, Xiaowei Lu, Yingcheng Xu, Ying Li

1973 · Author(s): W. L. Gulick, W. M. Youngs, John P. Galla

2013 · Vol 32 (9) · pp. 2488-2490 · Author(s): Xin-xin YANG, Pei-feng LI, Qiao-ming ZHU

Entropy · 2021 · Vol 23 (4) · pp. 403 · Author(s): Xun Zhang, Lanyan Yang, Bin Zhang, Ying Liu, Dong Jiang, ...

The problem of extracting meaningful data through graph analysis spans a range of fields, such as social networks, knowledge graphs, citation networks, and the World Wide Web. As increasing amounts of structured data become available, the importance of being able to effectively mine and learn from such data continues to grow. In this paper, we propose the multi-scale aggregation graph neural network based on feature similarity (MAGN), a novel graph neural network defined in the vertex domain. Our model provides a simple and general semi-supervised learning method for graph-structured data in which only a very small fraction of the data is labeled for training. We first construct a similarity matrix by computing the similarity of the original features for all adjacent node pairs, and then use the similarity matrix to generate a set of feature extractors that perform multi-scale feature propagation on the graph. The outputs of the multi-scale feature propagation are finally aggregated with a mean-pooling operation. Our method aims to improve the model's representation ability via multi-scale neighborhood aggregation based on feature similarity. Extensive experimental evaluation on various open benchmarks shows that our method is competitive with a variety of popular architectures.
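
As a rough illustration of the propagation scheme described in this abstract, the sketch below masks a cosine-similarity matrix with the adjacency matrix, propagates features over several hops, and mean-pools the resulting scales. The choice of cosine similarity, the number of scales K, and all names are illustrative assumptions, not the authors' released implementation.

```python
# Minimal sketch of similarity-based multi-scale propagation (assumptions:
# cosine similarity, K hops, NumPy only); not the authors' MAGN code.
import numpy as np

def feature_similarity_matrix(X, A):
    """Cosine similarity of original features, kept only for adjacent node pairs."""
    Xn = X / (np.linalg.norm(X, axis=1, keepdims=True) + 1e-12)
    S = (Xn @ Xn.T) * A                                   # mask: keep edge entries only
    return S / (S.sum(axis=1, keepdims=True) + 1e-12)     # row-normalize neighbor weights

def multi_scale_propagate(X, A, K=3):
    """Propagate features over 1..K hops and aggregate the scales by mean pooling."""
    S = feature_similarity_matrix(X, A)
    outputs, H = [], X
    for _ in range(K):
        H = S @ H                                         # one more hop of weighted propagation
        outputs.append(H)
    return np.mean(outputs, axis=0)                       # mean-pooling over the K scales

# Toy usage: 4 nodes, 2 features, a small chain graph
X = np.array([[1.0, 0.0], [0.9, 0.1], [0.0, 1.0], [0.1, 0.9]])
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
print(multi_scale_propagate(X, A).shape)                  # (4, 2)
```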


2020 · Vol 79 (37-38) · pp. 27057-27074 · Author(s): Qiang Gao, Chu-han Wang, Zhe Wang, Xiao-lin Song, En-zeng Dong, ...

Author(s): Reinald Kim Amplayo, Seung-won Hwang, Min Song

Word sense induction (WSI), the task of automatically discovering the multiple senses or meanings of a word, has three main challenges: domain adaptability, novel sense detection, and sense granularity flexibility. While current latent variable models are known to address the first two challenges, they are not flexible with respect to sense granularity, which differs greatly across words, from aardvark with one sense to play with over 50 senses. Current models either require hyperparameter tuning or nonparametric induction of the number of senses, both of which we find to be ineffective. Thus, we aim to eliminate these requirements and solve the sense granularity problem by proposing AutoSense, a latent variable model based on two observations: (1) senses are represented as a distribution over topics, and (2) senses generate pairings between the target word and its neighboring word. These observations alleviate the problem by (a) discarding garbage senses and (b) additionally inducing fine-grained word senses. Results show substantial improvements over state-of-the-art models on popular WSI datasets. We also show that AutoSense is able to learn the appropriate sense granularity of a word. Finally, we apply AutoSense to the unsupervised author name disambiguation task, where the sense granularity problem is more evident, and show that AutoSense clearly outperforms competing models. We share our data and code here: https://github.com/rktamplayo/AutoSense.
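
To make the two stated observations concrete, the toy sketch below samples a (target word, neighboring word) pairing by first drawing a topic from a sense's topic distribution and then a neighboring word from that topic. The dimensions, Dirichlet priors, and all names are illustrative assumptions, not the released AutoSense model.

```python
# Toy generative sketch of the two observations above (hypothetical sizes and
# priors); see https://github.com/rktamplayo/AutoSense for the actual model.
import numpy as np

rng = np.random.default_rng(0)
n_senses, n_topics, vocab_size = 3, 5, 20

# (1) each sense is a distribution over topics
sense_topic = rng.dirichlet(np.ones(n_topics), size=n_senses)
# each topic is a distribution over neighboring words (illustrative assumption)
topic_word = rng.dirichlet(np.ones(vocab_size), size=n_topics)

def generate_pairing(sense, target="play"):
    """(2) a sense generates a pairing between the target word and a neighboring word."""
    topic = rng.choice(n_topics, p=sense_topic[sense])
    neighbor = rng.choice(vocab_size, p=topic_word[topic])  # word id in a toy vocabulary
    return target, neighbor

print([generate_pairing(0) for _ in range(3)])
```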


Author(s): Bo Chen, Jing Zhang, Jie Tang, Lingfan Cai, Zhaoyu Wang, ...

2010 · Vol 61 (9) · pp. 1853-1870 · Author(s): Ricardo G. Cota, Anderson A. Ferreira, Cristiano Nascimento, Marcos André Gonçalves, Alberto H. F. Laender

PLoS ONE · 2017 · Vol 12 (4) · pp. e0175798 · Author(s): Yang Song, Mei Yu, Gangyi Jiang, Feng Shao, Zongju Peng
