AutoSF: Searching Scoring Functions for Knowledge Graph Embedding

Author(s):  
Yongqi Zhang ◽  
Quanming Yao ◽  
Wenyuan Dai ◽  
Lei Chen
Information ◽  
2021 ◽  
Vol 12 (4) ◽  
pp. 147
Author(s):  
Sameh K. Mohamed ◽  
Emir Muñoz ◽  
Vit Novacek

Knowledge graph embedding (KGE) models have become a popular means of making discoveries in knowledge graphs (e.g., RDF graphs) in an efficient and scalable manner. The key to the success of these models is their ability to learn low-rank vector representations for knowledge graph entities and relations. Despite the rapid development of KGE models, state-of-the-art approaches have mostly focused on new ways to represent embedding interaction functions (i.e., scoring functions). In this paper, we argue that the choice of other training components, such as the loss function, hyperparameters and negative sampling strategy, can also have a substantial impact on model efficiency. This area has been largely neglected by previous works, and our contribution moves towards closing this gap through a thorough analysis of possible choices of training loss functions, hyperparameters and negative sampling techniques. Finally, we investigate the effects of specific choices on the scalability and accuracy of knowledge graph embedding models.
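The training components discussed in this abstract (scoring function, loss function, negative sampling) can be illustrated with a minimal sketch. The function names, the TransE-style score, the margin ranking loss, and the uniform corruption scheme below are common baseline choices used for illustration, not the paper's exact setup.

```python
import numpy as np

rng = np.random.default_rng(0)

def transe_score(h, r, t):
    # TransE-style scoring function: a smaller distance between h + r
    # and t (i.e., a higher score) means a more plausible triple.
    return -np.linalg.norm(h + r - t)

def margin_ranking_loss(pos_score, neg_score, margin=1.0):
    # Pairwise hinge loss: push positive triples to score at least
    # `margin` higher than corrupted (negative) triples.
    return max(0.0, margin - pos_score + neg_score)

def uniform_negative_sample(triple, num_entities):
    # A common baseline negative sampling strategy: corrupt the head
    # or the tail entity uniformly at random.
    h, r, t = triple
    if rng.random() < 0.5:
        return (int(rng.integers(num_entities)), r, t)
    return (h, r, int(rng.integers(num_entities)))
```

Swapping the loss (e.g., for a logistic or self-adversarial loss) or the sampling scheme changes only these two functions, which is what makes a systematic comparison of training components feasible.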


Author(s):  
A-Yeong Kim ◽  
Hee-Guen Yoon ◽  
Seong-Bae Park ◽  
Se-Young Park ◽  
...  

Electronics ◽  
2021 ◽  
Vol 10 (12) ◽  
pp. 1407
Author(s):  
Peng Wang ◽  
Jing Zhou ◽  
Yuzhang Liu ◽  
Xingchen Zhou

Knowledge graph embedding aims to embed entities and relations into low-dimensional vector spaces. Most existing methods focus only on the triple facts in knowledge graphs, and models based on translation or distance measurement cannot fully represent complex relations. As well-constructed prior knowledge, entity types can be employed to learn the representations of entities and relations. In this paper, we propose a novel knowledge graph embedding model named TransET, which takes advantage of entity types to learn more semantic features. More specifically, circular convolution over the embeddings of an entity and its types is used to map the head and tail entities to type-specific representations; a translation-based scoring function is then used to learn the representations of triples. We evaluated our model on real-world datasets with two benchmark tasks, link prediction and triple classification. Experimental results demonstrate that it outperforms state-of-the-art models in most cases.
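The type-specific mapping described in the abstract can be sketched as a circular convolution of an entity embedding with a type embedding, followed by a TransE-style translation score. This is an illustrative reconstruction from the abstract alone; the function names and the FFT-based implementation are assumptions, not the authors' code.

```python
import numpy as np

def circular_convolution(a, b):
    # Circular (cyclic) convolution computed via FFT:
    # (a * b)[k] = sum_i a[i] * b[(k - i) mod d]
    return np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))

def type_specific(entity_emb, type_emb):
    # Map an entity to a type-specific representation by convolving
    # its embedding with the embedding of its type (illustrative).
    return circular_convolution(entity_emb, type_emb)

def triple_score(h, h_type, r, t, t_type):
    # Translation-based score over the type-specific representations:
    # plausible triples have h_c + r close to t_c.
    h_c = type_specific(h, h_type)
    t_c = type_specific(t, t_type)
    return -np.linalg.norm(h_c + r - t_c)
```

Using circular convolution keeps the type-specific representation in the same dimension as the entity embedding, so the standard translation-based score applies unchanged.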


Author(s):  
Wei Song ◽  
Jingjin Guo ◽  
Ruiji Fu ◽  
Ting Liu ◽  
Lizhen Liu

2021 ◽  
pp. 107181
Author(s):  
Yao Chen ◽  
Jiangang Liu ◽  
Zhe Zhang ◽  
Shiping Wen ◽  
Wenjun Xiong

2021 ◽  
Author(s):  
Shensi Wang ◽  
Kun Fu ◽  
Xian Sun ◽  
Zequn Zhang ◽  
Shuchao Li ◽  
...  
