Gated Relational Graph Neural Network for Semi-supervised Learning on Knowledge Graphs

Author(s):  
Yuyan Chen ◽  
Lei Zou ◽  
Zongyue Qin
Entropy ◽  
2021 ◽  
Vol 23 (4) ◽  
pp. 403
Author(s):  
Xun Zhang ◽  
Lanyan Yang ◽  
Bin Zhang ◽  
Ying Liu ◽  
Dong Jiang ◽  
...  

The problem of extracting meaningful data through graph analysis spans a range of different fields, such as social networks, knowledge graphs, citation networks, the World Wide Web, and so on. As increasing amounts of structured data become available, the importance of being able to effectively mine and learn from such data continues to grow. In this paper, we propose the multi-scale aggregation graph neural network based on feature similarity (MAGN), a novel graph neural network defined in the vertex domain. Our model provides a simple and general semi-supervised learning method for graph-structured data, in which only a very small part of the data is labeled as the training set. We first construct a similarity matrix by calculating the similarity of the original features between all adjacent node pairs, and then generate a set of feature extractors that use the similarity matrix to perform multi-scale feature propagation on the graph. The outputs of multi-scale feature propagation are finally aggregated using a mean-pooling operation. Our method aims to improve the model's representation ability via multi-scale neighborhood aggregation based on feature similarity. Extensive experimental evaluation on various open benchmarks shows the competitive performance of our method compared to a variety of popular architectures.
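A minimal NumPy sketch may make the propagation scheme concrete. It assumes cosine similarity between original features (the abstract does not name the measure) and hypothetical function names; it illustrates the idea under those assumptions rather than reproducing the authors' implementation.

```python
import numpy as np

def similarity_matrix(X, A):
    """Cosine similarity of original features for all adjacent node pairs.

    X: (n, d) node feature matrix; A: (n, n) binary adjacency matrix.
    Returns a row-normalized S with S[i, j] > 0 only where A[i, j] = 1.
    """
    Xn = X / (np.linalg.norm(X, axis=1, keepdims=True) + 1e-12)
    S = np.maximum((Xn @ Xn.T) * A, 0.0)  # mask to edges, clip negatives
    return S / (S.sum(axis=1, keepdims=True) + 1e-12)

def magn_propagate(X, A, scales=(1, 2, 3)):
    """Multi-scale propagation aggregated with mean-pooling."""
    S = similarity_matrix(X, A)
    outputs, H = [], X
    for k in range(1, max(scales) + 1):
        H = S @ H                      # one similarity-weighted hop
        if k in scales:
            outputs.append(H)          # keep this scale's features
    return np.mean(outputs, axis=0)    # mean-pool across scales
```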


2021 ◽  
Vol 11 (15) ◽  
pp. 7104
Author(s):  
Xu Yang ◽  
Ziyi Huan ◽  
Yisong Zhai ◽  
Ting Lin

Nowadays, personalized recommendation based on knowledge graphs has become a research hot spot due to its good recommendation performance. In this paper, we study personalized recommendation based on knowledge graphs. First, we study knowledge graph construction methods and build a movie knowledge graph, using the Neo4j graph database to store and vividly display the movie data. Then, we study TransE, the classical translation model in knowledge graph representation learning, and improve the algorithm through a cross-training method that uses the neighboring feature structures of entities in the knowledge graph; we also improve the negative sampling process of TransE. The experimental results show that the improved TransE model can vectorize entities and relations more accurately. Finally, we construct recommendation models by combining knowledge graphs with learning to rank and neural networks. We propose a Bayesian personalized recommendation model based on knowledge graphs (KG-BPR) and a neural network recommendation model based on knowledge graphs (KG-NN). The semantic information of entities and relations in the knowledge graph is embedded into vector space using the improved TransE method, and we compare the results. The item entity vectors containing external knowledge information are integrated into the BPR model and the neural network, respectively, compensating for the lack of knowledge information about the items themselves. Experimental analysis on the MovieLens-1M dataset shows that the two proposed recommendation models effectively improve the accuracy, recall, F1, and MAP of the recommendations.
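For readers unfamiliar with the representation-learning step, here is a minimal NumPy sketch of classical TransE scoring with uniform negative sampling, the baseline this paper improves upon. The paper's cross-training and improved negative sampling are not specified here, so this shows only the standard formulation, and all names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def transe_score(h, r, t):
    """TransE energy: a valid triple (h, r, t) should satisfy h + r ≈ t."""
    return np.linalg.norm(h + r - t, ord=1)

def margin_ranking_loss(E, R, triples, n_entities, margin=1.0):
    """Margin-based ranking loss over corrupted negative samples.

    E: (n_entities, d) entity embeddings; R: (n_relations, d) relation
    embeddings; triples: iterable of (head, relation, tail) index triples.
    """
    loss = 0.0
    for h, r, t in triples:
        # Uniformly corrupt either the head or the tail entity.
        if rng.random() < 0.5:
            h_neg, t_neg = rng.integers(n_entities), t
        else:
            h_neg, t_neg = h, rng.integers(n_entities)
        pos = transe_score(E[h], R[r], E[t])
        neg = transe_score(E[h_neg], R[r], E[t_neg])
        loss += max(0.0, margin + pos - neg)  # push positives below negatives
    return loss
```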


2020 ◽  
Author(s):  
Jinxin Wei

Inspired by how children learn, an auto-encoder is designed that can be split into two parts, each of which works well separately. The top half is an abstract network, trained by supervised learning, which can be used for classification and regression. The bottom half is a concrete network, obtained as the inverse function of the abstract network and trained by self-supervised learning; it can generate the input of the abstract network from a concept or label. The network achieves its intended functionality in tests on the MNIST dataset with a convolutional neural network. A round function is added between the abstract network and the concrete network in order to obtain the representative generation of each class, and the generation ability can be increased by adding skip connections and negative feedback. Finally, the characteristics of the network are discussed: the input can be changed to any form by the encoder and changed back by the decoder through the inverse function, and the concrete network can be seen as memory stored in the parameters. Forgetting (Lethe) occurs when new knowledge is input, because the training process changes the parameters.
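A minimal PyTorch sketch of the described split may clarify the architecture. It uses fully connected layers and MNIST-sized inputs for brevity (the paper uses a convolutional network), all layer sizes are assumptions, and the placement of the round function reflects one reading of the abstract; the two halves would be trained separately as described, so the non-differentiable round is only applied at generation time.

```python
import torch
import torch.nn as nn

class SplitAutoEncoder(nn.Module):
    """Two separable halves: abstract (encoder) and concrete (decoder)."""

    def __init__(self, in_dim=784, n_classes=10):
        super().__init__()
        # Abstract network: trained with supervised labels, used to classify.
        self.abstract = nn.Sequential(
            nn.Linear(in_dim, 256), nn.ReLU(), nn.Linear(256, n_classes))
        # Concrete network: approximates the inverse mapping, trained
        # self-supervised to regenerate the input from a label/concept.
        self.concrete = nn.Sequential(
            nn.Linear(n_classes, 256), nn.ReLU(), nn.Linear(256, in_dim))

    def classify(self, x):
        return self.abstract(x)

    def generate(self, x):
        # Rounding the softmax code yields the representative (one-hot)
        # generation for a class; round has zero gradient, so it is used
        # only when generating, not while training the halves.
        code = torch.round(torch.softmax(self.abstract(x), dim=-1))
        return self.concrete(code)
```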


Author(s):  
Navin Tatyaba Gopal ◽  
Anish Raj Khobragade

Knowledge graphs (KGs) capture structured data and the relationships among a collection of entities and items, and they generally constitute an attractive source of information that can improve recommender systems. However, existing methods in this area rely on manual feature engineering and therefore do not allow for end-to-end training. This article proposes Knowledge Graph with Label Smoothness (KG-LS) to offer better suggestions for recommender systems. Our method computes user-specific item embeddings by first applying a trainable function that identifies important KG relationships for a given user. In this way, we transform the KG into a user-specific weighted graph and then apply a graph neural network to compute personalized item embeddings. To provide a better inductive bias, we use label smoothness, which assumes that adjacent items in the KG are likely to have similar user relevance labels/scores. Label smoothness provides regularization over the edge weights, and we show that it is equivalent to a label propagation scheme on the graph. We also develop an efficient implementation that exhibits strong scalability with respect to the size of the knowledge graph. Experiments on four datasets show that our method outperforms state-of-the-art baselines. The method also performs strongly in cold-start scenarios where user-item interactions are sparse.
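The label-propagation view of label smoothness can be illustrated with a short sketch. The NumPy code below is an assumption-laden illustration, not the authors' implementation: the function names are hypothetical, and the inner-product relevance score is one simple choice for the user-specific edge weighting.

```python
import numpy as np

def user_edge_weights(user_vec, relation_vecs, edges):
    """Score each KG edge by the relevance of its relation to one user."""
    return {(i, j): float(user_vec @ relation_vecs[r]) for i, j, r in edges}

def label_propagation(labels, observed, weights, n, n_iters=10):
    """Propagate observed item labels over the user-weighted graph.

    labels: (n,) array of relevance scores (zeros where unobserved);
    observed: (n,) boolean mask of items with known labels. The label-
    smoothness regularizer penalizes edge weights under which held-out
    labels are predicted poorly by exactly this propagation.
    """
    y = labels.astype(float).copy()
    for _ in range(n_iters):
        agg, deg = np.zeros(n), np.zeros(n)
        for (i, j), w in weights.items():
            agg[i] += w * y[j]
            deg[i] += w
        y = np.where(deg > 0, agg / np.maximum(deg, 1e-12), y)
        y[observed] = labels[observed]  # clamp the known labels
    return y
```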


2021 ◽  
Author(s):  
Long Ngo Hoang Truong ◽  
Edward Clay ◽  
Omar E. Mora ◽  
Wen Cheng ◽  
Maninder Kaur ◽  
...  

Author(s):  
Shaolei Wang ◽  
Zhongyuan Wang ◽  
Wanxiang Che ◽  
Sendong Zhao ◽  
Ting Liu

Spoken language is fundamentally different from written language in that it contains frequent disfluencies, i.e., parts of an utterance that are corrected by the speaker. Disfluency detection (removing these disfluencies) is desirable to clean the input for use in downstream NLP tasks. Most existing approaches to disfluency detection rely heavily on human-annotated data, which is scarce and expensive to obtain in practice. To tackle the training data bottleneck, in this work we investigate methods for combining self-supervised learning and active learning for disfluency detection. First, we construct large-scale pseudo training data by randomly adding or deleting words from unlabeled data and propose two self-supervised pre-training tasks: (i) a tagging task to detect the added noisy words and (ii) sentence classification to distinguish original sentences from grammatically incorrect sentences. We then combine these two tasks to jointly pre-train a neural network, which is subsequently fine-tuned on human-annotated disfluency detection training data. The self-supervised learning method can capture task-specific knowledge for disfluency detection and achieves better performance than other supervised methods when fine-tuned on a small annotated dataset. However, because the pseudo training data are generated with simple heuristics and cannot fully cover all disfluency patterns, there is still a performance gap compared to supervised models trained on the full training dataset. We further explore how to bridge this gap by integrating active learning during the fine-tuning process. Active learning strives to reduce annotation costs by choosing the most critical examples to label and can address the weakness of self-supervised learning with a small annotated dataset. We show that by combining self-supervised learning with active learning, our model is able to match state-of-the-art performance with just about 10% of the original training data on both the commonly used English Switchboard test set and a set of in-house annotated Chinese data.
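The pseudo-data construction can be illustrated with a short sketch. The Python code below is a guess at the spirit of the heuristic (random insertion and deletion); the probabilities, the noise vocabulary, and the tag set are assumptions, not the authors' exact procedure.

```python
import random

random.seed(0)

def make_pseudo_example(tokens, noise_vocab, p_add=0.15, p_del=0.15):
    """Corrupt a clean sentence into self-supervised training data.

    Returns (noisy_tokens, tags): inserted words are tagged 'D'
    (disfluent) and original words 'O', mirroring the tagging task;
    the (clean, noisy) pair itself serves the sentence-classification
    task of telling original from corrupted sentences.
    """
    noisy, tags = [], []
    for tok in tokens:
        if random.random() < p_del:       # randomly delete a word
            continue
        if random.random() < p_add:       # randomly insert a noisy word
            noisy.append(random.choice(noise_vocab))
            tags.append("D")
        noisy.append(tok)
        tags.append("O")
    return noisy, tags

clean = "i want to book a flight to boston".split()
noisy, tags = make_pseudo_example(clean, noise_vocab=clean)
```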


Author(s):  
Xiao Liu ◽  
Li Mian ◽  
Yuxiao Dong ◽  
Fanjin Zhang ◽  
Jing Zhang ◽  
...  
