LkeRec: Toward Lightweight End-to-End Joint Representation Learning for Building Accurate and Effective Recommendation

2022 ◽  
Vol 40 (3) ◽  
pp. 1-28
Author(s):  
Surong Yan ◽  
Kwei-Jay Lin ◽  
Xiaolin Zheng ◽  
Haosen Wang

Explicit and implicit knowledge about users and items has been used to describe complex and heterogeneous side information for recommender systems (RSs). Many existing methods use knowledge graph embedding (KGE) to learn the representation of a user-item knowledge graph (KG) in a low-dimensional space. In this article, we propose a lightweight end-to-end joint learning framework that fuses the tasks of KGE and RSs at the model level. Our method introduces a lightweight KG embedding approach that uses bidirectional bijection relation-type modeling to scale to large graphs and self-adaptive negative sampling to optimize negative sample generation. It further generates integrated views of users and items based on relation types to explicitly model users' preferences and items' features, respectively. Finally, we add virtual "recommendation" relations between the integrated views of users and items to model users' preferences for items, seamlessly integrating the RS with the user-item KG over a unified graph. Experimental results on multiple datasets and benchmarks show that our method achieves better recommendation accuracy than existing state-of-the-art methods. Complexity and runtime analysis suggests that our method has lower time and space complexity than most existing methods and improves scalability.
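
The abstract does not include reference code; as a rough, hedged sketch of how a joint objective of this kind can be wired together, the toy Python snippet below scores ordinary KG triples and virtual "recommendation" triples with a shared translation-style embedding over one unified graph. All names, dimensions, the scoring function, and the margin loss are illustrative assumptions, not LkeRec's bidirectional bijection model.

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 64

# Toy embedding tables: entities cover both users and items of the unified graph.
n_entities, n_relations = 1000, 10
entity_emb = rng.normal(scale=0.1, size=(n_entities, dim))
relation_emb = rng.normal(scale=0.1, size=(n_relations, dim))
rec_relation = rng.normal(scale=0.1, size=dim)   # virtual "recommendation" relation

def transe_score(h, r, t):
    """Translation-style plausibility score: smaller distance = more plausible."""
    return -np.linalg.norm(h + r - t)

def kg_score(head_id, rel_id, tail_id):
    return transe_score(entity_emb[head_id], relation_emb[rel_id], entity_emb[tail_id])

def rec_score(user_id, item_id):
    # The user-item preference is just one more triple over the same graph,
    # connected by the virtual "recommendation" relation.
    return transe_score(entity_emb[user_id], rec_relation, entity_emb[item_id])

def margin_loss(pos, neg, margin=1.0):
    """Pairwise ranking loss over a positive and a (negatively sampled) triple."""
    return max(0.0, margin - pos + neg)

# Joint objective: one KG term plus one recommendation term (illustrative only).
loss = margin_loss(kg_score(0, 1, 2), kg_score(0, 1, 3)) + \
       margin_loss(rec_score(10, 20), rec_score(10, 21))
print(loss)
```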

Symmetry ◽  
2019 ◽  
Vol 11 (3) ◽  
pp. 392 ◽  
Author(s):  
Zhiying Cao ◽  
Xinghao Qiao ◽  
Shuo Jiang ◽  
Xiuguo Zhang

Using semantic information can help to accurately find suitable services among a variety of available services with different semantics, and the semantic information of Web services can be described in detail in a Web service knowledge graph. In this paper, a Web service recommendation algorithm based on knowledge graph representation learning (kg-WSR) is proposed. The algorithm embeds the entities and relationships of the knowledge graph into a low-dimensional vector space. By calculating the distance between service entities in this space, relationship information about services, which is not considered by collaborative filtering approaches, is incorporated into the recommendation algorithm to enhance the accuracy of the results. The experimental results show that this algorithm not only effectively improves the accuracy, recall, and coverage of recommendation but also alleviates the cold-start problem to some extent.
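
As a minimal illustration (assuming Euclidean distance in the embedding space and a simple weighted blend, neither of which is specified by the abstract), the sketch below combines a placeholder collaborative-filtering score with KG-embedding similarity between a candidate service and the services in a user's history.

```python
import numpy as np

def kg_similarity(service_a, service_b, embeddings):
    """Closeness of two service entities in the learned embedding space."""
    d = np.linalg.norm(embeddings[service_a] - embeddings[service_b])
    return 1.0 / (1.0 + d)

def hybrid_score(user_history, candidate, embeddings, alpha=0.5):
    """Blend a simple CF signal with KG-embedding similarity to the user's history."""
    cf = np.mean([rating for _, rating in user_history])            # placeholder CF score
    kg = np.mean([kg_similarity(s, candidate, embeddings) for s, _ in user_history])
    return alpha * cf + (1 - alpha) * kg

rng = np.random.default_rng(1)
emb = rng.normal(size=(100, 32))               # toy service-entity embeddings
history = [(3, 4.0), (7, 5.0), (12, 3.5)]      # (service_id, rating) pairs
print(hybrid_score(history, candidate=42, embeddings=emb))
```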


2019 ◽  
Vol 15 (3) ◽  
pp. 346-358
Author(s):  
Luciano Barbosa

Purpose: Matching instances of the same entity, a task known as entity resolution, is a key step in the process of data integration. This paper proposes a deep learning network that learns different representations of Web entities for entity resolution.
Design/methodology/approach: To match Web entities, the proposed network learns the following representations of entities: embeddings, which are vector representations of the words in the entities in a low-dimensional space; convolutional vectors from a convolutional layer, which capture short-distance patterns in word sequences in the entities; and bag-of-words vectors, created by a bow layer that learns weights for words in the vocabulary based on the task at hand. Given a pair of entities, the similarity between their learned representations is used as a feature for a binary classifier that identifies a possible match. In addition to those features, the classifier also uses a modification of inverse document frequency for pairs, which identifies discriminative words in pairs of entities.
Findings: The proposed approach was evaluated on two commercial and two academic entity resolution benchmark data sets. The results show that the proposed strategy outperforms previous approaches on the commercial data sets, which are more challenging, and performs comparably to its competitors on the academic data sets.
Originality/value: No previous work has used a single deep learning framework to learn different representations of Web entities for entity resolution.
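
A hedged sketch of the two ingredients named in the approach: per-view similarity features fed to a binary classifier, and an IDF-style weight computed over pairs of entities. The toy views, token sets, and function names below are assumptions for illustration, not the paper's network.

```python
import numpy as np

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-9))

def pair_features(entity_a, entity_b, views):
    """One similarity feature per learned view (e.g. embedding, convolutional, bag-of-words)."""
    return np.array([cosine(view[entity_a], view[entity_b]) for view in views])

def pair_idf(word, pairs, tokens):
    """IDF-style weight counting how many *pairs* of entities share the word."""
    shared = sum(1 for a, b in pairs if word in tokens[a] and word in tokens[b])
    return np.log((1 + len(pairs)) / (1 + shared))

# Toy data: three learned views of four entities, two candidate pairs, token sets per entity.
rng = np.random.default_rng(2)
views = [rng.normal(size=(4, 16)) for _ in range(3)]
tokens = {0: {"canon", "eos", "camera"}, 1: {"canon", "camera"},
          2: {"nikon", "d750"}, 3: {"nikon", "lens"}}
pairs = [(0, 1), (2, 3)]

x = pair_features(0, 1, views)       # feature vector for a downstream binary classifier
w = pair_idf("canon", pairs, tokens) # discriminativeness of a word shared within pairs
print(x, w)
```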


2021 ◽  
Vol 2021 ◽  
pp. 1-13
Author(s):  
Luogeng Tian ◽  
Bailong Yang ◽  
Xinli Yin ◽  
Kai Kang ◽  
Jing Wu

Most previous embedding-based entity prediction methods lack training of local core relationships, leaving a deficiency in end-to-end training. To address this problem, we propose an end-to-end knowledge graph embedding representation method that combines local graph convolution with global cross learning, called the TransC graph convolutional network (TransC-GCN). Firstly, multiple local semantic spaces are partitioned according to the largest neighborhood. Secondly, a translation model maps the local entities and relationships into a cross vector, which serves as the input to the GCN. Thirdly, through training and learning of local semantic relations, the best entities and strongest relations are found, and the optimal ranking of entity-relation combinations is obtained by evaluating a posterior loss function based on mutual information entropy. Experiments show that the proposed method obtains local entity feature information more accurately through the convolution operation of a lightweight convolutional neural network, and that the maximum pooling operation helps to capture the strong signal in local features, thereby avoiding globally redundant features. Compared with mainstream triple prediction baseline models, the proposed algorithm effectively reduces computational complexity while achieving strong robustness, and it increases the inference accuracy of entities and relations by 8.1% and 4.4%, respectively. In short, this method not only effectively extracts the local node and relationship features of a knowledge graph but also satisfies the requirements of multilayer penetration and relation derivation in a knowledge graph.
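
For intuition only, the sketch below follows the general pattern described: entity and relation embeddings are combined translation-style into a "cross" input, passed through one normalized graph-convolution layer, and max-pooled. The toy adjacency, dimensions, and single-layer setup are assumptions rather than the TransC-GCN architecture.

```python
import numpy as np

rng = np.random.default_rng(3)
dim, n_nodes = 16, 5

entity_emb = rng.normal(size=(n_nodes, dim))
relation_emb = rng.normal(size=(n_nodes, dim))   # relation on each node's incident edge

# Translation-style "cross" input: entity embedding shifted by its relation embedding.
x = entity_emb + relation_emb

# One GCN layer over a toy local adjacency (self-loops added, symmetric normalization).
adj = np.eye(n_nodes) + (rng.random((n_nodes, n_nodes)) > 0.6).astype(float)
adj = np.maximum(adj, adj.T)
deg = adj.sum(axis=1)
norm_adj = adj / np.sqrt(np.outer(deg, deg))

weight = rng.normal(scale=0.1, size=(dim, dim))
hidden = np.maximum(norm_adj @ x @ weight, 0.0)  # ReLU(A_hat X W)

# Max pooling keeps the strongest local feature signal per dimension.
pooled = hidden.max(axis=0)
print(pooled.shape)
```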


Author(s):  
Yuhan Wang ◽  
Weidong Xiao ◽  
Zhen Tan ◽  
Xiang Zhao

Knowledge graphs are typical multi-relational structures consisting of many entities and relations. Nonetheless, existing knowledge graphs are still sparse and far from complete. To refine knowledge graphs, representation learning is used to embed entities and relations into low-dimensional spaces. Many existing knowledge graph embedding models focus on learning latent features under the closed-world assumption but ignore the fact that each knowledge graph changes over time. In this paper, we propose a knowledge graph representation learning model, called Caps-OWKG, which leverages the capsule network to capture the features of both known and unknown triplets in open-world knowledge graphs. It combines descriptive text and the knowledge graph to obtain a descriptive embedding and a structural embedding simultaneously, and then uses both embeddings to calculate the probability that a triplet is authentic. We evaluate the performance of Caps-OWKG on the link prediction task with two common datasets, FB15k-237-OWE and DBPedia50k. The experimental results are better than other baselines and achieve state-of-the-art performance.
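
As a simplified, hedged sketch of the fusion step only: the snippet below concatenates descriptive (text) and structural embeddings of the head and tail entities and scores triplet authenticity with a logistic function. The actual model replaces this linear scorer with a capsule network, which is omitted here; all shapes and names are assumptions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def triplet_probability(head_txt, head_struct, rel, tail_txt, tail_struct, w):
    """Fuse descriptive (text) and structural embeddings, then score triplet plausibility."""
    head = np.concatenate([head_txt, head_struct])
    tail = np.concatenate([tail_txt, tail_struct])
    features = np.concatenate([head, rel, tail])
    return sigmoid(features @ w)   # stand-in for the capsule-network scorer

rng = np.random.default_rng(4)
d_txt, d_struct, d_rel = 8, 8, 16
w = rng.normal(scale=0.1, size=2 * (d_txt + d_struct) + d_rel)

p = triplet_probability(rng.normal(size=d_txt), rng.normal(size=d_struct),
                        rng.normal(size=d_rel),
                        rng.normal(size=d_txt), rng.normal(size=d_struct), w)
print(p)
```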


2018 ◽  
Author(s):  
Peng Xie ◽  
Mingxuan Gao ◽  
Chunming Wang ◽  
Pawan Noel ◽  
Chaoyong Yang ◽  
...  

AbstractCharacterization of individual cell types is fundamental to the study of multicellular samples such as tumor tissues. Single-cell RNAseq techniques, which allow high-throughput expression profiling of individual cells, have significantly advanced our ability of this task. Currently, most of the scRNA-seq data analyses are commenced with unsupervised clustering of cells followed by visualization of clusters in a low-dimensional space. Clusters are often assigned to different cell types based on canonical markers. However, the efficiency of characterizing the known cell types in this way is low and limited by the investigator[s] knowledge. In this study, we present a technical framework of training the expandable supervised-classifier in order to reveal the single-cell identities based on their RNA expression profiles. Using multiple scRNA-seq datasets we demonstrate the superior accuracy, robustness, compatibility and expandability of this new solution compared to the traditional methods. We use two examples of model upgrade to demonstrate how the projected evolution of the cell-type classifier is realized.
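
A minimal sketch of the supervised, expandable workflow the abstract describes, assuming a plain scikit-learn logistic regression as a stand-in classifier: train on labelled expression profiles, then retrain after a new cell type is annotated. The toy data and model choice are illustrative, not the authors' framework.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(5)

# Toy expression matrix: 300 cells x 50 genes, three known cell types.
X = rng.poisson(2.0, size=(300, 50)).astype(float)
y = rng.integers(0, 3, size=300)

clf = LogisticRegression(max_iter=1000).fit(X, y)
print(clf.predict(X[:5]))                     # predicted cell-type labels

# "Expanding" the classifier: add a newly annotated cell type and retrain.
X_new = rng.poisson(5.0, size=(100, 50)).astype(float)
y_new = np.full(100, 3)
clf = LogisticRegression(max_iter=1000).fit(np.vstack([X, X_new]),
                                            np.concatenate([y, y_new]))
print(clf.classes_)
```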


Mathematics ◽  
2021 ◽  
Vol 9 (15) ◽  
pp. 1767
Author(s):  
Xin Xu ◽  
Yang Lu ◽  
Yupeng Zhou ◽  
Zhiguo Fu ◽  
Yanjie Fu ◽  
...  

Network representation learning aims to learn low-dimensional, compressible, and distributed representational vectors of nodes in networks. Because obtaining label information for nodes is expensive, many unsupervised network representation learning methods have been proposed, among which random walk strategies are widely utilized. However, existing random walk based methods face several challenges: (1) they do not sufficiently explain what network knowledge is captured in the sampled walking paths; (2) they suffer adverse effects from the mixture of different kinds of information in networks; and (3) methods that depend on hyper-parameters generalize poorly across different networks. This paper proposes an information-explainable, random walk based, unsupervised network representation learning framework named Probabilistic Accepted Walk (PAW) that obtains network representations from the perspective of the stationary distribution of networks. In the framework, we design two stationary distributions based on nodes' self-information and the local information of networks to guide the proposed random walk strategy, which learns representational vectors of networks through sampled paths of nodes. Extensive experimental results demonstrate that PAW obtains more expressive representations than six other widely used unsupervised network representation learning baselines on four real-world networks in single-label and multi-label node classification tasks.
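
A hedged sketch of an acceptance-guided random walk in the spirit of PAW, assuming a degree-based self-information distribution as the stationary target and a Metropolis-style acceptance rule; the paper's two distributions and sampling details differ from this toy.

```python
import numpy as np

def self_information_distribution(adj):
    """Toy stationary target: rarer (low-degree) nodes carry more self-information."""
    deg = adj.sum(axis=1)
    info = -np.log(deg / deg.sum())
    return info / info.sum()

def probabilistic_accepted_walk(adj, start, length, rng):
    """Random walk whose moves are accepted with probability min(1, pi[next]/pi[current])."""
    pi = self_information_distribution(adj)
    path, current = [start], start
    while len(path) < length:
        neighbors = np.flatnonzero(adj[current])
        proposal = rng.choice(neighbors)
        if rng.random() < min(1.0, pi[proposal] / pi[current]):
            current = proposal
        path.append(current)
    return path

rng = np.random.default_rng(6)
adj = (rng.random((20, 20)) > 0.7).astype(float)
adj = np.maximum(adj, adj.T)
np.fill_diagonal(adj, 1.0)                     # guarantee every node has a neighbor
print(probabilistic_accepted_walk(adj, start=0, length=10, rng=rng))
```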


2020 ◽  
pp. 114164
Author(s):  
Adnan Zeb ◽  
Anwar Ul Haq ◽  
Defu Zhang ◽  
Junde Chen ◽  
Zhiguo Gong

2021 ◽  
Vol 21 (3) ◽  
pp. 1-15
Author(s):  
Guangwei Gao ◽  
Dong Zhu ◽  
Huimin Lu ◽  
Yi Yu ◽  
Heyou Chang ◽  
...  

Representation-learning-based super-resolution methods for facial images have become very effective due to their efficiency. The key problem in facial image super-resolution is to reveal the latent relationship between low-resolution (LR) and the corresponding high-resolution (HR) training patch pairs. To simultaneously utilize the contextual information of the target position and the manifold structure of the primitive HR space, in this work we design a robust context-patch facial image super-resolution scheme via kernel locality-constrained coupled-layer regression (KLC2LR) to obtain the desired HR version from the acquired LR image. KLC2LR uses contextual surrounding patches to represent the target patch and adds an HR-layer constraint to compensate for detail information. Additionally, KLC2LR acquires more high-frequency information by searching for nearest neighbors in the HR sample space. We also utilize a kernel function to map features from the original low-dimensional space into a high-dimensional one to capture potential nonlinear characteristics. Comparative experiments in both noisy and noiseless cases verify that the proposed methodology performs better than many existing facial image super-resolution methods.
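
For illustration, the sketch below performs basic locality-constrained patch regression: a target LR patch is reconstructed over its nearest LR training patches, and the same weights are applied to their HR counterparts. The kernel mapping, coupled-layer constraint, and contextual patches of KLC2LR are simplified away, so this is an assumption-laden toy rather than the proposed scheme.

```python
import numpy as np

def locality_constrained_weights(query, neighbors, lam=1e-3):
    """Reconstruction weights of a query patch over its nearest neighbors (LLE-style)."""
    diff = neighbors - query                    # k x d
    gram = diff @ diff.T + lam * np.eye(len(neighbors))
    w = np.linalg.solve(gram, np.ones(len(neighbors)))
    return w / w.sum()

def super_resolve_patch(lr_patch, lr_dict, hr_dict, k=5):
    """Reconstruct an HR patch from the HR counterparts of the k nearest LR patches."""
    dists = np.linalg.norm(lr_dict - lr_patch, axis=1)
    idx = np.argsort(dists)[:k]
    w = locality_constrained_weights(lr_patch, lr_dict[idx])
    return w @ hr_dict[idx]

rng = np.random.default_rng(7)
lr_dict = rng.normal(size=(200, 25))            # 5x5 LR training patches (flattened)
hr_dict = rng.normal(size=(200, 100))           # 10x10 HR counterparts
print(super_resolve_patch(rng.normal(size=25), lr_dict, hr_dict).shape)
```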


2021 ◽  
Vol 24 (4-5) ◽  
pp. 347-369
Author(s):  
Zaiqiao Meng ◽  
Richard McCreadie ◽  
Craig Macdonald ◽  
Iadh Ounis

Representation learning has been widely applied in real-world recommendation systems to capture the features of both users and items. Existing grocery recommendation methods represent each user and item by a single deterministic point in a low-dimensional continuous space, which limits the expressive ability of their embeddings and results in recommendation performance bottlenecks. In addition, existing representation learning methods for grocery recommendation consider items (products) only as independent entities, neglecting other valuable side information such as textual descriptions and categorical data. In this paper, we propose the Variational Bayesian Context-Aware Representation (VBCAR) model for grocery recommendation. VBCAR is a novel variational Bayesian model that learns distributional representations of users and items by leveraging basket context information from historical interactions. The model is also extendable to leverage side information by encoding contextual features into representations via the inference encoder. We conduct extensive experiments on three real-world grocery datasets to assess the effectiveness of our model as well as the impact of different construction strategies for item side information. Our results show that VBCAR outperforms the current state-of-the-art grocery recommendation models, while integrating item side information (especially the categorical features together with the textual information of items) yields further significant performance gains. Furthermore, we demonstrate through analysis that our model effectively encodes similarities between product types, which we argue is the primary reason for the observed effectiveness gains.
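
As a rough sketch of what "distributional representations" means in practice, the snippet below gives each user and item a Gaussian embedding (mean and log-variance), samples via the reparameterization trick, and scores an interaction with an inner product plus a KL regularizer. The encoder, basket context, and side-information handling of VBCAR are not modeled; all shapes and names are assumptions.

```python
import numpy as np

rng = np.random.default_rng(8)
dim, n_users, n_items = 16, 50, 80

# Each user/item is a Gaussian, not a single point: a mean and a log-variance vector.
user_mu, user_logvar = rng.normal(size=(n_users, dim)), rng.normal(size=(n_users, dim)) * 0.1
item_mu, item_logvar = rng.normal(size=(n_items, dim)), rng.normal(size=(n_items, dim)) * 0.1

def sample(mu, logvar):
    """Reparameterization trick: z = mu + sigma * eps."""
    return mu + np.exp(0.5 * logvar) * rng.normal(size=mu.shape)

def preference_score(user_id, item_id):
    """Inner product of sampled user and item embeddings as an interaction score."""
    z_u = sample(user_mu[user_id], user_logvar[user_id])
    z_i = sample(item_mu[item_id], item_logvar[item_id])
    return float(z_u @ z_i)

def kl_to_standard_normal(mu, logvar):
    """KL(q || N(0, I)) regularizer typically used by variational models."""
    return 0.5 * np.sum(np.exp(logvar) + mu**2 - 1.0 - logvar)

print(preference_score(3, 7), kl_to_standard_normal(user_mu[3], user_logvar[3]))
```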

