Statistical Relational Learning
Recently Published Documents


TOTAL DOCUMENTS: 81 (five years: 15)
H-INDEX: 11 (five years: 1)

2021
Author(s): Sriram Srinivasan, Charles Dickens, Eriq Augustine, Golnoosh Farnadi, Lise Getoor

Abstract: Statistical relational learning (SRL) frameworks are effective at defining probabilistic models over complex relational data. They often use weighted first-order logical rules where the weights of the rules govern probabilistic interactions and are usually learned from data. Existing weight learning approaches typically attempt to learn a set of weights that maximizes some function of data likelihood; however, this does not always translate to optimal performance on a desired domain metric, such as accuracy or F1 score. In this paper, we introduce a taxonomy of search-based weight learning approaches for SRL frameworks that directly optimize weights on a chosen domain performance metric. To effectively apply these search-based approaches, we introduce a novel projection, referred to as scaled space (SS), that is an accurate representation of the true weight space. We show that SS removes redundancies in the weight space and captures the semantic distance between the possible weight configurations. In order to improve the efficiency of search, we also introduce an approximation of SS which simplifies the process of sampling weight configurations. We demonstrate these approaches on two state-of-the-art SRL frameworks: Markov logic networks and probabilistic soft logic. We perform an empirical evaluation on five real-world datasets, evaluating each on two different metrics, and compare against four other weight learning approaches. Our experimental results show that our proposed search-based approaches outperform likelihood-based approaches and yield up to a 10% improvement across a variety of performance metrics. Further, we perform an extensive evaluation to measure the robustness of our approach to different initializations and hyperparameters. The results indicate that our approach is both accurate and robust.
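The core idea of search-based weight learning, stripped of the scaled-space projection, can be illustrated with a toy sketch: sample candidate weight configurations and keep the one that maximizes the domain metric (F1 here) directly, rather than a likelihood surrogate. The `predict` model below is hypothetical, a weighted vote over rule "firings", and stands in for running SRL inference under a given weight configuration.

```python
import random

def f1_score(y_true, y_pred):
    """Standard F1 over binary labels."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    if tp == 0:
        return 0.0
    prec, rec = tp / (tp + fp), tp / (tp + fn)
    return 2 * prec * rec / (prec + rec)

def predict(weights, features):
    # Hypothetical stand-in for SRL inference: a normalized weighted
    # vote of rule firings, thresholded at 0.5.
    total = sum(weights)
    return [1 if sum(w * f for w, f in zip(weights, feats)) / total >= 0.5 else 0
            for feats in features]

def search_weights(features, labels, n_rules, n_samples=200, seed=0):
    # Random search: evaluate sampled weight configurations directly on
    # the domain metric and keep the best one seen.
    rng = random.Random(seed)
    best_w, best_f1 = None, -1.0
    for _ in range(n_samples):
        w = [rng.uniform(0.0, 1.0) for _ in range(n_rules)]
        score = f1_score(labels, predict(w, features))
        if score > best_f1:
            best_w, best_f1 = w, score
    return best_w, best_f1
```

The paper's contribution is in making this search tractable and well-behaved (via the scaled-space projection); plain random search over raw weights, as above, is only the baseline idea.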


2021
Author(s): Ling Li, Weibang Li, Lidong Zhu, Chengjie Li, Zhen Zhang

2021
Author(s): Caina Figueiredo, Joao Gabriel Lopes, Rodrigo Azevedo, Gerson Zaverucha, Daniel Sadoc Menasche, ...

2021
Author(s): Varun Embar, Sriram Srinivasan, Lise Getoor

Abstract: Statistical relational learning (SRL) and graph neural networks (GNNs) are two powerful approaches for learning and inference over graphs. Typically, they are evaluated in terms of simple metrics such as accuracy over individual node labels. Complex aggregate graph queries (AGQ) involving multiple nodes, edges, and labels are common in the graph mining community and are used to estimate important network properties such as social cohesion and influence. While graph mining algorithms support AGQs, they typically do not take into account uncertainty, or when they do, make simplifying assumptions and do not build full probabilistic models. In this paper, we examine the performance of SRL and GNNs on AGQs over graphs with partially observed node labels. We show that, not surprisingly, inferring the unobserved node labels as a first step and then evaluating the queries on the fully observed graph can lead to sub-optimal estimates, and that a better approach is to compute these queries as an expectation under the joint distribution. We propose a sampling framework to tractably compute the expected values of AGQs. Motivated by the analysis of subgroup cohesion in social networks, we propose a suite of AGQs that estimate the community structure in graphs. In our empirical evaluation, we show that by estimating these queries as an expectation, SRL-based approaches yield up to a 50-fold reduction in average error when compared to existing GNN-based approaches.
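The "query as an expectation" idea can be sketched in a few lines: instead of committing to a single hard labeling and evaluating the aggregate query once, sample labelings from the model's distribution and average the query value. This sketch simplifies by sampling each node label independently from its marginal; the paper's framework samples from the full joint distribution of an SRL model.

```python
import random

def expected_agq(marginals, edges, query, n_samples=1000, seed=0):
    # Monte Carlo estimate of E[query(labels)] over sampled labelings.
    # Independent per-node marginals are a simplifying assumption here;
    # a full SRL model would draw samples from the joint distribution.
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_samples):
        labels = {v: 1 if rng.random() < p else 0 for v, p in marginals.items()}
        total += query(labels, edges)
    return total / n_samples

def same_label_edge_fraction(labels, edges):
    # Example AGQ: fraction of edges whose endpoints share a label,
    # a simple proxy for subgroup cohesion.
    same = sum(1 for u, v in edges if labels[u] == labels[v])
    return same / len(edges)
```

Evaluating `same_label_edge_fraction` on a single most-likely labeling discards the uncertainty in the marginals; the sampled expectation keeps it, which is exactly the gap the paper measures.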


2020, Vol. 34 (06), pp. 10259-10266
Author(s): Sriram Srinivasan, Eriq Augustine, Lise Getoor

Statistical relational learning (SRL) frameworks allow users to create large, complex graphical models using a compact, rule-based representation. However, these models can quickly become prohibitively large and not fit into machine memory. In this work we address this issue by introducing a novel technique called tandem inference (ti). The primary idea of ti is to combine grounding and inference such that both processes happen in tandem. ti uses an out-of-core streaming approach to overcome memory limitations. Even when memory is not an issue, we show that our proposed approach is able to do inference faster while using less memory than existing approaches. To show the effectiveness of ti, we use a popular SRL framework called Probabilistic Soft Logic (PSL). We implement ti for PSL by proposing a gradient-based inference engine and a streaming approach to grounding. We show that we are able to run an SRL model with over 1B cliques in under nine hours using only 10 GB of RAM; previous approaches required more than 800 GB for this model and are infeasible on common hardware. To the best of our knowledge, this is the largest SRL model ever run.
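The tandem idea, grounding and inference interleaved rather than sequential, can be sketched abstractly: groundings are produced lazily by a generator and consumed one at a time by a gradient step, so no full set of groundings is ever held in memory. This is an illustrative simplification, not PSL's actual streaming engine; the `(variable, gradient_fn)` grounding representation is invented for the sketch.

```python
def ground_stream(rule_templates, facts):
    # Produce groundings lazily instead of materializing them all:
    # each template yields (variable, gradient_fn) pairs on demand.
    for template in rule_templates:
        for grounding in template(facts):
            yield grounding

def tandem_inference(rule_templates, facts, values, lr=0.1, epochs=10):
    # Interleave grounding and inference: each grounding triggers a
    # gradient update as soon as it is produced, then is discarded.
    for _ in range(epochs):
        for var, grad_fn in ground_stream(rule_templates, facts):
            values[var] -= lr * grad_fn(values)
            values[var] = min(1.0, max(0.0, values[var]))  # soft-logic values live in [0, 1]
    return values
```

Because `ground_stream` is a generator, peak memory is bounded by one grounding at a time, which is the property ti exploits at scale.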


2020, pp. 688-707
Author(s): Lediona Nishani, Marenglen Biba

People today make choices based on word of mouth, media, public opinion, surveys, etc. One of the most prominent techniques of recommender systems is collaborative filtering (CF), which utilizes the known preferences of several users to develop recommendations for other users. CF suffers from limitations such as the new-item problem, the new-user problem, and data sparsity, which can be mitigated by employing Statistical Relational Learning (SRL). This review chapter presents a comprehensive scientific survey, from basic and traditional techniques to state-of-the-art SRL algorithms, for collaborative filtering problems. The authors provide a comprehensive review of SRL for CF tasks and demonstrate strong evidence that SRL can be successfully applied in the recommender systems domain. Finally, the chapter concludes with a summary of the key issues that SRL tackles in the collaborative filtering area and suggests open issues for advancing this field of research.
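For context on the baseline that SRL approaches extend, classic user-based collaborative filtering can be sketched in a few lines: score a user's unseen items by the similarity-weighted ratings of the most similar users. This is the traditional technique the chapter starts from, not an SRL method; the data layout (`ratings` as nested dicts) is chosen for the sketch.

```python
import math

def cosine(a, b):
    # Cosine similarity between two sparse rating vectors (dicts).
    common = set(a) & set(b)
    if not common:
        return 0.0
    num = sum(a[i] * b[i] for i in common)
    den = (math.sqrt(sum(v * v for v in a.values()))
           * math.sqrt(sum(v * v for v in b.values())))
    return num / den

def recommend(ratings, user, k=2):
    # User-based CF: rank the target user's unseen items by the
    # similarity-weighted ratings of the k most similar users.
    sims = sorted(((cosine(ratings[user], ratings[u]), u)
                   for u in ratings if u != user), reverse=True)[:k]
    scores = {}
    for sim, u in sims:
        for item, r in ratings[u].items():
            if item not in ratings[user]:
                scores[item] = scores.get(item, 0.0) + sim * r
    return sorted(scores, key=scores.get, reverse=True)
```

The new-user and new-item problems are visible here directly: a user with no ratings has zero similarity to everyone, and an item nobody has rated can never be recommended, which is where relational side-information (and hence SRL) helps.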


Author(s): Martin Svatos

Collective inference is a popular approach for solving tasks such as knowledge graph completion within the statistical relational learning field. There are many existing solutions for this task; however, each is subject to some limitation, whether restriction to particular learning settings, lack of model interpretability, or absence of theoretical test-error bounds. We propose an approach based on a cautious inference process that uses first-order rules and provides PAC-style bounds.


Author(s): Sebastijan Dumancic, Alberto Garcia-Duran, Mathias Niepert

Many real-world domains can be expressed as graphs and, more generally, as multi-relational knowledge graphs. Though reasoning and learning with knowledge graphs have traditionally been addressed by symbolic approaches such as statistical relational learning, recent methods in (deep) representation learning have shown promising results for specialised tasks such as knowledge base completion. These approaches, also known as distributional, abandon the traditional symbolic paradigm by replacing symbols with vectors in Euclidean space. With few exceptions, symbolic and distributional approaches are explored in different communities and little is known about their respective strengths and weaknesses. In this work, we compare distributional and symbolic relational learning approaches on various standard relational classification and knowledge base completion tasks. Furthermore, we analyse the properties of the datasets and relate them to the performance of the methods in the comparison. The results reveal possible indicators that could help in choosing one approach over the other for particular knowledge graphs.
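The two paradigms being compared can be contrasted in miniature. A distributional model scores a triple geometrically, sketched below in the style of translation-based embeddings (an assumed example of the family, not necessarily the models the paper evaluates), while a symbolic model predicts a fact when some first-order rule's body is satisfied by the known facts.

```python
def distributional_score(h, r, t):
    # Translation-style scoring: a relation is a vector translation, so a
    # plausible triple has h + r close to t. Higher (less negative) is better.
    return -sum((hi + ri - ti) ** 2 for hi, ri, ti in zip(h, r, t)) ** 0.5

def symbolic_entails(facts, head, rules):
    # Symbolic scoring: `rules` maps a head atom to a list of rule bodies;
    # the head is predicted if every atom of some body is a known fact.
    return any(all(atom in facts for atom in body)
               for body in rules.get(head, []))
```

The trade-off the paper studies is visible even here: the distributional score is smooth and generalizes to unseen triples but is opaque, while the symbolic check is interpretable and comes with an explicit proof (the satisfied body) but only fires when its premises are known.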

