Cautious Rule-Based Collective Inference

Author(s):  
Martin Svatos

Collective inference is a popular approach for solving tasks such as knowledge graph completion within the statistical relational learning field. Many solutions exist for this task; however, each is subject to some limitation, whether restriction to particular learning settings, a lack of model interpretability, or the absence of theoretical test error bounds. We propose an approach based on a cautious inference process that uses first-order rules and provides PAC-style bounds.
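
The abstract describes the approach only at a high level, so here is a minimal, hypothetical Python sketch of what cautious rule-based collective inference could look like: grounded first-order rules are applied iteratively, but a rule fires only when its estimated precision clears a confidence threshold. The rule, facts, precision estimates, and threshold below are illustrative assumptions, not the paper's actual model or its PAC-style analysis.

    # Hypothetical sketch: cautiously apply high-precision first-order rules
    # to a set of facts until no new facts can be derived (collective step).
    def cautious_inference(facts, rules, threshold=0.9, max_iters=10):
        """facts: set of (relation, subj, obj) triples.
        rules: list of (precision, body_fn, head_fn); body_fn yields variable
        bindings satisfied by the current facts, head_fn maps a binding to a
        new triple."""
        known = set(facts)
        for _ in range(max_iters):
            derived = set()
            for precision, body_fn, head_fn in rules:
                if precision < threshold:      # cautious: skip low-precision rules
                    continue
                for binding in body_fn(known):
                    triple = head_fn(binding)
                    if triple not in known:
                        derived.add(triple)
            if not derived:                    # fixpoint reached
                break
            known |= derived                   # derived facts feed later rounds
        return known

    # Example rule: worksAt(X, Y) & locatedIn(Y, Z) -> livesIn(X, Z)
    def body(known):
        for (r1, x, y) in known:
            if r1 != "worksAt":
                continue
            for (r2, y2, z) in known:
                if r2 == "locatedIn" and y2 == y:
                    yield (x, z)

    rules = [(0.95, body, lambda b: ("livesIn", b[0], b[1]))]
    facts = {("worksAt", "alice", "ACME"), ("locatedIn", "ACME", "Prague")}
    print(cautious_inference(facts, rules))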

AI Magazine
2015
Vol 36 (1)
pp. 65-74
Author(s):  
Jay Pujara
Hui Miao
Lise Getoor
William W. Cohen

Many information extraction and knowledge base construction systems are addressing the challenge of deriving knowledge from text. A key problem in constructing these knowledge bases from sources like the web is overcoming the erroneous and incomplete information found in millions of candidate extractions. To solve this problem, we turn to semantics, using ontological constraints between candidate facts to eliminate errors. In this article, we represent the desired knowledge base as a knowledge graph and introduce the problem of knowledge graph identification: collectively resolving the entities, labels, and relations present in the knowledge graph. Knowledge graph identification requires reasoning jointly over millions of extractions, posing a scalability challenge to many approaches. We use probabilistic soft logic (PSL), a recently introduced statistical relational learning framework, to implement an efficient solution to knowledge graph identification and present state-of-the-art results for knowledge graph construction while performing an order of magnitude faster than competing methods.
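
As a rough illustration of the soft-logic machinery the article builds on, the sketch below scores a single ontological constraint under Łukasiewicz semantics, the hinge-loss form that PSL uses. The predicates, truth values, and rule weight are made up for the example, and real PSL optimizes all truth values jointly (MAP inference in a hinge-loss Markov random field) rather than scoring one grounding in isolation.

    # Soft conjunction and hinge-loss distance to satisfaction, as in PSL.
    def lukasiewicz_and(a, b):
        """Lukasiewicz conjunction over truth values in [0, 1]."""
        return max(0.0, a + b - 1.0)

    def rule_penalty(weight, body_truth, head_truth):
        """Weighted hinge loss for the rule  body -> head."""
        return weight * max(0.0, body_truth - head_truth)

    # Candidate extractions with noisy confidences (soft truth values).
    truth = {
        ("Label", "kyoto", "City"): 0.9,
        ("Label", "kyoto", "Country"): 0.6,   # erroneous extraction
        ("MutuallyExclusive", "City", "Country"): 1.0,
    }

    # Ontological constraint: MutEx(L1, L2) & Label(E, L1) -> !Label(E, L2)
    w = 10.0
    body = lukasiewicz_and(truth[("MutuallyExclusive", "City", "Country")],
                           truth[("Label", "kyoto", "City")])
    head = 1.0 - truth[("Label", "kyoto", "Country")]  # soft negation = 1 - truth
    print(rule_penalty(w, body, head))  # positive penalty flags the conflict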


2020
Vol 34 (06)
pp. 10259-10266
Author(s):  
Sriram Srinivasan
Eriq Augustine
Lise Getoor

Statistical relational learning (SRL) frameworks allow users to create large, complex graphical models using a compact, rule-based representation. However, these models can quickly become prohibitively large and fail to fit into machine memory. In this work we address this issue by introducing a novel technique called tandem inference (ti). The primary idea of ti is to combine grounding and inference such that both processes happen in tandem. ti uses an out-of-core streaming approach to overcome memory limitations. Even when memory is not an issue, we show that our proposed approach performs inference faster while using less memory than existing approaches. To show the effectiveness of ti, we use a popular SRL framework called Probabilistic Soft Logic (PSL). We implement ti for PSL by proposing a gradient-based inference engine and a streaming approach to grounding. We show that we are able to run an SRL model with over 1B cliques in under nine hours while using only 10 GB of RAM; previous approaches required more than 800 GB for this model and were infeasible on common hardware. To the best of our knowledge, this is the largest SRL model ever run.
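
To make the "grounding and inference in tandem" idea more concrete, here is a schematic Python sketch, not PSL's actual engine: ground rules are consumed as a stream, and a projected gradient step is applied to the affected variables as each grounding arrives, so memory is bounded by the variable store rather than by the full set of groundings. The stream interface and the toy groundings are assumptions made for illustration.

    from collections import defaultdict

    def tandem_inference(grounding_stream, epochs=5, lr=0.05):
        """grounding_stream() yields (weight, var_ids, grad_fn) one grounding
        at a time, e.g. re-read from disk in the out-of-core setting."""
        x = defaultdict(lambda: 0.5)             # soft truth values, created lazily
        for _ in range(epochs):
            for weight, var_ids, grad_fn in grounding_stream():
                grads = grad_fn([x[v] for v in var_ids])
                for v, g in zip(var_ids, grads):
                    # projected gradient step keeps truth values in [0, 1]
                    x[v] = min(1.0, max(0.0, x[v] - lr * weight * g))
        return dict(x)

    # Toy stream of two groundings, each a hinge loss pushing one variable up
    # toward a target truth value (subgradient is -1 below the target, else 0).
    def toy_stream():
        yield 1.0, ["similar(a,b)"], lambda v: [-1.0 if v[0] < 0.9 else 0.0]
        yield 1.0, ["similar(b,c)"], lambda v: [-1.0 if v[0] < 0.7 else 0.0]

    print(tandem_inference(toy_stream))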


2021
Author(s):  
Caina Figueiredo
Joao Gabriel Lopes
Rodrigo Azevedo
Gerson Zaverucha
Daniel Sadoc Menasche
...  

2020
Vol 34 (10)
pp. 13935-13936
Author(s):  
Tato Ange
Nkambou Roger

This paper presents a simple and intuitive technique to accelerate the convergence of first-order optimization algorithms. The proposed solution modifies the update rule based on the variation in the direction of the gradient and on the previous step taken during training. Test results show that the technique has the potential to significantly improve the performance of existing first-order optimization algorithms.
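
The abstract gives only the intuition, so below is a hypothetical Python sketch of an update rule of this general kind: the step is enlarged when the new gradient direction agrees with the previous step and damped when it reverses. The scaling factors and the exact form of the rule are assumptions for illustration, not the authors' published rule.

    import numpy as np

    def modified_sgd_step(w, grad, prev_step, lr=0.01, boost=1.5, damp=0.5):
        # Compare the descent direction (-grad) with the previous step:
        # consistent coordinates get a larger step, reversals a damped one.
        consistent = np.sign(-grad) == np.sign(prev_step)
        scale = np.where(consistent, boost, damp)
        step = -lr * scale * grad
        return w + step, step

    # Usage on a toy quadratic f(w) = 0.5 * ||w||^2, whose gradient is w.
    w = np.array([2.0, -3.0])
    prev_step = np.zeros_like(w)
    for _ in range(20):
        w, prev_step = modified_sgd_step(w, w.copy(), prev_step)
    print(w)  # the iterates shrink toward the origin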


Author(s):  
Arti Shivram
Tushar Khot
Sriraam Natarajan
Venu Govindaraju
