Markov Logic
Recently Published Documents

TOTAL DOCUMENTS: 241 (FIVE YEARS: 33)
H-INDEX: 22 (FIVE YEARS: 3)

2022, pp. 108158
Author(s): Zhimin Zhang, Tao Zhu, Dazhi Gao, Jiabo Xu, Hong Liu, ...

2021
Author(s): Michelangelo Diligenti, Francesco Giannini, Marco Gori, Marco Maggini, Giuseppe Marra

Neural-symbolic models bridge the gap between sub-symbolic and symbolic approaches, both of which have significant limitations. Sub-symbolic approaches, like neural networks, require a large amount of labeled data to be successful, whereas symbolic approaches, like logic reasoners, require only a small amount of prior domain knowledge but do not easily scale to large collections of data. This chapter presents a general approach to integrating learning and reasoning that is based on the translation of the available prior knowledge into an undirected graphical model. Potentials on the graphical model are designed to accommodate dependencies among random variables by means of a set of trainable functions, like those computed by neural networks. The resulting neural-symbolic framework can effectively leverage the training data, when available, while exploiting high-level logic reasoning in a certain domain of discourse. Although exact inference is intractable within this model, different tractable models can be derived by making different assumptions. In particular, three models are presented in this chapter: Semantic-Based Regularization, Deep Logic Models, and Relational Neural Machines. Semantic-Based Regularization is a scalable neural-symbolic model that does not adapt the parameters of the reasoner, under the assumption that the provided prior knowledge is correct and must be exactly satisfied. Deep Logic Models preserve the scalability of Semantic-Based Regularization while providing a more flexible exploitation of logic knowledge by co-training the parameters of the reasoner during the learning procedure. Finally, Relational Neural Machines offer the fundamental advantages of replicating the effectiveness of training standard deep architectures from supervised data, while preserving the generality and expressive power of Markov Logic Networks when pure reasoning on symbolic data is required.
The coupling between learning and reasoning is very general, as any (deep) learner can be adopted and any output structure expressed via First-Order Logic can be integrated. However, exact inference within a Relational Neural Machine is still intractable, and different factorizations are discussed to increase the scalability of the approach.
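The core idea of Semantic-Based Regularization, translating a logic rule into a differentiable penalty added to the supervised loss, can be sketched in a few lines. This is a toy illustration, not the chapter's implementation: the rule A(x) => B(x), its Łukasiewicz-style translation into the penalty max(0, p_A - p_B), and the weighting constant `lam` are all assumptions made for the example.

```python
import math

def supervised_loss(p, y):
    # binary cross-entropy for a single prediction p against label y
    return -(y * math.log(p) + (1 - y) * math.log(1 - p))

def rule_penalty(p_a, p_b):
    # degree to which the rule A(x) => B(x) is violated, using the
    # Lukasiewicz translation: the penalty is zero when p_b >= p_a
    return max(0.0, p_a - p_b)

def sbr_loss(preds_a, preds_b, labels_b, lam=0.5):
    # total loss = supervised term on labeled data + weighted logic term
    sup = sum(supervised_loss(p, y) for p, y in zip(preds_b, labels_b))
    logic = sum(rule_penalty(pa, pb) for pa, pb in zip(preds_a, preds_b))
    return sup + lam * logic

# Predictions that satisfy the rule incur only the supervised loss
loss = sbr_loss([0.2, 0.7], [0.9, 0.8], [1, 1])
```

Because the penalty is fixed while the predictors train, this matches the abstract's description of a reasoner whose parameters are not adapted.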


Author(s): Khan Mohammad Al Farabi, Somdeb Sarkhel, Sanorita Dey, Deepak Venugopal

2021, pp. 307-344
Author(s): Magy Seif El-Nasr, Truong Huy Nguyen Dinh, Alessandro Canossa, Anders Drachen

This chapter discusses more advanced methods for sequence analysis. These include probabilistic methods using classical planning, Bayesian Networks (BNs), Dynamic Bayesian Networks (DBNs), Hidden Markov Models (HMMs), Markov Logic Networks (MLNs), Markov Decision Processes (MDPs), and Recurrent Neural Networks (RNNs), specifically concentrating on LSTMs (Long Short-Term Memory networks). These techniques are powerful but, at this time, are mostly used in academia rather than in industry. Thus, the chapter takes a more academic approach, presenting the work and its application to games when possible. The techniques are important as they point to future directions for modeling and predicting players' strategies, actions, and churn. We believe these methods can be leveraged as the field advances and will have an impact on the industry. Please note that this chapter was developed in collaboration with several PhD students at Northeastern University, specifically Nathan Partlan, Madkour Abdelrahman Amr, and Sabbir Ahmad, who contributed greatly to this chapter and the case studies discussed.
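To make one of the listed techniques concrete, here is a minimal HMM forward algorithm computing the likelihood of a player-action sequence. The states ("explore", "fight"), observations, and all probabilities are invented for illustration; this is not a model from the chapter.

```python
def forward(obs, states, start_p, trans_p, emit_p):
    """Probability of an observation sequence under an HMM (forward algorithm)."""
    # initialize with start probabilities times emission of the first observation
    alpha = {s: start_p[s] * emit_p[s][obs[0]] for s in states}
    for o in obs[1:]:
        # propagate: sum over predecessor states, then emit the next observation
        alpha = {s: emit_p[s][o] * sum(alpha[r] * trans_p[r][s] for r in states)
                 for s in states}
    return sum(alpha.values())

# Hypothetical two-state model of player behavior
states = ("explore", "fight")
start_p = {"explore": 0.7, "fight": 0.3}
trans_p = {"explore": {"explore": 0.8, "fight": 0.2},
           "fight":   {"explore": 0.4, "fight": 0.6}}
emit_p = {"explore": {"move": 0.9, "attack": 0.1},
          "fight":   {"move": 0.2, "attack": 0.8}}

p = forward(["move", "attack"], states, start_p, trans_p, emit_p)
```

Comparing such sequence likelihoods across candidate models is one simple way the chapter's prediction tasks (strategy, churn) can be framed.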


Author(s): Nikos Katzouris, Georgios Paliouras, Alexander Artikis

Abstract: Complex Event Recognition (CER) systems detect event occurrences in streaming, time-stamped input using predefined event patterns. Logic-based approaches are of special interest in CER since, via Statistical Relational AI, they combine uncertainty-resilient reasoning about time and change with machine learning, thus alleviating the cost of manual event pattern authoring. We present a system based on Answer Set Programming (ASP), capable of probabilistic reasoning with complex event patterns in the form of weighted rules in the Event Calculus, whose structure and weights are learnt online. We compare our ASP-based implementation with a Markov Logic-based one and with a number of state-of-the-art batch learning algorithms on CER data sets for activity recognition, maritime surveillance, and fleet management. Our results demonstrate the superiority of our novel approach in terms of both efficiency and predictive performance. This paper is under consideration for publication in Theory and Practice of Logic Programming (TPLP).
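The flavor of "weighted rules" for event recognition can be sketched with a toy log-linear scorer: rules whose bodies hold on the current facts contribute their weight, and the score is squashed into a probability. This is only a hedged illustration of the general MLN-style idea the abstract references, not the paper's ASP system; the rules, weights, and fact strings below are invented.

```python
import math

def event_probability(rules, facts):
    # rules: list of (weight, condition) pairs, where condition is a
    # callable over the current fact set; satisfied rules add their weight
    score = sum(w for w, cond in rules if cond(facts))
    return 1.0 / (1.0 + math.exp(-score))  # logistic link

rules = [
    # a rule initiating a "moving(p1,p2)" complex event
    (1.5, lambda f: "close(p1,p2)" in f and "walking(p1)" in f),
    # a rule terminating it; negative weight pushes the probability down
    (-0.8, lambda f: "far(p1,p2)" in f),
]

facts = {"close(p1,p2)", "walking(p1)"}
p = event_probability(rules, facts)
```

In an online learner such as the one described, the weights themselves would be updated from the data stream rather than fixed by hand.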


2021
Author(s): Arnaud Nguembang Fadja, Fabrizio Riguzzi, Evelina Lamma

Abstract: Probabilistic logic programming (PLP) combines logic programs and probabilities. Due to its expressiveness and simplicity, it has been considered a powerful tool for learning and reasoning in relational domains characterized by uncertainty. Still, learning the parameters and the structure of general PLP is computationally expensive due to the inference cost. We have recently proposed a restriction of the general PLP language called hierarchical PLP (HPLP), in which clauses and predicates are hierarchically organized. HPLPs can be converted into arithmetic circuits or deep neural networks, and inference is much cheaper than for general PLP. In this paper we present algorithms for learning both the parameters and the structure of HPLPs from data. We first present an algorithm, called parameter learning for hierarchical probabilistic logic programs (PHIL), which performs parameter estimation of HPLPs using gradient descent and expectation maximization. We also propose structure learning of hierarchical probabilistic logic programming (SLEAHP), which learns both the structure and the parameters of HPLPs from data. Experiments were performed comparing PHIL and SLEAHP with state-of-the-art PLP and Markov Logic Network systems for parameter and structure learning, respectively: PHIL was compared with EMBLEM, ProbLog2 and Tuffy, and SLEAHP with SLIPCOVER, PROBFOIL+, MLB-BC, MLN-BT and RDN-B. The experiments on five well-known datasets show that our algorithms achieve similar and often better accuracies in a shorter time.
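The idea of compiling a hierarchical program into an arithmetic circuit (products for conjunctions, probabilistic sums/noisy-OR for disjoint clauses) and fitting the clause probabilities by gradient descent can be sketched on a two-clause toy. This is not PHIL itself: the circuit, the squared-error objective, and the numerical gradients are simplifications chosen for brevity, and the data are invented.

```python
def noisy_or(ps):
    # probabilistic sum: 1 - prod(1 - p_i)
    out = 1.0
    for q in ps:
        out *= (1.0 - q)
    return 1.0 - out

def circuit(p1, p2, x1, x2):
    # two clauses with probabilities p1, p2, firing on boolean inputs x1, x2
    return noisy_or([p1 * x1, p2 * x2])

def fit(data, lr=0.1, steps=5000):
    # gradient descent on squared error, with clause probs clamped to [0, 1]
    p1 = p2 = 0.5
    eps = 1e-4
    for _ in range(steps):
        def loss(a, b):
            return sum((circuit(a, b, x1, x2) - y) ** 2 for x1, x2, y in data)
        g1 = (loss(p1 + eps, p2) - loss(p1 - eps, p2)) / (2 * eps)
        g2 = (loss(p1, p2 + eps) - loss(p1, p2 - eps)) / (2 * eps)
        p1 = min(max(p1 - lr * g1, 0.0), 1.0)
        p2 = min(max(p2 - lr * g2, 0.0), 1.0)
    return p1, p2

# synthetic targets consistent with clause probabilities 0.9 and 0.1
data = [(1, 0, 0.9), (0, 1, 0.1), (1, 1, noisy_or([0.9, 0.1]))]
p1, p2 = fit(data)
```

Because the circuit is a smooth function of the clause probabilities, the same gradients that train a neural network recover the program's parameters here, which is the property the abstract's conversion to arithmetic circuits exploits.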


2021, Vol. ahead-of-print (ahead-of-print)
Author(s): Shanshan Wang, Jiahui Xu, Youli Feng, Meiling Peng, Kaijie Ma

Purpose: This study aims to overcome the problem of traditional association rules relying almost entirely on expert experience to set interest measures in mining. Second, this project can effectively handle the four types of rules present in a database at the same time: traditional association algorithms can only mine one or two types of rules and therefore cannot fully exploit the database knowledge in the decision-making process for library recommendation.
Design/methodology/approach: The authors proposed a Markov logic network method to reconstruct association rule-mining tasks for library recommendation, and compared the proposed method to the traditional Apriori, FP-Growth, Inverse, Sporadic and UserBasedCF algorithms on two library history data sets and the Chess and Accident data sets.
Findings: The method used in this project had two major advantages. First, the authors were able to mine the four types of rules in an integrated manner without having to set interest measures. In addition, because the mined relevance is represented as a network, decision-makers can use network visualization tools to fully understand the results of mining, both in library recommendation and in data sets from other fields.
Research limitations/implications: The time cost of the project is still high for large data sets. The authors will address this problem in the future by mapping books, items or attributes to a higher granularity to reduce the computational complexity.
Originality/value: The authors believe that knowledge of complex real-world problems can be well captured from a network perspective. This study can help researchers avoid setting interest metrics and comprehensively extract frequent, rare, positive and negative rules in an integrated manner.
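The four rule types the study mines jointly can be illustrated on plain transaction data: for items A and B, the positive rule A => B and the negative variants (A => not-B, not-A => B, not-A => not-B) can all be read off the same counts. This sketch only shows the classical confidence computation those rule types share; it is not the paper's Markov logic network formulation, and the transactions are invented.

```python
def confidence(transactions, antecedent, consequent, neg_ant=False, neg_con=False):
    # confidence of the (possibly negated) rule antecedent => consequent
    def holds(t, item, negated):
        return (item not in t) if negated else (item in t)
    matches = [t for t in transactions if holds(t, antecedent, neg_ant)]
    if not matches:
        return 0.0
    return sum(holds(t, consequent, neg_con) for t in matches) / len(matches)

T = [{"a", "b"}, {"a", "b"}, {"a"}, {"c"}]
pos = confidence(T, "a", "b")               # positive rule: a => b
neg = confidence(T, "a", "b", neg_con=True) # negative rule: a => not-b
```

The study's point is that a single network-based model can rank all four variants at once, instead of requiring a separate algorithm and hand-tuned interest threshold for each.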


Information, 2021, Vol. 12 (3), pp. 124
Author(s): Ping Zhong, Zhanhuai Li, Qun Chen, Boyi Hou, Murtadha Ahmed

In recent years, the Markov Logic Network (MLN) has emerged as a powerful tool for knowledge-based inference due to its ability to combine first-order logic inference and probabilistic reasoning. Unfortunately, current MLN solutions cannot efficiently support knowledge inference involving arithmetic expressions, which is required to model the interaction between logic relations and numerical values in many real applications. In this paper, we propose a probabilistic inference framework, called the Numerical Markov Logic Network (NMLN), to enable efficient inference of hybrid knowledge involving both logic and arithmetic expressions. We first introduce the hybrid knowledge rules, then define an inference model, and finally present a technique based on convex optimization for efficient inference. Built on a decomposable exp-loss function, the proposed inference model can process hybrid knowledge rules more effectively and efficiently than existing MLN approaches. Finally, we empirically evaluate the performance of the proposed approach on real data. Our experiments show that, compared to the state-of-the-art MLN solution, it achieves better prediction accuracy while significantly reducing inference time.
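What a "hybrid" rule mixing logic and arithmetic looks like can be sketched as follows: a weighted rule fires only when both its logical atoms and its numeric constraint hold on the data. This is a hedged illustration in the spirit of the abstract, not the NMLN inference model; the entity-resolution rule, its weight, and the similarity value are all invented.

```python
import math

def hybrid_score(rules, facts, values):
    # rules: (weight, atoms, numeric_test) triples; a rule contributes its
    # weight only if all logical atoms hold AND the numeric test passes
    score = 0.0
    for w, atoms, test in rules:
        if all(a in facts for a in atoms) and test(values):
            score += w
    return 1.0 / (1.0 + math.exp(-score))  # logistic link

rules = [
    # hypothetical rule: shared author and title similarity above 0.8
    # suggest the two records describe the same paper
    (2.0, {"same_author(x,y)"}, lambda v: v["title_sim"] > 0.8),
]

p = hybrid_score(rules, {"same_author(x,y)"}, {"title_sim": 0.93})
```

Plain MLNs would have to discretize the similarity score into logical atoms; letting the numeric constraint appear directly in the rule body is the modeling gap the NMLN targets.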

