On Completing Sparse Knowledge Base with Transitive Relation Embedding

Author(s):  
Zili Zhou ◽  
Shaowu Liu ◽  
Guandong Xu ◽  
Wu Zhang

Multi-relation embedding is a popular approach to knowledge base completion that learns embedding representations of entities and relations to compute the plausibility of missing triplets. The effectiveness of the embedding approach depends on the sparsity of the KB and degrades for infrequent entities that appear only a few times. This paper addresses this issue by proposing a new model that exploits entity-independent transitive relation patterns, namely Transitive Relation Embedding (TRE). The TRE model alleviates the sparsity problem when predicting on infrequent entities while enjoying the generalisation power of embedding. Experiments on three public datasets against seven baselines show the merits of TRE in terms of knowledge base completion accuracy as well as computational complexity.
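
A minimal sketch of the entity-independent idea, assuming an additive TransE-style composition of relation vectors (the paper's actual scoring function may differ); the relation names and dimensions are illustrative only:

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 16

# Toy relation vocabulary; in practice the embeddings are learned.
relations = ["born_in", "city_of", "nationality"]
rel_emb = {r: rng.normal(size=dim) for r in relations}

def transitive_score(r1, r2, r3):
    """Plausibility that relation r1 followed by r2 implies r3.

    Additive composition stands in for the paper's scoring function:
    the closer r1 + r2 is to r3, the more plausible the transitive
    pattern, independently of any particular entity.
    """
    diff = rel_emb[r1] + rel_emb[r2] - rel_emb[r3]
    return -np.linalg.norm(diff)

# Because the score depends only on relations, it can rank candidate
# triplets even when the entities involved were seen only a few times.
print(transitive_score("born_in", "city_of", "nationality"))
```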

Description logic gives us the ability to reason with acceptable computational complexity while retaining expressive power. Description logic can be complemented by defeasible logic to manage non-monotonic reasoning. In some domains, we need flexible reasoning and knowledge representation to deal with the dynamicity of those domains. In this paper, we present a DL representation for a small domain describing the connections between entities in a university publication system, to show how we can handle changeability in domain rules. Automated support can be provided on the basis of defeasible logical rules to represent typicality in the knowledge base and to resolve conflicts that might arise.
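
A minimal sketch of defeasible conflict resolution over a toy university-publication fact, assuming a simple priority-based defeat scheme (not the paper's formalism); all names are hypothetical:

```python
# Defeasible rules with priorities: a rule's conclusion holds unless a
# strictly higher-priority rule with the opposite conclusion also fires.

facts = {"professor(ann)", "on_sabbatical(ann)"}

# (priority, premises, conclusion, opposite_conclusion)
rules = [
    (1, {"professor(ann)"}, "publishes(ann)", "-publishes(ann)"),
    (2, {"on_sabbatical(ann)"}, "-publishes(ann)", "publishes(ann)"),
]

def defeasible_conclusions(facts, rules):
    fired = [(p, concl, neg) for p, prem, concl, neg in rules
             if prem <= facts]
    conclusions = set()
    for p, concl, neg in fired:
        # Defeated if a higher-priority rule concludes the opposite.
        defeated = any(q > p and c == neg for q, c, _ in fired)
        if not defeated:
            conclusions.add(concl)
    return conclusions

print(defeasible_conclusions(facts, rules))  # {'-publishes(ann)'}
```

The more specific sabbatical rule overrides the typical "professors publish" rule, which is the typicality handling the abstract describes.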


Author(s):  
RICARDO GONÇALVES ◽  
MATTHIAS KNORR ◽  
JOÃO LEITE

Forgetting, or variable elimination, is an operation that allows the removal, from a knowledge base, of middle variables no longer deemed relevant. In recent years, many different approaches for forgetting in Answer Set Programming have been proposed, in the form of specific operators, or classes of such operators, commonly following different principles and obeying different properties. Each such approach was developed to address some particular view on forgetting, aimed at obeying a specific set of properties deemed desirable in that view, but a comprehensive and uniform overview of all the existing operators and properties is missing. In this article, we thoroughly examine existing properties and (classes of) operators for forgetting in Answer Set Programming, drawing a complete picture of the landscape of these classes of forgetting operators, including many novel results on the relations between properties and operators, as well as considerations on concrete operators for computing the results of forgetting and their computational complexity. Our goal is to provide guidance to help users in choosing the operator most adequate for their application requirements.
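
A minimal sketch of classical propositional forgetting, the baseline that the ASP operators surveyed here generalize: forgetting an atom q from a knowledge base amounts to the disjunction of the knowledge base with q set to true and with q set to false. The encoding below is illustrative only; forgetting in answer set programs is considerably more subtle:

```python
from itertools import product

def models(formula, variables):
    """Enumerate truth assignments over `variables` satisfying `formula`."""
    return {vals for vals in product([False, True], repeat=len(variables))
            if formula(dict(zip(variables, vals)))}

kb = lambda v: (not v["p"] or v["q"]) and (not v["q"] or v["r"])  # p->q, q->r

def forget_q(v):
    # forget(kb, q) is equivalent to kb[q=true] OR kb[q=false].
    return kb({**v, "q": True}) or kb({**v, "q": False})

# After forgetting the middle variable q, the result still entails p -> r.
print(models(forget_q, ["p", "r"]))
```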


2003 ◽  
Vol 3 (6) ◽  
pp. 671-715 ◽  
Author(s):  
CHIAKI SAKAMA ◽  
KATSUMI INOUE

This paper introduces an abductive framework for updating knowledge bases represented by extended disjunctive programs. We first provide a simple transformation from abductive programs to update programs, which are logic programs specifying changes on abductive hypotheses. Then, extended abduction, introduced by the same authors as a generalization of traditional abduction, is computed by the answer sets of update programs. Next, different types of updates, namely view updates and theory updates, are characterized by abductive programs and computed by update programs. The task of consistency restoration is also realized as a special case of these updates. Each update problem is comparatively assessed from the viewpoint of computational complexity. The results of this paper provide a uniform framework for different types of knowledge base updates, and each update is computed using existing procedures of logic programming.
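
A minimal, purely propositional sketch of the abductive idea, assuming a toy Horn knowledge base: extended abduction searches for sets of abducible hypotheses to add or remove so that an observation becomes derivable. Names and the derivation check are illustrative, not the paper's update-program transformation:

```python
from itertools import chain, combinations

rules = {"wet": [{"rain"}, {"sprinkler"}]}   # wet <- rain ; wet <- sprinkler
abducibles = {"rain", "sprinkler"}
current_hypotheses = {"rain"}

def derives(facts, atom):
    """One-step Horn derivation check (enough for this flat toy example)."""
    return atom in facts or any(body <= facts for body in rules.get(atom, []))

def explanations(observation):
    subsets = chain.from_iterable(
        combinations(abducibles, k) for k in range(len(abducibles) + 1))
    for hyp in map(set, subsets):
        if derives(hyp, observation):
            # Report the change relative to the current hypotheses:
            # (atoms to add, atoms to remove), the essence of extended
            # abduction's ability to also *retract* hypotheses.
            yield hyp - current_hypotheses, current_hypotheses - hyp

for added, removed in explanations("wet"):
    print("add:", added, "remove:", removed)
```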


2005 ◽  
Vol 95 (5) ◽  
pp. 1355-1368 ◽  
Author(s):  
Enriqueta Aragones ◽  
Itzhak Gilboa ◽  
Andrew Postlewaite ◽  
David Schmeidler

People may be surprised to notice certain regularities that hold in knowledge they have had for some time. That is, they may learn without getting new factual information. We argue that this can be partly explained by computational complexity. We show that, given a knowledge base, finding a small set of variables that obtains a certain value of R² is computationally hard, in the sense in which this term is used in computer science. We discuss some of the implications of this result and of fact-free learning in general.
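
A minimal sketch of why the problem is hard, assuming a least-squares setting with hypothetical data: a naive search for a small variable subset reaching a target R² enumerates all subsets, which is exponential in the number of variables (an illustration of the combinatorics, not the paper's proof):

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(1)
n, p, k = 200, 12, 3
X = rng.normal(size=(n, p))
y = X[:, 0] + 2 * X[:, 5] - X[:, 9] + 0.1 * rng.normal(size=n)

def r_squared(cols):
    """R^2 of the least-squares fit of y on the chosen columns of X."""
    A = X[:, list(cols)]
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ beta
    return 1 - resid @ resid / ((y - y.mean()) @ (y - y.mean()))

# Brute force over all C(12, 3) = 220 subsets; the count explodes as p grows.
best = max(combinations(range(p), k), key=r_squared)
print(best, round(r_squared(best), 3))
```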


2021 ◽  
pp. 1-17
Author(s):  
Luping Liu ◽  
Meiling Wang ◽  
Xiaohai He ◽  
Linbo Qing ◽  
Jin Zhang

Joint extraction of entities and relations from unstructured text is an essential step in constructing a knowledge base. However, relational facts in such texts are often complicated, and most contain overlapping triplets, which keeps the joint extraction task challenging. This paper proposes a novel Sequence-to-Sequence (Seq2Seq) framework to handle the overlapping issue, modelling triplet extraction as a sequence generation task. Specifically, a unique cascade structure connects a transformer and a pointer network to extract entities and relations jointly. By this means, sequences can be generated at the triplet level, which speeds up the decoding process. Besides, a syntax-guided encoder explicitly integrates the sentence's syntactic structure into the transformer encoder, helping the encoder pay more accurate attention to syntax-related words. Extensive experiments were conducted on three public datasets, namely NYT24, NYT29, and WebNLG, and the results, compared against various baselines, show the validity of the model. In addition, when a pre-trained BERT model is employed as the encoder, performance is excellent: the F1 scores on the three datasets surpass the strongest baseline by 5.7%, 5.6%, and 4.4%.
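
A minimal numpy sketch (untrained, with hypothetical weights and shapes) of triplet-level decoding with a pointer mechanism as described above: each decoder step emits one relation label plus head and tail pointers into the sentence, so one step yields a whole triplet and overlapping triplets can reuse the same tokens:

```python
import numpy as np

rng = np.random.default_rng(0)
tokens = ["Paris", "is", "the", "capital", "of", "France"]
d = 8
enc = rng.normal(size=(len(tokens), d))       # encoder state per token
relations = ["capital_of", "located_in"]
W_rel = rng.normal(size=(len(relations), d))  # relation classifier
W_head = rng.normal(size=(d, d))              # head-pointer projection
W_tail = rng.normal(size=(d, d))              # tail-pointer projection

def decode_step(dec_state):
    """Emit one (relation, head token, tail token) triplet per step."""
    rel = relations[int((W_rel @ dec_state).argmax())]
    head = tokens[int((enc @ (W_head @ dec_state)).argmax())]
    tail = tokens[int((enc @ (W_tail @ dec_state)).argmax())]
    return rel, head, tail

# One step = one triplet, which is why triplet-level generation
# shortens the decoding sequence compared with token-by-token output.
print(decode_step(rng.normal(size=d)))
```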


Author(s):  
Lin Chen ◽  
Lei Xu ◽  
Shouhuai Xu ◽  
Zhimin Gao ◽  
Weidong Shi

We consider the electoral bribery problem in computational social choice. In this context, extensive studies have been carried out to analyze the computational vulnerability of various voting (or election) rules. However, essentially all prior studies assume a deterministic model in which each voter has an associated threshold value, used as follows: a voter takes a bribe and votes according to the attacker's (i.e., briber's) preference when the amount of the bribe is above the threshold, and declines the bribe otherwise (in which case the voter votes according to its own preference rather than the attacker's). In this paper, we initiate the study of a more realistic model in which each voter is associated with a willingness function rather than a fixed threshold value. The willingness function characterizes the likelihood that a bribed voter would vote according to the attacker's preference; we call this bribe-effect uncertainty. We characterize the computational complexity of the electoral bribery problem in this new model. In particular, we discover a dichotomy result: a certain mathematical property of the willingness function dictates whether or not computational hardness can serve as a deterrent to bribery attackers.
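
A minimal sketch of the willingness-function idea, assuming a logistic curve as a stand-in (the paper characterizes general mathematical properties of such functions rather than prescribing this one); the budget allocation and midpoints are toy values:

```python
import math

def willingness(bribe, midpoint, steepness=1.0):
    """Probability the voter votes as the attacker prefers, given a bribe."""
    return 1.0 / (1.0 + math.exp(-steepness * (bribe - midpoint)))

midpoints = [2.0, 5.0, 9.0]     # per-voter "price sensitivity" parameters
budget_split = [3.0, 5.0, 1.0]  # one possible allocation of a budget of 9

expected_flips = sum(willingness(b, m) for b, m in zip(budget_split, midpoints))
print(round(expected_flips, 3))

# The deterministic threshold model of prior work is recovered as the
# limiting case steepness -> infinity (a step function at the midpoint).
```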


2020 ◽  
Vol 49 (4) ◽  
pp. 622-642
Author(s):  
Samuel Chen ◽  
Shengyi Xie ◽  
Qingqiang Chen

To tackle specific problems in knowledge base completion, such as computational complexity and complex relations or nodes with high indegree or outdegree, an algorithm called IEAKBC (short for Integrated Embedding Approach for Knowledge Base Completion) is proposed. Entities and relations from triplets are first mapped into low-dimensional vector spaces, with each original triplet represented as a 3-column, k-dimensional matrix; features from different relations are then integrated into the head and tail entities, forming fused triplet matrices that serve as another input channel for convolution. In the CNN, feature maps are extracted by filters, concatenated, and weighted to produce output scores that discern whether the original triplet holds. Experiments show that IEAKBC holds certain advantages over other models; when scaling up to relatively larger datasets, its superiority stands out, especially on relations with high cardinalities. Finally, we apply IEAKBC to a personalized search application, comparing its performance with strong baselines to verify its practicality in real environments.
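
A minimal sketch of the input representation described above, assuming simplified dimensions and omitting the relation-fusion channel: a triplet (h, r, t) becomes a k x 3 matrix of embeddings, 1 x 3 filters slide over it, and the pooled features are weighted into a score. This is illustrative, not the published IEAKBC:

```python
import numpy as np

rng = np.random.default_rng(0)
k, n_filters = 10, 4
h, r, t = (rng.normal(size=k) for _ in range(3))  # embeddings (learned in practice)

triplet = np.stack([h, r, t], axis=1)            # k x 3 input matrix
filters = rng.normal(size=(n_filters, 1, 3))     # 1 x 3 convolution filters

# Slide each filter down the k rows, then max-pool per filter.
feature_maps = np.array([[f[0] @ triplet[i] for i in range(k)]
                         for f in filters])       # n_filters x k
features = feature_maps.max(axis=1)

# Weighted combination of pooled features scores triplet plausibility.
score = features @ rng.normal(size=n_filters)
print(float(score))
```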


2021 ◽  
Author(s):  
Ghassen Hamdi ◽  
Mohamed Nazih Omri

The lightweight description logic DL-Lite is one of the most important logics dedicated to applications that handle large volumes of data. Managing inconsistency, in order to effectively query inconsistent DL-Lite knowledge bases, is a topical issue. Since assertions (ABoxes) come from a variety of sources with varying degrees of reliability, confusion arises in hierarchical knowledge bases. As a consequence, the inclusion of new axioms is a main cause of inconsistency in this type of knowledge base. It is often too expensive to manually verify and validate all assertions. In this article, we study the problem of inconsistency in the DL-Lite family and propose a new algorithm to resolve inconsistencies in prioritized knowledge bases. We carried out an experimental study to analyze and compare the results obtained by the algorithm proposed in this work against the main algorithms studied in the literature. The results show that our algorithm outperforms the others with respect to standard performance measures, namely precision, recall, and F-measure.
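
A minimal sketch of one standard way to repair a prioritized ABox (illustrative; not necessarily the algorithm proposed in the article): walk the strata from most to least reliable and keep an assertion only if it does not contradict what was already kept. Contradiction here is the toy case of an assertion versus its negation:

```python
strata = [
    ["Teaches(ann,kr)"],                    # most reliable assertions
    ["-Teaches(ann,kr)", "In(kr,cs)"],      # less reliable assertions
]

def negate(assertion):
    """Toy negation: flip a leading '-' sign."""
    return assertion[1:] if assertion.startswith("-") else "-" + assertion

repair = []
for stratum in strata:                      # priority order matters
    for assertion in stratum:
        if negate(assertion) not in repair:
            repair.append(assertion)

# The less reliable, conflicting assertion is dropped; the rest survive.
print(repair)   # ['Teaches(ann,kr)', 'In(kr,cs)']
```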


Author(s):  
Caralee Kassos ◽  
Harry Delugach

This paper proposes a strategy for representing constraints in a conceptual graph knowledge base. We describe a set of techniques that use these constraints to detect inconsistencies in a knowledge base by finding sets of nodes that violate them. The detection method is designed to be efficient: an algorithm was developed and analyzed, and its computational complexity was found to be polynomial with respect to the knowledge base size and the number of child nodes of each constraint node.
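
A minimal sketch of the flavor of such constraint checking, assuming hypothetical node and constraint structures: detection scans every (node, constraint) pair and evaluates each child condition, so the cost is polynomial in the knowledge base size and the number of child conditions per constraint, echoing the complexity claim:

```python
nodes = [
    {"type": "Course", "credits": 3},
    {"type": "Course", "credits": -1},   # violates the constraint below
]

constraints = [
    {"applies_to": "Course",
     "children": [lambda n: n["credits"] > 0]},  # child conditions
]

def inconsistent_nodes(nodes, constraints):
    """Return nodes violating some constraint; O(|nodes| * total children)."""
    bad = []
    for node in nodes:
        for c in constraints:
            if node["type"] == c["applies_to"]:
                if not all(check(node) for check in c["children"]):
                    bad.append(node)
    return bad

print(inconsistent_nodes(nodes, constraints))
```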


2018 ◽  
Vol 173 ◽  
pp. 03016 ◽  
Author(s):  
Jia Li ◽  
YongJian Yang

To address the rating sparsity problem, various review-based recommender systems have been developed in recent years. Most of them extract topics, opinions, and emotional polarity from reviews using text analysis and opinion mining techniques. According to existing research, review-based recommendation methods use review elements in the rating prediction model but underuse the actual ratings provided by users. In this paper, we adopt a lexicon-based opinion mining method to extract the opinions hidden in reviews and combine these opinions with the actual ratings. In addition, we embed a deep neural network model, which breaks through the limitations of traditional collaborative filtering. Experimental results on two public datasets indicate that this personalized model provides effective recommendation performance.
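
A minimal sketch of the lexicon-based step described above, assuming a hypothetical polarity lexicon and a simple linear blend (the paper instead feeds such signals into a deep neural model):

```python
lexicon = {"great": 1.0, "solid": 0.5, "boring": -1.0, "broken": -1.5}

def opinion_score(review):
    """Average lexicon polarity of the words present in the review."""
    words = review.lower().split()
    hits = [lexicon[w] for w in words if w in lexicon]
    return sum(hits) / len(hits) if hits else 0.0

def blended_rating(stars, review, alpha=0.7):
    # Map the opinion score onto the 1-5 star scale, then mix it with
    # the user's actual rating so neither signal is underused.
    opinion_as_stars = 3.0 + 2.0 * opinion_score(review)
    return alpha * stars + (1 - alpha) * opinion_as_stars

print(blended_rating(4, "great story but boring ending"))  # 3.7
```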

