Specifying Constraints for Detecting Inconsistencies in a Conceptual Graph Knowledge Base

Author(s):  
Caralee Kassos ◽  
Harry Delugach

This paper proposes a strategy for representing constraints in a conceptual graph knowledge base. We describe a set of techniques for using these constraints to detect inconsistencies in the knowledge base by finding sets of nodes that violate them. The detection method is designed to be efficient: an algorithm was developed and analyzed, and its computational complexity was found to be polynomial in the size of the knowledge base and in the number of child nodes per constraint node.
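To make the cost model concrete, here is a minimal sketch of such a detection loop. The data model (nodes carrying type sets, constraint nodes listing mutually disjoint child types) is invented for illustration and is not the paper's actual representation:

```python
# Illustrative sketch only: a toy data model for constraint checking.
# Each KB node carries a set of types; each constraint node lists child
# types that must not co-occur on a single node.

def find_inconsistencies(kb_nodes, constraints):
    """Return (node id, constraint id) pairs that violate a constraint.

    The nested loops cost O(|KB| * total constraint children), i.e.
    polynomial in knowledge base size and in the number of child nodes
    per constraint, matching the bound reported above.
    """
    violations = []
    for constraint in constraints:
        forbidden = constraint["disjoint_children"]
        for node in kb_nodes:
            if len(node["types"] & forbidden) > 1:  # disjoint types co-occur
                violations.append((node["id"], constraint["id"]))
    return violations

kb = [{"id": "n1", "types": {"Person", "Organization"}},
      {"id": "n2", "types": {"Person"}}]
constraints = [{"id": "c1", "disjoint_children": {"Person", "Organization"}}]
print(find_inconsistencies(kb, constraints))  # [('n1', 'c1')]
```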

Description logic (DL) gives us the ability to reason with acceptable computational complexity while retaining expressive power. The power of description logic can be complemented by defeasible logic to manage non-monotonic reasoning. Some domains require flexible reasoning and knowledge representation to deal with their dynamic nature. In this paper, we present a DL representation of a small domain describing the connections between different entities in a university publication system, and show how changeability in domain rules can be handled. Automated support can be provided on the basis of defeasible logical rules to represent typicality in the knowledge base and to resolve the conflicts that might arise.
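As a rough illustration of how defeasible rules capture typicality and exceptions in such a domain, consider the toy rule engine below. The rules, facts, and priority scheme are invented for this sketch; real defeasible logic and DL reasoners are far more general:

```python
# Hedged sketch of defeasible-style reasoning for a university
# publication domain. Rules are (name, preconditions, conclusion,
# priority); when several rules apply, the highest priority wins, so a
# more specific rule can override the typical case.
rules = [
    ("r1", {"professor"},             "publishes",     1),
    ("r2", {"professor", "on_leave"}, "not_publishes", 2),  # exception to r1
]

def conclude(facts):
    """Fire the highest-priority rule whose preconditions all hold."""
    applicable = [r for r in rules if r[1] <= facts]  # subset test
    if not applicable:
        return None
    return max(applicable, key=lambda r: r[3])[2]

print(conclude({"professor"}))               # publishes (typical case)
print(conclude({"professor", "on_leave"}))   # not_publishes (exception wins)
```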


Complexity ◽  
2020 ◽  
Vol 2020 ◽  
pp. 1-17
Author(s):  
Yildiz Aydin ◽  
Bekir Dizdaroğlu

Degradations frequently occur in archive films, which embody the historical and cultural heritage of a nation. In this study, the problem of detecting blotches, a defect commonly encountered in archive films, is addressed. A block-based blotch detection method is proposed based on a visual saliency map. The visual saliency map reveals prominent areas in an input frame and thus enables more accurate blotch detection. A simple and effective visual saliency map method is adopted in order to reduce the computational complexity of the detection phase. After the visual saliency maps of the given frames are obtained, blotch regions are estimated, without requiring motion estimation, by considering spatiotemporal patches around salient pixels that have passed a prethresholding step. Experimental results show that the proposed block-based blotch detection method significantly reduces false alarm rates compared with the HOG-feature (Yous and Serir, 2017), LBP-feature (Yous and Serir, 2017), and region-matching (Yous and Serir, 2016) methods presented in recent years.
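The pipeline shape (a cheap saliency map, pre-thresholding, and a temporal-impulsiveness test with no motion estimation) can be sketched as follows. The saliency proxy and thresholds here are placeholders, not the method evaluated in the paper:

```python
# Illustrative sketch: mark pixels that are both salient and temporally
# impulsive (strongly different from BOTH neighbouring frames), since a
# blotch typically appears in a single frame. No motion estimation.
import numpy as np

def saliency(frame):
    # crude saliency proxy: deviation from the global mean intensity
    return np.abs(frame - frame.mean())

def detect_blotches(prev_f, cur_f, next_f, sal_t=30.0, diff_t=40.0):
    candidates = saliency(cur_f) > sal_t                  # pre-thresholding
    impulsive = (np.abs(cur_f - prev_f) > diff_t) & \
                (np.abs(cur_f - next_f) > diff_t)
    return candidates & impulsive

rng = np.random.default_rng(0)
frames = rng.normal(128, 5, size=(3, 64, 64))
frames[1, 20:24, 20:24] = 255                             # synthetic blotch
print(detect_blotches(*frames).sum())                     # ~16 pixels flagged
```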


Author(s):  
Zili Zhou ◽  
Shaowu Liu ◽  
Guandong Xu ◽  
Wu Zhang

Multi-relation embedding is a popular approach to knowledge base completion that learns embedding representations of entities and relations in order to compute the plausibility of missing triplets. The effectiveness of the embedding approach depends on the sparsity of the KB and degrades for infrequent entities that appear only a few times. This paper addresses this issue by proposing a new model that exploits entity-independent transitive relation patterns, namely Transitive Relation Embedding (TRE). The TRE model alleviates the sparsity problem when predicting on infrequent entities while enjoying the generalisation power of embedding. Experiments on three public datasets against seven baselines show the merits of TRE in terms of both knowledge base completion accuracy and computational complexity.
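The entity-independent intuition behind transitive relation patterns can be shown with a simple counting sketch: if the path r1 followed by r2 frequently coincides with a direct relation r3 anywhere in the KB, that statistic can score candidate triplets even for rarely seen entities. This is a toy illustration of the pattern statistics, not the TRE model itself:

```python
# Toy KB: count how often the two-hop path (r1, r2) co-occurs with a
# direct relation r3, independently of which entities are involved.
from collections import defaultdict

triples = [("a", "born_in", "b"), ("b", "city_of", "c"),
           ("a", "nationality", "c"),
           ("d", "born_in", "e"), ("e", "city_of", "f")]

by_head = defaultdict(list)
for h, r, t in triples:
    by_head[h].append((r, t))

direct = {(h, t): r for h, r, t in triples}
pattern, support = defaultdict(int), defaultdict(int)
for h, r, t in triples:
    for r2, t2 in by_head[t]:              # follow a second hop from t
        support[(r, r2)] += 1
        if (h, t2) in direct:
            pattern[(r, r2, direct[(h, t2)])] += 1

# confidence that born_in followed by city_of implies nationality
key = ("born_in", "city_of", "nationality")
print(pattern[key] / support[("born_in", "city_of")])  # 0.5 on this toy KB
```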


Author(s):  
RICARDO GONÇALVES ◽  
MATTHIAS KNORR ◽  
JOÃO LEITE

Forgetting, or variable elimination, is an operation that allows the removal, from a knowledge base, of middle variables no longer deemed relevant. In recent years, many different approaches for forgetting in Answer Set Programming have been proposed, in the form of specific operators or classes of such operators, commonly following different principles and obeying different properties. Each such approach was developed to address some particular view on forgetting, aimed at obeying a specific set of properties deemed desirable in that view, but a comprehensive and uniform overview of all the existing operators and properties has been missing. In this article, we thoroughly examine existing properties and (classes of) operators for forgetting in Answer Set Programming, drawing a complete picture of the landscape of these classes of forgetting operators. This picture includes many novel results on the relations between properties and operators, as well as considerations on concrete operators for computing the results of forgetting and on computational complexity. Our goal is to provide guidance to help users choose the operator most adequate for their application requirements.
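For intuition about what "removing a middle variable" means, the toy sketch below eliminates an atom from a negation-free rule set by unfolding rules through its definitions. Real ASP forgetting operators must handle default negation and the many properties surveyed in the article; this only conveys the basic shape of the operation:

```python
# Toy forgetting-by-unfolding over definite rules (no default negation).
# A rule is (head, body_atoms); forgetting q from {q :- p. r :- q.}
# should yield {r :- p.}.

def forget(rules, atom):
    """Return the rule set with `atom` eliminated by unfolding."""
    defining = [body for head, body in rules if head == atom]
    out = []
    for head, body in rules:
        if head == atom:
            continue                       # drop rules defining the atom
        if atom not in body:
            out.append((head, body))
            continue
        rest = [a for a in body if a != atom]
        for d in defining:                 # splice each definition in
            out.append((head, rest + d))
    return out

prog = [("q", ["p"]), ("r", ["q"])]
print(forget(prog, "q"))                   # [('r', ['p'])]: q is forgotten
```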


2020 ◽  
Vol 2020 ◽  
pp. 1-11
Author(s):  
Jianxiong Pan ◽  
Neng Ye ◽  
Aihua Wang ◽  
Xiangming Li

The rapid booming of future smart city applications and the Internet of things (IoT) has raised higher demands on next-generation radio access technologies with respect to connection density, spectral efficiency (SE), transmission accuracy, and detection latency. Recently, faster-than-Nyquist (FTN) signaling and nonorthogonal multiple access (NOMA) have been regarded as promising technologies for achieving higher SE and massive connectivity, respectively. In this paper, we aim to exploit the joint benefits of FTN and NOMA by superimposing multiple FTN-based transmission signals on the same physical resources. Given the complicated intra- and inter-user interference introduced by the proposed transmission scheme, conventional detection methods suffer from high computational complexity. To this end, we develop a novel sliding-window detection method by incorporating state-of-the-art deep learning (DL) technology. Data-driven offline training is first applied to derive a near-optimal receiver for FTN-based NOMA, which is then deployed online to achieve high detection accuracy as well as low latency. Monte Carlo simulation results validate that the proposed detector achieves higher detection accuracy than minimum mean squared error-frequency domain equalization (MMSE-FDE) and can even approach the performance of the maximum-likelihood receiver with greatly reduced computational complexity, making it suitable for IoT applications in smart cities with low latency and high reliability requirements.
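The sliding-window idea can be sketched as a small neural detector that maps a window of received samples to the superimposed users' bits at the window centre. The architecture, window size, and user count below are invented for illustration; the paper's network and training setup may differ:

```python
# Hedged sketch of a sliding-window neural detector for superimposed
# signals: each window of complex received samples is classified into
# one bit per NOMA user. Untrained here; offline training would fit the
# weights to simulated FTN/NOMA channel data.
import torch
import torch.nn as nn

WIN = 9          # received samples per sliding window (assumption)
USERS = 2        # NOMA users superimposed on the same resources

detector = nn.Sequential(
    nn.Linear(2 * WIN, 64), nn.ReLU(),   # real+imag parts of the window
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, USERS),                # one logit per user's centre bit
)

def detect(rx, model):
    """Slide over the received complex sequence and hard-detect bits."""
    bits = []
    for k in range(len(rx) - WIN + 1):
        w = rx[k:k + WIN]
        x = torch.cat([w.real, w.imag]).float()
        bits.append((model(x) > 0).int())
    return torch.stack(bits)             # shape: (positions, USERS)

rx = torch.randn(100, dtype=torch.cfloat)    # stand-in for channel output
print(detect(rx, detector).shape)            # torch.Size([92, 2])
```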


2003 ◽  
Vol 3 (6) ◽  
pp. 671-715 ◽  
Author(s):  
CHIAKI SAKAMA ◽  
KATSUMI INOUE

This paper introduces an abductive framework for updating knowledge bases represented by extended disjunctive programs. We first provide a simple transformation from abductive programs to update programs, which are logic programs specifying changes on abductive hypotheses. Then, extended abduction, introduced by the same authors as a generalization of traditional abduction, is computed by the answer sets of update programs. Next, two different types of updates, view updates and theory updates, are characterized by abductive programs and computed by update programs. The task of consistency restoration is also realized as a special case of these updates. Each update problem is comparatively assessed from the viewpoint of computational complexity. The results of this paper provide a uniform framework for different types of knowledge base updates, with each update computed using existing procedures of logic programming.
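To give a feel for the abductive flavour of updating, the brute-force sketch below searches for a minimal set of abducible facts whose addition makes an observation derivable. The paper computes this via answer sets of update programs; this naive fixpoint closure is only for intuition:

```python
# Toy abduction: which abducible facts explain the observation "wet"?
from itertools import combinations

rules = [("wet", ["rain"]), ("wet", ["sprinkler"])]
abducibles = ["rain", "sprinkler"]

def closure(facts):
    """Derive all consequences of `facts` under the definite rules."""
    facts, changed = set(facts), True
    while changed:
        changed = False
        for head, body in rules:
            if head not in facts and all(b in facts for b in body):
                facts.add(head)
                changed = True
    return facts

def explain(observation, base_facts=()):
    for k in range(len(abducibles) + 1):        # prefer minimal hypotheses
        for hyp in combinations(abducibles, k):
            if observation in closure(set(base_facts) | set(hyp)):
                return set(hyp)
    return None

print(explain("wet"))   # {'rain'}: one minimal explanation
```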


2005 ◽  
Vol 95 (5) ◽  
pp. 1355-1368 ◽  
Author(s):  
Enriqueta Aragones ◽  
Itzhak Gilboa ◽  
Andrew Postlewaite ◽  
David Schmeidler

People may be surprised to notice certain regularities that hold in knowledge they have possessed for some time. That is, they may learn without getting new factual information. We argue that this can be partly explained by computational complexity. We show that, given a knowledge base, finding a small set of variables that obtain a certain value of R² is computationally hard, in the sense in which this term is used in computer science. We discuss some of the implications of this result and of fact-free learning in general.
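The source of the hardness is the combinatorial search over variable subsets: the natural exact method enumerates them, which blows up as the knowledge base grows. The brute force below, feasible only at toy sizes, illustrates the problem being proved hard (details of the paper's reduction are not reproduced here):

```python
# Exhaustively search for the size-2 variable subset with the best R^2.
# With n variables and subsets of size k there are C(n, k) candidates,
# which is exponential in k; the paper shows no shortcut exists in
# general (the problem is computationally hard).
import numpy as np
from itertools import combinations

def r_squared(X, y):
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 8))
y = 2 * X[:, 3] - X[:, 6] + 0.1 * rng.normal(size=200)

best = max((r_squared(X[:, list(s)], y), s)
           for s in combinations(range(8), 2))
print(best)   # ~(0.997, (3, 6)): the true generating variables
```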


Author(s):  
Ning Wang ◽  
Xiangran Ren

Unlike tables in relational databases, web tables have no designated key attributes or entity columns, so it is difficult for computers to understand a table and associate it with a concept in a knowledge taxonomy. Existing techniques for entity column detection can only process tables with a single entity column, discarding tables that describe multiple concepts. In this paper, we propose a framework for identifying multiple entity columns in a web table. First, we annotate column labels for a web table with missing or noninformative labels based on the external knowledge base Probase. By detecting concept-attribute relationships between table columns and calculating the credibility of attribute dependencies, we construct a column dependency view for the table. Then, a column semantic intensity is calculated for each column, which depends on its connectivity in the column dependency view and the credibility of the attribute dependency relationships involving it. We identify all entity columns in the web table by iteratively selecting the primary entity column with the highest column semantic intensity and removing the columns describing the primary concept from the current column dependency view. The results of a comprehensive set of experiments indicate that our entity detection method is more effective than existing methods for both single- and multiple-concept tables.
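The iterative selection loop can be sketched as follows. The "semantic intensity" here is simply the credibility-weighted out-degree in the dependency view; both the score and the toy dependency data are invented for illustration, not the paper's exact measures:

```python
# Toy column dependency view: (determining column, dependent column)
# mapped to the credibility of that attribute dependency.
deps = {("person", "birthplace"): 0.9, ("person", "employer"): 0.8,
        ("company", "headquarters"): 0.85}

def intensity(col):
    """Crude semantic intensity: sum of outgoing dependency credibilities."""
    return sum(c for (a, b), c in deps.items() if a == col)

def entity_columns(columns):
    remaining, found = set(columns), []
    while any(a in remaining for a, _ in deps):
        primary = max(remaining, key=intensity)     # strongest determiner
        found.append(primary)
        governed = {b for (a, b) in deps if a == primary}
        remaining -= governed | {primary}           # peel off its concept
    return found

cols = ["person", "birthplace", "employer", "company", "headquarters"]
print(entity_columns(cols))   # ['person', 'company']: two entity columns
```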


2020 ◽  
Vol 49 (4) ◽  
pp. 622-642
Author(s):  
Samuel Chen ◽  
Shengyi Xie ◽  
Qingqiang Chen

To tackle specific problems in knowledge base completion, such as computational complexity and complex relations or nodes with high indegree or outdegree, an algorithm called IEAKBC (Integrated Embedding Approach for Knowledge Base Completion) is proposed. Entities and relations from triplets are first mapped into low-dimensional vector spaces, with each original triplet represented as a 3-column, k-dimensional matrix; then features from different relations are integrated into head and tail entities, forming fused triplet matrices that serve as another input channel for convolution. In the CNN, feature maps are extracted by filters, concatenated, and weighted to produce output scores that discern whether the original triplet holds. Experiments show that IEAKBC holds certain advantages over other models; when scaling up to relatively larger datasets, its superiority stands out, especially on relations with high cardinalities. Finally, we apply IEAKBC to a personalized search application, comparing its performance with strong baselines to verify its practicality in real environments.
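The scoring shape described above (embedding a triplet as a 3 x k matrix and convolving over it) can be sketched in the spirit of ConvKB-style models. The dimensions, the single input channel, and the omission of the fusion step are simplifying guesses, not IEAKBC's published architecture:

```python
# Minimal convolutional triplet scorer: stack the head, relation, and
# tail embeddings into a (3, k) matrix, convolve across the three rows,
# and map the feature maps to a single plausibility score.
import torch
import torch.nn as nn

class TripletScorer(nn.Module):
    def __init__(self, n_ent, n_rel, k=50, n_filters=8):
        super().__init__()
        self.ent = nn.Embedding(n_ent, k)
        self.rel = nn.Embedding(n_rel, k)
        self.conv = nn.Conv2d(1, n_filters, kernel_size=(3, 1))  # spans h, r, t
        self.out = nn.Linear(n_filters * k, 1)

    def forward(self, h, r, t):
        m = torch.stack([self.ent(h), self.rel(r), self.ent(t)], dim=1)  # (B,3,k)
        feat = torch.relu(self.conv(m.unsqueeze(1)))                     # (B,F,1,k)
        return self.out(feat.flatten(1))                                 # score

model = TripletScorer(n_ent=1000, n_rel=20)
h, r, t = torch.tensor([1]), torch.tensor([2]), torch.tensor([3])
print(model(h, r, t).shape)   # torch.Size([1, 1])
```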


Author(s):  
Huaxia Wang ◽  
Yongmei Cheng ◽  
Nan Liu ◽  
Shun Yao ◽  
Juanjuan Ma ◽  
...  

Under fixed imaging conditions, landmark selection methods based on feature traversal analysis have high computational complexity. The hierarchical statistical saliency detection method uses global statistical information for feature analysis, overcoming the computational cost of feature traversal analysis. The frequency-domain de-correlation method removes repeated patterns in the image by adaptive Gaussian filtering of the amplitude-frequency characteristics. In this paper, by combining the hierarchical statistical saliency detection method with the frequency-domain de-correlation method, a fast landmark selection algorithm based on saliency analysis is proposed. Based on this algorithm, an automatic landmark selection architecture for terrain matching navigation is constructed. Landmark points were selected in the Qinling Mountains and the Guangdong and Guangxi hills. The results show that, compared with feature- or pixel-based landmark selection methods, the landmark selection efficiency of the proposed method is improved by 2 to 3 orders of magnitude. The correct matching rates of candidate landmarks selected in the Qinling Mountains and the Guangdong and Guangxi hills are 73.9% and 88.3%, respectively.
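The two ingredients can be sketched together: a spectral-residual-style saliency map, in which Gaussian smoothing of the log-amplitude spectrum suppresses repeated patterns, followed by picking the most salient blocks as candidate landmarks. The specific saliency formulation and all parameters below are illustrative, not the paper's:

```python
# Spectral-residual-style saliency plus block-wise landmark picking.
import numpy as np
from scipy.ndimage import gaussian_filter

def spectral_saliency(img, sigma=3.0):
    F = np.fft.fft2(img)
    log_amp, phase = np.log(np.abs(F) + 1e-8), np.angle(F)
    residual = log_amp - gaussian_filter(log_amp, sigma)  # de-correlation step
    sal = np.abs(np.fft.ifft2(np.exp(residual + 1j * phase))) ** 2
    return gaussian_filter(sal, 2.0)

def select_landmarks(img, block=16, top_n=5):
    sal = spectral_saliency(img)
    h, w = img.shape
    scores = [(sal[y:y+block, x:x+block].mean(), (y, x))
              for y in range(0, h - block + 1, block)
              for x in range(0, w - block + 1, block)]
    return [pos for _, pos in sorted(scores, reverse=True)[:top_n]]

img = np.random.default_rng(2).random((128, 128))
print(select_landmarks(img))   # top-5 block corners by mean saliency
```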

