Knowledge Representation of Highly Dynamic Ontologies Using Defeasible Logic

Description logic gives us the ability to reason with acceptable computational complexity while retaining expressive power. The power of description logic can be complemented by defeasible logic to support non-monotonic reasoning. Some domains require flexible reasoning and knowledge representation to deal with their dynamicity. In this paper, we present a DL representation of a small domain that describes the connections between different entities in a university publication system, and show how changeability in domain rules can be handled. Automated support can be provided on the basis of defeasible logical rules to represent typicality in the knowledge base and to resolve the conflicts that may arise.
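As a minimal illustration of the defeasible-reasoning idea (not the paper's system; the rules, priorities, and university domain below are invented), a higher-priority defeasible rule can override a conflicting lower-priority one:

```python
# Minimal sketch of defeasible reasoning: defeasible rules may be
# defeated by higher-priority rules with conflicting conclusions.
# All rule names and facts are illustrative, not from the paper.

facts = {"professor(ann)", "emeritus(ann)"}

# (name, premises, conclusion, priority): higher priority wins conflicts
defeasible_rules = [
    ("r1", {"professor(ann)"}, "publishes(ann)", 1),
    ("r2", {"emeritus(ann)"},  "-publishes(ann)", 2),  # exception overrides r1
]

def negate(lit):
    return lit[1:] if lit.startswith("-") else "-" + lit

def conclude(facts, rules):
    """Fire applicable rules; keep a conclusion only if no applicable
    rule of strictly higher priority supports the opposite literal."""
    applicable = [r for r in rules if r[1] <= facts]
    derived = set()
    for name, prem, concl, prio in applicable:
        defeated = any(c == negate(concl) and p > prio
                       for _, _, c, p in applicable)
        if not defeated:
            derived.add(concl)
    return derived

print(conclude(facts, defeasible_rules))  # {'-publishes(ann)'}
```

When the domain rules change, only the rule list and priorities are edited; the conflict-resolution mechanism stays fixed, which is the flexibility the abstract refers to.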

Author(s):  
Erman Acar ◽  
Rafael Peñaloza

Influence diagrams (IDs) are well-known formalisms that extend Bayesian networks to model decision situations under uncertainty. Although they are convenient as a decision-theoretic tool, their knowledge representation ability is limited in capturing other crucial notions such as logical consistency. In this article, we complement IDs with the lightweight description logic (DL) EL to overcome such limitations. We consider a setup where DL axioms hold in some contexts, yet the actual context is uncertain. The framework benefits from the convenience of using DL as a domain knowledge representation language and the modelling strength of IDs to deal with decisions over contexts in the presence of contextual uncertainty. We define related reasoning problems and study their computational complexity.
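To make the decision-theoretic side concrete, here is a toy sketch of choosing an action under contextual uncertainty; the contexts, probabilities, and utilities are invented, and the DL component (which EL axioms hold in which context) is reduced to a lookup table:

```python
# Sketch: picking the action with maximal expected utility over an
# uncertain context, in the spirit of influence diagrams.  All numbers
# and names are illustrative.

P = {"c1": 0.7, "c2": 0.3}          # distribution over contexts

# utility[action][context]: payoff of the action if that context is actual
utility = {
    "treat": {"c1": 10, "c2": -5},
    "wait":  {"c1":  0, "c2":  2},
}

def expected_utility(action):
    return sum(P[c] * utility[action][c] for c in P)

best = max(utility, key=expected_utility)
print(best, expected_utility(best))  # "treat" maximizes EU (about 5.5)
```

In the article's framework, the table entries would instead be derived from reasoning over the EL axioms that hold in each context, which is where the logical-consistency guarantees come from.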


1999 ◽  
Vol 11 ◽  
pp. 199-240 ◽  
Author(s):  
D. Calvanese ◽  
M. Lenzerini ◽  
D. Nardi

The notion of class is ubiquitous in computer science and is central in many formalisms for the representation of structured knowledge used both in knowledge representation and in databases. In this paper we study the basic issues underlying such representation formalisms and single out both their common characteristics and their distinguishing features. Such investigation leads us to propose a unifying framework in which we are able to capture the fundamental aspects of several representation languages used in different contexts. The proposed formalism is expressed in the style of description logics, which have been introduced in knowledge representation as a means to provide a semantically well-founded basis for the structural aspects of knowledge representation systems. The description logic considered in this paper is a subset of first-order logic with nice computational characteristics. It is quite expressive and features a novel combination of constructs that has not been studied before. The distinguishing constructs are number restrictions, which generalize existence and functional dependencies, inverse roles, which allow one to refer to the inverse of a relationship, and possibly cyclic assertions, which are necessary for capturing real world domains. We are able to show that it is precisely this combination of constructs that makes our logic powerful enough to model the essential set of features for defining class structures that are common to frame systems, object-oriented database languages, and semantic data models. As a consequence of the established correspondences, several significant extensions of each of the above formalisms become available. The high expressiveness of the logic we propose and the need for capturing the reasoning in different contexts force us to distinguish between unrestricted and finite model reasoning. A notable feature of our proposal is that reasoning in both cases is decidable. We argue that, by virtue of the high expressive power and of the associated reasoning capabilities on both unrestricted and finite models, our logic provides a common core for class-based representation formalisms.
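As a hedged illustration of two of the distinguishing constructs, number restrictions and inverse roles have a direct model-theoretic reading over a finite interpretation (the individuals and role names below are made up, not from the paper):

```python
# Sketch: evaluating number restrictions and inverse roles over a
# small finite interpretation of a single role.  Illustrative only.

R = {("paper1", "author", "ann"), ("paper1", "author", "bob"),
     ("paper2", "author", "ann")}

def fillers(x, role, inverse=False):
    """Role successors of x; inverse=True reads the role backwards
    (author^- maps a person to the papers they authored)."""
    return {(s if inverse else o)
            for s, r, o in R if r == role and (o if inverse else s) == x}

# at-least number restriction (>= 2 author): paper1 has two authors
print(len(fillers("paper1", "author")) >= 2)       # True
# inverse role author^-: the papers whose author is ann
print(fillers("ann", "author", inverse=True))      # paper1 and paper2
```

This is exactly the finite-model side of the paper's distinction; unrestricted-model reasoning cannot be checked by such enumeration, which is why the two reasoning problems are treated separately.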


Author(s):  
Christopher Walton

In the introductory chapter of this book, we discussed the means by which knowledge can be made available on the Web, that is, the representation of knowledge in a form in which it can be automatically processed by a computer. To recap, we identified two essential steps deemed necessary to achieve this task:

1. We discussed the need to agree on a suitable structure for the knowledge that we wish to represent. This is achieved through the construction of a semantic network, which defines the main concepts of the knowledge and the relationships between these concepts. We presented an example network that contained the main concepts needed to differentiate between kinds of cameras. Our network is a conceptualization, or an abstract view of a small part of the world. A conceptualization is defined formally in an ontology, which is in essence a vocabulary for knowledge representation.

2. We discussed the construction of a knowledge base, which is a store of knowledge about a domain in machine-processable form; essentially a database of knowledge. A knowledge base is constructed through the classification of a body of information according to an ontology. The result is a store of facts and rules that describe the domain. Our example described the classification of different camera features to form a knowledge base. The knowledge base is expressed formally in the language of the ontology over which it is defined.

In this chapter we elaborate on these two steps to show how we can define ontologies and knowledge bases specifically for the Web. This will enable us to construct Semantic Web applications that make use of this knowledge. The chapter is devoted to a detailed explanation of the syntax and pragmatics of the RDF, RDFS, and OWL Semantic Web standards. The Resource Description Framework (RDF) is an established standard for knowledge representation on the Web. Taken together with the associated RDF Schema (RDFS) standard, we have a language for representing simple ontologies and knowledge bases on the Web.
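As a small, library-free sketch of what RDFS adds on top of bare triples, the two core entailment rules, subclass transitivity (rdfs11) and type propagation along subclasses (rdfs9), can be computed as a fixpoint over a triple set; the camera vocabulary is a stand-in for the chapter's example, not taken from it:

```python
# Plain-Python sketch of two RDFS entailment rules, without an RDF
# library.  Triples are (subject, predicate, object) strings.

triples = {
    ("SLR", "rdfs:subClassOf", "Camera"),
    ("DigitalSLR", "rdfs:subClassOf", "SLR"),
    ("d70", "rdf:type", "DigitalSLR"),
}

def rdfs_closure(triples):
    """Apply rdfs9 and rdfs11 until no new triples appear."""
    triples = set(triples)
    while True:
        new = set()
        for s, p, o in triples:
            if p == "rdfs:subClassOf":
                # rdfs11: A sub B and B sub C  =>  A sub C
                new |= {(s, p, o2) for s2, p2, o2 in triples
                        if p2 == p and s2 == o}
                # rdfs9: x type A and A sub B  =>  x type B
                new |= {(x, "rdf:type", o) for x, p2, c in triples
                        if p2 == "rdf:type" and c == s}
        if new <= triples:
            return triples
        triples |= new

closure = rdfs_closure(triples)
print(("d70", "rdf:type", "Camera") in closure)  # True
```

The inferred triple ("d70", "rdf:type", "Camera") is never stated explicitly; it follows from the ontology, which is precisely the machine-processable payoff the chapter is after.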


2011 ◽  
Vol 2 (4) ◽  
pp. 19-33 ◽  
Author(s):  
Christian Jung ◽  
Manuel Rudolph ◽  
Reinhard Schwarz

The Service-Oriented Architecture (SOA) paradigm is commonly applied for the implementation of complex, distributed business processes. The service-oriented approach promises higher flexibility, interoperability and reusability of the IT infrastructure. However, evaluating the security quality attribute of such complex SOA configurations is not yet sufficiently mastered. To tackle this complex problem, the authors developed a method for evaluating the security of existing service-oriented systems at the architectural level. The method is based on recovering security-relevant facts about the system by using reverse engineering techniques and subsequently providing automated support for further interactive security analysis at the structural level. By using generic, system-independent indicators and a knowledge base, the method is not limited to a specific programming language or technology. It can therefore be applied to various systems and adapted to specific evaluation needs. The paper describes the general structure of the method and of the knowledge base, and presents an instantiation aligned with the Service Component Architecture (SCA) specification.
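A toy sketch of the indicator idea (the recovered facts and indicator definitions below are invented; the paper's knowledge base is far richer): system-independent indicators are matched against facts recovered from the system by reverse engineering:

```python
# Sketch: matching generic security indicators against recovered
# architectural facts.  Service names and attributes are illustrative.

recovered_facts = [
    {"service": "PaymentService", "transport": "http",  "authenticated": False},
    {"service": "AuditService",   "transport": "https", "authenticated": True},
]

# (indicator name, predicate over one recovered fact)
indicators = [
    ("unencrypted-channel",    lambda f: f["transport"] != "https"),
    ("missing-authentication", lambda f: not f["authenticated"]),
]

findings = [(name, f["service"])
            for name, check in indicators
            for f in recovered_facts if check(f)]
print(findings)
# [('unencrypted-channel', 'PaymentService'),
#  ('missing-authentication', 'PaymentService')]
```

Because the indicators inspect recovered facts rather than source code, the same list works unchanged against systems in different languages, which is the technology independence the abstract claims.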


1994 ◽  
Vol 03 (03) ◽  
pp. 319-348 ◽  
Author(s):  
CHITTA BARAL ◽  
SARIT KRAUS ◽  
JACK MINKER ◽  
V. S. SUBRAHMANIAN

During the past decade, it has become increasingly clear that the future generation of large-scale knowledge bases will consist not of one single isolated knowledge base, but of a multiplicity of specialized knowledge bases that contain knowledge about different domains of expertise. These knowledge bases will work cooperatively, pooling together their varied bodies of knowledge so as to solve complex problems that no single knowledge base, by itself, would have been able to address successfully. In any such situation, inconsistencies are bound to arise. In this paper, we address the question: "Suppose we have a set of knowledge bases, KB1, …, KBn, each of which uses default logic as the formalism for knowledge representation, and a set of integrity constraints IC. What knowledge base constitutes an acceptable combination of KB1, …, KBn?"
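A propositional toy version of the combination question (real default-logic combination, as in the paper, works on default theories; this only illustrates the "acceptable combination" idea with invented literals): pool the knowledge bases and keep the maximal subsets that are internally consistent:

```python
# Sketch: maximal consistent combinations of pooled literal sets.
# "-p" denotes the negation of "p".  Contents are illustrative.

from itertools import combinations

kb1 = {"runs(expA)", "safe(expA)"}
kb2 = {"-safe(expA)", "cheap(expA)"}
pooled = kb1 | kb2

def consistent(s):
    """No literal occurs together with its negation."""
    return not any(("-" + lit) in s for lit in s if not lit.startswith("-"))

def maximal_consistent(pooled):
    """Consistent subsets that no larger consistent subset contains."""
    candidates = [set(c) for k in range(len(pooled), 0, -1)
                  for c in combinations(sorted(pooled), k)]
    good = [s for s in candidates if consistent(s)]
    return [s for s in good if not any(s < t for t in good)]

combined = maximal_consistent(pooled)
print(combined)  # two rival combinations, keeping safe or -safe
```

The conflict over safe(expA) yields two rival maximal combinations; the paper's contribution is a principled, default-logic account (with integrity constraints IC) of which such combinations count as acceptable.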


Terminology ◽  
2019 ◽  
Vol 25 (2) ◽  
pp. 222-258 ◽  
Author(s):  
Pilar León-Araúz ◽  
Arianne Reimerink ◽  
Pamela Faber

Abstract Reutilization and interoperability are major issues in the fields of knowledge representation and extraction, as reflected in initiatives such as the Semantic Web and the Linked Open Data Cloud. This paper shows how terminological resources can be integrated and reused within different types of application. EcoLexicon is a multilingual terminological knowledge base (TKB) on environmental science that integrates conceptual, linguistic and visual information. It has led to the following by-products: (i) the EcoLexicon English Corpus; (ii) EcoLexiCAT, a terminology-enhanced translation tool; and (iii) Manzanilla, an image annotation tool. This paper explains EcoLexicon and its by-products, and shows how the latter exploit and enhance the data in the TKB.


Author(s):  
Gregory M. Mocko ◽  
David W. Rosen ◽  
Farrokh Mistree

The problem addressed in the paper is how to represent the knowledge associated with design decision models to enable storage, retrieval, and reuse. The paper concerns the representations and reasoning mechanisms needed to construct decision models of relevance to engineered product development. Specifically, AL[E][N] description logic is proposed as a formalism for modeling engineering knowledge and for enabling retrieval and reuse of archived models. Classification hierarchies are constructed using subsumption in DL. Retrieval of archived models is supported using subsumption and query concepts. In our methodology, design decision models are constructed using the base vocabulary and reuse is supported through reasoning and retrieval capabilities. Application of the knowledge representation for the design of a cantilever beam is demonstrated.
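To make subsumption-based retrieval concrete without the full AL[E][N] machinery, here is a sketch in which concepts are numeric attribute intervals (all model names and values are invented): an archived model is retrieved when the query concept subsumes it:

```python
# Sketch: retrieval by subsumption.  A stored decision model is a set
# of attribute values (degenerate intervals); a query concept gives
# allowed intervals.  Illustrative only; DL subsumption is richer.

archive = {
    "beam_model_1":  {"length_mm": (100, 100), "load_N": (500, 500)},
    "plate_model_7": {"length_mm": (300, 300), "load_N": (50, 50)},
}

query = {"length_mm": (50, 200)}   # "models up to 200 mm long"

def subsumes(query, model):
    """query subsumes model iff each queried attribute of the model
    lies inside the query's interval."""
    return all(attr in model
               and query[attr][0] <= model[attr][0]
               and model[attr][1] <= query[attr][1]
               for attr in query)

hits = [name for name, m in archive.items() if subsumes(query, m)]
print(hits)  # ['beam_model_1']
```

Classification hierarchies arise the same way: concept C sits below D in the hierarchy exactly when D subsumes C, which is what makes archived decision models browsable as well as queryable.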


Author(s):  
Zili Zhou ◽  
Shaowu Liu ◽  
Guandong Xu ◽  
Wu Zhang

Multi-relation embedding is a popular approach to knowledge base completion that learns embedding representations of entities and relations to compute the plausibility of missing triplets. The effectiveness of the embedding approach depends on the sparsity of the KB and falls short for infrequent entities that appear only a few times. This paper addresses the issue by proposing a new model that exploits entity-independent transitive relation patterns, namely Transitive Relation Embedding (TRE). The TRE model alleviates the sparsity problem when predicting for infrequent entities while enjoying the generalisation power of embedding. Experiments on three public datasets against seven baselines show the merits of TRE in terms of knowledge base completion accuracy as well as computational complexity.
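A rule-mining toy that captures the entity-independent intuition behind transitive relation patterns (TRE itself learns embeddings; the data and scoring below are invented): a pattern such as born_in composed with city_of implying nationality holds regardless of which entities are involved, so it can predict for entities seen only once:

```python
# Sketch: mine the best-supported relation composition r1.r2 => r3
# from known triples, then apply it to predict missing ones.

from collections import Counter

triples = [
    ("ann", "born_in", "lyon"), ("lyon", "city_of", "france"),
    ("ann", "nationality", "france"),
    ("bob", "born_in", "turin"), ("turin", "city_of", "italy"),
    # missing: ("bob", "nationality", "italy")
]
facts = set(triples)

# count how often the composition r1 . r2 is confirmed by some r3
support = Counter()
for a, r1, b in triples:
    for b2, r2, c in triples:
        if b2 == b:
            for a2, r3, c2 in triples:
                if a2 == a and c2 == c:
                    support[(r1, r2, r3)] += 1

# apply the best-supported pattern to entities reachable via r1 then r2
(r1, r2, r3), _ = support.most_common(1)[0]
predicted = {(a, r3, c)
             for a, p, b in triples if p == r1
             for b2, q, c in triples if q == r2 and b2 == b} - facts
print(predicted)  # {('bob', 'nationality', 'italy')}
```

The prediction for bob needs no embedding of bob at all, only the relation-level pattern, which is why such patterns help exactly where embeddings of infrequent entities are unreliable.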

