A Formal Knowledge Retrieval System for Cognitive Computers and Cognitive Robotics

Author(s):  
Yingxu Wang ◽  
Yousheng Tian

Intelligent knowledge base theories and technologies are central to machine learning and cognitive robotics. This paper presents the design of a formal knowledge retrieval system (FKTS) for intelligent knowledge base modeling and manipulation based on concept algebra. In order to rigorously design and implement FKTS, real-time process algebra (RTPA) is adopted to formally describe the architectures and behaviors of FKTS. The architectural model of FKTS is rigorously described in the form of a set of unified structure models (USMs). On the basis of the USMs, the functional models of FKTS are hierarchically refined by a set of unified process models (UPMs). The UPMs of FKTS are divided into two subsystems, the knowledge visualization subsystem and the knowledge base retrieval subsystem, in which a content-addressed searching mechanism is implemented for knowledge base manipulation. The FKTS system is designed and implemented as a part of the cognitive learning engine (CLE) for cognitive computers and cognitive robots.
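The abstract does not reproduce the USM/UPM specifications themselves, but the content-addressed searching idea can be illustrated with a minimal Python sketch: concepts are indexed and retrieved by the attributes they contain rather than by a symbolic key or address. The class and method names below are illustrative assumptions, not FKTS interfaces.

```python
from collections import defaultdict

class ContentAddressedKB:
    """Concepts indexed by their attributes, so retrieval is driven
    by content (what a concept contains) rather than by a key."""

    def __init__(self):
        self.concepts = {}             # concept name -> set of attributes
        self.index = defaultdict(set)  # attribute -> names of concepts carrying it

    def add_concept(self, name, attributes):
        self.concepts[name] = set(attributes)
        for attr in attributes:
            self.index[attr].add(name)

    def retrieve(self, query_attributes):
        """Rank concepts by how many query attributes they match."""
        hits = defaultdict(int)
        for attr in query_attributes:
            for name in self.index.get(attr, ()):
                hits[name] += 1
        return sorted(hits, key=hits.get, reverse=True)

kb = ContentAddressedKB()
kb.add_concept("pen", {"writes", "ink", "handheld"})
kb.add_concept("pencil", {"writes", "graphite", "handheld"})
print(kb.retrieve({"writes", "ink"}))  # ['pen', 'pencil']
```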

Author(s):  
Yingxu Wang

A cognitive knowledge base (CKB) is a novel structure of intelligent knowledge base that represents and manipulates knowledge as a dynamic concept network mimicking human knowledge processing. The essence of a CKB is the denotational mathematical model of a formal concept that is dynamically associated with other concepts in the CKB, going beyond conventional rule-based or ontology-based knowledge bases. This paper presents a formal CKB and autonomous knowledge manipulation system based on recent advances in neuroinformatics, concept algebra, semantic algebra, and cognitive computing. An item of knowledge in a CKB is represented by a formal concept, while the entire knowledge base is embodied by a dynamic concept network. The CKB system is manipulated by algorithms of knowledge acquisition and retrieval on the basis of concept algebra. The CKB serves as a kernel of cognitive learning engines for cognitive robots and machine learning systems. It plays a central role not only in explaining the mechanisms of human knowledge acquisition and learning, but also in the development of cognitive robots, cognitive learning engines, and knowledge-based systems.
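As a rough illustration of a dynamic concept network, the sketch below associates each newly acquired concept with existing concepts by attribute overlap. The names and the Jaccard heuristic are invented for illustration; they are not the paper's concept-algebra notation.

```python
class Concept:
    """Illustrative stand-in for a formal concept with objects,
    attributes, and weighted relations to other concepts."""
    def __init__(self, name, objects=(), attributes=()):
        self.name = name
        self.objects = set(objects)
        self.attributes = set(attributes)
        self.relations = {}  # other concept name -> association weight

class ConceptNetwork:
    """The knowledge base as a dynamic concept network: acquiring a
    concept re-associates it with every existing concept."""
    def __init__(self):
        self.nodes = {}

    def acquire(self, concept):
        for other in self.nodes.values():
            shared = concept.attributes & other.attributes
            if shared:
                # Jaccard similarity as a crude proxy for semantic relatedness.
                w = len(shared) / len(concept.attributes | other.attributes)
                concept.relations[other.name] = w
                other.relations[concept.name] = w
        self.nodes[concept.name] = concept

net = ConceptNetwork()
net.acquire(Concept("bird", attributes={"wings", "feathers", "flies"}))
net.acquire(Concept("bat", attributes={"wings", "fur", "flies"}))
print(net.nodes["bat"].relations)  # {'bird': 0.5}
```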


Author(s):  
Yingxu Wang

Cognitive robots are brain-inspired robots capable of inference, perception, and learning that mimic the cognitive mechanisms of the brain. Cognitive learning theories and methodologies for knowledge and behavior acquisition are central to cognitive robotics. This paper explores the cognitive foundations and denotational mathematical means of cognitive learning engines (CLE) and cognitive knowledge bases (CKB) for cognitive robots. The architectures and functions of the CLE are formally presented. A content-addressed knowledge base access methodology for the CKB is rigorously elaborated. The CLE and CKB methodologies are designed not only to explain the mechanisms of human knowledge acquisition and learning, but also to be applied in the development of cognitive robots, cognitive computers, and knowledge-based systems.
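A deliberately simplified sketch of the perceive-retrieve-learn loop such a CLE might run over its CKB follows; the knowledge-base representation and the overlap heuristic are assumptions for illustration only, not the published methodology.

```python
def cognitive_learning_cycle(knowledge, observation):
    """One perceive -> retrieve -> learn step over a toy knowledge
    base mapping concept names to attribute sets (names invented)."""
    # Retrieve prior concepts whose attributes overlap the observation.
    related = {name: attrs & observation
               for name, attrs in knowledge.items() if attrs & observation}
    if related:
        # Learn by merging the observation into the best-matching concept.
        best = max(related, key=lambda name: len(related[name]))
        knowledge[best] |= observation
    else:
        # Nothing related: acquire the observation as a new concept.
        knowledge[f"concept_{len(knowledge)}"] = set(observation)
    return knowledge

kb = {"cup": {"container", "handle"}}
cognitive_learning_cycle(kb, {"container", "lid"})
print(kb)  # 'cup' has absorbed the new attribute 'lid'
```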


Author(s):  
Yousheng Tian ◽  
Yingxu Wang ◽  
Marina L. Gavrilova ◽  
Guenther Ruhe

It is recognized that the generic form of machine learning is a knowledge acquisition and manipulation process mimicking the brain. Therefore, knowledge representation as a dynamic concept network is central to the design and implementation of the intelligent knowledge base of a Cognitive Learning Engine (CLE). This paper presents a Formal Knowledge Representation System (FKRS) for autonomous concept formation and manipulation based on concept algebra. The Object-Attribute-Relation (OAR) model for knowledge representation is adopted in the design of FKRS. The conceptual model, architectural model, and behavioral models of the FKRS system are formally designed and specified in Real-Time Process Algebra (RTPA). The FKRS system is implemented in Java as a core component in the development of the CLE and other knowledge-based systems in cognitive computing and computational intelligence.
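For readers unfamiliar with the OAR view, a minimal sketch of its concept structure (objects, attributes, and relations to other concepts) follows. The field names are illustrative rather than the OAR model's formal notation, and the FKRS itself is written in Java, not the Python used here.

```python
from dataclasses import dataclass, field

@dataclass
class OARConcept:
    """Sketch of a concept in the Object-Attribute-Relation view:
    the objects it denotes, the attributes they share, and named
    relations to other concepts (field names are illustrative)."""
    name: str
    objects: set = field(default_factory=set)
    attributes: set = field(default_factory=set)
    relations: dict = field(default_factory=dict)  # relation type -> target names

pen = OARConcept(
    "pen",
    objects={"ballpoint", "fountain_pen"},
    attributes={"writes", "uses_ink", "handheld"},
    relations={"is_a": {"writing_instrument"}, "related_to": {"pencil"}},
)
print(pen.relations["is_a"])  # {'writing_instrument'}
```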


Author(s):  
Michael E. Stock ◽  
Robert B. Stone ◽  
Irem Y. Tumer

When failure analysis and prevention, guided by historical design knowledge, are coupled with product design at its conception, shorter design cycles are possible. By decreasing the design time of a product in this manner, design costs are reduced and the product better suits the customer's needs. Prior work indicates that similar failure modes occur within products (or components) with similar functionality. To capitalize on this finding, a knowledge base of historical failure information linked to functionality is assembled for use by designers. One possible use for this knowledge base is within the Elemental Function-Failure Design Method (EFDM). This design methodology and failure analysis tool is implemented during conceptual design and keeps the designer cognizant of failures that are likely to occur based on the product's functionality. The EFDM offers potential improvement over current failure analysis methods, such as FMEA, FMECA, and Fault Tree Analysis, because it can be implemented hand in hand with other conceptual design steps and carried throughout a product's design cycle. These other failure analysis methods can only truly be effective after a physical design has been completed. The EFDM, however, is only as good as the knowledge base it draws from, and it is therefore of utmost importance to develop a knowledge base suitable for use across a wide spectrum of products. One fundamental question that arises in using the EFDM is: at what level of detail should functional descriptions of components be encoded? This paper explores two approaches to populating a knowledge base with actual failure occurrence information from Bell 206 helicopters. Functional models expressed at various levels of detail are investigated to determine the detail necessary for an applicable knowledge base that designers can use in both new designs and redesigns. High-level and more detailed functional descriptions are derived for each failed component based on NTSB accident reports. To best record these data, standardized functional and failure mode vocabularies are used. Two separate function-failure knowledge bases are then created and compared. Results indicate that encoding failure data using more detailed functional models allows for a more robust knowledge base. Interestingly, however, when applying the EFDM, high-level descriptions continue to produce useful results when using the knowledge base generated from the detailed functional models.
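At its core, such a knowledge base is a function-to-failure-mode mapping with historical occurrence counts. A toy sketch of the lookup, with invented counts rather than the Bell 206 data, might look like this:

```python
from collections import defaultdict

# function -> failure mode -> historical occurrence count
# (counts invented for illustration; the paper derives them from
# NTSB accident reports using standardized vocabularies)
function_failure = {
    "transmit torque": {"fatigue": 14, "fracture": 6},
    "seal liquid": {"wear": 9, "fracture": 2},
}

def likely_failures(functions):
    """Aggregate historical failure counts over a design's functional
    model; a simplified version of the EFDM lookup."""
    totals = defaultdict(int)
    for fn in functions:
        for mode, count in function_failure.get(fn, {}).items():
            totals[mode] += count
    return sorted(totals.items(), key=lambda kv: -kv[1])

print(likely_failures(["transmit torque", "seal liquid"]))
# [('fatigue', 14), ('wear', 9), ('fracture', 8)]
```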


2020 ◽  
Author(s):  
Matheus Pereira Lobo

This paper highlights two categories of knowledge bases: one built as a repository of links, and the other based on units of knowledge.


2018 ◽  
Vol 2 ◽  
pp. e25614 ◽  
Author(s):  
Florian Pellen ◽  
Sylvain Bouquin ◽  
Isabelle Mougenot ◽  
Régine Vignes-Lebbe

Xper3 (Vignes Lebbe et al. 2016) is a collaborative knowledge base publishing platform that, since its launch in November 2013, has been adopted by over 2,000 users (Pinel et al. 2017). This is mainly due to its user-friendly interface and the simplicity of its data model. The data are stored in a MySQL relational database, but the exchange format uses the TDWG standard format SDD (Structured Descriptive Data; Hagedorn et al. 2005). However, each Xper3 knowledge base is a closed world that the author(s) may or may not share with the scientific community or the public via published content and/or identification keys (Kopfstein 2016). The explicit taxonomic, geographic and phenotypic limits of a knowledge base are not always well defined in the metadata fields. Conversely, terminology vocabularies, such as the Phenotype and Trait Ontology (PATO) and the Plant Ontology (PO), and the software to edit them, such as Protégé and Phenoscape, are essential in the semantic web, but difficult for biologists without computer skills to handle. These ontologies constitute open worlds, and are themselves expressed as RDF (Resource Description Framework) triples. Protégé offers visualization and reasoning capabilities for these ontologies (Gennari et al. 2003, Musen 2015). Our challenge is to combine the user-friendliness of Xper3 with the expressive power of OWL (Web Ontology Language), the W3C standard for building ontologies. We therefore focused on analyzing the representation of the same taxonomic contents under Xper3 and under different models in OWL. After this critical analysis, we chose a description model that allows automatic export of SDD to OWL and can be easily enriched. We will present the results obtained and their validation on two knowledge bases, one on parasitic crustaceans (Sacculina) and the second on extant and fossil ferns (Corvez and Grand 2014). The evolution of the Xper3 platform and the perspectives offered by this link with semantic web standards will be discussed.
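A minimal sketch of the kind of SDD-to-OWL export discussed here, using the Python rdflib library, could look as follows: an Xper3 descriptor and its states become an OWL class hierarchy. The namespace, descriptor, and states are invented placeholders, not the description model chosen by the authors.

```python
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import OWL, RDF, RDFS

EX = Namespace("http://example.org/xper3#")  # invented namespace
g = Graph()
g.bind("ex", EX)

# The descriptor becomes an OWL class; each state becomes a subclass.
g.add((EX.LeafShape, RDF.type, OWL.Class))
g.add((EX.LeafShape, RDFS.label, Literal("leaf shape")))
for state in ("ovate", "lanceolate"):
    s = EX[state.capitalize()]
    g.add((s, RDF.type, OWL.Class))
    g.add((s, RDFS.subClassOf, EX.LeafShape))
    g.add((s, RDFS.label, Literal(state)))

print(g.serialize(format="turtle"))
```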


Author(s):  
Heiko Paulheim ◽  
Christian Bizer

Linked Data on the Web is either created from structured data sources (such as relational databases), from semi-structured sources (such as Wikipedia), or from unstructured sources (such as text). In the latter two cases, the generated Linked Data will likely be noisy and incomplete. In this paper, we present two algorithms that exploit statistical distributions of properties and types to enhance the quality of incomplete and noisy Linked Data sets: SDType adds missing type statements, and SDValidate identifies faulty statements. Neither of the algorithms uses external knowledge, i.e., they operate only on the data itself. We evaluate the algorithms on the DBpedia and NELL knowledge bases, showing that they are both accurate and scalable. Both algorithms have been used for building the DBpedia 3.9 release: with SDType, 3.4 million missing type statements have been added, while with SDValidate, 13,000 erroneous RDF statements have been removed from the knowledge base.
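The SDType idea can be sketched compactly: each property an instance uses "votes" for the instance's types with the type distribution observed elsewhere in the data. The distributions and the uniform property weighting below are invented simplifications, not the published algorithm's exact weighting scheme.

```python
from collections import defaultdict

# Per-property type distributions, as SDType would estimate them from
# the data itself (numbers invented for illustration).
type_dist = {
    "dbo:capital": {"dbo:Country": 0.9, "dbo:Region": 0.1},
    "dbo:populationTotal": {"dbo:PopulatedPlace": 0.8, "dbo:Country": 0.2},
}

def sdtype_vote(properties, threshold=0.4):
    """Average the distributions of the properties an instance uses and
    keep the types whose aggregate score clears the threshold."""
    scores = defaultdict(float)
    for prop in properties:
        for typ, weight in type_dist.get(prop, {}).items():
            scores[typ] += weight / len(properties)
    return {typ: s for typ, s in scores.items() if s >= threshold}

print(sdtype_vote(["dbo:capital", "dbo:populationTotal"]))
# {'dbo:Country': 0.55, 'dbo:PopulatedPlace': 0.4}
```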


Author(s):  
Yongrui Chen ◽  
Huiying Li ◽  
Yuncheng Hua ◽  
Guilin Qi

Formal query building is an important part of complex question answering over knowledge bases. It aims to build correct executable queries for questions. Recent methods try to rank candidate queries generated by a state-transition strategy. However, this candidate generation strategy ignores the structure of queries, resulting in a considerable number of noisy queries. In this paper, we propose a new formal query building approach that consists of two stages. In the first stage, we predict the query structure of the question and leverage the structure to constrain the generation of the candidate queries. We propose a novel graph generation framework to handle the structure prediction task and design an encoder-decoder model to predict the argument of the predetermined operation in each generative step. In the second stage, we follow the previous methods to rank the candidate queries. The experimental results show that our formal query building approach outperforms existing methods on complex questions while staying competitive on simple questions.
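A toy sketch of structure-constrained candidate generation follows: a predicted query structure (here, a two-hop chain) limits which candidate queries are enumerated before ranking. The predicates and the token-overlap scorer are invented placeholders, not the paper's encoder-decoder model.

```python
from itertools import product

predicates = ["director", "spouse", "birthPlace"]  # invented KB relations

def chain_candidates(entity, hops):
    """Enumerate only the queries that match the predicted chain
    structure, rather than all state-transition expansions."""
    for preds in product(predicates, repeat=hops):
        yield (entity,) + preds  # e.g. ('Inception', 'director', 'spouse')

def score(query, question):
    # Placeholder ranker: count distinct predicates named in the question.
    return len({p for p in query[1:] if p.lower() in question.lower()})

question = "Who is the spouse of the director of Inception?"
best = max(chain_candidates("Inception", hops=2),
           key=lambda q: score(q, question))
print(best)  # ('Inception', 'director', 'spouse')
```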


2016 ◽  
Vol 31 (2) ◽  
pp. 97-123 ◽  
Author(s):  
Alfred Krzywicki ◽  
Wayne Wobcke ◽  
Michael Bain ◽  
John Calvo Martinez ◽  
Paul Compton

Data mining techniques for extracting knowledge from text have been applied extensively to applications including question answering, document summarisation, event extraction and trend monitoring. However, current methods have mainly been tested on small-scale customised data sets for specific purposes. The availability of large volumes of data and high-velocity data streams (such as social media feeds) motivates the need to automatically extract knowledge from such data sources and to generalise existing approaches to more practical applications. Recently, several architectures have been proposed for what we call knowledge mining: integrating data mining for knowledge extraction from unstructured text (possibly making use of a knowledge base), and at the same time, consistently incorporating this new information into the knowledge base. After describing a number of existing knowledge mining systems, we review the state-of-the-art literature on both current text mining methods (emphasising stream mining) and techniques for the construction and maintenance of knowledge bases. In particular, we focus on mining entities and relations from unstructured text data sources, entity disambiguation, entity linking and question answering. We conclude by highlighting general trends in knowledge mining research and identifying problems that require further research to enable more extensive use of knowledge bases.
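Of the tasks surveyed, entity linking lends itself to a compact sketch: generate candidate entities from a knowledge base alias dictionary, then disambiguate by overlap between the mention's context and each entity's description. The alias table and descriptions below are invented toy data, not any system reviewed in the paper.

```python
# KB alias dictionary and entity descriptions (invented toy data).
aliases = {"paris": ["Paris_(France)", "Paris_(Texas)", "Paris_Hilton"]}
descriptions = {
    "Paris_(France)": {"capital", "france", "city", "seine"},
    "Paris_(Texas)": {"city", "texas", "usa"},
    "Paris_Hilton": {"celebrity", "businesswoman"},
}

def link_entity(mention, context_words):
    """Pick the candidate whose description best overlaps the context."""
    candidates = aliases.get(mention.lower(), [])
    return max(candidates,
               key=lambda e: len(descriptions[e] & context_words),
               default=None)

print(link_entity("Paris", {"the", "capital", "of", "france"}))
# Paris_(France)
```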

