knowledge engineer
Recently Published Documents

TOTAL DOCUMENTS: 58 (five years: 11)
H-INDEX: 7 (five years: 1)

2021
Author(s): Simone Coetzer, Katarina Britz

A successful application of ontologies relies on representing as much accurate and relevant domain knowledge as possible while maintaining logical consistency. Since the implementation of a real-world ontology is likely to contain many concepts and intricate relationships between them, it is necessary to follow a methodology for debugging and refining the ontology. Many ontology debugging approaches have been developed to help the knowledge engineer pinpoint the causes of logical inconsistencies and rectify them in a strategic way. We show that existing debugging approaches can lead to unintuitive results, which may lead the knowledge engineer to delete potentially crucial and nuanced knowledge. We provide a methodological and design foundation for weakening faulty axioms in a strategic way using defeasible reasoning tools. Our methodology draws on Rodler's interactive ontology debugging approach and extends it with a systematic way of finding conflict-resolution recommendations. Importantly, our goal is not to convert a classical ontology into a defeasible one. Rather, we use the notion of exceptionality of a concept, which is central to the semantics of defeasible description logics, and the associated algorithm to determine the extent of each concept's exceptionality (its ranking). Then, starting with the statements containing the most general (least exceptional) concepts, weakened versions of the original statements are constructed until all inconsistencies have been resolved.
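
The ranking of concepts by exceptionality that the abstract refers to can be illustrated with a small propositional sketch (the representation, atoms, and penguin example below are our own illustration of rational-closure-style ranking, not the paper's implementation):

```python
from itertools import product

# Atoms are strings; strict axioms and materialised defeasible rules are
# implications (antecedent_atom, (consequent_atom, polarity)).

def holds(implications, assignment):
    """True if every implication a -> l is satisfied by the assignment."""
    for ante, (cons, pol) in implications:
        if assignment[ante] and assignment[cons] != pol:
            return False
    return True

def exceptional(atom, strict, defeasible, atoms):
    """atom is exceptional if no model of the materialisation makes it true."""
    impls = strict + defeasible
    for values in product([False, True], repeat=len(atoms)):
        a = dict(zip(atoms, values))
        if holds(impls, a) and a[atom]:
            return False  # found a model where the atom is satisfiable
    return True

def rank(strict, defeasible, atoms):
    """Rules whose antecedent stays exceptional move up one rank per round."""
    ranks, current, level = {}, list(defeasible), 0
    while current:
        lower = [r for r in current
                 if exceptional(r[0], strict, current, atoms)]
        if len(lower) == len(current):  # totally exceptional antecedents
            for r in current:
                ranks[r] = float("inf")
            break
        for r in current:
            if r not in lower:
                ranks[r] = level
        current, level = lower, level + 1
    return ranks

# Birds normally fly, penguins normally do not, every penguin is a bird.
atoms = ["bird", "fly", "penguin"]
strict = [("penguin", ("bird", True))]
defeasible = [("bird", ("fly", True)), ("penguin", ("fly", False))]
print(rank(strict, defeasible, atoms))
```

In this example the "birds fly" rule receives rank 0 (least exceptional) and the "penguins do not fly" rule rank 1, which is the ordering a weakening procedure of the kind described would traverse.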


Author(s): Cláudio do Rosário, Fernando Gonçalves do Amaral

Goal: This systematic review aimed at highlighting and providing an integrated discussion of the factors in the tacit knowledge elicitation process. Design/Methodology/Approach: The research method adopted in this study was PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses). The databases Science Direct, Web of Science, Scopus, and Emerald Insight were chosen based on the possibility of finding articles from collections such as Elsevier, Springer, and Taylor & Francis. The search string related to "knowledge elicitation" was: "knowledge elicitation" OR "knowledge acquisition" AND technique AND "tacit knowledge". Results: The main findings of the article were the inclusion of a knowledge engineer in light of the SECI model, the indication of predictability in the perception of episodic knowledge during tacit knowledge elicitation, and the hybrid adoption of knowledge elicitation techniques. Limitations of the research: The selection criteria covered only articles written in English and published between 2008 and 2020. Originality/value: The structure of this article was based on the identification of theoretical gaps and the need to deepen the themes underlying the process of eliciting tacit knowledge, which allowed a systematic exposure of the broad scenario that represents its scope and complexity.


Author(s): Simon Vandevelde, Bram Aerts, Joost Vennekens

Abstract Knowledge-based AI typically depends on a knowledge engineer to construct a formal model of domain knowledge, but what if domain experts could do this themselves? This paper describes an extension to the Decision Model and Notation (DMN) standard, called Constraint Decision Model and Notation (cDMN). DMN is a user-friendly, table-based notation for decision logic, which allows domain experts to model simple decision procedures without the help of IT staff. cDMN aims to extend the expressiveness of DMN so that more complex domain knowledge can be modelled, while retaining DMN's goal of being understandable by domain experts. We test cDMN by solving the most complex challenges posted on the DM Community website. We compare our cDMN solutions to the solutions that have been submitted to the website and find that our approach is competitive. Moreover, cDMN is able to solve more challenges than any other approach.
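
The flavour of a DMN-style decision table can be sketched in a few lines of Python (the discount table below is a hypothetical example with a first-hit policy, not one of the DM Community challenges):

```python
# A minimal DMN-style decision table: each rule maps input conditions
# to an output; the first rule whose conditions all match wins.
def evaluate(table, inputs):
    """Return the output of the first matching rule, or None."""
    for conditions, output in table:
        if all(cond(inputs[name]) for name, cond in conditions.items()):
            return output
    return None

# Hypothetical discount table: customer type and order size decide discount.
discount_table = [
    ({"type": lambda t: t == "Business", "size": lambda s: s >= 10}, 0.15),
    ({"type": lambda t: t == "Business", "size": lambda s: s < 10}, 0.10),
    ({"type": lambda t: t == "Private", "size": lambda s: s >= 10}, 0.05),
]
print(evaluate(discount_table, {"type": "Business", "size": 12}))  # 0.15
```

cDMN's contribution, per the abstract, is to let tables of this kind express constraints and quantification as well, while keeping the tabular form readable for domain experts.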


2021
Vol 6 (22), pp. 148-157
Author(s): Zailan Arabee Abdul Salam, Rabiah Abdul Kadir, Azreen Azman

The exponential growth of data and the boom of online businesses make it necessary for data to be machine-readable, as humans are no longer able to manage such vast amounts of data manually. Ontologies can define concepts and relations in a form amenable to processing by machines. However, ontologies are created in silos and pockets of domains, and merging these resources is key to universal access to multi-domain knowledge. Ontology merging has been explored to some extent over the last two decades, and this paper surveys the available tools and techniques through a case study: merging two publicly available ontologies, a Person ontology and an Institutional ontology, using the latest tools available in the most popular ontology editor, Protégé. It is found that automated merging tools have not improved much over the last two decades; the current merging tools combine the two ontologies into one but do not unite any of the classes or axioms that are equivalent. This can be seen in the axiom count, which does not decrease in the merged ontology, showing that no similar classes or axioms were actually merged. Protégé plugins that used to provide semi-automatic mapping of similar classes to assist the merging process are no longer available, so manual mapping by the knowledge engineer was required. This motivates further research into automated ontology merging techniques.
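
The axiom-count observation can be reproduced on a toy example: a merge that actually unites equivalent classes (here, naively, classes with the same normalised name) yields fewer classes and axioms than a plain union would. The two miniature "ontologies" below are our own illustration, not the published Person and Institutional ontologies:

```python
# Naive merge of two toy ontologies: classes with identical (lower-cased)
# names are united, so shared subclass axioms collapse into one.
def merge(onto_a, onto_b):
    """Each ontology is {'classes': set, 'axioms': set of (sub, sup) pairs}."""
    classes = {c.lower() for c in onto_a["classes"]} | \
              {c.lower() for c in onto_b["classes"]}
    axioms = {(s.lower(), o.lower()) for s, o in onto_a["axioms"]} | \
             {(s.lower(), o.lower()) for s, o in onto_b["axioms"]}
    return {"classes": classes, "axioms": axioms}

person = {"classes": {"Person", "Student"},
          "axioms": {("Student", "Person")}}
inst   = {"classes": {"person", "Institution"},
          "axioms": {("student", "person")}}
merged = merge(person, inst)
# 3 classes and 1 axiom remain: the duplicate class and axiom were united,
# whereas a plain union (as the paper observed in Protégé) keeps all of them.
```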


2021
Vol 181 (1), pp. 71-98
Author(s): Stefania Costantini, Andrea Formisano

In this paper we present a methodology for introducing customizable metalogic features into logic-based knowledge representation and reasoning languages. The proposed approach is based on concepts of introspection and reflection previously introduced and discussed by various authors in the relevant literature. This allows a knowledge engineer to specify enhanced reasoning engines by defining properties and meta-properties of relations, as expressible for instance in OWL. We employ meta-level axiom schemata based upon a naming (reification) device, and propose general principles for extending the semantics of "host" formalisms accordingly. Consequently, suitable pre-defined libraries of properties can be made available, while user-defined new schemata are also allowed. We consider the specific cases of Answer Set Programming (ASP) and Datalog±, where such features may become part of software engineering toolkits for these programming paradigms. On the one hand, concerning ASP, we extend the programming principles and practice to accommodate the proposed methodology, so as to perform meta-reasoning within the plain ASP semantics; the computational complexity of the resulting framework does not change. On the other hand, we show how metalogic features can significantly enrich Datalog± with minor changes to its operational semantics (provided in terms of the "chase") and, also in this case, no additional complexity burden.
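
The reification idea can be sketched outside any particular host formalism: relations are named by terms, meta-properties are asserted as facts about those names, and an axiom schema is expanded accordingly. Below is a minimal Python simulation assuming a transitivity schema; the relation names are our own, and real ASP/Datalog± encodings would of course differ:

```python
# Reification sketch: relations are named (reified) so that meta-properties
# such as transitivity can be declared as facts and expanded by a schema.
def expand(facts, meta):
    """Apply the transitivity schema to every relation declared transitive
    in the meta-level facts, until a fixpoint is reached."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for rel in {r for (prop, r) in meta if prop == "transitive"}:
            for (r1, a, b) in list(facts):
                for (r2, c, d) in list(facts):
                    if r1 == r2 == rel and b == c and (rel, a, d) not in facts:
                        facts.add((rel, a, d))
                        changed = True
    return facts

meta = {("transitive", "ancestor")}
facts = {("ancestor", "ann", "bob"), ("ancestor", "bob", "cas")}
print(expand(facts, meta))  # the schema derives ("ancestor", "ann", "cas")
```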


2021
Vol 9 (1), pp. 1396-1405
Author(s): Biju Theruvil Sayed

An expert system (ES) is a branch of artificial intelligence (AI) used to address problems through an interactive, computer-based decision-making process. It uses both factual information and heuristics to resolve complicated decision-making issues in a specific domain. An analysis of expert system architecture shows that it includes several parts: the user interface, knowledge base, working memory, inference engine, explanation system, system engineer, knowledge engineer, user, and expert system shell, each serving a distinct function that helps the system reach adequate decisions in complex situations. The research analyzes the application of expert systems, or decision-making systems, in the field of education and finds that they are used for purposes such as assessing teacher performance, guiding students regarding their careers, and providing quality learning to students with disabilities. They are also used to help students make sound career decisions and become efficient professionals after completing their studies.
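
The interplay of knowledge base, working memory, and inference engine described above can be sketched as a minimal forward-chaining loop (the career-guidance rules are hypothetical, in the spirit of the educational applications mentioned, not rules from any system in the article):

```python
# Minimal forward-chaining inference engine: a knowledge base of rules and
# a working memory of facts; rules fire until nothing new can be added.
def infer(rules, facts):
    """rules: list of (premises, conclusion); facts: initial working memory."""
    memory = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if set(premises) <= memory and conclusion not in memory:
                memory.add(conclusion)
                changed = True
    return memory

# Hypothetical career-guidance rules.
rules = [
    (("likes_math", "likes_programming"), "suggest_cs"),
    (("suggest_cs", "high_gpa"), "suggest_scholarship"),
]
print(infer(rules, {"likes_math", "likes_programming", "high_gpa"}))
```

Note how the second rule fires only after the first has added its conclusion to working memory; this chaining is what lets an ES combine heuristics rather than apply them in isolation.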


The name BACIS combines "basic antenatal care (BANC) checklist" and "information system". This highlights the fact that the BACIS program is an information system implementing the guidelines for maternity care in South Africa and the basic antenatal care checklist process. The BACIS program was conceptualised by the author and the study obstetrician as a tool that could be used at primary healthcare level to improve compliance with maternal health protocols and the BANC checklist. The author's role was that of knowledge engineer and software developer, with the study obstetrician acting as the medical domain expert. This chapter presents the technical architecture of the BACIS program, including the technology used to create the system's rule base, as well as its data model, software classes, and interface.
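
A protocol-compliance rule base of the kind described can be sketched as simple threshold rules over visit data. The thresholds, field names, and alert texts below are illustrative assumptions for the sketch, not the BACIS program's actual rules:

```python
# Illustrative checklist-style rules: flag an antenatal visit whose recorded
# values fall outside assumed protocol thresholds.
def check_visit(visit):
    """Return the list of alerts raised for a visit record (a dict)."""
    alerts = []
    # Assumed rule: blood pressure of 140/90 mmHg or above triggers referral.
    if visit.get("systolic_bp", 0) >= 140 or visit.get("diastolic_bp", 0) >= 90:
        alerts.append("refer: possible hypertension")
    # Assumed rule: haemoglobin below 10 g/dL triggers anaemia management.
    if visit.get("haemoglobin", 99) < 10:
        alerts.append("treat: anaemia per protocol")
    return alerts

print(check_visit({"systolic_bp": 150, "diastolic_bp": 95, "haemoglobin": 9.5}))
```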


2020
Vol 41 (Supplement_2)
Author(s): C.W. Liu, R.H. Pan, Y.L. Hu

Abstract Background Left ventricular hypertrophy (LVH) is associated with increased risks of cardiovascular disease. Electrocardiography (ECG) is generally used to screen for LVH in the general population, and electrocardiographic LVH is further confirmed by transthoracic echocardiography (Echo). Purpose We aimed to establish an ECG LVH detection system validated against echo LVH. Methods We collected ECG and Echo data from a previous database. The voltages of the R- and S-amplitudes in each ECG lead were measured twice by a study assistant blinded to the study design (artificially measured). A knowledge engineer separately analyzed the raw ECG signals (the algorithm). We first checked the correlation of the R- and S-amplitudes between the artificial measurements and the algorithm. ECG LVH was defined by the voltage criteria, and Echo LVH by an LV mass index >115 g/m2 in men and >95 g/m2 in women. We then used decision trees, k-means, and a back-propagation neural network (BPNN), with and without heartbeat segmentation, to establish a rapid and accurate LVH detection system. The ratio of training set to test set was 7:3. Results The study comprised 953 individuals (90% male), 173 of whom had Echo LVH. The artificially measured and algorithm-derived R- and S-amplitudes were highly correlated, with Pearson correlation coefficients >0.9 in every lead (highest r = 0.997 in RV5; lowest r = 0.904 in aVR). Without heartbeat segmentation, the accuracies of the decision tree, k-means, and BPNN in predicting Echo LVH were 0.74, 0.73, and 0.51, respectively. With heartbeat segmentation, the number of Echo LVH signals expanded to 1466, and the accuracies improved markedly (0.92 for the decision tree, 0.96 for k-means, and 0.59 for the BPNN). Conclusions Based on ECG R- and S-amplitude signal analyses with heartbeat segmentation, the k-means model achieved the highest accuracy, outperforming the decision tree and the BPNN.
Figure 1. Three layers of the decision tree.
Funding Acknowledgement: Type of funding source: None
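
As an aside, a voltage criterion for ECG LVH of the kind the abstract mentions can be computed directly from R- and S-amplitudes. The sketch below uses the Sokolow-Lyon criterion (S in V1 plus the larger of R in V5/V6 at or above 3.5 mV), one common choice; the abstract does not state which voltage criterion the study actually used:

```python
# Sokolow-Lyon voltage criterion for electrocardiographic LVH.
def sokolow_lyon_lvh(s_v1, r_v5, r_v6):
    """Amplitudes in millivolts; True if the criterion is met."""
    return s_v1 + max(r_v5, r_v6) >= 3.5

print(sokolow_lyon_lvh(1.8, 2.0, 1.5))  # True: 1.8 + 2.0 = 3.8 mV
```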


2020
Vol 34 (4), pp. 491-500
Author(s): Rafael Peñaloza

Abstract The construction and maintenance of ontologies is an error-prone task. As such, it is not uncommon to detect unwanted or erroneous consequences in large-scale ontologies that are already deployed in production. While waiting for a corrected version, these ontologies should still be available for use in a "safe" manner that avoids the known errors. At the same time, the knowledge engineer in charge of producing the new version requires support to explore only the potentially problematic axioms and to reduce the number of exploration steps. In this paper, we explore the problem of deriving meaningful consequences from ontologies which contain known errors. Our work extends the ideas from inconsistency-tolerant reasoning to allow for arbitrary entailments as errors, and allows any part of the ontology (be it the terminological elements or the facts) to be the cause of the error. Our study shows that, with a few exceptions, tasks related to this kind of reasoning are intractable in general, even for very inexpressive description logics.
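
The repair-based style of inconsistency-tolerant reasoning that this work extends can be illustrated with a brute-force sketch: repairs are maximal conflict-free subsets of the statements, and a statement is "safely" entailed if it survives in every repair. The toy statements, and the simplification of giving the conflicts explicitly rather than deriving them with a reasoner, are our own:

```python
from itertools import combinations

# A repair is a maximal subset of the statements that contains no conflict.
def repairs(statements, conflicts):
    ok = lambda s: not any(c <= s for c in conflicts)
    subsets = [frozenset(c) for n in range(len(statements), -1, -1)
               for c in combinations(statements, n)]
    consistent = [s for s in subsets if ok(s)]
    return [s for s in consistent
            if not any(s < t for t in consistent)]   # keep only maximal ones

stmts = {"penguin(t)", "bird(t)", "flies(t)"}
conflicts = [frozenset({"penguin(t)", "flies(t)"})]  # known error, given as data
reps = repairs(stmts, conflicts)
safe = set.intersection(*map(set, reps))
print(safe)  # statements present in every repair survive "safe" reasoning
```

Here the two repairs keep either the penguin fact or the flying fact; only "bird(t)" is in both, so it is the only safely derivable statement. The paper's point is that computing such entailments is intractable in general once errors may be arbitrary entailments.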


2020
Vol 9 (1), pp. 92-111
Author(s): Hanane Zermane, Rached Kasmi

Manufacturing automation is a double-edged sword: on one hand, it increases the productivity of the production system, reduces cost, and improves reliability; on the other, it increases the system's complexity. This has led to the need for efficient solutions such as artificial intelligence techniques. Data and experience are extracted from experts, who usually rely on common sense when they solve problems and use vague and ambiguous terms. A knowledge engineer, however, would have difficulty providing a computer with the same level of understanding. To resolve this situation, this article first proposes fuzzy logic as a way to represent expert knowledge expressed in fuzzy terms when supervising complex industrial processes. As a second step, adopting one of the powerful techniques of machine learning, the Support Vector Machine (SVM), the authors classify data to determine the state of the supervision system and learn how to supervise the process while preserving the habitual linguistic terms used by operators.
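
The first step, representing an expert's vague linguistic terms, is typically done with membership functions; a trapezoidal shape is a common choice. The term "high temperature" and its breakpoints below are illustrative assumptions, not values from the article:

```python
# Trapezoidal membership function for a fuzzy linguistic term.
def trapezoid(x, a, b, c, d):
    """Membership rises on [a, b], is 1 on [b, c], and falls on [c, d]."""
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    if x < b:
        return (x - a) / (b - a)
    return (d - x) / (d - c)

# Hypothetical expert term: "temperature is high" (degrees Celsius).
high_temp = lambda t: trapezoid(t, 60, 80, 100, 120)
print(high_temp(70))  # 0.5: partially "high", as an operator might say
```

Degrees of membership like these, rather than raw sensor values, could then serve as features for the SVM classifier in the second step.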

