Developing an Ontology Evaluation Methodology: Cognitive Measure of Quality

2021
Author(s):
Chia-wen Fang

Ontologies are formal specifications of shared conceptualizations of a domain. Important applications of ontologies include distributed knowledge-based systems, such as the semantic web, and the evaluation of modelling languages, e.g. for business process or conceptual modelling. These applications require formal ontologies of good quality. In this thesis, we present a multi-method ontology evaluation methodology, consisting of two techniques (a sentence verification task and recall) based on principles of cognitive psychology, to test how well the specification of a formal ontology corresponds to the ontology users' conceptualization of a domain. Two experiments were conducted, each evaluating the SUMO ontology and WordNet with one of the experimental techniques, as demonstrations of the multi-method evaluation methodology. We also tested the applicability of the two evaluation techniques by conducting a replication study for each. The replication studies obtained findings that point in the same direction as the original studies, although statistical significance was not reached. Overall, the evaluation using the multi-method methodology suggests that neither of the two ontologies we examined is a good specification of the conceptualization of the domain; both the terminology and the structure of the ontologies may benefit from improvement.
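
As an illustration of the kind of analysis a sentence verification task supports, the sketch below compares verification latencies and agreement rates for ontology-derived statements against matched foil statements. The statements, participant data, and use of Welch's t-test are illustrative assumptions, not details taken from the thesis.

```python
# Minimal sketch of analysing sentence-verification data for ontology evaluation.
# Hypothetical setup: participants judge statements derived from ontology axioms
# and matched foil statements as true or false; agreement with the ontology and
# verification latency are then compared. All data below are toy values.
from scipy import stats

# Each record: (statement, participant_agrees_with_ontology, response_time_ms)
axiom_trials = [
    ("A dog is an animal", True, 812),
    ("A contract is an abstract entity", True, 1430),
    ("A smile is an object", False, 1975),
]
foil_trials = [
    ("A dog is a plant", True, 790),
    ("A lake is an abstract entity", True, 905),
]

def agreement_rate(trials):
    """Proportion of statements on which participants agreed with the ontology."""
    return sum(agrees for _, agrees, _ in trials) / len(trials)

# Slower or less consistent responses to axiom-derived statements would suggest
# the specification diverges from users' conceptualization of the domain.
axiom_rt = [rt for _, _, rt in axiom_trials]
foil_rt = [rt for _, _, rt in foil_trials]
t, p = stats.ttest_ind(axiom_rt, foil_rt, equal_var=False)
print(f"agreement={agreement_rate(axiom_trials):.2f}, t={t:.2f}, p={p:.3f}")
```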


Author(s):  
Michael Shaughnessy

From 1980 to 2000, many articles were written on the subject of software review and evaluation. On initial investigation of educational software evaluation methodologies, it appears that there are as many methodologies as there are authors presenting them. Several methodology analyses have been written describing these evaluation techniques (Bryson & Cullen, 1984; Eraut, 1989; Holznagel, 1983; Jones et al., 1999; McDougall & Squires, 1995; Reiser & Kegelmann, 1994, 1996; Russell & Blake, 1988). Each of these articles describes various methodologies and presents the most current evaluation methodology available, but fails to provide a complete history of the types of evaluation methodologies. These analyses focus on individual methodologies but refrain from placing them in a broader systematic context.


Author(s):  
Thomas J. Hagedorn ◽  
Sundar Krishnamurty ◽  
Ian R. Grosse

Additive manufacturing (AM) offers significant opportunities for product innovation in many fields, provided that designers are able to recognize the potential value of AM in a given product development process. However, this may be challenging for design teams without substantial experience with the technology. Design inspiration based on past successful applications of AM may facilitate its application even in relatively inexperienced teams. While design for additive manufacturing (DFAM) methods have experimented with reuse of past knowledge, they may not be sufficient to fully realize AM's innovative potential. In many instances, relevant knowledge may be hard to find, lack context, or simply be unavailable. This design information is also typically divorced from the underlying logic of a product's business case. In this paper, we present a knowledge-based method for AM design ideation as well as the development of a suite of modular, highly formal ontologies to capture information about innovative uses of AM. This underlying information model, the innovative capabilities of additive manufacturing (ICAM) ontology, aims to facilitate innovative use of AM by connecting a repository of business and technical knowledge about past AM products with a collection of knowledge bases detailing the capabilities of various AM processes and machines. Two case studies are used to explore how this linked knowledge can be queried in the context of a new design problem to identify highly relevant examples of existing products that leveraged AM capabilities to solve similar design problems.
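
To illustrate how such a linked repository might be queried during ideation, the sketch below issues a SPARQL query over an RDF export using rdflib. The namespace, class, and property names are hypothetical placeholders, not the actual ICAM schema.

```python
# Minimal sketch of querying a linked AM design-knowledge repository with SPARQL.
# The file name, namespace, and property names below are illustrative assumptions.
from rdflib import Graph

g = Graph()
g.parse("icam_example.ttl", format="turtle")  # assumed local export of the knowledge base

# Find past products that exploited an AM capability relevant to a new design need,
# together with the process or machine that provided that capability.
query = """
PREFIX ex: <http://example.org/icam#>
SELECT ?product ?capability ?process WHERE {
    ?product    ex:exploitsCapability ?capability .
    ?capability ex:providedBy         ?process .
    ?capability ex:addresses          ex:LightweightingNeed .
}
"""
for product, capability, process in g.query(query):
    print(product, capability, process)
```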


2015
Vol 1 (1)
pp. 153-160
Author(s):
Mihail Aurel Ţîţu
Constantin Oprean
Andreea Simina Răulea
Ştefan Ţîţu

Intellectual property is a concept whose content and materialization increasingly attract the attention of researchers and practitioners; the growing number of works addressing the subject supports this claim. Intellectual property assets attract the interest of organizations from the local to the global level. Important pillars of the Europe 2020 strategy formulated by the European Commission rest on the capitalization of innovation knowledge and of intellectual property. The increased interest in innovation and intangible assets stems from an awareness of their economic potential. For this reason, the evaluation and valuation of intellectual property capitalization call for a unanimously accepted evaluation methodology. The aim of this article is to present a visualization and evaluation instrument for intellectual property assets, developed within the framework of a European research project with 15 partners from countries in South Eastern Europe.


2009
Vol 12 (1)
Author(s):
Marta E. Calderón

Human-computer interaction is a very recent discipline at the Universidad de Costa Rica. In this paper we present the experiences from the first academic year in which the first courses on human-computer interaction, an undergraduate course and a Masters course, were designed and taught. The HCI course introduction strategy consisted of two steps: 1) to initiate a dedicated undergraduate course during the first term, and 2) to initiate a dedicated Masters course during the second term, taught simultaneously with the undergraduate course. Both courses share the same outline. However, due to differences between undergraduate and graduate students and between undergraduate and Masters courses, differences in evaluation methodology were implemented, resulting in more assignments and a higher level of demand for graduate students. Work in the classroom is different for each of the courses, because graduate students can build their own knowledge based on their previous working experience and on the exchange of ideas with other students. In both the undergraduate and Masters courses, emphasis is placed on practice supported by theory.


Author(s):
Khalid Salmi
Hamid Magrez
Abdelhak Ziyyat

In order to maintain training quality and ensure efficient learning, the introduction of a scalable and well-adapted evaluation system is essential. An adequate evaluation system positively involves students in the evaluation of their own learning, while providing teachers with indicators of students' strengths, the specific difficulties they encounter, and the parts of the studied material that were wrongly or incompletely understood. In this context, we present in this article a novel intelligent evaluation methodology based on fuzzy logic and knowledge-based expert systems. The principle of this methodology is to reify the abstract concepts of human expertise in a numerical inference engine applied to evaluation; it therefore reproduces the cognitive mechanisms of evaluation experts. An implementation example is presented to compare this method with the classical one and to draw conclusions about its efficiency. Furthermore, thanks to its flexibility, different kinds of extensions are possible by updating the basic rules and adjusting to possible new architectures and new types of evaluation.
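
A minimal sketch of the kind of fuzzy rule base such a system might use is shown below. The membership functions, rules, and representative scores are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of fuzzy-logic grading: fuzzify two inputs, apply expert-style
# rules (min for AND), and defuzzify with a weighted average. All shapes,
# rules, and representative scores are illustrative assumptions.

def tri(x, a, b, c):
    """Triangular membership function peaking at b over the interval [a, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def grade(correctness, response_time):
    """Combine answer correctness (0-100) and response time (s) into a grade."""
    # Fuzzify the inputs.
    good_answer = tri(correctness, 50, 100, 150)  # high correctness
    weak_answer = tri(correctness, -50, 0, 60)    # low correctness
    fast = tri(response_time, -30, 0, 40)
    slow = tri(response_time, 20, 60, 100)

    # Expert-style rules.
    mastered = min(good_answer, fast)   # good and fast -> mastered
    hesitant = min(good_answer, slow)   # good but slow -> hesitant
    struggling = weak_answer            # weak answer -> struggling, regardless of time

    # Defuzzify with a weighted average of representative scores per conclusion.
    total = mastered + hesitant + struggling
    if total == 0:
        return 0.0
    return (mastered * 90 + hesitant * 65 + struggling * 30) / total

print(grade(correctness=85, response_time=25))  # toy example, prints ~83.8
```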


1995
Vol 10 (4)
pp. 331-343
Author(s):
Pedro Meseguer
Alun D. Preece

This paper examines how formal specification techniques can support the verification and validation (V&V) of knowledge-based systems. Formal specification techniques provide levels of description which support both verification and validation, and V&V techniques feed back to assist the development of the specifications. Developing a formal specification for a system requires the prior construction of a conceptual model for the intended system. Many elements of this conceptual model can be effectively used to support V&V. Using these elements, the V&V process becomes deeper and more elaborate, and it produces results of a better quality compared with the V&V activities which can be performed on systems developed without conceptual models. However, we note that there are concerns in using formal specification techniques for V&V, not least being the effort involved in creating the specifications.


1996
Vol 11 (3)
pp. 253-280
Author(s):
Christine Pierret-Golbreich
Xavier Talon

TFL, the Task Formal Language, has been developed for integrating the static and dynamic aspects of knowledge-based systems. This paper focuses on the formal specification of dynamic behaviour. Although fundamental in knowledge-based systems, strategic reasoning has until now been rather neglected by existing formal specifications; most languages have generally focused more on the specification of domain and problem-solving knowledge than on control. The formalisation presented here differs from previous ones in several respects. First, a different representation of dynamic knowledge is proposed: TFL is based on algebraic data types, as opposed to dynamic or temporal logic. Second, dynamic strategic reasoning is emphasised, whereas existing languages only allow algorithmic control to be specified. Third, TFL provides the specification not only of the problem-solving knowledge of the object system, but also of its strategic knowledge. Finally, the dynamic knowledge of the meta-system itself is also specified. Moreover, modularisation is another important feature of the presented language.


2021
Author(s):
Luke T Slater
John A Williams
Andreas Karwath
Hilary Fanning
Simon Ball
...

Identification of ontology concepts in clinical narrative text enables the creation of phenotype profiles that can be associated with clinical entities, such as patients or drugs. Constructing patient phenotype profiles using formal ontologies enables their analysis via semantic similarity, in turn enabling the use of background knowledge in clustering or classification analyses. However, traditional semantic similarity approaches collapse complex relationships between patient phenotypes into a unitary similarity score for each pair of patients. Moreover, a single score may be based only on matching terms with the greatest information content (IC), ignoring other dimensions of patient similarity. This process necessarily leads to a loss of information in the resulting representation of patient similarity, and is especially apparent when using very large text-derived and highly multi-morbid phenotype profiles. It also makes finding a biological explanation for similarity very difficult: the black box problem. In this article, we explore the generation of multiple semantic similarity scores for patients based on different facets of their phenotypic manifestation, which we define through different sub-graphs of the Human Phenotype Ontology. We further present a new methodology for deriving sets of qualitative class descriptions for groups of entities described by ontology terms. Leveraging this strategy to obtain meaningful explanations for our semantic clusters, alongside other evaluation techniques, we show that semantic clustering with ontology-derived facets enables the representation, and thus identification, of clinically relevant phenotype relationships that are not easily recoverable using overall clustering alone. In this way, we demonstrate the potential of faceted semantic clustering for gaining a deeper and more nuanced understanding of text-derived patient phenotypes.
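
The sketch below illustrates one way a faceted, best-match-average Resnik similarity could be computed by restricting phenotype profiles to terms under a chosen facet root. The term identifiers, information-content values, and ancestor sets are toy stand-ins rather than values derived from the study or the full Human Phenotype Ontology.

```python
# Minimal sketch of faceted semantic similarity between phenotype profiles.
# IC values and ancestor closures are toy stand-ins; a real implementation would
# derive them from the Human Phenotype Ontology and a reference corpus.

# Information content per HPO term (hypothetical values).
IC = {"HP:0001627": 4.2, "HP:0011025": 3.1, "HP:0000708": 3.8, "HP:0000118": 0.1}

# Ancestor closure per term, including the term itself (hypothetical).
ANCESTORS = {
    "HP:0001627": {"HP:0001627", "HP:0011025", "HP:0000118"},
    "HP:0011025": {"HP:0011025", "HP:0000118"},
    "HP:0000708": {"HP:0000708", "HP:0000118"},
}

def resnik(t1, t2):
    """IC of the most informative common ancestor of two terms."""
    common = ANCESTORS[t1] & ANCESTORS[t2]
    return max((IC[a] for a in common), default=0.0)

def faceted_similarity(profile_a, profile_b, facet_root):
    """Best-match-average Resnik similarity, restricted to terms under one facet."""
    a = [t for t in profile_a if facet_root in ANCESTORS[t]]
    b = [t for t in profile_b if facet_root in ANCESTORS[t]]
    if not a or not b:
        return 0.0
    best_a = sum(max(resnik(x, y) for y in b) for x in a) / len(a)
    best_b = sum(max(resnik(x, y) for x in a) for y in b) / len(b)
    return (best_a + best_b) / 2

# Compare two toy patient profiles within a single (hypothetical) facet only.
print(faceted_similarity({"HP:0001627"}, {"HP:0011025"}, facet_root="HP:0011025"))
```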


1990
Vol 5 (4)
pp. 215-221
Author(s):
Beat Hochstrasser

This paper presents part of a three-year Kobler Unit study into current practices of managing IT investments, involving 60 managers from 34 British companies. Guidelines were collected to assess the true costs of deploying IT, taking into account technological, human and organizational costs. Drawing on examples of best practice, an evaluation methodology was then developed which concentrates both on the primary objectives of systems and on the inevitable second-order effects resulting from their broader human and organizational impact. The methodology is eclectic in that it matches specific evaluation techniques to distinct classes of IT projects.

