Ontologies to Lead Knowledge Intensive Evolutionary Algorithms

2016 ◽  
Vol 7 (1) ◽  
pp. 78-100 ◽  
Author(s):  
Carlos Adrian Catania ◽  
Cecilia Zanni-Merk ◽  
François de Bertrand de Beuvron ◽  
Pierre Collet

Evolutionary Algorithms (EA) have proven to be very effective in optimizing intractable problems in many areas. However, real problems including specific constraints are often overlooked by the proposed generic models. The authors' goal here is to show how knowledge engineering techniques can be used to guide the definition of EAs for problems involving a large amount of structured data, through the resolution of a real problem. They propose a methodology based on the structuring of the conceptual model underlying the problem, in the form of a labelled domain ontology suitable for optimization by EA. The case study focuses on the logistics involved in the transportation of patients. Although this problem belongs to the well-known family of Vehicle Routing Problems, its specificity comes from the data and constraints (cost, legal and health considerations) that must be taken into account. The precise definition of the knowledge model with the labelled domain ontology permits the formal description of the chromosome, the fitness functions and the genetic operators.
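
To make the idea concrete, here is a minimal sketch (not the authors' implementation; all concept labels, attributes, and numbers are hypothetical) of how an ontology-typed chromosome and a constraint-aware fitness function for a patient-transport routing problem might look in Python:

```python
import random

# Hypothetical mini-ontology: every individual is typed by a concept label
# ("Patient", "Vehicle"), and each label carries the attributes that the
# domain constraints refer to. All names and numbers are illustrative.
patients = [
    {"id": "p1", "needs_wheelchair": True,  "x": 2, "y": 3},
    {"id": "p2", "needs_wheelchair": False, "x": 5, "y": 1},
    {"id": "p3", "needs_wheelchair": False, "x": 1, "y": 6},
]
vehicles = [
    {"id": "v1", "wheelchair_capable": True,  "capacity": 2},
    {"id": "v2", "wheelchair_capable": False, "capacity": 2},
]

def random_chromosome():
    """A chromosome assigns each patient to a vehicle (one gene per patient)."""
    return [random.choice(vehicles) for _ in patients]

def fitness(chromosome):
    """Lower is better: a crude travel-cost proxy plus heavy penalties for
    genes that violate ontology-encoded constraints (health, capacity)."""
    cost = 0.0
    load = {v["id"]: 0 for v in vehicles}
    for patient, vehicle in zip(patients, chromosome):
        cost += abs(patient["x"]) + abs(patient["y"])  # distance proxy
        load[vehicle["id"]] += 1
        if patient["needs_wheelchair"] and not vehicle["wheelchair_capable"]:
            cost += 1000.0                             # health constraint
    for v in vehicles:
        if load[v["id"]] > v["capacity"]:
            cost += 1000.0                             # capacity constraint
    return cost

best = min((random_chromosome() for _ in range(200)), key=fitness)
print([v["id"] for v in best], fitness(best))
```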

Author(s):  
Carlos Adrian Catania ◽  
Cecilia Zanni-Merk ◽  
François de Bertrand de Beuvron ◽  
Pierre Collet

In this chapter, the authors show how knowledge engineering techniques can be used to guide the definition of evolutionary algorithms (EA) for problems involving a large amount of structured data, through the resolution of a real problem. Various representations of the fitness functions, the genome, and mutation/crossover operators adapted to different types of problems (routing, scheduling, etc.) have been proposed in the literature. However, real problems including specific constraints (legal restrictions, specific usages, etc.) are often overlooked by the proposed generic models. To ensure that these constraints are effectively considered, the authors propose a methodology based on the structuring of the conceptual model underlying the problem, as a labelled domain ontology suitable for optimization by EA. They show that a precise definition of the knowledge model with a labelled domain ontology can be used to describe the chromosome, the evaluation functions, and the crossover and mutation operators, and they present the details of a real implementation along with some experimental results.
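
Continuing the hypothetical sketch above, label-aware genetic operators might look as follows. Because position i in every chromosome holds a gene of the same ontology type, these operators cannot produce type-incorrect offspring (again, a sketch under assumed names, not the chapter's implementation):

```python
import random

def mutate(chromosome, vehicles, rate=0.2):
    """Replace a gene only with another instance of the same ontology
    concept ("Vehicle"), so mutation preserves type correctness."""
    return [random.choice(vehicles) if random.random() < rate else gene
            for gene in chromosome]

def crossover(parent_a, parent_b):
    """One-point crossover is label-safe here: position i holds a gene of
    the same ontology type in both parents."""
    point = random.randrange(1, len(parent_a))
    return parent_a[:point] + parent_b[point:]
```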


2015 ◽  
Vol 2015 ◽  
pp. 1-15 ◽  
Author(s):  
Dazhi Jiang ◽  
Zhun Fan

At present there is a wide range of evolutionary algorithms available to researchers and practitioners. Despite the great diversity of these algorithms, virtually all of them share one feature: they have been manually designed. A fundamental question is "are there any algorithms that can design evolutionary algorithms automatically?" A more complete definition of the question is "can a computer construct an algorithm which will generate algorithms according to the requirements of a problem?" In this paper, a novel evolutionary algorithm based on the automatic design of genetic operators is presented to address these questions. The resulting algorithm not only explores solutions in the problem space like most traditional evolutionary algorithms do, but also automatically generates genetic operators in the operator space. In order to verify the performance of the proposed algorithm, comprehensive experiments on 23 well-known benchmark optimization problems are conducted. The results show that the proposed algorithm can outperform the standard differential evolution algorithm in terms of convergence speed and solution accuracy, which demonstrates that algorithms designed automatically by computers can compete with algorithms designed by human beings.
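
The core idea can be sketched as follows (a hedged illustration, not the paper's exact algorithm; the operator encoding and all parameters are assumptions): each individual carries both a candidate solution and the parameters of the mutation operator applied to it, so selection on solution fitness implicitly selects good operators as well.

```python
import random

def sphere(x):
    """Benchmark objective to minimize."""
    return sum(v * v for v in x)

DIM, POP, GENS = 5, 30, 200
pop = [([random.uniform(-5, 5) for _ in range(DIM)],          # solution genome
        {"scale": random.uniform(0.01, 1.0),                   # operator genome
         "rate": random.uniform(0.1, 1.0)})
       for _ in range(POP)]

for _ in range(GENS):
    offspring = []
    for sol, op in pop:
        # operator space: mutate the operator itself
        new_op = {"scale": max(1e-3, op["scale"] * random.uniform(0.8, 1.25)),
                  "rate": min(1.0, max(0.05, op["rate"] + random.gauss(0, 0.1)))}
        # problem space: apply the (mutated) operator to the solution
        new_sol = [v + random.gauss(0, new_op["scale"])
                   if random.random() < new_op["rate"] else v
                   for v in sol]
        offspring.append((new_sol, new_op))
    # truncation selection on solution fitness keeps useful operators implicitly
    pop = sorted(pop + offspring, key=lambda ind: sphere(ind[0]))[:POP]

print(round(sphere(pop[0][0]), 6))
```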


In the living world, all species share one very important property, received right after birth: the survival instinct. Since the middle of the twentieth century, scientists have been applying this phenomenon in engineering in order to define computer algorithms that follow the principles of the biological evolution of species. Decades later, scientists and engineers are still applying it to solve today's most complex and varied problems. This chapter introduces evolutionary algorithms and motivates the reader to start a journey into genetic programming (GP). The chapter starts with an introduction and detailed insights into GP, describing its main parts and terminology in order to map biological terms onto their counterparts in genetic programming. The reader is then introduced to the historical evolution of GP and its main and most popular variants, with dozens of cited references. The chapter continues with a detailed introduction to chromosomes, populations, initialization and selection methods, the main genetic operators, various types of fitness functions, termination criteria, etc. Since GP is a processor-intensive algorithm, it requires parallel execution to increase its performance, which is described at the end of the chapter.
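
As a compact illustration of the ingredients listed above (chromosomes, initialization, selection, genetic operators, fitness), here is a minimal GP sketch: expression trees as chromosomes, random initialization, truncation selection, subtree mutation, and an error-based fitness against an assumed target function. Crossover is omitted for brevity, and all parameters are illustrative.

```python
import random
import operator

OPS = {"+": operator.add, "-": operator.sub, "*": operator.mul}

def rand_tree(depth=3):
    """Randomly grow an expression tree over {x, constants, +, -, *}."""
    if depth == 0 or random.random() < 0.3:
        return "x" if random.random() < 0.5 else random.uniform(-2, 2)
    return [random.choice(list(OPS)), rand_tree(depth - 1), rand_tree(depth - 1)]

def evaluate(tree, x):
    if tree == "x":
        return x
    if isinstance(tree, float):
        return tree
    return OPS[tree[0]](evaluate(tree[1], x), evaluate(tree[2], x))

def fitness(tree):
    """Mean squared error against a hidden target, here x^2 + x (assumed)."""
    xs = [i / 4 for i in range(-8, 9)]
    return sum((evaluate(tree, x) - (x * x + x)) ** 2 for x in xs) / len(xs)

def mutate(tree):
    """Subtree mutation: occasionally replace a node with a fresh subtree."""
    if random.random() < 0.1:
        return rand_tree(2)
    if isinstance(tree, list):
        return [tree[0], mutate(tree[1]), mutate(tree[2])]
    return tree

pop = [rand_tree() for _ in range(100)]
for _ in range(40):
    pop.sort(key=fitness)
    parents = pop[:20]                      # truncation selection
    pop = parents + [mutate(random.choice(parents)) for _ in range(80)]

print(round(fitness(min(pop, key=fitness)), 4))
```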


Author(s):  
Burton H. Lee

Product design and diagnosis are, today, worlds apart. Despite strong areas of overlap at the ontological level, traditional design process theory and practice does not recognize diagnosis as a part of the modeling process chain; neither do diagnosis knowledge engineering processes reference design modeling tasks as a source of knowledge acquisition. This paper presents the DAEDALUS knowledge engineering framework as a methodology for integrating design and diagnosis tasks, models, and modeling environments around a common Domain Ontology and Product Models Library. The approach organizes domain knowledge around the execution of a set of tasks in an enterprise product engineering task workflow. Each task employs a Task Application which uses a customized subset of the Domain Ontology—the Task Ontology—to construct a graphical Product Model. The Ontology is used to populate the models with relevant concepts (variables) and relations (relationships), thus serving as a concept dictionary-style mechanism for knowledge sharing and reuse across the different Task Applications. For inferencing, each task employs a local Problem-Solving Method (PSM), and a Model-PSM Mapping, which operate on the local Product Model to produce reasoning outcomes. The use of a common Domain Ontology across tasks and models facilitates semantic consistency of variables and relations in constructing Bayesian networks for design and diagnosis. The approach is motivated by inefficiencies encountered in cleanly exchanging and integrating design FMEA and diagnosis models. Demonstration software under development is intended to illustrate how the DAEDALUS framework can be applied to knowledge sharing and exchange between Bayesian network-based design FMEA and diagnosis modeling tasks. Anticipated limitations of the DAEDALUS methodology are discussed, as is its relationship to Tomiyama's Knowledge Intensive Engineering Framework (KIEF). DAEDALUS is grounded in formal knowledge engineering principles and methodologies established during the past decade. Finally, the framework is presented as one possible approach for improved integration of generalized design and diagnostic modeling and knowledge exchange.
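
As a schematic illustration (all concept names are hypothetical, not drawn from DAEDALUS itself), the role of the shared Domain Ontology as a concept dictionary can be sketched like this: each task draws its Task Ontology as a subset, so the variables appearing in the design-FMEA and diagnosis Bayesian networks stay semantically consistent.

```python
# Hypothetical sketch: a shared Domain Ontology as a concept dictionary,
# with each task drawing a Task Ontology subset from it so that the
# design-FMEA and diagnosis models use identical variable definitions.
DOMAIN_ONTOLOGY = {
    "BearingWear":    {"states": ["ok", "worn"]},
    "ShaftVibration": {"states": ["low", "high"]},
    "MotorFailure":   {"states": ["no", "yes"]},
}

TASK_ONTOLOGIES = {
    "design_fmea": ["BearingWear", "MotorFailure"],
    "diagnosis":   ["BearingWear", "ShaftVibration", "MotorFailure"],
}

def task_variables(task):
    """Both tasks pull variables from the same dictionary, so a node named
    'BearingWear' means the same thing in each Bayesian network."""
    return {c: DOMAIN_ONTOLOGY[c] for c in TASK_ONTOLOGIES[task]}

print(sorted(task_variables("design_fmea")))
print(sorted(task_variables("diagnosis")))
```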


2021 ◽  
Author(s):  
William F. Quintero-Restrepo ◽  
Brian K. Smith ◽  
Junfeng Ma

The efficient creation of 3D CAD platforms can be achieved by optimizing their design process. The research presented in this article showcases a method for enabling such efficiency improvement. The method is based on the DMADV Six Sigma approach. During the Define step, the scope and design space are established. In the Measure step, the initial evaluation of the platforms to be improved is done with the help of a metrics framework for 3D CAD platforms. The Analyze step includes the identification and optimization of the systems model of the process, based on the architecture and the multiple objectives required for the improvement. The optimization method used, based on evolutionary algorithms, allows for the identification of the best improvement alternatives for the next step. During the Design step, the improvement alternatives are planned and executed. In the final Verify step, the improved process is evaluated against its previous status with the help of the metrics framework for 3D CAD platforms. The method is explained with an example case of a 3D CAD platform for creating metallic boxes for electric machinery.
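
The Analyze step's multi-objective evolutionary search can be illustrated with a hedged sketch (not the article's model; the alternatives, objectives, and parameters are all assumptions): candidate improvement plans are evolved and the non-dominated ones, trading implementation effort against remaining process inefficiency, are kept as the alternatives carried into the Design step.

```python
import random

def objectives(alt):
    """Two conflicting objectives to minimize for an improvement plan that
    allocates an effort level 0-3 to four hypothetical improvement actions."""
    effort = sum(alt)                            # total implementation effort
    inefficiency = sum((3 - a) ** 2 for a in alt)  # remaining process inefficiency
    return effort, inefficiency

def dominates(a, b):
    """Pareto dominance for minimization."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

pop = [[random.randint(0, 3) for _ in range(4)] for _ in range(40)]
for _ in range(50):
    parent = random.choice(pop)
    child = [min(3, max(0, g + random.choice((-1, 0, 1)))) for g in parent]
    pop.append(child)
    scored = [(objectives(p), p) for p in pop]
    # keep the non-dominated front, topped up with random survivors
    front = [p for s, p in scored
             if not any(dominates(t, s) for t, _ in scored)]
    pop = front + random.sample(pop, max(0, 40 - len(front)))

# The surviving set approximates the effort/inefficiency trade-off.
for alt in pop[:5]:
    print(alt, objectives(alt))
```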


Author(s):  
S. Solodovnicov

The article is devoted to the theoretical substantiation of a new social paradigm: the risk economy. The current stage of development of society and the economy is characterized by a critical increase in financial, technological, political and economic, geo-economic and other uncertainties. It is impossible to understand their ontological nature and reveal their phenomenological specificity without a meaningful definition of the current stage of development of the economic system of society. The article consistently reveals the characteristics of current society, which allows the author to present a new political and economic concept characterizing the current stage of development of society and the economy: the risk economy. The risk economy is an economy of high-tech and knowledge-intensive industries, characterized by the highest degree of political, economic, technological, financial and environmental uncertainties and risks. These risks are becoming comprehensive, many of them are in principle unpredictable, and their possible negative consequences could lead humanity to a global catastrophe. Understanding the nature of the risk economy is critically important for developing effective political and economic mechanisms to counter these risks.


2020 ◽  
Vol 2 (1) ◽  
pp. 32-35
Author(s):  
Eric Holloway

Leonid Levin developed the first stochastic conservation of information law, describing it as "torturing an uninformed witness cannot give information about the crime." Levin's law unifies both the deterministic and stochastic cases of conservation of information. A proof of Levin's law from Algorithmic Information Theory is given, along with a discussion of its implications for evolutionary algorithms and fitness functions.
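
For reference, the standard information non-growth inequalities from Algorithmic Information Theory that the abstract alludes to can be stated as follows (a textbook-style sketch, not a quotation from the paper; notation: I(x:y) is algorithmic mutual information, K(f) the complexity of the computable map f, and r a string of random bits):

```latex
% Deterministic case: computable processing cannot create mutual information.
I(f(x) : y) \;\le\; I(x : y) + K(f) + O(1)

% Stochastic case (Levin): randomization helps only with probability that
% decays exponentially in the number of gained bits k.
\Pr_r\!\left[\, I(\langle x, r\rangle : y) \,>\, I(x : y) + k \,\right] \;\le\; 2^{-k + O(1)}
```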


Author(s):  
Laura L. Liptai

The scientific method is utilized in order to understand the relationship among observations of physical phenomena, while minimizing the influence of human bias and maximizing objectivity. Specific procedures for the application of the scientific method vary from one field of science to another, but the investigative technique universally provides an analytical framework to acquire, collect, and/or integrate knowledge. Engineering forensics involves the analysis of the parameters or cause(s) of incidents or failures and/or hypothetical prevention methods. Engineering analysis of forensic problems is a multifaceted, multidisciplinary pursuit that is often wide in scope. Forensic engineering generally applies existing science in conjunction with the knowledge, education, experience, training, and skill of the practitioner to seek solution(s). The scientific method, including the definition of a null hypothesis, is rarely utilized in forensics, as new science is rarely required. A forensic engineering investigation typically involves the application of long-established science (Newton's laws, for example). Forensic engineering encompasses the systematic search for knowledge, necessitating the observation and definition of a problem; the collection of data through observation, research, experimentation, and/or calculation; the analysis of data; and the development and evaluation of findings and opinions. The ultimate objective of a forensic engineering investigation is uncompromised data collection and systematically considered, iteratively derived, and objectively balanced conclusions.


2020 ◽  
Author(s):  
Margot Barry ◽  
Wietske Kuijer ◽  
Anke Persoon ◽  
Loek Nieuwenhuis ◽  
Nynke Scherpbier

Background: Twelve clinician-scientists were employed in a Dutch academic network, a collaboration between fifteen nursing homes and an academic medical research institute. The clinician-scientists were tasked with linking research and clinical practice by catalysing both care-informed research and evidence-informed implementation initiatives. The clinician-scientists and their manager experienced difficulties in clearly defining the knowledge broker role of the clinician-scientists, a difficulty also reported in the literature. They found no tools or methods suitable for making their knowledge broker role visible. Clarifying role expectations and accountability for funding these knowledge broker positions was difficult. They aimed to design a theory-informed performance appraisal tool that would allow clinician-scientists to explicate and develop their knowledge broker role in collaboration with management.

Methods: A participatory design study was conducted over a 21-month period with a design group consisting of an external independent researcher, clinician-scientists, and their managers from within the academic network.

Results: A tool (the SP-tool) was developed in MS Excel. This allowed clinician-scientists to log their knowledge broker activities as distinct from their clinical work and research-related activities. The tool contributed to their ability to make their knowledge broker role visible to themselves and their stakeholders. The theoretical contribution of the design research is a conceptual model of the professionalisation of the clinician-scientist's knowledge broker role. This model presents the relationship between work visibility and the clarification of the functions of the clinician-scientist's knowledge broker role. In the professionalisation of knowledge-intensive work, visibility contributes to the definition of clinician-scientist broker functions, an element necessary for the professionalisation of an occupation.

Conclusions: The clinician-scientist's knowledge broker role is a knowledge-intensive role, and the work tasks associated with it are not automatically visible. The SP-tool contributes to creating work visibility for the clinician-scientists' knowledge broker role. This in turn could contribute to the professionalisation of this role, which is not well described in the literature at the day-to-day professional level.


Author(s):  
Elena Irina Neaga

This chapter deals with a roadmap for the bidirectional interaction and support between knowledge discovery (KD) processes and ontology engineering (Onto), mainly directed at providing refined models using common methodologies. This approach provides a holistic literature review required for the further definition of a comprehensive framework and an associated meta-methodology (KD4Onto4DM) based on existing theories, paradigms, and practices regarding knowledge discovery and ontology engineering, as well as closely related areas such as knowledge engineering, machine/ontology learning, standardization issues, and architectural models. The suggested framework may adhere to the ISO Reference Model for Open Distributed Processing and the OMG Model-Driven Architecture, and associated dedicated software architectures should be defined.

