INFERENCE TO THE BEST EXPLANATION. AMONG DEDUCTION, INDUCTION AND ABDUCTION

Problemos, 2009, Vol. 76, pp. 150-161
Author(s):  
Adolfas Mackonis

Inference to the best explanation (IBE) stands out as the main form of reasoning by which scientific hypotheses and theories are discovered and justified. The article investigates IBE and its relationship to the main kinds of inference: deduction, induction and abduction. IBE shares the inferential mechanism of abduction but, contrary to abduction, infers not a merely possible conclusion but a purportedly true one. IBE is an inductive inference in the broad sense, because its conclusion is underdetermined both by the rules of deduction and by the evidence. The article claims that despite these abductive and inductive features, which show that IBE is not and cannot be a deductive inference, IBE nevertheless lays claim to the absolute truth of its conclusion, i.e. it asserts an almost deductive validity for it.

Keywords: inference to the best explanation, deduction, induction, abduction.
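The contrast among the three kinds of inference can be made vivid with Peirce's classic bean example; the schemas below are a standard textbook rendering, not taken from the article itself.

```latex
% Peirce's bean schemas for the three kinds of inference (standard
% textbook rendering, used here for illustration).
% Deduction: the conclusion is guaranteed by the premises.
\[
\frac{\text{all beans in this bag are white} \qquad \text{these beans are from this bag}}
     {\text{these beans are white}}
\]
% Induction: a general rule is inferred from observed cases.
\[
\frac{\text{these beans are from this bag} \qquad \text{these beans are white}}
     {\text{all beans in this bag are white}}
\]
% Abduction: a case is hypothesized because it would explain the result.
\[
\frac{\text{all beans in this bag are white} \qquad \text{these beans are white}}
     {\text{these beans are from this bag}}
\]
```

On the article's account, IBE runs on the abductive schema but strengthens its output: where abduction offers the hypothesis as merely possible, IBE asserts it as true, which is what invites the charge of claiming near-deductive validity.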

1976, Vol. 6 (3), pp. 561-568
Author(s):  
Douglas Odegard

Edmund Gettier objects to analysing knowledge as justified true belief (JTB) on the ground that someone can justifiably infer a true conclusion from a justified false premise and hence not know the conclusion's truth, although the conclusion is justified. For instance, someone can justifiably deduce a true p v r from a justified but false p, where he has no justification for the true r. Gettier's objection draws on two assumptions: first, that a justified belief can be false; second, that a premise can justify a conclusion even though the premise is false.

Some JTB advocates grant the first assumption but deny the second. They usually concede the first assumption to protect the respectability of non-deductive inference. The argument is that if evidence e can nondeductively justify the conclusion c, then it must be possible for c to be justified and yet false, since e does not entail c. Although the assumption is sound, the argument as it stands fails to establish it. But let us set this point aside for the moment.
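The disjunction case can be spelled out step by step; the formalization below is our sketch of the example in the abstract, with J(·) as an assumed justification operator.

```latex
% A sketch of Gettier's disjunction case. J(x) abbreviates "the subject
% is justified in believing x" (notation assumed here, not Gettier's).
\begin{align*}
& J(p), \quad \neg p          && \text{justified but false premise} \\
& p \vdash p \lor r           && \text{disjunction introduction} \\
& J(p \lor r)                 && \text{justification transmits over deduction} \\
& r \text{ is true}           && \text{but the subject has no justification for } r \\
& \therefore\ p \lor r        && \text{a justified true belief, yet intuitively not knowledge}
\end{align*}
```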


10.29007/wkvm, 2018
Author(s):  
Florian Craciun
Chenguang Luo
Guanhua He
Shengchao Qin
Wei-Ngan Chin

We study automated verification of pointer safety for heap-manipulating imperative programs with unknown procedure calls or code pointers. Given the specification of a procedure whose body contains calls to an unknown procedure, we try to infer possible specifications for the unknown procedure from its calling contexts. We employ a forward shape analysis with separation logic and an abductive inference mechanism to synthesize both pre- and postconditions for the unknown procedure. The inferred specification is a partial specification of the unknown procedure, and it is therefore subject to later verification once the code or the complete specification of the unknown procedure becomes available. Our inferred specifications can also be used for program understanding.
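The abductive step can be pictured with the standard separation-logic abduction judgment; the sketch below follows the general bi-abduction formulation rather than this paper's exact rules, and the heap assertions in the example are invented for illustration.

```latex
% Abduction in separation logic (illustrative, in the style of
% bi-abduction): given the symbolic heap \Delta computed by the forward
% shape analysis at the call site, find a missing assumption
% (anti-frame) A and a leftover frame F such that
\[
\Delta \ast [A] \;\vdash\; P \ast [F]
\]
% where P is what the unknown callee is observed to need. For example,
% if the caller's state is x \mapsto 3 and the code around the call
% dereferences y, one can abduce A = y \mapsto \_ :
\[
x \mapsto 3 \ast [\,y \mapsto \_\,] \;\vdash\; y \mapsto \_ \ast [\,x \mapsto 3\,]
\]
% Abduced assertions feed the candidate precondition of the unknown
% procedure; its postcondition is synthesized as the analysis continues.
```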


Author(s):  
Yazmín Ibáñez-García
Víctor Gutiérrez-Basulto
Steven Schockaert

Description logics (DLs) are standard knowledge representation languages for modelling ontologies, i.e. knowledge about concepts and the relations between them. Unfortunately, DL ontologies are difficult to learn from data and time-consuming to encode manually. As a result, ontologies for broad domains are almost inevitably incomplete. In recent years, several data-driven approaches have been proposed for automatically extending such ontologies. One family of methods relies on characterizations of concepts that are derived from text descriptions. While such characterizations do not capture ontological knowledge directly, they encode information about the similarity between different concepts that can be exploited for filling in the gaps in existing ontologies. To this end, several inductive inference mechanisms have already been proposed, but these have been defined and used in a heuristic fashion. In this paper, we instead propose an inductive inference mechanism which is based on a clear model-theoretic semantics, and can thus be tightly integrated with standard deductive reasoning. We particularly focus on interpolation, a powerful commonsense reasoning mechanism which is closely related to cognitive models of category-based induction. Apart from the formalization of the underlying semantics, as our main technical contribution we provide computational complexity bounds for reasoning in EL with this interpolation mechanism.
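As a rough illustration of the interpolation pattern (our example, not the paper's): when text-derived similarity places a concept "between" two others, properties shared by the outer concepts are propagated to the one in between.

```latex
% Category-based induction via interpolation (illustrative example).
% Suppose text descriptions place Mule conceptually between Horse and
% Donkey, and the ontology contains the axioms
\[
\textit{Horse} \sqsubseteq \textit{Herbivore}, \qquad
\textit{Donkey} \sqsubseteq \textit{Herbivore}
\]
% Interpolation then licenses the plausible (defeasible) conclusion
\[
\textit{Mule} \sqsubseteq \textit{Herbivore}
\]
```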


2021, Vol. 5 (OOPSLA), pp. 1-29
Author(s):  
Zhe Zhou
Robert Dickerson
Benjamin Delaware
Suresh Jagannathan

Programmers often leverage data structure libraries that provide useful and reusable abstractions. Modular verification of programs that make use of these libraries naturally relies on specifications that capture important properties about how the library expects these data structures to be accessed and manipulated. However, these specifications are often missing or incomplete, making it hard for clients to be confident they are using the library safely. When library source code is also unavailable, as is often the case, the challenge of inferring meaningful specifications is further exacerbated. In this paper, we present a novel data-driven abductive inference mechanism that infers specifications for library methods sufficient to enable verification of the library's clients. Our technique combines a data-driven, learning-based framework to postulate candidate specifications with SMT-provided counterexamples to refine these candidates, taking special care to prevent generating specifications that overfit to sampled tests. The resulting specifications form a minimal set of requirements on the behavior of library implementations that ensures the safety of a particular client program. Our solution thus provides a new multi-abduction procedure for precise specification inference of data structure libraries, guided by client-side verification tasks. Experimental results on a wide range of realistic OCaml data structure programs demonstrate the effectiveness of the approach.
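A small hypothetical client illustrates the setting; the module, function names, and the candidate specification in the closing comment are all our assumptions, not artifacts of the paper.

```ocaml
(* Hypothetical client of an unverified stack library (illustrative;
   the names and the spec sketched below are assumptions, not the
   paper's artifacts). *)
module type STACK = sig
  type 'a t
  val empty : 'a t
  val push : 'a -> 'a t -> 'a t
  val pop : 'a t -> 'a * 'a t   (* library method with a missing spec *)
end

module Client (S : STACK) = struct
  (* Safety property the client must satisfy: this function never
     fails, for any input list. *)
  let last_pushed (xs : int list) : int option =
    match xs with
    | [] -> None
    | _ ->
      (* Push every element, then pop once. *)
      let s = List.fold_left (fun s x -> S.push x s) S.empty xs in
      let (top, _) = S.pop s in   (* safe only if [s] is non-empty *)
      Some top
end

(* A data-driven abductive pass, as described above, would postulate a
   candidate specification such as
     pop : requires (not (is_empty s)), ensures (result = last push)
   strong enough to discharge the client's verification task, then
   refine it against SMT counterexamples so it does not overfit to the
   sampled tests. *)
```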


Author(s):  
Arkadij Zakrevskij

This chapter extends the theory of Boolean functions, especially the representation of these functions in disjunctive or conjunctive normal forms, to the case of finite predicates. Finite predicates are thereby decomposed into binary units, which correspond to components of Boolean vectors and matrices, and are represented as combinations of these units. Further, the main concepts used for solving pattern recognition problems are defined, namely world model, data, and knowledge. Data present information about the existence of objects with definite combinations of properties, while knowledge presents information about the existence of regular relationships between attributes. These relationships prohibit some combinations of properties; in this way, knowledge provides information about the non-existence of objects with certain prohibited combinations of attribute values. A special form of regularity representation, called implicative regularities, is introduced. Any implicative regularity generates an empty interval in the Boolean space of object descriptions that does not contradict the data, so the plausibility of induced implicative regularities must be evaluated. The pattern recognition problem is solved in two steps: first, regularities are extracted from the database (inductive inference); second, the obtained knowledge is used for object recognition (deductive inference).
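The notion of an implicative regularity and its empty interval can be made concrete with a small example of our own (the attributes and the particular regularity are invented for illustration).

```latex
% An implicative regularity over Boolean attributes x_1, ..., x_n
% (illustrative example): "every object with properties x_1 and x_2
% also has property x_3":
\[
x_1 \land x_2 \rightarrow x_3
\]
% Equivalently, the value combination (x_1, x_2, x_3) = (1, 1, 0) is
% prohibited, so the interval of the Boolean space fixed by the ternary
% vector (1, 1, 0, -, \dots, -) must contain no object descriptions.
% Finding such empty intervals in the data is the inductive step;
% applying the prohibitions to restrict the possible attribute values
% of a new object is the deductive recognition step.
```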


1989, Vol. 28 (02), pp. 69-77
Author(s):  
R. Haux

Abstract: Expert systems in medicine are frequently restricted to assisting the physician in deriving a patient-specific diagnosis and therapy proposal. In many cases, however, there is a clinical need to use these patient data for other purposes as well. The intention of this paper is to show how and to what extent patient data in expert systems can additionally be used for creating clinical registries and for statistical data analysis. First, the pitfalls of goal-oriented mechanisms for the multiple usability of data are shown by means of an example. Then a data acquisition and inference mechanism is proposed which includes a procedure for controlling selection bias, the so-called knowledge-based attribute selection. The functional and architectural views of expert systems suitable for the multiple usability of patient data are outlined, first in general and then by means of an application example. Finally, the ideas presented are discussed and compared with related approaches.


Author(s):  
Jacob Stegenga

This chapter introduces the book, describes the key arguments of each chapter, and summarizes the master argument for medical nihilism. It offers a brief survey of prominent articulations of medical nihilism throughout history, and describes the contemporary evidence-based medicine movement, to set the stage for the skeptical arguments. The main arguments are based on an analysis of the concepts of disease and effectiveness, the malleability of methods in medical research, and widespread empirical findings which suggest that many medical interventions are barely effective. The chapter-level arguments are unified by our best formal theory of inductive inference in what is called the master argument for medical nihilism. The book closes by considering what medical nihilism entails for medical practice, research, and regulation.
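The "best formal theory of inductive inference" invoked here is Bayesian confirmation theory; the following gloss is our sketch of how the master argument runs, not the book's own notation.

```latex
% A Bayesian gloss on the master argument (our sketch). Let H be the
% hypothesis "this medical intervention is effective" and E the
% available evidence; Bayes' theorem gives
\[
P(H \mid E) \;=\; \frac{P(E \mid H)\,P(H)}{P(E)}
\]
% The chapter-level arguments supply the estimates: the prior P(H) is
% low (few candidate interventions prove effective), and malleable
% research methods make favorable evidence E fairly likely even when H
% is false, keeping the likelihood ratio modest. Both pressures keep
% the posterior P(H | E) low, which is the nihilist conclusion.
```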

