logical inference
Recently Published Documents


TOTAL DOCUMENTS

256
(FIVE YEARS 66)

H-INDEX

18
(FIVE YEARS 2)

2021 ◽  
Vol 11 (4) ◽  
pp. 521-532
Author(s):  
A.A. Zuenko ◽  

Within Constraint Programming technology, so-called table constraints, such as typical tables, compressed tables, smart tables, segmented tables, etc., are widely used. They can represent any other type of constraint, and table constraint propagation algorithms (logical inference on constraints) eliminate many "redundant" values from the domains of variables while having low computational complexity. In previous studies, the author proposed dividing smart tables into structures of C- and D-types. The generally accepted methodology for solving constraint satisfaction problems is the combined application of constraint propagation methods and backtracking depth-first search. In this study, it is proposed to integrate breadth-first search methods with the author's method of table constraint propagation. D-type smart tables are represented as a join of several orthogonalized C-type smart tables. At each search step, a pair of C-type smart tables is selected and joined, after which the constraints are propagated. To determine the order of joining orthogonalized smart tables at each step of the search, a specialized heuristic is used that reduces the search space by taking subsequent calculations into account. During constraint propagation, computation is accelerated by applying the developed reduction rules for the case of C-type smart tables. The developed hybrid method finds all solutions of constraint satisfaction problems modeled with one or several D-type smart tables, without decomposing table constraints into elementary tuples.
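As a loose illustration (not the paper's algorithm, which operates on compressed C- and D-type smart tables), the basic effect of table constraint propagation, pruning domain values that no allowed tuple supports, can be sketched in Python:

```python
# Illustrative sketch: generalized arc consistency on an ordinary
# (uncompressed) table constraint. Propagation deletes every domain value
# that appears in no still-valid allowed tuple -- the effect that smart-table
# algorithms achieve with far less work on compressed representations.

def propagate_table(domains, table):
    """domains: dict var -> set of values; table: list of dicts var -> value."""
    changed = True
    while changed:
        changed = False
        # Keep only tuples whose every value is still in its variable's domain.
        table = [t for t in table if all(t[v] in domains[v] for v in domains)]
        for v in domains:
            supported = {t[v] for t in table}
            if supported < domains[v]:          # some value lost all support
                domains[v] = supported
                changed = True
    return domains

doms = {"x": {1, 2, 3}, "y": {1, 2, 3}}
allowed = [{"x": 1, "y": 2}, {"x": 2, "y": 3}, {"x": 3, "y": 5}]
print(propagate_table(doms, allowed))
# the tuple with y=5 is dropped, so x=3 loses support and is pruned
```

Real solvers interleave exactly this kind of pruning with the search steps the abstract describes, so each join is followed by a propagation pass.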


2021 ◽  
Vol - (4) ◽  
pp. 142-152
Author(s):  
Nataliia Viatkina

The phenomenon of memory is considered as a component implicitly present in the process of information communication. A short typology of the kinds of memory that form a referential field around so-called semantic memory is given. Through the approaches of Yu. Lotman and R. Jakobson, the classical notion of time is considered through the relationship "Past-Present-Future", which is closely related to the problems of memory. The focus is on how memory could be considered within logic and by means of logic. As one way of addressing this question, it is proposed to apply the tools of tense logic through an analysis of the works of Anatoly Ishmuratov (1946–2017), a prominent Ukrainian logician. The classifications of tenses by O. Jespersen, H. Reichenbach, and W. Bull are considered. The subjective and objective axes of orientation, which form the basis of calendars, charts, and scales as fragments of time, are analyzed. According to Ishmuratov, these instruments (schemes, diagrams, etc.) can be considered languages. The possibility of linguistic objectification of meaning determines the relation of logical inference, and thus the structuring of semantic areas, which are memories, by means of the language of logic and in accordance with its structures. Through the study of the logical and cognitive conditions of action, A. Ishmuratov continued to develop the ideas of tense logic and their application to explaining the psychological perception of objective time. He constructed a scheme of the semantic connections of memory as a mental act that reproduces the life path of the individual; he distinguished between memories and "pseudo-memories", which together influence the reassessment of past events, shape the individual's experience, and shape his ability to construct alternatives to the future and reason about them. A special place in A. Ishmuratov's research is occupied by his explication of temporal three-valued logic and the application of temporal modalities to the analysis of so-called transient states. Further study of such approaches could help provide a rational explication of memory, testimonies, and reminiscences of past events, and could yield interesting results.


Author(s):  
Lev Raskin ◽  
Larysa Sukhomlyn ◽  
Yuriy Ivanchikhin ◽  
Roman Korsun

The subject of consideration is the task of identifying the states of an object from the results of fuzzy measurements of a set of controlled parameters. The fuzziness of the initial data further complicates the task because of the resulting unequal informativeness of the controlled parameters. The aim of the study is to develop a method for identifying the states of a fuzzy object using a fuzzy logical inference mechanism that takes into account possible differences in the informativeness of its controlled parameters. The method is based on a modification of the known mathematical apparatus for building an expert system of artificial intelligence, achieved by solving two subtasks. The first is the development of a method for assessing the informativeness of controlled parameters. The second is the development of a method for constructing a mechanism of logical inference about the state of an object from the results of measuring controlled parameters, which provides identification. For the first problem, a method is proposed for estimating the informativeness of parameters that is free from the known disadvantages of the traditional Kullback informativeness measure. In implementing the method, it is assumed that the range of possible values of each parameter is divided into subranges corresponding to the possible states of the object. For each of these states, a membership function for the fuzzy values of the corresponding parameter is defined. The problem of estimating the informativeness of a parameter is then solved correctly both when the parameter is measured exactly and when it is given fuzzily by its membership function. The fundamental difference between the proposed inference mechanism and the traditional one is the refusal to use a production rule base, which makes the computational procedure practically independent of the dimension of the task.
To solve the main problem of identifying states, a non-production approach is proposed whose computational complexity practically does not depend on the dimension of the problem (the product of the number of possible states by the number of controlled parameters). Results. The inference mechanism generates a probability distribution over the states of the system. It uses a set of membership functions of each parameter over the range of its possible values for each state of the object, as well as a set of membership functions for the fuzzy measurement results of each parameter. Conclusions. Thus, a method for identifying the states of fuzzy objects with a fuzzy non-production inference mechanism is proposed, the complexity of which does not depend on the dimension of the task.
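A minimal sketch of the general idea, with hypothetical membership functions and a simple product aggregation rather than the authors' exact procedure, shows how a rule-free inference mechanism can stay linear in the number of states times the number of parameters:

```python
# Simplified sketch (not the authors' exact method): identify an object's
# state from measurements without a production-rule base. Each state supplies
# one membership function per parameter; a measurement's compatibility with a
# state is the product of memberships, normalized into a probability-like
# distribution. Cost: O(states x parameters), independent of any rule base.

def identify(memberships, measurement):
    """memberships: dict state -> list of functions mu_i(x) in [0, 1];
    measurement: list of crisp parameter values."""
    score = {s: 1.0 for s in memberships}
    for s, mus in memberships.items():
        for mu, x in zip(mus, measurement):
            score[s] *= mu(x)
    total = sum(score.values()) or 1.0
    return {s: v / total for s, v in score.items()}

# Hypothetical example: two states, one parameter, triangular memberships
# centred on 0 ("normal") and 10 ("faulty").
mu_low  = lambda x: max(0.0, 1 - abs(x) / 5)
mu_high = lambda x: max(0.0, 1 - abs(x - 10) / 5)
dist = identify({"normal": [mu_low], "faulty": [mu_high]}, [2.0])
print(dist)   # "normal" dominates for a reading near 0
```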


2021 ◽  
Vol 1 (2) ◽  
pp. 247-252
Author(s):  
AHMAD SYUGIYANTO

This study aims to determine the percentage of generic science skills among prospective biology education teachers at FKIP Uhamka. The subjects were 6th-semester student teachers in the 2018/2019 academic year, working with blood coagulation practicum material. The research type is descriptive analysis, using the test method. The sample consisted of 3 classes with a total of 58 students from the class of 2016, obtained using the saturated sampling technique. The data collection technique was an essay test. The results were: direct observation 98.83% (very good), symbolic language 38.79% (very poor), modeling 90.95% (very good), logical inference 39.51% (very poor), and logical framework 94.90% (very good). Analysis of the data gives an overall average of 72.6%, in the medium category. This is because some students were actively involved in the ongoing practicum process while others remained passive, yielding moderate overall results.


2021 ◽  
Vol 5 (11) ◽  
pp. 1540
Author(s):  
Putri Ismayana ◽  
Gunadi Harry Sulistyo ◽  
Primardiana Hermilia Wijayati

<p><strong>Abstract: </strong>This study focuses on developing a prototype of an assessment program for reading comprehension based on computerized dynamic assessment. The reading skills to be measured include identifying the topic, the main idea, details of the text, logical inference, an assumption, word meaning and synonyms, and a conclusion. The assessment consists of two types of tests: multiple choice and cloze procedures. These tests contain prompts, the characteristic feature of dynamic assessment. The participants were 316 eleventh-grade students of vocational high schools. The results reveal that the product was received positively by most of the subjects despite their unfamiliarity with this kind of assessment program. This indicates that the developed product was acceptable to eleventh-grade students in vocational high schools.</p>


2021 ◽  
Author(s):  
Nicoló Cesana-Arlotti

What are the developmental foundations of logical thought? Here we find that 2.5-year-old toddlers (N=36) can reason using disjunctive inference (i.e., A OR B; NOT A; THEREFORE B) across three contexts, suggesting that domain-general logical reasoning may be in place from as early as the third year of life.
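The disjunctive inference named in the abstract can be checked mechanically; the toy Python snippet below verifies by brute force over truth assignments that the rule is truth-preserving:

```python
# Toy check (illustrative only) that disjunctive inference -- A OR B, NOT A,
# therefore B -- is valid: in every valuation where both premises hold,
# the conclusion holds as well.
from itertools import product

valid = all(b                       # conclusion: B
            for a, b in product([True, False], repeat=2)
            if (a or b) and not a)  # premises: A OR B, NOT A
print(valid)  # True: true premises can never yield a false conclusion
```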


2021 ◽  
Author(s):  
Olivia Guest ◽  
Andrea E. Martin

In the cognitive, computational, and neural sciences, we often reason about what models (viz., formal and/or computational) represent, learn, or "know", as well as what algorithm they instantiate. The putative goal of such reasoning is to generalize claims about the model in question to claims about the mind and brain. This reasoning process typically presents as inference about the representations, processes, or algorithms the human mind and brain instantiate. Such inference is often based on a model's performance on a task, and whether that performance approximates human behaviour or brain activity. The model in question is often an artificial neural network (ANN) model, though the problems we discuss generalize to all reasoning over models. Arguments typically take the form "the brain does what the ANN does because the ANN reproduced the pattern seen in brain activity" or "cognition works this way because the ANN learned to approximate task performance." The argument then concludes that models achieve this outcome by doing what people do or having the capacities people have. At first blush, this might appear as a form of modus ponens, a valid deductive logical inference rule. However, as we explain in this article, this is not the case, and thus this form of argument eventually results in affirming the consequent, a logical or inferential fallacy. We discuss what this means broadly for research in cognitive science, neuroscience, and psychology; what it means for models when they lose the ability to mediate between theory and data in a meaningful way; and what it means for the logic, the metatheoretical calculus, that our fields deploy in high-level scientific inference.
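The contrast the authors draw can be made concrete with a brute-force validity check: modus ponens holds in every valuation, while affirming the consequent fails in at least one (P false, Q true).

```python
# Illustrative check of the two argument forms discussed in the abstract.
# An argument form is valid iff the conclusion is true in every valuation
# that satisfies all the premises.
from itertools import product

def implies(p, q):
    return (not p) or q

def valid(premises, conclusion):
    return all(conclusion(p, q)
               for p, q in product([True, False], repeat=2)
               if all(pr(p, q) for pr in premises))

# Modus ponens: P -> Q, P, therefore Q.
modus_ponens = valid([implies, lambda p, q: p], lambda p, q: q)
# Affirming the consequent: P -> Q, Q, therefore P.
affirm_consequent = valid([implies, lambda p, q: q], lambda p, q: p)
print(modus_ponens, affirm_consequent)  # True False
```

The counterexample for the second form (P false, Q true) mirrors the article's point: a model reproducing brain-like output does not license the conclusion that it works the way the brain does.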


Informatics ◽  
2021 ◽  
Vol 18 (3) ◽  
pp. 97-105
Author(s):  
A. M. Sobol ◽  
E. I. Kozlova ◽  
Yu. A. Chernyavsky

There are three main families of inference algorithms in first-order logic: direct (forward) inference and its application to deductive databases and production systems; backward inference procedures and logic programming systems; and theorem-proving systems based on the resolution method. When solving specific problems, the most effective algorithms are those that cover exactly the facts and axioms that must be taken into account in the process of inference. An example is considered in which it is necessary to prove a person's guilt in a murder. On the basis of statements, a knowledge base of expressions is formed, from which an expression of first-order logic is compiled and proved using direct logical inference. A proof of the reasoning obtained by direct inference, using a proof tree, is given. However, direct inference performs all admissible steps of logical inference based on all known facts. The article also considers a method based on resolution when implementing backward inference, taking into account the expression obtained in the direct inference. This expression is converted into conjunctive normal form using the laws of Boolean algebra and is proved by the elimination of events using the conjunction operation.
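Direct (forward) inference of the kind described can be sketched as forward chaining over Horn clauses; the facts and the rule below are hypothetical placeholders in the spirit of the classic "criminal" textbook example, not the article's actual knowledge base.

```python
# Illustrative forward (direct) chaining over ground Horn clauses: start
# from the known facts and fire rules until a fixed point, then check
# whether the goal has been derived.

def forward_chain(facts, rules):
    """rules: list of (premises, conclusion) pairs over ground atoms."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if conclusion not in facts and all(p in facts for p in premises):
                facts.add(conclusion)
                changed = True
    return facts

# Hypothetical knowledge base (placeholder atoms, already instantiated).
facts = {"american(west)", "weapon(m1)", "sells(west,m1,nono)", "hostile(nono)"}
rules = [(["american(west)", "weapon(m1)",
           "sells(west,m1,nono)", "hostile(nono)"], "criminal(west)")]
derived = forward_chain(facts, rules)
print("criminal(west)" in derived)  # True
```

As the abstract notes, this strategy derives every admissible consequence of all known facts, which is why backward inference and resolution are attractive when only one goal matters.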


2021 ◽  
Author(s):  
Marjolein Deryck ◽  
Nuno Comenda ◽  
Bart Coppens ◽  
Joost Vennekens

This paper presents an application that we developed to assist users with the creation of an investment profile for the selection of financial assets. It consists of a natural language interface, an automatic translation to a declarative FO(.) knowledge base, and the IDP reasoning engine with multiple forms of logical inference. The application speeds up the investment profile creation process and reduces the considerable inherent operational risk linked to the creation of investment profiles.


2021 ◽  
Author(s):  
Cheng Yuanyuan

Abstract: Purpose: To study the effect of applying dimensionality reduction in logical judgment (logical reasoning, logical inference) programs. Methods: Enumeration and dimensionality reduction methods are used to solve logical judgment problems; the effect of the two methods is illustrated through a case study. Results: For logical judgment problems, using the enumeration method to find the best answer is a comprehensive and fundamental approach, but it is computationally intensive and inefficient. In contrast to the enumeration method's parallel treatment of the known conditions, the application of dimensionality-reduction thinking is built on fully mining the information for feature extraction and feature selection. Conclusions: The dimensionality reduction method was applied to logical judgment problems; on the basis of fully mining the information, the dimensionality reduction principles of statistics were applied to stratify and merge variables with the same or similar characteristics, streamlining the variables, simplifying the logical judgment steps, reducing computation, and improving algorithm efficiency.
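The enumeration method that the paper takes as a baseline can be sketched as an exhaustive search over all truth assignments; the puzzle below is an invented example, not one from the paper.

```python
# Baseline enumeration method for a logical judgment problem: try every
# assignment and keep those satisfying all constraints. Made-up puzzle:
# three people, exactly one tells the truth.
from itertools import product

people = ["A", "B", "C"]
solutions = []
for truthful in product([True, False], repeat=3):
    a, b, c = truthful
    # Statements: A says "B lies"; B says "C lies"; C says "A and B both lie".
    # Each statement is true exactly when its speaker is truthful.
    ok = (a == (not b)) and (b == (not c)) and (c == ((not a) and (not b)))
    if ok and sum(truthful) == 1:           # exactly one truth-teller
        solutions.append(dict(zip(people, truthful)))
print(solutions)  # only B tells the truth
```

The paper's dimensionality-reduction idea amounts to collapsing variables with identical roles before this kind of search, so the assignment space shrinks from exponential in the raw variable count to exponential in the reduced one.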

