Decision Rules Derived from Optimal Decision Trees with Hypotheses

Entropy ◽  
2021 ◽  
Vol 23 (12) ◽  
pp. 1641
Author(s):  
Mohammad Azad ◽  
Igor Chikalov ◽  
Shahid Hussain ◽  
Mikhail Moshkov ◽  
Beata Zielosko

Conventional decision trees use queries, each of which is based on one attribute. In this study, we also examine decision trees that handle additional queries based on hypotheses. This kind of query is similar to the equivalence queries considered in exact learning. Earlier, we designed dynamic programming algorithms for computing the minimum depth and the minimum number of internal nodes of decision trees with hypotheses. The modifications of these algorithms considered in the present paper permit us to build decision trees with hypotheses that are optimal with respect to the depth or to the number of internal nodes. We compare the length and coverage of decision rules extracted from optimal decision trees with hypotheses against decision rules extracted from optimal conventional decision trees, in order to determine which are preferable as a tool for representing information. To this end, we conduct computer experiments on various decision tables from the UCI Machine Learning Repository. In addition, we consider decision tables for randomly generated Boolean functions. The collected results show that, in many cases, the decision rules derived from decision trees with hypotheses are better than the rules extracted from conventional decision trees.
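
As an illustration only (not the authors' implementation), the following sketch extracts decision rules from a conventional decision tree given as nested dictionaries and measures the two quantities compared in the paper, rule length and coverage, on a toy decision table. The tree format, the toy table, and the function names are assumptions, and hypothesis queries are not modeled.

```python
# A minimal sketch: extracting decision rules from a tree and measuring their
# length and coverage on a decision table. Tree format and table are assumptions.

def extract_rules(node, conditions=None):
    """Walk a tree given as nested dicts and return (conditions, decision) rules."""
    conditions = conditions or []
    if "decision" in node:                              # leaf: emit one rule
        return [(conditions, node["decision"])]
    rules = []
    for value, child in node["branches"].items():       # internal node: one query
        rules += extract_rules(child, conditions + [(node["attribute"], value)])
    return rules

def rule_stats(rule, table):
    """Length = number of conditions; coverage = rows satisfying all conditions."""
    conditions, _ = rule
    covered = [row for row in table
               if all(row[attr] == val for attr, val in conditions)]
    return len(conditions), len(covered)

# toy decision table: each row maps attribute name -> value, "d" is the decision
table = [{"a": 0, "b": 0, "d": 0}, {"a": 0, "b": 1, "d": 1},
         {"a": 1, "b": 0, "d": 1}, {"a": 1, "b": 1, "d": 1}]
tree = {"attribute": "a",
        "branches": {0: {"attribute": "b",
                         "branches": {0: {"decision": 0}, 1: {"decision": 1}}},
                     1: {"decision": 1}}}

for rule in extract_rules(tree):
    print(rule, rule_stats(rule, table))
```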

Electronics ◽  
2021 ◽  
Vol 10 (13) ◽  
pp. 1580
Author(s):  
Mohammad Azad ◽  
Igor Chikalov ◽  
Shahid Hussain ◽  
Mikhail Moshkov

In this paper, we consider decision trees that use two types of queries: queries based on one attribute each, and queries based on hypotheses about the values of all attributes. Such decision trees are similar to the ones studied in exact learning, where membership and equivalence queries are allowed. We present dynamic programming algorithms for minimizing the depth and the number of nodes of the above decision trees, and discuss results of computer experiments on various data sets and randomly generated Boolean functions. Decision trees with hypotheses generally have lower complexity, i.e., they are more understandable and more suitable as a means for knowledge representation.
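
For intuition about the dynamic programming approach, here is a minimal sketch that computes the minimum depth of a conventional decision tree by memoized recursion over subtables; it covers attribute queries only, omits the hypothesis queries of the paper, and the toy decision table is an assumption.

```python
# A minimal sketch, not the paper's algorithm: dynamic programming over
# subtables to compute the minimum depth of a conventional decision tree.
from functools import lru_cache

ROWS = (((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0))  # (attributes, decision)
N_ATTR = 2

@lru_cache(maxsize=None)
def min_depth(row_ids):
    rows = [ROWS[i] for i in row_ids]
    if len({d for _, d in rows}) <= 1:           # degenerate subtable: one decision
        return 0
    best = float("inf")
    for a in range(N_ATTR):                      # try querying each attribute
        groups = {}
        for i in row_ids:
            groups.setdefault(ROWS[i][0][a], []).append(i)
        if len(groups) < 2:                      # attribute does not split the subtable
            continue
        best = min(best, 1 + max(min_depth(tuple(g)) for g in groups.values()))
    return best

print(min_depth(tuple(range(len(ROWS)))))        # minimum depth for the full table
```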


2019 ◽  
Vol 5 (1) ◽  
pp. 1
Author(s):  
Saha Dauji

Single angle struts are used as compression members in many structures, including roof trusses and transmission towers. The exact analysis and design of such members is challenging due to various uncertainties, such as the end fixity or the eccentricity of the applied loads. The design standards provide guidelines that have been found to be inaccurate on the conservative side. Artificial Neural Networks (ANN) trained with experimental data have been reported in the literature to perform better than the design standards. However, practical implementation of ANN poses a problem, as both the trained network and the know-how regarding its application must be accessible to practitioners. With another data-driven tool, Decision Trees (DT), practical application is easier because decision-based rules are generated, which designers can readily comprehend and implement. Hence, in this paper, DT was explored for evaluating the capacity of eccentrically loaded single angle struts; it was found to be robust, yielding accuracy comparable to ANN and better than the design code (AISC). This has enormous potential for easy and straightforward implementation by practicing engineers through the logic-based decision rules, which can readily be programmed on a computer. For this application, using dimensionless ratios as inputs for the development of the DT was found to yield better results than using the original variables as inputs.
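
A minimal sketch of the input-preparation idea, assuming synthetic data and illustrative feature names rather than the paper's experimental dataset: a regression tree is trained on dimensionless ratios (e.g., slenderness L/r and eccentricity ratio e/r) instead of the raw dimensional variables, and its readable if-then rules are printed.

```python
# Illustrative only: feature names, the synthetic data, and the capacity formula
# are assumptions, not the paper's dataset or model.
import numpy as np
from sklearn.tree import DecisionTreeRegressor, export_text

rng = np.random.default_rng(0)
L = rng.uniform(1.0, 4.0, 200)        # member length, m
r = rng.uniform(0.01, 0.04, 200)      # radius of gyration, m
e = rng.uniform(0.0, 0.02, 200)       # load eccentricity, m
capacity = 500.0 / (1.0 + (L / r) / 100.0 + 5.0 * e / r)   # synthetic capacity, kN

X_ratios = np.column_stack([L / r, e / r])     # dimensionless inputs
tree = DecisionTreeRegressor(max_depth=3, random_state=0).fit(X_ratios, capacity)

# the fitted tree is a small set of readable if-then rules for designers
print(export_text(tree, feature_names=["L_over_r", "e_over_r"]))
```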


Entropy ◽  
2021 ◽  
Vol 23 (7) ◽  
pp. 808
Author(s):  
Mohammad Azad ◽  
Igor Chikalov ◽  
Shahid Hussain ◽  
Mikhail Moshkov

In this paper, we consider decision trees that use both conventional queries, based on one attribute each, and queries based on hypotheses about the values of all attributes. Such decision trees are similar to those studied in exact learning, where membership and equivalence queries are allowed. We present a greedy algorithm based on entropy for the construction of the above decision trees and discuss the results of computer experiments on various data sets and randomly generated Boolean functions.
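
As a rough illustration of entropy-based greedy selection (ID3-style), not the paper's exact algorithm and without hypothesis queries, the following sketch picks the attribute whose split minimizes the weighted entropy of the decisions in a toy decision table.

```python
# Minimal sketch of greedy, entropy-driven attribute selection; the table is a
# toy assumption and hypothesis queries are not reproduced here.
from collections import Counter
from math import log2

def entropy(rows):
    counts = Counter(d for _, d in rows)
    total = len(rows)
    return -sum(c / total * log2(c / total) for c in counts.values())

def best_attribute(rows, n_attr):
    """Pick the attribute whose split minimizes the weighted entropy."""
    def split_entropy(a):
        groups = {}
        for attrs, d in rows:
            groups.setdefault(attrs[a], []).append((attrs, d))
        return sum(len(g) / len(rows) * entropy(g) for g in groups.values())
    return min(range(n_attr), key=split_entropy)

rows = [((0, 0), 0), ((0, 1), 0), ((1, 0), 1), ((1, 1), 1)]
print(best_attribute(rows, n_attr=2))   # attribute 0 fully determines the decision
```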


Author(s):  
Costantino Grana ◽  
Manuela Montangero ◽  
Daniele Borghesani ◽  
Rita Cucchiara

2000 ◽  
Vol 78 (2) ◽  
pp. 320-326 ◽  
Author(s):  
Frank AM Tuyttens

The algebraic relationships, underlying assumptions, and performance of the recently proposed closed-subpopulation method are compared with those of other commonly used methods for estimating the size of animal populations from mark-recapture records. In its basic format the closed-subpopulation method is similar to the Manly-Parr method and less restrictive than the Jolly-Seber method. Computer simulations indicate that the accuracy and precision of the population estimators generated by the basic closed-subpopulation method are almost comparable to those generated by the Jolly-Seber method, and generally better than those of the minimum-number-alive method. The performance of all these methods depends on the capture probability, the number of previous and subsequent trapping occasions, and whether the population is demographically closed or open. Violation of the assumption of equal catchability causes a negative bias that is more pronounced for the closed-subpopulation and Jolly-Seber estimators than for the minimum-number-alive. The closed-subpopulation method provides a simple and flexible framework for illustrating that the precision and accuracy of population-size estimates can be improved by incorporating evidence, other than mark-recapture data, of the presence of recognisable individuals in the population (from radiotelemetry, mortality records, or sightings, for example) and by exploiting specific characteristics of the population concerned.
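
Purely as an illustration of one of the compared estimators (not the closed-subpopulation method itself), the sketch below computes the minimum-number-alive count from 0/1 capture histories; the data layout is an assumption.

```python
# Minimum-number-alive (MNA) sketch: an individual is counted as present at
# occasion t if it was caught at t, or caught both before and after t.
def minimum_number_alive(histories):
    n_occasions = len(histories[0])
    mna = []
    for t in range(n_occasions):
        alive = 0
        for h in histories:
            caught_now = h[t] == 1
            caught_before = any(h[:t])
            caught_after = any(h[t + 1:])
            if caught_now or (caught_before and caught_after):
                alive += 1
        mna.append(alive)
    return mna

histories = [                      # toy capture histories for 4 individuals
    [1, 0, 1, 0],                  # caught at occasions 1 and 3 -> counted at 2
    [0, 1, 1, 1],
    [1, 0, 0, 0],
    [0, 0, 1, 1],
]
print(minimum_number_alive(histories))
```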


2006 ◽  
Vol 3 (2) ◽  
pp. 57-72 ◽  
Author(s):  
Kristina Machova ◽  
Miroslav Puszta ◽  
Frantisek Barcak ◽  
Peter Bednar

In this paper, we present an improvement in the precision of classification algorithm results. Two different approaches are known: bagging and boosting. This paper describes a set of experiments with the bagging and boosting methods. We apply these methods to classification algorithms that generate decision trees. Results of performance tests focused on the use of the bagging and boosting methods in connection with binary decision trees are presented. We found the minimum number of decision trees that enables an improvement of the classification performed by the bagging and boosting methods. The tests were carried out using the Reuters-21578 collection of documents as well as documents from the Internet portal of the TV broadcasting company Markíza. A comparison of our results from testing the bagging and boosting algorithms is presented.
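
A minimal sketch of this kind of experiment, assuming synthetic data in place of the Reuters-21578 and Markíza document collections: the number of decision trees used by bagging and boosting is varied and the resulting test accuracy is reported.

```python
# Illustrative setup only, not the paper's experiments: synthetic data,
# arbitrary tree counts, and default scikit-learn ensembles.
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier, AdaBoostClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

for n_trees in (1, 5, 10, 25, 50):
    bag = BaggingClassifier(DecisionTreeClassifier(), n_estimators=n_trees,
                            random_state=0).fit(X_tr, y_tr)
    boost = AdaBoostClassifier(DecisionTreeClassifier(max_depth=1),
                               n_estimators=n_trees, random_state=0).fit(X_tr, y_tr)
    print(n_trees, round(bag.score(X_te, y_te), 3), round(boost.score(X_te, y_te), 3))
```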


Author(s):  
Rivo Stephano ◽  
Y Yuhandri

Bleeding during pregnancy is one of the most common complications experienced by pregnant women. Pregnant women's limited knowledge of the risks and dangers of bleeding during pregnancy, together with incorrect or delayed handling when bleeding occurs, is one of the factors leading to adverse outcomes: both the fetus and the pregnant woman can die from the bleeding. This study aims to determine the accuracy of diagnosing bleeding in pregnancy using the Forward Chaining method. The data processed in this study consisted of 20 records obtained from patient medical records and interviews with experts at RSKIA Sukma Bunda Payakumbuh. The processing stages consist of preparing the input data, determining the decision tables, creating the rules, running the tracking process, building the decision trees, and tracing the results. Testing of this method showed that 90% of patients experiencing bleeding in pregnancy were identified on the basis of the consultation results entered by the user. These test results show that the Forward Chaining method can diagnose bleeding in pregnancy quickly and accurately, and it can be recommended to help emergency room doctors diagnose bleeding in pregnancy.
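
A minimal sketch of forward chaining as an inference strategy; the symptom names and rules below are hypothetical illustrations, not the expert rules used in the study.

```python
# Forward chaining sketch: repeatedly fire rules whose conditions are all
# satisfied by the known facts until no new conclusion can be added.
# The rules and symptom names are hypothetical, for illustration only.
RULES = [
    ({"bleeding", "first_trimester", "cramping"}, "suspected_miscarriage"),
    ({"bleeding", "third_trimester", "painless"}, "suspected_placenta_previa"),
    ({"suspected_miscarriage"}, "refer_to_emergency_room"),
]

def forward_chain(facts, rules):
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(forward_chain({"bleeding", "first_trimester", "cramping"}, RULES))
```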


10.37236/1900 ◽  
2005 ◽  
Vol 12 (1) ◽  
Author(s):  
Jakob Jonsson

We consider topological aspects of decision trees on simplicial complexes, concentrating on how to use decision trees as a tool in topological combinatorics. By Robin Forman's discrete Morse theory, the number of evasive faces of a given dimension $i$ with respect to a decision tree on a simplicial complex is greater than or equal to the $i$th reduced Betti number (over any field) of the complex. Under certain favorable circumstances, a simplicial complex admits an "optimal" decision tree such that equality holds for each $i$; we may hence read off the homology directly from the tree. We provide a recursive definition of the class of semi-nonevasive simplicial complexes with this property. A certain generalization turns out to yield the class of semi-collapsible simplicial complexes that admit an optimal discrete Morse function in the analogous sense. In addition, we develop some elementary theory about semi-nonevasive and semi-collapsible complexes. Finally, we provide explicit optimal decision trees for several well-known simplicial complexes.
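
In notation introduced here for illustration (not taken verbatim from the paper), with $e_i(T)$ the number of evasive faces of dimension $i$ with respect to a decision tree $T$ on a simplicial complex $\Delta$, the inequality from discrete Morse theory described above reads:

```latex
% Sketch of the abstract's inequality; e_i(T) is notation introduced here.
\[
  e_i(T) \;\ge\; \tilde{\beta}_i(\Delta; \mathbb{F})
  \qquad \text{for every dimension } i \text{ and every field } \mathbb{F},
\]
\[
  \text{with } T \text{ called optimal when } e_i(T) = \tilde{\beta}_i(\Delta; \mathbb{F})
  \text{ holds for each } i .
\]
```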

