Measuring Quality of Decision Rules Through Ranking of Conditional Attributes

Author(s):  
Urszula Stańczyk
2012
pp. 163-186
Author(s):
Jirí Krupka
Miloslava Kašparová
Pavel Jirava
Jan Mandys

The chapter presents the problem of quality of life modeling in the Czech Republic based on classification methods. It compares two methodological approaches: the first uses the approach of the Institute of Sociology of the Academy of Sciences of the Czech Republic, while the second concerns a project of the civic association Team Initiative for Local Sustainable Development. On the basis of real data sets from the institute and the team initiative, the authors synthesized and analyzed quality of life classification models. They used decision tree algorithms to generate transparent decision rules and compared the classification results of the resulting trees. Classifier models based on the C5.0, CHAID, C&RT, and boosted C5.0 algorithms were proposed and analyzed. The designed classification models were created in Clementine.
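To make the rule-extraction step concrete, here is a minimal sketch of training a CART-style decision tree (scikit-learn's counterpart to the C&RT algorithm mentioned above) and printing its rules as text. The data, feature names, and labels are invented stand-ins for quality-of-life attributes; the chapter's actual Clementine models are not reproduced.

```python
# Minimal sketch: fit a CART-style decision tree and export its decision
# rules as text. Features and labels are hypothetical stand-ins for the
# quality-of-life survey attributes used in the chapter.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)
X = rng.random((200, 3))                   # e.g. income, health, environment scores
y = (X.sum(axis=1) > 1.5).astype(int)      # 1 = "good" quality of life (toy label)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
tree = DecisionTreeClassifier(max_depth=3).fit(X_train, y_train)

print("accuracy:", tree.score(X_test, y_test))
print(export_text(tree, feature_names=["income", "health", "environment"]))
```

The exported "if–then" paths are what makes tree classifiers attractive here: each leaf corresponds to a transparent decision rule.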


2020
Vol 12 (9)
pp. 3780
Author(s):
Karmen Pažek
Jernej Prišenk
Simon Bukovski
Boris Prevolšek
Črtomir Rozman

In this paper, the quality of the municipal waste sorting process in seven waste management centers in Slovenia was assessed using the qualitative multicriteria analysis (MCA) method DEX (Decision EXpert), implemented in the DEXi software, which is based on multicriteria decomposition of the problem and on utility functions in the form of "if–then" decision rules. The study was based on eight types of secondary raw materials. The main parameters used in the model were the quality of the secondary raw materials, the regularity of their delivery to recycling units based on the sorting efficiency, and the loading weight of the individual baled fractions transported for recycling. The final assessment shows "good" waste management service in centers A and D. Centers B, C, and F were rated "average". The "bad" rating was assigned to centers E and G.
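DEX aggregates qualitative attribute values through rule tables rather than numeric weights. The sketch below shows that idea with hypothetical attributes and rules; it is not the paper's actual DEXi model.

```python
# Minimal sketch of DEX-style qualitative aggregation: an "if-then" rule
# table maps combinations of child attribute values to a parent rating.
# Attribute names and rules are hypothetical, not the paper's DEXi model.
RULES = {
    ("high", "regular"):   "good",
    ("high", "irregular"): "average",
    ("low",  "regular"):   "average",
    ("low",  "irregular"): "bad",
}

def assess_center(raw_material_quality: str, delivery_regularity: str) -> str:
    """Aggregate two qualitative inputs into an overall service rating."""
    return RULES[(raw_material_quality, delivery_regularity)]

print(assess_center("high", "regular"))    # -> good
print(assess_center("low", "irregular"))   # -> bad
```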


1994
Vol 16 (2)
pp. 298
Author(s):
DG Wilcox
DG Burnside

This paper discusses the path of change in land administration practices, from practices whose principal objective was the exploitation of pastoral resources by domestic stock to a position where administration is required to take a more holistic view of managing rangelands for a wide range of uses. Although administration has historically been slow to react to changing operating environments, varying degrees of legislative and behavioural change have occurred in response to a wide range of influences. These influences include: objective information on rangeland resources; complementary legislation affecting the use of these resources; new Government programs directed at improving land management; a developing awareness of the value of rangeland for purposes other than grazing domestic animals; and the economic difficulties facing the grazing industries. Given the major changes and uncertainties surrounding rangeland use, we suggest that administrators themselves must define their objectives clearly in terms of the needs of all land users, within a framework of sustainable land use. This work can best be done within new networks and partnerships involving the relevant agencies and groups. By defining acceptable criteria and decision rules within these structures, administrators can focus more on the quality of the land administration process and on measuring their performance, rather than regulating for a defined desirable outcome. Finally, we recognise that the evaluation of administrative performance is an area requiring urgent attention.


Author(s):  
Michael R. Kosorok
Eric B. Laber

Precision medicine seeks to maximize the quality of health care by individualizing the health-care process to the uniquely evolving health status of each patient. This endeavor spans a broad range of scientific areas including drug discovery, genetics/genomics, health communication, and causal inference, all in support of evidence-based, i.e., data-driven, decision making. Precision medicine is formalized as a treatment regime that comprises a sequence of decision rules, one per decision point, which map up-to-date patient information to a recommended action. The potential actions could be the selection of which drug to use, the selection of dose, the timing of administration, the recommendation of a specific diet or exercise, or other aspects of treatment or care. Statistics research in precision medicine is broadly focused on methodological development for estimation of and inference for treatment regimes that maximize some cumulative clinical outcome. In this review, we provide an overview of this vibrant area of research and present important and emerging challenges.
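As a toy illustration of this formalization, the sketch below encodes a two-stage regime as a sequence of decision rules, each mapping current patient information to a recommended action. The covariates, thresholds, and actions are hypothetical, not drawn from any estimated regime.

```python
# Toy illustration of a treatment regime as a sequence of decision rules.
# Each rule maps up-to-date patient information to a recommended action.
# Covariates, thresholds, and actions are hypothetical.
from dataclasses import dataclass

@dataclass
class PatientState:
    biomarker: float       # e.g. a lab measurement at this decision point
    prior_response: bool   # did the patient respond to the previous treatment?

def rule_stage1(s: PatientState) -> str:
    return "drug_A" if s.biomarker > 2.0 else "drug_B"

def rule_stage2(s: PatientState) -> str:
    # Later rules may condition on accumulated history, such as prior response.
    return "maintain_current_dose" if s.prior_response else "switch_to_combination"

regime = [rule_stage1, rule_stage2]
state = PatientState(biomarker=2.4, prior_response=False)
print([rule(state) for rule in regime])   # action recommended at each stage
```

Estimation then amounts to choosing the rules (here hard-coded) to maximize an expected cumulative clinical outcome.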


2017
Vol 25 (5)
pp. 507-514
Author(s):
Sowmya Varada
Ronilda Lacson
Ali S Raja
Ivan K Ip
Louise Schneider
...

Abstract
Objective: To describe the types of recommendations represented in a curated online evidence library, report on the quality of evidence-based recommendations pertaining to diagnostic imaging exams, and assess the underlying knowledge representation.
Materials and Methods: The evidence library is populated with clinical decision rules, professional society guidelines, and locally developed best practice guidelines. Individual recommendations were graded based on a standard methodology and compared using the chi-square test. Strength of evidence ranged from grade 1 (systematic review) through grade 5 (recommendations based on expert opinion). Finally, variations in the underlying representation of these recommendations were identified.
Results: The library contains 546 individual imaging-related recommendations. Only 15% (16/106) of recommendations from clinical decision rules were grade 5, vs 83% (526/636) from professional society practice guidelines and local best practice guidelines that cited grade 5 studies (P < .0001). Minor head trauma, pulmonary embolism, and appendicitis were the topic areas supported by the highest quality of evidence. The three main variations in the underlying representation of recommendations were "single-decision," "branching," and "score-based."
Discussion: Most recommendations were grade 5, largely because studies to test and validate many recommendations were absent. Recommendation types vary in amount and complexity and, accordingly, in the structure and syntax of the statements they generate. However, they can all be represented in single-decision, branching, or score-based form.
Conclusion: In a curated evidence library with graded imaging-based recommendations, evidence quality varied widely, with decision rules providing the highest-quality recommendations. The library may be helpful in highlighting evidence gaps, comparing recommendations from varied sources on similar clinical topics, and prioritizing imaging recommendations to inform clinical decision support implementation.
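The three representation types can be pictured schematically. The clinical content in the sketch below is invented for illustration and is not drawn from the library itself.

```python
# Schematic sketch of the three recommendation representations the paper
# identifies. All clinical content here is invented for illustration only.

# 1. Single-decision: one condition yields one recommendation.
def single_decision(pregnant: bool) -> str:
    return "avoid CT" if pregnant else "CT appropriate"

# 2. Branching: nested conditions form a small decision tree.
def branching(age: int, high_risk: bool) -> str:
    if age >= 65:
        return "imaging recommended"
    return "imaging recommended" if high_risk else "no imaging"

# 3. Score-based: criterion points are summed and compared to a threshold.
def score_based(criteria: dict) -> str:
    points = {"criterion_a": 2, "criterion_b": 1, "criterion_c": 1}
    score = sum(points[name] for name, met in criteria.items() if met)
    return "imaging recommended" if score >= 2 else "no imaging"

print(single_decision(pregnant=True))
print(branching(age=40, high_risk=True))
print(score_based({"criterion_a": True, "criterion_b": False, "criterion_c": False}))
```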


Author(s):  
В.А. Махров
А.В. Найденов

We consider the problem of detecting radar discrete composite frequency signals with broadband receivers that use software detectors. This type of signal is widely used in radar owing to its high noise immunity and energy stealth. To detect such signals, a broadband energy detector is often used: it measures the energy of the received signal, compares it with a threshold level, and on that basis decides whether a signal is present or absent. The disadvantage of such devices is that they can be triggered by single samples that are not part of a useful signal, and because detection is carried out over a wide frequency band, the ability to receive these signals is impaired. To improve the reception of composite frequency signals, software detectors are applied: their decision rules increase detection quality, and single samples no longer cause triggering. As a result, we developed a technique for estimating the probabilities of correct and false detection by a broadband receiver with a software detector in communication systems employing wideband signals, using radar discrete composite frequency signals as an example. The advantage of software processing is demonstrated.
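To illustrate the contrast the abstract draws, the sketch below pairs a simple threshold energy detector with an m-of-n style software decision rule (binary integration), which suppresses isolated single-sample crossings. The parameters are invented, and the authors' specific technique is not reproduced.

```python
# Minimal sketch: an energy detector compares received energy with a
# threshold per dwell; a software (m-of-n) decision rule then declares a
# detection only if at least m of n dwells cross the threshold, which
# suppresses single-sample false alarms. Parameters are illustrative.
import numpy as np

rng = np.random.default_rng(1)

def energy_detect(samples: np.ndarray, threshold: float) -> bool:
    return float(np.sum(samples ** 2)) >= threshold

def m_of_n_detector(dwells: list, threshold: float, m: int) -> bool:
    hits = sum(energy_detect(d, threshold) for d in dwells)
    return hits >= m

noise_only  = [rng.normal(0.0, 1.0, 64) for _ in range(5)]   # noise dwells
with_signal = [rng.normal(0.8, 1.0, 64) for _ in range(5)]   # signal + noise

print(m_of_n_detector(noise_only,  threshold=90.0, m=3))   # usually False
print(m_of_n_detector(with_signal, threshold=90.0, m=3))   # usually True
```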


2014
Vol 625
pp. 26-33
Author(s):
Michael Krystek

Measurement uncertainty has important economic consequences for calibration and inspection activities and is often taken as an indication of the quality of a test laboratory; smaller uncertainties are generally valued more highly. In industry, the decision rules employed in accepting and rejecting products are based on the measurement uncertainty budgets of the relevant product characteristics. Conformity assessment based on the product specification and the measurement evaluation is an important part of industrial quality assurance for manufactured products and for the stability of production processes. The aim of this paper is to describe the relationship between the conformance zone and the acceptance zone, and to address the problem of determining the acceptance limits that define the boundaries of the acceptance zone.
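The usual way to relate the two zones is through guard bands: the acceptance zone is the conformance (specification) zone shrunk on each side by the expanded uncertainty. A minimal sketch of that convention follows; this is the common ISO 14253-1-style rule, not necessarily the paper's exact formulation.

```python
# Minimal sketch of guard-banded acceptance limits: the acceptance zone is
# the specification (conformance) zone shrunk on each side by a guard band,
# here taken as the expanded uncertainty U = k * u. This follows common
# guard-banding practice, not the paper's specific derivation.
def acceptance_limits(lower_spec: float, upper_spec: float,
                      u: float, k: float = 2.0):
    U = k * u                                  # expanded uncertainty, coverage factor k
    lower_acc, upper_acc = lower_spec + U, upper_spec - U
    if lower_acc >= upper_acc:
        raise ValueError("uncertainty too large: acceptance zone is empty")
    return lower_acc, upper_acc

def accept(measured: float, lower_spec: float, upper_spec: float, u: float) -> bool:
    lo, hi = acceptance_limits(lower_spec, upper_spec, u)
    return lo <= measured <= hi

# Example: specification zone 9.95..10.05 mm, standard uncertainty u = 0.01 mm.
print(acceptance_limits(9.95, 10.05, 0.01))   # -> (9.97, 10.03)
print(accept(10.02, 9.95, 10.05, 0.01))       # -> True
```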


2002
Vol 27 (3)
pp. 255-270
Author(s):
J.R. Lockwood
Thomas A. Louis
Daniel F. McCaffrey

Accountability for public education often requires estimating and ranking the quality of individual teachers or schools on the basis of student test scores. Although the properties of estimators of teacher or school effects are well established, less is known about the properties of rank estimators. We investigate the performance of rank (percentile) estimators in a basic two-stage hierarchical model that captures the essential features of the more complicated models commonly used to estimate effects. We use simulation to study the mean squared error (MSE) performance of percentile estimates and to find the operating characteristics of decision rules based on estimated percentiles. Each depends on the signal-to-noise ratio (the ratio of the teacher or school variance component to the variance of the direct, teacher- or school-specific estimator) and only moderately on the number of teachers or schools. Results show that even when optimal procedures are used, MSE is large for commonly encountered variance ratios, with an unrealistically large ratio required for ideal performance. Percentile-specific MSE results reveal interesting interactions between variance ratios and estimators, especially for extreme percentiles, which are of considerable practical import. These interactions are apparent in the performance of decision rules for the identification of extreme percentiles, underscoring the statistical and practical complexity of the multiple-goal inferences faced in value-added modeling. Our results highlight the need to assess whether even optimal percentile estimators perform well enough to be used in evaluating teachers or schools.
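A minimal simulation in the spirit of this setup: true effects and noisy direct estimates are drawn from a two-stage hierarchical model, units are ranked by their direct estimates, and the MSE of the estimated percentiles is computed. The variance ratio and sizes are illustrative, and the naive rank estimator here stands in for the optimal procedures studied in the paper.

```python
# Minimal simulation sketch of the two-stage hierarchical model:
# true effects theta_j ~ N(0, tau2); direct estimates y_j ~ N(theta_j, sigma2).
# Units are ranked by y, and the MSE of estimated vs true percentiles is
# computed. Values are illustrative; the paper studies optimal estimators,
# whereas this uses the naive rank of the direct estimates.
import numpy as np

rng = np.random.default_rng(0)
J, reps = 100, 500           # number of schools/teachers; simulation replications
tau2, sigma2 = 1.0, 4.0      # effect (signal) variance vs sampling (noise) variance

mse = 0.0
for _ in range(reps):
    theta = rng.normal(0.0, np.sqrt(tau2), J)          # true effects
    y = theta + rng.normal(0.0, np.sqrt(sigma2), J)    # direct estimates
    true_pct = theta.argsort().argsort() / (J - 1)     # true percentile of each unit
    est_pct = y.argsort().argsort() / (J - 1)          # estimated percentile
    mse += np.mean((est_pct - true_pct) ** 2) / reps

print(f"variance ratio {tau2 / sigma2:.2f}, percentile MSE {mse:.4f}")
```

Rerunning with a larger tau2/sigma2 ratio shrinks the percentile MSE, which is the signal-to-noise dependence the paper quantifies.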


2018
Vol 16 (1/2)
pp. 29-38
Author(s):
M. Sudha
A. Kumaravel

Rough set theory is a simple and powerful methodology for extracting and minimizing rules from decision tables. Its key concepts are the core, reducts, and the discovery of knowledge in the form of rules. The decision rules describe the decision states and support prediction in new situations. Rough set theory was initially proposed as a tool for the analysis of decision states. The approach produces two types of decision rules, certain and possible, based on the lower and upper approximations. Prediction quality may be strongly affected as the data size grows, and the application of rough set theory in this direction has not yet been considered. Hence, the main objective of this paper is to study the influence of data size on the number of rules generated by rough set methods. The performance of these methods is presented through metrics such as accuracy and quality of classification. The results show the range of performance and are, to the authors' knowledge, the first of their kind in the current research trend.
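Certain rules arise from the lower approximation of a decision class (indiscernibility blocks entirely inside the class) and possible rules from the upper approximation (blocks that merely overlap it). A minimal sketch on an invented decision table follows.

```python
# Minimal sketch of rough-set lower/upper approximations on a toy decision
# table. Objects indiscernible on the condition attributes form blocks;
# certain rules come from the lower approximation (blocks fully inside the
# decision class), possible rules from the upper approximation (blocks that
# overlap it). The table is invented for illustration.
from collections import defaultdict

# (condition attribute values) -> decision, one row per object
table = [
    (("high", "yes"), "good"),
    (("high", "yes"), "good"),
    (("high", "no"),  "good"),
    (("high", "no"),  "bad"),   # same conditions as the row above, different decision
    (("low",  "no"),  "bad"),
]

blocks = defaultdict(list)               # indiscernibility classes
for conds, decision in table:
    blocks[conds].append(decision)

target = "good"
lower = [c for c, decs in blocks.items() if all(d == target for d in decs)]
upper = [c for c, decs in blocks.items() if any(d == target for d in decs)]

print("certain rules from:", lower)    # -> [('high', 'yes')]
print("possible rules from:", upper)   # -> [('high', 'yes'), ('high', 'no')]
```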

