Reliability and Accuracy of Alternative Default Prediction Models: Evidence from Slovakia

2021, Vol 9 (4), pp. 65
Author(s): Daniela Rybárová, Helena Majdúchová, Peter Štetka, Darina Luščíková

The aim of this paper is to assess the reliability of alternative default prediction models in local conditions and subsequently compare them with generally known and globally disseminated default prediction models, such as Altman’s Z-score, the Quick Test, the Creditworthiness Index, and Taffler’s Model. The comparison was carried out on a sample of 90 companies operating in the Slovak Republic over a period of 3 years (2016, 2017, and 2018), with a narrower focus on three sectors: construction, retail, and tourism. The alternative default prediction models examined were the CH-index, G-index, Binkert’s Model, HGN2 Model, M-model, Gulka’s Model, Hurtošová’s Model, and the Model of Delina and Packová. To verify the reliability of these models, significance tests of statistical hypotheses were used, namely type I and type II error rates. According to the research results, the highest reliability and accuracy were achieved by a local alternative, the Model of Delina and Packová. The least reliable results among the models examined were reported by the most globally disseminated model, Altman’s Z-score. Significant differences between sectors were identified.
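To make the evaluation criterion concrete, the sketch below shows how type I and type II error rates can be computed for a score-based default model on a labeled sample. All scores, labels, and the cutoff are hypothetical illustrations, not the paper’s models or data; the error convention follows the bankruptcy-prediction literature (type I: a defaulting company classified as healthy).

```python
# Minimal sketch: estimating type I and type II error rates for a
# score-based default prediction model on a labeled company sample.
# Scores, labels, and cutoff are hypothetical, not the paper's data.

def error_rates(scores, defaulted, cutoff):
    """Classify a company as 'default' when its score falls below `cutoff`.

    Type I error: a company that defaulted is classified as healthy.
    Type II error: a healthy company is classified as defaulting.
    """
    type1 = sum(1 for s, d in zip(scores, defaulted) if d and s >= cutoff)
    type2 = sum(1 for s, d in zip(scores, defaulted) if not d and s < cutoff)
    n_default = sum(defaulted)
    n_healthy = len(defaulted) - n_default
    return type1 / n_default, type2 / n_healthy

# Hypothetical Altman-style Z-scores and observed default flags.
scores = [1.2, 3.1, 2.0, 1.5, 0.9, 3.4]
defaulted = [True, False, True, False, True, False]
t1, t2 = error_rates(scores, defaulted, cutoff=1.81)  # classic Z-score "distress" cutoff
print(f"type I error rate: {t1:.2f}, type II error rate: {t2:.2f}")
```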

1996, Vol 26 (2), pp. 149-160
Author(s): J. K. Belknap, S. R. Mitchell, L. A. O'Toole, M. L. Helms, J. C. Crabbe

2005, Vol 7 (1), pp. 41
Author(s): Mohamad Iwan

This research examines financial ratios that distinguish between bankrupt and non-bankrupt companies and makes use of those distinguishing ratios to build a one-year-prior-to-bankruptcy prediction model. This research also calculates how many times more costly a type I error is than a type II error. The costs of type I and type II errors (costs of misclassification) are used, in conjunction with the prior probabilities of bankruptcy and non-bankruptcy, to calculate the ZETAc optimal cut-off score. The bankruptcy prediction result using the ZETAc optimal cut-off score is compared with the result using a cut-off score that considers neither the costs of classification errors nor the prior probabilities, as stated by Hair et al. (1998), hereafter referred to as the Hair et al. optimum cutting score. The prediction results of the two cut-off scores are compared to determine which is better, i.e., which yields the more conservative prediction and minimizes the expected costs that may arise from classification errors. This is the first research in Indonesia to incorporate type I and II errors and the prior probabilities of bankruptcy and non-bankruptcy in computing the cut-off score used for bankruptcy prediction. Earlier research gave equal weight to type I and II errors and to the prior probabilities of bankruptcy and non-bankruptcy, while this research gives greater weight to the type I error than to the type II error, and to the prior probability of non-bankruptcy than to the prior probability of bankruptcy. This research successfully attained the following results: (1) a type I error is in fact 59.83 times more costly than a type II error; (2) 22 ratios distinguish between the bankrupt and non-bankrupt groups; (3) 2 financial ratios proved effective in predicting bankruptcy; (4) prediction using the ZETAc optimal cut-off score identifies more companies filing for bankruptcy within one year than prediction using the Hair et al. optimum cutting score; (5) although prediction using the Hair et al. optimum cutting score is more accurate, prediction using the ZETAc optimal cut-off score proved able to minimize the costs incurred from classification errors.
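The ZETA-style optimal cut-off this line of work builds on has a closed form, ln((q1·cI)/(q2·cII)), where q1 and q2 are the prior probabilities of bankruptcy and non-bankruptcy and cI and cII the misclassification costs (Altman, Haldeman, and Narayanan, 1977). A minimal sketch follows, with the 59.83 cost ratio taken from the abstract and an assumed prior; it is illustrative, not the author’s computation.

```python
import math

# Sketch of a ZETA-style optimal cut-off: the score threshold that
# minimizes expected misclassification cost given prior probabilities
# and error costs. The prior below is an assumption; the cost ratio
# (type I is 59.83 times costlier than type II) is from the abstract.

q_bankrupt = 0.05           # assumed prior probability of bankruptcy
q_healthy = 1 - q_bankrupt  # prior probability of non-bankruptcy
cost_I = 59.83              # relative cost of a type I error
cost_II = 1.0               # relative cost of a type II error

z_cutoff = math.log((q_bankrupt * cost_I) / (q_healthy * cost_II))
print(f"optimal cut-off score: {z_cutoff:.4f}")
# A firm whose discriminant score falls below the cut-off is
# classified into the bankrupt group.
```

Because cost_I dwarfs cost_II, the cut-off shifts upward, flagging more firms as bankrupt; this is exactly the conservatism result (4)–(5) above describe.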


Author(s): Aniek Sies, Iven Van Mechelen

When multiple treatment alternatives are available for a certain psychological or medical problem, an important challenge is to find an optimal treatment regime, which specifies for each patient the most effective treatment alternative given his or her pattern of pretreatment characteristics. The focus of this paper is on tree-based treatment regimes, which link an optimal treatment alternative to each leaf of a tree; as such, they provide an insightful representation of the decision structure underlying the regime. This paper compares the absolute and relative performance of four methods for estimating regimes of that sort (viz., Interaction Trees, Model-based Recursive Partitioning, an approach developed by Zhang et al., and Qualitative Interaction Trees) in an extensive simulation study. The evaluation criteria were, on the one hand, the expected outcome if the entire population were subjected to the treatment regime resulting from each method under study and the proportion of clients assigned to the truly best treatment alternative, and, on the other hand, the Type I and Type II error probabilities of each method. The method of Zhang et al. was superior on the first two outcome measures and the Type II error probabilities, but performed worst in some conditions of the simulation study with respect to Type I error probabilities.
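As a rough illustration of the first two evaluation criteria (expected population outcome and proportion assigned to the truly best treatment), the following simulation sketch evaluates a hypothetical one-split tree regime against a known truth. It is not the authors’ simulation code, and the data-generating model is invented.

```python
import numpy as np

# Illustrative simulation: evaluate a one-split "tree" regime in a
# setting where the truly optimal treatment depends on a covariate x.
rng = np.random.default_rng(0)

def true_mean(x, treatment):
    # Invented truth: treatment 1 is better when x > 0, else treatment 0.
    effect = 0.5 if treatment == 1 else -0.5
    return np.where(x > 0, effect, -effect)

def regime(x, split=0.1):
    # A one-leaf-per-side tree: assign treatment 1 in the leaf x > split.
    return (x > split).astype(int)

x = rng.normal(size=100_000)
assigned = regime(x)
expected_outcome = true_mean(x, 1) * assigned + true_mean(x, 0) * (1 - assigned)
best = (x > 0).astype(int)  # truly optimal assignment
print("expected outcome under regime:", expected_outcome.mean().round(3))
print("proportion assigned to best arm:", (assigned == best).mean().round(3))
```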


Methodology, 2010, Vol 6 (4), pp. 147-151
Author(s): Emanuel Schmider, Matthias Ziegler, Erik Danay, Luzi Beyer, Markus Bühner

Empirical evidence for the robustness of the analysis of variance (ANOVA) to violations of the normality assumption is presented by means of Monte Carlo methods. High-quality samples from normally, rectangularly (uniformly), and exponentially distributed populations are created by drawing random numbers from the respective generators, checking their goodness of fit, and admitting only the best 10% to the investigation. A one-way fixed-effects design with three groups of 25 values each is chosen. Effect sizes are implemented in the samples and varied over a broad range. Comparing the outcomes of the ANOVA calculations across the different distribution types gives reason to regard the ANOVA as robust: both the empirical type I error rate α and the empirical type II error rate β remain constant under violation. Moreover, regression analysis identifies the factor “type of distribution” as not significant in explaining the ANOVA results.
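A Monte Carlo setup in the spirit of the study (though not the authors’ code, and without their goodness-of-fit screening step) can be sketched as follows; it estimates the empirical type I error rate of a one-way ANOVA with three groups of n = 25 under each distribution type.

```python
import numpy as np
from scipy.stats import f_oneway

# Empirical type I error of one-way ANOVA (three groups, n = 25) under
# normal, rectangular (uniform), and exponential populations with no
# true group differences; nominal alpha = 0.05.
rng = np.random.default_rng(42)
n, groups, reps, alpha = 25, 3, 10_000, 0.05

samplers = {
    "normal":      lambda: rng.normal(0, 1, (groups, n)),
    "rectangular": lambda: rng.uniform(-1, 1, (groups, n)),
    "exponential": lambda: rng.exponential(1, (groups, n)),
}

for name, draw in samplers.items():
    rejections = sum(f_oneway(*draw()).pvalue < alpha for _ in range(reps))
    print(f"{name:12s} empirical type I error: {rejections / reps:.3f}")
```

If the ANOVA is robust in the paper’s sense, all three empirical rates should hover near the nominal 0.05.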


1996, Vol 1 (1), pp. 25-28
Author(s): Martin A. Weinstock

Background: Accurate understanding of certain basic statistical terms and principles is key to critical appraisal of published literature. Objective: This review describes type I error, type II error, null hypothesis, p value, statistical significance, α, two-tailed and one-tailed tests, effect size, alternate hypothesis, statistical power, β, publication bias, confidence interval, standard error, and standard deviation, while including examples from reports of dermatologic studies. Conclusion: The application of the results of published studies to individual patients should be informed by an understanding of certain basic statistical concepts.
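Several of the reviewed quantities (α, β, power, confidence interval) can be illustrated numerically with a two-sample z-approximation; the effect size, standard deviation, and group sizes below are invented for demonstration.

```python
from scipy.stats import norm

# Illustration of reviewed terms via a two-sample z-approximation.
# Effect size, SD, and per-group n are made-up demonstration values.
alpha = 0.05                  # type I error rate (two-tailed)
delta, sd, n = 0.5, 1.0, 64   # true mean difference, SD, per-group n

se = sd * (2 / n) ** 0.5            # standard error of the difference
z_crit = norm.ppf(1 - alpha / 2)    # two-tailed critical value

# Power: probability of rejecting H0 when the true difference is delta.
power = norm.cdf(delta / se - z_crit) + norm.cdf(-delta / se - z_crit)
beta = 1 - power                    # type II error rate

ci_half_width = z_crit * se         # 95% CI half-width around the estimate
print(f"power = {power:.3f}, beta = {beta:.3f}, CI half-width = {ci_half_width:.3f}")
```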


2013, Vol 31 (15_suppl), pp. 4036-4036
Author(s): Daniel M. Halperin, J. Jack Lee, James C. Yao

Background: Few new therapies for pancreatic adenocarcinoma (PC) have been approved by the Food and Drug Administration (FDA) or recommended by the National Comprehensive Cancer Network (NCCN), reflecting frequent failures in phase III trials. We hypothesize that the high failure rate in large trials is due to a low predictive value for “positive” phase II studies. Methods: Given a median time of 6.3 years from initiation of clinical trials to FDA approval, we conducted a systematic search of the clinicaltrials.gov database for phase II interventional trials of antineoplastic therapy in PC initiated from 1999 to 2004. We reviewed drug labels and NCCN guidelines for FDA approvals and guideline recommendations. Results: We identified 70 phase II trials that met our inclusion criteria. Forty-five evaluated compounds without preexisting FDA approval, 23 evaluated drugs approved in other diseases, and 2 evaluated cellular therapies. With a median follow-up of 12.5 years, none of these drugs gained FDA approval in PC. Four trials, all combining chemotherapy with radiation, eventually resulted in NCCN recommendations. Forty-two of the trials have been published. Of 16 studies providing pre-specified type I error rates, these rates were ≥0.1 in 8 studies, 0.05 in 6 studies, and <0.025 in 2 studies. Of 21 studies specifying type II error rates, 7 used >0.1, 10 used 0.1, and 4 used <0.1. Published studies reported a median enrollment of 47 subjects. Fourteen trials reported using a randomized design. Conclusions: The low rate of phase II trials resulting in eventual regulatory approval of therapies for PC reflects both the challenge of a difficult disease and deficiencies in statistical designs. New strategies are necessary to quantify and improve the odds of success in drug development. Statistical parameters of individual or coupled phase II trials should be tailored to achieve the desired predictive value prior to initiating pivotal phase III studies. (Table omitted: positive predictive value of a phase II study assuming a 1%, 2%, or 5% prior probability of success and a 10% type II error rate.)
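The positive predictive value the authors tabulate follows from Bayes’ rule: PPV = π(1−β) / (π(1−β) + (1−π)α), where π is the prior probability of success. A short sketch with β = 0.10 as in the abstract and illustrative α values:

```python
# PPV of a "positive" phase II trial: probability the drug is truly
# active given a positive result, for prior probability of success
# `prior`, type I error `alpha`, and type II error `beta`.
# beta = 0.10 matches the abstract; the alpha grid is illustrative.

def ppv(prior, alpha, beta):
    power = 1 - beta
    return prior * power / (prior * power + (1 - prior) * alpha)

for prior in (0.01, 0.02, 0.05):
    for alpha in (0.025, 0.05, 0.10):
        print(f"prior={prior:.0%}  alpha={alpha:<5}  PPV={ppv(prior, alpha, 0.10):.3f}")
```

Even with α = 0.05 and 90% power, a 1% prior yields a PPV of roughly 0.15, which is consistent with the authors’ argument that most “positive” phase II signals in this setting are false positives.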

