Predicting financial distress of companies listed on the JSE: A comparison of techniques

2009 ◽  
Vol 40 (1) ◽  
pp. 21-32 ◽  
Author(s):  
G. H. Muller ◽  
B. W. Steyn-Bruwer ◽  
W. D. Hamman

In 2006, Steyn-Bruwer and Hamman highlighted several deficiencies in previous research investigating the prediction of corporate failure (or financial distress) of companies. In their research, Steyn-Bruwer and Hamman made use of the population of companies for the period under review and not only a sample of bankrupt versus successful companies. Here the sample of bankrupt versus successful companies is considered as two extremes on the continuum of financial condition, while the population is considered as the entire continuum of financial condition. The main objective of this research, which was based on the above-mentioned authors' work, was to test whether some modelling techniques would in fact provide better prediction accuracies than others. The different modelling techniques considered were multiple discriminant analysis (MDA), recursive partitioning (RP), logit analysis (LA) and neural networks (NN). From the literature survey it was evident that the existing literature did not readily consider the number of Type I and Type II errors made. As such, this study introduces a novel concept (not seen in other research) called the "Normalised Cost of Failure" (NCF), which takes cognisance of the fact that a Type I error typically costs 20 to 38 times that of a Type II error. In terms of the main research objective, the results show that different analysis techniques definitely produce different predictive accuracies. The MDA and RP techniques correctly predict the most "failed" companies, and consequently have the lowest NCF, while the LA and NN techniques provide the best overall predictive accuracy.
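The following minimal sketch (not the authors' code) shows how a cost-weighted error measure in the spirit of the Normalised Cost of Failure could be computed from classification counts; the function name and the 30-to-1 cost ratio are illustrative assumptions, chosen from inside the 20-38 range quoted above.

```python
# Illustrative sketch (not the authors' implementation): a cost-weighted error
# measure in the spirit of the "Normalised Cost of Failure" (NCF). Here a
# Type I error means a failed company predicted as healthy; the 30x cost
# ratio is an assumed value inside the 20-38 range quoted in the abstract.

def normalised_cost_of_failure(type1_count, type2_count, n_total, cost_ratio=30.0):
    """Total misclassification cost per company, with Type I errors weighted
    `cost_ratio` times more heavily than Type II errors."""
    total_cost = cost_ratio * type1_count + 1.0 * type2_count
    return total_cost / n_total

# Example: compare two hypothetical classifiers on the same population.
print(normalised_cost_of_failure(type1_count=12, type2_count=80, n_total=1000))
print(normalised_cost_of_failure(type1_count=25, type2_count=40, n_total=1000))
```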

2005 ◽  
Vol 7 (1) ◽  
pp. 41 ◽  
Author(s):  
Mohamad Iwan

This research examines financial ratios that distinguish between bankrupt and non-bankrupt companies and uses those distinguishing ratios to build a one-year-prior-to-bankruptcy prediction model. It also calculates how many times more costly a Type I error is than a Type II error. The costs of Type I and Type II errors (the costs of misclassification), together with the prior probabilities of bankruptcy and non-bankruptcy, are used in the calculation of the ZETAc optimal cut-off score. The bankruptcy prediction obtained with the ZETAc optimal cut-off score is compared with the prediction obtained with a cut-off score that considers neither the costs of classification errors nor the prior probabilities, as stated by Hair et al. (1998); the latter is hereafter referred to as the Hair et al. optimum cutting score. The comparison of the prediction results of the two cut-off scores is intended to determine which of the two is better, so that the prediction is more conservative and minimises the expected costs that may arise from classification errors. This is the first research in Indonesia that incorporates Type I and Type II errors and the prior probabilities of bankruptcy and non-bankruptcy in the computation of the cut-off score used for bankruptcy prediction. Earlier research gave equal weight to Type I and Type II errors and to the prior probabilities of bankruptcy and non-bankruptcy, whereas this research gives greater weight to the Type I error than to the Type II error, and to the prior probability of non-bankruptcy than to the prior probability of bankruptcy. This research attained the following results: (1) a Type I error is in fact 59.83 times more costly than a Type II error; (2) 22 ratios distinguish between the bankrupt and non-bankrupt groups; (3) two financial ratios proved to be effective in predicting bankruptcy; (4) prediction using the ZETAc optimal cut-off score predicts more companies filing for bankruptcy within one year than prediction using the Hair et al. optimum cutting score; (5) although prediction using the Hair et al. optimum cutting score is more accurate, prediction using the ZETAc optimal cut-off score proved able to minimise the cost incurred from classification errors.
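A hedged sketch of the cut-off calculation described above, assuming the classical ZETA rule ln((q1·C_I)/(q2·C_II)) with prior probabilities q1, q2 and misclassification costs C_I, C_II; the priors below are illustrative, and only the 59.83 cost ratio comes from the abstract.

```python
# Hedged sketch of a ZETA-style optimal cut-off score: ln((q1 * C_I) / (q2 * C_II)),
# where q1/q2 are the prior probabilities of bankruptcy/non-bankruptcy and
# C_I/C_II are the costs of Type I/Type II misclassification. The priors are
# assumed for illustration; the 59.83 cost ratio is the one reported above.
import math

def zeta_cutoff(prior_bankrupt, prior_nonbankrupt, cost_type1, cost_type2):
    """Cut-off that minimises expected misclassification cost under the
    classical ZETA framework (the classification rule's sign convention
    depends on the underlying scoring model)."""
    return math.log((prior_bankrupt * cost_type1) / (prior_nonbankrupt * cost_type2))

# Example: an assumed 5% prior probability of bankruptcy and a Type I error
# 59.83 times as costly as a Type II error.
print(zeta_cutoff(0.05, 0.95, 59.83, 1.0))
```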


Methodology ◽  
2010 ◽  
Vol 6 (4) ◽  
pp. 147-151 ◽  
Author(s):  
Emanuel Schmider ◽  
Matthias Ziegler ◽  
Erik Danay ◽  
Luzi Beyer ◽  
Markus Bühner

Empirical evidence on the robustness of the analysis of variance (ANOVA) with respect to violation of the normality assumption is presented by means of Monte Carlo methods. High-quality samples from normally, rectangularly, and exponentially distributed basic populations are created by drawing samples of random numbers from the respective generators, checking their goodness of fit, and allowing only the best 10% to take part in the investigation. A one-way fixed-effects design with three groups of 25 values each is chosen. Effect sizes are implemented in the samples and varied over a broad range. Comparing the outcomes of the ANOVA calculations for the different types of distributions gives reason to regard the ANOVA as robust. Both the empirical Type I error α and the empirical Type II error β remain constant under violation. Moreover, regression analysis identifies the factor "type of distribution" as not significant in explaining the ANOVA results.
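The study's design can be imitated in a few lines; the sketch below is a simplification under stated assumptions (the authors' goodness-of-fit screening of samples is omitted) that estimates the empirical Type I error of a one-way ANOVA with three groups of 25 under the three distributions.

```python
# A minimal Monte Carlo sketch in the spirit of the study: empirical Type I
# error of a one-way fixed-effects ANOVA (three groups of 25) when the
# underlying population is normal, uniform ("rectangular"), or exponential.
# The authors' step of keeping only the best-fitting 10% of samples is omitted.
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(0)
n, k, reps, alpha = 25, 3, 5000, 0.05

generators = {
    "normal": lambda: rng.normal(size=(k, n)),
    "rectangular": lambda: rng.uniform(size=(k, n)),
    "exponential": lambda: rng.exponential(size=(k, n)),
}

for name, draw in generators.items():
    rejections = 0
    for _ in range(reps):
        groups = draw()              # all groups share one distribution,
        _, p = f_oneway(*groups)     # so the null hypothesis is true
        rejections += p < alpha
    print(f"{name:12s} empirical Type I error: {rejections / reps:.3f}")
```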


1996 ◽  
Vol 1 (1) ◽  
pp. 25-28 ◽  
Author(s):  
Martin A. Weinstock

Background: Accurate understanding of certain basic statistical terms and principles is key to critical appraisal of published literature. Objective: This review describes type I error, type II error, null hypothesis, p value, statistical significance, α, two-tailed and one-tailed tests, effect size, alternate hypothesis, statistical power, β, publication bias, confidence interval, standard error, and standard deviation, while including examples from reports of dermatologic studies. Conclusion: The application of the results of published studies to individual patients should be informed by an understanding of certain basic statistical concepts.
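As a companion to these definitions, the hypothetical two-sample example below ties several of the terms together (standard error, an approximate 95% confidence interval, and a two-tailed p value); it is not taken from the review itself, and the effect size and sample sizes are assumed.

```python
# Illustrative sketch (not from the review): standard error, an approximate
# 95% confidence interval, and a two-tailed p value for a hypothetical
# comparison of a treated and a control group.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
treated = rng.normal(loc=0.5, scale=1.0, size=40)   # assumed true effect of 0.5 SD
control = rng.normal(loc=0.0, scale=1.0, size=40)

diff = treated.mean() - control.mean()
se = np.sqrt(treated.var(ddof=1) / len(treated) + control.var(ddof=1) / len(control))
ci = (diff - 1.96 * se, diff + 1.96 * se)            # 95% CI, normal approximation
t, p = stats.ttest_ind(treated, control)             # two-tailed p value

print(f"difference={diff:.2f}, SE={se:.2f}, 95% CI=({ci[0]:.2f}, {ci[1]:.2f}), p={p:.3f}")
```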


1984 ◽  
Vol 11 (1) ◽  
pp. 11-18 ◽  
Author(s):  
W. Ted Hinds

Ecological monitoring is the purposeful observation, over time, of ecological processes in relation to stress. It differs from biological monitoring in that ecological monitoring does not consider the biota to be a surrogate filter to be analysed for contaminants, but rather takes changes in biotic processes as its focal point for observing the response to stress. Ecological monitoring methods aimed at detecting subtle or slow changes in ecological structure or function usually cannot be based on simple repetition of an arbitrarily chosen field measurement. An optimum method should be deliberately designed to be ecologically appropriate, statistically credible, and cost-efficient. Ecologically appropriate methods should consider the ecological processes that are most likely to respond to the stress of concern, so that relatively simple and well-defined measurements can be used. Statistical credibility requires that both Type I and Type II errors be addressed; a Type I error (a false declaration of impact when none exists) and a Type II error (a false declaration that no change has taken place or that an observed change is random) are about equally important in a monitoring context. Therefore, these error rates should probably be set equal. Furthermore, the error rates should reflect the large inherent variability of undomesticated situations; the optimum may be 10%, rather than the traditional 5% or 1% used for controlled experiments and observations.
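The suggestion that the error rates be set equal (for example, α = β = 0.10) can be made concrete with a standard sample-size calculation; the sketch below uses the usual normal-approximation formula with illustrative effect and variability values, and is not from the paper.

```python
# A hedged sketch of the consequence of setting the Type I and Type II error
# rates equal (e.g. both at 10%): samples required per group to detect a shift
# of `delta` given standard deviation `sigma`, using the standard
# normal-approximation formula. The inputs are illustrative.
from scipy.stats import norm

def n_per_group(delta, sigma, alpha=0.10, beta=0.10, two_sided=True):
    z_alpha = norm.ppf(1 - alpha / 2) if two_sided else norm.ppf(1 - alpha)
    z_beta = norm.ppf(1 - beta)
    return 2 * ((z_alpha + z_beta) * sigma / delta) ** 2

# Detecting a one-standard-deviation change with alpha = beta = 0.10
# versus the conventional alpha = 0.05, beta = 0.20:
print(n_per_group(delta=1.0, sigma=1.0, alpha=0.10, beta=0.10))
print(n_per_group(delta=1.0, sigma=1.0, alpha=0.05, beta=0.20))
```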


1997 ◽  
Vol 07 (05) ◽  
pp. 433-440 ◽  
Author(s):  
Woo Kyu Lee ◽  
Jae Ho Chung

In this paper, a fingerprint recognition algorithm is suggested. The algorithm is developed based on the wavelet transform and the dominant local orientation, which is derived from the coherence and the gradient of a Gaussian. By using the wavelet transform, the algorithm does not require conventional preprocessing procedures such as smoothing, binarization, thinning and restoration. Computer simulation results show that when the rate of Type II error (incorrect recognition of two different fingerprints as identical fingerprints) is held at 0.0%, the rate of Type I error (incorrect recognition of two identical fingerprints as different ones) turns out to be 2.5% in real time.
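The sketch below illustrates only one ingredient mentioned above: estimating a block's dominant gradient orientation and coherence from a Gaussian-smoothed image via the structure tensor. The wavelet feature extraction and matching stages of the algorithm are not reproduced, and the function is a generic textbook construction rather than the authors' implementation.

```python
# Rough sketch (not the authors' algorithm): dominant gradient orientation and
# coherence of an image block, from the gradient of a Gaussian-smoothed block.
# Ridges run roughly perpendicular to the dominant gradient direction.
import numpy as np
from scipy.ndimage import gaussian_filter

def orientation_and_coherence(block, sigma=2.0):
    smoothed = gaussian_filter(block.astype(float), sigma)
    gy, gx = np.gradient(smoothed)
    gxx, gyy, gxy = (gx * gx).sum(), (gy * gy).sum(), (gx * gy).sum()
    orientation = 0.5 * np.arctan2(2 * gxy, gxx - gyy)        # dominant gradient angle
    coherence = np.hypot(gxx - gyy, 2 * gxy) / (gxx + gyy + 1e-12)
    return orientation, coherence

# Example on a synthetic block of parallel "ridges":
x = np.linspace(0, 4 * np.pi, 32)
block = np.sin(x)[None, :] * np.ones((32, 1))
print(orientation_and_coherence(block))
```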


1994 ◽  
Vol 19 (2) ◽  
pp. 91-101 ◽  
Author(s):  
Ralph A. Alexander ◽  
Diane M. Govern

A new approximation is proposed for testing the equality of k independent means in the face of heterogeneity of variance. Monte Carlo simulations show that the new procedure has Type I error rates that are very nearly nominal and Type II error rates that are quite close to those produced by James's (1951) second-order approximation. In addition, it is computationally the simplest approximation yet to appear, and it is easily applied to Scheffé (1959)-type multiple contrasts and to the calculation of approximate tail probabilities.
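A rough way to see the effect the authors quantify is to compare empirical Type I error rates of the classical ANOVA and the Alexander-Govern procedure under unequal variances; the sketch assumes SciPy 1.7 or later, which provides the test as scipy.stats.alexandergovern, and uses illustrative group sizes and standard deviations.

```python
# Hedged sketch: empirical Type I error of the classical one-way ANOVA versus
# the Alexander-Govern procedure under heterogeneity of variance.
# Assumes SciPy >= 1.7 for scipy.stats.alexandergovern.
import numpy as np
from scipy.stats import f_oneway, alexandergovern

rng = np.random.default_rng(42)
reps, alpha = 5000, 0.05
sizes, sds = (10, 20, 30), (1.0, 2.0, 4.0)      # unequal n and unequal variance

rej_anova = rej_ag = 0
for _ in range(reps):
    groups = [rng.normal(0.0, sd, size=n) for n, sd in zip(sizes, sds)]  # equal means
    rej_anova += f_oneway(*groups).pvalue < alpha
    rej_ag += alexandergovern(*groups).pvalue < alpha

print(f"empirical Type I error, classical ANOVA:  {rej_anova / reps:.3f}")
print(f"empirical Type I error, Alexander-Govern: {rej_ag / reps:.3f}")
```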


2013 ◽  
Vol 27 (4) ◽  
pp. 693-710 ◽  
Author(s):  
Adrian Valencia ◽  
Thomas J. Smith ◽  
James Ang

SYNOPSIS Fair value accounting has been a hotly debated topic during the recent financial crisis. Supporters argue that fair values are more relevant to investors, while detractors point to the measurement error in the estimation of the reported fair values to attack its reliability. This study examines how noise in reported fair values impacts bank capital adequacy ratios. If measurement error causes reported capital levels to deviate from fundamental levels, then regulators could misidentify a financially healthy bank as troubled (type I error) or a financially troubled bank as safe (type II error), leading to suboptimal resource allocations for banks, regulators, and investors. We use a Monte Carlo simulation to generate our data, and find that while noise leads to both type I and type II errors around key Federal Deposit Insurance Corporation (FDIC) capital adequacy benchmarks, the type I error dominates. Specifically, noise is associated with 2.58 (2.60) [1.092], 5.67 (6.44) [1.94], and 10.60 (26.83) [3.423] times more type I errors than type II errors around the Tier 1 (Total) [Leverage] well-capitalized, adequately capitalized, and significantly undercapitalized benchmarks, respectively. Economically, our results suggest that noise can lead to inefficient allocation of resources on the part of regulators (increased monitoring costs) and banks (increased compliance costs). JEL Classifications: D52; M41; C15; G21.
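A stripped-down illustration of the mechanism (not the authors' simulation design): add measurement noise to a fundamental capital ratio and count the two kinds of misclassification around a benchmark. The 10% benchmark, the distribution of fundamentals, and the noise level are all assumed values for illustration only.

```python
# Illustrative sketch: noise in reported capital ratios produces both Type I
# errors (a truly healthy bank reported below the benchmark) and Type II
# errors (a truly troubled bank reported above it). All numbers are assumed.
import numpy as np

rng = np.random.default_rng(7)
n_banks, benchmark, noise_sd = 100_000, 0.10, 0.01

true_ratio = rng.normal(loc=0.12, scale=0.02, size=n_banks)        # fundamental ratios
reported_ratio = true_ratio + rng.normal(0.0, noise_sd, size=n_banks)

type1 = np.sum((true_ratio >= benchmark) & (reported_ratio < benchmark))   # healthy flagged as troubled
type2 = np.sum((true_ratio < benchmark) & (reported_ratio >= benchmark))   # troubled reported as safe

print(f"Type I errors: {type1}, Type II errors: {type2}, ratio: {type1 / type2:.2f}")
```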


Author(s):  
E. M. Farhadzadeh ◽  
A. Z. Muradaliyev ◽  
Yu. Z. Farzaliyev ◽  
T. K. Rafiyeva ◽  
S. A. Abdullayeva

Improving the reliability of decisions taken in organising the maintenance and repair of electric power systems is one of the most important and difficult problems. It is important because erroneous decisions lead, first of all, to an increase in operating costs. The difficulty in solving this problem is associated with the lack of appropriate methods for reducing the risk of erroneous decisions. The article addresses one aspect of this problem, namely improving the reliability of decisions about the nature of the relationship between technical and economic indicators of electric power systems. Traditionally, the reliability of the decision is increased by reducing the Type I error, which is usually set equal to 5%, occasionally to 1%, and in research even to 0.5%. The corresponding critical values of the correlation coefficients are given in mathematical reference books. This approach implicitly assumes that the consequences of a Type I error significantly exceed the consequences of a Type II error and that the distribution of the correlation coefficients follows the normal law. Therefore, the risk of an erroneous decision that no significant statistical relation exists is not controlled. Even if one wished to estimate the Type II error, doing so is practically impossible, because there are no critical values for correlation coefficients of dependent samples. No less relevant is the problem of deciding on the statistical relationship between technical and economic indicators when the consequences of the two kinds of erroneous decision are equal, i.e. when both the Type I and the Type II error must be taken into account. To overcome these difficulties, a new method for estimating the critical values of correlation coefficients has been developed. Its novelty consists in the application of a fiducial approach; the critical values are calculated using computer simulation of possible realisations of the correlation coefficients under two assumptions, namely that the technical and economic indicators are independent or dependent; the simulation solves an "inverse problem", which makes it possible to generate realisations of the correlation coefficients for genuinely dependent and independent samples of random variables at a given sample size; the developed algorithms and programs made it possible to obtain critical values of the correlation coefficients for both independent and dependent samples; and, when the consequences of erroneous decisions are equal, it is proposed to base the decision not on the critical value but on the boundary values of the correlation coefficients that correspond to the minimum total risk of erroneous decisions. The recommendations are illustrated with the technical and economic parameters of the boilers of 300 MW power units. The significant impact of interrelated technical and economic indicators on the ranking of boiler plants by the reliability and efficiency of their operation is demonstrated.
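A simplified sketch of the simulation idea (not the authors' fiducial method): estimate a critical value of the sample correlation coefficient for independent samples by Monte Carlo, then the empirical Type II error at an assumed true correlation; the dependent-sample "inverse problem" construction described above is not reproduced.

```python
# Simplified sketch of simulating critical values for correlation coefficients.
# The sample size, number of replications, and true correlation are assumed.
import numpy as np

rng = np.random.default_rng(3)
n, reps, alpha = 30, 20000, 0.05

# Null case: independent samples -> distribution of |r| under independence.
r_null = np.array([
    abs(np.corrcoef(rng.normal(size=n), rng.normal(size=n))[0, 1])
    for _ in range(reps)
])
r_crit = np.quantile(r_null, 1 - alpha)          # empirical critical value

# Alternative case: an assumed true correlation rho -> empirical Type II error.
rho = 0.4
cov = [[1.0, rho], [rho, 1.0]]
r_alt = np.array([
    abs(np.corrcoef(*rng.multivariate_normal([0, 0], cov, size=n).T)[0, 1])
    for _ in range(reps)
])
print(f"critical |r| at alpha={alpha}: {r_crit:.3f}")
print(f"empirical Type II error at rho={rho}: {np.mean(r_alt < r_crit):.3f}")
```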


2020 ◽  
pp. 455
Author(s):  
Daniel Walters

Recent years have seen the rise of pointed and influential critiques of deference doctrines in administrative law. What many of these critiques have in common is a view that judges, not agencies, should resolve interpretive disputes over the meaning of statutes—disputes the critics take to be purely legal and almost always resolvable using lawyerly tools of statutory construction. In this Article, I take these critiques, and the relatively formalist assumptions behind them, seriously and show that the critics have not acknowledged or advocated the full reform vision implied by their theoretical premises. Specifically, critics have extended their critique of judicial abdication only to what I call Type I statutory errors (that is, agency interpretations that regulate more conduct than the best reading of the statute would allow the agency to regulate) and do not appear to accept or anticipate that their theory of interpretation would also extend to what I call Type II statutory errors (that is, agency failures to regulate as much conduct as the best reading of the statute would require). As a consequence, critics have been more than willing to entertain an end to Chevron deference, an administrative law doctrine that is mostly invoked to justify Type I error, but have not shown any interest in adjusting administrative law doctrine to remedy agencies’ commission of Type II error. The result is a vision of administrative law’s future that is precariously slanted against legislative and regulatory action. I critique this asymmetry in administrative law and address potential justifications of systemic asymmetries in the doctrine, such as concern about the remedial implications of addressing Type II error, finding them all wanting from a legal and theoretical perspective. I also lay out the positive case for adhering to symmetry in administrative law doctrine. In a time of deep political conflict over regulation and administration, symmetry plays, or at the very least could play, an important role in depoliticizing administrative law, clarifying what is at stake in debates about the proper level of deference to agency legal interpretations, and disciplining partisan gamesmanship. I suggest that when the conversation is so disciplined, an administrative law without deference to both Type I and Type II error is hard to imagine due to the high judicial costs of minimizing Type II error, but if we collectively choose to discard deference notwithstanding these costs, it would be a more sustainable political choice for administrative law than embracing the current, one-sided critique of deference.

