Auditor Size and Going Concern Reporting

2018 ◽  
Vol 37 (2) ◽  
pp. 1-25 ◽  
Author(s):  
Nathan R. Berglund ◽  
John Daniel Eshleman ◽  
Peng Guo

SUMMARY Auditing theory predicts that larger auditors will be more likely to issue a going concern opinion to a distressed client. However, the existing empirical evidence on this issue is mixed. We attribute these mixed results to a failure to adequately control for clients' financial health. We demonstrate how properly controlling for clients' financial health reveals a positive relationship between auditor size and the propensity to issue a going concern opinion. We corroborate our findings by replicating a related study and showing how the results change when financial health variables are added to the model. In supplemental analysis, we find that Big 4 auditors are more likely than mid-tier auditors (Grant Thornton and BDO Seidman) to issue going concern opinions to distressed clients. We also find that, compared to other auditors, the Big 4 are less likely to issue false-positive (Type I error) going concern opinions. We find no evidence that the Big 4 are more or less likely to fail to issue a going concern opinion to a client that eventually files for bankruptcy (Type II error). Our results are robust to the use of a variety of matching techniques. JEL Classifications: M41; M42.
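The paper's methodological point is that the auditor-size effect only emerges once clients' financial health is held constant. The sketch below is a hypothetical simulation of that confounding logic (invented variable names and coefficients, not the authors' data or specification): Big 4 clients are simulated to be healthier, which masks a positive Big 4 effect on going-concern opinions until a health control is added.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical client-year data: Big 4 auditors tend to hold healthier clients,
# which can mask a positive big4 effect on going-concern (gc) opinions until
# financial health (here a single zscore proxy) is controlled for.
rng = np.random.default_rng(0)
n = 4000
big4 = rng.binomial(1, 0.6, n)
zscore = rng.normal(1.0 + 0.8 * big4, 1.0, n)    # Big 4 clients are healthier
logit_p = -1.0 + 0.5 * big4 - 1.2 * zscore       # true big4 effect is positive
gc = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit_p)))
df = pd.DataFrame({"gc": gc, "big4": big4, "zscore": zscore})

naive = smf.logit("gc ~ big4", data=df).fit(disp=0)
controlled = smf.logit("gc ~ big4 + zscore", data=df).fit(disp=0)

# Without the health control the big4 coefficient is biased downward;
# with it, the positive relationship reappears.
print(naive.params["big4"], controlled.params["big4"])
```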

2020 ◽  
Vol 39 (3) ◽  
pp. 185-208
Author(s):  
Qiao Xu ◽  
Rachana Kalelkar

SUMMARY This paper examines whether inaccurate going-concern opinions negatively affect the audit office's reputation. Assuming that clients perceive the incidence of going-concern opinion errors as a systematic audit quality concern within the entire audit office, we expect these inaccuracies to impact the audit office market share and dismissal rate. We find that going-concern opinion inaccuracy is negatively associated with the audit office market share and is positively associated with the audit office dismissal rate. Furthermore, we find that the decline in market share and the increase in dismissal rate are primarily associated with Type I errors. Additional analyses reveal that the negative consequence of going-concern opinion inaccuracy is lower for Big 4 audit offices. Finally, we find that the decrease in the audit office market share is explained by the distressed clients' reactions to Type I errors and audit offices' lack of ability to attract new clients.


1999 ◽  
Vol 18 (1) ◽  
pp. 37-54 ◽  
Author(s):  
Andrew J. Rosman ◽  
Inshik Seol ◽  
Stanley F. Biggs

The effect of different task settings within an industry on auditor behavior is examined for the going-concern task. Using an interactive computer process-tracing method, experienced auditors from four Big 6 accounting firms examined cases based on real data that differed on two dimensions of task settings: stage of organizational development (start-up and mature) and financial health (bankrupt and nonbankrupt). Auditors made judgments about each entity's ability to continue as a going concern and, if they had substantial doubt about continued existence, they listed evidence they would seek as mitigating factors. There are seven principal results. First, information acquisition and, by inference, problem representations were sensitive to differences in task settings. Second, financial mitigating factors dominated nonfinancial mitigating factors in both start-up and mature settings. Third, auditors' behavior reflected configural processing. Fourth, categorizing information into financial and nonfinancial dimensions was critical to understanding how auditors' information acquisition and, by inference, problem representations differed across settings. Fifth, Type I errors (determining that a healthy company is a going-concern problem) differed from correct judgments in terms of information acquisition, although Type II errors (determining that a problem company is viable) did not. This may indicate that Type II errors are primarily due to deficiencies in other stages of processing, such as evaluation. Sixth, auditors who were more accurate tended to follow flexible strategies for financial information acquisition. Finally, accurate performance in the going-concern task was found to be related to acquiring (1) fewer information cues, (2) proportionately more liquidity information and (3) nonfinancial information earlier in the process.


2005 ◽  
Vol 7 (1) ◽  
pp. 41 ◽  
Author(s):  
Mohamad Iwan

This research examines financial ratios that distinguish between bankrupt and non-bankrupt companies and uses those distinguishing ratios to build a prediction model for bankruptcy one year ahead. The research also estimates how many times more costly a type I error is than a type II error. The costs of type I and type II errors (costs of misclassification), together with the prior probabilities of bankruptcy and non-bankruptcy, are used to calculate the ZETAc optimal cut-off score. The bankruptcy predictions obtained with the ZETAc optimal cut-off score are compared with predictions obtained with a cut-off score that considers neither the costs of classification errors nor the prior probabilities, as described by Hair et al. (1998), hereafter referred to as the Hair et al. optimum cutting score. The two sets of predictions are compared to determine which cut-off score yields the more conservative prediction and minimizes the expected costs arising from classification errors. This is the first research in Indonesia to incorporate type I and type II errors and the prior probabilities of bankruptcy and non-bankruptcy into the computation of the cut-off score used for bankruptcy prediction. Earlier studies gave equal weight to type I and type II errors and to the prior probabilities of bankruptcy and non-bankruptcy, whereas this research weights the type I error more heavily than the type II error and the prior probability of non-bankruptcy more heavily than the prior probability of bankruptcy. The research attains the following results: (1) a type I error is in fact 59.83 times more costly than a type II error, (2) 22 ratios distinguish between the bankrupt and non-bankrupt groups, (3) 2 financial ratios prove effective in predicting bankruptcy, (4) prediction using the ZETAc optimal cut-off score identifies more companies filing for bankruptcy within one year than prediction using the Hair et al. optimum cutting score, and (5) although prediction using the Hair et al. optimum cutting score is more accurate, prediction using the ZETAc optimal cut-off score minimizes the costs incurred from classification errors.
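A cost- and prior-weighted cut-off of this kind is commonly written in the form popularized by Altman et al. (1977), ZETAc = ln(q1*cI / (q2*cII)), where q1 and q2 are the prior probabilities of bankruptcy and non-bankruptcy and cI, cII are the costs of type I and type II errors. The sketch below assumes that form; the 59.83 cost ratio comes from the abstract, while the priors and the sample score are hypothetical placeholders, not values from the study.

```python
import numpy as np

def zeta_optimal_cutoff(q_bankrupt, q_nonbankrupt, cost_type1, cost_type2):
    """ZETAc = ln(q1 * cI / (q2 * cII)): the cut-off that minimizes expected
    misclassification cost under the stated priors and error costs."""
    return np.log((q_bankrupt * cost_type1) / (q_nonbankrupt * cost_type2))

# Illustrative inputs only: the 59.83 cost ratio is reported in the abstract;
# the prior probabilities below are hypothetical placeholders.
cutoff = zeta_optimal_cutoff(q_bankrupt=0.05, q_nonbankrupt=0.95,
                             cost_type1=59.83, cost_type2=1.0)

# A firm whose discriminant (Z-type) score falls below the cut-off is
# classified as likely to file for bankruptcy within one year.
z_score = -0.4
print(f"cut-off = {cutoff:.3f}, classified as "
      f"{'bankrupt' if z_score < cutoff else 'non-bankrupt'}")
```

Because the type I cost dominates, the cut-off shifts so that more firms are flagged as bankrupt, which is why the ZETAc rule is the more conservative of the two compared in the paper.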


Methodology ◽  
2010 ◽  
Vol 6 (4) ◽  
pp. 147-151 ◽  
Author(s):  
Emanuel Schmider ◽  
Matthias Ziegler ◽  
Erik Danay ◽  
Luzi Beyer ◽  
Markus Bühner

Empirical evidence on the robustness of the analysis of variance (ANOVA) to violations of the normality assumption is presented by means of Monte Carlo methods. High-quality samples from normally, rectangularly, and exponentially distributed basic populations are created by drawing random numbers from the respective generators, checking their goodness of fit, and allowing only the best 10% to take part in the investigation. A one-way fixed-effect design with three groups of 25 values each is chosen. Effect sizes are implemented in the samples and varied over a broad range. Comparing the outcomes of the ANOVA calculations for the different types of distributions gives reason to regard the ANOVA as robust: both the empirical type I error α and the empirical type II error β remain constant under violation. Moreover, regression analysis identifies the factor "type of distribution" as not significant in explaining the ANOVA results.
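The following minimal sketch illustrates the flavor of such a Monte Carlo check, assuming a one-way design with three groups of 25 and null-true populations scaled to unit variance; it omits the goodness-of-fit screening of samples and the effect-size manipulation reported in the study.

```python
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(0)
n, groups, reps, alpha = 25, 3, 5000, 0.05

# Null-true populations with matched mean and variance; "rectangular" = uniform.
distributions = {
    "normal":      lambda size: rng.normal(0.0, 1.0, size),
    "rectangular": lambda size: rng.uniform(-np.sqrt(3), np.sqrt(3), size),
    "exponential": lambda size: rng.exponential(1.0, size) - 1.0,
}

for name, draw in distributions.items():
    rejections = 0
    for _ in range(reps):
        samples = [draw(n) for _ in range(groups)]
        _, p = f_oneway(*samples)          # one-way fixed-effect ANOVA
        rejections += p < alpha
    print(f"{name:12s} empirical type I error ≈ {rejections / reps:.3f}")
```

If the ANOVA is robust in the sense of the paper, the empirical type I error stays near the nominal 0.05 for all three parent distributions.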


1996 ◽  
Vol 1 (1) ◽  
pp. 25-28 ◽  
Author(s):  
Martin A. Weinstock

Background: Accurate understanding of certain basic statistical terms and principles is key to critical appraisal of published literature. Objective: This review describes type I error, type II error, null hypothesis, p value, statistical significance, α, two-tailed and one-tailed tests, effect size, alternate hypothesis, statistical power, β, publication bias, confidence interval, standard error, and standard deviation, while including examples from reports of dermatologic studies. Conclusion: The application of the results of published studies to individual patients should be informed by an understanding of certain basic statistical concepts.
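As a brief numerical illustration of how these quantities relate (a hypothetical one-sided z-test, not an example from the review): fixing α sets the critical value, the critical value and the assumed true effect determine β, and power is 1 - β.

```python
from scipy.stats import norm

# Hypothetical example: one-sided z-test of H0: mu = 0 vs H1: mu = 0.5,
# known sigma = 1, sample size n = 30, significance level alpha = 0.05.
alpha, mu1, sigma, n = 0.05, 0.5, 1.0, 30
se = sigma / n ** 0.5

crit = norm.ppf(1 - alpha)        # critical z value; P(type I error) = alpha
beta = norm.cdf(crit - mu1 / se)  # P(type II error) if the true mean is mu1
power = 1 - beta                  # probability of detecting the true effect

print(f"alpha = {alpha:.2f}, beta = {beta:.3f}, power = {power:.3f}")
```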


1997 ◽  
Vol 07 (05) ◽  
pp. 433-440 ◽  
Author(s):  
Woo Kyu Lee ◽  
Jae Ho Chung

In this paper, a fingerprint recognition algorithm is proposed. The algorithm is based on the wavelet transform and on the dominant local orientation, which is derived from the coherence and the gradient of Gaussian. By using the wavelet transform, the algorithm does not require conventional preprocessing procedures such as smoothing, binarization, thinning, and restoration. Computer simulation results show that when the rate of Type II error (incorrect recognition of two different fingerprints as identical) is held at 0.0%, the rate of Type I error (incorrect recognition of two identical fingerprints as different) turns out to be 2.5% in real time.
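A common way to obtain a dominant local orientation and coherence map from Gaussian gradients is the structure-tensor formulation sketched below; this is a standard formulation under assumed smoothing parameters, not necessarily the exact variant used by Lee and Chung.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def dominant_orientation(image, grad_sigma=1.0, block_sigma=5.0):
    """Per-pixel dominant local orientation and coherence from
    Gaussian-smoothed gradients (structure-tensor formulation)."""
    # Gradient of Gaussian: derivative-of-Gaussian filters along x and y.
    gx = gaussian_filter(image, grad_sigma, order=(0, 1))
    gy = gaussian_filter(image, grad_sigma, order=(1, 0))

    # Block-averaged products of gradients (the structure-tensor entries).
    gxx = gaussian_filter(gx * gx, block_sigma)
    gyy = gaussian_filter(gy * gy, block_sigma)
    gxy = gaussian_filter(gx * gy, block_sigma)

    # Ridge orientation is perpendicular to the dominant gradient direction.
    orientation = 0.5 * np.arctan2(2.0 * gxy, gxx - gyy) + np.pi / 2.0
    coherence = np.sqrt((gxx - gyy) ** 2 + 4.0 * gxy ** 2) / (gxx + gyy + 1e-12)
    return orientation, coherence
```

Coherence is near 1 where ridges flow in a single direction and near 0 in noisy or isotropic regions, which is why it is useful for weighting orientation estimates.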


1994 ◽  
Vol 19 (2) ◽  
pp. 91-101 ◽  
Author(s):  
Ralph A. Alexander ◽  
Diane M. Govern

A new approximation is proposed for testing the equality of k independent means in the face of heterogeneity of variance. Monte Carlo simulations show that the new procedure has Type I error rates that are very nearly nominal and Type II error rates that are quite close to those produced by James’s (1951) second-order approximation. In addition, it is computationally the simplest approximation yet to appear, and it is easily applied to Scheffé (1959)-type multiple contrasts and to the calculation of approximate tail probabilities.
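SciPy ships an implementation of this procedure as scipy.stats.alexandergovern (SciPy 1.7 or later), which makes a quick comparison with classical ANOVA under unequal variances straightforward; the data below are simulated purely for illustration.

```python
import numpy as np
from scipy.stats import alexandergovern, f_oneway

rng = np.random.default_rng(1)

# Three groups with equal means but strongly unequal variances:
# the heteroscedastic setting the approximation is designed for.
a = rng.normal(10.0, 1.0, 30)
b = rng.normal(10.0, 5.0, 20)
c = rng.normal(10.0, 10.0, 15)

print(alexandergovern(a, b, c))   # approximation robust to unequal variances
print(f_oneway(a, b, c))          # classical ANOVA, assumes equal variances
```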


2020 ◽  
pp. 455
Author(s):  
Daniel Walters

Recent years have seen the rise of pointed and influential critiques of deference doctrines in administrative law. What many of these critiques have in common is a view that judges, not agencies, should resolve interpretive disputes over the meaning of statutes—disputes the critics take to be purely legal and almost always resolvable using lawyerly tools of statutory construction. In this Article, I take these critiques, and the relatively formalist assumptions behind them, seriously and show that the critics have not acknowledged or advocated the full reform vision implied by their theoretical premises. Specifically, critics have extended their critique of judicial abdication only to what I call Type I statutory errors (that is, agency interpretations that regulate more conduct than the best reading of the statute would allow the agency to regulate) and do not appear to accept or anticipate that their theory of interpretation would also extend to what I call Type II statutory errors (that is, agency failures to regulate as much conduct as the best reading of the statute would require). As a consequence, critics have been more than willing to entertain an end to Chevron deference, an administrative law doctrine that is mostly invoked to justify Type I error, but have not shown any interest in adjusting administrative law doctrine to remedy agencies’ commission of Type II error. The result is a vision of administrative law’s future that is precariously slanted against legislative and regulatory action. I critique this asymmetry in administrative law and address potential justifications of systemic asymmetries in the doctrine, such as concern about the remedial implications of addressing Type II error, finding them all wanting from a legal and theoretical perspective. I also lay out the positive case for adhering to symmetry in administrative law doctrine. In a time of deep political conflict over regulation and administration, symmetry plays, or at the very least could play, an important role in depoliticizing administrative law, clarifying what is at stake in debates about the proper level of deference to agency legal interpretations, and disciplining partisan gamesmanship. I suggest that when the conversation is so disciplined, an administrative law without deference to both Type I and Type II error is hard to imagine due to the high judicial costs of minimizing Type II error, but if we collectively choose to discard deference notwithstanding these costs, it would be a more sustainable political choice for administrative law than embracing the current, one-sided critique of deference.


Author(s):  
S. M. Ayazi ◽  
M. Saadat Seresht

Abstract. Today, a variety of methods have been proposed by researchers to distinguish ground from non-ground points in point cloud data. Most fully automated methods share a common disadvantage: the algorithm does not respond properly to all areas and all levels of the ground, so most of these algorithms produce good results in simple landscapes but encounter problems in complex ones. Point cloud filtering techniques can be divided into two general categories: rule-based and novel methods. The use of machine learning techniques has improved classification results, especially when labelled training data are available. In this paper, altimetric and radiometric features are first extracted from the LiDAR data and from the point cloud derived from digital photogrammetry. These features are then used in a classification process based on SVM learning and random forest methods, and the points are classified as ground or non-ground. The classification results of this method on the LiDAR data show a total error of 6.2%, a type I error of 5.4%, and a type II error of 13.2%. The comparison of the proposed method with the results of LASTools software shows a reduction in the total error and the type I error (while increasing the type II error). The method was also tested on the dense point cloud obtained from digital photogrammetry; for that data, the total error was 7.2%, the type I error 6.8%, and the type II error 10.9%.
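A hedged sketch of such a workflow is given below, with simulated features and labels standing in for real per-point attributes (it is not the authors' pipeline). It trains scikit-learn's random forest and SVM classifiers and reports the total error, type I error (ground points rejected as non-ground), and type II error (non-ground points accepted as ground).

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

# Hypothetical per-point features (e.g., height above local minimum, intensity,
# roughness); labels: 1 = ground, 0 = non-ground. Random data for illustration.
rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 6))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=5000) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

for clf in (RandomForestClassifier(n_estimators=200, random_state=0),
            SVC(kernel="rbf", C=1.0, gamma="scale")):
    pred = clf.fit(X_tr, y_tr).predict(X_te)
    ground, nonground = (y_te == 1), (y_te == 0)
    type1 = np.mean(pred[ground] == 0)      # ground rejected as non-ground
    type2 = np.mean(pred[nonground] == 1)   # non-ground accepted as ground
    total = np.mean(pred != y_te)
    print(type(clf).__name__,
          f"total={total:.3f} typeI={type1:.3f} typeII={type2:.3f}")
```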

