There's No Place Like Home: The Influence of Home-State Going-Concern Reporting Rates on Going-Concern Opinion Propensity and Accuracy

2015 ◽  
Vol 35 (2) ◽  
pp. 23-51 ◽  
Author(s):  
Allen D. Blay ◽  
James R. Moon ◽  
Jeffrey S. Paterson

SUMMARY Prior research has had success identifying client financial characteristics that influence auditors' going-concern reporting decisions. In contrast, relatively little research has addressed whether auditors' circumstances and surroundings influence their propensities to issue modified opinions. We investigate whether auditors' decisions to issue GC opinions are affected by the rate of GC opinions being issued in their proximate area. Controlling for factors that prior research associates with going-concern opinions and for state-level economics, we find that non-Big 4 auditors located in states with relatively high first-time going-concern rates in the prior year are up to 6 percent more likely to issue first-time going-concern opinions. The results from our state-based GC measure cast doubt on the explanation that this increased propensity is driven by economic factors and suggest that psychological factors may explain this behavior among auditors. Interestingly, this higher propensity increases auditors' Type I error rates without decreasing their Type II error rates, further suggesting that economics alone does not explain these results. Such evidence challenges the generally accepted notion that a higher propensity to issue a going-concern opinion always reflects higher audit quality. JEL Classifications: M41; M42. Data Availability: All data are available from public sources.
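The Type I and Type II error rates discussed in this literature follow a standard convention: a Type I error is a going-concern opinion issued to a client that subsequently survives, and a Type II error is a clean opinion issued to a client that subsequently fails. A minimal sketch of that calculation, on entirely hypothetical data:

```python
# Type I / Type II going-concern (GC) error rates as conventionally
# defined in the audit literature. The sample data are hypothetical.

def gc_error_rates(observations):
    """observations: list of (gc_issued: bool, client_failed: bool)."""
    survived = [o for o in observations if not o[1]]
    failed = [o for o in observations if o[1]]
    # Type I: GC opinion issued, but the client survived.
    type1 = sum(1 for gc, _ in survived if gc) / len(survived)
    # Type II: no GC opinion, but the client failed.
    type2 = sum(1 for gc, _ in failed if not gc) / len(failed)
    return type1, type2

sample = [(True, False), (False, False), (False, False),
          (True, True), (False, True)]
t1, t2 = gc_error_rates(sample)  # t1 = 1/3, t2 = 1/2
```

The paper's point can be read directly off this definition: issuing GC opinions more freely raises the numerator of the Type I rate and can only help accuracy if it also lowers the Type II rate.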

2017 ◽  
Vol 36 (3) ◽  
pp. 115-135 ◽  
Author(s):  
Sarowar Hossain ◽  
Kenichi Yazawa ◽  
Gary S. Monroe

SUMMARY Using Japanese data, we investigate whether audit team composition, measured by the number of senior auditors, assistant auditors, and other professional staff on the audit team, is positively associated with audit fees and with a variety of commonly used measures of audit quality (the likelihood of issuing a going concern opinion and a first-time going concern opinion for a sample of financially distressed companies, and the absolute values of discretionary and working capital accruals). We find that the numbers of senior auditors, assistant auditors, and other professional staff on the audit team are all positively associated with audit fees. We also find that the number of senior auditors on the audit team is positively associated with audit quality. However, the numbers of assistant auditors and other professional staff on the audit team are not significantly associated with any of our audit quality measures. JEL Classifications: M41; M42. Data Availability: All data are publicly available from the sources indicated in the paper.


2020 ◽  
Vol 39 (3) ◽  
pp. 185-208
Author(s):  
Qiao Xu ◽  
Rachana Kalelkar

SUMMARY This paper examines whether inaccurate going-concern opinions negatively affect the audit office's reputation. Assuming that clients perceive the incidence of going-concern opinion errors as a systematic audit quality concern within the entire audit office, we expect these inaccuracies to impact the audit office market share and dismissal rate. We find that going-concern opinion inaccuracy is negatively associated with the audit office market share and is positively associated with the audit office dismissal rate. Furthermore, we find that the decline in market share and the increase in dismissal rate are primarily associated with Type I errors. Additional analyses reveal that the negative consequence of going-concern opinion inaccuracy is lower for Big 4 audit offices. Finally, we find that the decrease in the audit office market share is explained by the distressed clients' reactions to Type I errors and audit offices' lack of ability to attract new clients.


2021 ◽  
Vol 21 (1) ◽  
Author(s):  
Beth Ann Griffin ◽  
Megan S. Schuler ◽  
Elizabeth A. Stuart ◽  
Stephen Patrick ◽  
Elizabeth McNeer ◽  
...  

Abstract Background Reliable evaluations of state-level policies are essential for identifying effective policies and informing policymakers’ decisions. State-level policy evaluations commonly use a difference-in-differences (DID) study design; yet within this framework, statistical model specification varies notably across studies. More guidance is needed about which statistical models perform best when estimating how state-level policies affect outcomes. Methods Motivated by applied state-level opioid policy evaluations, we implemented an extensive simulation study to compare the statistical performance of multiple variations of the two-way fixed effects models traditionally used for DID under a range of simulation conditions. We also explored the performance of autoregressive (AR) and GEE models. We simulated policy effects on annual state-level opioid mortality rates and assessed statistical performance using various metrics, including directional bias, magnitude bias, and root mean squared error. We also reported Type I error rates and the rate of correctly rejecting the null hypothesis (i.e., power), given the prevalence of frequentist null hypothesis significance testing in the applied literature. Results Most linear models resulted in minimal bias. However, non-linear models and population-weighted versions of the classic linear two-way fixed effects and linear GEE models yielded considerable bias (60% to 160%). Further, root mean squared error was minimized by linear AR models when we examined crude mortality rates and by negative binomial models when we examined raw death counts. In the context of frequentist hypothesis testing, many models yielded high Type I error rates and very low rates of correctly rejecting the null hypothesis (<10%), raising concerns of spurious conclusions about policy effectiveness in the opioid literature. When considering performance across models, the linear AR models were optimal in terms of directional bias, root mean squared error, Type I error, and correct rejection rates. Conclusions The findings highlight notable limitations of commonly used statistical models for DID designs, which are widely used in opioid policy studies and in state policy evaluations more broadly. In contrast, the optimal model we identified, the AR model, is rarely used in state policy evaluation. We urge applied researchers to move beyond the classic DID paradigm and adopt AR models.
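The two-way fixed-effects DID estimator that the simulation study benchmarks has a compact closed form: with unit and time fixed effects, the coefficient on the policy indicator can be computed by double-demeaning the data. The sketch below illustrates this on a small noiseless synthetic state-year panel (all numbers are made up for illustration; this is not a reproduction of the authors' simulations):

```python
# Two-way fixed-effects DID on a synthetic state-year panel.
# In a noiseless panel the within (double-demeaning) estimator recovers
# the policy effect exactly. Illustrative values only.

states, years = range(4), range(6)
tau = 2.0                                 # true policy effect
alpha = {i: 10.0 * i for i in states}     # state fixed effects
gamma = {t: 0.5 * t for t in years}       # year fixed effects
D = {(i, t): 1.0 if i < 2 and t >= 3 else 0.0
     for i in states for t in years}      # states 0-1 treated from year 3
y = {k: alpha[k[0]] + gamma[k[1]] + tau * D[k] for k in D}

def demean(x):
    """Subtract unit and time means, then add back the grand mean."""
    xi = {i: sum(x[i, t] for t in years) / len(years) for i in states}
    xt = {t: sum(x[i, t] for i in states) / len(states) for t in years}
    g = sum(x.values()) / len(x)
    return {(i, t): v - xi[i] - xt[t] + g for (i, t), v in x.items()}

Dd, yd = demean(D), demean(y)
beta = sum(Dd[k] * yd[k] for k in D) / sum(Dd[k] ** 2 for k in D)
# beta recovers tau (2.0) in this noiseless panel
```

The paper's concern is what happens to this estimator under realistic noise, non-linearity, and population weighting, where the AR alternative performed better.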


2018 ◽  
Vol 37 (2) ◽  
pp. 1-25 ◽  
Author(s):  
Nathan R. Berglund ◽  
John Daniel Eshleman ◽  
Peng Guo

SUMMARY Auditing theory predicts that larger auditors will be more likely to issue a going concern opinion to a distressed client. However, the existing empirical evidence on this issue is mixed. We attribute these mixed results to a failure to adequately control for clients' financial health. We demonstrate how properly controlling for clients' financial health reveals a positive relationship between auditor size and the propensity to issue a going concern opinion. We corroborate our findings by replicating a related study and showing how the results change when financial health variables are added to the model. In supplemental analysis, we find that Big 4 auditors are more likely than mid-tier auditors (Grant Thornton and BDO Seidman) to issue going concern opinions to distressed clients. We also find that, compared to other auditors, the Big 4 are less likely to issue false-positive (Type I error) going concern opinions. We find no evidence that the Big 4 are more or less likely to fail to issue a going concern opinion to a client that eventually files for bankruptcy (Type II error). Our results are robust to the use of a variety of matching techniques. JEL Classifications: M41; M42.


2014 ◽  
Vol 53 (05) ◽  
pp. 343-343

We have to report marginal changes in the empirical Type I error rates for the cut-offs 2/3 and 4/7 in Table 4, Table 5, and Table 6 of the paper “Influence of Selection Bias on the Test Decision – A Simulation Study” by M. Tamm, E. Cramer, L. N. Kennes, N. Heussen (Methods Inf Med 2012; 51: 138–143). In a small number of cases, the numeric representation of values in SAS resulted in incorrect categorization, due to representation error in computed differences. We corrected the simulation by applying the SAS round function in the calculation process, using the same seeds as before. For Table 4, the value for the cut-off 2/3 changes from 0.180323 to 0.153494. For Table 5, the value for the cut-off 4/7 changes from 0.144729 to 0.139626 and the value for the cut-off 2/3 changes from 0.114885 to 0.101773. For Table 6, the value for the cut-off 4/7 changes from 0.125528 to 0.122144 and the value for the cut-off 2/3 changes from 0.099488 to 0.090828. The sentence on p. 141, “E.g. for block size 4 and q = 2/3 the type I error rate is 18% (Table 4).”, has to be replaced by “E.g. for block size 4 and q = 2/3 the type I error rate is 15.3% (Table 4).”. All changes are smaller than 0.03 and do not affect the interpretation of the results or our recommendations.
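The kind of representation error described here is easy to reproduce: binary floating point cannot store 2/3 exactly, so two quantities that are equal in exact arithmetic can straddle a cut-off after computation. The snippet below is an illustrative analogue of the SAS issue, not the authors' original code:

```python
# Two expressions that are mathematically equal to 2/3 land on opposite
# sides of the cut-off in binary floating point; rounding both sides
# before comparing restores the intended categorization.

cutoff = 2 / 3
value = 1 - 1 / 3                 # mathematically equal to 2/3

naive_ok = value <= cutoff                            # False: one ulp high
rounded_ok = round(value, 10) <= round(cutoff, 10)    # True after rounding
```

This is why the correction applies the round function inside the calculation rather than changing the simulation logic itself.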


2021 ◽  
pp. 001316442199489
Author(s):  
Luyao Peng ◽  
Sandip Sinharay

Wollack et al. (2015) suggested the erasure detection index (EDI) for detecting fraudulent erasures for individual examinees. Wollack and Eckerly (2017) and Sinharay (2018) extended the index of Wollack et al. (2015) to suggest three EDIs for detecting fraudulent erasures at the aggregate or group level. This article follows up on the research of Wollack and Eckerly (2017) and Sinharay (2018) and suggests a new aggregate-level EDI by incorporating the empirical best linear unbiased predictor from the literature on linear mixed-effects models (e.g., McCulloch et al., 2008). A simulation study shows that the new EDI has greater power than the indices of Wollack and Eckerly (2017) and Sinharay (2018). In addition, the new index has satisfactory Type I error rates. A real data example is also included.
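The best linear unbiased predictor that the new index builds on has a simple closed form in the random-intercept case: the group's observed deviation from the grand mean is shrunk toward zero by a factor that grows with group size. The sketch below shows that textbook shrinkage formula with variance components assumed known (the "empirical" version plugs in estimates); it is an illustration of the general device, not the authors' index:

```python
# BLUP of a group-level random effect u_j in a random-intercept model
#   y_ij = mu + u_j + e_ij,  u_j ~ N(0, var_u),  e_ij ~ N(0, var_e).
# Variance components are treated as known here for illustration.

def blup_random_effect(group_mean, grand_mean, n_j, var_u, var_e):
    shrink = var_u / (var_u + var_e / n_j)   # shrinkage factor in [0, 1)
    return shrink * (group_mean - grand_mean)

# Larger groups are shrunk less toward zero:
small = blup_random_effect(5.0, 3.0, n_j=2, var_u=1.0, var_e=4.0)
large = blup_random_effect(5.0, 3.0, n_j=50, var_u=1.0, var_e=4.0)
# small = 2/3, large is close to the raw deviation of 2.0
```

Borrowing strength across groups in this way is what lets an aggregate-level index stabilize estimates for small groups of examinees.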


2001 ◽  
Vol 26 (1) ◽  
pp. 105-132 ◽  
Author(s):  
Douglas A. Powell ◽  
William D. Schafer

The robustness literature for the structural equation model was synthesized following the method of Harwell which employs meta-analysis as developed by Hedges and Vevea. The study focused on the explanation of empirical Type I error rates for six principal classes of estimators: two that assume multivariate normality (maximum likelihood and generalized least squares), elliptical estimators, two distribution-free estimators (asymptotic and others), and latent projection. Generally, the chi-square tests for overall model fit were found to be sensitive to non-normality and the size of the model for all estimators (with the possible exception of the elliptical estimators with respect to model size and the latent projection techniques with respect to non-normality). The asymptotic distribution-free (ADF) and latent projection techniques were also found to be sensitive to sample sizes. Distribution-free methods other than ADF showed, in general, much less sensitivity to all factors considered.


2005 ◽  
Vol 65 (1) ◽  
pp. 42-50 ◽  
Author(s):  
Christine E. Demars
