JASP for Audit: Bayesian Tools for the Auditing Practice

2019 ◽  
Author(s):  
Koen Derks ◽  
Jacques de Swart ◽  
Eric-Jan Wagenmakers ◽  
Jan Wille ◽  
Ruud Wetzels

Statistical theory is fundamental to many auditing guidelines and procedures. In order to assist auditors with the required statistical analyses, and to advocate state-of-the-art Bayesian methods, we introduce JASP for Audit (JfA). JfA is easy-to-use, free-of-charge software that automatically follows the standard audit workflow, selects the appropriate statistical analysis, interprets the results, and produces a readable report. This approach reduces the potential for statistical errors and therefore increases audit quality. In addition to the frequentist methods that currently dominate audit practice, JfA incorporates Bayesian counterparts of these methods that come with several advantages. For example, Bayesian statistics allows expert knowledge to be incorporated directly into the statistical analyses, which can decrease sample sizes and increase efficiency. In sum, JfA is designed with the auditor in mind: it guides the auditor through the statistical aspects of an audit and therefore has the potential to increase audit efficiency and quality.
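To make the sample-size argument concrete, the sketch below (a minimal, illustrative Python example, not JfA's actual implementation) computes the number of items an auditor would need to inspect under a beta-binomial attribute-sampling model, assuming a 5% performance materiality, 95% desired assurance, and zero expected misstatements. Replacing the uninformative Beta(1, 1) prior with a hypothetical informed Beta(1, 10) prior, encoding expert knowledge that historical error rates are low, lowers the required sample size.

```python
# Minimal illustrative sketch (not JfA's implementation): Bayesian attribute
# sampling under a beta-binomial model. All parameter values are assumptions.
from scipy.stats import beta

def required_sample_size(materiality=0.05, assurance=0.95,
                         prior_a=1.0, prior_b=1.0, errors=0):
    """Smallest n such that P(misstatement rate <= materiality | data) >= assurance."""
    n = errors
    while True:
        # Beta(prior_a, prior_b) prior + binomial likelihood -> beta posterior
        posterior = beta(prior_a + errors, prior_b + n - errors)
        if posterior.cdf(materiality) >= assurance:
            return n
        n += 1

# Uninformative Beta(1, 1) prior vs. a hypothetical informed Beta(1, 10) prior
# that encodes expert knowledge of historically low error rates.
print(required_sample_size(prior_a=1, prior_b=1))   # -> 58 items
print(required_sample_size(prior_a=1, prior_b=10))  # -> 49 items
```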

2011 ◽  
Vol 34 (4) ◽  
pp. 206-207 ◽  
Author(s):  
Michael D. Lee

Jones & Love (J&L) should have given more attention to Agnostic uses of Bayesian methods for the statistical analysis of models and data. Reliance on the frequentist analysis of Bayesian models has retarded their development and prevented their full evaluation. The Ecumenical integration of Bayesian statistics to analyze Bayesian models offers a better way to test their inferential and predictive capabilities.


2019 ◽  
Author(s):  
Dominique Makowski ◽  
Mattan S. Ben-Shachar ◽  
SH Annabel Chen ◽  
Daniel Lüdecke

Turmoil has engulfed psychological science. Causes and consequences of the reproducibility crisis are in dispute. In the hope of addressing some of its aspects, Bayesian methods are gaining increasing attention in psychological science. Among their advantages over the frequentist framework are the ability to describe parameters in probabilistic terms and to explicitly incorporate prior knowledge about them into the model. These issues are crucial, in particular, to the current debate about statistical significance. Bayesian methods are not necessarily the only remedy against incorrect interpretations or wrong conclusions, but there is increasing agreement that they are one of the keys to avoiding such fallacies. Nevertheless, their flexible nature is both their power and their weakness, for there is no agreement about which indices of “significance” should be computed or reported. This lack of a consensual index or guidelines, analogous to the frequentist p-value, further contributes to the opacity that many unfamiliar readers perceive in Bayesian statistics. Thus, this study describes and compares several Bayesian indices and provides an intuitive visual representation of their “behavior” in relation to common sources of variance such as sample size, magnitude of effects, and frequentist significance. The results contribute to an intuitive understanding of the values that researchers report and allow sensible recommendations to be drawn for describing Bayesian statistics, which is critical for the standardization of scientific reporting.
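As a concrete illustration of two of the indices compared (the probability of direction and the percentage of the posterior inside a region of practical equivalence), the hedged Python sketch below computes them directly from posterior draws; the mock posterior and the ±0.1 ROPE bounds are assumptions made purely for illustration.

```python
# Hypothetical sketch: two Bayesian indices computed from posterior draws.
import numpy as np

rng = np.random.default_rng(2019)
posterior = rng.normal(loc=0.3, scale=0.15, size=10_000)  # mock posterior draws

# Probability of direction (pd): share of the posterior sharing the median's sign.
pd = np.mean(posterior > 0) if np.median(posterior) > 0 else np.mean(posterior < 0)

# ROPE percentage: share of the posterior inside an assumed null region of ±0.1.
in_rope = np.mean((posterior > -0.1) & (posterior < 0.1))

print(f"pd = {pd:.3f}, % in ROPE = {100 * in_rope:.1f}%")
```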


2014 ◽  
Vol 9 (2) ◽  
pp. 153-163
Author(s):  
Piotr Bednarek

The paper reports the major results of a study of performance measurement of internal auditing in various organizations operating in Poland in 2013 and its prospects for development. The research implies that many internal audit departments do not formally measure performance, while others measure it only informally. In many cases, satisfaction indicators of key internal audit stakeholders are not identified, information on performance is not reported to anyone apart from the internal audit staff, and the information is not used for continuous improvement. The performance measures most often used in practice focus on the effectiveness of audit processes and the impact of internal audit services on organizational performance. Internal audit efficiency and output are also measured, whereas quality measures are less common. Stakeholders, scope, and usage of performance measurement are related to various organizational characteristics. Many respondents declared that in the future they will introduce and formalize performance measurement and use it to improve internal audit performance. Future performance measurement will focus more on internal audit quality and value added.


1986 ◽  
Author(s):  
Simon S. Kim ◽  
Mary Lou Maher ◽  
Raymond E. Levitt ◽  
Martin F. Rooney ◽  
Thomas J. Siller

1992 ◽  
Vol 25 (4-5) ◽  
pp. 399-400 ◽  
Author(s):  
L. Cingolani ◽  
M. Cossignani ◽  
R. Miliani

Statistical analyses were applied to data from a series of 38 samples collected in an aerobic treatment plant from November 1989 to December 1990. Relationships between microfauna structure and plant operating conditions were found. The abundance and composition of the microfauna groups and species found in the activated sludge proved useful in suggesting possible causes of dysfunctions.


2021 ◽  
Vol 15 (3) ◽  
pp. 1-35
Author(s):  
Muhammad Anis Uddin Nasir ◽  
Cigdem Aslay ◽  
Gianmarco De Francisci Morales ◽  
Matteo Riondato

“Perhaps he could dance first and think afterwards, if it isn’t too much to ask him.” S. Beckett, Waiting for Godot. Given a labeled graph, the collection of k-vertex induced connected subgraph patterns that appear in the graph more frequently than a user-specified minimum threshold provides a compact summary of the characteristics of the graph, and finds applications ranging from biology to network science. However, finding these patterns is challenging, even more so for dynamic graphs that evolve over time, due to the streaming nature of the input and the exponential time complexity of the problem. We study this task in both incremental and fully-dynamic streaming settings, where arbitrary edges can be added or removed from the graph. We present TipTap, a suite of algorithms to compute high-quality approximations of the frequent k-vertex subgraphs w.r.t. a given threshold, at any time (i.e., point of the stream), with high probability. In contrast to existing state-of-the-art solutions that require iterating over the entire set of subgraphs in the vicinity of the updated edge, TipTap operates by efficiently maintaining a uniform sample of connected k-vertex subgraphs, thanks to an optimized neighborhood-exploration procedure. We provide a theoretical analysis of the proposed algorithms in terms of their unbiasedness and of the sample size needed to obtain a desired approximation quality. Our analysis relies on sample-complexity bounds that use the Vapnik–Chervonenkis dimension, a key concept from statistical learning theory, which allows us to derive a sufficient sample size that is independent of the size of the graph. The results of our empirical evaluation demonstrate that TipTap returns high-quality results more efficiently and accurately than existing baselines.
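The following Python sketch is not TipTap itself; it is a minimal classical reservoir-sampling routine that illustrates the underlying idea of maintaining a uniform sample over a stream of insertions. TipTap extends this idea to connected k-vertex subgraphs under both edge additions and removals, which this toy example does not handle.

```python
# Not TipTap: a classical reservoir-sampling toy that maintains a uniform
# sample of k items over an insertion-only stream (no removals handled).
import random

def reservoir_sample(stream, k, seed=42):
    """Return a uniform random sample of k items from the stream."""
    rng = random.Random(seed)
    sample = []
    for i, item in enumerate(stream):
        if i < k:
            sample.append(item)            # fill the reservoir first
        else:
            j = rng.randint(0, i)          # keep each new item with prob k/(i+1)
            if j < k:
                sample[j] = item
    return sample

print(reservoir_sample(range(1_000), k=5))
```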


2021 ◽  
Vol 66 ◽  
pp. 126762
Author(s):  
Emma Shardlow ◽  
Caroline Linhart ◽  
Sameerah Connor ◽  
Erin Softely ◽  
Christopher Exley

Mathematics ◽  
2021 ◽  
Vol 9 (6) ◽  
pp. 603
Author(s):  
Leonid Hanin

I uncover previously underappreciated systematic sources of false and irreproducible results in the natural, biomedical, and social sciences that are rooted in statistical methodology. They include the inevitably occurring deviations from the basic assumptions behind statistical analyses and the use of various approximations. I show through a number of examples that (a) arbitrarily small deviations from distributional homogeneity can lead to arbitrarily large deviations in the outcomes of statistical analyses; (b) samples of random size may violate the Law of Large Numbers and are thus generally unsuitable for conventional statistical inference; (c) the same is true, in particular, when random sample size and observations are stochastically dependent; and (d) the use of the Gaussian approximation based on the Central Limit Theorem has dramatic implications for p-values and statistical significance, essentially making the pursuit of small significance levels and p-values for a fixed sample size meaningless. The latter is proven rigorously in the case of the one-sided Z test. This article could serve as cautionary guidance to scientists and practitioners employing statistical methods in their work.
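As a hedged illustration of point (d), the short Python simulation below compares, for a fixed sample size, the exact binomial tail probability with the p-value produced by the one-sided Z test under the normal (CLT) approximation; the specific parameter values are assumptions chosen for illustration only and are not taken from the article.

```python
# Illustrative simulation of point (d), with assumed parameter values:
# exact binomial tail probability vs. the one-sided Z-test p-value obtained
# from the normal (CLT) approximation, for a fixed sample size n.
from scipy.stats import binom, norm

n, p0 = 100, 0.5                  # sample size and null success probability
for successes in (60, 65, 70):
    exact = binom.sf(successes - 1, n, p0)                 # P(X >= successes)
    z = (successes - n * p0) / (n * p0 * (1 - p0)) ** 0.5
    approx = norm.sf(z)                                    # normal-approximation p-value
    print(f"k={successes}: exact={exact:.2e}, normal approx={approx:.2e}")
```

In the far tail the two p-values diverge noticeably, which is the sense in which chasing very small p-values at a fixed sample size becomes unreliable under the approximation.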


1966 ◽  
Vol 49 (3) ◽  
pp. 511-515 ◽  
Author(s):  
R W Henningson

Bath level, sample temperature, rate of stirring, degree of supercooling, sample size, sample isolation, and refreezing of the sample were the variables chosen for study in the thermistor cryoscopic method for determining the freezing point value of milk. Freezing point values were determined for two samples of milk and two secondary salt standards using eight combinations of the seven variables in two test patterns. The freezing point values of the salt standards ranged from –0.413 to –0.433°C and from –0.431 to –0.642°C. The freezing point values of the milk samples ranged from –0.502 to –0.544°C and from –0.518 to –0.550°C. Statistical analysis of the data showed that sample isolation was a poor procedure and that the other variables produced changes in the freezing point value ranging from 0.001 to 0.011°C. It is recommended that specific directions be instituted for the thermistor cryoscopic method, 15.040–15.041, and that the method be subjected to a collaborative study.

