An Experimental Assessment of Recent Professional Developments in Nonstatistical Audit Sampling Guidance

2001 ◽  
Vol 20 (1) ◽  
pp. 81-96 ◽  
Author(s):  
William F. Messier ◽  
Steven J. Kachelmeier ◽  
Kevan L. Jensen

The American Institute of Certified Public Accountants has recently set forth significant revisions in its nonstatistical audit sample-size decision aid (AICPA 1999). In a controlled setting involving 149 experienced auditors, we test the effects of the new guidance on auditors' sample-size judgments, extending Kachelmeier and Messier's (1990; hereafter KM) investigation of a previous AICPA (1983) decision aid. We find that the current decision aid results in significantly smaller sample sizes than the previous aid. Further, auditors continue to “work backward” in their choice of decision-aid inputs, arriving at sample sizes that they find more intuitively acceptable. An optional supplemental worksheet, added to the AICPA's guidance to assist the auditor in specifying tolerable misstatement, generates a marginal increase in sample sizes but does not eliminate the working-backward phenomenon. However, the supplemental worksheet significantly reduces sample-size variability. Additional findings update KM's conclusions by showing that the excess of decision-aided sample sizes over intuitive sample sizes observed in that study no longer applies. A final extension addresses a limitation of KM by showing that sample-size judgments are not sensitive to population size varied as a separate treatment factor. Overall, this study directs attention toward an improved understanding of nonstatistical sampling judgments, which are of increasing importance in the contemporary audit environment.

2002 ◽  
Vol 16 (2) ◽  
pp. 125-136 ◽  
Author(s):  
Thomas W. Hall ◽  
James E. Hunton ◽  
Bethane Jo Pierce

Although audit sampling is a common procedure, relatively little is known about the sampling practices of auditors in public accounting, industry, and government. This study surveyed practicing auditors to determine how they: (1) planned sample sizes, (2) selected sample items, and (3) evaluated sample outcomes. Respondents also provided data on the training they had received, the debiasing techniques they employed when using nonstatistical (judgmental) methods, and the literature sources they relied on for guidance on sampling matters. Respondents in all areas of practice reported that a majority of audit sampling applications rely on nonstatistical methods for sample planning, selection, and evaluation. Despite this heavy reliance on nonstatistical methods, fewer than 10 percent of respondents reported receiving training in debiasing techniques, and no respondents reported using such techniques. Among statistical methods, dollar-unit sampling is the most frequently employed technique. All respondents reported reliance on employer guidelines, and most reported reliance on sampling standards promulgated by the American Institute of Certified Public Accountants.


2001 ◽  
Vol 20 (1) ◽  
pp. 169-185 ◽  
Author(s):  
Thomas W. Hall ◽  
Terri L. Herron ◽  
Bethane Jo Pierce ◽  
Terry J. Witt

Over 40 years ago, both Deming (1954) and Arkin (1957) expressed concerns that the composition of samples chosen through haphazard selection may be unrepresentative due to the presence of unintended selection biases. To mitigate this problem, some experts in the field of audit sampling recommend increasing sample sizes by up to 100 percent when utilizing haphazard selection. To examine the effectiveness of this recommendation, 142 participants selected haphazard samples from two populations. The compositions of these samples were then analyzed to determine whether certain population elements were overrepresented and whether the extent of overrepresentation declined as sample size increased. Analyses disclosed that certain population elements were overrepresented in the samples. Also, increasing sample size produced no statistically significant change in the composition of samples from one population, while in the second population it produced a statistically significant but minor reduction in overrepresentation. These results suggest that individuals may be incapable of complying with audit guidelines requiring that haphazard sample selections be made without regard to the observable physical features of population elements, and they cast doubt on the effectiveness of using larger sample sizes to mitigate the problem. Given these findings, standard-setting bodies should reconsider the conditions under which haphazard sampling is sanctioned as a reliable audit tool.


2015 ◽  
Vol 42 (1) ◽  
pp. 85-104 ◽  
Author(s):  
Martin E. Persson ◽  
Vaughan S. Radcliffe ◽  
Mitchell Stein

Alvin R. Jennings (1905–1990) was a rare breed of accountant. He was trained as a practitioner and rose to become a managing partner at Lybrand, Ross Bros. & Montgomery, but he kept a constant watch on the academic field of accounting research. Jennings served on the influential American Institute of Accountants' Committee on Auditing Procedure (1946–49) and later as president of the American Institute of Certified Public Accountants (1957–58). This paper explores these activities and Jennings' contribution to the professional, academic, and institutional discourse of the accounting discipline.


2006 ◽  
Vol 33 (2) ◽  
pp. 157-168 ◽  
Author(s):  
Royce D. Kurtz ◽  
David K. Herrera ◽  
Stephanie D. Moussalli

The University of Mississippi Library has digitized the Accounting Historians Journal (AHJ) from 1974 through 1992, cover to cover. The American Institute of Certified Public Accountants' gift of its library to the University of Mississippi was, fortuitously, the impetus for the AHJ digitization project. A complicated chain of events followed, including discussions with the Academy of Accounting Historians over copyright permission, an application for a federal grant, negotiations with software vendors, and decisions about search capabilities and display formats. Each article in AHJ is now full-text searchable, with accompanying PDF page images.


1979 ◽  
Vol 6 (1) ◽  
pp. 29-37 ◽  
Author(s):  
John L. Carey

John L. Carey recollects the policies and politics in professional circles during the very important period when the Securities and Exchange Commission first came into being. Mr. Carey served the American Institute of Certified Public Accountants in various capacities from 1925 to 1969, including as editor of The Journal of Accountancy and as administrative vice president, and he received the Institute's gold medal for distinguished service to the profession.


2000 ◽  
Vol 19 (2) ◽  
pp. 176-182 ◽  
Author(s):  
Barry N. Winograd ◽  
James S. Gerson ◽  
Barbara L. Berlin

This paper discusses the development of the PricewaterhouseCoopers Audit Approach (PwCAA), identifies distinctive features of this approach, and provides information on new development areas. The discussion summarizes each of these items and focuses on the distinctive features of the PwCAA. The article does not cover elements that appear to be consistent with other firms' methodologies. Significant consistencies exist because all of the major international firms operate under common auditing standards: the International Standards on Auditing (ISA) established by the International Federation of Accountants and, in the United States, generally accepted auditing standards (GAAS) established by the American Institute of Certified Public Accountants (AICPA).


2021 ◽  
Vol 13 (3) ◽  
pp. 368
Author(s):  
Christopher A. Ramezan ◽  
Timothy A. Warner ◽  
Aaron E. Maxwell ◽  
Bradley S. Price

The size of the training data set is a major determinant of classification accuracy. Nevertheless, the collection of a large training data set for supervised classifiers can be a challenge, especially for studies covering a large area, which may be typical of many real-world applied projects. This work investigates how variations in training set size, ranging from a large sample size (n = 10,000) to a very small sample size (n = 40), affect the performance of six supervised machine-learning algorithms applied to classify large-area high-spatial-resolution (HR) (1–5 m) remotely sensed data within the context of a geographic object-based image analysis (GEOBIA) approach. GEOBIA, in which adjacent similar pixels are grouped into image-objects that form the unit of the classification, offers the potential benefit of allowing multiple additional variables, such as measures of object geometry and texture, thus increasing the dimensionality of the classification input data. The six supervised machine-learning algorithms are support vector machines (SVM), random forests (RF), k-nearest neighbors (k-NN), single-layer perceptron neural networks (NEU), learning vector quantization (LVQ), and gradient-boosted trees (GBM). RF, the algorithm with the highest overall accuracy, was notable for its negligible decrease in overall accuracy, 1.0%, when training sample size decreased from 10,000 to 315 samples. GBM provided similar overall accuracy to RF; however, the algorithm was very expensive in terms of training time and computational resources, especially with large training sets. In contrast to RF and GBM, NEU and SVM were particularly sensitive to decreasing sample size, with NEU classifications generally producing overall accuracies that were on average slightly higher than SVM classifications for larger sample sizes, but lower than SVM for the smallest sample sizes. NEU, however, required a longer processing time.
The k-NN classifier saw less of a drop in overall accuracy than NEU and SVM as training set size decreased; however, the overall accuracies of k-NN were typically lower than those of the RF, NEU, and SVM classifiers. LVQ generally had the lowest overall accuracy of all six methods but was relatively insensitive to sample size, down to the smallest sample sizes. Overall, due to its relatively high accuracy with small training sample sets, its minimal variation in overall accuracy between very large and small sample sets, and its relatively short processing time, RF was a good classifier for large-area land-cover classifications of HR remotely sensed data, especially when training data are scarce. However, as the performance of different supervised classifiers varies in response to training set size, investigating multiple classification algorithms is recommended to achieve optimal accuracy for a project.
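The experimental design described above can be sketched in a few lines: train several classifiers on progressively smaller subsets drawn from a training pool and score each on a fixed held-out set. This is a minimal illustration on synthetic data, not the study's imagery, GEOBIA features, or full set of six algorithms; the three classifiers and sample sizes shown are chosen only to echo the design.

```python
# Sketch: compare classifier accuracy as the training set shrinks.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import accuracy_score

# Synthetic stand-in for the remotely sensed data: 5 classes, 20 features.
X, y = make_classification(n_samples=12000, n_features=20, n_informative=10,
                           n_classes=5, random_state=0)
X_train, y_train = X[:10000], y[:10000]   # pool to draw training sets from
X_test, y_test = X[10000:], y[10000:]     # fixed evaluation set

classifiers = {
    "RF": RandomForestClassifier(n_estimators=200, random_state=0),
    "SVM": SVC(kernel="rbf", random_state=0),
    "k-NN": KNeighborsClassifier(n_neighbors=5),
}

results = {}
rng = np.random.RandomState(0)
for size in (10000, 315, 40):             # sizes echoing the study's range
    idx = rng.choice(len(X_train), size=size, replace=False)
    for name, clf in classifiers.items():
        clf.fit(X_train[idx], y_train[idx])
        oa = accuracy_score(y_test, clf.predict(X_test))
        results[(name, size)] = oa
        print(f"{name:5s} n={size:5d}  overall accuracy = {oa:.3f}")
```

On real data, the interesting output is the *gap* between each classifier's accuracy at n = 10,000 and at n = 40, which is what distinguishes sample-size-robust methods such as RF from sensitive ones such as SVM.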


2013 ◽  
Vol 113 (1) ◽  
pp. 221-224 ◽  
Author(s):  
David R. Johnson ◽  
Lauren K. Bachan

In a recent article, Regan, Lakhanpal, and Anguiano (2012) highlighted the lack of evidence for different relationship outcomes between arranged and love-based marriages. Yet the sample size (n = 58) used in the study is insufficient for making such inferences. This reply discusses and demonstrates how small sample sizes reduce the utility of this research.
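The reply's point can be made concrete with a back-of-the-envelope power calculation. This is our own illustration, not a computation from the article: the equal two-group split of the n = 58 sample and the normal approximation to the two-sample test are assumptions made for the sketch.

```python
# What is the smallest standardized mean difference (Cohen's d) that a
# two-group comparison with n = 58 can detect at 80% power, alpha = .05?
from math import sqrt
from scipy.stats import norm

n1 = n2 = 29                       # assumed equal split of the n = 58 sample
alpha, power = 0.05, 0.80
z_alpha = norm.ppf(1 - alpha / 2)  # two-sided critical value, ~1.96
z_beta = norm.ppf(power)           # ~0.84
min_d = (z_alpha + z_beta) * sqrt(1 / n1 + 1 / n2)
print(f"minimum detectable effect size d = {min_d:.2f}")  # ~0.74
```

At d ≈ 0.74, only effects conventionally classed as large would be detected reliably; smaller but substantively meaningful differences between marriage types would likely go unnoticed, which is precisely the inferential weakness the reply identifies.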

