The Eight Steps of Data Analysis: A Graphical Framework to Promote Sound Statistical Analysis

2020 ◽  
Vol 15 (4) ◽  
pp. 1054-1075
Author(s):  
Dustin Fife

Data analysis is a risky endeavor, particularly among people who are unaware of its dangers. According to some researchers, “statistical conclusions validity” threatens all research subjected to the dark arts of statistical magic. Although traditional statistics classes may advise against certain practices (e.g., multiple comparisons, small sample sizes, violating normality), they may fail to cover others (e.g., outlier detection and violating linearity). More common, perhaps, is that researchers may fail to remember them. In this article, rather than rehashing old warnings and diatribes against this practice or that, I instead advocate a general statistical-analysis strategy. This graphically based eight-step strategy promises to resolve the majority of statistical traps researchers may fall into without having to remember large lists of problematic statistical practices. These steps will assist in preventing both false positives and false negatives and yield critical insights about the data that would have otherwise been missed. I conclude with an applied example that shows how the eight steps reveal interesting insights that would not be detected with standard statistical practices.
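To make the multiple-comparisons trap concrete (a minimal Python sketch of my own, not code from the paper): running many significance tests on pure noise makes at least one spurious "significant" result very likely, which is one of the Type I traps a disciplined analysis strategy is meant to catch.

```python
# Illustration (not from the paper): familywise Type I error inflation
# when running many independent tests on pure noise.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n_sims, n_tests, n, alpha = 5_000, 20, 30, 0.05

false_positive_any = 0
for _ in range(n_sims):
    # 20 two-sample t-tests where the null is true by construction
    pvals = [
        stats.ttest_ind(rng.normal(size=n), rng.normal(size=n)).pvalue
        for _ in range(n_tests)
    ]
    false_positive_any += min(pvals) < alpha

print(f"P(at least one false positive): {false_positive_any / n_sims:.3f}")
print(f"Theoretical 1 - (1 - alpha)^k:  {1 - (1 - alpha) ** n_tests:.3f}")
```

With 20 tests at alpha = 0.05, the chance of at least one false positive is roughly 64%, not 5%.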


2020 ◽  
pp. 37-55 ◽  
Author(s):  
A. E. Shastitko ◽  
O. A. Markova

Digital transformation has led to changes in the business models of traditional players in existing markets. Moreover, new entrants and new markets have appeared, in particular platforms and multisided markets. The emergence and rapid development of platforms are driven primarily by the existence of so-called indirect network externalities. This raises the question of whether the existing instruments of competition law enforcement and market analysis remain relevant when analyzing markets with digital platforms. This paper discusses the advantages and disadvantages of various tools for defining markets with platforms. In particular, we characterize the features of the SSNIP test when it is applied to markets with platforms. Furthermore, we analyze adjustments to market-definition tests for platforms in terms of possible Type I and Type II errors. Overall, it turns out that to reduce the likelihood of Type I and Type II errors when applying market-definition techniques to markets with platforms, one should consider the type of platform analyzed: transaction platforms without pass-through and non-transaction matching platforms should be treated as players in a multisided market, whereas non-transaction platforms should be analyzed as players in several interrelated markets. However, if the platform is allowed to adjust prices, an additional challenge emerges: the regulator and companies may manipulate the results of the SSNIP test by assuming different models of competition.
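For context, the single-sided SSNIP (small but significant non-transitory increase in price) logic is commonly operationalized through critical loss analysis: a hypothetical monopolist's price increase of X is unprofitable if the fraction of sales lost exceeds X / (X + M), where M is the price-cost margin. The sketch below (my own illustration of the standard one-sided formula, not the paper's multisided adjustment) shows the arithmetic; the paper's argument is precisely that for transaction platforms the lost sales must be assessed across both sides jointly, which this naive version ignores.

```python
# Illustration: one-sided critical loss for a SSNIP test.
# The paper argues this naive form misfires for multisided platforms.

def critical_loss(ssnip: float, margin: float) -> float:
    """Fraction of sales the hypothetical monopolist can lose before
    a price increase of `ssnip` becomes unprofitable.

    ssnip  -- relative price increase, e.g. 0.05 for 5%
    margin -- price-cost margin as a fraction of price, e.g. 0.40
    """
    return ssnip / (ssnip + margin)

# Example: 5% SSNIP with a 40% margin -> critical loss of about 11.1%.
# If the candidate market would actually lose more than this share of
# sales, the price rise fails and the market must be drawn wider.
print(f"{critical_loss(0.05, 0.40):.3f}")
```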


2018 ◽  
Vol 41 (1) ◽  
pp. 1-30 ◽  
Author(s):  
Chelsea Rae Austin

While not explicitly stated, many tax avoidance studies seek to investigate tax avoidance that is the result of firms' deliberate actions. However, measures of firms' tax avoidance can also be affected by factors outside the firms' control: tax surprises. This study examines potential complications caused by tax surprises when measuring tax avoidance by focusing on one specific type of surprise tax savings, the unanticipated tax benefit from employees' exercise of stock options. Because the cash effective tax rate (ETR) includes the benefits of this tax surprise, the cash ETR mismeasures firms' deliberate tax avoidance. The analyses conducted show this mismeasurement is material and can lead to both Type I and Type II errors in studies of deliberate tax avoidance. Suggestions to aid researchers in mitigating these concerns are also provided.
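A back-of-the-envelope sketch (hypothetical numbers of my own, not from the study) illustrates the mechanism: the cash ETR is cash taxes paid divided by pretax book income, so a surprise deduction from employee option exercises shrinks the numerator even though deliberate tax planning is unchanged.

```python
# Hypothetical illustration of the paper's measurement concern:
# an unanticipated option-exercise deduction depresses the cash ETR
# without any change in deliberate tax avoidance.

pretax_income = 1_000.0       # pretax book income
statutory_rate = 0.21         # assumed statutory tax rate
option_deduction = 300.0      # surprise deduction from employee exercises

cash_tax_no_surprise = statutory_rate * pretax_income
cash_tax_with_surprise = statutory_rate * (pretax_income - option_deduction)

print(f"Cash ETR, no surprise:   {cash_tax_no_surprise / pretax_income:.1%}")   # 21.0%
print(f"Cash ETR, with surprise: {cash_tax_with_surprise / pretax_income:.1%}") # 14.7%
# A researcher comparing the two firm-years would infer more "avoidance"
# in the second, even though deliberate behavior is identical.
```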


1999 ◽  
Vol 18 (1) ◽  
pp. 37-54 ◽  
Author(s):  
Andrew J. Rosman ◽  
Inshik Seol ◽  
Stanley F. Biggs

The effect of different task settings within an industry on auditor behavior is examined for the going-concern task. Using an interactive computer process-tracing method, experienced auditors from four Big 6 accounting firms examined cases based on real data that differed on two dimensions of task settings: stage of organizational development (start-up and mature) and financial health (bankrupt and nonbankrupt). Auditors made judgments about each entity's ability to continue as a going concern and, if they had substantial doubt about continued existence, they listed evidence they would seek as mitigating factors. There are seven principal results. First, information acquisition and, by inference, problem representations were sensitive to differences in task settings. Second, financial mitigating factors dominated nonfinancial mitigating factors in both start-up and mature settings. Third, auditors' behavior reflected configural processing. Fourth, categorizing information into financial and nonfinancial dimensions was critical to understanding how auditors' information acquisition and, by inference, problem representations differed across settings. Fifth, Type I errors (determining that a healthy company is a going-concern problem) differed from correct judgments in terms of information acquisition, although Type II errors (determining that a problem company is viable) did not. This may indicate that Type II errors are primarily due to deficiencies in other stages of processing, such as evaluation. Sixth, auditors who were more accurate tended to follow flexible strategies for financial information acquisition. Finally, accurate performance in the going-concern task was found to be related to acquiring (1) fewer information cues, (2) proportionately more liquidity information, and (3) nonfinancial information earlier in the process.


PEDIATRICS ◽  
1973 ◽  
Vol 51 (4) ◽  
pp. 753-753
Author(s):  
Emperor Watcher ◽  
C. A. S.

Was the layout editor making a sly comment on the present state of American pediatrics by juxtaposing Mrs. Seymour's letter with the articles concerning Child Health Associates in the January issue (Pediatrics 51:1-16, 1973)? If the word "pediatrician" is substituted for "surgeon" in the 1754 letter, it has a surprisingly modern ring. One gets the impression from reading the four articles that CHAs have demonstrated that they are capable of doing good when compared with practicing pediatricians, but it is not clear whether evidence has been collected to deal with the question of whether the associates cause less harm (in testing hypotheses one is liable to two kinds of error, and the relationship between Type I and Type II errors is the basis for the Neyman-Pearson theory).
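The parenthetical trade-off is easy to illustrate: for a fixed sample size and effect, lowering the Type I error rate alpha mechanically raises the Type II error rate beta. A minimal sketch of my own (a one-sided z-test; not from the letter):

```python
# Illustration of the Neyman-Pearson trade-off for a one-sided z-test:
# lowering alpha (Type I) raises beta (Type II) at fixed n and effect size.
from scipy.stats import norm

effect, sigma, n = 0.5, 1.0, 25           # true mean shift, sd, sample size
se = sigma / n ** 0.5

for alpha in (0.10, 0.05, 0.01):
    z_crit = norm.ppf(1 - alpha)          # rejection threshold under H0
    beta = norm.cdf(z_crit - effect / se) # miss probability under H1
    print(f"alpha = {alpha:.2f} -> beta = {beta:.3f}")  # beta rises as alpha falls
```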


1989 ◽  
Vol 25 (3) ◽  
pp. 451-454 ◽  
Author(s):  
Joel Berger ◽  
Michael D. Kock
Keyword(s):  
Type I ◽  
Type II ◽
The Real ◽  

2019 ◽  
Vol 8 (4) ◽  
pp. 1849-1853

Nowadays many people seek bank loans for their needs, but banks cannot provide loans to everyone, so they use screening measures to identify eligible customers. Sensitivity and specificity are widely used to measure the performance of categorical classifications in medicine and, tangentially, in econometrics. Even after applying such measures, providing loans to the wrong customers, who may be unable to repay, and denying loans to customers who could repay lead to Type I and Type II errors. To minimize these errors, this study explains, first, how to determine whether sensitivity is large or small and, second, how to benchmark the forecasting model through fuzzy analysis based on fuzzy weights, which is then compared with the sensitivity analysis.
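Concretely (hypothetical counts of my own, not the paper's data, using the convention that a "positive" is an applicant who repays): sensitivity, specificity, and the two loan-screening error rates all fall out of a 2x2 confusion matrix.

```python
# Illustration (hypothetical counts, not the paper's data): sensitivity,
# specificity, and the two error rates for a loan-approval classifier.
# Convention here: "positive" = applicant repays the loan.

tp = 820   # approved and repaid
fn = 80    # rejected but would have repaid  (good customers lost)
fp = 60    # approved but defaulted          (loans to wrong customers)
tn = 240   # rejected and would have defaulted

sensitivity = tp / (tp + fn)   # share of good applicants approved
specificity = tn / (tn + fp)   # share of bad applicants rejected

print(f"sensitivity = {sensitivity:.3f}")                                 # 0.911
print(f"specificity = {specificity:.3f}")                                 # 0.800
print(f"rate of good customers rejected = {fn / (tp + fn):.3f}")
print(f"rate of bad loans granted       = {fp / (tn + fp):.3f}")
```

Which error counts as "Type I" depends on which hypothesis is treated as the null; the fuzzy-weight benchmarking the abstract describes is the paper's own contribution and is not reproduced here.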

