Legalized Gambling and Crime in Canada

2004 ◽  
Vol 95 (3) ◽  
pp. 747-753 ◽  
Author(s):  
F. Stephen Bridges ◽  
C. Bennett Williamson

In the 10 provinces and 2 territories of Canada in 2000, but not in 1990, the total number of types of gambling activities was positively associated with rates of robbery (p < .05). Controls for other social variables did not eliminate these associations. Because the study involved so many correlations, the likelihood of a Type I error was quite large; alpha was therefore adjusted to control it, so that stronger evidence was required before concluding that relationships between crime and gambling variables, or among gambling variables, were significant. In the 10 provinces of Canada in 1999/2000, the total number of electronic gambling machines in each province was associated with rates of theft over $5000 (p < .01). In 1990, positive associations were found for burglary with off-track betting and race/sportsbooks; motor vehicle theft with off-track betting and race/sportsbooks; rate of theft with casinos; and quarter horse racing with thoroughbred racing. In 2000, positive associations were found for robbery with casinos and slot machines; casinos with slot machines; scratch tickets with raffles, break-open tickets, sports tickets, and charitable bingo; raffles with break-open tickets, sports tickets, and charitable bingo; break-open tickets with sports tickets; and charitable bingo with break-open tickets and sports tickets.
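For illustration only (not part of the study above), the following Python sketch shows the kind of alpha adjustment described: with many pairwise correlations, a Bonferroni-corrected threshold demands stronger evidence from each individual test. All data and variable names here are hypothetical.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Illustrative data: rows = 12 hypothetical jurisdictions, columns = variables
# (e.g., number of gambling types, robbery rate, theft rate, ...).
X = rng.normal(size=(12, 6))
n_jurisdictions, n_vars = X.shape

# Number of pairwise correlations tested.
n_tests = n_vars * (n_vars - 1) // 2

# Bonferroni-adjusted alpha: with many tests, require stronger evidence
# before declaring any single correlation significant.
alpha = 0.05
alpha_adj = alpha / n_tests

for i in range(n_vars):
    for j in range(i + 1, n_vars):
        r, p = stats.pearsonr(X[:, i], X[:, j])
        if p < alpha_adj:
            print(f"vars {i},{j}: r = {r:.2f}, p = {p:.4f} (significant after adjustment)")
```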

1998 ◽  
Vol 83 (1) ◽  
pp. 382-382 ◽  
Author(s):  
David Lester

In the 48 contiguous states of the USA in 1990, the total number of gambling activities was associated with robbery and motor vehicle theft rates. Controls for other social variables eliminated these associations.
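"Controls for other social variables" typically denotes partial correlation. A minimal sketch, assuming regression residualization as the control method (the abstract does not specify the procedure); all variables below are simulated stand-ins.

```python
import numpy as np
from scipy import stats

def partial_corr(x, y, covariates):
    """Correlation between x and y after regressing out covariates (OLS residuals)."""
    Z = np.column_stack([np.ones(len(x)), covariates])
    rx = x - Z @ np.linalg.lstsq(Z, x, rcond=None)[0]
    ry = y - Z @ np.linalg.lstsq(Z, y, rcond=None)[0]
    return stats.pearsonr(rx, ry)

rng = np.random.default_rng(1)
n = 48  # e.g., the 48 contiguous states
social = rng.normal(size=(n, 3))  # hypothetical social covariates
# Both variables driven by the covariates: a raw association that controls remove.
gambling = social @ np.array([0.5, 0.2, 0.1]) + rng.normal(size=n)
robbery = social @ np.array([0.4, 0.3, 0.2]) + rng.normal(size=n)

print(stats.pearsonr(gambling, robbery))        # raw association
print(partial_corr(gambling, robbery, social))  # association after controls
```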


2018 ◽  
Author(s):  
Alina Peluso ◽  
Robert Glen ◽  
Timothy M D Ebbels

Abstract

Motivation: A key issue in the omics literature is the search for statistically significant relationships between molecular markers and phenotype. The aim is to detect disease-related discriminatory features while controlling false positive associations at adequate power. Metabolome-wide association studies have revealed significant relationships of metabolic phenotypes with disease risk by analysing hundreds to tens of thousands of molecular variables, leading to multivariate data that are highly noisy and collinear. In this context, conventional Bonferroni or Sidak multiple testing corrections are of limited use, as they are strictly valid only for independent tests, while permutation procedures allow the significance level to be estimated from the null distribution without assuming independence among features. Nevertheless, under the permutation approach the distribution of p-values may show systematic deviations from the theoretical null distribution, which leads to overly conservative adjusted threshold estimates, i.e. smaller than a Bonferroni or Sidak correction.

Methods: We make use of parametric approximation methods based on a multivariate Normal distribution to derive stable estimates of the metabolome-wide significance level. A univariate approach is applied, based on a permutation procedure which effectively controls the overall Type I error rate at the α level.

Results: We illustrate the approach for different model parametrizations and distributional features of the outcome measure, using both simulated and real data. We also investigate different levels of correlation within the features and between the features and the outcome.

Availability: MWSL is an open-source R software package for the empirical estimation of the metabolome-wide significance level, available at https://github.com/AlinaPeluso/MWSL.
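The MWSL package itself is in R and its interface is not reproduced here; the Python sketch below illustrates only the core permutation idea on assumed simulated data: shuffle the outcome, record the minimum p-value across all features each time, and take the α-quantile of those minima as the metabolome-wide significance level.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

n_samples, n_features = 200, 500
# Correlated metabolic features: a shared latent factor induces collinearity.
latent = rng.normal(size=(n_samples, 1))
X = 0.7 * latent + rng.normal(size=(n_samples, n_features))
y = rng.normal(size=n_samples)  # outcome, unrelated to X under the null

def min_p(X, y):
    """Smallest univariate correlation p-value across all features."""
    return min(stats.pearsonr(X[:, j], y)[1] for j in range(X.shape[1]))

# Permutation null: shuffle the outcome, recompute the minimum p-value.
n_perm = 200  # use many more permutations in practice
min_ps = np.array([min_p(X, rng.permutation(y)) for _ in range(n_perm)])

alpha = 0.05
mwsl = np.quantile(min_ps, alpha)  # per-feature significance threshold
eff_tests = alpha / mwsl           # implied effective number of tests
print(f"MWSL ~ {mwsl:.2e}, effective tests ~ {eff_tests:.0f} (Bonferroni assumes {n_features})")
```

Because the features are correlated, the effective number of tests comes out below the feature count, i.e. the permutation threshold is less conservative than Bonferroni.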


2000 ◽  
Vol 14 (1) ◽  
pp. 1-10 ◽  
Author(s):  
Joni Kettunen ◽  
Niklas Ravaja ◽  
Liisa Keltikangas-Järvinen

Abstract. We examined the use of smoothing to enhance the detection of response coupling from the activity of different response systems. Three different types of moving average smoothers were applied to both simulated interbeat interval (IBI) and electrodermal activity (EDA) time series and to empirical IBI, EDA, and facial electromyography time series. The results indicated that progressive smoothing increased the efficiency of the detection of response coupling but did not increase the probability of Type I error. The power of the smoothing methods depended on the response characteristics. The benefits and use of the smoothing methods to extract information from psychophysiological time series are discussed.
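As a rough illustration of the smoothing idea (not the authors' simulation design), the sketch below applies an unweighted moving average to two noisy simulated series that share a slow component, and shows that the cross-system correlation is easier to detect after smoothing. All signal parameters are invented.

```python
import numpy as np
from scipy import stats

def moving_average(x, window):
    """Simple (unweighted) moving-average smoother."""
    kernel = np.ones(window) / window
    return np.convolve(x, kernel, mode="valid")

rng = np.random.default_rng(7)
t = np.arange(300)
signal = np.sin(2 * np.pi * t / 60)                 # shared slow component (the coupling)
ibi = signal + rng.normal(scale=2.0, size=t.size)   # noisy interbeat-interval series
eda = signal + rng.normal(scale=2.0, size=t.size)   # noisy electrodermal series

r_raw, _ = stats.pearsonr(ibi, eda)
r_smooth, _ = stats.pearsonr(moving_average(ibi, 15), moving_average(eda, 15))
print(f"raw r = {r_raw:.2f}, smoothed r = {r_smooth:.2f}")
```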


Methodology ◽  
2012 ◽  
Vol 8 (1) ◽  
pp. 23-38 ◽  
Author(s):  
Manuel C. Voelkle ◽  
Patrick E. McKnight

The use of latent curve models (LCMs) has increased almost exponentially during the last decade. Oftentimes, researchers regard the LCM as a "new" method for analyzing change, with little attention paid to the fact that the technique was originally introduced as an "alternative to standard repeated measures ANOVA and first-order auto-regressive methods" (Meredith & Tisak, 1990, p. 107). In the first part of the paper, this close relationship is reviewed, and it is demonstrated how "traditional" methods, such as repeated measures ANOVA and MANOVA, can be formulated as LCMs. Given that latent curve modeling is essentially a large-sample technique, compared to "traditional" finite-sample approaches, the second part of the paper uses a Monte Carlo simulation to address the question of the degree to which the more flexible LCMs can actually replace some of the older tests. In addition, a structural equation modeling alternative to Mauchly's (1940) test of sphericity is explored. Although "traditional" methods may be expressed as special cases of more general LCMs, we found that the equivalence holds only asymptotically. For practical purposes, however, no approach consistently outperformed the alternatives in terms of power and Type I error, so the best method depends on the situation. We provide detailed recommendations on when to use which method.
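A hedged sketch of the kind of Monte Carlo calibration described above: simulate null data with within-person correlation, run a one-way repeated-measures ANOVA, and compare the empirical rejection rate with the nominal 5% level. This is a generic illustration, not the authors' simulation design; sample sizes and replication counts are arbitrary.

```python
import numpy as np
from scipy import stats

def rm_anova_p(Y):
    """One-way repeated-measures ANOVA p-value for data Y (subjects x occasions)."""
    n, k = Y.shape
    grand = Y.mean()
    time_eff = Y.mean(axis=0) - grand
    subj_eff = Y.mean(axis=1) - grand
    ss_time = n * np.sum(time_eff ** 2)
    resid = Y - grand - time_eff[None, :] - subj_eff[:, None]
    ss_err = np.sum(resid ** 2)
    f = (ss_time / (k - 1)) / (ss_err / ((n - 1) * (k - 1)))
    return stats.f.sf(f, k - 1, (n - 1) * (k - 1))

rng = np.random.default_rng(3)
n_sim, n_subj, n_occ = 2000, 20, 4
rejections = 0
for _ in range(n_sim):
    # Null model: no change over occasions; a random subject intercept
    # induces the within-person correlation (compound symmetry).
    subj = rng.normal(size=(n_subj, 1))
    Y = subj + rng.normal(size=(n_subj, n_occ))
    rejections += rm_anova_p(Y) < 0.05

print(f"empirical Type I error: {rejections / n_sim:.3f} (nominal 0.05)")
```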


Methodology ◽  
2015 ◽  
Vol 11 (1) ◽  
pp. 3-12 ◽  
Author(s):  
Jochen Ranger ◽  
Jörg-Tobias Kuhn

In this manuscript, a new approach to the analysis of person fit is presented, based on the information matrix test of White (1982). This test can be interpreted as a test of trait stability during the measurement situation. The test statistic approximately follows a χ²-distribution; in small samples, the approximation can be improved by a higher-order expansion. The performance of the test is explored in a simulation study. This simulation study suggests that the test adheres well to the nominal Type-I error rate, although it tends to be conservative in very short scales. The power of the test is compared to the power of four alternative tests of person fit. This comparison corroborates that the power of the information matrix test is similar to the power of the alternative tests. Advantages and areas of application of the information matrix test are discussed.
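The information matrix test itself requires a fitted item response model, which is beyond a short sketch; as a stand-in, the following Python snippet shows how such a Type-I-error calibration study works for any statistic with an asymptotic χ² reference, here Pearson's goodness-of-fit statistic. Sample sizes and the replication count are arbitrary.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(11)

def empirical_type1(n_obs, n_sim=5000, alpha=0.05):
    """Empirical rejection rate of a chi-square goodness-of-fit test under the null."""
    k = 6  # categories (a fair die)
    crit = stats.chi2.ppf(1 - alpha, df=k - 1)
    rejections = 0
    for _ in range(n_sim):
        counts = rng.multinomial(n_obs, [1 / k] * k)
        expected = n_obs / k
        x2 = np.sum((counts - expected) ** 2 / expected)
        rejections += x2 > crit
    return rejections / n_sim

# The asymptotic chi-square reference is accurate for large samples,
# but the empirical rate can drift from the nominal level in small ones.
for n_obs in (12, 30, 300):
    print(n_obs, empirical_type1(n_obs))
```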


2019 ◽  
Vol 227 (4) ◽  
pp. 261-279 ◽  
Author(s):  
Frank Renkewitz ◽  
Melanie Keiner

Abstract. Publication biases and questionable research practices are assumed to be two of the main causes of low replication rates. Both of these problems lead to severely inflated effect size estimates in meta-analyses. Methodologists have proposed a number of statistical tools to detect such bias in meta-analytic results. We present an evaluation of the performance of six of these tools. To assess the Type I error rate and the statistical power of these methods, we simulated a large variety of literatures that differed with regard to true effect size, heterogeneity, number of available primary studies, and sample sizes of these primary studies; furthermore, simulated studies were subjected to different degrees of publication bias. Our results show that across all simulated conditions, no method consistently outperformed the others. Additionally, all methods performed poorly when true effect sizes were heterogeneous or primary studies had a small chance of being published, irrespective of their results. This suggests that in many actual meta-analyses in psychology, bias will remain undiscovered no matter which detection method is used.
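As a hedged illustration of one detection tool such evaluations typically include, the sketch below simulates a literature in which significant positive results are always published and other results rarely, then applies Egger's regression test (an intercept test for funnel-plot asymmetry). The selection model and all parameters are invented, not taken from the paper.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)

def simulate_literature(n_studies, true_d=0.0, p_publish_nonsig=0.2):
    """Simulate published effects; only significant positive results are always published."""
    effects, ses = [], []
    while len(effects) < n_studies:
        n = rng.integers(20, 200)   # per-group sample size
        se = np.sqrt(2 / n)         # approximate standard error of Cohen's d
        d = rng.normal(true_d, se)
        significant_positive = d / se > 1.96
        if significant_positive or rng.random() < p_publish_nonsig:
            effects.append(d)
            ses.append(se)
    return np.array(effects), np.array(ses)

def egger_test(effects, ses):
    """Egger's regression: standardized effect on precision; a nonzero intercept signals asymmetry."""
    z = effects / ses
    precision = 1 / ses
    slope, intercept, r, p_slope, stderr = stats.linregress(precision, z)
    # linregress's p-value refers to the slope; test the intercept manually.
    n = len(z)
    resid = z - (intercept + slope * precision)
    s2 = np.sum(resid ** 2) / (n - 2)
    sxx = np.sum((precision - precision.mean()) ** 2)
    se_int = np.sqrt(s2 * (1 / n + precision.mean() ** 2 / sxx))
    t = intercept / se_int
    return intercept, 2 * stats.t.sf(abs(t), df=n - 2)

effects, ses = simulate_literature(50)
print(egger_test(effects, ses))  # (intercept, p-value); a small p suggests bias
```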


2014 ◽  
Vol 53 (05) ◽  
pp. 343-343

We have to report marginal changes in the empirical Type I error rates for the cut-offs 2/3 and 4/7 in Tables 4, 5, and 6 of the paper "Influence of Selection Bias on the Test Decision – A Simulation Study" by M. Tamm, E. Cramer, L. N. Kennes, and N. Heussen (Methods Inf Med 2012; 51: 138–143). In a small number of cases, the binary floating-point representation of numeric values in SAS caused computed differences to be categorized on the wrong side of a cut-off. We corrected the simulation by applying the SAS round function in the calculation process, using the same seeds as before. The corrected values are:

Table 4: the value for cut-off 2/3 changes from 0.180323 to 0.153494.
Table 5: the value for cut-off 4/7 changes from 0.144729 to 0.139626, and the value for cut-off 2/3 changes from 0.114885 to 0.101773.
Table 6: the value for cut-off 4/7 changes from 0.125528 to 0.122144, and the value for cut-off 2/3 changes from 0.099488 to 0.090828.

The sentence on p. 141, "E.g. for block size 4 and q = 2/3 the type I error rate is 18% (Table 4).", has to be replaced by "E.g. for block size 4 and q = 2/3 the type I error rate is 15.3% (Table 4).". All changes are smaller than 0.03 and do not affect the interpretation of the results or our recommendations.
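The underlying pitfall is not specific to SAS: comparing a computed difference against a cut-off in binary floating point can misclassify values that are mathematically equal to the cut-off. A minimal Python illustration, using the textbook 0.1 + 0.2 case rather than the paper's 2/3 and 4/7 cut-offs:

```python
# The pitfall: a value mathematically equal to a cut-off can land on the
# wrong side of it once computed in binary floating point.
diff = 0.1 + 0.2      # mathematically 0.3, stored as 0.30000000000000004
cutoff = 0.3
print(diff > cutoff)  # True, although the exact values are equal
print(diff == cutoff) # False

# The fix used in the corrected simulation (SAS's round function plays the
# same role): round before categorizing against the cut-off.
print(round(diff, 12) > cutoff)   # False
print(round(diff, 12) == cutoff)  # True
```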

