Simulation, bootstrap statistical methods, and permutation tests

Author(s):  
Sheldon M. Ross

2011 ◽  
Vol 2011 ◽  
pp. 1-15 ◽  
Author(s):  
Anders Eklund ◽  
Mats Andersson ◽  
Hans Knutsson

Parametric statistical methods, such as Z-, t-, and F-values, are traditionally employed in functional magnetic resonance imaging (fMRI) for identifying areas in the brain that are active with a certain degree of statistical significance. These parametric methods, however, have two major drawbacks. First, it is assumed that the observed data are Gaussian distributed and independent, assumptions that generally are not valid for fMRI data. Second, the statistical test distribution can be derived theoretically only for very simple linear detection statistics. With nonparametric statistical methods, the two limitations described above can be overcome. The major drawback of nonparametric methods is the computational burden, with processing times ranging from hours to days, which has so far made them impractical for routine use in single-subject fMRI analysis. In this work, it is shown how the computational power of cost-efficient graphics processing units (GPUs) can be used to speed up random permutation tests. A test with 10,000 permutations takes less than a minute, making statistical analysis of advanced detection methods in fMRI practically feasible. To exemplify the permutation-based approach, brain activity maps generated by the general linear model (GLM) and canonical correlation analysis (CCA) are compared at the same significance level.
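The abstract describes, but does not show, how a random permutation test is carried out. The minimal NumPy sketch below illustrates the general idea for a single voxel time series: the null distribution of a simple detection statistic (here the absolute correlation between the time series and a model regressor, a stand-in for the GLM/CCA statistics used in the paper) is built by repeatedly permuting the data. Function and variable names are illustrative and not taken from the authors' GPU implementation; note also that real fMRI analyses typically whiten the data first so that time points are approximately exchangeable.

```python
import numpy as np

def permutation_p_value(y, x, n_perm=10000, rng=None):
    """Hypothetical sketch: p-value of the absolute correlation between a
    voxel time series y and a model regressor x under random permutation."""
    rng = np.random.default_rng() if rng is None else rng
    observed = abs(np.corrcoef(y, x)[0, 1])
    null = np.empty(n_perm)
    for i in range(n_perm):
        # Permuting y destroys any real relation to x; the resulting
        # statistics form the empirical null distribution.
        null[i] = abs(np.corrcoef(rng.permutation(y), x)[0, 1])
    # Fraction of permutations at least as extreme as the observed statistic
    # (with the usual +1 correction so the p-value is never exactly zero).
    return (1 + np.count_nonzero(null >= observed)) / (1 + n_perm)

# Purely synthetic illustration: a sinusoidal "paradigm" regressor and a noisy response.
rng = np.random.default_rng(0)
x = np.sin(np.linspace(0, 8 * np.pi, 200))
y = 0.3 * x + rng.normal(size=200)
print(permutation_p_value(y, x, n_perm=1000, rng=rng))
```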



1978 ◽  
Vol 48 ◽  
pp. 7-29
Author(s):  
T. E. Lutz

This review paper deals with the use of statistical methods to evaluate systematic and random errors associated with trigonometric parallaxes. First, systematic errors which arise when using trigonometric parallaxes to calibrate luminosity systems are discussed. Next, determination of the external errors of parallax measurement is reviewed. Observatory corrections are discussed. Schilt's point is emphasized: because the causes of these systematic differences between observatories are not known, the computed corrections cannot be applied appropriately. However, modern parallax work is sufficiently accurate that it is necessary to determine observatory corrections if full use is to be made of the potential precision of the data. To this end, it is suggested that an experimental design specified in advance is required. Past experience has shown that accidental overlap of observing programs will not suffice to determine observatory corrections which are meaningful.
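To make the notion of an observatory correction concrete, the sketch below (not from the review; all numbers are invented) fits a simple zero-point model in which each measured parallax is the star's true parallax plus a per-observatory offset plus noise. With a deliberately overlapping observing program the offsets are recoverable by least squares, up to an overall zero point fixed here by constraining them to sum to zero; with only accidental overlap the corresponding system becomes poorly conditioned, which is the point the abstract emphasizes.

```python
import numpy as np

# Hypothetical zero-point model (not from the review): each measurement of
# star i made at observatory j is
#   pi_obs = pi_true[i] + c[j] + noise,
# where c[j] is the observatory correction to be estimated.

n_stars, n_obs = 5, 3
rng = np.random.default_rng(1)
pi_true = rng.uniform(0.01, 0.1, n_stars)        # true parallaxes (arcsec), invented
c_true = np.array([0.004, -0.002, -0.002])       # per-observatory offsets, invented

rows, data = [], []
for i in range(n_stars):                         # every star observed everywhere:
    for j in range(n_obs):                       # a designed, fully overlapping program
        row = np.zeros(n_stars + n_obs)
        row[i] = 1.0                             # coefficient of the star's parallax
        row[n_stars + j] = 1.0                   # coefficient of the observatory offset
        rows.append(row)
        data.append(pi_true[i] + c_true[j] + rng.normal(scale=0.002))

# Fix the overall zero point by adding the constraint sum(c) = 0.
A = np.vstack(rows + [np.r_[np.zeros(n_stars), np.ones(n_obs)]])
b = np.array(data + [0.0])
solution, *_ = np.linalg.lstsq(A, b, rcond=None)
print("estimated observatory corrections:", solution[n_stars:])
```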



1973 ◽  
Vol 18 (11) ◽  
pp. 562-562
Author(s):  
B. J. Winer

1996 ◽  
Vol 41 (12) ◽  
pp. 1224-1224
Author(s):  
Terri Gullickson

1979 ◽  
Vol 24 (6) ◽  
pp. 536-536
Author(s):  
John W. Cotton





