Fast Random Permutation Tests Enable Objective Evaluation of Methods for Single-Subject fMRI Analysis

2011 ◽  
Vol 2011 ◽  
pp. 1-15 ◽  
Author(s):  
Anders Eklund ◽  
Mats Andersson ◽  
Hans Knutsson

Parametric statistical methods, such as Z-, t-, and F-values, are traditionally employed in functional magnetic resonance imaging (fMRI) for identifying areas in the brain that are active with a certain degree of statistical significance. These parametric methods, however, have two major drawbacks. First, it is assumed that the observed data are Gaussian distributed and independent, assumptions that generally are not valid for fMRI data. Second, the statistical test distribution can be derived theoretically only for very simple linear detection statistics. With nonparametric statistical methods, the two limitations described above can be overcome. The major drawback of nonparametric methods is the computational burden, with processing times ranging from hours to days, which so far has made them impractical for routine use in single-subject fMRI analysis. In this work, it is shown how the computational power of cost-efficient graphics processing units (GPUs) can be used to speed up random permutation tests. A test with 10,000 permutations takes less than a minute, making statistical analysis of advanced detection methods in fMRI practically feasible. To exemplify the permutation-based approach, brain activity maps generated by the general linear model (GLM) and canonical correlation analysis (CCA) are compared at the same significance level.
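As a concrete (CPU-only) illustration of the permutation idea, the sketch below estimates a nonparametric p-value for a single voxel. The choice of correlation as the detection statistic and all variable names are illustrative assumptions, not the authors' GPU implementation.

```python
import numpy as np

def permutation_test(voxel_ts, regressor, n_perm=10_000, seed=0):
    """Estimate a nonparametric p-value for a detection statistic
    (here: absolute correlation between a voxel time series and the
    task regressor) from its empirical permutation null distribution."""
    rng = np.random.default_rng(seed)
    observed = abs(np.corrcoef(voxel_ts, regressor)[0, 1])
    null = np.empty(n_perm)
    for i in range(n_perm):
        # Shuffling the time series assumes exchangeable samples; real
        # fMRI data are autocorrelated and must be whitened beforehand.
        null[i] = abs(np.corrcoef(rng.permutation(voxel_ts), regressor)[0, 1])
    # Fraction of permutations at least as extreme as the observed statistic.
    return (1 + np.sum(null >= observed)) / (1 + n_perm)
```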

2018 ◽  
Vol 7 (2.27) ◽  
pp. 161
Author(s):  
Pratiksha Sharma ◽  
Er. Arshpreet Kaur

Bad smell detection refers to any indication in program code that may signal a problem for software maintenance and evolution. Code smell detection is a major challenge for software developers, and its informal classification has led to the design of various smell detection methods and tools. This work appraises four code smell detection tools: inFusion, JDeodorant, PMD, and JSpIRIT. This research proposes a method for detecting these bad smells, known as code smells, in software. For bad smell detection, object-oriented software metrics (OOSMs) are used to identify the source code, and a plug-in was implemented to locate the position in the program source code where the bad smell appears, so that software refactoring can then take place. Code smells are classified by type: Long Method, PIH, LPL, LC, SS, God Class, etc. Detecting code smells and then applying the correct refactoring steps when required is essential to enhancing the quality of the code. Various tools have been proposed for code smell detection, each characterized by particular properties. The main objective of this research is to describe our method of using various tools for code smell detection; we identify the major differences between them and the dissimilar results they produce. The major drawback of current work is that each tool focuses on one particular language, which restricts it to one kind of program only, and these tools fail to detect smelly code when any change of environment is encountered. The base paper compares the most popular code smell detection tools on the basis of factors such as accuracy and false positive rate, which gives a clear picture of the functionality these tools possess. In this paper, a technique is designed to identify code smells (CSs). For this purpose, various object-oriented programming (OOP) metrics together with their maintainability index (MI) are used; code refactoring and optimization techniques are then applied to lower the maintainability index, and the proposed scheme is evaluated, achieving satisfactory results. The results of the BFOA test showed that the Lazy Class smell caused framework defects in DLS, DR, and SE, whereas LPL caused no framework defects whatsoever. The results of the association-rules test found that the Lazy Class code smell (LCCS) caused structural defects in DE and DLS, which corresponded to the results of the BFOA test.
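For reference, here is a minimal sketch of one widely used maintainability index formula (the classic Oman-Hagemeister variant; the abstract does not say which MI variant the authors use, so this choice is an assumption). Note that in this classic variant higher values indicate more maintainable code; some cost-style indices invert the scale.

```python
import math

def maintainability_index(halstead_volume: float,
                          cyclomatic_complexity: float,
                          loc: int) -> float:
    """Classic maintainability index: in this variant, lower values
    indicate code that is harder to maintain (a refactoring candidate)."""
    return (171
            - 5.2 * math.log(halstead_volume)
            - 0.23 * cyclomatic_complexity
            - 16.2 * math.log(loc))

# A long, complex method scores far lower than a short, simple one.
print(maintainability_index(1200.0, 25, 300))  # ~36: refactoring candidate
print(maintainability_index(80.0, 3, 20))      # ~99: easy to maintain
```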


Author(s):  
Richard J. Hornick

The arena of forensics often requires that the human factors expert witness do research that deviates from the classical paradigm of rigid control of independent variables to obtain resulting data. Indeed, in the judicial process, an experiment with only one independent variable and a single subject (or device) may reach beyond statistical significance and be vitally pivotal to judicial decisions. This article provides examples in which very limited research experiments conducted by the author provided critical data that assisted juries in reaching conclusions pertinent to human behavior and equipment design in accident situations.


2019 ◽  
Vol 81 (8) ◽  
pp. 535-542
Author(s):  
Robert A. Cooper

Statistical methods are indispensable to the practice of science. But statistical hypothesis testing can seem daunting, with P-values, null hypotheses, and the concept of statistical significance. This article explains the concepts associated with statistical hypothesis testing using the story of “the lady tasting tea,” then walks the reader through an application of the independent-samples t-test using data from Peter and Rosemary Grant's investigations of Darwin's finches. Understanding how scientists use statistics is an important component of scientific literacy, and students should have opportunities to use statistical methods like this in their science classes.
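A hedged illustration of the independent-samples t-test described above, using SciPy; the beak-depth numbers are invented stand-ins, not the Grants' actual finch data.

```python
from scipy import stats

# Hypothetical beak-depth measurements (mm) for two finch samples;
# illustrative values only, not data from the Grants' studies.
before_drought = [8.9, 9.2, 9.6, 8.8, 9.4, 9.1, 9.0, 9.3]
after_drought  = [9.8, 10.1, 9.7, 10.3, 9.9, 10.0, 10.4, 9.6]

t_stat, p_value = stats.ttest_ind(before_drought, after_drought)
print(f"t = {t_stat:.2f}, P = {p_value:.4f}")
# If P falls below the chosen significance level (commonly 0.05),
# reject the null hypothesis that the two population means are equal.
```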


2012 ◽  
Vol 12 (13) ◽  
pp. 5755-5771 ◽  
Author(s):  
A. Sanchez-Lorenzo ◽  
P. Laux ◽  
H.-J. Hendricks Franssen ◽  
J. Calbó ◽  
S. Vogl ◽  
...  

Abstract. Several studies have claimed to have found significant weekly cycles of meteorological variables appearing over large domains, which can hardly be attributed to urban effects alone. Nevertheless, there is still an ongoing scientific debate about whether these large-scale weekly cycles exist, and some other studies fail to reproduce them with statistical significance. In addition to the lack of positive proof for the existence of these cycles, their possible physical explanations have been controversially discussed in recent years. In this work we review the main results on this topic published during the last two decades, including a summary of the existence or non-existence of significant weekly weather cycles across different regions of the world, mainly over the US, Europe, and Asia. In addition, some shortcomings of common statistical methods for analyzing weekly cycles are listed. Finally, we present a brief summary of the proposed causes of the weekly cycles, focusing on aerosol-cloud-radiation interactions and their impact on meteorological variables as a result of the weekly cycles of anthropogenic activities, together with possible directions for future research.
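To illustrate one common test for weekly cycles and its best-known shortcoming, here is a hedged sketch (synthetic data, not any study's dataset) that groups daily anomalies by weekday and applies a one-way ANOVA.

```python
import numpy as np
from scipy import stats

# Synthetic daily anomalies for two years; no true weekly cycle present.
rng = np.random.default_rng(0)
anomalies = rng.normal(0.0, 1.0, size=730)
weekdays = np.arange(730) % 7  # 0 = Monday ... 6 = Sunday

# One-way ANOVA: do weekday means differ more than chance would allow?
groups = [anomalies[weekdays == d] for d in range(7)]
f_stat, p_value = stats.f_oneway(*groups)
print(f"F = {f_stat:.2f}, P = {p_value:.3f}")
# Caveat of the kind the review discusses: real weather data are
# autocorrelated, violating the independence assumption of ANOVA and
# inflating false detections of weekly cycles.
```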


2017 ◽  
Vol 144 ◽  
pp. 1-11
Author(s):  
Kaya Oguz ◽  
Muhammed G. Cinsdikici ◽  
Ali Saffet Gonul

2013 ◽  
Vol 462-463 ◽  
pp. 187-192
Author(s):  
Jing Bo Chen ◽  
Jun Bao Zheng ◽  
Lei Yang ◽  
Ya Ming Wang

This is a general review of change-point detection methods applied in interrupted time series analysis in recent years. Articles from domains such as meteorology, hydrology, stock analysis, and sequence mining are compared; the literature ranges from the 1980s to 2013. The methods are broadly classified as parametric, semi-parametric, and nonparametric, and some non-statistical methods are also mentioned. The characteristics of each method are briefly summarized. As all methods covered in this review share the common purpose of detecting change-points, most of them can be applied in other domains after proper adjustment.
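As a minimal example of one classic change-point technique of the kind such reviews cover, here is a CUSUM sketch for locating a single mean shift (illustrative code, not taken from the review itself).

```python
import numpy as np

def cusum_changepoint(x):
    """Locate the most likely single mean-shift change-point by
    maximizing the cumulative-sum deviation from the global mean."""
    x = np.asarray(x, dtype=float)
    s = np.cumsum(x - x.mean())
    return int(np.argmax(np.abs(s)))  # index where the shift is most evident

# Synthetic series: the mean jumps from 0 to 2 at index 50.
rng = np.random.default_rng(1)
series = np.concatenate([rng.normal(0, 1, 50), rng.normal(2, 1, 50)])
print(cusum_changepoint(series))  # expected: close to index 49
```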


2019 ◽  
Author(s):  
Chris Hubertus Joseph Hartgerink ◽  
Jan G. Voelkel ◽  
Jelte M. Wicherts ◽  
Marcel A. L. M. van Assen

Scientific misconduct potentially invalidates findings in many scientific fields. Improved detection of unethical practices like data fabrication is considered to deter such practices. In two studies, we investigated the diagnostic performance of various statistical methods to detect fabricated quantitative data from psychological research. In Study 1, we tested the validity of statistical methods to detect fabricated data at the study level using summary statistics. Using (arguably) genuine data from the Many Labs 1 project on the anchoring effect (k=36) and fabricated data for the same effect by our participants (k=39), we tested the validity of our newly proposed 'reversed Fisher method', variance analyses, and extreme effect sizes, and a combination of these three indicators using the original Fisher method. Results indicate that the variance analyses perform fairly well when the homogeneity of population variances is accounted for and that extreme effect sizes perform similarly well in distinguishing genuine from fabricated data. The performance of the 'reversed Fisher method' was poor and depended on the types of tests included. In Study 2, we tested the validity of statistical methods to detect fabricated data using raw data. Using (arguably) genuine data from the Many Labs 3 project on the classic Stroop task (k=21) and fabricated data for the same effect by our participants (k=28), we investigated the performance of digit analyses, variance analyses, multivariate associations, and extreme effect sizes, and a combination of these four methods using the original Fisher method. Results indicate that variance analyses, extreme effect sizes, and multivariate associations perform fairly well to excellent in detecting fabricated data using raw data, while digit analyses perform at chance levels. The two studies provide mixed results on how the use of random number generators affects the detection of data fabrication. Ultimately, we consider the variance analyses, effect sizes, and multivariate associations valuable tools to detect potential data anomalies in empirical (summary or raw) data. However, we argue against widespread (possibly automatic) application of these tools, because some fabricated data may be irregular in one aspect but not in another. Considering how violations of the assumptions of fabrication detection methods may yield high false positive or false negative probabilities, we recommend comparing potentially fabricated data to genuine data on the same topic.
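For reference, here is a hedged sketch of the original Fisher method used to combine the indicators, plus one possible reading of the 'reversed' variant (combining 1 - p to flag results that are too unremarkable); this interpretation is our assumption, not a quote from the paper.

```python
import numpy as np
from scipy import stats

def fisher_method(p_values):
    """Fisher's method: combine k independent p-values into one test.
    chi2 = -2 * sum(ln p_i) follows a chi-squared distribution with
    2k degrees of freedom under the joint null hypothesis."""
    p = np.asarray(p_values, dtype=float)
    chi2 = -2.0 * np.log(p).sum()
    return stats.chi2.sf(chi2, df=2 * len(p))

def reversed_fisher_method(p_values):
    """Assumed 'reversed' variant: combine 1 - p so that suspiciously
    unremarkable (too-null) results drive the combined statistic."""
    return fisher_method(1.0 - np.asarray(p_values, dtype=float))

print(fisher_method([0.04, 0.01, 0.20]))           # small: joint evidence
print(reversed_fisher_method([0.95, 0.98, 0.91]))  # small: "too null"
```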


Author(s):  
I. B. Tsorin

The article discusses descriptive statistics for data measured on ordinal and quantitative scales, and criteria for determining the statistical significance of differences between samples when parametric analysis is impossible. Special attention is paid to the problem of multiple comparisons for this type of data. For each method, examples are given of processing data obtained in pharmacological studies.
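A hedged example of the kind of nonparametric workflow the article covers: pairwise Mann-Whitney U tests with a Bonferroni correction for multiple comparisons. The dose-group scores below are invented for illustration.

```python
from itertools import combinations
from scipy import stats

# Hypothetical ordinal scores from three dose groups in a
# pharmacological study (illustrative values only).
groups = {
    "control":   [2, 3, 2, 4, 3, 2, 3],
    "low_dose":  [3, 4, 4, 5, 3, 4, 4],
    "high_dose": [5, 4, 6, 5, 6, 5, 4],
}

pairs = list(combinations(groups, 2))
alpha = 0.05 / len(pairs)  # Bonferroni correction for multiple comparisons
for a, b in pairs:
    u, p = stats.mannwhitneyu(groups[a], groups[b], alternative="two-sided")
    print(f"{a} vs {b}: U = {u}, P = {p:.4f}, significant: {p < alpha}")
```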

