average sample size
Recently Published Documents

TOTAL DOCUMENTS: 14 (five years: 4)
H-INDEX: 4 (five years: 0)

2021 ◽  
pp. 174702182110440
Author(s):  
Janine Hoffart ◽  
Jana Jarecki ◽  
Gilles Dutilh ◽  
Jörg Rieskamp

People often learn about the distribution of outcomes of risky options from experience. When they can actively sample information from risky gambles before deciding, people typically draw only small samples. We examine how the size of the sample that people experience in decisions from experience affects their preferences between risky options. In two studies (N = 40 each), we manipulated the size of the samples that people could experience from risky gambles and measured subjective selling prices and confidence in selling-price judgments after sampling. The results show that, on average, sample size influenced neither selling prices nor confidence. However, cognitive modeling of individual-level learning showed that most participants could be classified as Bayesian learners, whereas a minority adhered to a frequentist learning strategy, and that when learning was cognitively simpler, more participants adhered to the latter. The observed selling prices of Bayesian learners changed with sample size as predicted by Bayesian principles, whereas sample size affected the judgments of frequentist learners much less. These results illustrate the variability in how people learn from sampled information and provide an explanation for why sample size often does not affect judgments.
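A minimal sketch of the distinction the abstract draws, assuming a simple Beta-Bernoulli setup for a gamble's win probability; this is an illustrative toy, not the authors' fitted cognitive model, and all names and numbers below are hypothetical. With a uniform prior, the Bayesian posterior mean is pulled toward 0.5 at small sample sizes and so should shift systematically with n, whereas the frequentist relative-frequency estimate is unbiased at every n.

```python
# Illustrative contrast between a Bayesian and a frequentist learner
# estimating a gamble's win probability from sampled outcomes.
# Hypothetical sketch, not the paper's actual model.
import random

def bayesian_estimate(outcomes, prior_a=1.0, prior_b=1.0):
    """Posterior mean of a Beta-Bernoulli model: the estimate depends
    on sample size through the prior's shrinking influence."""
    wins = sum(outcomes)
    return (prior_a + wins) / (prior_a + prior_b + len(outcomes))

def frequentist_estimate(outcomes):
    """Relative frequency: sample size affects only sampling noise,
    not the expected value of the estimate."""
    return sum(outcomes) / len(outcomes)

random.seed(1)
true_p = 0.7
for n in (5, 20, 100):  # manipulated sample sizes
    sample = [random.random() < true_p for _ in range(n)]
    print(f"n={n:3d}  Bayesian={bayesian_estimate(sample):.3f}  "
          f"frequentist={frequentist_estimate(sample):.3f}")
```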


2021 ◽  
Vol 14 ◽  
Author(s):  
LiQin Sheng ◽  
HaiRong Ma ◽  
YuanYuan Shi ◽  
ZhenYu Dai ◽  
JianGuo Zhong ◽  
...  

Cortical thickness (CTh) analysis via surface-based morphometry is a popular method to characterize brain morphometry. Many studies have investigated CTh abnormalities in migraine, but their results have been inconsistent and even conflicting. These divergent results hinder us from obtaining a clear picture of brain morphometry regarding CTh alterations in migraine. Coordinate-based meta-analysis (CBMA) is a promising technique for quantitatively pooling individual neuroimaging studies to identify brain areas consistently involved. Electronic databases (PubMed, EMBASE, Web of Science, China National Knowledge Infrastructure, WanFang, and SinoMed) and other sources (bioRxiv and the reference lists of relevant articles and reviews) were systematically searched for studies that compared regional CTh between patients with migraine and healthy controls (HCs), up to May 15, 2020. A CBMA was performed using the Seed-based d Mapping with Permutation of Subject Images approach. In total, we identified 16 studies reporting 17 datasets eligible for the CBMA. The 17 datasets included 872 patients with migraine (average sample size 51.3, mean age 39.6 years, 721 females) and 949 HCs (average sample size 59.3, mean age 44.2 years, 680 females). The CBMA detected no statistically significant, consistent CTh alterations in patients with migraine relative to HCs. Sensitivity and subgroup analyses confirmed that this result was robust. Meta-regression analyses revealed that the result was not confounded by age, gender, aura, attack frequency per month, or illness duration. Our CBMA adds to the increasingly recognized evidence of a replication crisis in neuroimaging research. Many potential confounders, such as underpowered samples, heterogeneous patient-selection criteria, and differences in image acquisition and methodology, may contribute to the inconsistent CTh alterations reported in migraine and merit attention before future research on this topic is planned.


2020 ◽  
Author(s):  
LiQin Sheng ◽  
HaiRong Ma ◽  
YuanYuan Shi ◽  
ZhenYu Dai ◽  
JianGuo Zhong ◽  
...  

Abstract
Background: Cortical thickness (CTh) analysis is a popular method to characterize brain morphometry. Many studies have investigated CTh abnormalities in migraine, but their results have been inconsistent and even conflicting. These divergent results hinder us from obtaining a clear picture of brain morphometry regarding CTh alterations in migraine. Coordinate-based meta-analysis (CBMA) is a promising technique for quantitatively pooling individual neuroimaging studies to identify brain areas consistently involved.
Methods: Electronic databases (PubMed, Embase, Web of Science, China National Knowledge Infrastructure, WanFang, and SinoMed) and other sources (bioRxiv and the reference lists of relevant articles and reviews) were systematically searched for studies that compared regional CTh between patients with migraine and healthy controls (HCs), up to May 15, 2020. A CBMA was performed using the Seed-based d Mapping with Permutation of Subject Images (SDM-PSI) approach.
Results: In total, we identified 16 studies reporting 17 datasets eligible for the CBMA. The 17 datasets included 872 patients with migraine (average sample size 51.3, mean age 39.6 years, 721 females) and 949 HCs (average sample size 59.3, mean age 44.2 years, 680 females). The CBMA detected no statistically significant, consistent CTh alterations in patients with migraine relative to HCs. Sensitivity and subgroup analyses confirmed that this result was robust. Meta-regression analyses revealed that the result was not confounded by age, gender, aura, attack frequency per month, or illness duration.
Conclusions: Our CBMA adds to the increasingly recognized evidence of a replication crisis in neuroimaging research. The current evidence suggests that CTh is not a reliable biomarker of migraine. Many potential confounders, such as underpowered samples, heterogeneous patient-selection criteria, and differences in image acquisition and methodology, may contribute to the inconsistent CTh alterations reported in migraine and merit attention before future research on this topic is planned.
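For readers unfamiliar with effect-size pooling, here is a minimal sketch of the random-effects aggregation (DerSimonian-Laird) that underlies meta-analytic pooling in general. SDM-PSI itself works voxelwise on whole-brain maps and is far more involved; the function and all effect sizes and variances below are made-up illustrations, not the paper's data or the SDM-PSI algorithm.

```python
# Minimal random-effects pooling (DerSimonian-Laird) of per-study
# effect sizes -- an illustration of the statistical idea behind
# meta-analysis, not the SDM-PSI pipeline.
import math

def dersimonian_laird(effects, variances):
    """Pool per-study effect sizes under a random-effects model."""
    w = [1.0 / v for v in variances]                  # fixed-effect weights
    fixed = sum(wi * yi for wi, yi in zip(w, effects)) / sum(w)
    q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, effects))
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (len(effects) - 1)) / c)     # between-study variance
    w_re = [1.0 / (v + tau2) for v in variances]      # random-effects weights
    pooled = sum(wi * yi for wi, yi in zip(w_re, effects)) / sum(w_re)
    se = math.sqrt(1.0 / sum(w_re))
    return pooled, se, tau2

# Hypothetical cortical-thickness effect sizes (Hedges g) from 5 studies:
effects = [0.30, -0.10, 0.05, 0.22, -0.25]
variances = [0.04, 0.03, 0.05, 0.06, 0.04]
pooled, se, tau2 = dersimonian_laird(effects, variances)
print(f"pooled g = {pooled:.3f} +/- {1.96 * se:.3f} (95% CI), tau2 = {tau2:.3f}")
```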


2019 ◽  
Author(s):  
Peter E Clayson ◽  
Kaylie Amanda Carbine ◽  
Scott Baldwin ◽  
Michael J. Larson

Methodological reporting guidelines for studies of event-related potentials (ERPs) were updated in Psychophysiology in 2014. These guidelines facilitate the communication of key methodological parameters (e.g., preprocessing steps). Failing to report key parameters is a barrier to replication efforts, and difficulty with replicability increases in the presence of small sample sizes and low statistical power. We assessed whether the guidelines are followed and estimated the average sample size and statistical power in recent research. Reporting behavior, sample sizes, and statistical designs were coded for 150 randomly sampled articles, published from 2011 to 2017 in five high-impact journals that frequently publish ERP research. An average of 63% of guidelines were reported, and reporting behavior was similar across journals, suggesting that gaps in reporting are a shortcoming of the field rather than of any specific journal. Publication of the guidelines paper had no impact on reporting behavior, suggesting that editors and peer reviewers are not enforcing these recommendations. The average sample size per group was 21. Statistical power was conservatively estimated as .72-.98 for a large effect size, .35-.73 for a medium effect, and .10-.18 for a small effect. These findings indicate that failure to report key guidelines is ubiquitous and that ERP studies are powered primarily to detect large effects. Such low power and insufficient adherence to reporting guidelines represent substantial barriers to replication efforts. The methodological transparency and replicability of studies can be improved by openly sharing processing code and experimental tasks and by conducting a priori sample size calculations to ensure adequately powered studies.
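A minimal sketch of the kind of power calculation behind such estimates, assuming a two-sided independent-samples t-test at alpha = .05 with the reported average of 21 participants per group. The paper spans several statistical designs, so these numbers are illustrative approximations rather than a reproduction of its exact figures.

```python
# Power for a two-sided two-sample t-test at n = 21 per group,
# across Cohen's conventional small/medium/large effect sizes.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
for label, d in [("small", 0.2), ("medium", 0.5), ("large", 0.8)]:
    power = analysis.power(effect_size=d, nobs1=21, alpha=0.05, ratio=1.0)
    print(f"{label:6s} effect (d = {d}): power = {power:.2f}")

# A priori calculation: n per group needed for .80 power at a medium effect.
n_needed = analysis.solve_power(effect_size=0.5, power=0.80, alpha=0.05)
print(f"n per group for 80% power at d = 0.5: {n_needed:.0f}")
```

Run as written, this reproduces the lower bounds of the reported ranges (roughly .10, .35, and .72) and shows why n = 21 per group is adequate only for large effects.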


2018 ◽  
Vol 8 (1) ◽  
pp. 120
Author(s):  
Steven B. Kim ◽  
Jeffrey O. Wand

In the medical, health, and sports sciences, researchers want devices with high reliability and validity. This article focuses on reliability and validity studies with n subjects and m ≥ 2 repeated measurements per subject. High statistical power can be achieved by increasing n or m, and increasing m is often easier than increasing n in practice, unless m is so large that it introduces systematic bias. The sequential probability ratio test (SPRT) is a useful statistical method that can conclude in favor of a null hypothesis H0 or an alternative hypothesis H1 with, on average, 50% of the sample size required by a non-sequential test. The traditional SPRT requires the likelihood function of each observed random variable, and evaluating the likelihood ratio after every single observation of a subject can be a practical burden. Instead, the m observed random variables per subject can be transformed into a test statistic with a known sampling distribution under H0 and under H1, which allows the SPRT to be formulated on a sequence of test statistics. In this article, three types of study are considered: reliability of a device, reliability of a device relative to a criterion device, and validity of a device relative to a criterion device. Using the SPRT to test the reliability of a device results, for small m, in an average sample size of about 50% of the fixed sample size of a non-sequential test. For comparing a device to a criterion, the average sample size approaches approximately 60% as m increases. The SPRT tolerates violation of the normality assumption in validity studies, but not in reliability studies.
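A minimal sketch of Wald's classical SPRT for a normal mean with known variance, to make the sequential stopping idea concrete. It is a simplification of the article's test-statistic-based SPRT; the hypotheses, error rates, and data stream below are all chosen for illustration.

```python
# Wald's sequential probability ratio test for H0: mu = mu0 vs
# H1: mu = mu1, observing one value at a time and stopping when the
# cumulative log-likelihood ratio crosses a Wald boundary.
import math
import random

def sprt_normal_mean(stream, mu0=0.0, mu1=1.0, sigma=1.0,
                     alpha=0.05, beta=0.20):
    upper = math.log((1 - beta) / alpha)   # accept H1 above this
    lower = math.log(beta / (1 - alpha))   # accept H0 below this
    llr = 0.0
    n = 0
    for x in stream:
        n += 1
        # log likelihood ratio of one N(mu, sigma^2) observation
        llr += ((x - mu0) ** 2 - (x - mu1) ** 2) / (2 * sigma ** 2)
        if llr >= upper:
            return "accept H1", n
        if llr <= lower:
            return "accept H0", n
    return "undecided", n

random.seed(7)
data = (random.gauss(1.0, 1.0) for _ in range(10_000))  # H1 is true here
decision, n_used = sprt_normal_mean(data)
print(decision, "after", n_used, "observations")
```

Repeating the run over many simulated streams shows the sequential test stopping, on average, well before a fixed-sample design of equivalent power, which is the savings the abstract quantifies at roughly 50-60%.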

