Sample size justification
Recently Published Documents

TOTAL DOCUMENTS: 9 (five years: 7)
H-INDEX: 2 (five years: 1)

2021 ◽  
Author(s):  
James Edward Bartlett ◽  
Sarah Jane Charles

Authors have highlighted for decades that sample size justification through power analysis is the exception rather than the rule. Even when authors do report a power analysis, there is often no justification for the smallest effect size of interest, or they do not provide enough information for the analysis to be reproducible. We argue one potential reason for these omissions is the lack of a truly accessible introduction to the key concepts and decisions behind power analysis. In this tutorial, we demonstrate a priori and sensitivity power analysis using jamovi for two independent samples and two dependent samples. Respectively, these power analyses allow you to ask the questions: “How many participants do I need to detect a given effect size?”, and “What effect sizes can I detect with a given sample size?”. We emphasise how power analysis is most effective as a reflective process during the planning phase of research to balance your inferential goals with your available resources. By the end of the tutorial, you will be able to understand the fundamental concepts behind power analysis and extend them to more advanced statistical models.
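The two questions in this abstract map directly onto a priori and sensitivity power analysis. As a rough illustration only (the tutorial itself uses jamovi, not Python; this is an assumed translation using statsmodels, not the authors' code):

```python
# A priori and sensitivity power analysis for two independent samples,
# mirroring the two questions above (illustrative sketch).
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# A priori: how many participants per group are needed to detect
# d = 0.5 with alpha = .05 and 80% power (two-sided)?
n_per_group = analysis.solve_power(effect_size=0.5, alpha=0.05, power=0.8)

# Sensitivity: what effect size is detectable with 50 participants
# per group at the same alpha and power?
detectable_d = analysis.solve_power(nobs1=50, alpha=0.05, power=0.8)

print(round(n_per_group))      # ~64 per group
print(round(detectable_d, 2))  # ~0.57
```

Note how the same solver answers either question depending on which parameter is left unspecified, which is the reflective trade-off between inferential goals and resources that the tutorial emphasises.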


2021 ◽  
Vol 5 (Supplement_1) ◽  
pp. 199-200
Author(s):  
Derek Isaacowitz

Abstract
Some GSA journals are especially interested in promoting transparency and open science practices, reflecting how some subdisciplines in aging are moving toward open science practices faster than others. In this talk, I will consider the transparency and open science practices that seem most relevant to aging researchers, such as preregistration, open data, open materials and code, sample size justification and analytic tools for considering null effects. I will also discuss potential challenges to implementing these practices as well as reasons why it is important to do so despite these challenges. The focus will be on pragmatic suggestions for researchers planning and conducting studies now that they hope to publish later.


2021 ◽  
Author(s):  
Christopher McCrum ◽  
Jorg van Beek ◽  
Charlotte Schumacher ◽  
Sanne Janssen ◽  
Bas Van Hooren

Background: Context regarding how researchers determine the sample size of their experiments is important for interpreting the results and determining their value and meaning. Between 2018 and 2019, the journal Gait & Posture introduced a requirement for sample size justification in their author guidelines.
Research Question: How frequently and in what ways are sample sizes justified in Gait & Posture research articles, and was the inclusion of a guideline requiring sample size justification associated with a change in practice?
Methods: The guideline was not in place prior to May 2018 and was in place from 25th July 2019. All articles in the three most recent volumes of the journal (84-86) and the three most recent pre-guideline volumes (60-62) at the time of preregistration were included in this analysis. This provided an initial sample of 324 articles (176 pre-guideline and 148 post-guideline). Articles were screened by two authors to extract author data, article metadata and sample size justification data. Specifically, screeners identified whether (yes or no) and how sample sizes were justified. Six potential justification types (Measure Entire Population, Resource Constraints, Accuracy, A priori Power Analysis, Heuristics, No Justification) and an additional option of Other/Unsure/Unclear were used.
Results: In most cases, authors of Gait & Posture articles did not provide a justification for their study's sample size. The inclusion of the guideline was associated with a modest increase in the percentage of articles providing a justification (16.6% to 28.1%). A priori power calculations were the dominant type of justification, but many were not reported in enough detail to allow replication.
Significance: Gait & Posture researchers should be more transparent in how they determine their sample sizes and carefully consider whether they are suitable. Editors and journals may consider adding a similar guideline as a low-resource way to improve sample size justification reporting.


2021 ◽  
Author(s):  
Daniel Lakens

An important step when designing a study is to justify the sample size that will be collected. The key aim of a sample size justification is to explain how the collected data are expected to provide valuable information given the inferential goals of the researcher. In this overview article, six approaches are discussed to justify the sample size in a quantitative empirical study: 1) collecting data from (almost) the entire population, 2) choosing a sample size based on resource constraints, 3) performing an a priori power analysis, 4) planning for a desired accuracy, 5) using heuristics, or 6) explicitly acknowledging the absence of a justification. An important question to consider when justifying sample sizes is which effect sizes are deemed interesting, and the extent to which the data that are collected inform inferences about these effect sizes. Depending on the sample size justification chosen, researchers could consider 1) what the smallest effect size of interest is, 2) which minimal effect size will be statistically significant, 3) which effect sizes they expect (and what they base these expectations on), 4) which effect sizes would be rejected based on a confidence interval around the effect size, 5) which ranges of effects a study has sufficient power to detect based on a sensitivity power analysis, and 6) which effect sizes are plausible in a specific research area. Researchers can use the guidelines presented in this article to improve their sample size justification and, hopefully, align the informational value of a study with their inferential goals.
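Among the considerations listed in this abstract, the minimal effect size that will be statistically significant can be computed directly from the critical test statistic. A minimal sketch for a two-sided, two-sample t-test (illustrative, using scipy; not code from the article):

```python
# Smallest effect size (Cohen's d) that would reach p < .05 in a
# two-sided, two-sample t-test with n participants per group.
# Uses d = t_crit * sqrt(1/n1 + 1/n2) for equal group sizes.
from scipy.stats import t

def critical_d(n_per_group: int, alpha: float = 0.05) -> float:
    df = 2 * n_per_group - 2
    t_crit = t.ppf(1 - alpha / 2, df)
    return t_crit * (2 / n_per_group) ** 0.5

print(round(critical_d(50), 2))  # ~0.40
```

Observed effects smaller than this value cannot reach significance at the chosen alpha, which is why reporting it alongside the sample size makes the justification easier to evaluate.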


Kidney Cancer ◽  
2020 ◽  
Vol 4 (4) ◽  
pp. 185-195
Author(s):  
Nicola J. Lawrence ◽  
Andrew Martin ◽  
Ian D. Davis ◽  
Simon Troon ◽  
Shomik Sengupta ◽  
...  

BACKGROUND: Little has been published regarding how doctors think and talk about prognosis and the potential benefits of adjuvant therapy. OBJECTIVE: We sought predictions of survival rates and survival times, for patients with and without adjuvant therapy, from the clinicians of patients participating in a randomised trial of adjuvant sorafenib after nephrectomy for renal cell carcinoma. METHODS: A subset of medical oncologists and urologists in the SORCE trial completed questionnaires eliciting their predictions of survival rates and survival times, with and without adjuvant sorafenib, for each of their participating patients. To compare predictions elicited as survival times versus survival rates, we transformed survival times to survival rates. To compare predicted benefits elicited as absolute improvements in rates and times, we transformed them into hazard ratios (HR), a measure of relative benefit. We postulated that a plausible benefit in overall survival (OS) should be smaller than that hypothesized for disease-free survival (DFS) in the trial's original sample size justification (i.e. HR for OS should be ≥ 0.75). RESULTS: Sixty-one medical oncologists and 17 urologists completed questionnaires on 216 patients between 2007 and 2013. Predictions of survival without adjuvant sorafenib were similar whether elicited as survival rates or survival times (median 5-year survival rate of 61% vs 60%, p = 0.6). Predicted benefits of sorafenib were larger when elicited as improvements in survival rates than survival times (median HR 0.76 vs 0.83, p < 0.0001). The proportion of HRs for predicted OS with sorafenib that reflected a plausible benefit (a smaller effect of sorafenib on OS than hypothesized on DFS, i.e. HR ≥ 0.75) was 51% for survival rates and 65% for survival times.
CONCLUSIONS: The predicted benefits of adjuvant sorafenib were larger when elicited as improvements in survival rates than as survival times, and were often larger than the benefit hypothesized in the trial's sample size justification. These potential biases should be considered when thinking and talking about individual patients in clinical practice, and when designing clinical trials.
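The rate-to-HR transformation described in the Methods can be sketched as follows, under the standard assumption of proportional hazards (the numbers below are illustrative, not trial data):

```python
# Convert predicted survival rates with and without treatment into a
# hazard ratio. Under proportional hazards, S_treated = S_control**HR,
# so HR = ln(S_treated) / ln(S_control).
import math

def hazard_ratio(surv_control: float, surv_treated: float) -> float:
    return math.log(surv_treated) / math.log(surv_control)

# e.g. a predicted improvement from 60% to 66% 5-year survival
hr = hazard_ratio(0.60, 0.66)
print(round(hr, 2))  # ~0.81
```

Expressing an absolute rate improvement as an HR is what allows the elicited predictions to be compared against the HR ≥ 0.75 plausibility threshold taken from the trial's sample size justification.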


2020 ◽  
Vol 4 (Supplement_1) ◽  
pp. 858-858
Author(s):  
Derek Isaacowitz

Abstract
Some GSA journals are especially interested in promoting transparency and open science practices, reflecting how some subdisciplines in aging are moving toward open science practices faster than others. In this talk, I will consider the transparency and open science practices that seem most relevant to aging researchers, such as preregistration, open data, open materials and code, sample size justification and analytic tools for considering null effects. I will also discuss potential challenges to implementing these practices as well as reasons why it is important to do so despite these challenges. The focus will be on pragmatic suggestions for researchers planning and conducting studies now that they hope to publish later.


2019 ◽  
Vol 58 (1) ◽  
pp. 3-10 ◽  
Author(s):  
Maria Olsen ◽  
Mona Ghannad ◽  
Christianne Lok ◽  
Patrick M. Bossuyt

Abstract
Background: Shortcomings in study design have been hinted at as one of the possible causes of failures in the translation of discovered biomarkers into the care of ovarian cancer patients, but systematic assessments of biomarker studies are scarce. We aimed to document study design features of recently reported evaluations of biomarkers in ovarian cancer.
Methods: We performed a systematic search in PubMed (MEDLINE) for reports of studies evaluating the clinical performance of putative biomarkers in ovarian cancer. We extracted data on study designs and characteristics.
Results: Our search resulted in 1026 studies; 329 (32%) were found eligible after screening, of which we evaluated the first 200. Of these, 93 (47%) were single-center studies. Few studies reported eligibility criteria (17%), sampling methods (10%), or a sample size justification or power calculation (3%). Studies often used disjoint groups of patients, sometimes with extreme phenotypic contrasts; 46 studies included healthy controls (23%), but only five (3%) had exclusively included advanced-stage cases.
Conclusions: Our findings confirm the presence of suboptimal features in clinical evaluations of ovarian cancer biomarkers. This may lead to premature claims about the clinical value of these markers or, alternatively, the risk of discarding potential biomarkers that are urgently needed.


2018 ◽  
Vol 31 (3) ◽  
pp. e100011
Author(s):  
Hongyue Wang ◽  
Bokai Wang ◽  
Xin M Tu ◽  
Jinyuan Liu ◽  
Changyong Feng

Sample size justification is a crucial part of the design of clinical trials. In this paper, the authors derive a new formula to calculate the sample size for a binary outcome given one of three popular indices of treatment effect: the risk difference, the risk ratio, or the odds ratio (OR). The sample size based on the absolute risk difference is the fundamental one, from which the sample size given the risk ratio or OR can be easily derived.
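The paper's new formula is not reproduced here, but the standard per-group sample size calculation for a binary outcome on the absolute risk difference scale looks like this (a textbook sketch, not the authors' derivation):

```python
# Per-group sample size for detecting a risk difference p1 - p2 in a
# two-sided two-proportion z-test (standard textbook formula with
# unpooled variance, not the formula derived in the paper).
import math
from scipy.stats import norm

def n_per_group(p1: float, p2: float,
                alpha: float = 0.05, power: float = 0.8) -> int:
    z_a = norm.ppf(1 - alpha / 2)   # critical value for two-sided alpha
    z_b = norm.ppf(power)           # quantile corresponding to power
    var = p1 * (1 - p1) + p2 * (1 - p2)
    n = (z_a + z_b) ** 2 * var / (p1 - p2) ** 2
    return math.ceil(n)

# e.g. control event rate 30% vs treatment event rate 20%
print(n_per_group(0.30, 0.20))  # 291 per group
```

Because the denominator is the squared risk difference, the same (p1, p2) pair fully determines the sample size; a requirement stated as a risk ratio or OR can be converted to this form once the control rate is fixed, which is the sense in which the risk-difference calculation is fundamental.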


1998 ◽  
Vol 17 (2) ◽  
pp. 63-66 ◽  
Author(s):  
DeJuran Richardson ◽  
Sue Leurgans
