The Sumatriptan/Naratriptan Aggregated Patient (SNAP) Database: Aggregation, Validation and Application

Cephalalgia ◽  
2004 ◽  
Vol 24 (7) ◽  
pp. 586-595 ◽  
Author(s):  
C Barrows ◽  
W Saunders ◽  
R Austin ◽  
G Putnam ◽  
H Mansbach ◽  
...  

Pooled data from multiple clinical trials can provide information for medical decision-making that typically cannot be derived from a single clinical trial. By increasing the sample size beyond that achievable in a single clinical trial, pooling individual-patient data from multiple trials provides additional statistical power to detect possible effects of study medication, confers the ability to detect rare outcomes, and facilitates evaluation of effects among subsets of patients. Data from pharmaceutical company-sponsored clinical trials lend themselves to data-pooling, meta-analysis, and data mining initiatives. Pharmaceutical company-sponsored clinical trials are arguably among the most rigorously designed and conducted of studies involving human subjects, as a result of multidisciplinary collaboration involving clinical, academic and/or governmental investigators as well as the input and review of medical institutional bodies and regulatory authorities. This paper describes the aggregation, validation and initial analysis of data from the sumatriptan/naratriptan aggregated patient (SNAP) database, which to date comprises pooled individual-patient data from 128 clinical trials conducted from 1987 to 1998 with the migraine medications sumatriptan and naratriptan. With an extremely large sample size (>28,000 migraineurs, >140,000 treated migraine attacks), the SNAP database allows exploration of questions about migraine and the efficacy and safety of migraine medications that cannot be answered in single clinical trials enrolling smaller numbers of patients. Besides providing an adequate sample size to address specific questions, the SNAP database allows for subgroup analyses that are not possible in individual trial analyses due to small sample size. The SNAP database exemplifies how the wealth of data from pharmaceutical company-sponsored clinical trials can be reused to continue to provide benefit.
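The rare-outcome argument is easy to make concrete: the chance of observing at least one occurrence of a rare event grows quickly with the number of treated attacks. A minimal sketch, using an illustrative event rate rather than any SNAP estimate:

```python
# Probability of observing at least one event with background rate p
# across n independent treated attacks: 1 - (1 - p)^n.
def prob_at_least_one(p, n):
    return 1 - (1 - p) ** n

# Illustrative rate of 1 event per 10,000 attacks (not a SNAP figure).
single_trial = prob_at_least_one(1e-4, 1_000)    # one trial's worth of attacks
pooled = prob_at_least_one(1e-4, 140_000)        # SNAP-scale pooled sample

print(f"{single_trial:.3f}")  # ~0.095: easily missed in one trial
print(f"{pooled:.3f}")        # ~1.000: near-certain to appear when pooled
```

At a single-trial scale the event would usually go unobserved; at the pooled scale it is almost guaranteed to surface, which is the core rationale for aggregation.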

2021 ◽  
pp. bmjebm-2020-111603
Author(s):  
John Ferguson

Commonly accepted statistical advice dictates that large-sample size and highly powered clinical trials generate more reliable evidence than trials with smaller sample sizes. This advice is generally sound: treatment effect estimates from larger trials tend to be more accurate, as witnessed by tighter confidence intervals in addition to reduced publication biases. Consider then two clinical trials testing the same treatment which result in the same p values, the trials being identical apart from differences in sample size. Assuming statistical significance, one might at first suspect that the larger trial offers stronger evidence that the treatment in question is truly effective. Yet, often precisely the opposite will be true. Here, we illustrate and explain this somewhat counterintuitive result and suggest some ramifications regarding interpretation and analysis of clinical trial results.
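The mechanism behind the counterintuitive result can be sketched numerically: for a fixed two-sided p-value from a z-test, the implied point estimate of the effect shrinks as 1/&radic;n, so the larger of two equally "significant" trials is the one reporting the smaller effect. A sketch under a known-variance normal model, with illustrative numbers:

```python
import math
from statistics import NormalDist

# For a two-sided z-test with known SD sigma, a fixed p-value fixes
# |z| = |effect_hat| * sqrt(n) / sigma, so the effect estimate implied
# by the same p-value shrinks as the sample size grows.
def implied_effect(p_value, n, sigma=1.0):
    z = abs(NormalDist().inv_cdf(p_value / 2))
    return z * sigma / math.sqrt(n)

small = implied_effect(0.04, 100)     # ~0.205 SD units
large = implied_effect(0.04, 10_000)  # ~0.021 SD units
```

The same p = 0.04 corresponds to a ten-fold smaller estimated effect in the larger trial, which is why, under a sceptical prior about plausible effect sizes, it can constitute weaker evidence of a clinically meaningful benefit.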


1990 ◽  
Vol 29 (03) ◽  
pp. 243-246 ◽  
Author(s):  
M. A. A. Moussa

Abstract Various approaches are considered for adjusting clinical trial size for patient noncompliance. Such approaches model the effect of noncompliance through comparison of either two survival distributions or two simple proportions. Models that allow noncompliance and event rates to vary between time intervals are also considered. The approach that models the noncompliance adjustment on the basis of survival functions is conservative and hence requires a larger sample size. The model selected for noncompliance adjustment depends upon the available estimates of noncompliance and event rate patterns.
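A common simple-proportions version of such an adjustment (a standard intention-to-treat dilution argument, not necessarily this paper's exact models) shrinks the observable effect by the noncompliance fraction and inflates the sample size accordingly:

```python
import math

# Intention-to-treat dilution: if a fraction q of patients do not comply
# (and receive no treatment benefit), the observable effect shrinks by
# (1 - q), so the required sample size per group grows by 1 / (1 - q)**2.
def adjusted_n(n_required, noncompliance_rate):
    return math.ceil(n_required / (1 - noncompliance_rate) ** 2)

print(adjusted_n(243, 0.10))  # 300: 10% noncompliance -> ~23% more patients
```

The survival-function-based approaches the abstract calls conservative would inflate the sample size further than this simple factor.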


2021 ◽  
Vol 14 ◽  
pp. 175628642097591
Author(s):  
Thomas F. Scott ◽  
Ray Su ◽  
Kuangnan Xiong ◽  
Arman Altincatal ◽  
Carmen Castrillo-Viguera ◽  
...  

Background: Peginterferon beta-1a and glatiramer acetate (GA) are approved first-line therapies for the treatment of relapsing forms of multiple sclerosis, but their therapeutic efficacy has not been compared directly. Methods: Clinical outcomes at 2 years, including no evidence of disease activity (NEDA), for patients receiving peginterferon beta-1a 125 mcg every 2 weeks (Q2W) or GA 20 mg/ml once daily (QD) were compared by propensity score matching analysis using individual patient data from ADVANCE and CONFIRM phase III clinical trials. In addition, clinical outcomes at 1–3 years for patients receiving peginterferon beta-1a Q2W or GA 40 mg/ml three times a week (TIW) were evaluated using a matching-adjusted comparison analysis of individual patient data from ADVANCE and the ADVANCE extension study, ATTAIN, and aggregate patient data from the phase III GALA and the GALA extension studies. Results: Propensity-score-matched peginterferon beta-1a patients (n = 336) had a significantly lower annualized relapse rate [ARR (0.204 versus 0.282); rate ratio = 0.724; p = 0.045], a significantly lower probability of 12-week confirmed disability worsening (10.0% versus 14.6%; hazard ratio = 0.625; p = 0.048), and a significantly higher rate of NEDA (20.3% versus 11.5%; p = 0.047) compared with GA 20 mg/ml QD patients after 2 years of treatment. Matching-adjusted peginterferon beta-1a patients (effective n = 276) demonstrated a similar ARR at 1 year (0.278 versus 0.318; p = 0.375) and significantly lower ARR at 2 years (0.0901 versus 0.203; p = 0.032) and 3 years (0.109 versus 0.209; p = 0.047) compared with GA 40 mg/ml TIW patients (n = 834). Conclusion: Results from separate matching comparisons of phase III clinical trials and extension studies suggest that peginterferon beta-1a 125 mcg Q2W may provide better clinical outcomes than GA (20 mg/ml QD or 40 mg/ml TIW).
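Propensity score matching, as used in the first comparison, pairs each treated patient with the control whose score (estimated probability of treatment given baseline covariates) is closest. A minimal greedy 1:1 matcher, assuming the scores are already estimated; the function and caliper value are illustrative, not the study's implementation:

```python
# Greedy 1:1 nearest-neighbour matching on precomputed propensity scores.
# treated / controls: lists of (patient_id, propensity_score).
def greedy_match(treated, controls, caliper=0.05):
    matches, available = [], dict(controls)
    for pid, score in sorted(treated, key=lambda t: t[1]):
        if not available:
            break
        best = min(available, key=lambda c: abs(available[c] - score))
        if abs(available[best] - score) <= caliper:  # only match within caliper
            matches.append((pid, best))
            del available[best]  # match without replacement
    return matches

pairs = greedy_match([("t1", 0.30), ("t2", 0.70)],
                     [("c1", 0.32), ("c2", 0.69), ("c3", 0.10)])
print(pairs)  # [('t1', 'c1'), ('t2', 'c2')]
```

Outcomes (ARR, disability worsening, NEDA) are then compared within the matched pairs, which is what makes the cross-trial contrast interpretable.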


1990 ◽  
Vol 47 (1) ◽  
pp. 2-15 ◽  
Author(s):  
Randall M. Peterman

Ninety-eight percent of recently surveyed papers in fisheries and aquatic sciences that did not reject some null hypothesis (H0) failed to report β, the probability of making a type II error (not rejecting H0 when it should have been), or statistical power (1 – β). However, 52% of those papers drew conclusions as if H0 were true. A false H0 could have been missed because of a low-power experiment, caused by small sample size or large sampling variability. Costs of type II errors can be large (for example, for cases that fail to detect harmful effects of some industrial effluent or a significant effect of fishing on stock depletion). Past statistical power analyses show that abundance estimation techniques usually have high β and that only large effects are detectable. I review relationships among β, power, detectable effect size, sample size, and sampling variability. I show how statistical power analysis can help interpret past results and improve designs of future experiments, impact assessments, and management regulations. I make recommendations for researchers and decision makers, including routine application of power analysis, more cautious management, and reversal of the burden of proof to put it on industry, not management agencies.
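The β/power bookkeeping Peterman reviews is mechanical to compute. A sketch under a normal approximation for a two-sided, two-sample comparison of means, with the effect size expressed in SD units:

```python
import math
from statistics import NormalDist

# Approximate power of a two-sided, two-sample comparison of means
# (normal approximation; effect_size in units of the common SD).
def power(effect_size, n_per_group, alpha=0.05):
    nd = NormalDist()
    z_alpha = nd.inv_cdf(1 - alpha / 2)
    z_effect = effect_size * math.sqrt(n_per_group / 2)
    return 1 - nd.cdf(z_alpha - z_effect)

# Peterman's point: small samples give high beta, so a null result says little.
print(round(power(0.5, 10), 2))   # ~0.20 -> beta ~ 0.80
print(round(power(0.5, 100), 2))  # ~0.94
```

With 10 per group, a medium effect is missed four times out of five, which is exactly the situation in which reporting "no significant effect" without β is misleading.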


2016 ◽  
Vol 195 (4S) ◽  
Author(s):  
Daniel Spratt ◽  
Yu-Wei Chen ◽  
Brandon Mahal ◽  
Joseph Osborne ◽  
Shuang Zhao ◽  
...  

2020 ◽  
Vol 16 (3) ◽  
pp. 1061-1074 ◽  
Author(s):  
Jörg Franke ◽  
Veronika Valler ◽  
Stefan Brönnimann ◽  
Raphael Neukom ◽  
Fernando Jaume-Santero

Abstract. Differences between paleoclimatic reconstructions are caused by two factors: the method and the input data. While many studies compare methods, we will focus in this study on the consequences of the input data choice in a state-of-the-art Kalman-filter paleoclimate data assimilation approach. We evaluate reconstruction quality in the 20th century based on three collections of tree-ring records: (1) 54 of the best temperature-sensitive tree-ring chronologies chosen by experts; (2) 415 temperature-sensitive tree-ring records chosen less strictly by regional working groups and statistical screening; (3) 2287 tree-ring series that are not screened for climate sensitivity. The three data sets cover the range from small sample size, small spatial coverage and strict screening for temperature sensitivity to large sample size and spatial coverage but no screening. Additionally, we explore a combination of these data sets plus screening methods to improve the reconstruction quality. A large, unscreened collection generally leads to a poor reconstruction skill. A small expert selection of extratropical Northern Hemisphere records allows for a skillful high-latitude temperature reconstruction but cannot be expected to provide information for other regions and other variables. We achieve the best reconstruction skill across all variables and regions by combining all available input data but rejecting records with insignificant climatic information (p value of regression model >0.05) and removing duplicate records. It is important to use a tree-ring proxy system model that includes both major growth limitations, temperature and moisture.


Author(s):  
Nehad J. Ahmed

Aims: This study aims to review the efficacy of chloroquine and hydroxychloroquine in the treatment of coronavirus disease 2019 (COVID-19)-associated pneumonia. Methodology: This review involved searching Google Scholar for publications about the use of hydroxychloroquine in the treatment of COVID-19 using the search terms (COVID-19) AND hydroxychloroquine. Results: Chloroquine and hydroxychloroquine have shown efficacy against the coronavirus in vitro in studies from China, but to date only a few clinical trials are available, and these were conducted on small patient samples. The efficacy of chloroquine and hydroxychloroquine is attributed mainly to their effect on angiotensin-converting enzyme II (ACE2). Conclusion: The use of chloroquine and hydroxychloroquine could be very promising, but more trials with larger sample sizes are needed, and more data are required comparing chloroquine and hydroxychloroquine with other antivirals.


2020 ◽  
Author(s):  
Santam Chakraborty ◽  
Indranil Mallick ◽  
Hung N Luu ◽  
Tapesh Bhattacharyya ◽  
Arunsingh Moses ◽  
...  

Abstract Introduction The current study was aimed at quantifying the disparity in geographic access to cancer clinical trials in India. Methods We collated data on cancer clinical trials from the Clinical Trial Registry of India (CTRI) and data on state-wise cancer incidence from the Global Burden of Disease Study. The total sample size for each clinical trial was divided by the trial duration to get the sample size per year. This was then divided by the number of states in which accrual was planned to get the sample size per year per state (SSY). For interventional trials investigating a therapy, the SSY was divided by the number of incident cancers in the state to get the SSY per 1,000 incident cancer cases. The SSY data were then mapped to visualise the geographical disparity. Results We identified 181 ongoing studies, of which 132 were interventional studies. There was substantial inter-state disparity, with a median SSY of 1.55 per 1,000 incident cancer cases (range 0.00 - 296.81 per 1,000 incident cases) for therapeutic interventional studies. Disparities were starker when cancer site-wise SSY was considered. Even in the state with the highest SSY, only 29.7% of newly diagnosed cancer cases had an available slot in a therapeutic cancer clinical trial. Disparities in access were also apparent between academic (range: 0.21 - 226.60) and industry-sponsored trials (range: 0.17 - 70.21). Conclusion There are significant geographic disparities in access to cancer clinical trials in India. Future investigations should evaluate the reasons for and mitigation approaches to such disparities.
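The SSY computation described in the Methods is simple arithmetic; a sketch with hypothetical numbers, not values from the CTRI data:

```python
# SSY metric as described above: sample size per year per state,
# normalised per 1,000 incident cancer cases in that state.
def ssy_per_1000(total_sample, trial_years, n_states, incident_cases):
    per_year = total_sample / trial_years     # sample size per year
    per_state = per_year / n_states           # ... per accruing state
    return per_state / incident_cases * 1000  # ... per 1,000 incident cases

# Hypothetical trial: 600 patients over 3 years across 5 states,
# in a state with 40,000 incident cancer cases per year.
print(ssy_per_1000(600, 3, 5, 40_000))  # 1.0
```

Note the equal-split assumption: dividing by the number of accruing states treats accrual as uniform across states, which is a simplification the metric inherits.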


2020 ◽  
Author(s):  
Chia-Lung Shih ◽  
Te-Yu Hung

Abstract Background A small sample size (n < 30 per treatment group) is usually enrolled to investigate the differences in efficacy between treatments for knee osteoarthritis (OA). The objective of this study was to use simulation to compare the power of four statistical methods for analysis of small sample sizes in detecting differences in efficacy between two treatments for knee OA. Methods A total of 10,000 replicates of 5 sample sizes (n = 10, 15, 20, 25, and 30 per group) were generated based on previously reported measures of treatment efficacy. Four statistical methods were used to compare the differences in efficacy between treatments: the two-sample t-test (t-test), the Mann-Whitney U-test (M-W test), the Kolmogorov-Smirnov test (K-S test), and the permutation test (perm-test). Results The bias of the simulated parameter means decreased with increasing sample size, but the CV% of the simulated parameter means varied with sample size for all parameters. For the largest sample size (n = 30), the CV% reached a small level (<20%) for almost all parameters, but the bias did not. Among the non-parametric tests for analysis of small sample sizes, the perm-test had the highest statistical power, and its false positive rate was not affected by sample size. However, the power of the perm-test did not reach a high value (80%) even at the largest sample size (n = 30). Conclusion The perm-test is suggested for the analysis of small sample sizes to compare differences in efficacy between two treatments for knee OA.
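The best-performing method in this comparison, the permutation test, can be sketched as a Monte Carlo test on the difference in group means; an illustrative implementation, not the authors' code:

```python
import random

# Monte Carlo two-sample permutation test on the difference in means.
def perm_test(a, b, n_perm=10_000, seed=0):
    rng = random.Random(seed)
    pooled = a + b
    observed = abs(sum(a) / len(a) - sum(b) / len(b))
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)  # random relabelling of the pooled sample
        pa, pb = pooled[:len(a)], pooled[len(a):]
        if abs(sum(pa) / len(pa) - sum(pb) / len(pb)) >= observed:
            hits += 1
    return hits / n_perm  # p-value: share of relabellings at least as extreme
```

Because the null distribution is built from the data itself rather than from a distributional assumption, the test keeps its nominal false positive rate even at the small sample sizes studied here.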

