Epidemiologic Issues Related to Nasopharyngeal Radium Exposures

1996 ◽  
Vol 115 (5) ◽  
pp. 422-428
Author(s):  
Roy E. Shore

A number of topics are discussed related to the potential for and pitfalls in undertaking epidemiologic studies of the late effects of nasopharyngeal radium irradiation. The available evidence indicates that linear extrapolation of risk estimates from high-dose studies is a reasonable basis for estimating risk from radium exposure or other situations in which the radiation exposures were fairly low and fractionated. Epidemiologic study of populations given nasopharyngeal radium irradiation is worthwhile scientifically if several criteria can be met. It is very important that any such study have adequate statistical power, which is a function of the doses to the organs of interest and the radiation risk coefficients for those organs, as well as the available sample size. If the organ doses are low, a prohibitively large sample size would be required. Other problems with low-dose studies include the likelihood of false-positive results when a number of health end points are evaluated and the impact of dose uncertainties, small biases, and confounding factors that make interpretation uncertain. Cluster studies or studies of self-selected cohorts of irradiated patients are not recommended because of the potential for severe bias with such study designs. The ability to define subgroups of the population with heightened genetic susceptibility may become a reality in the next few years as genes conferring susceptibility to brain cancers or other head and neck tumors are identified; this scientific advance could greatly alter the prospects and approaches of epidemiologic studies.
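To make the statistical power argument concrete, the following minimal sketch (with entirely hypothetical values for the baseline risk, the excess relative risk per gray, and the organ dose) shows how the power of a simple exposed-versus-unexposed comparison depends on sample size when organ doses are low.

```python
# Minimal sketch (not the author's method): how statistical power for a
# low-dose cohort study depends on organ dose, risk coefficient, and sample
# size. All numeric values below are hypothetical placeholders.
import numpy as np
from scipy.stats import norm

def power_two_proportions(n_per_group, p0, p1, alpha=0.05):
    """Approximate one-sided power for comparing exposed vs unexposed cohorts."""
    p_bar = (p0 + p1) / 2
    se0 = np.sqrt(2 * p_bar * (1 - p_bar) / n_per_group)                 # SE under H0
    se1 = np.sqrt((p0 * (1 - p0) + p1 * (1 - p1)) / n_per_group)         # SE under H1
    z_alpha = norm.ppf(1 - alpha)
    return norm.cdf(((p1 - p0) - z_alpha * se0) / se1)

# Hypothetical inputs: baseline lifetime tumor risk, a linear excess relative
# risk per gray, and a low mean organ dose.
p0, err_per_gy, dose_gy = 0.005, 1.0, 0.05
p1 = p0 * (1 + err_per_gy * dose_gy)          # linear extrapolation of risk
for n in (5_000, 50_000, 500_000):
    print(n, round(power_two_proportions(n, p0, p1), 3))
```

With these assumed inputs, even half a million subjects per group yields only modest power, illustrating why low organ doses can make a study prohibitively large.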

Dose-Response ◽  
2017 ◽  
Vol 15 (2) ◽  
pp. 155932581771531
Author(s):  
Steven B. Kim ◽  
Nathan Sanders

For many dose–response studies, large samples are not available. In particular, when the outcome of interest is binary rather than continuous, a large sample size is required to provide evidence for hormesis at low doses. In a small or moderate sample, we can gain statistical power by using a parametric model. This is an efficient approach when the model is correctly specified, but it can be misleading otherwise. This research is motivated by the fact that data points at high experimental doses contribute too heavily to the hypothesis test when a parametric model is misspecified. In dose–response analyses, averaging multiple models to account for model uncertainty and to reduce the impact of model misspecification has been widely discussed in the literature. In this article, we propose averaging semiparametric models when testing for hormesis at low doses. We show by simulation the different characteristics of averaging parametric models and averaging semiparametric models. We apply the proposed method to real data and show that P values from averaged semiparametric models are more credible than P values from averaged parametric models. When the true dose–response relationship does not follow a parametric assumption, the proposed method can be a robust alternative approach.
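As a rough illustration of the model-averaging idea (a simplified parametric stand-in, not the authors' semiparametric procedure), the sketch below fits two candidate binary dose–response models to simulated data and combines their low-dose predictions with AIC weights; the dose levels, sample size, and true curve are all assumptions.

```python
# Simplified illustration of dose-response model averaging with AIC weights.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
dose = np.repeat([0.0, 0.1, 0.5, 1.0, 2.0, 4.0], 20)
p_true = 0.10 + 0.15 * dose / (1.0 + dose)      # assumed true response curve
y = rng.binomial(1, p_true)

# Candidate parametric mean structures: linear and quadratic in dose.
designs = {
    "linear":    sm.add_constant(dose),
    "quadratic": sm.add_constant(np.column_stack([dose, dose ** 2])),
}
fits = {k: sm.GLM(y, X, family=sm.families.Binomial()).fit() for k, X in designs.items()}

# AIC weights across the candidate models.
aic = np.array([f.aic for f in fits.values()])
w = np.exp(-0.5 * (aic - aic.min()))
w /= w.sum()

# Model-averaged predicted response probability at a low dose (0.1).
x_new = {"linear": np.array([[1.0, 0.1]]),
         "quadratic": np.array([[1.0, 0.1, 0.01]])}
p_hat = sum(wi * fits[k].predict(x_new[k])[0] for wi, k in zip(w, fits))
print("AIC weights:", dict(zip(fits, np.round(w, 3))))
print("averaged P(response | dose = 0.1):", round(p_hat, 3))
```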


Author(s):  
Luh Ade Yumita Handriani ◽  
Sudarsana Arka

This study aims to analyze the impact of the BPNT program on household consumption and on the consumption patterns of BPNT-recipient households in Mengwi District, Badung Regency. The research was conducted in Mengwi District, Badung Regency, using a questionnaire administered to a sample of 96 KPM (benefit-recipient households). The study uses path analysis to estimate the direct effects and the Sobel test to assess the indirect effect. Based on the path analysis, the study concluded that the BPNT variable had a positive and significant effect on the consumption of BPNT-recipient households in Mengwi District, Badung Regency. The BPNT variable had no effect on the consumption pattern of BPNT-recipient households. The household consumption variable had a negative and significant effect on the consumption pattern of BPNT-recipient households. The household consumption variable mediated the effect of the BPNT program on the consumption pattern of BPNT-recipient households in Mengwi District, Badung Regency.
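For reference, the Sobel test used for the indirect effect reduces to a simple formula, z = ab / sqrt(b²·SE_a² + a²·SE_b²). The sketch below implements it with hypothetical coefficients, not the study's estimates.

```python
# Minimal sketch of the Sobel test for a mediated (indirect) effect.
# The coefficients below are hypothetical placeholders.
import math
from scipy.stats import norm

def sobel_test(a, se_a, b, se_b):
    """a: effect of the program on the mediator (household consumption),
       b: effect of the mediator on the outcome (consumption pattern)."""
    se_ab = math.sqrt(b ** 2 * se_a ** 2 + a ** 2 * se_b ** 2)
    z = (a * b) / se_ab
    p = 2 * (1 - norm.cdf(abs(z)))           # two-sided p value
    return z, p

z, p = sobel_test(a=0.42, se_a=0.10, b=-0.35, se_b=0.09)   # assumed values
print(f"Sobel z = {z:.2f}, p = {p:.4f}")
```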


2020 ◽  
Vol 2020 (56) ◽  
pp. 176-187 ◽  
Author(s):  
Ethel S Gilbert ◽  
Mark P Little ◽  
Dale L Preston ◽  
Daniel O Stram

This article addresses issues relevant to interpreting findings from 26 epidemiologic studies of persons exposed to low-dose radiation. We review the extensive data, from both epidemiologic studies of persons exposed at moderate or high doses and from radiobiology, that together have firmly established radiation as carcinogenic. We then discuss the linear relative risk model that has been used to describe data from both low-dose and moderate- or high-dose studies. We consider the effects of dose measurement errors; these can reduce statistical power and lead to underestimation of risks but are very unlikely to produce a spurious dose response. We estimate statistical power for the low-dose studies under the assumption that the true risks of radiation-related cancers are those expected from studies of Japanese atomic bomb survivors. Finally, we discuss the interpretation of confidence intervals and statistical tests and the applicability of the Bradford Hill principles for a causal relationship.
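As a toy illustration of two of these points (not the authors' analysis), the sketch below simulates cases under the linear relative risk model RR(d) = 1 + βd and shows how classical-type dose measurement error attenuates the estimated slope rather than creating a spurious dose response; all parameter values are assumptions.

```python
# Small simulation: linear relative-risk model and dose-error attenuation.
# All values are assumed for illustration only.
import numpy as np

rng = np.random.default_rng(1)
n, beta, baseline = 500_000, 1.0, 0.05        # assumed cohort size, ERR/Gy, baseline risk
true_dose = rng.gamma(shape=2.0, scale=0.05, size=n)          # mean dose ~0.1 Gy
measured_dose = true_dose * rng.lognormal(0.0, 0.4, size=n)   # multiplicative ("classical-ish") error
cases = rng.binomial(1, baseline * (1 + beta * true_dose))

def err_slope(dose, cases, baseline):
    """Crude moment estimator of beta in RR(d) = 1 + beta*d (illustration only).
    The regression slope of the case indicator on dose equals baseline*beta here."""
    slope = np.polyfit(dose, cases, 1)[0]
    return slope / baseline

print("beta from true doses:    ", round(err_slope(true_dose, cases, baseline), 2))
print("beta from measured doses:", round(err_slope(measured_dose, cases, baseline), 2))
```

With these assumptions the slope estimated from mismeasured doses is attenuated toward zero, consistent with measurement error biasing risks downward rather than manufacturing a dose response.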


Cephalalgia ◽  
2004 ◽  
Vol 24 (7) ◽  
pp. 586-595 ◽  
Author(s):  
C Barrows ◽  
W Saunders ◽  
R Austin ◽  
G Putnam ◽  
H Mansbach ◽  
...  

Pooled data from multiple clinical trials can provide information for medical decision-making that typically cannot be derived from a single clinical trial. By increasing the sample size beyond that achievable in a single trial, pooling individual-patient data from multiple trials provides additional statistical power to detect possible effects of study medication, confers the ability to detect rare outcomes, and facilitates evaluation of effects among subsets of patients. Data from pharmaceutical company-sponsored clinical trials lend themselves to data-pooling, meta-analysis, and data-mining initiatives. Pharmaceutical company-sponsored clinical trials are arguably among the most rigorously designed and conducted studies involving human subjects, as a result of multidisciplinary collaboration involving clinical, academic, and/or governmental investigators as well as the input and review of medical institutional bodies and regulatory authorities. This paper describes the aggregation, validation, and initial analysis of data from the sumatriptan/naratriptan aggregate patient (SNAP) database, which to date comprises pooled individual-patient data from 128 clinical trials conducted from 1987 to 1998 with the migraine medications sumatriptan and naratriptan. With an extremely large sample size (>28,000 migraineurs, >140,000 treated migraine attacks), the SNAP database allows exploration of questions about migraine and the efficacy and safety of migraine medications that cannot be answered in single clinical trials enrolling smaller numbers of patients. Besides providing an adequate sample size to address specific questions, the SNAP database allows subgroup analyses that are not possible in individual trial analyses because of small sample sizes. The SNAP database exemplifies how the wealth of data from pharmaceutical company-sponsored clinical trials can be re-used to continue to provide benefit.
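A back-of-the-envelope calculation (not drawn from the SNAP analyses themselves) shows why pooled sample size matters for rare outcomes: the probability of observing at least one event of true frequency p among n treated attacks is 1 − (1 − p)^n, which grows rapidly with pooling.

```python
# Hedged illustration of rare-event detection under pooling.
# The event rate and attack counts are assumed, not taken from SNAP.
p_event = 1e-4                                  # assumed rare adverse-event rate per attack
for n in (1_000, 10_000, 140_000):              # single trial vs pooled database scale
    prob_seen = 1 - (1 - p_event) ** n
    print(f"n = {n:>7}: P(at least one event) = {prob_seen:.3f}")
```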


2019 ◽  
Author(s):  
Maximilien Chaumon ◽  
Aina Puce ◽  
Nathalie George

Statistical power is key for robust, replicable science. Here, we systematically explored how the numbers of trials and subjects affect statistical power in MEG sensor-level data. More specifically, we simulated "experiments" using the MEG resting-state dataset of the Human Connectome Project (HCP). We divided the data into two conditions, injected a dipolar source at a known anatomical location in the "signal condition" but not in the "noise condition", and detected significant differences at the sensor level with classical paired t-tests across subjects. Group-level detectability of these simulated effects varied drastically with anatomical origin. We thus examined in detail which spatial properties of the sources affected detectability, looking specifically at the distance from the closest sensor and the orientation of the source, and at the variability of these parameters across subjects. In line with previous single-subject studies, we found that the most detectable effects originate from source locations that are closest to the sensors and oriented tangentially with respect to the head surface. In addition, cross-subject variability in orientation also affected group-level detectability, boosting detection in regions where this variability was small and hindering detection in regions where it was large. Incidentally, we observed a considerable covariation of source position, orientation, and their cross-subject variability in individual brain anatomical space, making it difficult to assess the impact of each of these variables independently of one another. We thus also performed simulations in which we controlled spatial properties independently of individual anatomy. These additional simulations confirmed the strong impact of distance and orientation and further showed that orientation variability across subjects affects detectability, whereas position variability does not. Importantly, our study indicates that strict, unequivocal recommendations as to the ideal number of trials and subjects cannot realistically be provided for neurophysiological studies. Rather, it highlights the importance of considering the spatial constraints underlying the expected sources of activity while designing experiments.

Highlights:
- Adequate sample size (number of subjects and trials) is key to robust neuroscience.
- We simulated evoked MEG experiments and examined sensor-level detectability.
- Statistical power varied with source distance, orientation, and between-subject variability.
- Consider source detectability at the sensor level when designing MEG studies.
- Sample size for MEG studies? Consider the source with the lowest expected statistical power.
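As a schematic, highly simplified version of this group-level logic (plain NumPy/SciPy, not the authors' HCP pipeline), the sketch below assumes that the sensor-level amplitude of a source falls off with its distance to the sensors and estimates detectability as the fraction of simulated experiments in which a paired t-test across subjects is significant.

```python
# Toy group-level detectability simulation; all parameters are assumptions.
import numpy as np
from scipy.stats import ttest_rel

rng = np.random.default_rng(2)

def detection_rate(n_subjects, n_trials, distance_cm, n_sims=500, alpha=0.05):
    amplitude = 1.0 / distance_cm ** 2          # assumed fall-off with source-sensor distance
    hits = 0
    for _ in range(n_sims):
        noise_sd = 1.0 / np.sqrt(n_trials)      # trial averaging reduces per-subject noise
        signal = amplitude + rng.normal(0, noise_sd, n_subjects)   # "signal condition"
        control = rng.normal(0, noise_sd, n_subjects)              # "noise condition"
        if ttest_rel(signal, control).pvalue < alpha:
            hits += 1
    return hits / n_sims

for d in (3, 6, 9):                             # hypothetical source depths (cm)
    rate = detection_rate(n_subjects=25, n_trials=50, distance_cm=d)
    print(f"distance {d} cm: detection rate = {rate:.2f}")
```

Even this caricature reproduces the qualitative point: for fixed numbers of subjects and trials, detectability depends strongly on where the source sits.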


2016 ◽  
Vol 31 (4) ◽  
pp. 1093-1107 ◽  
Author(s):  
Melissa H. Ou ◽  
Mike Charles ◽  
Dan C. Collins

CPC requires the reforecast-calibrated Global Ensemble Forecast System (GEFS) to support the production of its official 6–10- and 8–14-day temperature and precipitation forecasts. While a large sample of forecast–observation pairs is desirable for estimating the necessary model climatology, variances, and covariances with observations, the reforecasts could be sampled so as to use available computing resources most efficiently. A series of experiments was done to assess the impact on calibrated forecast skill of using a smaller sample than the currently available reforecast dataset. This study focuses on the skill of week-2 probabilistic forecasts of 7-day-mean 2-m temperature and accumulated precipitation. The tercile forecasts are expressed as probabilities of below-, near-, and above-normal temperature and below-, near-, and above-median precipitation over the continental United States (CONUS). Calibration statistics were calculated with an ensemble regression technique from 25 yr of daily, 11-member GEFS reforecasts for 1986–2010, which were then used to postprocess the GEFS model forecasts for 2011–13. In assessing the skill of calibrated model output based on a reforecast dataset with fewer years, fewer ensemble members, and reforecasts run less frequently than daily, it was determined that reducing the number of ensemble members to six or fewer and reducing the frequency of reforecast runs from daily to once a week were achievable with minimal loss of skill. However, reducing the number of years of reforecasts to fewer than 25 resulted in greater skill degradation; the loss of skill was statistically significant when only 18 yr of reforecasts (1993–2010) were used to generate the model statistics.
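The core sampling question can be caricatured with a toy example (far simpler than the ensemble regression calibration used in the study): climatological tercile boundaries estimated from fewer reforecast years are noisier, which degrades any calibration built on them. All numbers below are synthetic.

```python
# Toy illustration of how fewer reforecast years inflate the error of
# estimated climatological tercile boundaries; data are synthetic.
import numpy as np

rng = np.random.default_rng(3)
full = rng.normal(0, 1, size=(25, 365))            # 25 years of daily "reforecasts"
true_terciles = np.percentile(full, [100 / 3, 200 / 3])

for n_years in (25, 18, 10, 5):
    errs = []
    for _ in range(200):                           # resample subsets of years
        subset = full[rng.choice(25, n_years, replace=False)]
        est = np.percentile(subset, [100 / 3, 200 / 3])
        errs.append(np.abs(est - true_terciles).mean())
    print(f"{n_years:>2} years: mean tercile-boundary error = {np.mean(errs):.3f}")
```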


2017 ◽  
Author(s):  
Benjamin O. Turner ◽  
Erick J. Paul ◽  
Michael B. Miller ◽  
Aron K. Barbey

Despite a growing body of research suggesting that task-based functional magnetic resonance imaging (fMRI) studies often suffer from a lack of statistical power due to too-small samples, the proliferation of such underpowered studies continues unabated. Using large independent samples across eleven distinct tasks, we demonstrate the impact of sample size on replicability, assessed at different levels of analysis relevant to fMRI researchers. We find that the degree of replicability for typical sample sizes is modest and that even sample sizes much larger than typical (e.g., N = 100) produce results that fall well short of perfect replicability. Thus, our results join the existing line of work advocating for larger sample sizes. Moreover, because we test sample sizes over a fairly large range and use intuitive metrics of replicability, our hope is that our results will be more understandable and convincing to researchers who may have found previous results advocating for larger samples inaccessible.
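One intuitive replicability metric is the correlation between group-level maps computed from two independent samples of the same size. The sketch below applies it to simulated voxelwise data; the effect-size distribution and noise level are assumptions, not the article's fMRI results.

```python
# Split-sample map correlation as a function of sample size (simulated data).
import numpy as np

rng = np.random.default_rng(4)
n_voxels = 5_000
true_effect = rng.normal(0, 0.3, n_voxels)      # assumed voxelwise effect sizes

def group_map(n_subjects):
    subj = true_effect + rng.normal(0, 1.0, size=(n_subjects, n_voxels))
    return subj.mean(axis=0)                    # group-average map

for n in (16, 30, 100, 500):
    r = [np.corrcoef(group_map(n), group_map(n))[0, 1] for _ in range(20)]
    print(f"N = {n:>3}: split-sample map correlation ≈ {np.mean(r):.2f}")
```

Under these assumptions the correlation rises with N but remains clearly below 1 even at N = 100, echoing the qualitative finding above.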


2021 ◽  
pp. 1-7
Author(s):  
Raphael Schuster ◽  
Tim Kaiser ◽  
Yannik Terhorst ◽  
Eva Maria Messner ◽  
Lucia-Maria Strohmeier ◽  
...  

Background: Sample size planning (SSP) is vital for efficient studies that yield reliable outcomes. Hence, guidelines emphasize the importance of SSP. The present study investigates the practice of SSP in current trials for depression. Methods: Seventy-eight randomized controlled trials published between 2013 and 2017 were examined. The impact of study design (e.g. number of randomized conditions) and study context (e.g. funding) on sample size was analyzed using multiple regression. Results: Overall, sample size during pre-registration, during SSP, and in published articles was highly correlated (r's ≥ 0.887). At the same time, only 7–18% of the explained variance was related to study design (p = 0.055–0.155); this proportion increased to 30–42% when study context was added (p = 0.002–0.005). The median sample size was N = 106, with higher numbers for internet interventions (N = 181; p = 0.021) than for face-to-face therapy. In total, 59% of studies included SSP, with 28% providing basic determinants and 8–10% providing enough information for a comprehensible SSP. Expected effect sizes exhibited a sharp peak at d = 0.5. Depending on the definition, 10.2–20.4% implemented intense assessment to improve statistical power. Conclusions: Findings suggest that investigators achieve their determined sample sizes and that pre-registration rates are increasing. During study planning, however, study context appears more important than study design. Study context therefore needs to be emphasized in the present discussion, as it can help explain the relatively stable trial sample sizes of the past decades. Acknowledging this situation, there are indications that digital psychiatry (e.g. internet interventions or intense assessment) can help to mitigate the challenge of underpowered studies. The article includes a short guide for efficient study planning.
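For orientation, the sketch below reproduces the standard sample-size calculation for the effect size the surveyed trials most often assumed (d = 0.5, two-sided α = 0.05, 80% power) using statsmodels; the numbers are generic and not tied to any specific trial.

```python
# Generic sample-size planning sketch for a two-arm trial (independent t-test).
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
n_per_arm = analysis.solve_power(effect_size=0.5, alpha=0.05, power=0.80)
print(f"two-arm trial, d = 0.5, 80% power: ~{n_per_arm:.0f} participants per arm")
# Assuming a smaller true effect (d = 0.3) roughly triples the requirement:
print(f"d = 0.3: ~{analysis.solve_power(effect_size=0.3, alpha=0.05, power=0.80):.0f} per arm")
```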

