Strength of nonhuman primate studies of developmental programming: review of sample sizes, challenges, and steps for future work

2019 ◽  
Vol 11 (3) ◽  
pp. 297-306 ◽  
Author(s):  
Hillary F. Huber ◽  
Susan L. Jenkins ◽  
Cun Li ◽  
Peter W. Nathanielsz

Abstract: Nonhuman primate (NHP) studies are crucial to biomedical research. NHPs are the species most similar to humans in lifespan, body size, and hormonal profiles. Planning research requires statistical power evaluation, which is difficult to perform when lacking directly relevant preliminary data. This is especially true for NHP developmental programming studies, which are scarce. We review the sample sizes reported, challenges, areas needing further work, and goals of NHP maternal nutritional programming studies. The literature search included 27 keywords, for example, maternal obesity, intrauterine growth restriction, maternal high-fat diet, and maternal nutrient reduction. Only fetal and postnatal offspring studies involving tissue collection or imaging were included. Twenty-eight studies investigated maternal over-nutrition and 33 under-nutrition; 23 involved macaques and 38 baboons. Analysis by sex was performed in 19; minimum group size ranged from 1 to 8 (mean 4.7 ± 0.52, median 4, mode 3) and maximum group size from 3 to 16 (8.3 ± 0.93, 8, 8). Sexes were pooled in 42 studies; minimum group size ranged from 2 to 16 (mean 5.3 ± 0.35, median 6, mode 6) and maximum group size from 4 to 26 (10.2 ± 0.92, 8, 8). A typical study with sex-based analyses had group size minimum 4 and maximum 8 per sex. Among studies with sexes pooled, minimum group size averaged 6 and maximum 8. All studies reported some significant differences between groups. Therefore, studies with group sizes 3–8 can detect significance between groups. To address deficiencies in the literature, goals include increasing age range, more frequently considering sex as a biological variable, expanding topics, replicating studies, exploring intergenerational effects, and examining interventions.
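As a rough illustration of the a priori power evaluation discussed above, the sketch below computes the power of a two-sample t-test across the group sizes typical of these studies (3 to 8 per group). It is a minimal sketch assuming standardized effect sizes chosen purely for illustration, not values drawn from the reviewed studies.

```python
# Minimal sketch: power of a two-sample t-test at the small group sizes common
# in NHP programming studies. Effect sizes (Cohen's d) are illustrative assumptions.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
for n in range(3, 9):                  # animals per group
    for d in (1.0, 1.5, 2.0):          # assumed standardized effect sizes
        power = analysis.power(effect_size=d, nobs1=n, ratio=1.0, alpha=0.05)
        print(f"n per group = {n}, d = {d}: power = {power:.2f}")
```

Under these assumptions, only fairly large standardized effects reach conventional power levels at group sizes of 3–8, which is why directly relevant preliminary data matter so much when planning such studies.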

2017 ◽  
Vol 34 (8) ◽  
pp. 1343-1351 ◽  
Author(s):  
Rosaiah K. ◽  
Srinivasa Rao Gadde ◽  
Kalyani K. ◽  
Sivakumar D.C.U.

Purpose: The purpose of this paper is to develop a group acceptance sampling plan (GASP) for a resubmitted lot when the lifetime of a product follows the odds exponential log-logistic distribution introduced by Rao and Rao (2014). The parameters of the proposed plan, such as minimum group size and acceptance number, are determined for a pre-specified consumer's risk, number of testers, and test termination time. The authors compare the proposed plan with the ordinary GASP, and the results are illustrated with a live data example.
Design/methodology/approach: The parameters of the proposed plan, such as minimum group size and acceptance number, are determined for a pre-specified consumer's risk, number of testers, and test termination time.
Findings: The authors determined the group size and acceptance number.
Research limitations/implications: No specific limitations.
Practical implications: This methodology can be applied in industry for quality control.
Social implications: This methodology can be applied in health studies.
Originality/value: The parameters of the proposed plan, such as minimum group size and acceptance number, are determined for a pre-specified consumer's risk, number of testers, and test termination time.
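As a hedged illustration of how such a minimum group size can be searched for, the sketch below uses a simple binomial acceptance model: the lot is accepted if at most c failures occur among g groups of r testers by the termination time. In the paper, the failure probability p at the consumer's-risk quality level would come from the odds exponential log-logistic distribution evaluated at the termination time; here p, and every other numeric value, is an assumed placeholder.

```python
# Sketch of the minimum-group-size search in a group acceptance sampling plan.
# p is an assumed failure probability by the test termination time; in the paper
# it would be derived from the odds exponential log-logistic lifetime distribution.
from math import comb

def acceptance_prob(g, r, c, p):
    """P(at most c failures among g groups of r testers), binomial model."""
    n = g * r
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(c + 1))

def min_group_size(r, c, p, beta, g_max=100):
    """Smallest number of groups g whose acceptance probability is <= beta."""
    for g in range(1, g_max + 1):
        if acceptance_prob(g, r, c, p) <= beta:
            return g
    return None

# Hypothetical example: 5 testers per group, acceptance number 2,
# assumed p = 0.25, consumer's risk beta = 0.10 -> prints 4
print(min_group_size(r=5, c=2, p=0.25, beta=0.10))
```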


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Florent Le Borgne ◽  
Arthur Chatton ◽  
Maxime Léger ◽  
Rémi Lenain ◽  
Yohann Foucher

Abstract: In clinical research, there is a growing interest in the use of propensity score-based methods to estimate causal effects. G-computation is an alternative because of its high statistical power. Machine learning is also increasingly used because of its possible robustness to model misspecification. In this paper, we aimed to propose an approach that combines machine learning and G-computation when both the outcome and the exposure status are binary and that is able to deal with small samples. We evaluated the performance of several methods, including penalized logistic regressions, a neural network, a support vector machine, boosted classification and regression trees, and a super learner, through simulations. We proposed six different scenarios characterised by various sample sizes, numbers of covariates, and relationships between covariates, exposure statuses, and outcomes. We also illustrated the application of these methods by using them to estimate the efficacy of barbiturates prescribed during the first 24 h of an episode of intracranial hypertension. In the context of G-computation, for estimating the individual outcome probabilities in the two counterfactual worlds, the super learner tended to outperform the other approaches in terms of both bias and variance, especially for small sample sizes. The support vector machine also performed well, but its mean bias was slightly higher than that of the super learner. In the investigated scenarios, G-computation combined with the super learner was a performant method for drawing causal inferences, even from small sample sizes.
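The G-computation step can be sketched as follows. A plain logistic regression stands in for the super learner evaluated in the paper, and the simulated data frame and column names are purely illustrative assumptions.

```python
# Minimal sketch of G-computation for a binary exposure and binary outcome.
# A logistic regression stands in for the super learner; the data are simulated.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

def g_computation(df, outcome, exposure, covariates):
    X = df[[exposure] + covariates]
    model = LogisticRegression(max_iter=1000).fit(X, df[outcome])
    # Predict each subject's outcome probability in the two counterfactual worlds
    X1 = X.copy(); X1[exposure] = 1
    X0 = X.copy(); X0[exposure] = 0
    p1 = model.predict_proba(X1)[:, 1].mean()
    p0 = model.predict_proba(X0)[:, 1].mean()
    return p1 - p0  # marginal risk difference

rng = np.random.default_rng(0)
df = pd.DataFrame({"age": rng.normal(50, 10, 200),
                   "exposed": rng.integers(0, 2, 200)})
df["outcome"] = rng.binomial(1, 0.3 + 0.1 * df["exposed"])
print(g_computation(df, "outcome", "exposed", ["age"]))
```

Swapping the logistic regression for a cross-validated ensemble (a super learner) keeps the same two-counterfactual-predictions logic while letting the outcome model adapt to possible misspecification.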


2013 ◽  
Vol 41 ◽  
pp. 67-72 ◽  
Author(s):  
G.D. Cappon ◽  
D. Potter ◽  
M.E. Hurtt ◽  
G.F. Weinbauer ◽  
C.M. Luetjens ◽  
...  

2014 ◽  
Vol 2014 ◽  
pp. 1-14 ◽  
Author(s):  
Stephanie A. Segovia ◽  
Mark H. Vickers ◽  
Clint Gray ◽  
Clare M. Reynolds

The prevalence of obesity, especially in women of child-bearing age, is a global health concern. In addition to increasing the immediate risk of gestational complications, there is accumulating evidence that maternal obesity also has long-term consequences for the offspring. The concept of developmental programming describes the process in which an environmental stimulus, including altered nutrition, during critical periods of development can program alterations in organogenesis, tissue development, and metabolism, predisposing offspring to obesity and metabolic and cardiovascular disorders in later life. Although the mechanisms underpinning programming of metabolic disorders remain poorly defined, it has become increasingly clear that low-grade inflammation is associated with obesity and its comorbidities. This review will discuss maternal metainflammation as a mediator of programming in insulin-sensitive tissues in offspring. Use of nutritional anti-inflammatories in pregnancy, including omega-3 fatty acids, resveratrol, curcumin, and taurine, may provide beneficial intervention strategies to ameliorate maternal obesity-induced programming.


Author(s):  
Manish Kukreti

The present paper reports the population dynamics of the cheer pheasant (Catreus wallichii) in the Pokhari valley, Garhwal Himalaya, from January 2019 to December 2019. A total of 405 individuals in 145 groups were recorded. Overall means of individuals per sighting and group size during the study period were 3.88±0.51 and 3.40±0.45, respectively. The maximum values of individuals per sighting and group size were recorded in July and November (6.13±0.76 and 7.32±0.97), while the minimums were recorded in May and April (1.75±0.27 and 1.17±0.26). Seasonal variation was also observed in population and group size: individuals per sighting were highest in the monsoon season and lowest in spring, while group size was largest in winter and smallest in spring.


2019 ◽  
Author(s):  
Peter E Clayson ◽  
Kaylie Amanda Carbine ◽  
Scott Baldwin ◽  
Michael J. Larson

Methodological reporting guidelines for studies of event-related potentials (ERPs) were updated in Psychophysiology in 2014. These guidelines facilitate the communication of key methodological parameters (e.g., preprocessing steps). Failing to report key parameters represents a barrier to replication efforts, and difficulty with replication increases in the presence of small sample sizes and low statistical power. We assessed whether the guidelines are followed and estimated the average sample size and power in recent research. Reporting behavior, sample sizes, and statistical designs were coded for 150 randomly sampled articles published from 2011 to 2017 in five high-impact journals that frequently publish ERP research. An average of 63% of guidelines were reported, and reporting behavior was similar across journals, suggesting that gaps in reporting are a shortcoming of the field rather than of any specific journal. Publication of the guidelines paper had no impact on reporting behavior, suggesting that editors and peer reviewers are not enforcing these recommendations. The average sample size per group was 21. Statistical power was conservatively estimated as .72-.98 for a large effect size, .35-.73 for a medium effect, and .10-.18 for a small effect. These findings indicate that failing to report key guidelines is ubiquitous and that ERP studies are primarily powered to detect large effects. Such low power and insufficient adherence to reporting guidelines represent substantial barriers to replication efforts. The methodological transparency and replicability of studies can be improved by the open sharing of processing code and experimental tasks and by a priori sample size calculations to ensure adequately powered studies.
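For readers who want to see where estimates of this kind come from, the sketch below computes power for a between-subjects comparison with 21 participants per group; under these assumptions it lands close to the lower ends of the ranges reported above, while within-subjects designs would sit nearer the upper ends.

```python
# Sketch: power of an independent-samples t-test with 21 participants per group,
# at conventional benchmarks for large, medium, and small effects.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
for label, d in (("large", 0.8), ("medium", 0.5), ("small", 0.2)):
    power = analysis.power(effect_size=d, nobs1=21, ratio=1.0, alpha=0.05)
    print(f"{label} effect (d = {d}): power = {power:.2f}")
```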


2021 ◽  
Vol 3 (1) ◽  
pp. 61-89
Author(s):  
Stefan Geiß

Abstract: This study uses Monte Carlo simulation techniques to estimate the minimum required levels of intercoder reliability in content analysis data for testing correlational hypotheses, depending on sample size, effect size, and coder behavior under uncertainty. The ensuing procedure is analogous to power calculations for experimental designs. In most widespread sample size/effect size settings, the rule of thumb that chance-adjusted agreement should be ≥.80 or ≥.667 corresponds to the simulation results, resulting in acceptable α and β error rates. However, this simulation allows making precise power calculations that can consider the specifics of each study’s context, moving beyond one-size-fits-all recommendations. Studies with low sample sizes and/or low expected effect sizes may need coder agreement above .800 to test a hypothesis with sufficient statistical power. In studies with high sample sizes and/or high expected effect sizes, coder agreement below .667 may suffice. Such calculations can help both in evaluating and in designing studies. Particularly in pre-registered research, higher sample sizes may be used to compensate for low expected effect sizes and/or borderline coding reliability (e.g. when constructs are hard to measure). I supply equations, easy-to-use tables, and R functions to facilitate use of this framework, along with example code as an online appendix.
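The paper supplies its own R functions; purely to illustrate the simulation logic in outline, the Python sketch below models reliability as the share of true-score variance in the coded variable (a classical test theory simplification of mine, not necessarily the paper's coder-behavior model) and estimates power as the proportion of replications in which the attenuated correlation is significant.

```python
# Simplified sketch: coding error attenuates an assumed true correlation;
# power is the share of replications with a significant observed correlation.
import numpy as np
from scipy.stats import pearsonr

def simulated_power(n, true_r, reliability, reps=2000, alpha=0.05, seed=1):
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(reps):
        truth = rng.standard_normal(n)
        outcome = true_r * truth + np.sqrt(1 - true_r**2) * rng.standard_normal(n)
        # Coded variable: reliability modelled as the true-score variance share
        coded = (np.sqrt(reliability) * truth
                 + np.sqrt(1 - reliability) * rng.standard_normal(n))
        if pearsonr(coded, outcome)[1] < alpha:
            hits += 1
    return hits / reps

# Hypothetical setting: n = 200 units, assumed true correlation .20, reliability .80
print(simulated_power(n=200, true_r=0.2, reliability=0.8))
```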


2020 ◽  
Author(s):  
Chia-Lung Shih ◽  
Te-Yu Hung

Abstract
Background: A small sample size (n < 30 per treatment group) is usually enrolled to investigate differences in efficacy between treatments for knee osteoarthritis (OA). The objective of this study was to use simulation to compare the power of four statistical methods for the analysis of small samples in detecting differences in efficacy between two treatments for knee OA.
Methods: A total of 10,000 replicates of 5 sample sizes (n = 10, 15, 20, 25, and 30 per group) were generated based on previously reported measures of treatment efficacy. Four statistical methods were used to compare the differences in efficacy between treatments: the two-sample t-test (t-test), the Mann-Whitney U-test (M-W test), the Kolmogorov-Smirnov test (K-S test), and the permutation test (perm-test).
Results: The bias of the simulated parameter means showed a decreasing trend with sample size, but the CV% of the simulated parameter means varied with sample size for all parameters. For the largest sample size (n = 30), the CV% reached a small level (<20%) for almost all parameters, but the bias did not. Among the non-parametric tests for the analysis of small samples, the perm-test had the highest statistical power, and its false positive rate was not affected by sample size. However, the power of the perm-test did not reach 80% even with the largest sample size (n = 30).
Conclusion: The perm-test is suggested for the analysis of small samples when comparing the differences in efficacy between two treatments for knee OA.
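A minimal sketch of the permutation test recommended here, with made-up outcome values standing in for the knee OA efficacy measures:

```python
# Permutation test sketch: compare the observed mean difference with the
# distribution of differences obtained after reshuffling group labels.
import numpy as np

def permutation_test(a, b, n_perm=10000, seed=0):
    rng = np.random.default_rng(seed)
    observed = a.mean() - b.mean()
    pooled = np.concatenate([a, b])
    count = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        diff = pooled[:len(a)].mean() - pooled[len(a):].mean()
        if abs(diff) >= abs(observed):
            count += 1
    return count / n_perm  # two-sided p-value

treatment = np.array([52.1, 48.3, 55.0, 50.2, 47.8, 53.6, 49.9, 51.4, 54.2, 50.7])
control   = np.array([46.5, 44.2, 49.1, 45.8, 43.9, 47.3, 44.7, 46.0, 48.5, 45.1])
print(permutation_test(treatment, control))  # small sample (n = 10 per group)
```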


2018 ◽  
Vol 12 (3) ◽  
pp. 036016 ◽  
Author(s):  
Andrew C Bishop ◽  
Mark Libardoni ◽  
Ahsan Choudary ◽  
Biswapriya Misra ◽  
Kenneth Lange ◽  
...  
