Statistical approach for …a test chemical is considered to be positive… in regulatory toxicology: Trend and pairwise tests

2019 ◽  
Author(s):  
Ludwig A. Hothorn

Abstract In regulatory toxicology, an outcome is claimed positive when both a trend test and at least one pairwise test against the control are significant. Two statistical approaches are proposed: a joint Dunnett and Williams test (treating dose as a qualitative factor) and a joint Tukey regression trend test and Dunnett test (treating dose as a quantitative covariate). Related R software is available.
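
As an illustration of the combined decision rule described above, the following Python sketch pairs a trend test with dose as a quantitative covariate (a simple linear-regression slope test standing in for the Tukey trend test) and Dunnett-type comparisons against control. The dose levels and responses are hypothetical, and this is a sketch rather than the author's R software; `scipy.stats.dunnett` requires SciPy ≥ 1.11.

```python
import numpy as np
from scipy import stats

# Hypothetical dose groups (mg/kg) and responses; not data from the paper.
doses = np.array([0, 10, 50, 250])
groups = [
    np.array([1.1, 0.9, 1.0, 1.2, 1.0]),   # control
    np.array([1.2, 1.1, 1.3, 1.0, 1.2]),
    np.array([1.4, 1.3, 1.5, 1.2, 1.4]),
    np.array([1.7, 1.6, 1.8, 1.5, 1.9]),
]

# Trend test with dose as a quantitative covariate (slope of a simple regression).
x = np.repeat(doses, [len(g) for g in groups])
y = np.concatenate(groups)
trend_p = stats.linregress(x, y).pvalue

# Pairwise Dunnett comparisons of each dose group against the control.
dunnett_p = stats.dunnett(*groups[1:], control=groups[0]).pvalue

# Claim "positive" only if the trend AND at least one pairwise comparison are significant.
alpha = 0.05
positive = (trend_p < alpha) and (dunnett_p.min() < alpha)
print(f"trend p = {trend_p:.4f}, min Dunnett p = {dunnett_p.min():.4f}, positive: {positive}")
```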

1994 ◽  
Vol 26 (4) ◽  
pp. 831-854 ◽  
Author(s):  
Jeffrey D. Helterbrand ◽  
Noel Cressie ◽  
Jennifer L. Davidson

In this research, we present a statistical theory, and an algorithm, to identify one-pixel-wide closed object boundaries in gray-scale images. Closed-boundary identification is an important problem because boundaries of objects are major features in images. In spite of this, most statistical approaches to image restoration and texture identification place inappropriate stationary model assumptions on the image domain. One way to characterize the structural components present in images is to identify one-pixel-wide closed boundaries that delineate objects. By defining a prior probability model on the space of one-pixel-wide closed boundary configurations and appropriately specifying transition probability functions on this space, a Markov chain Monte Carlo algorithm is constructed that theoretically converges to a statistically optimal closed boundary estimate. Moreover, this approach ensures that any approximation to the statistically optimal boundary estimate will have the necessary property of closure.
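
A minimal, runnable toy of Markov chain Monte Carlo search over closed-boundary configurations is sketched below. To keep it short, closure is guaranteed by parameterizing the boundary as a circle and scoring it against the image gradient on a synthetic image; the paper's pixel-level prior and transition probability functions are not reproduced, and all names and parameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic 64x64 image: a bright disk on a dark background plus noise.
n = 64
yy, xx = np.mgrid[0:n, 0:n]
img = ((xx - 32) ** 2 + (yy - 30) ** 2 <= 12 ** 2).astype(float)
img += rng.normal(scale=0.3, size=(n, n))

# Image gradient magnitude: high where object boundaries are likely.
gy, gx = np.gradient(img)
grad = np.hypot(gx, gy)

def boundary_pixels(cx, cy, r, n_pts=200):
    """Rasterise a circular boundary; closure holds by construction."""
    t = np.linspace(0, 2 * np.pi, n_pts, endpoint=False)
    px = np.clip(np.round(cx + r * np.cos(t)).astype(int), 0, n - 1)
    py = np.clip(np.round(cy + r * np.sin(t)).astype(int), 0, n - 1)
    return px, py

def log_post(theta):
    cx, cy, r = theta
    if not (5.0 < r < 30.0):
        return -np.inf                     # crude prior on the radius
    px, py = boundary_pixels(cx, cy, r)
    return grad[py, px].sum()              # score: boundary should sit on strong edges

# Random-walk Metropolis over the boundary parameters (cx, cy, r).
theta = np.array([40.0, 40.0, 8.0])
lp = log_post(theta)
best, best_lp = theta.copy(), lp
for _ in range(5000):
    prop = theta + rng.normal(scale=1.0, size=3)
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        theta, lp = prop, lp_prop
        if lp > best_lp:
            best, best_lp = theta.copy(), lp

print("MAP-like closed-boundary estimate (cx, cy, r):", np.round(best, 1))
```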


2021 ◽  
Vol 21 (1) ◽  
Author(s):  
Elizabeth N. Mutubuki ◽  
Mohamed El Alili ◽  
Judith E. Bosmans ◽  
Teddy Oosterhuis ◽  
Frank J. Snoek ◽  
...  

Abstract Background Baseline imbalances, skewed costs, the correlation between costs and effects, and missing data are statistical challenges that are often not adequately accounted for in the analysis of cost-effectiveness data. This study aims to illustrate the impact of accounting for these statistical challenges in trial-based economic evaluations. Methods Data from two trial-based economic evaluations, the REALISE and HypoAware studies, were used. In total, 14 full cost-effectiveness analyses were performed per study, in which the four statistical challenges in trial-based economic evaluations were taken into account step-by-step. Statistical approaches were compared in terms of the resulting cost and effect differences, ICERs, and probabilities of cost-effectiveness. Results In the REALISE study and HypoAware study, the ICER ranged from 636,744€/QALY and 90,989€/QALY when ignoring all statistical challenges to −7,502€/QALY and 46,592€/QALY when accounting for all statistical challenges, respectively. The probabilities of the intervention being cost-effective at 0€/QALY gained were 0.67 and 0.59 when ignoring all statistical challenges, and 0.54 and 0.27 when all of the statistical challenges were taken into account for the REALISE study and HypoAware study, respectively. Conclusions Not accounting for baseline imbalances, skewed costs, correlated costs and effects, and missing data in trial-based economic evaluations may notably impact results. Therefore, when conducting trial-based economic evaluations, it is important to align the statistical approach with the identified statistical challenges in cost-effectiveness data. To facilitate researchers in handling statistical challenges in trial-based economic evaluations, software code is provided.
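
The following Python sketch illustrates, on simulated data, how two of the listed challenges can be handled jointly: regression adjustment for a baseline covariate and a nonparametric bootstrap of patient records, which preserves the correlation between (skewed) costs and effects. Multiple imputation for missing data is omitted for brevity. All variable names, the willingness-to-pay threshold and the data are hypothetical, not the REALISE or HypoAware datasets or the authors' supplied code.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)

# Hypothetical trial data: right-skewed costs and QALYs for two arms.
n = 200
df = pd.DataFrame({
    "arm": rng.integers(0, 2, n),                          # 0 = control, 1 = intervention
    "baseline_utility": rng.normal(0.7, 0.1, n),
})
df["qaly"] = 0.6 + 0.3 * df["baseline_utility"] + 0.02 * df["arm"] + rng.normal(0, 0.05, n)
df["cost"] = rng.lognormal(mean=8 + 0.1 * df["arm"], sigma=0.5)

def adjusted_differences(d):
    """Regression-adjusted cost and effect differences (handles baseline imbalance)."""
    eff = smf.ols("qaly ~ arm + baseline_utility", data=d).fit().params["arm"]
    cst = smf.ols("cost ~ arm + baseline_utility", data=d).fit().params["arm"]
    return cst, eff

# Nonparametric bootstrap: resampling whole patient records keeps the
# cost-effect correlation and makes no normality assumption about the skewed costs.
boot = np.array([adjusted_differences(df.sample(n, replace=True)) for _ in range(1000)])

delta_c, delta_e = boot[:, 0].mean(), boot[:, 1].mean()
icer = delta_c / delta_e
wtp = 20000                                      # illustrative willingness-to-pay in €/QALY
p_ce = np.mean(boot[:, 0] < wtp * boot[:, 1])    # one point on the CEAC
print(f"ICER ≈ {icer:,.0f} €/QALY; P(cost-effective at €{wtp}/QALY) = {p_ce:.2f}")
```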


Kardiologiia ◽  
2020 ◽  
Vol 60 (10) ◽  
pp. 107-112
Author(s):  
F. T. Ageev ◽  
E. B. Yarovaya

The article compares two statistical approaches commonly used in current comparative studies: testing the hypothesis that one drug is superior to another (superiority) and testing the hypothesis that one drug is not inferior to another in efficacy and safety (non-inferiority). Using specific studies as examples, the article shows the difference between the methods and the tasks for which each method should be applied. To prove that a new drug is superior in efficacy and safety to an existing one, only a statistical approach using the "superiority" hypothesis is applicable. Studies using the "non-inferiority" hypothesis are generally used to compare drugs that do not differ considerably in efficacy, but where the study drug has other advantages in administration, storage, tolerability, etc. The choice of statistical method is determined exclusively by the aim of the study.
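
A minimal sketch of how the two hypotheses translate into decision rules on the same confidence interval, assuming a continuous efficacy endpoint (higher is better), simulated data and an illustrative non-inferiority margin:

```python
import numpy as np

rng = np.random.default_rng(2)
# Hypothetical efficacy outcomes in the new-drug and reference-drug arms.
new = rng.normal(0.52, 0.2, 150)
ref = rng.normal(0.50, 0.2, 150)

diff = new.mean() - ref.mean()
se = np.sqrt(new.var(ddof=1) / len(new) + ref.var(ddof=1) / len(ref))
ci_low, ci_high = diff - 1.96 * se, diff + 1.96 * se

margin = 0.05  # pre-specified non-inferiority margin (illustrative)
print(f"difference = {diff:.3f}, 95% CI = ({ci_low:.3f}, {ci_high:.3f})")
print("superiority shown:    ", ci_low > 0)        # whole CI above zero
print("non-inferiority shown:", ci_low > -margin)  # whole CI above minus the margin
```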


BMJ Open ◽  
2018 ◽  
Vol 8 (10) ◽  
pp. e022626 ◽  
Author(s):  
Johann Windt ◽  
Clare L Ardern ◽  
Tim J Gabbett ◽  
Karim M Khan ◽  
Chad E Cook ◽  
...  

Objectives To systematically identify and qualitatively review the statistical approaches used in prospective cohort studies of team sports that reported intensive longitudinal data (ILD) (>20 observations per athlete) and examined the relationship between athletic workloads and injuries. Since longitudinal research can be improved by aligning the (1) theoretical model, (2) temporal design and (3) statistical approach, we reviewed the statistical approaches used in these studies to evaluate how closely they aligned these three components. Design Methodological review. Methods After finding 6 systematic reviews and 1 consensus statement in our systematic search, we extracted 34 original prospective cohort studies of team sports that reported ILD (>20 observations per athlete) and examined the relationship between athletic workloads and injuries. Using Professor Linda Collins' three-part framework of aligning the theoretical model, temporal design and statistical approach, we qualitatively assessed how well the statistical approaches aligned with the intensive longitudinal nature of the data and with the underlying theoretical model. Finally, we discussed the implications of each statistical approach and provided recommendations for future research. Results Statistical methods such as correlations, t-tests and simple linear/logistic regression were commonly used. However, these methods did not adequately address (1) the theoretical models underlying workloads and injury, nor (2) the temporal design challenges of ILD. Although time-to-event analyses (eg, Cox proportional hazards and frailty models) and multilevel modelling are better suited for ILD, these were used in fewer than 10% of the studies (n=3). Conclusions Rapidly accelerating availability of ILD is the norm in many fields of healthcare delivery and thus health research. These data present an opportunity to better address research questions, especially when appropriate statistical analyses are chosen.
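
As one example of an ILD-appropriate approach mentioned above, the sketch below fits a GEE logistic regression with an exchangeable working correlation to simulated athlete-week data, so that repeated observations within an athlete are not treated as independent. The data, the predictor (an acute:chronic workload ratio) and the effect sizes are hypothetical assumptions; multilevel and frailty models would be alternative choices.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)

# Hypothetical intensive longitudinal data: 40 athletes x 30 weekly observations.
athletes, weeks = 40, 30
df = pd.DataFrame({
    "athlete": np.repeat(np.arange(athletes), weeks),
    "acwr": rng.normal(1.0, 0.3, athletes * weeks),   # acute:chronic workload ratio
})
logit = -3.0 + 1.2 * (df["acwr"] - 1.0)               # assumed injury-risk model
df["injury"] = rng.binomial(1, 1 / (1 + np.exp(-logit)))

# GEE logistic regression: the exchangeable working correlation accounts for the
# repeated (correlated) observations within each athlete.
model = smf.gee("injury ~ acwr", groups="athlete", data=df,
                family=sm.families.Binomial(),
                cov_struct=sm.cov_struct.Exchangeable())
print(model.fit().summary())
```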


2021 ◽  
Author(s):  
Fabian Thomas ◽  
Adam Shehata ◽  
Lukas P Otto ◽  
Judith Möller ◽  
Elisabeth Prestele

Abstract Choosing an appropriate statistical model to analyze reciprocal relations between individuals' attitudes, beliefs, or behaviors over time can be challenging. Often, decisions for or against specific models are rather implicit, and it remains unclear whether the statistical approach fits the theory of interest. For longitudinal models, this is problematic since within- and between-person processes can be confounded, leading to wrong conclusions. Taking the perspective of the reinforcing spirals model (RSM) focusing on media effects and selection, we compare six statistical models that were recently used to analyze the RSM and show their ability to separate within- and between-person components. Using empirical data capturing respondents' development during adolescence, we show that results vary across statistical models. Further, Monte Carlo simulations indicate that some approaches might lead to wrong conclusions if specific communication dynamics are present. In sum, we recommend using approaches that explicitly model and clearly separate within- and between-person effects.
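
A generic within-between ("hybrid") decomposition is one common way to separate the two components. The sketch below person-mean-centers a predictor and fits a random-intercept model; the panel data, variable names and effect sizes are simulated assumptions, and this is not any specific one of the six models compared in the article.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(4)

# Hypothetical panel: 300 adolescents measured at 5 waves.
persons, waves = 300, 5
person_media = np.repeat(rng.normal(3.0, 0.8, persons), waves)     # stable person level
person_effect = np.repeat(rng.normal(0.0, 0.5, persons), waves)    # person-specific intercept
df = pd.DataFrame({"id": np.repeat(np.arange(persons), waves)})
df["media_use"] = person_media + rng.normal(0, 0.6, persons * waves)
df["attitude"] = 2.0 + 0.3 * df["media_use"] + person_effect + rng.normal(0, 0.5, persons * waves)

# Separate within- and between-person components by person-mean centering:
# the between part is each person's mean, the within part the wave-specific deviation.
df["media_between"] = df.groupby("id")["media_use"].transform("mean")
df["media_within"] = df["media_use"] - df["media_between"]

# A mixed model with a random intercept per person estimates both effects separately.
model = smf.mixedlm("attitude ~ media_within + media_between", df, groups=df["id"])
print(model.fit().summary())
```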


2021 ◽  
Author(s):  
Nirmal Kumar Sampathkumar ◽  
Venkat Krishnan Sundaram ◽  
Prakroothi S Danthi ◽  
Rasha Barakat ◽  
Shiden Solomon ◽  
...  

Abstract Assessment of differential gene expression by qPCR is heavily influenced by the choice of reference genes. Although numerous statistical approaches have been proposed to determine the best reference genes, they can give rise to conflicting results depending on experimental conditions. Hence, recent studies propose the use of RNA-Seq to identify stable genes, followed by the application of different statistical approaches to determine the best set of reference genes for qPCR data normalization. In this study, we demonstrate that the statistical approach used to determine the best reference genes from randomly selected candidates is more important than the preselection of ‘stable’ candidates from RNA-Seq data. Using a qPCR data normalization workflow that we have previously established, we show that qPCR data normalization using randomly chosen conventional reference genes renders the same results as stable reference genes selected from RNA-Seq data. We validated these observations in two distinct cross-sectional experimental conditions involving human iPSC-derived microglial cells and mouse sciatic nerves. Taken together, these results show that, given a robust statistical approach for reference gene selection, stable genes selected from RNA-Seq data do not offer any significant advantage over commonly used reference genes for normalizing qPCR assays.
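
The following sketch illustrates, on hypothetical Cq values, a simple stability screen and normalization of a target gene against the geometric mean of chosen reference genes. The gene names are generic placeholders (not the study's candidates), and the stability criterion is a deliberately simplified stand-in for geNorm/NormFinder-style approaches and for the authors' own workflow.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(5)

# Hypothetical Cq values: rows = 12 samples, columns = candidate reference genes.
cq = pd.DataFrame(
    rng.normal(loc=[20.0, 22.0, 25.0], scale=[0.3, 0.4, 0.8], size=(12, 3)),
    columns=["GAPDH", "ACTB", "RPL13A"],
)

# A simple stability screen: standard deviation of Cq across samples
# (lower = more stable; real selection methods use more elaborate criteria).
print(cq.std().sort_values())

# Normalise a target gene against the geometric mean of the chosen references.
target_cq = rng.normal(27.0, 0.5, 12)
ref_mean_cq = cq[["GAPDH", "ACTB"]].mean(axis=1)   # mean of Cq = geometric mean on the linear scale
delta_cq = target_cq - ref_mean_cq
relative_expression = 2.0 ** (-delta_cq)
print(relative_expression.round(3).tolist())
```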


2006 ◽  
Vol 519-521 ◽  
pp. 1-10 ◽  
Author(s):  
Anthony D. Rollett ◽  
Robert Campman ◽  
David Saylor

This paper describes some aspects of reconstruction of microstructures in three dimensions. A distinction is drawn between tomographic approaches that seek to characterize specific volumes of material, either with or without diffraction, and statistical approaches that focus on particular aspects of microstructure. A specific example of the application of the statistical approach is given for an aerospace aluminum alloy in which the distributions of coarse constituent particles are modeled. Such distributions are useful for modeling fatigue crack initiation and propagation.
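
A toy instance of such a statistical approach, sampling a particle population from assumed (not fitted) distributions rather than reconstructing it tomographically, might look like the following; the Poisson count, lognormal size parameters and uniform placement are illustrative assumptions, not the paper's measured distributions.

```python
import numpy as np

rng = np.random.default_rng(6)

# Hypothetical statistical reconstruction: populate a 1 mm^3 volume with coarse
# constituent particles drawn from assumed distributions.
volume_edge_um = 1000.0
n_particles = rng.poisson(lam=500)                                     # particle count model
diameters_um = rng.lognormal(mean=1.5, sigma=0.4, size=n_particles)   # assumed size distribution
centres_um = rng.uniform(0.0, volume_edge_um, size=(n_particles, 3))  # spatially random placement

# Volume fraction implied by the sampled population (spheres of diameter d: pi/6 * d^3).
volume_fraction = (np.pi / 6.0 * diameters_um ** 3).sum() / volume_edge_um ** 3
print(f"{n_particles} particles, simulated volume fraction = {volume_fraction:.4%}")
```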


1981 ◽  
Vol 5 ◽  
pp. 115-118
Author(s):  
I. McDonald

There is a belief among research workers that their experiments have two possible outcomes: either they are successful, or else the results will have to be taken to a statistician. Although the belief is not universal, it does account for a certain amount of brooding by statisticians on their place in scientific research. This may be why I have made the title I was given the excuse for examining some of my own activities and attitudes in bringing agricultural research data into juxtaposition with mathematical models. I apologize in advance for the egocentricity.

Statisticians in agricultural research find considerable employment in enabling their customers to adorn papers with a sufficiency of standard errors and significance tests for the satisfaction of editorial boards. Underlying this apparently cosmetic activity is the important function of testing the logic of experimental conclusions and preventing their too easy or too general acceptance. The statistical approach to model building is therefore that of a strength tester. Theoretical assumptions may be vital for the construction of the model, but statistical appraisal must be empirical, testing the correspondence between the model and those aspects of experience which it is intended to describe. On the basis of these tests, some parts or some uses of the model may be rejected as unsound and other parts or uses may be accepted, with varying degrees of confidence, as being reliable. Warning notices against undue dependence may be posted on those parts which cannot be tested for lack of data.

