Sample‐size invariance of LNRE model parameters: Problems and opportunities

1998 ◽  
Vol 5 (3) ◽  
pp. 145-154 ◽  
Author(s):  
R. Harald Baayen ◽  
Fiona J. Tweedie

2013 ◽  
Vol 29 (4) ◽  
pp. 435-442 ◽  
Author(s):  
Seamus Kent ◽  
Andrew Briggs ◽  
Simon Eckermann ◽  
Colin Berry

Objectives: The use of value of information methods to inform trial design has been widely advocated, but there have been few empirical applications of these methods and there is little evidence that they are widely used in decision making. This study considers the usefulness of value of information models in the context of a real clinical decision problem relating to alternative diagnostic strategies for patients with a recent non-ST elevated myocardial infarction.

Methods: A pretrial economic model is constructed to consider the cost-effectiveness of two competing strategies: coronary angiography alone or in conjunction with fractional flow reserve measurement. A closed-form solution to the expected benefits of information is used, with the optimal sample size estimated for a range of models reflecting increasingly realistic assumptions and alternative decision contexts.

Results: Fractional flow reserve measurement is expected to be cost-effective with an incremental cost-effectiveness ratio of GBP 1,621; however, there is considerable uncertainty in this estimate and consequently a large expected value to reducing this uncertainty via a trial. The recommended sample size is strongly affected by how realistic the assumptions of the expected value of information (EVI) model are and by the decision context.

Conclusions: Value of information models can provide a simple and flexible approach to clinical trial design and are more consistent with the constraints and objectives of the healthcare system than traditional frequentist approaches. However, the variation in sample size estimates demonstrates that it is essential that appropriate model parameters and decision contexts are used in their application.
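
For readers unfamiliar with value-of-information calculations, the sketch below (ours, not from the paper) shows the basic Monte Carlo logic behind the per-decision expected value of perfect information for a two-strategy comparison. The incremental net benefit distribution and population size are hypothetical placeholders, and the paper itself uses a closed-form expected-value-of-information solution rather than simulation.

```python
# Minimal sketch of per-decision expected value of perfect information (EVPI)
# for a two-strategy comparison, assuming (hypothetically) that uncertainty in
# the incremental net benefit (INB) of strategy B vs. strategy A can be
# summarised by a normal distribution. Not the closed-form EVSI model used in
# the paper.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical inputs: mean and standard error of incremental net benefit (GBP)
inb_mean, inb_se = 500.0, 800.0

inb_draws = rng.normal(inb_mean, inb_se, size=100_000)

# Value of deciding now (pick the strategy with higher expected net benefit)
value_current = max(np.mean(inb_draws), 0.0)
# Value with perfect information (pick the best strategy in every draw)
value_perfect = np.mean(np.maximum(inb_draws, 0.0))

evpi_per_patient = value_perfect - value_current
print(f"Per-patient EVPI: GBP {evpi_per_patient:.1f}")

# Population EVPI: scale by the (hypothetical) number of future patients affected
population = 10_000
print(f"Population EVPI: GBP {evpi_per_patient * population:,.0f}")
```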


2018 ◽  
Author(s):  
R.L. Harms ◽  
A. Roebroeck

Abstract. In diffusion MRI analysis, biophysical multi-compartment models have gained popularity over conventional Diffusion Tensor Imaging (DTI) because they possess greater specificity in relating the dMRI signal to the underlying cellular microstructure. Biophysical multi-compartment models require parameter estimation, typically performed using either Maximum Likelihood Estimation (MLE) or Markov Chain Monte Carlo (MCMC) sampling. Whereas MLE provides only a point estimate of the fitted model parameters, MCMC recovers the entire posterior distribution of the model parameters given the data, providing additional information such as parameter uncertainty and correlations. MCMC sampling is currently not routinely applied in dMRI microstructure modeling because it requires adjustments and tuning specific to each model, particularly in the choice of proposal distributions, burn-in length, thinning and the number of samples to store. In addition, sampling often takes at least an order of magnitude more time than non-linear optimization. Here we investigate the performance of MCMC algorithm variations over multiple popular diffusion microstructure models to see whether a single well-performing variation could be applied efficiently and robustly to many models. Using an efficient GPU-based implementation, we show that run times can be removed as a prohibitive constraint for sampling of diffusion multi-compartment models. Using this implementation, we investigate the effectiveness of different adaptive MCMC algorithms, burn-in, initialization and thinning. Finally, we apply the theory of Effective Sample Size to diffusion multi-compartment models as a way of determining a relatively general target for the number of samples needed to characterize parameter distributions for different models and datasets. We conclude that robust and fast sampling is achieved in most diffusion microstructure models with the Adaptive Metropolis-Within-Gibbs (AMWG) algorithm initialized with an MLE point estimate, in which case 100 to 200 samples are sufficient as a burn-in and thinning is mostly unnecessary. As a relatively general target for the number of samples, we recommend a multivariate Effective Sample Size of 2200.
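
As an illustration of the recommended sampler, here is a minimal, generic Adaptive Metropolis-Within-Gibbs implementation in Python. It is a sketch of the standard AMWG scheme (batch adaptation of per-parameter proposal scales toward roughly 44% acceptance), not the authors' GPU implementation, and `log_post` stands in for any multi-compartment model's log posterior.

```python
# Minimal sketch of Adaptive Metropolis-Within-Gibbs (AMWG) sampling for a
# generic log-posterior, initialized at a point estimate (e.g. an MLE).
# The diffusion model itself is not implemented here; `log_post` is a
# placeholder for any multi-compartment model's log posterior.
import numpy as np

def amwg(log_post, x0, n_samples=2000, burn_in=200, batch=50, seed=0):
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float).copy()
    d = x.size
    log_sigma = np.zeros(d)          # per-parameter proposal scales (log)
    lp = log_post(x)
    accepted = np.zeros(d)
    chain = []
    for t in range(n_samples + burn_in):
        for j in range(d):           # update one parameter at a time
            prop = x.copy()
            prop[j] += rng.normal(0.0, np.exp(log_sigma[j]))
            lp_prop = log_post(prop)
            if np.log(rng.uniform()) < lp_prop - lp:
                x, lp = prop, lp_prop
                accepted[j] += 1
        if (t + 1) % batch == 0:     # adapt toward ~0.44 acceptance rate
            delta = min(0.1, 1.0 / np.sqrt((t + 1) / batch))
            rates = accepted / batch
            log_sigma += np.where(rates > 0.44, delta, -delta)
            accepted[:] = 0.0
        if t >= burn_in:
            chain.append(x.copy())
    return np.array(chain)

# Example: sample a 2-D standard normal, initialized at its mode (the origin)
samples = amwg(lambda v: -0.5 * np.sum(v**2), x0=np.zeros(2))
print(samples.mean(axis=0), samples.std(axis=0))
```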


Author(s):  
David Randell ◽  
Elena Zanini ◽  
Michael Vogel ◽  
Kevin Ewans ◽  
Philip Jonathan

Ewans and Jonathan [2008] show that characteristics of extreme storm severity in the northern North Sea vary with storm direction. Jonathan et al. [2008] demonstrate that, when directional effects are present, omnidirectional return values should be estimated using a directional extreme value model. Omnidirectional return values so calculated are in general different from those estimated using a model which incorrectly assumes stationarity with respect to direction. The extent of directional variability of extreme storm severity depends on a number of physical factors, including fetch variability. In general, our ability to assess directional variability of extreme value parameters and return values also improves with increasing sample size. In this work, we estimate directional extreme value models for samples of hindcast storm peak significant wave height from locations in ocean basins worldwide, covering a range of physical environments, sample sizes and periods of observation. At each location, we compare distributions of omnidirectional 100-year return values estimated using a directional model to those (incorrectly) estimated assuming stationarity. The directional model for peaks over threshold of storm peak significant wave height is estimated using a non-homogeneous point process model as outlined in Randell et al. [2013]. Directional models for the extreme value threshold (using quantile regression), the rate of occurrence of threshold exceedances (using a Poisson model), and the size of exceedances (using a generalised Pareto model) are estimated. Model parameters are described as smooth functions of direction using periodic B-splines. Parameter estimation is performed using maximum likelihood estimation penalised for parameter roughness. A bootstrap re-sampling procedure, encompassing all inference steps, quantifies uncertainties in, and the dependence structure of, parameter estimates and omnidirectional return values.
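
The sketch below illustrates the qualitative point with a crude Python analogue: a peaks-over-threshold model fitted separately in discrete directional sectors versus a single stationary fit, with 100-year return values obtained by equating the combined annual exceedance rate to 1/100. The synthetic data, sector discretization and quantile threshold are all assumptions for illustration; the paper's model instead uses penalized B-splines within a non-homogeneous point process.

```python
# Minimal sketch contrasting a directional peaks-over-threshold model (here
# crudely discretized into sectors rather than the penalized B-spline model of
# Randell et al. [2013]) with a stationary fit, for synthetic storm-peak Hs.
import numpy as np
from scipy.stats import genpareto
from scipy.optimize import brentq

rng = np.random.default_rng(1)
n_years = 50
# Synthetic data: storm direction (degrees) and storm-peak Hs with a
# direction-dependent scale (hypothetical fetch effect).
direction = rng.uniform(0, 360, size=2000)
hs = rng.gamma(shape=2.0,
               scale=1.0 + 2.0 * (np.cos(np.radians(direction)) + 1),
               size=2000)

def fit_pot(sample, q=0.8):
    """Fit a GPD to exceedances of the q-quantile threshold."""
    u = np.quantile(sample, q)
    exc = sample[sample > u] - u
    xi, _, sigma = genpareto.fit(exc, floc=0)
    rate = len(exc) / n_years          # exceedances per year
    return u, xi, sigma, rate

def annual_exceedance_rate(x, fits):
    """Expected number of exceedances of level x per year, summed over sectors."""
    total = 0.0
    for u, xi, sigma, rate in fits:
        total += rate if x <= u else rate * genpareto.sf(x - u, xi, scale=sigma)
    return total

sectors = [(direction >= lo) & (direction < lo + 45) for lo in range(0, 360, 45)]
directional_fits = [fit_pot(hs[s]) for s in sectors]
stationary_fit = [fit_pot(hs)]

for label, fits in [("directional", directional_fits), ("stationary", stationary_fit)]:
    rv100 = brentq(lambda x: annual_exceedance_rate(x, fits) - 1 / 100, 0.0, 1000.0)
    print(f"{label:12s} 100-year Hs: {rv100:.2f} m")
```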


Entropy ◽  
2020 ◽  
Vol 23 (1) ◽  
pp. 22
Author(s):  
Daniel Sanz-Alonso ◽  
Zijian Wang

Importance sampling is used to approximate Bayes’ rule in many computational approaches to Bayesian inverse problems, data assimilation and machine learning. This paper reviews and further investigates the required sample size for importance sampling in terms of the χ²-divergence between target and proposal. We illustrate through examples the roles that dimension, noise-level and other model parameters play in approximating the Bayesian update with importance sampling. Our examples also facilitate a new direct comparison of standard and optimal proposals for particle filtering.
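
A small self-contained example (ours, not the paper's) of the quantity involved: in self-normalized importance sampling the required sample size scales roughly with 1 + χ²(target‖proposal), which can be estimated directly from the normalized weights.

```python
# Minimal sketch of self-normalized importance sampling for a 1-D Gaussian
# Bayesian update, illustrating how the chi-square divergence between target
# (posterior) and proposal (prior) governs the required sample size. The
# specific example is an illustrative assumption, not taken from the paper.
import numpy as np

rng = np.random.default_rng(0)

# Prior N(0, 1) as proposal; likelihood y | x ~ N(x, noise**2); observed y.
noise, y = 0.5, 2.0
n = 10_000

x = rng.normal(0.0, 1.0, size=n)                     # draws from the proposal
log_w = -0.5 * ((y - x) / noise) ** 2                # unnormalized log-weights
w = np.exp(log_w - log_w.max())
w /= w.sum()

posterior_mean_is = np.sum(w * x)
exact_mean = y / (1.0 + noise**2)                    # conjugate Gaussian result
print(f"IS estimate {posterior_mean_is:.3f} vs exact {exact_mean:.3f}")

# 1 + chi-square divergence estimated from the weights; the effective sample
# size n / (1 + chi^2) indicates how many proposal draws effectively "count".
one_plus_chi2 = n * np.sum(w**2)
print(f"estimated 1 + chi^2: {one_plus_chi2:.1f}, ESS ~ {n / one_plus_chi2:.0f}")
```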


2020 ◽  
Author(s):  
Ode Zulaeha ◽  
Wardani Rahayu ◽  
Yuliatri Sastrawijaya

The purpose of this study is to measure the accuracy of item and ability parameter estimates obtained with the Multidimensional Three-Parameter Logistic (M3PL) model. The M3PL model applies to tests that measure more than one dimension of ability (θ). Item parameter and ability estimation under the M3PL model are examined for a sample size of 1,000 and test lengths of 15, 25, and 40. Parameter estimates are obtained from data generated with the WinGen software and converted for calibration in BILOG. The results show that the estimates obtained with a test length of 15 display a median correlation of 0.787 (high). The study concludes that the items were relatively difficult for the respondents, so many respondents guessed the answers. The estimated item and ability parameters further indicate that the sample size strongly affects the stability of estimation for each test length. With the M3PL model, the pseudo-guessing parameter (c), the difficulty parameter (b), and the discrimination parameter (a) can all be estimated, and MIRT is able to explain interactions between the items on the test and the answers of the participants. The estimated item parameters and the ability parameters of the participants also proved to be accurate and efficient.

Keywords: Multidimensional Three-Parameter Logistic (M3PL), distribution parameter, test length
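
For reference, a minimal sketch of the M3PL response function used in such simulations is given below; all parameter values are illustrative assumptions, not the values estimated in the study.

```python
# Minimal sketch of the multidimensional three-parameter logistic (M3PL)
# response function and response simulation; parameter values are illustrative
# only and not those estimated in the study.
import numpy as np

rng = np.random.default_rng(0)

def m3pl_prob(theta, a, d, c):
    """P(correct) = c + (1 - c) * logistic(a . theta + d)."""
    z = theta @ a.T + d                 # (n_persons, n_items)
    return c + (1.0 - c) / (1.0 + np.exp(-z))

n_persons, n_items, n_dims = 1000, 15, 2
theta = rng.normal(size=(n_persons, n_dims))                      # latent abilities
a = rng.lognormal(mean=0.0, sigma=0.3, size=(n_items, n_dims))    # discriminations
d = rng.normal(size=n_items)                                      # intercepts
c = rng.uniform(0.1, 0.25, size=n_items)                          # pseudo-guessing

p = m3pl_prob(theta, a, d, c)
responses = rng.binomial(1, p)                                    # simulated 0/1 scores
print(responses.shape, p.mean())
```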


2014 ◽  
Vol 11 (3) ◽  
pp. 2555-2582 ◽  
Author(s):  
S. Pande ◽  
L. Arkesteijn ◽  
H. H. G. Savenije ◽  
L. A. Bastidas

Abstract. This paper presents evidence that model prediction uncertainty does not necessarily rise with parameter dimensionality (the number of parameters). Here by prediction we mean future simulation of a variable of interest conditioned on certain future values of input variables. We utilize a relationship between prediction uncertainty, sample size and model complexity based on Vapnik–Chervonenkis (VC) generalization theory. It suggests that models with higher complexity tend to have higher prediction uncertainty for limited sample size, i.e. a sample size that is limited in its ability to represent the dynamics of the underlying processes. However, model complexity is not necessarily related to the number of parameters. Based on VC theory, we demonstrate that model complexity crucially depends on the magnitude of model parameters. We do this using two model structures, SAC-SMA and its simplification SIXPAR, and five MOPEX basin data sets across the United States. We conclude that parsimonious model selection based on parameter dimensionality may lead to a less informed model choice.
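
To make the sample-size/complexity trade-off concrete, the sketch below evaluates a textbook VC generalization bound for increasing VC dimension at a fixed sample size. It uses the standard bound only; it does not implement the magnitude-dependent complexity measure developed in the paper.

```python
# Minimal sketch of a textbook Vapnik-Chervonenkis generalization bound,
# illustrating the qualitative point that, for a fixed (limited) sample size,
# higher model complexity widens the gap between training error and a bound
# on prediction error. Standard bound only; not the paper's complexity measure.
import numpy as np

def vc_bound(empirical_risk, n, h, eta=0.05):
    """Upper bound on expected risk holding with probability 1 - eta.

    n   : sample size
    h   : VC dimension (model complexity)
    eta : confidence level
    """
    eps = np.sqrt((h * (np.log(2.0 * n / h) + 1.0) - np.log(eta / 4.0)) / n)
    return empirical_risk + eps

for h in (2, 10, 50):
    print(f"h = {h:2d}: risk bound = {vc_bound(0.10, n=200, h=h):.3f}")
```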


PLoS ONE ◽  
2021 ◽  
Vol 16 (6) ◽  
pp. e0253349
Author(s):  
Ana C. Guedes ◽  
Francisco Cribari-Neto ◽  
Patrícia L. Espinheira

Beta regressions are commonly used with responses that assume values in the standard unit interval, such as rates, proportions and concentration indices. Hypothesis testing inferences on the model parameters are typically performed using the likelihood ratio test. It delivers accurate inferences when the sample size is large, but can otherwise lead to unreliable conclusions. It is thus important to develop alternative tests with superior finite sample behavior. We derive the Bartlett correction to the likelihood ratio test under the more general formulation of the beta regression model, i.e. under varying precision. The model contains two submodels, one for the mean response and a separate one for the precision parameter. Our interest lies in performing testing inferences on the parameters that index both submodels. We use three Bartlett-corrected likelihood ratio test statistics that are expected to yield superior performance when the sample size is small. We present Monte Carlo simulation evidence on the finite sample behavior of the Bartlett-corrected tests relative to the standard likelihood ratio test and to two improved tests that are based on an alternative approach. The numerical evidence shows that one of the Bartlett-corrected tests typically delivers accurate inferences even when the sample is quite small. An empirical application related to behavioral biometrics is presented and discussed.
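
As a pointer to how such a correction is applied once derived, the sketch below takes a likelihood ratio statistic and a (hypothetical) Bartlett factor and returns one standard corrected form of the statistic with its p-value; the model-specific factor itself is what papers such as this one derive.

```python
# Minimal sketch of applying a Bartlett correction to a likelihood ratio
# statistic. The Bartlett factor b (so that E[LR] ~ q * b under the null)
# comes from model-specific formulas such as those derived in the paper;
# here it is simply supplied as a hypothetical number.
from scipy.stats import chi2

def bartlett_corrected_lr(lr_stat, q, b):
    """Return the corrected statistic LR/b and its chi-square(q) p-value."""
    lr_corrected = lr_stat / b
    return lr_corrected, chi2.sf(lr_corrected, df=q)

lr_stat, q, b = 7.90, 2, 1.15          # hypothetical values
lr_c, p_value = bartlett_corrected_lr(lr_stat, q, b)
print(f"corrected LR = {lr_c:.3f}, p = {p_value:.4f}")
print(f"uncorrected p = {chi2.sf(lr_stat, df=q):.4f}")
```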


2021 ◽  
pp. 001316442110032
Author(s):  
Mikyung Sim ◽  
Su-Young Kim ◽  
Youngsuk Suh

Mediation models have been widely used in many disciplines to better understand the underlying processes between independent and dependent variables. Despite their popularity and importance, the appropriate sample sizes for estimating those models are not well known. Although several approaches (such as Monte Carlo methods) exist, applied researchers tend to use insufficient sample sizes to estimate their models of interest, which might result in unstable and inaccurate estimation of the model parameters including mediation effects. In the present study, sample size requirements were investigated for four frequently used mediation models: one simple mediation model and three complex mediation models. For each model, path and structural equation modeling approaches were examined, and partial and complete mediation conditions were considered. Both the percentile bootstrap method and the multivariate delta method were compared for testing mediation effects. A series of Monte Carlo simulations was conducted under various simulation conditions, including those concerning the level of effect sizes, the number of indicators, the magnitude of factor loadings, and the proportion of missing data. The results not only present practical and general guidelines for substantive researchers to determine minimum required sample sizes but also improve understanding of which factors are related to sample size requirements in mediation models.
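
Below is a minimal sketch of the kind of Monte Carlo sample-size calculation described here, for a simple mediation model with a percentile-bootstrap test of the indirect effect. The path coefficients, replication counts and candidate sample size are hypothetical and do not reproduce the study's conditions.

```python
# Minimal sketch of a Monte Carlo approach to sample-size planning for a
# simple mediation model (X -> M -> Y), testing the indirect effect a*b with
# a percentile bootstrap. All numerical settings are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
a_path, b_path, c_prime = 0.39, 0.39, 0.0     # hypothetical "medium" paths

def simulate(n):
    x = rng.normal(size=n)
    m = a_path * x + rng.normal(size=n)
    y = b_path * m + c_prime * x + rng.normal(size=n)
    return x, m, y

def indirect_effect(x, m, y):
    a_hat = np.polyfit(x, m, 1)[0]            # slope of M on X
    # slope of Y on M controlling for X, via least squares
    coefs, *_ = np.linalg.lstsq(np.column_stack([m, x, np.ones_like(x)]),
                                y, rcond=None)
    return a_hat * coefs[0]

def power(n, n_reps=100, n_boot=300, alpha=0.05):
    hits = 0
    for _ in range(n_reps):
        x, m, y = simulate(n)
        boots = []
        for _ in range(n_boot):
            idx = rng.integers(0, n, size=n)
            boots.append(indirect_effect(x[idx], m[idx], y[idx]))
        lo, hi = np.percentile(boots, [100 * alpha / 2, 100 * (1 - alpha / 2)])
        hits += (lo > 0) or (hi < 0)          # CI excludes zero
    return hits / n_reps

print(f"Estimated power at n = 100: {power(100):.2f}")
```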

