Calculating Expected Value of Sample Information Adjusting for Imperfect Implementation

2022 · pp. 0272989X2110730
Author(s): Anna Heath

Background. The expected value of sample information (EVSI) calculates the value of collecting additional information through a research study with a given design. However, standard EVSI analyses do not account for the slow and often incomplete implementation of the treatment recommendations that follow research, and thus they do not correctly capture the value of the study. Previous research has developed measures that adjust the research value for implementation challenges, but estimating these measures is itself challenging.

Methods. Based on a method that assumes the implementation level is related to the strength of evidence in favor of the treatment, 2 implementation-adjusted EVSI calculation methods are developed. These novel methods circumvent the need for analytical calculations, which were restricted to settings in which normality could be assumed. The first method uses computationally demanding nested simulations, based on the definition of the implementation-adjusted EVSI. The second adapts the moment matching method, a recently developed efficient EVSI computation method, to adjust for imperfect implementation. The implementation-adjusted EVSI is then calculated with both methods across 3 examples.

Results. The 2 methods differ by at most 6% in all examples. The efficient computation method is between 6 and 60 times faster than the nested simulation method in this case study and could be used in practice.

Conclusions. This article permits the calculation of an implementation-adjusted EVSI using realistic assumptions. The efficient estimation method is accurate and can estimate the implementation-adjusted EVSI in practice. By adapting standard EVSI estimation methods, adjustments for imperfect implementation can be made at the same computational cost as a standard EVSI analysis.

Highlights.
- Standard EVSI analyses do not account for the fact that treatment implementation following research is often slow and incomplete, so they incorrectly capture the value of the study.
- Two methods, based on nested Monte Carlo sampling and the moment matching EVSI calculation method, are developed to adjust EVSI calculations for imperfect implementation when the speed and level of implementation of a new treatment depend on the strength of evidence in favor of the treatment.
- The 2 methods we develop provide similar estimates of the implementation-adjusted EVSI.
- Our methods extend current EVSI calculation algorithms and thus require limited additional computational complexity.
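
To make the nested-simulation idea concrete, here is a minimal sketch for a hypothetical two-treatment model, assuming a normal prior on the incremental net benefit (INB), a normal trial estimate, and an implementation level equal to the posterior probability that the new treatment is cost-effective. All numbers are illustrative, not from the paper, and the conjugate normal update stands in for the inner simulation loop that a non-conjugate model would require.

```python
import numpy as np
from math import erf, sqrt

def norm_cdf(z):
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

rng = np.random.default_rng(1)

# Hypothetical model: INB of the new treatment ~ N(mu0, sd0^2);
# a trial of size 200 yields a normal estimate with this standard error.
mu0, sd0 = 500.0, 1000.0
se_trial = 4000.0 / sqrt(200)
N_outer = 20000  # outer simulations of trial outcomes

def implementation_level(post_mean, post_sd):
    # Assumed link between evidence strength and uptake: implementation
    # equals the posterior probability that the new treatment is
    # cost-effective (INB > 0). With an indicator instead, the value
    # term below reduces to the standard EVSI calculation.
    return norm_cdf(post_mean / post_sd)

# Value under current evidence and current (partial) implementation
rho0 = implementation_level(mu0, sd0)
value_now = rho0 * mu0

# Outer loop: simulate a trial result, update the INB, and value the
# decision at the implementation level implied by the new evidence.
# (Conjugacy gives the inner expectation in closed form; in general an
# inner Monte Carlo loop would be needed here, hence "nested".)
post_var = 1.0 / (1.0 / sd0**2 + 1.0 / se_trial**2)
post_sd = sqrt(post_var)
value_post = 0.0
for _ in range(N_outer):
    inb_true = rng.normal(mu0, sd0)
    xbar = rng.normal(inb_true, se_trial)
    post_mean = post_var * (mu0 / sd0**2 + xbar / se_trial**2)
    value_post += implementation_level(post_mean, post_sd) * post_mean
value_post /= N_outer

print("implementation-adjusted EVSI ~", value_post - value_now)
```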

2019 · Vol 39 (4) · pp. 347-359
Author(s): Anna Heath, Ioanna Manolopoulou, Gianluca Baio

Background. The expected value of sample information (EVSI) determines the economic value of any future study with a specific design aimed at reducing uncertainty about the parameters underlying a health economic model. This has potential as a tool for trial design: the cost and value of different designs could be compared to find the trial with the greatest net benefit. However, despite recent developments, EVSI analysis can be slow, especially when optimizing over a large number of different designs.

Methods. This article develops a method to reduce the computation time required to calculate the EVSI across different sample sizes. Our method extends the moment-matching approach to EVSI estimation to optimize over different sample sizes for the underlying trial while retaining a computational cost similar to that of a single EVSI estimate. The extension calculates the posterior variance of the net monetary benefit across alternative sample sizes and then uses Bayesian nonlinear regression to estimate the EVSI across these sample sizes.

Results. A health economic model developed to assess the cost-effectiveness of interventions for chronic pain demonstrates that this EVSI calculation method is fast and accurate for realistic models. The example also highlights how different trial designs can be compared using the EVSI.

Conclusion. The proposed estimation method is fast and accurate when calculating the EVSI across different sample sizes. This will allow researchers to realize the potential of using the EVSI to determine an economically optimal trial design for reducing uncertainty in health economic models.

Limitations. Our method involves rerunning the health economic model, which can be more computationally expensive than some recent alternatives, especially for complex models.
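
A rough sketch of the across-sample-size idea, under strong simplifying assumptions: a conjugate normal model supplies stand-in "moment matching" variance estimates at three pilot sample sizes, a nonlinear regression of the assumed form sd0^2 * n / (n + n0) interpolates them, and the EVSI at any n then follows from the normal preposterior distribution of the posterior mean. The numbers and the regression form are illustrative, not taken from the paper or its chronic pain model.

```python
import numpy as np
from math import erf, sqrt, pi, exp
from scipy.optimize import curve_fit

def norm_pdf(z): return exp(-0.5 * z * z) / sqrt(2.0 * pi)
def norm_cdf(z): return 0.5 * (1.0 + erf(z / sqrt(2.0)))

# Hypothetical two-decision model: prior INB ~ N(mu0, sd0^2). The
# preposterior variances at the pilot sample sizes stand in for the
# moment-matching estimates; here they come from a conjugate normal
# model purely so the sketch is self-contained.
mu0, sd0, s_ind = 500.0, 1000.0, 4000.0
n_pilot = np.array([50.0, 150.0, 400.0])
prepost_var_pilot = sd0**2 * n_pilot / (n_pilot + (s_ind / sd0) ** 2)

# Regression model interpolating the variance of the posterior mean
# across sample sizes: Var[posterior mean | n] ~ sd0^2 * n / (n + n0).
def var_model(n, n0):
    return sd0**2 * n / (n + n0)

(n0_hat,), _ = curve_fit(var_model, n_pilot, prepost_var_pilot, p0=[100.0])

def evsi(n):
    # Preposterior mean M ~ N(mu0, s^2): EVSI = E[max(M, 0)] - max(mu0, 0)
    s = sqrt(var_model(n, n0_hat))
    z = mu0 / s
    return mu0 * norm_cdf(z) + s * norm_pdf(z) - max(mu0, 0.0)

for n in (25, 100, 250, 1000):
    print(n, round(evsi(n), 1))
```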


Econometrics · 2019 · Vol 7 (1) · pp. 14
Author(s): David Frazier, Eric Renault

The standard approach to indirect inference estimation considers that the auxiliary parameters, which carry the identifying information about the structural parameters of interest, are obtained from an exactly identified vector of estimating equations. In contrast to this standard interpretation, we demonstrate that the case of overidentified auxiliary parameters is both possible and, indeed, more commonly encountered than one may initially realize. We then revisit the "moment matching" and "parameter matching" versions of indirect inference in this context and devise efficient estimation strategies in this more general framework. Perhaps surprisingly, we demonstrate that the naive choice of an efficient Generalized Method of Moments (GMM)-based estimator for the auxiliary parameters yields inefficient indirect inference estimators. In this general context, efficient indirect inference estimation actually requires a two-step procedure, whereby the goal of the first step is to obtain an efficient version of the auxiliary model. These two-step estimators are presented within the contexts of both moment matching and parameter matching.
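
For readers unfamiliar with the mechanics, the following sketch shows the exactly identified baseline case that the paper generalizes: an MA(1) structural model, an AR(1) auxiliary model, and a "parameter matching" objective with common random numbers. Everything here (true parameter, sample size, number of simulated paths) is illustrative.

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(0)

def simulate_ma1(theta, n, eps=None):
    # MA(1): y_t = e_t + theta * e_{t-1}
    if eps is None:
        eps = rng.standard_normal(n + 1)
    return eps[1:] + theta * eps[:-1]

def ar1_coeff(y):
    # Auxiliary parameter: OLS slope of y_t on y_{t-1} (an AR(1) fit).
    return np.dot(y[1:], y[:-1]) / np.dot(y[:-1], y[:-1])

# "Observed" data generated from a true theta of 0.5 (illustrative)
y_obs = simulate_ma1(0.5, 2000)
beta_obs = ar1_coeff(y_obs)

# Fix the simulation shocks so the objective is smooth in theta
eps_sim = rng.standard_normal((20, 2001))

def objective(theta):
    # Parameter matching: distance between the auxiliary estimate on the
    # observed data and its average over simulated paths.
    betas = [ar1_coeff(simulate_ma1(theta, 2000, e)) for e in eps_sim]
    return (beta_obs - np.mean(betas)) ** 2

res = minimize_scalar(objective, bounds=(-0.95, 0.95), method="bounded")
print("indirect inference estimate of theta:", res.x)
```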


2020
Author(s): E. Prabhu Raman, Thomas J. Paul, Ryan L. Hayes, Charles L. Brooks III

Accurate predictions of changes to protein-ligand binding affinity in response to chemical modifications are of utility in small molecule lead optimization. Relative free energy perturbation (FEP) approaches are among the most widely used for this goal but involve significant computational cost, limiting their application to small sets of compounds. Lambda dynamics, also rigorously based on the principles of statistical mechanics, provides a more efficient alternative. In this paper, we describe the development of a workflow to set up, execute, and analyze Multi-Site Lambda Dynamics (MSLD) calculations run on GPUs with CHARMm implemented in BIOVIA Discovery Studio and Pipeline Pilot. The workflow establishes a framework for setting up simulation systems for exploratory screening of modifications to a lead compound, enabling the calculation of relative binding affinities of combinatorial libraries. To validate the workflow, a diverse dataset of congeneric ligands for seven proteins with experimental binding affinity data is examined. A protocol that automatically and iteratively tailors the biasing potentials to flatten the free energy landscape of any MSLD system is developed, which enhances sampling and allows for efficient estimation of free energy differences. The protocol is first validated on a large number of ligand subsets modeling diverse substituents and shows accurate and reliable performance. The scalability of the workflow is also tested by screening more than a hundred ligands modeled in a single system, which likewise resulted in accurate predictions. With a cumulative sampling time of 150 ns or less, the method yields average unsigned errors of under 1 kcal/mol in most cases for both small and large combinatorial libraries. For the multi-site systems examined, the method is estimated to be more than an order of magnitude more efficient than contemporary FEP applications. The results thus demonstrate the utility of the presented MSLD workflow for efficiently screening combinatorial libraries and exploring chemical space around a lead compound in lead optimization.
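
The free-energy bookkeeping behind MSLD-style estimates can be illustrated independently of the proprietary workflow: relative free energies follow from the populations of the physical end states via dG_i - dG_ref = -kT ln(P_i / P_ref), applied in both legs of a thermodynamic cycle. The occupancies below are invented for illustration; a real analysis would count trajectory frames with lambda above a cutoff (e.g., 0.99).

```python
import numpy as np

kT = 0.593  # kcal/mol at ~300 K

# Hypothetical MSLD occupancies: fraction of frames in which each
# substituent at one site is "physical" (its lambda ~ 1). Substituent
# R1 serves as the reference.
pops_bound = np.array([0.52, 0.31, 0.17])  # bound-complex leg
pops_solv = np.array([0.40, 0.35, 0.25])   # unbound (solvent) leg

# Relative free energies within each environment:
# dG_i - dG_ref = -kT ln(P_i / P_ref)
dG_bound = -kT * np.log(pops_bound / pops_bound[0])
dG_solv = -kT * np.log(pops_solv / pops_solv[0])

# Thermodynamic cycle: relative binding free energy is the difference
# between the bound and solvent legs.
ddG_bind = dG_bound - dG_solv
print("relative binding free energies (kcal/mol):", np.round(ddG_bind, 2))
```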


2021 · Vol 5 (1) · pp. 51
Author(s): Enriqueta Vercher, Abel Rubio, José D. Bermúdez

We present a new forecasting scheme based on the credibility distribution of fuzzy events. This approach allows us to build prediction intervals using the first differences of the time series data. Additionally, the credibility expected value enables us to estimate the k-step-ahead pointwise forecasts. We analyze the coverage of the prediction intervals and the accuracy of the pointwise forecasts using different credibility approaches based on the upper differences. Comparative results were obtained on yearly time series from the M4 Competition. We also report the performance and computational cost of our proposal compared with automatic forecasting procedures.
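
As a hedged illustration of a credibility-based forecast (not the authors' estimation scheme), one can fit a triangular fuzzy number to the first differences, use Liu's credibility expected value (a + 2b + c)/4 for the point forecast, and invert the credibility distribution for interval bounds. The series, the (min, median, max) fit, and the k-scaling of the interval width are all assumptions of this sketch.

```python
import numpy as np

# Hypothetical yearly series; its first differences drive the forecast.
y = np.array([112.0, 118, 132, 129, 121, 135, 148, 148, 136, 119, 104, 118])
d = np.diff(y)
a, b, c = d.min(), np.median(d), d.max()  # triangular fuzzy diff (a, b, c)

# Credibility expected value of a triangular fuzzy variable (a, b, c)
ev = (a + 2 * b + c) / 4.0

def cred_inv(alpha):
    """Inverse credibility distribution of the triangular fuzzy variable."""
    if alpha <= 0.5:
        return a + 2 * alpha * (b - a)
    return 2 * b - c + 2 * alpha * (c - b)

# k-step-ahead point forecast and ~90% prediction interval; scaling the
# interval linearly in k is a crude width rule adopted for this sketch.
k = 3
point = y[-1] + k * ev
lo, hi = y[-1] + k * cred_inv(0.05), y[-1] + k * cred_inv(0.95)
print("point:", point, "interval:", (lo, hi))
```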


Author(s): Tobias Leibner, Mario Ohlberger

In this contribution we derive and analyze a new numerical method for kinetic equations based on a variable transformation of the moment approximation. Classical minimum-entropy moment closures are a class of reduced models for kinetic equations that conserve many of the fundamental physical properties of solutions. However, their practical use is limited by their high computational cost, as an optimization problem has to be solved for every cell in the space-time grid. In addition, implementation of numerical solvers for these models is hampered by the fact that the optimization problems are only well-defined if the moment vectors stay within the realizable set. For the same reason, further reducing these models by, e.g., reduced-basis methods is not a simple task. Our new method overcomes these disadvantages of classical approaches. The transformation is performed at the semi-discrete level, which makes the method applicable to a wide range of kinetic schemes and replaces the nonlinear optimization problems with inversions of the positive-definite Hessian matrix. As a result, the new scheme avoids the realizability-related problems. Moreover, a discrete entropy law can be enforced by modifying the time stepping scheme. Our numerical experiments demonstrate that our new method is often several times faster than the standard optimization-based scheme.
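
A caricature of the idea in a few lines: for a toy M1 entropy ansatz f(v) = exp(a0 + a1*v) on v in [-1, 1], the classical approach recovers the multipliers from the moments by solving a convex dual problem per cell, whereas the transformed approach evolves the multipliers directly, so a time step only needs one solve with the symmetric positive-definite Hessian. The quadrature, ansatz, and toy moment dynamics below are illustrative, not the paper's scheme.

```python
import numpy as np

# Toy M1 minimum-entropy closure in 1D: ansatz f(v) = exp(a0 + a1*v) on
# v in [-1, 1]; moments u = (u0, u1) = integrals of (1, v) * f.
v, w = np.polynomial.legendre.leggauss(40)  # quadrature on [-1, 1]

def moments(alpha):
    f = np.exp(alpha[0] + alpha[1] * v)
    return np.array([np.sum(w * f), np.sum(w * v * f)])

def hessian(alpha):
    # Hessian of the dual objective = moment matrix of (1, v) weighted by
    # f; symmetric positive definite for realizable states.
    f = np.exp(alpha[0] + alpha[1] * v)
    m = np.stack([np.ones_like(v), v])
    return (m * (w * f)) @ m.T

def solve_dual(u, alpha=(0.0, 0.0), tol=1e-12):
    # Classical approach: Newton iteration recovering alpha from given u,
    # i.e., the optimization solved in every cell of the space-time grid.
    alpha = np.array(alpha, dtype=float)
    for _ in range(50):
        g = moments(alpha) - u
        if np.linalg.norm(g) < tol:
            break
        alpha = alpha - np.linalg.solve(hessian(alpha), g)
    return alpha

# Transformed approach, in caricature: evolve the multipliers directly.
# Since du = H(alpha) d(alpha), a time step only needs one SPD solve
# instead of a full per-cell optimization.
alpha = solve_dual(np.array([1.0, 0.1]))
du_dt = np.array([0.0, -0.05])  # toy moment dynamics
dalpha_dt = np.linalg.solve(hessian(alpha), du_dt)
print("alpha:", alpha, "d(alpha)/dt:", dalpha_dt)
```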

