Simulation Model Refinement for Decision Making Via a Value-of-Information Based Metric

Author(s):  
Jitesh H. Panchal ◽  
Christiaan J. J. Paredis ◽  
Janet K. Allen ◽  
Farrokh Mistree

Since no simulation model is perfect, any model of a system's physical behavior can be refined further. Hence, the question faced by a designer is: "How much refinement of a simulation model is adequate for a particular design problem?" To answer this question, we present a value-of-information based approach for determining the appropriate extent of refinement of simulation models. The value of the additional information obtained by refining a simulation model is measured as the difference between the maximum payoff that could possibly be achieved throughout the design space and the minimum possible payoff at the design point selected using the simple model. The approach is presented using two examples: the design of a pressure vessel and the design of a material.
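As a rough illustration of this metric, the following Python sketch computes the bound described above for a hypothetical one-dimensional design space; the functions `payoff_simple`, `payoff_interval`, and the error band are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def value_of_information_bound(payoff_simple, payoff_interval, design_space):
    """Upper bound on the value of refining a simulation model.

    payoff_simple(x)   -- payoff predicted by the simple model at design x
    payoff_interval(x) -- (lower, upper) payoff bounds accounting for model error
    design_space       -- iterable of candidate design points
    """
    # Design chosen using the simple (unrefined) model.
    x_star = max(design_space, key=payoff_simple)

    # Minimum payoff that could actually be realized at that chosen design.
    worst_at_choice = payoff_interval(x_star)[0]

    # Maximum payoff achievable anywhere in the design space.
    best_anywhere = max(payoff_interval(x)[1] for x in design_space)

    # Value of additional information: what refinement could still buy us.
    return best_anywhere - worst_at_choice

# Illustrative use on a one-dimensional design space with a +/- 0.05 error band.
designs = np.linspace(0.0, 1.0, 101)
simple = lambda x: -(x - 0.6) ** 2
interval = lambda x: (simple(x) - 0.05, simple(x) + 0.05)
print(value_of_information_bound(simple, interval, designs))
```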

2008 ◽  
Vol 40 (3) ◽  
pp. 223-251 ◽  
Author(s):  
Jitesh H. Panchal ◽  
Christiaan J.J. Paredis ◽  
Janet K. Allen ◽  
Farrokh Mistree

2020 ◽  
Vol 70 (1) ◽  
pp. 54-59
Author(s):  
Zhi Zhu ◽  
Yonglin Lei ◽  
Yifan Zhu

Model-driven engineering has become popular in combat effectiveness simulation systems engineering in recent years. It allows a simulation model to be developed systematically and in a composable way. However, implementing a conceptual model is a complex and costly task if it is not guided by a well-established framework. This study therefore explores methodologies for engineering the development of simulation models. For this purpose, we define an ontological metamodelling framework. The framework starts with ontology-aware conceptual descriptions of the system, then refines and transforms them toward system models until they reach final executable implementations. As a proof of concept, we identify a set of ontology-aware modelling frameworks for combat system specification, and an underwater target search scenario is presented as a motivating example; the simulation results can serve as a reference for decision making.
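As a loose illustration of refining a conceptual description into an executable model (not the authors' ontological framework), the following Python sketch transforms a hypothetical conceptual description of a search vessel into a runnable simulation class; all names and parameters are invented for illustration.

```python
from dataclasses import dataclass
import random

# Conceptual level: an ontology-aware description of an entity and its behavior.
conceptual_model = {
    "entity": "SearchVessel",
    "behavior": "random-search",
    "parameters": {"speed_knots": 12.0, "sensor_range_nm": 2.0},
}

# Platform level: a concrete simulation class that can execute that behavior.
@dataclass
class SearchVessel:
    speed_knots: float
    sensor_range_nm: float
    position_nm: float = 0.0

    def step(self, dt_hours: float) -> None:
        # Executable implementation of the "random-search" behavior.
        self.position_nm += self.speed_knots * dt_hours * random.choice([-1, 1])

def transform(description: dict) -> SearchVessel:
    """Model-to-model transformation: conceptual description -> executable model."""
    p = description["parameters"]
    return SearchVessel(speed_knots=p["speed_knots"],
                        sensor_range_nm=p["sensor_range_nm"])

vessel = transform(conceptual_model)
for _ in range(10):
    vessel.step(dt_hours=0.25)
print(vessel.position_nm)
```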


1995 ◽  
Vol 48 (3) ◽  
pp. 425-435 ◽  
Author(s):  
J. Zhao ◽  
W. G. Price ◽  
P. A. Wilson ◽  
M. Tan

It is well known that many collisions occur because one ship turns right whilst the other turns left when the two are in close proximity. Little is known as to why this occurs and, although some simulation models have been established using entropy theory, the problem remains unsolved. In this paper, an assessment model for uncertainty is reviewed briefly. The concepts of uncertainty and uncoordination in mariners' behaviour during collision avoidance are discussed. A simulation model, used in conjunction with a DCPA (distance to the closest point of approach) decision-making model based on fuzzy programming, is introduced to examine coordination.
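For readers unfamiliar with the quantity, the following Python sketch computes DCPA and TCPA from two ships' positions and velocities and maps DCPA to a simple fuzzy "collision risk" degree; the membership thresholds are illustrative assumptions, not the paper's fuzzy programming model.

```python
import numpy as np

def dcpa_tcpa(own_pos, own_vel, tgt_pos, tgt_vel):
    """Distance and time to the closest point of approach between two ships.

    Positions in nautical miles, velocities in knots; returns (DCPA nm, TCPA h).
    """
    r = np.asarray(tgt_pos, float) - np.asarray(own_pos, float)   # relative position
    v = np.asarray(tgt_vel, float) - np.asarray(own_vel, float)   # relative velocity
    if np.dot(v, v) < 1e-9:                    # no relative motion
        return float(np.linalg.norm(r)), float("inf")
    tcpa = -np.dot(r, v) / np.dot(v, v)
    tcpa = max(tcpa, 0.0)                      # CPA already passed -> use current range
    dcpa = float(np.linalg.norm(r + tcpa * v))
    return dcpa, float(tcpa)

def collision_risk(dcpa, safe_dist=1.0, critical_dist=0.25):
    """Simple fuzzy membership for 'risk of collision' as a function of DCPA."""
    if dcpa >= safe_dist:
        return 0.0
    if dcpa <= critical_dist:
        return 1.0
    return (safe_dist - dcpa) / (safe_dist - critical_dist)

# Two ships on crossing courses: DCPA is zero, so the risk degree is 1.0.
d, t = dcpa_tcpa(own_pos=(0, 0), own_vel=(10, 0), tgt_pos=(5, 5), tgt_vel=(0, -10))
print(d, t, collision_risk(d))
```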


Author(s):  
J. Michael Dunn ◽  
Amos Golan

In this chapter, we are interested in understanding the nature of information and its value. We focus on information that is used for making decisions, including related activities such as constructing models, performing inferences, and making predictions. Our discussion is mostly qualitative, and it touches on certain aspects of information as related to the sender, receiver, and a possible observer. Although our emphasis is on shedding more light on the concept of information for making decisions, we are not concerned here with the exact details of the decision process, or information processing itself. In addition to discussing information, our expedition takes us through the traditional notions of utility, prices, and risk, all of which, under certain conditions, relate to the value of information. Our main conclusion is that the value of information (used in decision making) is relative and subjective. Since information is relative, it can have more than one value: a value for the sender, a value for the receiver, or even different values for different senders and receivers, and various values for various "eavesdroppers." Of course, the value might be zero for any of these. Importantly, that value is inversely related to risk when the information is used for decision making. Although this conclusion may be expected, we argue for it in a way that relies on fundamentals of both value and information.


2021 ◽  
pp. 0272989X2110492
Author(s):  
Aasthaa Bansal ◽  
Patrick J. Heagerty ◽  
Lurdes Y. T. Inoue ◽  
David L. Veenstra ◽  
Charles J. Wolock ◽  
...  

Background: Patient surveillance using repeated biomarker measurements presents an opportunity to detect and treat disease progression early. Frequent surveillance testing using biomarkers is recommended and routinely conducted in several diseases, including cancer and diabetes. However, frequent testing involves tradeoffs. Although surveillance tests provide information about current disease status, the complications and costs of frequent tests may not be justified for patients who are at low risk of progression. Predictions based on patients' earlier biomarker values may be used to inform decision making; however, predictions are uncertain, leading to decision uncertainty.

Methods: We propose the Personalized Risk-Adaptive Surveillance (PRAISE) framework, a novel method for embedding predictions into a value-of-information (VOI) framework to account for the cost of uncertainty over time and determine the time point at which collection of biomarker data would be most valuable. The proposed sequential decision-making framework is innovative in that it leverages the patient's longitudinal history, considers individual benefits and harms, and allows for dynamic tailoring of surveillance intervals by considering the uncertainty in current information and estimating the probability that new information may change treatment decisions, as well as the impact of this change on patient outcomes.

Results: When applied to data from cystic fibrosis patients, PRAISE lowers costs by allowing some patients to skip a visit, compared to an "always test" strategy. It does so without compromising expected survival, by recommending less frequent testing among those who are unlikely to be treated at the skipped time point.

Conclusions: A VOI-based approach to patient monitoring is feasible and could be applied to several diseases to develop more cost-effective and personalized strategies for ongoing patient care.

Highlights: In many patient-monitoring settings, the complications and costs of frequent tests are not justified for patients who are at low risk of disease progression. Predictions based on patient history may be used to individualize the timing of patient visits based on evolving risk. We propose Personalized Risk-Adaptive Surveillance (PRAISE), a novel method for personalizing the timing of surveillance testing, where prediction modeling projects the disease trajectory and a value-of-information (VOI)-based pragmatic decision-theoretic framework quantifies patient- and time-specific benefit-harm tradeoffs. A VOI-based approach to patient monitoring could be applied to several diseases to develop more personalized and cost-effective strategies for ongoing patient care.
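The following Python sketch conveys the VOI intuition behind risk-adaptive testing in miniature: a visit's test is worthwhile only if the chance that it changes treatment, weighted by the benefit of that change, exceeds its cost. The function name and all numbers are hypothetical and do not represent the PRAISE model.

```python
def test_is_worthwhile(p_decision_change, benefit_if_changed, test_cost):
    """Return True if a surveillance test at this visit is worth its cost.

    p_decision_change  -- probability that the test result would change treatment,
                          estimated from the patient's predicted risk of progression
    benefit_if_changed -- expected gain (e.g., health benefit in monetary units)
                          from making that change now rather than at a later visit
    test_cost          -- monetary and burden cost of performing the test
    """
    expected_value_of_testing = p_decision_change * benefit_if_changed
    return expected_value_of_testing > test_cost

# Low-risk patient: the test is unlikely to change management, so it can be skipped.
print(test_is_worthwhile(p_decision_change=0.02, benefit_if_changed=5000.0,
                         test_cost=300.0))   # False

# High-risk patient: the same test is now worth performing.
print(test_is_worthwhile(p_decision_change=0.35, benefit_if_changed=5000.0,
                         test_cost=300.0))   # True
```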


2014 ◽  
Vol 2014 (1) ◽  
pp. 94-101 ◽  
Author(s):  
Анатолий Якимов ◽  
Anatoliy Yakimov ◽  
Константин Захарченков ◽  
Konstantin Zakharchenkov

Information assessments are developed to determine the requirements for a software-based simulation model. Software tools are designed to support decision-making and control in the economic systems of enterprises. These tools are focused on the operation of simulation models by managerial decision makers.


2015 ◽  
Vol 55 (5) ◽  
pp. 291-300 ◽  
Author(s):  
Petr Dlask

This paper reports on change as an indicator that can provide more focused goals in studies of development. The paper offers an answer to the question: how might management gain information from a simulation model and thus influence reality through pragmatic changes? We focus on where and when to influence, manage, and control basic technical-economic proposals. These proposals are mostly formed as simulation models; unfortunately, however, they do not always provide an explanation of how changes arise. A wide variety of simulation tools have become available, e.g. Simulink, Wolfram SystemModeler, VisSim, SystemBuild, STELLA, Adams, SIMSCRIPT, COMSOL Multiphysics, etc. However, there is only limited support for constructing simulation models of a technical-economic nature. Mathematics has developed the concept of differentiation, and economics has developed the concept of marginality; technical-economic design has yet to develop an equivalent methodology. This paper discusses an alternative approach that uses the phenomenon of change and provides a path from professional knowledge, which can be seen as a purer kind of information, to a more dynamic computing model (a simulation model that interprets change as a method). The validation of changes, as both a result used in managerial decision making and a condition for it, can thus be improved.
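A minimal Python sketch of the analogy to differentiation and marginality: a finite-difference "change" of a toy technical-economic model's output with respect to one design parameter. The cost model and its numbers are invented for illustration and are not from the paper.

```python
def project_cost(insulation_cm: float) -> float:
    """Toy technical-economic model: investment cost plus 10-year heating cost."""
    investment = 1200.0 + 85.0 * insulation_cm
    annual_heating = 900.0 / (1.0 + 0.4 * insulation_cm)
    return investment + 10.0 * annual_heating

def marginal_change(model, x: float, dx: float = 0.1) -> float:
    """Change in the model output per unit change of the input (finite difference)."""
    return (model(x + dx) - model(x)) / dx

# Where does one extra centimetre of insulation stop paying for itself?
for thickness in (2.0, 5.0, 10.0, 15.0):
    print(thickness, round(marginal_change(project_cost, thickness), 1))
```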


Author(s):  
Ayan Sinha ◽  
Jitesh H. Panchal ◽  
Janet K. Allen ◽  
Farrokh Mistree

The motivating question for this article is: "How should a system-level designer allocate resources for auxiliary simulation model refinement while satisfying system-level design objectives and ensuring robust process requirements in multiscale systems?" Our approach consists of integrating (i) a robust design method for multiscale systems and (ii) an information-economics based approach for quantifying the cost-benefit trade-off of mitigating uncertainty in simulation models. Specifically, the focus is on allocating resources for reducing model parameter uncertainty that arises from insufficient data from simulation models. A comprehensive multiscale design problem, the concurrent design of a material and a product, is used for validation. The multiscale system is simulated with models at multiple length and time scales. The accuracy of the simulated performance is determined by the trade-off between the computational cost of model refinement and the benefit of the uncertainty mitigated by the refined models. Using this approach, system-level designers can efficiently allocate resources for sequential simulation model refinement in multiscale systems.
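A minimal Python sketch of the underlying cost-benefit logic, under the assumption that parameter uncertainty shrinks as 1/sqrt(n) with added simulation samples: refinement continues while the expected payoff improvement exceeds the computational cost of another sample. The functions and numbers are illustrative, not the authors' information-economics formulation.

```python
import math

def expected_payoff_improvement(n_samples: int, sigma0: float = 0.2,
                                payoff_sensitivity: float = 50.0) -> float:
    """Payoff gained by refining from n to n+1 simulation samples.

    Parameter uncertainty is assumed to shrink as sigma0 / sqrt(n), and the
    payoff loss is assumed proportional to that uncertainty.
    """
    loss = lambda n: payoff_sensitivity * sigma0 / math.sqrt(n)
    return loss(n_samples) - loss(n_samples + 1)

def allocate_refinement(cost_per_sample: float, n_start: int = 1,
                        n_max: int = 10_000) -> int:
    """Stop refining when another sample no longer pays for itself."""
    n = n_start
    while n < n_max and expected_payoff_improvement(n) > cost_per_sample:
        n += 1
    return n

print(allocate_refinement(cost_per_sample=0.05))
```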


Author(s):  
Rasol Murtadha Najah

This article discusses the application of methods for eliciting expert knowledge to build a decision-making model based on the processing of physical data about the real state of the environment. Environmental parameters determine its ecological state. To support research in the field of expert assessment of environmental conditions, known works in this area are analysed. The results of the analysis justify the relevance of analytical models, stochastic models, and models based on methods for eliciting expert knowledge. It is concluded that the results obtained with analytical and stochastic models are inaccurate because the objects are complex and poorly described mathematically. The relevance of developing information support for expert assessment of environmental conditions is substantiated. The contribution of this article is that, based on an analysis of the application of expert methods for assessing the state of the environment, a fuzzy-logic decision-making model and information support for assessing the environmental state are proposed. The formalization of the parameters of decision-making models using linguistic and fuzzy variables is considered, and a description of the fuzzy inference model is given. The use of this information support for assessing the state of the environment is illustrated by the example of experts assessing the stage of land desertification.
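A minimal Python sketch of the kind of fuzzy formalization described here, with a single rule mapping two hypothetical environmental parameters (vegetation cover and rainfall) to a "severe desertification" degree; the linguistic variables and membership functions are assumptions for illustration, not the proposed system.

```python
def trapezoid(x, a, b, c, d):
    """Trapezoidal membership function on [a, d] with plateau [b, c]."""
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    return (x - a) / (b - a) if x < b else (d - x) / (d - c)

def desertification_degree(vegetation_cover_pct: float, rainfall_mm: float) -> float:
    """Aggregate a fuzzy 'severe desertification' degree from two linguistic inputs."""
    low_vegetation = trapezoid(vegetation_cover_pct, -1, 0, 10, 30)
    low_rainfall = trapezoid(rainfall_mm, -1, 0, 150, 300)
    # Rule: IF vegetation is low AND rainfall is low THEN desertification is severe.
    return min(low_vegetation, low_rainfall)

print(desertification_degree(vegetation_cover_pct=8.0, rainfall_mm=120.0))   # 1.0
print(desertification_degree(vegetation_cover_pct=25.0, rainfall_mm=260.0))  # 0.25
```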


Author(s):  
Rajesh Dubey ◽  
Udaya K. Chowdary ◽  
Venkateswarlu V.

A controlled-release formulation of metoclopramide was developed using a combination of hypromellose (HPMC) and hydrogenated castor oil (HCO). The developed formulations released the drug over 20 hr, with release kinetics following the Higuchi model. Compared with HCO, HPMC showed a significantly higher influence in controlling the drug release in both the initial and later phases. The difference in influence can be explained by the different swelling and erosion behaviour of the two polymers. The effect of the polymers on release was optimized using a face-centered central composite design to generate a predictable design space. Statistical analysis of the drug release at various levels indicated a linear effect of the polymer levels on the drug release. The release profile of formulations containing polymer levels at the extremes of their ranges in the design space was found to be similar to the predicted release profile.
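Since the release kinetics are said to follow the Higuchi model (cumulative release proportional to the square root of time), a short Python sketch of fitting that model to release data may help; the data points below are illustrative, not the study's measurements.

```python
import numpy as np

time_hr = np.array([1, 2, 4, 8, 12, 16, 20])          # sampling times
release_pct = np.array([14, 21, 30, 44, 55, 63, 70])  # cumulative % drug released

# Higuchi kinetics: Q = kH * sqrt(t), i.e. linear in sqrt(t) through the origin.
sqrt_t = np.sqrt(time_hr)
k_h = np.sum(sqrt_t * release_pct) / np.sum(sqrt_t ** 2)   # least-squares slope

predicted = k_h * sqrt_t
ss_res = np.sum((release_pct - predicted) ** 2)
ss_tot = np.sum((release_pct - release_pct.mean()) ** 2)
r_squared = 1.0 - ss_res / ss_tot

print(f"Higuchi constant kH = {k_h:.1f} %/sqrt(hr), R^2 = {r_squared:.3f}")
```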

