Data set for noble gas plume exposure model validation

1975 ◽  
Author(s):  
C. Gogolak


Author(s):
Wilfried Mirschel ◽  
Karl-Otto Wenkel ◽  
Martin Wegehenkel ◽  
Kurt Christian Kersebaum ◽  
Uwe Schindler ◽  
...  

Mathematics ◽  
2020 ◽  
Vol 8 (10) ◽  
pp. 1786 ◽  
Author(s):  
A. M. Abd El-Raheem ◽  
M. H. Abu-Moussa ◽  
Marwa M. Mohie El-Din ◽  
E. H. Hafez

In this article, a progressive-stress accelerated life test (ALT) based on progressive type-II censoring is studied. The cumulative exposure model is used when the lifetime of the test units follows a Pareto-IV distribution. Estimates of the model parameters, including the maximum likelihood estimates (MLEs) and Bayes estimates (BEs), are discussed. The Bayes estimates are derived using the Tierney and Kadane (TK) approximation method and the importance sampling method. Asymptotic and bootstrap confidence intervals (CIs) for the parameters are constructed. A real data set is analyzed to illustrate the proposed methods. Two types of progressive-stress tests, the simple ramp-stress test and the multiple ramp-stress test, are compared through a simulation study. Finally, some conclusions are drawn.
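As a hedged illustration of one building block of this methodology (not the authors' full cumulative exposure formulation), the sketch below computes maximum likelihood estimates of Pareto-IV parameters from progressively type-II censored data. The failure times, removal pattern, and zero location parameter are assumptions made purely for the example.

```python
# Minimal sketch: MLE of Pareto-IV parameters (location fixed at 0) under
# progressive type-II censoring. Survival: S(x) = [1 + (x/sigma)**(1/gamma)]**(-alpha)
import numpy as np
from scipy.optimize import minimize

def neg_log_likelihood(params, x, removals):
    """Negative log-likelihood L = prod_i f(x_i) * S(x_i)**R_i,
    where R_i units are removed (censored) at the i-th observed failure."""
    alpha, sigma, gamma = params
    if alpha <= 0 or sigma <= 0 or gamma <= 0:
        return np.inf
    z = (x / sigma) ** (1.0 / gamma)
    log_S = -alpha * np.log1p(z)                        # log survival function
    log_f = (np.log(alpha) - np.log(gamma * sigma)
             + (1.0 / gamma - 1.0) * np.log(x / sigma)
             - (alpha + 1.0) * np.log1p(z))             # log density
    return -np.sum(log_f + removals * log_S)

# Illustrative failure times and removal counts (made up, not the paper's data).
x = np.array([0.4, 0.9, 1.3, 2.1, 3.6, 5.2])
removals = np.array([1, 0, 2, 0, 1, 3])

result = minimize(neg_log_likelihood, x0=[1.0, 1.0, 1.0],
                  args=(x, removals), method="Nelder-Mead")
print("MLEs (alpha, sigma, gamma):", result.x)
```

The same negative log-likelihood could be reused inside a bootstrap loop to obtain the bootstrap confidence intervals mentioned in the abstract.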


2012 ◽  
Vol 12 (22) ◽  
pp. 10759-10769 ◽  
Author(s):  
N. I. Kristiansen ◽  
A. Stohl ◽  
G. Wotawa

Caesium-137 (137Cs) and iodine-131 (131I) are radionuclides of particular concern during nuclear accidents, because they are emitted in large amounts and have significant health impacts. 137Cs and 131I attach to ambient accumulation-mode (AM) aerosols and share their fate as the aerosols are removed from the atmosphere by scavenging within clouds, precipitation and dry deposition. Here, we estimate their removal times from the atmosphere using a unique high-precision global measurement data set collected over several months after the accident at the Fukushima Dai-ichi nuclear power plant in March 2011. The noble gas xenon-133 (133Xe), also released during the accident, served as a passive tracer of air mass transport for determining the removal times of 137Cs and 131I via the decrease in the measured ratios 137Cs/133Xe and 131I/133Xe over time. After correction for radioactive decay, the 137Cs/133Xe ratios reflect the removal of aerosols by wet and dry deposition, whereas the 131I/133Xe ratios are also influenced by aerosol production from gaseous 131I. We find removal times for 137Cs of 10.0–13.9 days and for 131I of 17.1–24.2 days during April and May 2011. The removal time of 131I is longer because of aerosol production from gaseous 131I; thus, the removal time for 137Cs serves as a better estimate of aerosol lifetime. The removal time of 131I is of interest for semi-volatile species. We discuss possible caveats (e.g. late emissions, resuspension) that can affect the results, and compare the 137Cs removal times with observation-based and modeled aerosol lifetimes. Our 137Cs removal time of 10.0–13.9 days should be representative of a "background" AM aerosol well mixed in the extratropical Northern Hemisphere troposphere. It is expected that the lifetime of this vertically mixed background aerosol is longer than the lifetime of fresh AM aerosols directly emitted from surface sources. However, the substantial difference from the mean lifetimes of AM aerosols obtained from aerosol models, typically in the range of 3–7 days, warrants further research on the cause of this discrepancy. Modeled AM aerosol lifetimes that are too short would have serious implications for air quality and climate model predictions.
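A minimal sketch of the ratio method described above, assuming illustrative (not measured) activity ratios and using only the well-known half-lives of 133Xe and 137Cs: the measured 137Cs/133Xe ratios are corrected for radioactive decay and an exponential decline is fitted, whose e-folding time is the removal time.

```python
# Sketch of the ratio method: decay-correct 137Cs/133Xe activity ratios,
# then fit an exponential decline whose e-folding time is the aerosol
# removal time. The ratio values below are made up for illustration; they
# rise with time because 133Xe decays much faster than 137Cs.
import numpy as np

T_HALF_CS137 = 30.1 * 365.25    # days
T_HALF_XE133 = 5.25             # days
lam_cs = np.log(2) / T_HALF_CS137
lam_xe = np.log(2) / T_HALF_XE133

t = np.array([10.0, 20.0, 30.0, 40.0, 50.0])            # days since release
measured_ratio = np.array([1.6, 2.6, 4.3, 7.0, 11.4])   # illustrative 137Cs/133Xe

# Remove the apparent increase of the ratio caused purely by the faster
# radioactive decay of 133Xe relative to 137Cs.
corrected = measured_ratio * np.exp(-(lam_xe - lam_cs) * t)

# Linear fit of ln(ratio) versus time; slope = -1 / removal_time.
slope, intercept = np.polyfit(t, np.log(corrected), 1)
print("removal time (days):", -1.0 / slope)
```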


2004 ◽  
Vol 65 (3) ◽  
pp. 273-288
Author(s):  
Dimosthenis Anagnostopoulos ◽  
Vassilis Dalakas ◽  
Mara Nikolaidou

2006 ◽  
Vol 4 (1) ◽  
pp. 97
Author(s):  
Alan Cosme Rodrigues da Silva ◽  
Claudio Henrique Da Silveira Barbedo ◽  
Gustavo Silva Araújo ◽  
Myrian Beatriz Eiras das Neves

The purpose of this paper is to analyze backtesting methodologies for VaR, focusing on aspects such as suitability for volatile markets and limited data sets. From a regulatory standpoint, we evaluate tests that complement the Basel traffic-light results, using simulated and real data. The results indicate that tests based on the proportion of failures are not adequate for small samples, even with 1,000 observations. The Basel criterion is conservative and has low power, which does not invalidate its application, since the criterion is only one of the procedures adopted in the internal model validation process. We therefore suggest using tests that capture the shape of the returns distribution, such as the Kuiper test, in addition to the Basel criterion.
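As a hedged illustration of the failure-proportion class of backtests that the paper finds underpowered in small samples, the sketch below implements a Kupiec-style proportion-of-failures likelihood-ratio test on simulated returns; the coverage level, sample size, and VaR series are assumptions for the example, not the paper's data.

```python
# Proportion-of-failures (Kupiec POF) backtest: count VaR exceedances and
# compare the observed failure rate with the VaR coverage level via a
# likelihood-ratio statistic (chi-square with 1 degree of freedom).
import numpy as np
from scipy.stats import chi2

def kupiec_pof(returns, var_forecasts, coverage=0.01):
    """LR test that the exceedance rate equals `coverage`."""
    exceedances = returns < -var_forecasts            # losses beyond VaR
    x, T = int(exceedances.sum()), len(returns)
    pi_hat = x / T
    log_lik_null = (T - x) * np.log(1 - coverage) + x * np.log(coverage)
    log_lik_alt = ((T - x) * np.log(1 - pi_hat) + x * np.log(pi_hat)
                   if 0 < x < T else 0.0)              # guard x = 0 or x = T
    lr = -2.0 * (log_lik_null - log_lik_alt)
    return lr, chi2.sf(lr, df=1)                       # statistic, p-value

# Simulated example: one year of daily returns against a static 99% VaR.
rng = np.random.default_rng(0)
returns = rng.normal(0.0, 0.01, size=250)
var_1pct = np.full(250, 2.33 * 0.01)                   # ~99% normal quantile
lr, p_value = kupiec_pof(returns, var_1pct, coverage=0.01)
print(f"LR = {lr:.3f}, p-value = {p_value:.3f}")
```

With only 250 observations and a 1% coverage level, the expected number of exceedances is about 2.5, which is why tests of this kind struggle to reject miscalibrated models in small samples.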


AIChE Journal ◽  
2019 ◽  
Vol 66 (2) ◽  
Author(s):  
Xi Gao ◽  
Jia Yu ◽  
Cheng Li ◽  
Rupen Panday ◽  
Yupeng Xu ◽  
...  

2020 ◽  
pp. 103118
Author(s):  
Steven Deere ◽  
Hui Xie ◽  
Edwin R. Galea ◽  
David Cooney ◽  
Peter J. Lawrence

2018 ◽  
Author(s):  
David J. Warne ◽  
Ruth E. Baker ◽  
Matthew J. Simpson

Reaction–diffusion models describing the movement, reproduction and death of individuals within a population are key mathematical modelling tools with widespread applications in mathematical biology. A diverse range of such continuum models have been applied in various biological contexts by choosing different flux and source terms in the reaction–diffusion framework. For example, to describe collective spreading of cell populations, the flux term may be chosen to reflect various movement mechanisms, such as random motion (diffusion), adhesion, haptotaxis, chemokinesis and chemotaxis. The choice of flux terms in specific applications, such as wound healing, is usually made heuristically, and rarely is it tested quantitatively against detailed cell density data. More generally, in mathematical biology, the questions of model validation and model selection have not received the same attention as the questions of model development and model analysis. Many studies do not consider model validation or model selection, and those that do often base the selection of the model on residual error criteria after model calibration is performed using nonlinear regression techniques. In this work, we present a model selection case study, in the context of cell invasion, with a very detailed experimental data set. Using Bayesian analysis and information criteria, we demonstrate that model selection and model validation should account for both residual errors and model complexity. These considerations are often overlooked in the mathematical biology literature. The results we present here provide a clear methodology that can be used to guide model selection across a range of applications. Furthermore, the case study we present provides a clear example where neglecting the role of model complexity can give rise to misleading outcomes.
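As a hedged illustration of the general point (not the paper's cell-invasion models or data), the sketch below fits two candidate growth models of different complexity to synthetic cell density data and compares them with an information criterion, so that both residual error and the number of parameters enter the selection.

```python
# Compare an exponential and a logistic growth model fitted to synthetic,
# noisy cell density data using AIC (which penalises extra parameters).
import numpy as np
from scipy.optimize import curve_fit

def exponential(t, c0, r):
    return c0 * np.exp(r * t)

def logistic(t, c0, r, K):
    return K * c0 / (c0 + (K - c0) * np.exp(-r * t))

def aic_gaussian(y, y_fit, n_params):
    """AIC for least-squares fits with Gaussian errors (up to a constant)."""
    n = len(y)
    rss = np.sum((y - y_fit) ** 2)
    return n * np.log(rss / n) + 2 * n_params

# Synthetic data: logistic growth (K = 1, c0 = 0.05, r = 0.12) plus noise.
t = np.linspace(0, 48, 13)                              # hours
density = 0.05 * np.exp(0.12 * t) / (1 + 0.05 * (np.exp(0.12 * t) - 1))
density += np.random.default_rng(1).normal(0, 0.01, t.size)

p_exp, _ = curve_fit(exponential, t, density, p0=[0.05, 0.1], maxfev=10000)
p_log, _ = curve_fit(logistic, t, density, p0=[0.05, 0.1, 1.0], maxfev=10000)

print("AIC exponential:", aic_gaussian(density, exponential(t, *p_exp), 2))
print("AIC logistic:   ", aic_gaussian(density, logistic(t, *p_log), 3))
```

The lower AIC identifies the logistic model here despite its extra parameter, which is the kind of trade-off between fit quality and complexity that purely residual-based selection ignores.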


2015 ◽  
Vol 137 (1) ◽  
Author(s):  
David A. Romero ◽  
Veronica E. Marin ◽  
Cristina H. Amon

Metamodels, or surrogate models, have been proposed in the literature to reduce the resources (time/cost) invested in the design and optimization of engineering systems whose behavior is modeled using complex computer codes, in an area commonly known as simulation-based design optimization. Following the seminal paper of Sacks et al. (1989, "Design and Analysis of Computer Experiments," Stat. Sci., 4(4), pp. 409–435), researchers have developed the field of design and analysis of computer experiments (DACE), focusing on different aspects of the problem such as experimental design, approximation methods, model fitting, model validation, and metamodeling-based optimization methods. Among these, model validation remains a key issue, as the reliability and trustworthiness of the results depend greatly on the quality of approximation of the metamodel. Typically, model validation involves calculating prediction errors of the metamodel using a data set different from the one used to build the model. Due to the high cost associated with computer experiments with simulation codes, validation approaches that do not require additional data points (samples) are preferable. However, it is documented that methods based on resampling, e.g., cross validation (CV), can exhibit oscillatory behavior during sequential/adaptive sampling and model refinement, thus making it difficult to quantify the approximation capabilities of the metamodels and/or to define rational stopping criteria for the metamodel refinement process. In this work, we present the results of a simulation experiment conducted to study the evolution of several error metrics during sequential model refinement, to estimate prediction errors, and to define proper stopping criteria without requiring additional samples beyond those used to build the metamodels. Our results show that it is possible to accurately estimate the predictive performance of Kriging metamodels without additional samples, and that leave-one-out CV errors perform poorly in this context. Based on our findings, we propose guidelines for choosing the sample size of computer experiments that use a sequential/adaptive model refinement paradigm. We also propose a stopping criterion for sequential model refinement that does not require additional samples.
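A minimal sketch of the kind of comparison discussed above, using scikit-learn's Gaussian process regressor as a stand-in Kriging metamodel and a cheap analytic function in place of an expensive simulation code; the test function, sample size, and kernel are assumptions made for the example, not the paper's experiment.

```python
# Contrast leave-one-out CV error with the "true" prediction error of a
# Kriging (Gaussian process) metamodel on a dense reference grid.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF
from sklearn.model_selection import LeaveOneOut

def simulator(x):                        # cheap stand-in for an expensive code
    return np.sin(3 * x) + 0.5 * x

rng = np.random.default_rng(2)
X_train = rng.uniform(0, 3, size=(15, 1))
y_train = simulator(X_train).ravel()
X_test = np.linspace(0, 3, 200).reshape(-1, 1)          # dense "truth" grid
y_test = simulator(X_test).ravel()

# Leave-one-out CV: refit the metamodel with one training sample held out.
loo_errors = []
for train_idx, test_idx in LeaveOneOut().split(X_train):
    gp = GaussianProcessRegressor(kernel=RBF(), normalize_y=True)
    gp.fit(X_train[train_idx], y_train[train_idx])
    loo_errors.append(y_train[test_idx][0] - gp.predict(X_train[test_idx])[0])

# Prediction error of the metamodel built on all training samples.
gp_full = GaussianProcessRegressor(kernel=RBF(), normalize_y=True)
gp_full.fit(X_train, y_train)
rmse_true = np.sqrt(np.mean((y_test - gp_full.predict(X_test)) ** 2))

print("LOO-CV RMSE:", np.sqrt(np.mean(np.square(loo_errors))))
print("true RMSE  :", rmse_true)
```

Repeating this comparison as samples are added sequentially is one simple way to see the oscillatory behavior of CV-based error estimates that the abstract refers to.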

