New Models Used to Determine the Dioxins Total Amount and Toxicity (TEQ) in Atmospheric Emissions from Thermal Processes

Energies ◽  
2019 ◽  
Vol 12 (23) ◽  
pp. 4434
Author(s):  
Damià Palmer ◽  
Josep O. Pou ◽  
L. Gonzalez-Sabaté ◽  
Jordi Díaz-Ferrero ◽  
Juan A. Conesa ◽  
...  

To reduce the computational effort of simulating emissions of polychlorinated dibenzo-p-dioxins and furans (PCDD/F) during municipal solid waste incineration, the number of simulated components must be minimized. For this purpose, two new multilinear regression models, capable of determining the total amount and toxicity of dioxins in an atmospheric emission, have been fitted based on previously published ones. The new data source (almost 200 PCDD/F analyses) gives the models a wider range of application and increases the diversity of the emission sources, covering both industrial- and laboratory-scale thermal processes. Only three of the 17 toxic congeners (1,2,3,6,7,8-HxCDD, 2,3,7,8-TCDF and OCDF), whose formation was found to be linearly independent, were necessary as inputs for the models. All model parameters have been statistically validated and their confidence intervals have been calculated using the bootstrap method. The resulting coefficients of determination (R2) for the models are 0.9711 ± 0.0056 and 0.9583 ± 0.0085; their root mean square errors (RMSE) are 0.2115 and 0.2424, and their mean absolute errors (MAE) are 0.1541 and 0.1733, respectively.
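A minimal sketch of the technique described above: a multilinear regression on three congener concentrations, with a bootstrap confidence interval on the coefficient of determination. The data here are synthetic stand-ins, not the authors' dataset, and the coefficients are illustrative.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)

# Synthetic stand-in for ~200 PCDD/F analyses: three congener
# concentrations (standing in for 1,2,3,6,7,8-HxCDD, 2,3,7,8-TCDF
# and OCDF) as predictors, a total-emission response as target.
n = 200
X = rng.lognormal(mean=0.0, sigma=1.0, size=(n, 3))
y = 0.8 * X[:, 0] + 1.5 * X[:, 1] + 0.4 * X[:, 2] + rng.normal(0, 0.3, n)

model = LinearRegression().fit(X, y)
print("R2 on full data:", r2_score(y, model.predict(X)))

# Bootstrap confidence interval for R2: refit on resampled rows.
boot_r2 = []
for _ in range(2000):
    idx = rng.integers(0, n, n)  # sample rows with replacement
    m = LinearRegression().fit(X[idx], y[idx])
    boot_r2.append(r2_score(y[idx], m.predict(X[idx])))

lo, hi = np.percentile(boot_r2, [2.5, 97.5])
print(f"Bootstrap 95% CI for R2: [{lo:.4f}, {hi:.4f}]")
```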

2008 ◽  
Vol 33 (3) ◽  
pp. 257-278 ◽  
Author(s):  
Yuming Liu ◽  
E. Matthew Schulz ◽  
Lei Yu

A Markov chain Monte Carlo (MCMC) method and a bootstrap method were compared in the estimation of standard errors of item response theory (IRT) true score equating. Three test form relationships were examined: parallel, tau-equivalent, and congeneric. Data were simulated based on Reading Comprehension and Vocabulary tests of the Iowa Tests of Basic Skills®. For parallel and congeneric test forms within valid IRT true score ranges, the pattern and magnitude of standard errors of IRT true score equating estimated by the MCMC method were very close to those estimated by the bootstrap method. For tau-equivalent test forms, the pattern of standard errors estimated by the two methods was also similar. Bias and mean square errors of equating produced by the MCMC method were smaller than those produced by the bootstrap method; however, standard errors were larger. In educational testing, the MCMC method may be used as an additional or alternative procedure to the bootstrap method when evaluating the precision of equating results.
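A minimal sketch of the bootstrap side of the comparison above: the standard error of a derived statistic is estimated by resampling examinees with replacement. The equated_score function is a hypothetical placeholder; real IRT true score equating would replace it.

```python
import numpy as np

rng = np.random.default_rng(1)

def equated_score(scores_x, scores_y):
    # Placeholder for IRT true score equating: a simple mean
    # difference stands in for the equating function (illustrative).
    return scores_y.mean() - scores_x.mean()

# Synthetic examinee scores on two test forms.
scores_x = rng.normal(50, 10, 500)
scores_y = rng.normal(52, 10, 500)

# Bootstrap standard error: resample examinees with replacement,
# recompute the equating statistic, take the std of the replicates.
reps = []
for _ in range(1000):
    bx = rng.choice(scores_x, size=scores_x.size, replace=True)
    by = rng.choice(scores_y, size=scores_y.size, replace=True)
    reps.append(equated_score(bx, by))

print("Bootstrap SE of equating:", np.std(reps, ddof=1))
```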


2020 ◽  
Vol 13 (6) ◽  
pp. 732-737
Author(s):  
Yi Tang ◽  
Arshad Ali ◽  
Li-Huan Feng

Abstract Aims In forest ecosystems, different types of regression models have frequently been used to estimate aboveground biomass, with Ordinary Least Squares (OLS) regression models being the most common prediction models. Yet the relative performance of Bayesian and OLS models in predicting the aboveground biomass of shrubs, especially multi-stem shrubs, has been comparatively little studied in forests. Methods In this study, we developed biomass prediction models for Caragana microphylla Lam., a widely distributed multi-stem shrub that contributes to reducing wind erosion and fixing sand dunes in the Horqin Sand Land, one of the largest sand lands in China. We developed six types of formulations within the regression-model framework and then selected the best model based on specific criteria. We then estimated the parameters of the best model with OLS and Bayesian methods on training and test data under different sample sizes using the bootstrap method. Lastly, we compared the performance of the OLS and Bayesian models in predicting the aboveground biomass of C. microphylla. Important Findings The allometric equation (power = 1) performed best among the six types of equations, even though all of the models were significant. The results showed that the mean squared error on test data was lower with both the non-informative-prior and the informative-prior Bayesian methods than with the OLS method. Among the tested predictors (i.e. plant height and basal diameter), we found that basal diameter was not a significant predictor in either the OLS or the Bayesian method, indicating that suitable predictors and well-fitted models should be chosen carefully. This study highlights that Bayesian methods, the bootstrap method and the type of allometric equation can help improve model accuracy in predicting shrub biomass in sandy lands.
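A minimal sketch of the fitting scheme described above, assuming a power-law allometric relation between height and biomass: the model is fitted by OLS on the log-log scale, then bootstrapped over several sample sizes. All data and values are synthetic stand-ins, not the C. microphylla measurements.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)

# Synthetic shrub data: plant height (m) and aboveground biomass (g);
# an allometric relation biomass = a * height^b is assumed.
height = rng.uniform(0.3, 2.0, 120)
biomass = 15.0 * height ** 1.8 * rng.lognormal(0, 0.2, 120)

# Fit the allometric model by OLS on the log-log scale.
X = sm.add_constant(np.log(height))
ols = sm.OLS(np.log(biomass), X).fit()
a, b = np.exp(ols.params[0]), ols.params[1]
print(f"biomass ~ {a:.2f} * height^{b:.2f}")

# Bootstrap under different sample sizes, echoing the scheme above.
for n in (40, 80, 120):
    mses = []
    for _ in range(500):
        idx = rng.integers(0, 120, n)  # bootstrap training sample
        fit = sm.OLS(np.log(biomass[idx]),
                     sm.add_constant(np.log(height[idx]))).fit()
        pred = fit.predict(X)          # evaluate on the full data
        mses.append(np.mean((np.log(biomass) - pred) ** 2))
    print(f"n={n}: mean bootstrap MSE = {np.mean(mses):.4f}")
```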


Author(s):  
J. Susaki ◽  
H. Sato ◽  
A. Kuriki ◽  
K. Kajiwara ◽  
Y. Honda

Abstract. This paper examines algorithms for estimating terrestrial albedo from the products of the Global Change Observation Mission – Climate (GCOM-C) / Second-generation Global Imager (SGLI), which was launched in December 2017 by the Japan Aerospace Exploration Agency. We selected two algorithms: one based on a bidirectional reflectance distribution function (BRDF) model and one based on multi-regression models. The former determines kernel-driven BRDF model parameters from multiple sets of reflectance and estimates the land surface albedo from those parameters. The latter estimates the land surface albedo from a single set of reflectance with multi-regression models. The multi-regression models are derived for an arbitrary geometry from datasets of simulated albedo and multi-angular reflectance. In experiments using in situ multi-temporal data for barren land, deciduous broadleaf forests, and paddy fields, the albedos estimated by the BRDF-based and multi-regression-based algorithms achieve reasonable root-mean-square errors. However, the latter algorithm requires information about the land cover of the pixel of interest, and the variance of its estimated albedo is sensitive to the observation geometry. We therefore conclude that the BRDF-based algorithm is more robust and can be applied to SGLI operational albedo products for various applications, including climate-change research.
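A minimal sketch of the BRDF-based algorithm's core step, assuming the usual kernel-driven linear model: reflectance is a linear combination of isotropic, volumetric, and geometric kernels, and the kernel weights are recovered by least squares from multi-angular observations. The kernel values and integration constants below are placeholders, not the SGLI processing chain.

```python
import numpy as np

rng = np.random.default_rng(3)

# Kernel-driven BRDF model: R = f_iso + f_vol*K_vol + f_geo*K_geo.
# Random placeholder kernel values stand in for Ross-Thick /
# Li-Sparse kernels evaluated at each observation geometry.
n_obs = 12
K = np.column_stack([
    np.ones(n_obs),                 # isotropic kernel
    rng.uniform(-0.2, 0.6, n_obs),  # volumetric kernel K_vol
    rng.uniform(-1.0, 0.2, n_obs),  # geometric kernel K_geo
])
f_true = np.array([0.25, 0.10, 0.05])
reflectance = K @ f_true + rng.normal(0, 0.005, n_obs)

# Recover the kernel weights from the multi-angular reflectance set.
f_hat, *_ = np.linalg.lstsq(K, reflectance, rcond=None)
print("estimated (f_iso, f_vol, f_geo):", np.round(f_hat, 4))

# Albedo then follows by integrating the kernels over the hemisphere;
# with fixed kernel integrals h this reduces to a dot product
# (h values here are illustrative assumptions).
h_vol, h_geo = 0.189, -1.377
print("albedo estimate:", f_hat @ np.array([1.0, h_vol, h_geo]))
```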


CAUCHY ◽  
2018 ◽  
Vol 5 (3) ◽  
pp. 95
Author(s):  
Ovi Delviyanti Saputri ◽  
Ferra Yanuar ◽  
Dodi Devianto

Quantile regression is a regression method that separates or divides the data into certain quantiles by minimizing the sum of absolute values of asymmetrically weighted errors, in order to overcome unfulfilled assumptions such as the presence of autocorrelation. The resulting model parameters are tested for accuracy using the bootstrap method. The bootstrap method is a parameter estimation method that resamples from the original sample R times (replications). The bootstrap confidence interval is then used to test the consistency of the estimator constructed by the quantile regression method, and the unbiasedness of the quantile regression estimates is tested with the bootstrap method. The data in this test were replicated 10 times. The bias is calculated from the difference between the quantile estimate and the bootstrap estimate. The quantile estimation method is said to be unbiased if the bias is less than the bootstrap standard deviation. This study proves that the estimated value from quantile regression lies within the bootstrap percentile confidence interval, and that 10 replications produce a better estimate than other replication sizes. The quantile regression method in this study is also able to produce unbiased parameter estimates.
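A minimal sketch of the procedure described above, assuming the statsmodels quantile regression API: a median regression is fitted, and a bootstrap percentile confidence interval for the slope is built from R replications. The data are synthetic.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(4)

# Synthetic heteroscedastic data, where OLS assumptions fail.
x = rng.uniform(0, 10, 200)
y = 2.0 + 0.5 * x + rng.normal(0, 0.5 + 0.3 * x, 200)
df = pd.DataFrame({"x": x, "y": y})

# Median (tau = 0.5) quantile regression.
fit = smf.quantreg("y ~ x", df).fit(q=0.5)
print("quantile estimates:", fit.params.values)

# Bootstrap percentile confidence interval for the slope (R = 1000).
slopes = []
for _ in range(1000):
    bs = df.sample(n=len(df), replace=True)
    slopes.append(smf.quantreg("y ~ x", bs).fit(q=0.5).params["x"])
lo, hi = np.percentile(slopes, [2.5, 97.5])
print(f"bootstrap 95% CI for slope: [{lo:.3f}, {hi:.3f}]")

# Echoing the abstract's check: the quantile estimate is deemed
# consistent if it falls inside the bootstrap percentile interval.
print("slope inside CI:", lo <= fit.params["x"] <= hi)
```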


2003 ◽  
Vol 5 (3) ◽  
pp. 363 ◽  
Author(s):  
Slamet Sugiri

The main objective of this study is to examine the hypothesis that the predictive content of normal income disaggregated into operating income and non-operating income outperforms that of aggregated normal income in predicting future cash flow. To test the hypothesis, linear regression models are developed. The model parameters are estimated based on fifty-five manufacturing firms listed on the Jakarta Stock Exchange (JSX) up to the end of 1997. This study finds that the empirical evidence supports the hypothesis. This evidence supports arguments that, in reporting income from continuing operations, the multiple-step approach is preferred to the single-step one.
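A minimal sketch of the model comparison described above: one regression on aggregated normal income versus one on its operating and non-operating components, compared by in-sample prediction error. The firm data are synthetic stand-ins, not the JSX sample.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(5)

# Synthetic firm-year data: operating and non-operating income
# predicting next-period cash flow (all values illustrative).
n = 55
op = rng.normal(100, 20, n)
nonop = rng.normal(10, 8, n)
cash_next = 0.9 * op + 0.2 * nonop + rng.normal(0, 5, n)

# Aggregated model: normal income as a single predictor.
agg = LinearRegression().fit((op + nonop).reshape(-1, 1), cash_next)
# Disaggregated model: operating and non-operating income separately.
dis = LinearRegression().fit(np.column_stack([op, nonop]), cash_next)

print("MAE aggregated:   ",
      mean_absolute_error(cash_next,
                          agg.predict((op + nonop).reshape(-1, 1))))
print("MAE disaggregated:",
      mean_absolute_error(cash_next,
                          dis.predict(np.column_stack([op, nonop]))))
```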


Universe ◽  
2021 ◽  
Vol 7 (1) ◽  
pp. 8
Author(s):  
Alessandro Montoli ◽  
Marco Antonelli ◽  
Brynmor Haskell ◽  
Pierre Pizzochero

A common way to calculate the glitch activity of a pulsar is an ordinary linear regression of the observed cumulative glitch history. This method, however, is likely to underestimate the errors on the activity, as it implicitly assumes a (long-term) linear dependence between glitch sizes and waiting times, as well as equal variance, i.e., homoscedasticity, in the fit residuals; neither assumption is well justified by pulsar data. In this paper, we review the extrapolation of the glitch activity parameter and explore two alternatives: relaxing the homoscedasticity hypothesis in the linear fit and using the bootstrap technique. We find a larger uncertainty in the activity than that obtained by ordinary linear regression, especially for those objects in which the activity can be significantly affected by a single glitch. We discuss how this affects the theoretical upper bound on the moment of inertia associated with the region of a neutron star containing the superfluid reservoir of angular momentum released in a stationary sequence of glitches. We find that this upper bound is less tight if one considers the uncertainty on the activity estimated with the bootstrap method and allows for models in which the superfluid reservoir is entirely in the crust.
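A minimal sketch of the bootstrap alternative discussed above: the activity is the slope of a linear fit to the cumulative glitch history, and its uncertainty is estimated by resampling (waiting time, size) pairs. The glitch history is synthetic.

```python
import numpy as np

rng = np.random.default_rng(6)

# Synthetic glitch history: waiting times and sizes (illustrative).
waits = rng.exponential(400.0, 30)   # days between glitches
sizes = rng.lognormal(0.0, 1.0, 30)  # glitch sizes (arbitrary units)
t = np.cumsum(waits)
cum = np.cumsum(sizes)

# Ordinary linear regression of the cumulative glitch history:
# the slope is the glitch activity.
slope_ols = np.polyfit(t, cum, 1)[0]

# Bootstrap: resample (waiting time, size) pairs, rebuild the
# cumulative history, refit the slope.
slopes = []
for _ in range(2000):
    idx = rng.integers(0, len(waits), len(waits))
    tb, cb = np.cumsum(waits[idx]), np.cumsum(sizes[idx])
    slopes.append(np.polyfit(tb, cb, 1)[0])

print(f"activity (OLS slope): {slope_ols:.4f}")
print(f"bootstrap std of activity: {np.std(slopes, ddof=1):.4f}")
```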


1998 ◽  
Vol 217 (1) ◽  
Author(s):  
Hans Schneeberger

Summary: With Efron's law-school example, the bootstrap method is compared with an alternative method, called doubling. It is shown that the mean deviation of the estimator is always smaller for the doubling method.

