How to improve allometric equations to estimate forest biomass stocks? Some hints from a central African forest

2014 ◽  
Vol 44 (7) ◽  
pp. 685-691 ◽  
Author(s):  
Quentin Moundounga Mavouroulou ◽  
Alfred Ngomanda ◽  
Nestor Laurier Engone Obiang ◽  
Judicaël Lebamba ◽  
Hugues Gomat ◽  
...  

Predicting the biomass of a forest stand using forest inventory data and allometric equations involves a chain of error propagation, from sampling error to tree measurement error. Using a biomass data set of 101 trees in a tropical rain forest in Gabon, we compared two sources of error: the error due to the choice of allometric equation, assessed using Bayesian model averaging, and the biomass measurement error when tree biomass is calculated from tree volume rather than directly weighed. Differences between allometric equations resulted in a between-equation error of about 0.245 for log-transformed biomass, compared with a residual within-equation error of 0.297. Because the residual error averages out as trees are randomly accumulated whereas the between-equation error is incompressible, the latter turned out to be a major source of error at the scale of a 1 ha plot. Measuring volumes rather than masses resulted in an error of 0.241 for log-transformed biomass and an average overestimation of the biomass by 19%. These results confirmed the choice of the allometric equation as a major source of error but unexpectedly showed that measuring volumes could seriously bias biomass estimates.
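For intuition, here is a minimal Monte Carlo sketch of why the two error terms behave so differently at plot scale; the tree count per plot and the mean tree biomass are assumed for illustration and are not the authors' data.

```python
# Minimal Monte Carlo sketch with illustrative values (not the authors' data):
# the within-equation residual error (sigma = 0.297 on log-biomass) is drawn
# independently per tree and averages out over a plot, whereas the
# between-equation error (sigma = 0.245) shifts every tree in a plot together.
import numpy as np

rng = np.random.default_rng(0)
n_plots, n_trees = 10_000, 400          # assumed ~400 trees per 1 ha plot
mu_log = np.log(500.0)                  # assumed mean log-biomass (kg) per tree

# Within-equation (residual) error: one independent draw per tree
eps_tree = rng.normal(0.0, 0.297, size=(n_plots, n_trees))
plot_within = np.exp(mu_log + eps_tree).sum(axis=1)

# Between-equation error: one draw shared by every tree in the plot
eps_eq = rng.normal(0.0, 0.245, size=n_plots)
plot_between = n_trees * np.exp(mu_log + eps_eq)

cv = lambda x: x.std() / x.mean()
print(f"CV of plot biomass, residual error only:         {cv(plot_within):.3f}")
print(f"CV of plot biomass, between-equation error only: {cv(plot_between):.3f}")
# The residual-only CV is roughly 0.30 / sqrt(400) ~ 1.5%, while the
# between-equation CV stays near 25%: at plot scale the equation choice dominates.
```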

Energies ◽  
2020 ◽  
Vol 13 (2) ◽  
pp. 295
Author(s):  
Matteo Spada ◽  
Peter Burgherr

The risk of severe (≥5 fatalities) accidents in fossil energy chains (Coal, Oil and Natural Gas) is analyzed. The full chain risk is assessed for Organization for Economic Co-operation and Development (OECD) countries, the 28 Member States of the European Union (EU28) and non-OECD countries. Furthermore, for Coal, Chinese data are analysed separately for three different periods, i.e., 1994–1999, 2000–2008 and 2009–2016, owing to different data sources and highly incomplete data prior to 1994. Bayesian Model Averaging (BMA) is applied to investigate the risk and associated uncertainties of a comprehensive accident data set from the Paul Scherrer Institute's ENergy-related Severe Accident Database (ENSAD). By means of BMA, frequency and severity distributions are established, and a final posterior distribution including model uncertainty is constructed as a weighted combination of the different models. By dealing with both lack of data and lack of knowledge, the proposed approach allows for a general reduction of the uncertainty in the calculated risk indicators, which is beneficial for informed decision-making under uncertainty.
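The weighting step at the heart of BMA can be sketched as follows. This is a hedged illustration only: the severity data are synthetic (not ENSAD), only two candidate distributions are considered, and BIC weights stand in for posterior model probabilities.

```python
# Hedged sketch of the weighting step (synthetic severities, not ENSAD): fit two
# candidate severity models, approximate posterior model probabilities with BIC
# weights, and average a tail-risk indicator across the models.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
fatalities = rng.lognormal(mean=2.0, sigma=0.8, size=300)   # synthetic severity data

candidates = {"lognormal": stats.lognorm, "gamma": stats.gamma}
bics, exceed_50 = {}, {}
for name, dist in candidates.items():
    params = dist.fit(fatalities, floc=0)                   # ML fit, location fixed at 0
    loglik = dist.logpdf(fatalities, *params).sum()
    k = len(params) - 1                                     # free parameters (loc is fixed)
    bics[name] = k * np.log(len(fatalities)) - 2 * loglik
    exceed_50[name] = dist.sf(50, *params)                  # P(severity > 50 fatalities)

best = min(bics.values())
raw = {m: np.exp(-0.5 * (b - best)) for m, b in bics.items()}   # subtract best BIC to avoid underflow
weights = {m: r / sum(raw.values()) for m, r in raw.items()}
bma_exceed = sum(weights[m] * exceed_50[m] for m in candidates)
print("approximate posterior model weights:", {m: round(w, 3) for m, w in weights.items()})
print("model-averaged P(severity > 50):", round(float(bma_exceed), 5))
```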


2019 ◽  
Vol 64 (5) ◽  
pp. 48-73
Author(s):  
Fryderyk Mirota

Empirical research shows considerable diversity in estimates of the speed of adjustment (SOA) of corporate cash holdings. It is possible that some of these results are affected by publication selection bias: articles whose results are clearly in line with economic theory may be preferred by authors and reviewers and, consequently, may be published more frequently. The aim of this article is to verify whether there is publication selection bias in studies of corporate cash holdings adjustments and to investigate the sources of heterogeneity in cash holdings SOA estimates. The statistical method used is meta-analysis, which combines results from independent studies, makes it possible to test for publication selection bias, and helps explain the heterogeneity of the published results. The study was based on data collected through a review of the literature published between 2003 and 2017. On the basis of 402 estimates from 58 different studies, it was shown that publication selection bias does not occur. Bayesian Model Averaging was used for modelling. Characteristics of the data set used in each study, the model specification and the estimation method were found to significantly affect the heterogeneity of corporate cash holdings SOA estimates. This diversity is determined, among other things, by the choice of estimation method, the length of the period covered by the analysis and the characteristics of the market environment of the entities concerned.
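A common way to test for publication selection bias in a meta-analysis of this kind is a funnel-asymmetry / precision-effect (FAT-PET) regression of the reported estimates on their standard errors. The sketch below uses synthetic SOA estimates and is not the author's exact specification.

```python
# Synthetic FAT-PET sketch (not the author's specification or the 402 collected
# estimates): regress reported SOA estimates on their standard errors; a
# significant slope on the standard error is the usual symptom of selective reporting.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
true_soa = 0.35
se = rng.uniform(0.02, 0.20, size=120)          # reported standard errors
estimates = true_soa + rng.normal(0.0, se)      # no selection bias built in here

X = sm.add_constant(se)                         # columns: [1, SE_i]
fat_pet = sm.WLS(estimates, X, weights=1.0 / se**2).fit()
print(fat_pet.summary().tables[1])
# 'const' ~ bias-corrected effect (precision-effect test);
# 'x1'    ~ funnel-asymmetry term, insignificant here because no bias was simulated.
```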


Author(s):  
G.J. Pierce ◽  
M.B. Santos ◽  
S. Cerviño

We use a bootstrap simulation framework to evaluate the relative importance of different sources of random and systematic error when estimating diet or food consumption of cetaceans, using a data set on harbour porpoise diet in Scottish (UK) waters from 1992–2003 (N=180) as a model. We also evaluate the consequences of applying explicit weightings to individual samples and/or sub-sets (‘strata’) of samples. In terms of the precision of estimates of diet composition, sampling error was the most important source of error, to the extent that overall 95% confidence limits changed only very slightly when sub-sampling error and regression errors were taken into account. On the other hand, for estimates of total food consumption by the porpoise population in Scottish waters, uncertainties about population size and energetic requirements were more important than uncertainty about diet composition. In relation to the accuracy of estimates of diet composition, the study also highlighted the importance of selecting regressions appropriate to prey in the study area (as opposed to ones constructed for the same prey species in another area) and demonstrated that applying equal weighting to individual samples or sample strata can substantially alter the resulting picture of diet. Therefore, the rationale for applying such weightings needs to be carefully considered.
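The bootstrap logic can be sketched as follows with synthetic stomach-content data; only the sample size mirrors the study, while prey weights and the size of the regression error are assumed for illustration.

```python
# Bootstrap sketch with synthetic stomach contents (sample size mirrors the study's
# N = 180, but prey weights and the regression-error size are assumed): resampling
# stomachs captures sampling error; an optional multiplicative perturbation stands in
# for the prey-size regression error, so the two confidence intervals can be compared.
import numpy as np

rng = np.random.default_rng(3)
n_stomachs, n_prey = 180, 4
# synthetic reconstructed prey weights (g) per stomach; columns = prey species
weights = rng.gamma(shape=2.0, scale=[30.0, 15.0, 10.0, 5.0], size=(n_stomachs, n_prey))

def diet_ci(add_regression_error, n_boot=2000, cv_reg=0.15):
    """Percentile CI for prey species 0's share of total diet weight."""
    shares = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, n_stomachs, n_stomachs)       # resample stomachs
        w = weights[idx]
        if add_regression_error:                            # otolith-to-weight regression error
            w = w * rng.lognormal(0.0, cv_reg, size=w.shape)
        shares[b] = w[:, 0].sum() / w.sum()
    return np.percentile(shares, [2.5, 97.5])

print("95% CI, sampling error only:        ", diet_ci(False).round(3))
print("95% CI, sampling + regression error:", diet_ci(True).round(3))
```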


2019 ◽  
Vol 10 (1) ◽  
pp. 72-84 ◽  
Author(s):  
Sydney Chikalipah

Purpose: The purpose of this paper is to examine the causal relationship between copper price dynamics and economic growth in Zambia over the period from 1995 to 2015. Design/methodology/approach: The study uses a data set assembled from five different sources: the Heritage Foundation; the London Metal Exchange index; the Penn World Tables version 9.0; the Total Economy Database; and the World Bank Development Indicators. The paper employs the Bayesian Model Averaging (BMA) approach as the estimation technique. Findings: The estimates demonstrate that there exists a positive and significant relationship between movements in copper prices and economic growth in Zambia. The study draws policy implications from these findings. Research limitations/implications: This study is limited to the period from 1995 to 2015, due to the lack of data on the country's institutional indicators, trade openness and the real exchange rate. Practical implications: There have been calls to diversify the economy of Zambia because of recurring chaotic events, which are often induced by over-dependence on copper exports. The study findings will therefore be useful to academia, policy makers and stakeholders with a vested interest in the economy of Zambia. Originality/value: To the best of the author's knowledge, this is the first empirical study to investigate the causal relationship between copper prices and economic growth in Zambia. Existing empirical studies in this domain have devoted their attention to establishing the relationship between commodity price movements and exchange rates in Zambia.
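A stripped-down version of such a BMA exercise can be sketched in Python. The series and regressor names below are synthetic stand-ins, not the data set assembled in the paper, and BIC weights approximate the posterior model probabilities.

```python
# Stripped-down BMA sketch (synthetic annual series, not the assembled 1995-2015
# data; regressor names are illustrative): enumerate candidate growth regressions,
# approximate posterior model probabilities with BIC weights, and read off the
# posterior inclusion probability (PIP) of the copper-price regressor.
from itertools import combinations
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
n = 21                                                   # 21 annual observations
copper, openness, investment = rng.normal(size=(3, n))
growth = 0.6 * copper + 0.3 * investment + rng.normal(scale=0.5, size=n)

regressors = {"copper": copper, "openness": openness, "investment": investment}
models, bics = [], []
for k in range(len(regressors) + 1):
    for subset in combinations(regressors, k):
        if subset:
            X = sm.add_constant(np.column_stack([regressors[v] for v in subset]))
        else:
            X = np.ones((n, 1))                          # intercept-only model
        models.append(subset)
        bics.append(sm.OLS(growth, X).fit().bic)

w = np.exp(-0.5 * (np.array(bics) - min(bics)))
w /= w.sum()                                             # approximate posterior model probabilities
pip_copper = sum(wi for wi, m in zip(w, models) if "copper" in m)
print(f"posterior inclusion probability of the copper price: {pip_copper:.2f}")
```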


2016 ◽  
Vol 47 (1) ◽  
pp. 153-167 ◽  
Author(s):  
Shujuan Huang ◽  
Brian Hartman ◽  
Vytaras Brazauskas

Episode Treatment Groups (ETGs) classify related services into medically relevant and distinct units describing an episode of care. Proper model selection for these ETG-based costs is essential to adequately price and manage health insurance risks. The optimal claim cost model (or model probabilities) can vary depending on the disease. We compare four potential models (lognormal, gamma, log-skew-t and Lomax) using four different model selection methods (AIC and BIC weights, Random Forest feature classification and Bayesian model averaging) on 320 ETGs. Using data from a major health insurer, consisting of more than 33 million observations from 9 million claimants, we compare the various methods on both speed and precision, and also examine the wide range of models selected for the different ETGs. Several case studies are provided for illustration. Random Forest feature selection is found to be computationally efficient and sufficiently accurate, and is therefore preferred for this large data set. When feasible (on smaller data sets), Bayesian model averaging is preferred because it yields posterior model probabilities.
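One step of such a per-ETG comparison can be sketched as follows. The claim costs are synthetic, the log-skew-t candidate is omitted because it is not available in scipy.stats, and AIC/BIC weights stand in for two of the four selection methods named above.

```python
# One step of the per-ETG comparison, sketched with synthetic claim costs; the
# log-skew-t candidate is omitted because it is not available in scipy.stats, and
# AIC/BIC weights stand in for two of the four selection methods.
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
costs = rng.lognormal(mean=7.0, sigma=1.1, size=5_000)      # synthetic ETG claim costs

candidates = {"lognormal": stats.lognorm, "gamma": stats.gamma, "Lomax": stats.lomax}
aic, bic = {}, {}
for name, dist in candidates.items():
    params = dist.fit(costs, floc=0)                        # ML fit, location fixed at 0
    ll = dist.logpdf(costs, *params).sum()
    k = len(params) - 1
    aic[name] = 2 * k - 2 * ll
    bic[name] = k * np.log(len(costs)) - 2 * ll

def to_weights(scores):
    z = np.exp(-0.5 * (np.array(list(scores.values())) - min(scores.values())))
    return dict(zip(scores, (z / z.sum()).round(3)))

print("AIC weights:", to_weights(aic))      # in practice, repeat once per ETG (320 times)
print("BIC weights:", to_weights(bic))
```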


Author(s):  
Don van den Bergh ◽  
Merlise A. Clyde ◽  
Akash R. Komarlu Narendra Gupta ◽  
Tim de Jong ◽  
Quentin F. Gronau ◽  
...  

Linear regression analyses commonly involve two consecutive stages of statistical inquiry. In the first stage, a single ‘best’ model is defined by a specific selection of relevant predictors; in the second stage, the regression coefficients of the winning model are used for prediction and for inference concerning the importance of the predictors. However, such second-stage inference ignores the model uncertainty from the first stage, resulting in overconfident parameter estimates that generalize poorly. These drawbacks can be overcome by model averaging, a technique that retains all models for inference, weighting each model’s contribution by its posterior probability. Although conceptually straightforward, model averaging is rarely used in applied research, possibly due to the lack of easily accessible software. To bridge the gap between theory and practice, we provide a tutorial on linear regression using Bayesian model averaging in JASP, based on the BAS package in R. Firstly, we provide theoretical background on linear regression, Bayesian inference, and Bayesian model averaging. Secondly, we demonstrate the method on an example data set from the World Happiness Report. Lastly, we discuss limitations of model averaging and directions for dealing with violations of model assumptions.
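The tutorial itself works in JASP with the BAS package in R; the Python sketch below (synthetic data, BIC weights as approximate posterior model probabilities) only illustrates its central point: inference conditioned on a single selected model ignores between-model uncertainty, whereas model averaging folds it in.

```python
# Python sketch of the tutorial's central point (the tutorial itself uses JASP and
# the BAS package in R; data here are synthetic and BIC weights approximate the
# posterior model probabilities): a single selected model ignores between-model
# uncertainty, whereas model averaging folds it into the coefficient estimate.
from itertools import combinations
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(6)
n = 60
x1, x2, x3 = rng.normal(size=(3, n))
y = 0.4 * x1 + 0.25 * x2 + rng.normal(scale=1.0, size=n)

predictors = {"x1": x1, "x2": x2, "x3": x3}
fits = []
for k in range(len(predictors) + 1):
    for subset in combinations(predictors, k):
        if subset:
            X = sm.add_constant(np.column_stack([predictors[v] for v in subset]))
        else:
            X = np.ones((n, 1))
        res = sm.OLS(y, X).fit()
        beta = dict(zip(("const",) + subset, res.params))
        fits.append((subset, res.bic, beta.get("x1", 0.0)))   # 0.0 if x1 is excluded

bics = np.array([f[1] for f in fits])
w = np.exp(-0.5 * (bics - bics.min()))
w /= w.sum()
b1 = np.array([f[2] for f in fits])
bma_mean = float((w * b1).sum())
bma_sd = float(np.sqrt((w * (b1 - bma_mean) ** 2).sum()))     # between-model spread only
subset_best, _, beta_best = fits[int(bics.argmin())]
print(f"best single model {subset_best}: beta_x1 = {beta_best:.3f}")
print(f"model-averaged beta_x1 = {bma_mean:.3f} (between-model SD {bma_sd:.3f})")
```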


2019 ◽  
Vol 11 (15) ◽  
pp. 1776 ◽  
Author(s):  
Weiyu Zhang ◽  
Xiaotong Zhang ◽  
Wenhong Li ◽  
Ning Hou ◽  
Yu Wei ◽  
...  

Surface incident shortwave radiation (SSR) is crucial for understanding the Earth's changing climate. Simulations from general circulation models (GCMs) are one of the most practical ways to produce long-term global SSR products. Although previous studies have comprehensively assessed the performance of GCMs in simulating SSR globally or regionally, studies assessing the performance of these models over high-latitude areas are sparse. This study evaluated and intercompared the SSR simulations of 48 GCMs participating in the fifth phase of the Coupled Model Intercomparison Project (CMIP5) using quality-controlled SSR surface measurements at 44 radiation sites from three observation networks (GC-NET, BSRN, and GEBA) and the SSR retrievals from the Clouds and the Earth's Radiant Energy System, Energy Balanced and Filled (CERES EBAF) data set over high-latitude areas from 2000 to 2005. Furthermore, this study evaluated the performance of the SSR estimations of two multimodel ensemble methods, i.e., the simple model averaging (SMA) and the Bayesian model averaging (BMA) methods. The seasonal performance of the SSR estimations of individual GCMs, the SMA method, and the BMA method was also intercompared. The evaluation results indicated large deficiencies in the performance of the individual GCMs in simulating SSR, and these GCM SSR simulations did not show a tendency to overestimate the SSR over high-latitude areas. Moreover, the ensemble SSR estimations generated by the SMA and BMA methods were superior to all individual GCM SSR simulations over high-latitude areas, and the BMA-based estimations performed best overall. Compared to the CERES EBAF SSR retrievals, the uncertainties of the SSR estimations of the GCMs, the SMA method, and the BMA method were relatively large during summer.
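The difference between the two ensemble schemes can be sketched with synthetic "model" series and observations (not CMIP5 output or CERES EBAF): SMA weights all members equally, while a crude BMA-style scheme weights each member by its likelihood against the observations; a full BMA implementation would also include member-specific bias correction.

```python
# Synthetic sketch of the two ensemble schemes (pseudo 'GCM' series and pseudo
# observations, not CMIP5 or CERES EBAF): the simple model average (SMA) weights
# members equally; a crude BMA-style scheme weights each member by its Gaussian
# likelihood against the observations. A full BMA would also correct member biases.
import numpy as np

rng = np.random.default_rng(7)
n_time = 500
obs = 180 + 60 * np.sin(np.linspace(0, 12 * np.pi, n_time))       # pseudo SSR, W/m^2
biases = [25.0, -10.0, 5.0, 40.0]                                 # assumed member biases
members = np.stack([obs + b + rng.normal(0, 15, n_time) for b in biases])

def rmse(pred):
    return float(np.sqrt(np.mean((pred - obs) ** 2)))

sma = members.mean(axis=0)                                        # simple model average

# BMA-style weights: per-member Gaussian log-likelihood with sigma = member RMSE
loglik = np.array([
    np.sum(-0.5 * np.log(2 * np.pi * rmse(m) ** 2) - 0.5 * (m - obs) ** 2 / rmse(m) ** 2)
    for m in members
])
w = np.exp(loglik - loglik.max())
w /= w.sum()
bma = (w[:, None] * members).sum(axis=0)

print("member RMSEs :", [round(rmse(m), 1) for m in members])
print("BMA weights  :", w.round(3))
print("RMSE SMA:", round(rmse(sma), 1), "| RMSE BMA:", round(rmse(bma), 1))
```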

