Uncertainties Brought by Weight Assignment in Ecosystem Health Modelling

2018 ◽  
Vol 10 (1) ◽  
pp. 37
Author(s):  
Chuangye Song

Weight assignment is the most important step in ecosystem health modelling, yet few studies have tested the uncertainties that weighting methods introduce. In this research, aiming to test the rationality of objective weighting methods and the uncertainties they bring, we compared four such methods (entropy, variation coefficient, mean square error, and CRITIC). We found that (1) the weights assigned by the different objective methods differ considerably; (2) varying the sample size does not exert a significant influence on weight assignment, although the weight of an indicator tends to increase or decrease as the sample size grows; and (3) the weights assigned by these four objective methods fail to reflect the actual relative importance of the indicators. We therefore advise against using an objective weighting method as the sole approach to assigning indicator weights in ecosystem health modelling.
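As an illustration of how one of the compared methods works, here is a minimal Python sketch of the entropy weighting method; the indicator matrix is made up, and the other three methods follow the same normalize-and-score pattern:

```python
import numpy as np

def entropy_weights(X):
    """Entropy weighting: indicators whose values are spread more
    unevenly across samples carry more information and get larger
    weights. X: (n_samples, n_indicators), non-negative values."""
    n = X.shape[0]
    P = X / X.sum(axis=0)                    # column-wise normalization
    logP = np.where(P > 0, np.log(P), 0.0)   # define 0 * log(0) = 0
    e = -(P * logP).sum(axis=0) / np.log(n)  # entropy scaled to [0, 1]
    d = 1.0 - e                              # divergence of each indicator
    return d / d.sum()

# Hypothetical data: 3 samples x 3 indicators.
X = np.array([[0.2, 5.0, 1.0],
              [0.4, 5.1, 9.0],
              [0.6, 5.2, 3.0]])
w = entropy_weights(X)
print(w.round(3))  # the nearly constant second indicator gets almost no weight
```

Note how strongly the result depends on the spread of the data rather than on any expert judgment of importance, which is exactly the source of the uncertainty the abstract describes.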

Integers ◽  
2009 ◽  
Vol 9 (2) ◽  
Author(s):  
Paul Shaman

Abstract The Levinson–Durbin recursion is used to construct the coefficients which define the minimum mean square error predictor of a new observation for a discrete time, second-order stationary stochastic process. As the sample size varies, the coefficients determine what is called a Levinson–Durbin sequence. A generalized Levinson–Durbin sequence is also defined, and we note that binomial coefficients constitute a special case of such a sequence. Generalized Levinson–Durbin sequences obey formulas which generalize relations satisfied by binomial coefficients. Some of these results are extended to vector stationary processes.
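A minimal Python sketch of the recursion in its standard textbook form; the autocovariances below are those of a hypothetical AR(1) process with coefficient 0.5:

```python
import numpy as np

def levinson_durbin(r, order):
    """Levinson-Durbin recursion: from autocovariances r[0..order], build
    the coefficients phi of the minimum mean square error one-step
    predictor x_hat[t] = sum_k phi[k] * x[t-k] of a stationary process.
    Returns the final-order coefficients and the prediction error variance."""
    phi = np.zeros(order + 1)
    phi[1] = r[1] / r[0]
    v = r[0] * (1.0 - phi[1] ** 2)              # error variance at order 1
    for m in range(2, order + 1):
        # reflection (partial autocorrelation) coefficient at order m
        k = (r[m] - phi[1:m] @ r[1:m][::-1]) / v
        phi[1:m] -= k * phi[1:m][::-1].copy()   # update lower-order coefficients
        phi[m] = k
        v *= 1.0 - k ** 2
    return phi[1:], v

# Autocovariances of a hypothetical AR(1) process with coefficient 0.5.
r = np.array([1.0, 0.5, 0.25, 0.125])
coef, var = levinson_durbin(r, 3)
print(coef, var)  # recovers the AR(1) predictor [0.5, 0, 0], error variance 0.75
```

Running the recursion at successive orders, as the sample size (and hence the usable order) grows, produces the coefficient sequences the abstract refers to.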


2015 ◽  
Vol 20 (2) ◽  
pp. 122-127 ◽  
Author(s):  
M.S. Panwar ◽  
Bapat Akanshya Sudhir ◽  
Rashmi Bundel ◽  
Sanjeev K. Tomer

This paper derives maximum likelihood estimators (MLEs) for the parameters of the inverse Rayleigh distribution (IRD) when the observed data are masked. MLEs, asymptotic confidence intervals (ACIs) and boot-p confidence intervals (boot-p CIs) for the lifetime parameters are discussed. The simulations illustrate that the estimates approach the true values and the mean square error decreases as the sample size increases, while the mean square error increases with the level of masking; the ACIs are always symmetric, and the boot-p CIs approach symmetry as the sample size increases. For the real data set, the mean lifetime under local spread of the disease is less than that under metastatic spread. Journal of Institute of Science and Technology, 2015, 20(2): 122-127
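To illustrate the simulation finding that the MSE of the MLE shrinks with sample size, here is a minimal Python sketch for complete (unmasked) data under one common IRD parameterization, F(x) = exp(-lam / x^2); the masking and bootstrap machinery of the paper is omitted:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_ird(lam, n):
    """Inversion sampling from the inverse Rayleigh CDF
    F(x) = exp(-lam / x**2):  x = sqrt(-lam / log(u))."""
    u = rng.uniform(size=n)
    return np.sqrt(-lam / np.log(u))

def mle_ird(x):
    """Closed-form MLE under this parameterization:
    lam_hat = n / sum(1 / x_i**2)."""
    return len(x) / np.sum(1.0 / x**2)

# Monte Carlo check: the MSE of the MLE shrinks as n grows.
lam_true = 2.0
for n in (20, 100, 500):
    est = np.array([mle_ird(sample_ird(lam_true, n)) for _ in range(2000)])
    print(n, round(np.mean((est - lam_true) ** 2), 4))
```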


2021 ◽  
Vol 12 (2) ◽  
pp. 267-269
Author(s):  
Naseem Asghar ◽  
Umair Khalil ◽  
Dost Muhammad Khan ◽  
Zardad Khan ◽  
Iftikhar Ud Din

This study aims to describe a sample size determination procedure in survival analysis using a real-world example. The method uses simulation for sample size and precision calculations with censored data, carrying out the estimates and precision calculations over a range of sample sizes. The Kaplan-Meier (K-M) estimator is chosen as the point estimator, and the precision measurement focuses on the mean square error, standard error, and confidence limits. Information obtained on the recovery time, in days, of patients from the population is compared with results taken from the sample group. Results showed a cutoff sample size of 675 on the basis of the mean square error, standard error and confidence limits.
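The K-M point estimator used in the study can be sketched in a few lines of Python (toy data; the paper's simulation repeats such estimates over many sample sizes):

```python
import numpy as np

def kaplan_meier(time, event):
    """Kaplan-Meier product-limit estimate of the survival function.
    time:  observed times (event or censoring);
    event: 1 if the event occurred, 0 if right-censored."""
    time = np.asarray(time, dtype=float)
    event = np.asarray(event, dtype=int)
    event_times = np.unique(time[event == 1])
    s, surv = 1.0, []
    for t in event_times:
        n_at_risk = np.sum(time >= t)           # still under observation at t
        d = np.sum((time == t) & (event == 1))  # events at this time
        s *= 1.0 - d / n_at_risk
        surv.append(s)
    return event_times, np.array(surv)

# Toy data: 5 patients, one censored at t = 3.
t, s = kaplan_meier([2, 3, 3, 5, 8], [1, 1, 0, 1, 1])
print(t)  # [2. 3. 5. 8.]
print(s)  # survival drops to 0.8, 0.6, 0.3, 0.0 at those times
```

Repeating this on subsamples of increasing size and tracking the mean square error, standard error and confidence limits of the estimates is how the cutoff sample size is located.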


2020 ◽  
Vol 12 (9) ◽  
pp. 1394 ◽  
Author(s):  
Sibila A. Genchi ◽  
Alejandro J. Vitale ◽  
Gerardo M. E. Perillo ◽  
Carina Seitz ◽  
Claudio A. Delrieux

Detailed knowledge of nearshore topography and bathymetry is required for a wide variety of purposes, including ecosystem protection, coastal management, and flood and erosion monitoring and research, among others. Topography and bathymetry are usually studied separately; however, many scientific questions and challenges require an integrated approach. LiDAR technology is often the preferred data source for the generation of topobathymetric models, but because of its high cost, it is necessary to exploit other data sources. In this regard, the main goal of this study was to present a methodological proposal to generate a topobathymetric model, using low-cost unmanned platforms (unmanned aerial vehicle and unmanned surface vessel) in a very shallow/shallow and turbid tidal environment (Bahía Blanca estuary, Argentina). Moreover, a cross-analysis of the topobathymetric and the tide level data was conducted, to provide a classification of hydrogeomorphic zones. As a main result, a continuous terrain model was built, with a spatial resolution of approximately 0.08 m (topography) and 0.50 m (bathymetry). Concerning the structure-from-motion-derived topography, the vertical accuracy gave a root mean square error of 0.09 m. The best interpolated bathymetry (inverse distance weighting method), which was aligned to the topography (as reference), showed a root mean square error of 0.18 m (on average) and a mean absolute error of 0.05 m. The final topobathymetric model showed an adequate representation of the terrain, making it well suited for examining many landforms. This study helps to confirm the potential for remote sensing of shallow tidal environments by demonstrating how data source heterogeneity can be exploited.
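The best-performing interpolator here, inverse distance weighting, is simple to sketch in Python (the sounding coordinates and depths below are made up):

```python
import numpy as np

def idw(xy_known, z_known, xy_query, power=2.0):
    """Inverse distance weighting: each query point is a weighted
    average of the known depths, with weights 1 / distance**power."""
    d = np.linalg.norm(xy_query[:, None, :] - xy_known[None, :, :], axis=2)
    w = 1.0 / np.maximum(d, 1e-12) ** power     # avoid division by zero
    z = (w * z_known).sum(axis=1) / w.sum(axis=1)
    hits = d.min(axis=1) < 1e-12                # exact hits copy the known value
    z[hits] = z_known[d.argmin(axis=1)[hits]]
    return z

# Hypothetical soundings (x, y) and depths in metres.
pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
depth = np.array([-1.0, -2.0, -3.0])
print(idw(pts, depth, np.array([[0.5, 0.5]])))  # [-2.]: all three points equidistant
```

Interpolating the surveyed depths onto a regular grid this way, then aligning that grid to the SfM topography, is the general shape of the topobathymetric merge described above.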


2021 ◽  
Vol 2 (1) ◽  
Author(s):  
Yesi Santika ◽  
Widiarti Widiarti ◽  
Fitriani Fitriani ◽  
Mustofa Usman ◽  
...  

Small area estimation is a statistical technique for estimating the parameters of a subpopulation with a small sample size. One method of estimating small area parameters is the Empirical Bayes (EB) method. The accuracy of the EB estimator can be measured by evaluating the Mean Squared Error (MSE). In this study, three methods of determining the MSE of the EB estimator under the Beta-Bernoulli model are compared, namely the Bootstrap, Jackknife Jiang and area-specific Jackknife methods. The study is carried out theoretically and empirically through simulation with R-studio software version 1.2.5033. Simulations over various numbers of areas and pairs of Beta prior parameter values show the effect of the sample size and the parameter values on the MSE: the larger the number of areas and the smaller the initial 𝛽, the smaller the MSE. The area-specific Jackknife method produces the smallest MSE with 100 areas and a Beta parameter value of 0.1.
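A minimal Python sketch of one of the compared approaches, the parametric bootstrap MSE of the EB estimator under the Beta-Bernoulli model; the prior is fitted by a crude method of moments here as a stand-in for the estimation used in the paper, and the area counts and sizes are made up:

```python
import numpy as np

rng = np.random.default_rng(1)

def eb_estimates(y, n, a, b):
    """EB posterior means under a Beta(a, b)-Bernoulli model: each area
    proportion y_i/n_i is shrunk toward the prior mean a/(a+b)."""
    return (y + a) / (n + a + b)

def fit_beta_moments(y, n):
    """Crude moment-based fit of the Beta prior from the area
    proportions (a stand-in for the estimation used in the paper)."""
    p = y / n
    m, v = p.mean(), max(p.var(ddof=1), 1e-6)
    common = m * (1.0 - m) / v - 1.0
    return max(m * common, 0.01), max((1.0 - m) * common, 0.01)

def bootstrap_mse(y, n, B=500):
    """Parametric bootstrap estimate of the MSE of the EB estimator."""
    a, b = fit_beta_moments(y, n)
    sq = np.zeros(len(n))
    for _ in range(B):
        p_star = rng.beta(a, b, size=len(n))   # bootstrap "true" proportions
        y_star = rng.binomial(n, p_star)       # bootstrap data
        a_s, b_s = fit_beta_moments(y_star, n)
        sq += (eb_estimates(y_star, n, a_s, b_s) - p_star) ** 2
    return sq / B

# Hypothetical survey: 30 areas with 10 Bernoulli trials each.
n = np.full(30, 10)
y = rng.binomial(n, rng.beta(2.0, 5.0, size=30))
print(bootstrap_mse(y, n).round(4))
```

The jackknife variants replace the bootstrap loop with leave-one-area-out refits of the prior; the area-specific version additionally conditions the correction on each area's own data.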


2020 ◽  
Author(s):  
Tiago Luciano Passafaro ◽  
Fernando B. Lopes ◽  
João R. R. Dórea ◽  
Mark Craven ◽  
Vivian Breen ◽  
...  

Abstract Background: Deep neural networks (DNN) are a particular case of artificial neural networks (ANN) composed of multiple hidden layers, and have recently gained attention in genome-enabled prediction of complex traits. Yet, few studies in genome-enabled prediction have assessed the performance of DNN compared to traditional regression models. Strikingly, no clear superiority of DNN has been reported so far, and results seem highly dependent on the species and traits of application. Nevertheless, the relatively small datasets used in previous studies, most with fewer than 5,000 observations, may have precluded the full potential of DNN. Therefore, the objective of this study was to investigate the impact of dataset sample size on the performance of DNN compared to Bayesian regression models for genome-enabled prediction of body weight in broilers by sub-sampling 63,526 observations of the training set. Results: Predictive performance of DNN improved as sample size increased, reaching a plateau at a prediction correlation of about 0.32 when 60% of the entire training set was used (i.e., 39,510 observations). Interestingly, DNN showed superior prediction correlation when using up to 3% of the training set, but poorer prediction correlation thereafter, compared to Bayesian Ridge Regression (BRR) and Bayes Cπ. Regardless of the amount of data used to train the predictive machines, DNN displayed the lowest mean square error of prediction of all the approaches. The predictive bias was lower for DNN than for the Bayesian models regardless of the amount of data used, with estimates close to one at the larger sample sizes. Conclusions: DNN had worse prediction correlation than BRR and Bayes Cπ, but better mean square error of prediction and bias relative to both Bayesian models for genome-enabled prediction of body weight in broilers. These findings highlight the advantages and disadvantages of the predictive approaches depending on the criterion used for comparison. Nonetheless, further analysis is necessary to detect scenarios where DNN can clearly outperform the Bayesian benchmark models.
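The three comparison criteria (prediction correlation, mean square error of prediction, and bias, taken as the slope of the regression of observed on predicted values, with 1 meaning unbiased) can be sketched in Python on simulated data:

```python
import numpy as np

def prediction_metrics(y_obs, y_pred):
    """Prediction correlation, mean square error of prediction, and bias
    (slope of the regression of observed on predicted; 1 = unbiased)."""
    corr = np.corrcoef(y_obs, y_pred)[0, 1]
    mse = np.mean((y_obs - y_pred) ** 2)
    slope = np.cov(y_obs, y_pred)[0, 1] / np.var(y_pred, ddof=1)
    return corr, mse, slope

# Simulated phenotypes and predictions (made-up numbers for illustration).
rng = np.random.default_rng(2)
y = rng.normal(size=200)
pred = 0.8 * y + rng.normal(scale=0.5, size=200)
print(prediction_metrics(y, pred))
```

As the abstract illustrates, these criteria can rank predictive machines differently: a model can have the best MSE and bias yet the worst correlation.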


2013 ◽  
Vol 10 (1) ◽  
pp. 85-96
Author(s):  
Baghdad Science Journal

This paper deals with estimation of the reliability function and one shape parameter of the two-parameter Burr XII distribution, when the other shape parameter is known (taking the values 0.5, 1 and 1.5) and the initial value of the estimated parameter is 1, for sample sizes n = 10, 20, 30 and 50. The results depend on an empirical study through simulation experiments, applied to compare four methods of estimation as well as to compute the reliability function. The mean square error results indicate that the jackknife estimator is better than the other three estimators for all sample sizes and parameter values.
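Assuming the common Burr XII form F(x) = 1 - (1 + x^c)^(-k) with the shape c known, the MLE of k has a closed form, and the jackknife estimator the abstract favours is a bias-corrected version of it; a minimal Python sketch:

```python
import numpy as np

rng = np.random.default_rng(3)

def sample_burr12(c, k, n):
    """Inversion sampling from F(x) = 1 - (1 + x**c)**(-k)."""
    u = rng.uniform(size=n)
    return ((1.0 - u) ** (-1.0 / k) - 1.0) ** (1.0 / c)

def mle_k(x, c):
    """MLE of the shape k with the other shape c known:
    k_hat = n / sum(log(1 + x_i**c))."""
    return len(x) / np.sum(np.log1p(x**c))

def jackknife_k(x, c):
    """Jackknife bias-corrected version of the MLE."""
    n = len(x)
    loo = np.array([mle_k(np.delete(x, i), c) for i in range(n)])
    return n * mle_k(x, c) - (n - 1) * loo.mean()

def reliability(t, c, k):
    """Burr XII reliability function R(t) = (1 + t**c)**(-k)."""
    return (1.0 + t**c) ** (-k)

x = sample_burr12(c=0.5, k=1.0, n=50)
k_hat = jackknife_k(x, c=0.5)
print(round(k_hat, 3), round(reliability(1.0, 0.5, k_hat), 3))
```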


2014 ◽  
Vol 23 (1) ◽  
pp. 61-68 ◽  
Author(s):  
Noorul Hassan Zardari ◽  
Irena Binti Naubi ◽  
Nur Asikin Binti Roslan ◽  
Sharif Moniruzzaman Shirazi

Abstract Listing of watershed management goals/targets is one of the integral parts of a watershed management plan. In this paper, we list 18 watershed management targets for which the Malaysian watersheds could possibly be managed in future. Based on the listed targets, a priority ranking of the 18 targets is developed from the relative importance weights obtained from a survey of 29 stakeholders. Three weighting methods (SWING, SMART, and SMARTER) were applied to elicit the weights. We found that the SMART (Simple Multi-Attribute Rating Technique) weighting method was a favorable method for eliciting stable sets of weights for the watershed management targets, and that the SWING weighting method produces better weights than the SMARTER method. The listed targets will assist watershed managers and decision makers in using available resources (e.g. water quality, land-use, groundwater, and many other resources) in a more efficient and sustainable manner. The efficient utilization of all resources within a watershed will ultimately save watersheds (more specifically the urbanized watersheds) from further deterioration caused by unchecked infrastructure development activities.
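Of the three methods, SMARTER is the simplest to reproduce: it replaces elicited ratings with rank-order centroid weights computed purely from the importance ranking. A minimal Python sketch:

```python
def roc_weights(n):
    """Rank-order centroid weights used by SMARTER: the target ranked
    i-th out of n (1 = most important) gets w_i = (1/n) * sum_{j=i..n} 1/j."""
    return [sum(1.0 / j for j in range(i, n + 1)) / n for i in range(1, n + 1)]

# Four hypothetical targets ranked by importance.
w = roc_weights(4)
print([round(x, 4) for x in w])  # [0.5208, 0.2708, 0.1458, 0.0625]; sums to 1
```

Because SMARTER discards the magnitudes of the stakeholders' preferences and keeps only their order, it tends to produce less stable weights than SMART and SWING, which elicit numeric ratings directly, consistent with the finding above.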

