A flexible point and variance estimator to assess bird/bat fatality from carcass searches

2021 ◽  
Author(s):  
Moritz Mercker

Estimation of bird and bat fatalities due to collision with anthropogenic structures (such as power lines or wind turbines) is an important ecological issue. However, searches for collision victims usually detect only a proportion of the true number of collided individuals. Various mortality estimators have previously been proposed to correct for this incomplete detection, based on regular carcass searches and additional field experiments. However, each estimator relies on specific assumptions and restrictions, which may easily be violated in practice. In this study, we extended previous approaches and developed a versatile algorithm to compute point and variance estimates for true carcass numbers. The presented method allows for maximal flexibility in the data structure. Using simulated data, we showed that our point and variance estimators remained unbiased under various challenging data conditions. The presented method may improve the estimation of true collision numbers, an important precondition for calculating collision rates and evaluating measures to reduce collision risks, and may thus provide a basis for management decisions and/or compensation actions with regard to planned or existing wind turbines and power lines.
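The basic idea behind the detection correction that such mortality estimators build on can be sketched as a Horvitz-Thompson-style ratio: divide the observed count by the overall detection probability. This is a minimal illustration of the principle, not the paper's algorithm; the probability values below are purely illustrative.

```python
# Minimal sketch of detection-corrected carcass counting: the observed
# count is divided by the overall probability that a carcass is both
# still present and found (values here are illustrative, not from the paper).

def corrected_count(observed, p_found, p_persist):
    """Estimate the true carcass number from an observed count.

    p_found:   probability a carcass present at search time is detected
    p_persist: probability a carcass persists (is not scavenged) until a search
    """
    p_detect = p_found * p_persist  # overall detection probability
    return observed / p_detect

est = corrected_count(observed=12, p_found=0.6, p_persist=0.8)
print(round(est, 2))  # 12 / 0.48 = 25.0
```

In practice the detection probabilities are themselves estimated from field experiments (searcher-efficiency and carcass-persistence trials), which is what makes the variance estimation nontrivial.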

2020 ◽  
Author(s):  
Simon L Turner ◽  
Andrew B Forbes ◽  
Amalia Karahalios ◽  
Monica Taljaard ◽  
Joanne E McKenzie

Abstract Interrupted time series (ITS) studies are frequently used to evaluate the effects of population-level interventions or exposures. To our knowledge, no studies have compared the performance of different statistical methods for this design. We simulated data to compare the performance of a set of statistical methods under a range of scenarios, including different level and slope changes, varying lengths of series, and magnitudes of autocorrelation. We also examined the performance of the Durbin-Watson (DW) test for detecting autocorrelation. All methods yielded unbiased estimates of the level and slope changes over all scenarios. The magnitude of autocorrelation was underestimated by all methods; however, restricted maximum likelihood (REML) yielded the least biased estimates. Underestimation of autocorrelation led to standard errors that were too small and coverage below the nominal 95%. All methods performed better with longer time series, except for ordinary least squares (OLS) in the presence of autocorrelation and Newey-West for high values of autocorrelation. The DW test for the presence of autocorrelation performed poorly except for long series with large autocorrelation. Of the methods evaluated, OLS was preferred in series with fewer than 12 points, while in longer series, REML was preferred. The DW test should not be relied upon to detect autocorrelation, except when the series is long. Care is needed when interpreting results from all methods, given that confidence intervals will generally be too narrow. Further research is required to develop better-performing methods for ITS, especially for short series.
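The segmented-regression model underlying ITS analysis (a baseline level and slope, plus a level change and slope change at the interruption) can be sketched with a small simulation and an OLS fit. This is an illustrative toy, not the authors' simulation study; parameter values and series length are arbitrary choices.

```python
# Toy sketch of an interrupted time series fitted by segmented regression:
# y = b0 + b1*t + b2*post + b3*(t - t0)*post + noise,
# where b2 is the level change and b3 the slope change at interruption t0.
# All parameter values below are illustrative, not from the paper.
import numpy as np

rng = np.random.default_rng(0)
n, interruption = 24, 12
t = np.arange(n)
post = (t >= interruption).astype(float)   # indicator for post-interruption period
t_post = post * (t - interruption)         # time elapsed since the interruption

# true parameters: intercept 2, baseline slope 0.5, level change +3, slope change +1
y = 2 + 0.5 * t + 3 * post + 1.0 * t_post + rng.normal(0, 0.5, n)

X = np.column_stack([np.ones(n), t, post, t_post])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(beta.round(2))  # estimates close to [2, 0.5, 3, 1]
```

With autocorrelated noise, the same point estimates remain unbiased (as the abstract reports), but their OLS standard errors become too small, which is why the choice among OLS, Newey-West, and REML matters.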


Geophysics ◽  
1984 ◽  
Vol 49 (10) ◽  
pp. 1774-1780 ◽  
Author(s):  
F. Foster Morrison ◽  
Bruce C. Douglas

A comparison was made between Shepard’s method (inverse‐distance weighting) and collocation (linear filtering) for the purpose of predicting gravity anomalies. Tests were made with actual data from southern California and with simulated data created from buried point masses generated by a random number generator. The autocorrelation functions of the simulated and actual gravity data behaved very much alike. In general, the sophisticated collocation method did produce better results and very good variance estimates, compared with Shepard’s method, for simulated data. The advantage was less for actual data. The cost of the better results is the use of more computer time. The most important scientific conclusion of this study is that careful trend removal must be done and an adequate data sample obtained to produce truly optimal results from collocation. The variance estimates are much more sensitive to the form and calibration of the model autocorrelation function than are the prediction results.
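Shepard's method, the simpler of the two interpolators compared above, predicts each query point as an inverse-distance-weighted mean of the known values. A minimal one-dimensional sketch follows; the power parameter p = 2 is a common default, not a value taken from the study.

```python
# Minimal sketch of Shepard's method (inverse-distance weighting) in 1-D.
# The power parameter p controls how quickly influence decays with distance;
# p = 2 is a conventional default, not a value from the paper.
import numpy as np

def shepard(x_known, y_known, x_query, p=2):
    """Predict values at x_query as inverse-distance-weighted means of y_known."""
    preds = []
    for xq in np.atleast_1d(x_query):
        d = np.abs(x_known - xq)
        if np.any(d == 0):                   # query coincides with a data point
            preds.append(y_known[np.argmin(d)])
            continue
        w = 1.0 / d**p                       # inverse-distance weights
        preds.append(np.sum(w * y_known) / np.sum(w))
    return np.array(preds)

xk = np.array([0.0, 1.0, 2.0])
yk = np.array([0.0, 10.0, 20.0])
print(shepard(xk, yk, 1.0))   # exact hit on a datum -> [10.]
print(shepard(xk, yk, 0.5))   # inverse-distance-weighted mean of neighbours
```

Collocation, by contrast, weights the data through a modeled autocorrelation function rather than raw distances, which is what buys its better variance estimates at higher computational cost.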


2015 ◽  
Vol 11 (1) ◽  
pp. 91-114 ◽  
Author(s):  
J. Subramani ◽  
G. Kumarapandiyan

Abstract In this paper we propose a class of modified ratio-type variance estimators for estimating the population variance of the study variable using known parameters of the auxiliary variable. The bias and mean squared error of the proposed estimators are obtained, and the conditions under which the proposed estimators perform better than the traditional ratio-type variance estimator and existing modified ratio-type variance estimators are derived. Further, we compare the proposed estimators with the traditional ratio-type variance estimator and existing modified ratio-type variance estimators for certain natural populations.
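The traditional ratio-type variance estimator that this class of modified estimators builds on scales the sample variance of the study variable by the ratio of the known population variance of the auxiliary variable to its sample variance (Isaki's estimator). A minimal sketch follows; the data and the assumed population variance are illustrative.

```python
# Sketch of the traditional ratio-type variance estimator:
#   S_y^2_hat = s_y^2 * (S_x^2 / s_x^2),
# where S_x^2 is the known population variance of the auxiliary variable
# and s_y^2, s_x^2 are sample variances. Data below are illustrative.
import numpy as np

def ratio_type_variance(y_sample, x_sample, Sx2_population):
    sy2 = np.var(y_sample, ddof=1)   # sample variance of the study variable
    sx2 = np.var(x_sample, ddof=1)   # sample variance of the auxiliary variable
    return sy2 * (Sx2_population / sx2)

y = np.array([4.0, 7.0, 6.0, 9.0, 5.0])
x = np.array([2.0, 4.0, 3.0, 5.0, 3.0])
print(round(ratio_type_variance(y, x, Sx2_population=1.5), 3))  # -> 4.269
```

The modified estimators in the paper replace the plain variance ratio with expressions involving additional known auxiliary parameters (e.g. its coefficient of variation or quartiles), aiming to reduce mean squared error.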


Author(s):  
Amir R. Nejad ◽  
Jone Torsvik

Abstract This paper presents lessons learned from the authors' own research studies and field experiments with drivetrains on floating wind turbines over the last ten years. Drivetrains on floating support structures are exposed to wave-induced motions in addition to wind loading and motions. This study investigates drivetrain-floater interactions from two viewpoints: how the drivetrain impacts the sub-structure design, and how drivetrain responses and life are affected by the motion of the floater and support structure. The first question is linked to the drivetrain technology and layout, while the second addresses the influence of wave-induced motion. The results for both perspectives are presented and discussed. Notably, it is highlighted that the effect of wave-induced motions may not be as significant as that of wind loading on drivetrain responses, particularly in larger turbines. Given the limited experience with floating wind turbines, however, more research is needed. The main aim of this article is to synthesize and share the authors' research findings on the subject in the period since 2009, the year that the first full-scale floating wind turbine, Hywind Demo, entered operation in Norway.


2017 ◽  
Author(s):  
Hélène Jourdan-Pineau ◽  
Benjamin Pélissié ◽  
Elodie Chapuis ◽  
Floriane Chardonnet ◽  
Christine Pagès ◽  
...  

Abstract Quantitative genetics experiments aim at understanding and predicting the evolution of phenotypic traits. Running such experiments often raises the same questions: Should I bother with maternal effects? Could I estimate those effects? What is the best crossing scheme to obtain reliable estimates? Can I use molecular markers to save time in the complex task of keeping track of the experimental pedigree? We explored these practical issues in the desert locust, Schistocerca gregaria, using morphological and coloration traits known to be influenced by maternal effects. We ran quantitative genetic analyses on an experimental dataset and used simulations to explore (i) the efficiency of animal models in accurately estimating both heritability and maternal effects, (ii) the influence of crossing schemes on the precision of estimates, and (iii) the performance of a marker-based method compared to the pedigree-based method. The simulations indicated that maternal effects deeply affect heritability estimates and that very large datasets are required to properly distinguish and estimate maternal effects and heritabilities. In particular, ignoring maternal effects in the animal model resulted in overestimation of heritabilities and a high rate of false positives, whereas models specifying maternal variance suffered from lack of power. Maternal effects can be estimated more precisely than heritabilities, but with low power. To obtain better estimates, bigger datasets are required and, in the presence of maternal effects, increasing the number of families rather than the number of offspring per family is recommended. Our simulations also showed that, in the desert locust, using relatedness based on available microsatellite markers may allow reasonably reliable estimates while rearing locusts in groups. In light of the simulation results, our experimental dataset suggested that maternal effects affected various phase traits. However, the statistical limitations revealed by the simulation approach did not allow precise variance estimates. We stress that simulations are a useful step in designing a quantitative genetics experiment and interpreting the outputs of the statistical models.


2017 ◽  
Vol 13 (24) ◽  
pp. 448
Author(s):  
Loai M. A. Al-Zou’bi ◽  
Amer I. Al-Omari ◽  
Ahmad M. Al-Khazalah ◽  
Raed A. Alzghool

Multilevel models can be used to account for clustering in data from multi-stage surveys. In some cases, the intra-cluster correlation may be close to zero, so that it may seem reasonable to ignore clustering and fit a single level model. This article proposes several adaptive strategies for allowing for clustering in regression analysis of multi-stage survey data. The approach is based on testing whether the cluster-level variance component is zero. If this hypothesis is retained, then variance estimates are calculated ignoring clustering; otherwise, clustering is reflected in variance estimation. A simple simulation study is used to evaluate the various procedures.
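The adaptive strategy described above can be sketched with a simple two-stage rule: estimate the cluster-level variance component (here via the classical one-way ANOVA estimator), and if it is estimated as zero, compute the variance of the mean ignoring clustering; otherwise use a variance that reflects it. This is a simplified illustration of the general idea, not the article's specific test procedures; the data are illustrative and clusters are assumed balanced.

```python
# Simplified sketch of an adaptive variance estimator for the overall mean
# of balanced clustered data: if the ANOVA estimate of the cluster-level
# variance component is <= 0 (MSB <= MSW), clustering is ignored; otherwise
# the between-cluster mean square drives the variance. Illustrative only.
import numpy as np

def adaptive_mean_variance(data):
    """data: 2-D array, rows = clusters, columns = units within each cluster."""
    k, m = data.shape
    grand = data.mean()
    # between-cluster and within-cluster mean squares
    msb = m * np.sum((data.mean(axis=1) - grand) ** 2) / (k - 1)
    msw = np.sum((data - data.mean(axis=1, keepdims=True)) ** 2) / (k * (m - 1))
    if msb <= msw:                                # variance component estimated as zero
        return np.var(data, ddof=1) / (k * m)     # single-level (i.i.d.) variance
    return msb / (k * m)                          # clustering-aware variance of the mean

clustered = np.array([[1.0, 1.1, 0.9],
                      [5.0, 5.2, 4.8],
                      [9.0, 9.1, 8.9]])
print(adaptive_mean_variance(clustered))          # much larger than the naive variance
```

With strong clustering, the adaptive estimate exceeds the naive single-level variance, which is exactly the situation where ignoring clustering would understate uncertainty.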


Author(s):  
Ying-Ying Zhang ◽  
Teng-Zhong Rong ◽  
Man-Man Li

It is interesting to calculate the variance of the variance estimator of the Bernoulli distribution. Therefore, we compare the Bootstrap and Delta Method variances of the variance estimator of the Bernoulli distribution in this paper. Firstly, we provide the correct Bootstrap, Delta Method, and true variances of the variance estimator of the Bernoulli distribution for three parameter values in Table 2.1. Secondly, we obtain the estimates of the variance of the variance estimator of the Bernoulli distribution by the Delta Method (analytically), the true method (analytically), and the Bootstrap Method (algorithmically). Thirdly, we compare the Bootstrap and Delta Methods in terms of the variance estimates, the errors, and the absolute errors in three figures for 101 parameter values in [0, 1], in order to explain the differences between the Bootstrap and Delta Methods. Finally, we give three examples of Bernoulli trials to illustrate the three methods.
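The two approaches compared above can be sketched for a single sample, assuming the variance estimator in question is p̂(1 − p̂). The Delta Method applies g(p) = p(1 − p), g′(p) = 1 − 2p to the variance of p̂; the Bootstrap resamples the data. Sample size, parameter value, and replication count below are illustrative choices, not those of the paper.

```python
# Sketch comparing Delta Method and Bootstrap variances of the Bernoulli
# variance estimator theta_hat = p_hat * (1 - p_hat).
# Delta method: Var(g(p_hat)) ~ g'(p)^2 * Var(p_hat) = (1-2p)^2 * p(1-p)/n.
# All numerical settings are illustrative.
import numpy as np

rng = np.random.default_rng(1)
n, p = 200, 0.3
x = rng.binomial(1, p, size=n)          # one Bernoulli sample

p_hat = x.mean()
delta_var = (1 - 2 * p_hat) ** 2 * p_hat * (1 - p_hat) / n   # Delta Method

boot = []
for _ in range(2000):                    # nonparametric bootstrap
    xb = rng.choice(x, size=n, replace=True)
    pb = xb.mean()
    boot.append(pb * (1 - pb))
boot_var = np.var(boot, ddof=1)

print(delta_var, boot_var)  # the two estimates should be of similar magnitude
```

Note that the Delta Method degenerates near p = 0.5, where g′(p) = 0; this is one of the regions where the two methods diverge across the 101 parameter values the paper examines.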


2021 ◽  
Author(s):  
Zuzana Rošťáková ◽  
Roman Rosipal

Background and Objective: Parallel factor analysis (PARAFAC) is a powerful tool for detecting latent components in higher-order arrays (tensors). As an essential input parameter, the number of latent components must be set in advance. However, every component number selection method proposed in the literature so far amounts to a rule of thumb. This study demonstrates the advantages and disadvantages of twelve different methods applied to well-controlled simulated data with a nonnegative structure that mimics the character of a real electroencephalogram.
Methods: Existing studies have compared the methods' performance on simulated data with a simplified structure, and it has been shown that the results obtained are not directly generalizable to real data. Using a real head model and cortical activation, our study focuses on nontrivial, nonnegative simulated data that resemble real electroencephalogram properties as closely as possible. Different noise levels and disruptions from the optimal structure are considered. Moreover, we validate a new method for component number selection, which we have already applied successfully to real electroencephalogram tasks. We also demonstrate that the existing approaches must be adapted whenever a nonnegative data structure is assumed.
Results: We identified four methods that produce promising, though not ideal, results on nontrivial simulated data and show superior performance in electroencephalogram analysis practice.
Conclusions: Component number selection in PARAFAC is a complex and unresolved problem. The nonnegative data structure assumption makes the problem more challenging. Although several methods have shown promising results, the issue remains open, and new approaches are needed.


2020 ◽  
Vol 50 (12) ◽  
pp. 1405-1411
Author(s):  
Christoph Fischer ◽  
Joachim Saborowski

Double sampling for stratification (2SS) is a sampling design that is widely used for forest inventories. We present the mathematical derivation of two appropriate variance estimators for mean growth from repeated 2SS with updated stratification on each measurement occasion. Both estimators account for substratification based on the transition of sampling units among the strata due to the updated allocation. For the first estimator, the sizes of the substrata were estimated from the second-phase sample (sample plots), whereas in the second variance estimator the respective sizes relied on the larger first-phase sample. The estimators were empirically compared with a modified version of Cochran's well-known 2SS variance estimator that ignores substratification. This was done by performing bootstrap resampling on data from two German forest districts. The major findings were as follows: (i) accounting for substratification, as implemented in both new estimators, yields significantly smaller variance estimates and bias than the estimator without substratification, and (ii) the second estimator, with substrata sizes estimated from the first-phase sample, shows a smaller bias than the first estimator.

