Modelling the probability of detecting mass mortality events

2021 ◽  
Author(s):  
Jesse L Brunner ◽  
Justin M Calabrese

Abstract
While reports of mass mortality events (MMEs) are increasing in the literature, comparing the incidence of MMEs through time, among locations, or among taxa is problematic without accounting for detection probabilities. MMEs involving small, cryptic species can be difficult to detect even while the event is underway, and degradation and scavenging of carcasses can make the window for detection very short. As such, the number or occurrence rate of MMEs may often be severely underestimated, especially when observations are infrequent. We develop a simple modeling framework to quantify the probability of detecting an MME as a function of the observation frequency relative to the rate at which MMEs become undetectable. This framework facilitates the design of surveillance programs and may be extended to correct estimates of the incidence of MMEs from actual surveillance data, allowing more appropriate analyses of trends through time and among taxa.
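
The core quantity can be sketched with a short calculation. Assuming (our assumption, not necessarily the authors' exact formulation) that the evidence of an MME persists for an exponentially distributed time with rate λ per day, and that surveys occur every Δ days, an event occurring at a uniform random time within a survey interval is detected with probability (1 − e^(−λΔ))/(λΔ):

```python
import numpy as np

def detection_probability(lam, delta):
    """Probability that a randomly timed MME is detected at the next survey.

    Assumes MME evidence (carcasses etc.) persists for an exponentially
    distributed time with rate `lam` (per day) and surveys occur every
    `delta` days. An event at a uniform random time within the interval
    is detected if its evidence persists until the next survey:
        P(detect) = (1 - exp(-lam * delta)) / (lam * delta)
    """
    x = lam * delta
    return (1.0 - np.exp(-x)) / x

# Illustration: evidence persists ~5 days on average (lam = 0.2/day).
for delta in (1, 7, 30):
    print(f"survey every {delta:2d} d -> P(detect) = {detection_probability(0.2, delta):.2f}")
```

Even with evidence persisting five days on average, monthly surveys detect fewer than one in five events in this sketch, which illustrates how severely infrequent observation can bias incidence estimates.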

Entropy ◽  
2018 ◽  
Vol 20 (10) ◽  
pp. 752 ◽  
Author(s):  
Francesca Tria ◽  
Vittorio Loreto ◽  
Vito Servedio

Zipf’s, Heaps’ and Taylor’s laws are ubiquitous in many different systems where innovation processes are at play. Together, they represent a compelling set of stylized facts regarding the overall statistics, the innovation rate and the scaling of fluctuations for systems as diverse as written texts and cities, ecological systems and stock markets. Many modeling schemes have been proposed in the literature to explain those laws, but only recently has a modeling framework been introduced that accounts for the emergence of all three laws without deducing one law from the others and without ad hoc assumptions. This modeling framework is based on the concept of the adjacent possible space, whose key feature is that it is dynamically restructured as its boundaries are explored, i.e., conditional on the occurrence of novel events. Here, we illustrate this approach and show how this simple modeling framework, instantiated through a modified Pólya’s urn model, is able to reproduce Zipf’s, Heaps’ and Taylor’s laws within a unique self-consistent scheme. In addition, the same modeling scheme embraces other less common evolutionary laws (Hoppe’s model and Dirichlet processes) as particular cases.
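
A minimal sketch of a Pólya-type urn with triggering, in the spirit of the model described (the reinforcement parameter ρ and triggering parameter ν follow the usual convention; the published model's details may differ):

```python
import random

def urn_with_triggering(steps, rho=4, nu=2, seed=0):
    """Simulate a Pólya-type urn with triggering.

    Each draw returns the drawn colour to the urn plus `rho` extra
    copies of it (reinforcement). The first time a colour is drawn
    (a novelty), `nu + 1` balls of brand-new colours are also added,
    expanding the adjacent possible. Returns the number of distinct
    colours seen after each step.
    """
    rng = random.Random(seed)
    urn = list(range(nu + 1))          # initial colours
    next_colour = nu + 1
    seen, distinct = set(), []
    for _ in range(steps):
        ball = rng.choice(urn)
        urn.extend([ball] * rho)       # reinforcement
        if ball not in seen:           # novelty triggers new colours
            seen.add(ball)
            urn.extend(range(next_colour, next_colour + nu + 1))
            next_colour += nu + 1
        distinct.append(len(seen))
    return distinct

growth = urn_with_triggering(20000)
print(growth[-1], "distinct colours after 20000 draws")
```

With ν < ρ, the number of distinct colours grows as D(t) ~ t^(ν/ρ), a Heaps-like law; Zipf's and Taylor's laws emerge from the same dynamics.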


Author(s):  
Matthias Greiner ◽  
Thomas Selhorst ◽  
Anne Balkema-Buschmann ◽  
Wesley O. Johnson ◽  
Christine Müller-Graf ◽  
...  

Quantitative risk assessments for bovine spongiform encephalopathy (BSE) necessitate estimates for key parameters such as the prevalence of infection, the probability of absence of infection in defined birth cohorts, and the number of BSE-infected but non-detected cattle entering the food chain. We estimated these three key parameters, with adjustment for misclassification, from the German BSE surveillance data, using a Gompertz model for latent (i.e. unobserved) age-dependent detection probabilities and a Poisson response model for the number of BSE cases for birth cohorts 1999 to 2015. The models were combined in a Bayesian framework. We estimated median true BSE prevalences between 3.74 and 0.216 cases per 100,000 animals for the birth cohorts 1990 to 2001 and observed a peak for the 1996 birth cohort, with a point estimate of 16.41 cases per 100,000 cattle. For birth cohorts from 2002 to 2013, the estimated median prevalence was below one case per 100,000 head. The calculated confidence in freedom from disease (design prevalence 1 in 100,000) was above 99.5% for the birth cohorts 2002 to 2006. In conclusion, BSE surveillance in the healthy slaughtered cattle chain was extremely sensitive at the time when BSE repeatedly occurred in Germany (2000–2009), because the entry of BSE-infected cattle into the food chain could virtually be prevented by the extensive surveillance program during these years and until 2015 (estimated non-detected cases per 100,000 [95% credible interval] in 2000, 2009, and 2015: 0.64 [0.5, 0.8], 0.05 [0.01, 0.14], and 0.19 [0.05, 0.61], respectively).
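
The Gompertz form for age-dependent detection can be sketched as follows; the parameterization and all numerical values below are illustrative assumptions, not the paper's fitted estimates, and the paper embeds this idea in a full Bayesian Poisson model rather than the crude division used here:

```python
import numpy as np

def gompertz_detection(age_months, a=0.9, b=8.0, c=0.08):
    """Age-dependent detection probability of the Gompertz form
    p(age) = a * exp(-b * exp(-c * age)).
    Parameter values are purely illustrative: detection is near zero
    in young animals and saturates at `a` in old ones.
    """
    return a * np.exp(-b * np.exp(-c * age_months))

def corrected_prevalence(cases, tested, mean_age_months):
    """Crude misclassification adjustment: divide the apparent
    prevalence by the detection probability at the cohort's mean age.
    """
    p_detect = gompertz_detection(mean_age_months)
    return (cases / tested) / p_detect

# Hypothetical cohort: 3 detected cases among 1.2 million tested,
# mean age at testing 60 months.
print(f"{corrected_prevalence(3, 1.2e6, 60) * 1e5:.2f} cases per 100,000")
```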


Author(s):  
William L. Server ◽  
Randy G. Lott ◽  
Stan T. Rosinski

The mechanistically guided embrittlement correlation model adopted in ASTM E 900-02 was based on a database of U.S. surveillance results current through calendar year 1998. An extensive amount of new surveillance data now exists, including a large body of boiling water reactor (BWR) results from an integrated, supplemental surveillance program designed to augment the plant-specific BWR surveillance programs. These recent data allow a statistical test of the ASTM E 900-02 embrittlement correlation, as well as of the NRC correlation model currently used in the pressurized thermal shock (PTS) re-evaluation effort and of the older Regulatory Guide 1.99, Revision 2 correlation. Even though the ASTM E 900-02 embrittlement correlation is a simplified version of the NRC model, a comparison of the two embrittlement correlation models using the new database has proven revealing. Based on the new BWR data, both models are inadequate in their ability to predict BWR results; this inadequacy is even more significant for extrapolation outside the database, as required for BWR heat-up and cool-down curves and for some pressurized water reactor (PWR) heat-up curves. Other aspects of the two models, as revealed by this preliminary look at the new data, are presented.
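
The kind of statistical test described can be illustrated with a simple residual analysis: given measured transition-temperature shifts from surveillance capsules and a correlation model's predictions, test whether the mean residual differs from zero. The data arrays below are hypothetical placeholders, not the surveillance results discussed above:

```python
import numpy as np
from scipy import stats

# Hypothetical measured vs. predicted transition-temperature shifts (°F)
# for a set of BWR surveillance capsules.
measured  = np.array([42.0, 55.3, 61.8, 70.2, 48.9, 66.4, 58.1, 73.5])
predicted = np.array([35.1, 50.8, 52.4, 61.0, 44.2, 57.9, 49.6, 63.8])

residuals = measured - predicted
t_stat, p_value = stats.ttest_1samp(residuals, 0.0)

print(f"mean residual = {residuals.mean():.1f} °F (sd = {residuals.std(ddof=1):.1f})")
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")  # small p: systematic bias
```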


2021 ◽  
Vol 193 (10) ◽  
Author(s):  
Jeremy A. Baumgardt ◽  
Michael L. Morrison ◽  
Leonard A. Brennan ◽  
Madeleine Thornley ◽  
Tyler A. Campbell

Abstract
Population monitoring is fundamental for informing management decisions aimed at reducing the rapid rate of global biodiversity decline. Herpetofauna are experiencing declines worldwide and include species that are challenging to monitor. Raw counts and associated metrics such as richness indices are common for monitoring populations of herpetofauna; however, these methods are susceptible to bias because they fail to account for varying detection probabilities. Our goal was to develop a program for efficiently monitoring herpetofauna in southern Texas. Our objectives were (1) to estimate detection probabilities in an occupancy modeling framework using trap arrays for a diverse group of herpetofauna and (2) to evaluate the relative effectiveness of funnel traps, pitfall traps, and cover boards. We collected data with 36 arrays at 2 study sites in 2015 and 2016, for 2105 array-days, resulting in 4839 detections of 51 species. We modeled occupancy for 21 species and found support for the hypothesis that detection probability varied over our sampling duration for 10 species and with rainfall for 10 species. For herpetofauna in our study, we found that 14 and 12 species were most efficiently captured with funnel traps and pitfall traps, respectively, and that no species were most efficiently captured with cover boards. Our results show that methods that do not account for variation in detection probability are highly subject to bias unless the likelihood of false absences is minimized with exceptionally long capture durations. For monitoring herpetofauna in southern Texas, we recommend using arrays with funnel and pitfall traps and an analytical method, such as occupancy modeling, that accounts for variation in detection.
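
A minimal sketch of the single-season occupancy likelihood that underlies this kind of analysis, with constant occupancy (psi) and detection (p) probabilities and hypothetical detection histories; an analysis like the one above would add covariates such as sampling date and rainfall on p:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit

def occupancy_nll(params, histories):
    """Negative log-likelihood of a single-season occupancy model.
    `histories` is a sites x surveys 0/1 detection matrix."""
    psi, p = expit(params)            # keep probabilities in (0, 1)
    k = histories.shape[1]
    dets = histories.sum(axis=1)
    # Sites with >= 1 detection are certainly occupied.
    ll_det = np.log(psi) + dets * np.log(p) + (k - dets) * np.log(1 - p)
    # All-zero histories: occupied but missed, or truly unoccupied.
    ll_zero = np.log(psi * (1 - p) ** k + (1 - psi))
    return -np.sum(np.where(dets > 0, ll_det, ll_zero))

# Hypothetical detection histories: 6 arrays x 5 sampling occasions.
hist = np.array([[1,0,1,0,0],[0,0,0,0,0],[1,1,0,1,0],
                 [0,0,0,0,0],[0,1,0,0,0],[0,0,0,1,1]])
fit = minimize(occupancy_nll, x0=[0.0, 0.0], args=(hist,))
psi_hat, p_hat = expit(fit.x)
print(f"psi = {psi_hat:.2f}, p = {p_hat:.2f}")
```

The all-zero term is what separates occupancy modeling from raw counts: a site with no detections contributes the probability of a false absence, rather than being scored as unoccupied.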


2017 ◽  
Vol 44 (7) ◽  
pp. 514 ◽  
Author(s):  
Jacinta E. Humphrey ◽  
Kylie A. Robert ◽  
Steve W. J. Leonard

Context: Cryptic (i.e. secretive, elusive or well camouflaged) species are often very challenging to survey accurately. Because many cryptic species are threatened, the development of robust and efficient survey methods to detect them is critically important for conservation management. The swamp skink (Lissolepis coventryi) is an example of an elusive and threatened species; it inhabits densely vegetated, wet environments throughout south-east Australia. The swamp skink occurs in peri-urban areas and faces many human-induced threats including habitat loss, introduced predators and environmental pollution. Effective and reliable survey methods are therefore essential for its conservation.
Aims: This study aimed to review the current swamp skink survey guidelines by comparing the detection success of Elliott traps with two alternative methods: passive infrared cameras (camera traps) and artificial refuges.
Methods: Detection probabilities for the swamp skink were compared using Elliott traps, artificial refuges and camera traps at two known populations on the Mornington Peninsula, Victoria, Australia.
Key results: Artificial refuges and camera traps were significantly more successful than Elliott traps at detecting swamp skinks.
Conclusions: Elliott traps are currently regarded as the standard technique for surveying swamp skinks; however, they were the least successful of the three methods trialled. Therefore, the use of Elliott traps in future swamp skink presence–absence surveys is not recommended.
Implications: Many previous surveys utilising Elliott traps have failed to detect swamp skinks in habitats where they are likely to occur. Our findings suggest that at least some of these past surveys may have reported false absences of swamp skinks, potentially resulting in poor planning decisions. A reduction in the reliance on Elliott trapping is likely to increase future swamp skink detection success, broaden our understanding of this cryptic species and aid conservation efforts. Our results emphasise that it is essential to regularly review recommended survey methods to ensure they are accurate and effective for target species.
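
A small sketch of how per-visit detection probabilities for the three methods might be estimated and compared at sites where the species is known to be present; all counts below are hypothetical, not the study's data:

```python
from scipy import stats

# Hypothetical outcomes at known-occupied sites:
# (visits with a detection, total survey visits) per method.
methods = {
    "Elliott traps":      (4, 60),
    "artificial refuges": (21, 60),
    "camera traps":       (18, 60),
}

for name, (det, n) in methods.items():
    p_hat = det / n
    # Wilson score interval for the per-visit detection probability.
    lo, hi = stats.binomtest(det, n).proportion_ci(method="wilson")
    print(f"{name:18s} p = {p_hat:.2f} [{lo:.2f}, {hi:.2f}]")
```

Non-overlapping intervals of this kind are what would justify the conclusion that one method detects the species significantly more often per unit of survey effort.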


NeoBiota ◽  
2020 ◽  
Vol 60 ◽  
pp. 117-136
Author(s):  
Adam S. Smart ◽  
Reid Tingley ◽  
Ben L. Phillips

Islands are increasingly used to protect endangered populations from the negative impacts of invasive species. Quarantine efforts on islands are likely to be undervalued in circumstances in which a failure incurs non-economic costs. One approach to ascribe monetary value to such efforts is by modeling the expense of restoring a system to its former state. Using field-based removal experiments on two different islands off northern Australia separated by > 400 km, we estimate cane toad densities, detection probabilities, and the resulting effort needed to eradicate toads from an island. We use these estimates to conservatively evaluate the financial benefit of cane toad quarantine across offshore islands prioritized for conservation management by the Australian federal government. We calculate density as animals per km of freshwater shoreline, and find striking concordance of density estimates across our two island study sites: a mean density of 352 [289, 466] adult toads per kilometre on one island, and a density of 341 [298, 390] on the second. Detection probability differed between our two study islands (Horan Island: 0.1 [0.07, 0.13]; Indian Island: 0.27 [0.22, 0.33]). Using a removal model and the financial costs incurred during toad removal, we estimate that eradicating cane toads would, on average, cost between $22,487 [$14,691, $34,480] (based on Horan Island) and $39,724 [$22,069, $64,001] AUD (Indian Island) per km of available freshwater shoreline. We estimate the remaining value of toad quarantine across islands that have been prioritized for conservation benefit within the toads’ predicted range, and find the net value of quarantine efforts to be $43.4 [28.4–66.6] – $76.7 [42.6–123.6] M depending on which island dataset is used to calibrate the model. We conservatively estimate the potential value of a mainland cane toad containment strategy – to prevent the spread of toads into the Pilbara Bioregion – to be $80 [52.6–123.4] – $142 [79.0–229.0] M. We present a modeling framework that can be used to estimate the value of preventative management, via estimating the length and cost of an eradication program. Our analyses suggest that there is substantial economic value in cane toad quarantine efforts across Australian offshore islands and in a proposed mainland containment strategy.
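
The logic of costing an eradication from density and detectability can be sketched with a simple removal model; the per-pass cost below is an assumed figure for illustration, not the study's, and the real analysis accounts for island-specific logistics:

```python
import math

def eradication_cost(density_per_km, p_detect, cost_per_pass_km):
    """Expected removal effort and cost per km of shoreline.

    Under a simple removal model, each pass removes a fraction
    `p_detect` of the remaining toads, so the expected number of
    passes needed to reduce an initial density N below one animal
    is k = ceil(-ln(N) / ln(1 - p)).
    """
    passes = math.ceil(-math.log(density_per_km) / math.log(1 - p_detect))
    return passes, passes * cost_per_pass_km

# Roughly the densities reported above (~350 toads/km) with the Horan
# Island detection probability; AUD 400 per pass per km is assumed.
k, cost = eradication_cost(350, 0.10, 400)
print(f"~{k} passes, ~${cost:,} AUD per km of shoreline")
```

The same structure explains why a higher detection probability shortens the program: the number of passes scales with 1/ln(1 − p), so doubling p roughly halves the required effort, though per-pass costs may differ between sites.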


2017 ◽  
Vol 47 (2) ◽  
pp. 437-465 ◽  
Author(s):  
Peng Shi ◽  
Kun Shi

Abstract
In non-life insurance, territory-based risk classification is useful for various insurance operations, including marketing, underwriting and ratemaking. This paper proposes a spatially dependent frequency-severity modeling framework to produce territorial risk scores. The framework applies to aggregated insurance claims, where the frequency and severity components examine the occurrence rate and average size of insurance claims in each geographic unit, respectively. We employ bivariate conditional autoregressive models to accommodate the spatial dependency in the frequency and severity components, as well as the cross-sectional association between the two components. Using town-level claims data from automobile insurance in Massachusetts, we demonstrate applications of the model output, territorial risk scores, in ratemaking and market segmentation.
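
A toy illustration of the frequency-severity decomposition behind such risk scores, with hypothetical town-level data; the shrinkage step is a crude stand-in for spatial smoothing, whereas the paper uses bivariate conditional autoregressive models:

```python
import numpy as np

# Hypothetical town-level auto-insurance data: exposures (car-years),
# claim counts, and total losses for four adjacent towns.
exposure = np.array([1200.0, 800.0, 1500.0, 950.0])
counts   = np.array([  90,    70,    95,    60 ])
losses   = np.array([2.7e5, 2.8e5, 2.4e5, 1.5e5])

frequency = counts / exposure          # claims per car-year
severity  = losses / counts            # average claim size
pure_premium = frequency * severity    # expected loss per car-year

# Territorial risk score: pure premium relative to the overall mean.
base = losses.sum() / exposure.sum()
score = pure_premium / base

# Crude spatial smoothing: shrink each town's score toward the mean
# of its neighbours (adjacency list is hypothetical).
neighbours = [[1, 2], [0, 2], [0, 1, 3], [2]]
smoothed = np.array([0.5 * score[i] + 0.5 * score[nb].mean()
                     for i, nb in enumerate(neighbours)])
print(np.round(smoothed, 2))
```

Modeling frequency and severity separately, as above, is what lets the framework capture their distinct spatial patterns and the cross-sectional association between them.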

