Do forests ‘fall silent’ following aerial applications of 1080 poison? Development and application of bird monitoring methods using automated sound recording devices

2021 ◽  
Author(s):  
Asher Cook

Electronic bioacoustic techniques are providing new and effective ways of monitoring birds and have a number of advantages over traditional monitoring methods. Given the increasing popularity of bioacoustic methods, and the difficulties associated with automated analyses (e.g. high Type I error rates), it is important to investigate the most effective ways of scoring audio recordings. In Chapter Two I describe a novel sub-sampling and scoring technique (the ‘10 in 60 sec’ method), which estimates the vocal conspicuousness of bird species through repeated presence-absence counts, and compare its performance with a current manual method. The ‘10 in 60 sec’ approach reduced variability in estimates of vocal conspicuousness, significantly increased the number of species detected per count, and reduced temporal autocorrelation. I propose that the ‘10 in 60 sec’ method will have greater overall ability to detect changes in underlying birdsong parameters and hence provide more informative data to scientists and conservation managers.

It is often anecdotally suggested that forests ‘fall silent’ and are devoid of birdsong following aerial 1080 operations. However, it is difficult to objectively assess the validity of this claim without quantitative information that addresses it specifically. Therefore, in Chapter Three I applied the methodological framework outlined in Chapter Two to answer a controversial conservation question: do New Zealand forests ‘fall silent’ after aerial 1080 operations? At the community level I found no evidence for a reduction in birdsong after the 1080 operation, and eight of the nine bird taxa showed no evidence for a decline in vocal conspicuousness. Only one species, the tomtit (Petroica macrocephala), showed evidence for a decline in vocal conspicuousness, though this effect was non-significant after applying a correction for multiple tests.

In Chapter Four I used the tomtit as a case study species to (1) compare manual and automated approaches to estimating vocal conspicuousness and (2) determine the feasibility of using an automated detector on a New Zealand passerine. I found that data from the automated method were significantly positively correlated with the manual method, although the relationship was not particularly strong (Pearson’s r = 0.62, P < 0.0001). The automated method suffered from a relatively high false-negative rate, and the data it produced did not reveal a decline in tomtit call rates following the 1080 drop. Given this relatively poor performance, I propose that the automated detector developed in this thesis requires further refinement before it is suitable for answering management-level questions for tomtit populations. However, as pattern recognition technology continues to improve, automated methods are likely to become more viable in the future.
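The thesis defines the ‘10 in 60 sec’ protocol in detail; as a rough illustration only, the sketch below assumes each one-minute listening window is divided into ten equal sub-samples, each scored for the presence or absence of a species, with vocal conspicuousness taken as the proportion of positive sub-samples. The function name, the bin structure, and the example call times are assumptions, not taken from the thesis.

```python
from typing import Dict, List

def ten_in_sixty_scores(detections: Dict[str, List[float]],
                        window_start: float,
                        window_len: float = 60.0,
                        n_bins: int = 10) -> Dict[str, float]:
    """Score one recording window with a '10 in 60 sec'-style rule:
    the proportion of sub-samples (bins) in which each species is heard."""
    bin_len = window_len / n_bins
    scores = {}
    for species, call_times in detections.items():
        heard_bins = set()
        for t in call_times:
            offset = t - window_start
            if 0.0 <= offset < window_len:
                heard_bins.add(int(offset // bin_len))
        scores[species] = len(heard_bins) / n_bins
    return scores

# A tomtit heard in 3 of the 10 sub-samples scores 0.3; a bellbird heard
# once scores 0.1. Repeating this over many windows yields the repeated
# presence-absence counts the method is built on.
calls = {"tomtit": [2.1, 14.8, 15.2, 47.0], "bellbird": [5.0]}
print(ten_in_sixty_scores(calls, window_start=0.0))
```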


2009 ◽  
Vol 14 (3) ◽  
pp. 230-238 ◽  
Author(s):  
Xiaohua Douglas Zhang ◽  
Shane D. Marine ◽  
Marc Ferrer

For hit selection in genome-scale RNAi research, we do not want to miss small interfering RNAs (siRNAs) with large effects; meanwhile, we do not want to include siRNAs with small or no effects in the list of selected hits. There is a strong need to control both the false-negative rate (FNR), in which the siRNAs with large effects are not selected as hits, and the restricted false-positive rate (RFPR), in which the siRNAs with no or small effects are selected as hits. An error control method based on strictly standardized mean difference (SSMD) has been proposed to maintain a flexible and balanced control of FNR and RFPR. In this article, the authors illustrate how to maintain a balanced control of both FNR and RFPR using the plot of error rate versus SSMD as well as how to keep high powers using the plot of power versus SSMD in RNAi high-throughput screening experiments. There are relationships among FNR, RFPR, Type I and II errors, and power. Understanding the differences and links among these concepts is essential for people to use statistical terminology correctly and effectively for data analysis in genome-scale RNAi screens. Here the authors explore these differences and links. (Journal of Biomolecular Screening 2009:230-238)
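As a reference point, the quantity at the centre of this framework, the strictly standardized mean difference, is the mean difference between an siRNA's measurements and a negative control divided by the square root of the sum of their variances. The sketch below is a minimal, assumed illustration of SSMD-based hit selection, not the authors' code; the cutoff of 2 and the simulated values are arbitrary.

```python
import numpy as np

def ssmd(sample, control):
    """Method-of-moments SSMD estimate: (difference of means) divided by
    the square root of the sum of the two sample variances."""
    sample = np.asarray(sample, dtype=float)
    control = np.asarray(control, dtype=float)
    return (sample.mean() - control.mean()) / np.sqrt(
        sample.var(ddof=1) + control.var(ddof=1))

# Selecting hits with an |SSMD| cutoff trades the false-negative rate
# (missing siRNAs with large effects) against the restricted false-positive
# rate (selecting siRNAs with small or no effects).
rng = np.random.default_rng(0)
negative_control = rng.normal(0.0, 1.0, size=8)
strong_sirna = rng.normal(3.0, 1.0, size=8)
weak_sirna = rng.normal(0.5, 1.0, size=8)
for name, wells in [("strong siRNA", strong_sirna), ("weak siRNA", weak_sirna)]:
    value = ssmd(wells, negative_control)
    print(name, round(float(value), 2), "hit" if abs(value) >= 2.0 else "not a hit")
```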


1984 ◽  
Vol 11 (1) ◽  
pp. 11-18 ◽  
Author(s):  
W. Ted Hinds

Ecological monitoring is the purposeful observation, over time, of ecological processes in relation to stress. It differs from biological monitoring in that ecological monitoring does not consider the biota to be a surrogate filter to be analysed for contaminants, but rather has changes in the biotic processes as its focal point for observation of response to stress. Ecological monitoring methods aimed at detecting subtle or slow changes in ecological structure or function usually cannot be based on simple repetition of an arbitrarily chosen field measurement. An optimum method should be deliberately designed to be ecologically appropriate, statistically credible, and cost-efficient.

Ecologically appropriate methods should consider the ecological processes that are most likely to respond to the stress of concern, so that relatively simple and well-defined measurements can be used. Statistical credibility requires that both Type I and Type II errors be addressed; Type I error (a false declaration of impact when none exists) and Type II error (a false declaration that no change has taken place or that an observed change is random) are about equally important in a monitoring context. Therefore, these error rates should probably be equal. Furthermore, the error rates should reflect the large inherent variability in undomesticated situations; the optimum may be 10%, rather than the traditional 5% or 1% for controlled experiments and observations.
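To make the suggestion concrete, the sketch below works through an assumed two-sample design in which Type I and Type II error rates are both set to 10%, using the standard normal-approximation sample-size formula; the effect size and the comparison with a conventional 5%/20% split are illustrative, not from the paper.

```python
from scipy.stats import norm

def n_per_group(effect_size, alpha=0.10, beta=0.10, two_sided=True):
    """Approximate sample size per group for comparing two means with a
    standardized effect size d, using the normal approximation:
    n = 2 * (z_alpha + z_beta)^2 / d^2."""
    z_alpha = norm.ppf(1 - alpha / 2) if two_sided else norm.ppf(1 - alpha)
    z_beta = norm.ppf(1 - beta)
    return 2 * (z_alpha + z_beta) ** 2 / effect_size ** 2

# Equal 10% error rates versus the conventional alpha = 5%, beta = 20%:
# the sample sizes are similar, but the second design accepts a one-in-five
# chance of missing a real ecological change.
print(round(n_per_group(0.5, alpha=0.10, beta=0.10)))   # ~69 per group
print(round(n_per_group(0.5, alpha=0.05, beta=0.20)))   # ~63 per group
```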


2021 ◽  
Author(s):  
Maria Escobar ◽  
Guillaume Jeanneret ◽  
Laura Bravo-Sánchez ◽  
Angela Castillo ◽  
Catalina Gómez ◽  
...  

Massive molecular testing for COVID-19 has been identified as fundamental to moderating the spread of the pandemic. Pooling methods can enhance testing efficiency, but they are viable only at low incidences of the disease. We propose Smart Pooling, a machine learning method that uses clinical and sociodemographic data from patients to increase the efficiency of informed Dorfman testing for COVID-19 by arranging samples into all-negative pools. To do this, we ran an automated method to train numerous machine learning models on a retrospective dataset from more than 8,000 patients tested for SARS-CoV-2 from April to July 2020 in Bogotá, Colombia. We estimated the efficiency gains of using the predictor to support Dorfman testing by simulating the outcome of tests. We also computed the attainable efficiency gains of non-adaptive pooling schemes mathematically. Moreover, we measured the false-negative error rates in detecting the ORF1ab and N genes of the virus in RT-qPCR dilutions. Finally, we presented the efficiency gains of using our proposed pooling scheme on proof-of-concept pooled tests. We believe Smart Pooling will be efficient for optimizing massive testing of SARS-CoV-2.
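For context on why pooling is viable only at low incidence: under classic Dorfman pooling with a perfect test, the expected number of tests per individual is 1/k + 1 - (1 - p)^k for pool size k and prevalence p. The sketch below computes that quantity and the best pool size at a few prevalences; it is a textbook simplification, not the Smart Pooling model, and the prevalence values are arbitrary.

```python
def dorfman_tests_per_sample(prevalence, pool_size):
    """Expected tests per individual under classic Dorfman pooling with a
    perfect test: one pooled test shared by the pool, plus individual
    retests whenever the pool contains at least one positive sample."""
    p_pool_positive = 1.0 - (1.0 - prevalence) ** pool_size
    return 1.0 / pool_size + p_pool_positive

# Efficiency collapses as prevalence rises, which is why schemes such as
# Smart Pooling try to route likely-negative samples into pools.
for p in (0.01, 0.05, 0.10, 0.20):
    best_k = min(range(2, 31), key=lambda k: dorfman_tests_per_sample(p, k))
    print(f"prevalence {p:.2f}: pool size {best_k}, "
          f"{dorfman_tests_per_sample(p, best_k):.2f} tests per sample")
```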


2015 ◽  
Vol 36 (6) ◽  
pp. 3671 ◽  
Author(s):  
Gilberto Rodrigues Liska ◽  
Fortunato Silva de Menezes ◽  
Marcelo Angelo Cirillo ◽  
Flávio Meira Borém ◽  
Ricardo Miguel Cortez ◽  
...  

Automatic classification methods have been widely used in numerous situations, and the boosting method has become known for its use of a classification algorithm that takes a set of training data and constructs a classifier from reweighted versions of the training set. Given this characteristic, the aim of this study is to assess a sensory experiment related to acceptance tests with specialty coffees, with reference to both trained and untrained consumer groups. For each consumer group, four sensory characteristics (aroma, body, sweetness, and final score) were evaluated for four types of specialty coffees. In order to obtain a classification rule that discriminates trained from untrained tasters, we used conventional Fisher’s Linear Discriminant Analysis (LDA) and discriminant analysis via the boosting algorithm (AdaBoost). The criteria used in the comparison of the two approaches were sensitivity, specificity, false-positive rate, false-negative rate, and accuracy. Additionally, to evaluate the performance of the classifiers, success and error rates were obtained by Monte Carlo simulation, considering 100 replicas of a random partition of 70% for the training set and the remainder for the test set. It was concluded that the boosting method applied to discriminant analysis yielded a higher sensitivity rate for the trained panel, at 80.63%, and hence a lower false-negative rate, at 19.37%. Thus, the boosting method may be used as a means of improving the LDA classifier for discrimination of trained tasters.
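The sketch below mirrors the comparison in spirit: LDA and AdaBoost fitted to four sensory scores, with sensitivity for the trained class averaged over 100 random 70/30 partitions. The data are synthetic stand-ins (the real sensory scores are not reproduced here), so the numbers it prints are not the paper's.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import recall_score

rng = np.random.default_rng(1)
# Synthetic aroma, body, sweetness and final-score columns; label 1 = trained taster.
X = np.vstack([rng.normal(7.5, 0.6, size=(120, 4)),
               rng.normal(8.1, 0.4, size=(120, 4))])
y = np.repeat([0, 1], 120)

sensitivity = {"LDA": [], "AdaBoost": []}
for seed in range(100):  # 100 replicas of a random 70/30 partition
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, test_size=0.30, stratify=y, random_state=seed)
    for name, clf in (("LDA", LinearDiscriminantAnalysis()),
                      ("AdaBoost", AdaBoostClassifier(n_estimators=100,
                                                      random_state=seed))):
        clf.fit(X_tr, y_tr)
        # Sensitivity = recall for the trained-taster class (label 1).
        sensitivity[name].append(recall_score(y_te, clf.predict(X_te)))

print({name: round(float(np.mean(vals)), 3) for name, vals in sensitivity.items()})
```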


2017 ◽  
Author(s):  
Olivier Naret ◽  
Nimisha Chaturvedi ◽  
Istvan Bartha ◽  
Christian Hammer ◽  
Jacques Fellay

Studies of host genetic determinants of pathogen sequence variation can identify sites of genomic conflicts, by highlighting variants that are implicated in immune response on the host side and adaptive escape on the pathogen side. However, systematic genetic differences in host and pathogen populations can lead to inflated type I (false positive) and type II (false negative) error rates in genome-wide association analyses. Here, we demonstrate through simulation that correcting for both host and pathogen stratification reduces spurious signals and increases power to detect real associations in a variety of tested scenarios. We confirm the validity of the simulations by showing comparable results in an analysis of paired human and HIV genomes.
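A minimal sketch of the kind of correction described, assuming principal components computed from both the host and the pathogen genotype matrices are included as covariates in a per-variant logistic regression; the function names and the choice of five components are illustrative, not the authors' pipeline.

```python
import numpy as np
import statsmodels.api as sm

def top_pcs(genotypes, k=5):
    """Top k principal components of a (samples x variants) genotype
    matrix, used as covariates to absorb population stratification."""
    centred = genotypes - genotypes.mean(axis=0)
    u, s, _ = np.linalg.svd(centred, full_matrices=False)
    return u[:, :k] * s[:k]

def host_variant_test(pathogen_aa, host_snp, host_geno, pathogen_geno, k=5):
    """Logistic regression of a binary pathogen amino-acid variant on a
    host SNP dosage, adjusting for host and pathogen principal components.
    Returns the p-value for the host SNP term."""
    covariates = np.column_stack([host_snp,
                                  top_pcs(host_geno, k),
                                  top_pcs(pathogen_geno, k)])
    design = sm.add_constant(covariates)
    fit = sm.Logit(pathogen_aa, design).fit(disp=0)
    return fit.pvalues[1]  # index 0 is the intercept, 1 is the host SNP
```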


2021 ◽  
Vol 8 (11) ◽  
Author(s):  
Yair Daon ◽  
Amit Huppert ◽  
Uri Obolski

Pooling is a method of simultaneously testing multiple samples for the presence of pathogens. Pooling of SARS-CoV-2 tests is increasing in popularity due to its high testing throughput. A popular pooling scheme is Dorfman pooling: test N individuals simultaneously; if the pooled test is positive, each individual is then tested separately, otherwise all are declared negative. Most analyses of the error rates of pooling schemes assume that including more than a single infected sample in a pooled test does not increase the probability of a positive outcome. We challenge this assumption with experimental data and suggest a novel and parsimonious probabilistic model for the outcomes of pooled tests. As an application, we analyse the false-negative rate (i.e. the probability of a negative result for an infected individual) of Dorfman pooling. We show that false-negative rates under Dorfman pooling increase as the prevalence of infection decreases. However, low infection prevalence is exactly the condition under which Dorfman pooling achieves its highest throughput efficiency. We therefore urge cautious use of pooling and the development of pooling schemes that correctly account for tests’ error rates.
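The direction of the effect can be reproduced with a much cruder model than the paper's: assume each infected sample in a pool independently triggers the pooled test with the test's single-sample sensitivity, and that a positive pool is followed by an imperfect individual test. The simulation below encodes only that assumption, with arbitrary sensitivity and pool size, and is not the authors' fitted model.

```python
import numpy as np

def dorfman_false_negative_rate(prevalence, pool_size, sensitivity,
                                n_sim=200_000, seed=0):
    """Monte Carlo false-negative rate for one infected individual under
    Dorfman pooling, assuming each infected sample in the pool triggers
    the pooled test independently with probability `sensitivity`."""
    rng = np.random.default_rng(seed)
    # The focal individual is infected; the other pool members are random.
    others_infected = rng.random((n_sim, pool_size - 1)) < prevalence
    others_trigger = rng.random((n_sim, pool_size - 1)) < sensitivity
    focal_triggers = rng.random(n_sim) < sensitivity
    pool_positive = focal_triggers | (others_infected & others_trigger).any(axis=1)
    followup_positive = rng.random(n_sim) < sensitivity
    return 1.0 - np.mean(pool_positive & followup_positive)

# Lower prevalence means fewer co-infected samples per pool, so the pooled
# test has fewer chances to fire and the false-negative rate rises.
for p in (0.001, 0.01, 0.05, 0.10):
    print(p, round(dorfman_false_negative_rate(p, pool_size=8, sensitivity=0.9), 3))
```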


2017 ◽  
Vol 11 (3-4) ◽  
pp. 118 ◽  
Author(s):  
Rashid Khalid Sayyid ◽  
Dharmendra Dingar ◽  
Katherine Fleshner ◽  
Taylor Thorburn ◽  
Joshua Diamond ◽  
...  

Introduction: Repeat prostate biopsies in active surveillance patients are associated with significant complications. Novel imaging and blood/urine-based non-invasive tests are being developed to better predict disease grade and volume progression. We conducted a theoretical study to determine what test performance characteristics and costs a non-invasive test would require for patients and their physicians to comfortably avoid biopsy.

Methods: Surveys were administered to two populations to determine an acceptable false-negative rate and cost for such a test. Active surveillance patients were recruited at the time of followup in clinic at Princess Margaret Cancer Centre. Physician members of the Society of Urological Oncology were targeted via an online survey. Participants were questioned about their demographics and other characteristics that might influence chosen error rates and cost.

Results: 136 patients and 670 physicians were surveyed, with 130 (95.6%) and 104 (15.5%) responses obtained, respectively. A vast majority of patients (90.6%) were comfortable with a non-invasive test in place of biopsy, with 64.8% accepting a false-negative rate of 5‒20%. Most physicians (93.3%) were comfortable with a non-invasive test, with 77.9% accepting a rate of 5‒20%. Most patients and physicians felt that a cost of less than $1000 per administration would be reasonable.

Conclusions: Most patients and physicians are comfortable with a non-invasive test. Although a 5% error rate seems acceptable to many, a substantial subset feels that a negative predictive value of 99% or higher is required. Thus, a personalized approach with shared decision-making between patients and physicians is essential to optimize patient care in such situations.


2005 ◽  
Vol 52 (6) ◽  
pp. 73-79 ◽  
Author(s):  
H. Sanderson ◽  
C.H. Stahl ◽  
R. Irwin ◽  
M.D. Rogers

Quantitative uncertainty assessments and the distribution of risk are under scrutiny, and significant criticism has been levelled at null hypothesis testing when Type I (false positive) and Type II (false negative) error rates have not been carefully taken into account. An alternative method, equivalence testing, is discussed, yielding more transparency and potentially more precaution in quantifiable uncertainty assessments. With thousands of chemicals needing regulation in the near future and low public trust in the regulatory process, decision models with transparency and learning processes are required to manage this task. Adaptive, iterative, and learning decision-making tools and processes can help decision makers evaluate the significance of Type I or Type II errors on decision alternatives and can reduce the risk of committing Type III errors (accurate answers to the wrong questions). Simplistic cost-benefit-based decision-making tools do not incorporate the complex interconnectedness characterizing environmental risks, nor do they enhance learning or participation, or include social values and ambiguity. Hence, better decision-making tools are required, and MIRA is an attempt to include some of these critical aspects.
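For readers unfamiliar with the alternative mentioned, equivalence testing reverses the burden of proof: rather than merely failing to reject 'no difference', one must show that the difference lies within pre-specified bounds, typically via two one-sided tests (TOST). The sketch below is a generic TOST for two independent means with assumed data and bounds, offered only to illustrate the idea; it is not specific to MIRA or to the paper's examples.

```python
import numpy as np
from scipy import stats

def tost_independent_means(x, y, low, high):
    """Two one-sided t-tests (TOST) for equivalence of two independent
    means: the difference is declared equivalent to zero only if it is
    shown to be both greater than `low` and less than `high`.
    Returns the TOST p-value (the larger of the two one-sided p-values)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    n_x, n_y = len(x), len(y)
    diff = x.mean() - y.mean()
    pooled_var = ((n_x - 1) * x.var(ddof=1) + (n_y - 1) * y.var(ddof=1)) / (n_x + n_y - 2)
    se = np.sqrt(pooled_var * (1.0 / n_x + 1.0 / n_y))
    df = n_x + n_y - 2
    p_lower = stats.t.sf((diff - low) / se, df)    # H0: difference <= low
    p_upper = stats.t.cdf((diff - high) / se, df)  # H0: difference >= high
    return max(p_lower, p_upper)

# A non-significant t-test is not evidence of 'no effect'; equivalence must
# be demonstrated within bounds agreed before the data are seen.
rng = np.random.default_rng(2)
treated = rng.normal(10.0, 2.0, size=30)
reference = rng.normal(10.2, 2.0, size=30)
print(round(tost_independent_means(treated, reference, low=-1.0, high=1.0), 4))
```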


Author(s):  
Lachlan J. Gunn ◽  
François Chapeau-Blondeau ◽  
Mark D. McDonnell ◽  
Bruce R. Davis ◽  
Andrew Allison ◽  
...  

Is it possible for a large sequence of measurements or observations, which support a hypothesis, to counterintuitively decrease our confidence? Can unanimous support be too good to be true? The assumption of independence is often made in good faith; however, rarely is consideration given to whether a systemic failure has occurred. Taking this into account can cause certainty in a hypothesis to decrease as the evidence for it becomes apparently stronger. We perform a probabilistic Bayesian analysis of this effect with examples based on (i) archaeological evidence, (ii) weighing of legal evidence and (iii) cryptographic primality testing. In this paper, we investigate the effects of small error rates in a set of measurements or observations. We find that even with very low systemic failure rates, high confidence is surprisingly difficult to achieve; in particular, we find that certain analyses of cryptographically important numerical tests are highly optimistic, underestimating their false-negative rate by as much as a factor of 2^80.
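A toy version of the calculation shows the qualitative effect. Assume each observation independently supports a true hypothesis with probability 0.95 (and a false one with probability 0.05), but that with a small probability the entire measurement process has failed systemically and reports support regardless of the truth. All four numbers below are illustrative assumptions, not the paper's parameters.

```python
def posterior_after_unanimity(n, prior=0.5, sensitivity=0.95,
                              false_positive=0.05, p_systemic=0.01):
    """Posterior probability of a hypothesis after n unanimous supporting
    observations, allowing a small probability that the measurement
    process is systemically broken and always reports support."""
    like_true = p_systemic + (1 - p_systemic) * sensitivity ** n
    like_false = p_systemic + (1 - p_systemic) * false_positive ** n
    return prior * like_true / (prior * like_true + (1 - prior) * like_false)

# Confidence peaks after a few confirmations and then falls back: a very
# long run of unanimous agreement becomes better explained by a systemic
# failure than by many independent, slightly imperfect observations.
for n in (1, 3, 5, 10, 20, 50):
    print(n, round(posterior_after_unanimity(n), 4))
```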

