Incorporating antenna detections into abundance estimates of fish

Author(s):  
Maria C. Dzul ◽  
Charles B. Yackulic ◽  
William Louis Kendall ◽  
Dana L Winkelman ◽  
Mary M. Conner ◽  
...  

Autonomous passive integrated transponder (PIT) tag antennas are commonly used to detect fish marked with PIT tags but cannot detect unmarked fish, creating challenges for abundance estimation. Here we describe an approach to estimate abundance from paired physical capture and antenna detection data in closed and open mark-recapture models. Additionally, for open models, we develop an approach that incorporates uncertainty in fish size, because fish size changes through time (as fish grow) but is unknown if fish are not physically captured (e.g., only detected on antennas). Incorporating size uncertainty allows estimation of size-specific abundances and demonstrates a generally useful method for obtaining state-specific abundance estimates under state uncertainty. Simulation studies comparing models with and without antenna detections illustrate that the benefit of our approach increases as a larger proportion of the population is marked. When applied to two field data sets, our approach to incorporating antenna detections substantially reduced uncertainty in abundance. We conclude that PIT antennas hold great potential for improving abundance estimation, despite the challenges they present.
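To make the data structure concrete, the following minimal Python sketch simulates paired detection histories of the kind described above: physical captures mark fish, and an autonomous antenna can re-detect only marked fish. All parameter values are hypothetical, and the sketch illustrates the data, not the authors' estimation model.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical parameters (not from the paper): closed population size,
# per-occasion physical-capture probability, and the probability that a
# *marked* fish is read by the antenna during the following interval.
N, T = 500, 4
p_capture = 0.15
p_antenna = 0.40

# Encode each fish-by-occasion outcome: 0 = not detected,
# 1 = antenna detection only, 2 = physical capture.
history = np.zeros((N, T), dtype=int)
marked = np.zeros(N, dtype=bool)

for t in range(T):
    caught = rng.random(N) < p_capture
    # Antennas can only read fish that already carry a PIT tag.
    scanned = marked & ~caught & (rng.random(N) < p_antenna)
    history[caught, t] = 2
    history[scanned, t] = 1
    marked |= caught                      # captured fish are tagged

observed = history[history.any(axis=1)]   # fish with at least one detection
print(f"{observed.shape[0]} of {N} fish appear in the data set")
print("example histories (rows = fish, columns = occasions):")
print(observed[:5])
```

In a joint mark-recapture likelihood, histories like these contribute detection information through both the capture and antenna channels, which is broadly where the precision gain described above comes from as the marked fraction of the population grows.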

Author(s):  
Martyna Daria Swiatczak

This study assesses the extent to which the two main Configurational Comparative Methods (CCMs), i.e. Qualitative Comparative Analysis (QCA) and Coincidence Analysis (CNA), produce different models. It further explains how this non-identity is due to the different algorithms upon which both methods are based, namely QCA's Quine–McCluskey algorithm and the CNA algorithm. I offer an overview of the fundamental differences between QCA and CNA and demonstrate both underlying algorithms on three data sets of ascending proximity to real-world data. Subsequent simulation studies in scenarios of varying sample sizes and degrees of noise in the data show high overall ratios of non-identity between the QCA parsimonious solution and the CNA atomic solution for varying analytical choices, i.e. different consistency and coverage threshold values and ways to derive QCA's parsimonious solution. Clarity on the contrasts between the two methods should enable scholars to make more informed decisions on their methodological approaches, enhance their understanding of what happens behind the results generated by the software packages, and better navigate the interpretation of results. Clarity on the non-identity between the underlying algorithms and its consequences for the results should provide a basis for a methodological discussion about which method, and which variants thereof, are more successful in deriving which search target.
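To make the algorithmic contrast more tangible, the sketch below implements a generic, textbook Quine–McCluskey prime-implicant computation in Python. It is not the QCA or CNA software, and the subsequent step of selecting a minimal cover from the prime implicants is omitted.

```python
from itertools import combinations

def combine(a, b):
    """Merge two implicants that differ in exactly one position; else None."""
    diff = [i for i in range(len(a)) if a[i] != b[i]]
    if len(diff) == 1:
        merged = list(a)
        merged[diff[0]] = "-"          # '-' marks a "don't care" position
        return "".join(merged)
    return None

def prime_implicants(minterms, n_bits):
    """Return the prime implicants of a Boolean function given its minterms."""
    terms = {format(m, f"0{n_bits}b") for m in minterms}
    primes = set()
    while terms:
        merged, used = set(), set()
        for a, b in combinations(sorted(terms), 2):
            m = combine(a, b)
            if m is not None:
                merged.add(m)
                used.update({a, b})
        primes |= terms - used         # terms that could not be merged are prime
        terms = merged
    return primes

# Hypothetical truth-table rows (configurations) with a positive outcome.
print(prime_implicants([0, 1, 2, 5, 6, 7], n_bits=3))
# -> {'00-', '0-0', '-01', '-10', '1-1', '11-'}
```

QCA's parsimonious solution builds on Boolean minimization of this kind, whereas CNA relies on its own, differently constructed algorithm; that difference is the source of the non-identity examined above.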


Geophysics ◽  
2011 ◽  
Vol 76 (6) ◽  
pp. V115-V128 ◽  
Author(s):  
Ning Wu ◽  
Yue Li ◽  
Baojun Yang

To remove surface waves from seismic records while preserving other seismic events of interest, we introduced a transform and a filter based on recent developments in image processing. The transform can be viewed as a weighted Radon transform along linear trajectories. The weights in the transform are data dependent and designed to introduce large amplitude differences between surface waves and other events, so that surface waves can be separated by a simple amplitude threshold. This is a key property of the filter and distinguishes this approach from others, such as conventional methods that use information on moveout ranges to apply a mask in the transform domain. Initial experiments with synthetic records and field data have demonstrated that, with appropriate parameters, the proposed trace-transform filter outperforms conventional methods in terms of both surface-wave attenuation and reflected-signal preservation. Further experiments on larger data sets are needed to fully assess the method.
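A rough sense of the transform-and-threshold idea can be conveyed by a conventional, unweighted linear Radon (slant-stack) transform combined with a simple amplitude threshold, as in the Python sketch below. The data-dependent weights that distinguish the proposed filter are not reproduced here, and a practical workflow would additionally require an inverse transform to subtract the flagged energy.

```python
import numpy as np

def slant_stack(gather, dt, offsets, slownesses):
    """Linear tau-p (slant-stack) transform of a 2-D gather (time x offset)."""
    nt, _ = gather.shape
    t = np.arange(nt) * dt
    taup = np.zeros((nt, len(slownesses)))
    for ip, p in enumerate(slownesses):
        for ix, x in enumerate(offsets):
            # Shift trace ix by p * x (linear moveout) and stack.
            taup[:, ip] += np.interp(t + p * x, t, gather[:, ix],
                                     left=0.0, right=0.0)
    return taup

# Tiny synthetic gather containing only a slow, linear "surface-wave" event.
dt, nt = 0.004, 500
offsets = np.arange(24) * 25.0                      # metres
gather = np.zeros((nt, len(offsets)))
for ix, x in enumerate(offsets):
    gather[int((0.1 + x / 400.0) / dt), ix] = 1.0   # ~400 m/s moveout

slownesses = np.linspace(0.0, 0.004, 81)            # s/m
taup = slant_stack(gather, dt, offsets, slownesses)

# Simple amplitude threshold in the transform domain: the linear event
# focuses near p = 1/400 s/m and exceeds the threshold only there.
mask = np.abs(taup) > 0.5 * np.abs(taup).max()
print("slowness bins flagged by the threshold (s/m):",
      slownesses[mask.any(axis=0)])
```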


Weed Science ◽  
2007 ◽  
Vol 55 (6) ◽  
pp. 652-664 ◽  
Author(s):  
N. C. Wagner ◽  
B. D. Maxwell ◽  
M. L. Taper ◽  
L. J. Rew

To develop a more complete understanding of the ecological factors that regulate crop productivity, we tested the relative predictive power of yield models driven by five predictor variables: wheat and wild oat density, nitrogen and herbicide rate, and growing-season precipitation. Existing data sets were collected and used in a meta-analysis of the ability of at least two predictor variables to explain variation in wheat yield. Yield responses were asymptotic with increasing crop and weed density; however, asymptotic trends were lacking as herbicide and fertilizer levels increased. Based on the independent field data, the three best-fitting models (in order) from the candidate set were a multiple regression equation that included all five predictor variables (R² = 0.71), a double-hyperbolic equation including three predictor variables (R² = 0.63), and a nonlinear model including all five predictor variables (R² = 0.56). The double-hyperbolic, three-predictor model, which did not include herbicide and fertilizer influence on yield, performed slightly better than the five-variable nonlinear model that included these predictors, illustrating the large amount of variation in wheat yield and the lack of concrete knowledge upon which farmers base their fertilizer and herbicide management decisions, especially when weed infestation causes competition for limited nitrogen and water. It was difficult to elucidate ecological first principles from the noisy field data and to build effective models from disjointed data sets in which none of the studies measured all five variables. To address this gap, we conducted a five-variable full-factorial greenhouse experiment. Based on this experiment, the best-fitting model was a new nonlinear equation including all five predictor variables, which fit the greenhouse data better than four previously developed agronomic models, with an R² of 0.66. Development of this mathematical model, through model selection and parameterization with field and greenhouse data, represents the initial step in building a decision support system for site-specific and variable-rate management of herbicide, fertilizer, and crop seeding rate that accounts for varying levels of available water and weed infestation.
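As an illustration of fitting a hyperbolic yield response of the general kind discussed above, the sketch below fits a standard rectangular-hyperbola yield-loss curve to synthetic data with scipy. It is a single-predictor textbook form with made-up parameter values, not the authors' double-hyperbolic or five-variable models.

```python
import numpy as np
from scipy.optimize import curve_fit

def hyperbolic_yield(weed_density, y_weed_free, i, a):
    """Rectangular-hyperbola (Cousens-type) yield loss from weed density.
    y_weed_free: weed-free yield; i: % loss per weed as density -> 0;
    a: maximum % loss as density -> infinity."""
    loss = (i * weed_density) / (1.0 + i * weed_density / a)
    return y_weed_free * (1.0 - loss / 100.0)

# Synthetic observations for illustration only.
rng = np.random.default_rng(0)
density = np.linspace(0.0, 200.0, 40)                 # wild oat plants per m^2
truth = hyperbolic_yield(density, y_weed_free=3500.0, i=1.2, a=60.0)
observed = truth + rng.normal(0.0, 120.0, density.size)

params, _ = curve_fit(hyperbolic_yield, density, observed,
                      p0=[3000.0, 1.0, 50.0])
resid = observed - hyperbolic_yield(density, *params)
r2 = 1.0 - resid.var() / observed.var()
print("fitted (y_weed_free, i, a):", np.round(params, 2), " R^2:", round(r2, 3))
```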


1998 ◽  
Vol 55 (7) ◽  
pp. 1663-1673 ◽  
Author(s):  
M G Meekan ◽  
J J Dodson ◽  
S P Good ◽  
DAJ Ryan

The development of the relationship between otolith and body size in Atlantic salmon (Salmo salar) between hatching and emergence was examined by repeatedly measuring individually identified fish. Otolith growth increments were deposited daily in the period between hatching and emergence. Comparison of back-calculated otolith size and standard length using least squares regression analyses revealed a weak relationship between these variables at each of the 5-day sampling intervals. However, when data sets were pooled among intervals, variation in otolith size accounted for 98% of the variation in alevin length. A computer simulation demonstrated that levels of measurement error similar to those documented in our study resulted in the failure of regression analyses to detect strong relationships between otolith and fish size. Mortality that occurred during the experiment was strongly size selective. This truncated the size ranges of fish in cross-sectional data sets and thus reduced the ability of regression analysis to detect relationships between otolith and fish size. We propose that the weak relationship between otolith and fish size at emergence recorded in previous studies was an artifact of measurement error and the truncation of size ranges in regression analyses. Differences in alevin size at emergence were present at hatching and had been propagated by growth.
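The combined effect of measurement error and a truncated size range on regression strength can be reproduced with a short simulation, in the spirit of the computer simulation described above; all numerical values below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical linear otolith-length relationship plus measurement error.
n = 400
length = rng.uniform(15.0, 25.0, n)            # alevin standard length (mm)
otolith_true = 10.0 + 8.0 * length             # true otolith size (arbitrary units)
otolith_obs = otolith_true + rng.normal(0.0, 6.0, n)   # add measurement error

def r_squared(x, y):
    return np.corrcoef(x, y)[0, 1] ** 2

# Pooled over the full size range, the strong relationship survives the error.
print("pooled R^2:", round(r_squared(length, otolith_obs), 3))

# Within a narrow size window (analogous to a single 5-day sampling interval,
# or to a size range truncated by selective mortality), the signal variance
# shrinks relative to the measurement error and R^2 collapses.
window = (length > 19.0) & (length < 20.0)
print("narrow-range R^2:", round(r_squared(length[window], otolith_obs[window]), 3))
```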


Mathematics ◽  
2020 ◽  
Vol 8 (12) ◽  
pp. 2146
Author(s):  
Abdulrahman Abouammoh ◽  
Mohamed Kayid

Many life models based on the Lindley distribution have been proposed in the literature. In this paper, a unified approach is used to derive a general form for these life models. The present generalization greatly simplifies the derivation of new life distributions and significantly increases the number of lifetime models available for testing and fitting life data sets in biology, engineering, and other fields. Several distributions that differ in the underlying Lindley weights are shown to be special cases of these general forms. Some basic statistical properties and reliability functions are derived for the general forms, and comparisons among the various forms are investigated. The power distribution of this generalization is also considered. The maximum likelihood estimator for complete and right-censored data is discussed, and its efficiency and behavior are investigated in simulation studies. Finally, the proposed models are fitted to several data sets.
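For orientation, the sketch below simulates data from the baseline one-parameter Lindley distribution and recovers its parameter with the closed-form maximum likelihood estimator. The generalized forms, power versions, and censored-data estimation developed in the paper are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(7)

def sample_lindley(theta, size):
    """Draw Lindley(theta) variates via the mixture representation:
    Exponential(theta) with probability theta/(1+theta), Gamma(2, theta) otherwise."""
    use_exp = rng.random(size) < theta / (1.0 + theta)
    exp_part = rng.exponential(1.0 / theta, size)
    gamma_part = rng.gamma(2.0, 1.0 / theta, size)
    return np.where(use_exp, exp_part, gamma_part)

def lindley_mle(x):
    """Closed-form MLE of theta for the one-parameter Lindley distribution."""
    xbar = x.mean()
    return (-(xbar - 1.0) + np.sqrt((xbar - 1.0) ** 2 + 8.0 * xbar)) / (2.0 * xbar)

data = sample_lindley(theta=1.5, size=2000)
print("theta_hat =", round(lindley_mle(data), 3))   # close to the true value 1.5
```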


2020 ◽  
Vol 224 (1) ◽  
pp. 669-681
Author(s):  
Sihong Wu ◽  
Qinghua Huang ◽  
Li Zhao

Late-time transient electromagnetic (TEM) data contain deep subsurface information and are important for resolving deeper electrical structures. However, because of their relatively small signal amplitudes, TEM responses later in time are often dominated by ambient noise. Noise removal is therefore critical to the application of TEM data in imaging electrical structures at depth. De-noising techniques for TEM data have developed rapidly in recent years, but despite considerable effort to improve the quality of TEM responses, it remains a challenge to extract the signals effectively from unpredictable and irregular noise. In this study, we develop a new type of neural network architecture by combining the long short-term memory (LSTM) network with the autoencoder structure to suppress noise in TEM signals. The resulting LSTM-autoencoders yield excellent performance on synthetic data sets including horizontal components of the electric field and the vertical component of the magnetic field generated by different sources such as dipole, loop and grounded line sources. The relative errors between the de-noised data sets and the corresponding noise-free transients are below 1% for most of the sampling points. Notable improvement in the resistivity structure inversion result is achieved using the TEM data de-noised by the LSTM-autoencoder in comparison with several widely used neural networks, especially for later-arriving signals that are important for constraining deeper structures. We demonstrate the effectiveness and general applicability of the LSTM-autoencoder by de-noising experiments using synthetic 1-D and 3-D TEM signals as well as field data sets. The field data from a fixed-loop survey using multiple receivers are greatly improved after de-noising by the LSTM-autoencoder, resulting in more consistent inversion models with significantly increased exploration depth. The LSTM-autoencoder is capable of enhancing the quality of the TEM signals at later times, which enables us to better resolve deeper electrical structures.
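A minimal Keras sketch of an LSTM encoder-decoder trained on noisy/clean transient pairs conveys the basic architecture; the layer sizes, training settings, and synthetic decay curves below are placeholders, not the authors' configuration.

```python
import numpy as np
from tensorflow.keras import layers, models

n_steps, n_features = 256, 1          # hypothetical transient length, one channel

# Encoder compresses each transient to a latent vector; RepeatVector copies it
# across the output steps; the decoder reconstructs a clean transient.
model = models.Sequential([
    layers.Input(shape=(n_steps, n_features)),
    layers.LSTM(64),
    layers.RepeatVector(n_steps),
    layers.LSTM(64, return_sequences=True),
    layers.TimeDistributed(layers.Dense(n_features)),
])
model.compile(optimizer="adam", loss="mse")

# Toy training pairs: noisy exponential decays in, noise-free decays out.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, n_steps)
amp = rng.uniform(0.5, 1.5, (512, 1, 1))
clean = amp * np.exp(-8.0 * t)[None, :, None]
noisy = clean + rng.normal(0.0, 0.05, clean.shape)

model.fit(noisy, clean, epochs=2, batch_size=64, verbose=0)
denoised = model.predict(noisy[:1], verbose=0)     # de-noised example transient
```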


Space Weather ◽  
2016 ◽  
Vol 14 (12) ◽  
pp. 1107-1124 ◽  
Author(s):  
B. V. Jackson ◽  
H.-S. Yu ◽  
A. Buffington ◽  
P. P. Hick ◽  
N. Nishimura ◽  
...  

2017 ◽  
Vol 36 (1) ◽  
pp. 82-94 ◽  
Author(s):  
Grant B. Morgan ◽  
Courtney A. Moore ◽  
Harlee S. Floyd

Although content validity (how well each item of an instrument represents the construct being measured) is foundational in the development of an instrument, statistical validity is also important to the decisions that are made based on the instrument. The primary purpose of this study is to demonstrate how simulation studies can assist the decision making of researchers who are developing or updating instruments. A hypothetical research study is presented in which the researcher is developing a universal behavior screener and must make choices regarding the number of items to include, the pilot sample size, the average difficulty of the items, and the amount of information provided by the instrument at different cut scores. Simulation is then used to create data sets with varying levels of each of these aspects, and decisions are subsequently made regarding the levels that should be applied in the actual study. The rationale for these decisions as well as implications for practice are included.
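A small example of this kind of design simulation, assuming a two-parameter logistic (2PL) item response model (an assumption for illustration, not necessarily the model used in the study), compares the information two hypothetical item pools provide at a screening cut score.

```python
import numpy as np

rng = np.random.default_rng(3)

def simulate_2pl(n_persons, n_items, mean_difficulty=0.0):
    """Simulate dichotomous item responses under a 2PL IRT model."""
    theta = rng.normal(0.0, 1.0, n_persons)            # person ability
    a = rng.lognormal(0.0, 0.3, n_items)               # item discrimination
    b = rng.normal(mean_difficulty, 1.0, n_items)      # item difficulty
    p = 1.0 / (1.0 + np.exp(-a * (theta[:, None] - b[None, :])))
    responses = (rng.random((n_persons, n_items)) < p).astype(int)
    return responses, a, b

def test_information(a, b, cut_score):
    """Fisher information of the instrument at a given ability level."""
    p = 1.0 / (1.0 + np.exp(-a * (cut_score - b)))
    return np.sum(a ** 2 * p * (1.0 - p))

# Compare two hypothetical design choices: 20 vs. 40 items, pilot n = 250.
for n_items in (20, 40):
    responses, a, b = simulate_2pl(n_persons=250, n_items=n_items,
                                   mean_difficulty=-0.5)
    info = test_information(a, b, cut_score=-1.0)      # screening cut score
    print(f"{n_items} items: test information at the cut score = {info:.1f}")
```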

