Moderating Argos location errors in animal tracking data

2012 ◽  
Vol 3 (6) ◽  
pp. 999-1007 ◽  
Author(s):  
David C. Douglas ◽  
Rolf Weinzierl ◽  
Sarah C. Davidson ◽  
Roland Kays ◽  
Martin Wikelski ◽  
...  
2020 ◽  
Author(s):  
Pratik Rajan Gupte ◽  
Christine E Beardsworth ◽  
Orr Spiegel ◽  
Emmanuel Lourie ◽  
Sivan Toledo ◽  
...  

Modern, high-throughput animal tracking studies collect increasingly large volumes of data at very fine temporal scales. At these scales, location error can exceed the animal step size, confounding inferences from tracking data. Excluding positions with large location errors before analysis is one of the main ways movement ecologists deal with this problem. Cleaning data to reduce location error before making biological inferences is widely recommended, and ecologists routinely treat cleaned data as the ground truth. Nonetheless, uniform guidance on this crucial step is scarce. Cleaning high-throughput data must strike a balance between rejecting location errors and retaining valid animal movements. Additionally, users of high-throughput systems face challenges resulting from the high volume of data itself, since processing large data volumes is computationally intensive and difficult without a common set of efficient tools. Furthermore, many methods that cluster movement tracks for ecological inference are based on statistical phenomena and may not be intuitive to interpret in terms of the tracked animal's biology. In this article we introduce a pipeline to pre-process high-throughput animal tracking data in order to prepare it for subsequent analyses. We demonstrate this pipeline on simulated movement data to which we have randomly added location errors. We further suggest how large volumes of cleaned data may be synthesised into biologically meaningful residence patches. We then use calibration data to show how the pipeline improves data quality, and to verify that the residence patch synthesis accurately captures animal space use. Finally, turning to real tracking data from Egyptian fruit bats (Rousettus aegyptiacus), we demonstrate the pre-processing pipeline and residence patch method in a fully worked-out example.
To help with fast implementation of our pipeline, and to help standardise methods, we developed the R package atlastools, which we introduce here. Our pre-processing pipeline and atlastools can be used with any high-throughput animal movement data in which the high data volume, combined with knowledge of the tracked individuals' biology, can be used to reduce location errors. The use of common pre-processing steps that are simple yet robust promotes standardised methods in the field of movement ecology and better inferences from data.
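The pipeline and atlastools are implemented in R; as a language-neutral illustration, here is a minimal Python sketch (hypothetical function name, not part of atlastools) of one common cleaning step of this kind: dropping fixes that are both reached and left at a biologically implausible speed, since a genuine fast movement usually inflates only one of the two segments around a fix, whereas a location error inflates both.

```python
import numpy as np

def speed_filter(x, y, t, max_speed):
    """Keep fixes unless BOTH the speed into and the speed out of the
    fix exceed max_speed (coordinate units per time unit).

    A fix reached and left at implausible speed is more likely a
    location error than a genuine fast movement.
    Returns a boolean mask of fixes to keep.
    """
    x, y, t = (np.asarray(a, dtype=float) for a in (x, y, t))
    step = np.hypot(np.diff(x), np.diff(y))  # distance between consecutive fixes
    speed = step / np.diff(t)                # speed on each segment
    speed_in = np.r_[0.0, speed]             # speed arriving at each fix
    speed_out = np.r_[speed, 0.0]            # speed leaving each fix
    return ~((speed_in > max_speed) & (speed_out > max_speed))

# A single large jump (a likely location error) is flagged,
# while the valid fixes on either side of it are retained.
x = [0.0, 1.0, 500.0, 3.0]
y = [0.0, 0.0, 0.0, 0.0]
t = [0.0, 1.0, 2.0, 3.0]
mask = speed_filter(x, y, t, max_speed=10.0)
# mask -> [True, True, False, True]
```

Real trackers would of course set `max_speed` from the biology of the tracked species, which is exactly the kind of individual-level knowledge the abstract refers to.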


2021 ◽  
Vol 9 (1) ◽  
Author(s):  
Moritz Mercker ◽  
Philipp Schwemmer ◽  
Verena Peschko ◽  
Leonie Enners ◽  
Stefan Garthe

Abstract
Background: New wildlife telemetry and tracking technologies have become available in the last decade, leading to a large increase in the volume and resolution of animal tracking data. These technical developments have been accompanied by various statistical tools aimed at analysing the data obtained by these methods.
Methods: We used simulated habitat and tracking data to compare several statistical methods frequently used to infer local resource selection and large-scale attraction/avoidance from tracking data. Notably, we compared spatial logistic regression models (SLRMs), spatio-temporal point process models (ST-PPMs), step selection models (SSMs), and integrated step selection models (iSSMs), and their interplay with habitat and animal movement properties, in terms of statistical hypothesis testing.
Results: We demonstrated that only iSSMs and ST-PPMs showed nominal type I error rates in all studied cases, whereas SSMs may slightly, and SLRMs may frequently and strongly, exceed these levels. iSSMs also showed, on average, more robust and higher statistical power than ST-PPMs.
Conclusions: Based on our results, we recommend using iSSMs to infer habitat selection or large-scale attraction/avoidance from animal tracking data. Further advantages over other approaches include short computation times, predictive capacity, and the possibility of deriving mechanistic movement models.
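As a toy illustration of what a "nominal type I error rate" means in this context, the Python sketch below (illustrative only, not taken from the paper, and much simpler than an iSSM) repeatedly simulates null datasets in which a habitat covariate is unrelated to where the animal goes, and checks that a two-sided test at the 5% level rejects in roughly 5% of trials. The paper's finding is that some methods, notably SLRMs, can exceed this nominal level badly once realistic movement and habitat autocorrelation are present.

```python
import numpy as np

rng = np.random.default_rng(42)

def empirical_type1_rate(n_trials=1000, n_used=50, n_avail=50):
    """Fraction of null datasets on which a two-sided z-test at the 5%
    level 'detects' habitat selection that is not actually there."""
    rejections = 0
    for _ in range(n_trials):
        # Covariate values at used and available points are drawn from
        # the same distribution: the animal ignores this covariate.
        used = rng.normal(size=n_used)
        avail = rng.normal(size=n_avail)
        se = np.sqrt(used.var(ddof=1) / n_used + avail.var(ddof=1) / n_avail)
        z = (used.mean() - avail.mean()) / se
        rejections += int(abs(z) > 1.96)
    return rejections / n_trials

rate = empirical_type1_rate()  # close to the nominal 0.05 in this i.i.d. case
```

An inflated method would return a rate well above 0.05 under the same null; a method with nominal type I error stays near it, which is the property the paper verifies for iSSMs and ST-PPMs.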


2019 ◽  
Vol 34 (5) ◽  
pp. 459-473 ◽  
Author(s):  
Graeme C. Hays ◽  
Helen Bailey ◽  
Steven J. Bograd ◽  
W. Don Bowen ◽  
Claudio Campagna ◽  
...  

2020 ◽  
Vol 10 (23) ◽  
pp. 13044-13056
Author(s):  
Ruben Evens ◽  
Greg Conway ◽  
Kirsty Franklin ◽  
Ian Henderson ◽  
Jennifer Stockdale ◽  
...  

Author(s):  
Tomasz Mrozewski

Movebank is a web-based data management tool and archiving repository for animal tracking data. Its specialised focus enables search and discovery tools tailored to users looking for animal tracking data. However, the site has some idiosyncrasies, and researcher uptake is not universal.


2019 ◽  
Vol 7 (1) ◽  
Author(s):  
Michael J. Noonan ◽  
Christen H. Fleming ◽  
Thomas S. Akre ◽  
Jonathan Drescher-Lehman ◽  
Eliezer Gurarie ◽  
...  

Abstract
Background: Speed and distance traveled provide quantifiable links between behavior and energetics, and are among the metrics most routinely estimated from animal tracking data. Researchers typically sum the straight-line displacements (SLDs) between sampled locations to quantify distance traveled, and estimate speed by dividing these displacements by time. Problematically, this approach is highly sensitive to the measurement scale, with biases subject to the sampling frequency, the tortuosity of the animal's movement, and the amount of measurement error. Compounding the issue of scale sensitivity, SLD estimates do not come equipped with confidence intervals to quantify their uncertainty.
Methods: To overcome the limitations of SLD estimation, we outline a continuous-time speed and distance (CTSD) estimation method. An inherent property of working in continuous time is the ability to separate the underlying continuous-time movement process from the discrete-time sampling process, making these models less sensitive to the sampling schedule when estimating parameters. The first step of CTSD is to estimate the device's error parameters to calibrate the measurement error. Once the errors have been calibrated, model selection techniques are employed to identify the best-fit continuous-time movement model for the data. A simulation-based approach is then employed to sample from the distribution of trajectories conditional on the data, from which the mean speed estimate and its confidence intervals can be extracted.
Results: Using simulated data, we demonstrate how CTSD provides accurate, scale-insensitive estimates with reliable confidence intervals. When applied to empirical GPS data, we found that SLD estimates varied substantially with sampling frequency, whereas CTSD provided relatively consistent estimates, often with dramatic improvements over SLD.
Conclusions: The methods described in this study allow for computationally efficient, scale-insensitive estimation of speed and distance traveled, without biases due to the sampling frequency, the tortuosity of the animal's movement, or the amount of measurement error. In addition to being robust to the sampling schedule, the point estimates come equipped with confidence intervals, permitting formal statistical inference. All the methods developed in this study are now freely available in a software package and in a point-and-click, web-based graphical user interface.
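The scale sensitivity of SLD that motivates this work is easy to demonstrate. The short Python sketch below (an illustration of the problem only, not the CTSD method) simulates a tortuous track of unit-length steps and shows that summed straight-line displacements shrink sharply when the same track is subsampled, because coarser sampling cuts the corners of the path:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate a tortuous track: 1000 unit-length steps in random directions.
angles = rng.uniform(0.0, 2.0 * np.pi, 1000)
x = np.cumsum(np.cos(angles))
y = np.cumsum(np.sin(angles))

def sld_distance(x, y, every=1):
    """Total straight-line displacement, keeping every `every`-th fix."""
    xs, ys = x[::every], y[::every]
    return np.hypot(np.diff(xs), np.diff(ys)).sum()

d_full = sld_distance(x, y, every=1)     # sums 999 unit steps: the true path length
d_coarse = sld_distance(x, y, every=10)  # far smaller: corners are cut between fixes
```

The same underlying movement thus yields very different SLD distance estimates depending purely on the sampling schedule, which is exactly the bias that a continuous-time approach such as CTSD is designed to avoid.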


Author(s):  
Jane Hunter ◽  
Charles Brooking ◽  
Wilfred Brimblecombe ◽  
Ross G. Dwyer ◽  
Hamish A. Campbell ◽  
...  
