Analysis of local habitat selection and large-scale attraction/avoidance based on animal tracking data: is there a single best method?

2020
Author(s):  
Moritz Mercker ◽  
Philipp Schwemmer ◽  
Verena Peschko ◽  
Leonie Enners ◽  
Stefan Garthe

Abstract Background: New wildlife telemetry and tracking technologies have become available in the last decade, leading to a large increase in the volume and resolution of animal tracking data. These technical developments have been accompanied by various statistical tools aimed at analysing the data obtained by these methods. Methods: We used simulated habitat and tracking data to compare some of the different statistical methods frequently used to infer local resource selection and large-scale attraction/avoidance from tracking data. Notably, we compared the performance of spatial logistic regression models (SLRMs), point process models (PPMs), and integrated step selection models ((i)SSMs) and their interplay with habitat, tracking-device, and animal movement properties. Results: We demonstrated that SLRMs were inappropriate for large-scale attraction studies and prone to bias when inferring habitat selection. In contrast, PPMs and (i)SSMs showed comparable (unbiased) performance for both habitat selection and large-scale effect studies. However, (i)SSMs had several advantages over PPMs with respect to robustness, user-friendly implementation, and computation time. Conclusions: We recommend the use of (i)SSMs to infer habitat selection or large-scale attraction/avoidance from animal tracking data. This method has several practical advantages over PPMs and additionally extends SSMs, thus increasing predictive capacity and allowing the derivation of mechanistic movement models.

2021
Vol 9 (1)
Author(s):  
Moritz Mercker ◽  
Philipp Schwemmer ◽  
Verena Peschko ◽  
Leonie Enners ◽  
Stefan Garthe

Abstract Background: New wildlife telemetry and tracking technologies have become available in the last decade, leading to a large increase in the volume and resolution of animal tracking data. These technical developments have been accompanied by various statistical tools aimed at analysing the data obtained by these methods. Methods: We used simulated habitat and tracking data to compare some of the different statistical methods frequently used to infer local resource selection and large-scale attraction/avoidance from tracking data. Notably, we compared spatial logistic regression models (SLRMs), spatio-temporal point process models (ST-PPMs), step selection models (SSMs), and integrated step selection models (iSSMs) and their interplay with habitat and animal movement properties in terms of statistical hypothesis testing. Results: We demonstrated that only iSSMs and ST-PPMs showed nominal type I error rates in all studied cases, whereas SSMs may slightly exceed these levels and SLRMs may exceed them frequently and strongly. iSSMs also showed, on average, more robust and higher statistical power than ST-PPMs. Conclusions: Based on our results, we recommend the use of iSSMs to infer habitat selection or large-scale attraction/avoidance from animal tracking data. Further advantages over other approaches include short computation times, predictive capacity, and the possibility of deriving mechanistic movement models.
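The matched used/available design behind (i)SSMs can be made concrete with a small simulation: each observed step is paired with randomly sampled available steps, and a habitat-selection coefficient is recovered by maximizing the conditional-logit likelihood. The sketch below is a minimal pure-Python illustration of that likelihood, not the authors' implementation; in practice, dedicated step-selection packages are used.

```python
import math
import random

def simulate_strata(n_strata=2000, n_avail=10, beta_true=1.0, seed=42):
    """Each stratum holds one used step and n_avail available steps;
    the used step is drawn with probability proportional to
    exp(beta_true * habitat covariate)."""
    rng = random.Random(seed)
    strata = []
    for _ in range(n_strata):
        x = [rng.uniform(-1.0, 1.0) for _ in range(n_avail + 1)]
        w = [math.exp(beta_true * xi) for xi in x]
        used = rng.choices(range(len(x)), weights=w, k=1)[0]
        strata.append((x, used))
    return strata

def fit_conditional_logit(strata, iters=25):
    """Newton-Raphson on l(b) = sum_s [b*x_used - log sum_j exp(b*x_j)]."""
    b = 0.0
    for _ in range(iters):
        grad, hess = 0.0, 0.0
        for x, used in strata:
            w = [math.exp(b * xi) for xi in x]
            tot = sum(w)
            mean = sum(wi * xi for wi, xi in zip(w, x)) / tot
            var = sum(wi * xi * xi for wi, xi in zip(w, x)) / tot - mean ** 2
            grad += x[used] - mean   # score contribution of this stratum
            hess -= var              # second derivative (negative)
        step = grad / hess
        b -= step
        if abs(step) < 1e-10:
            break
    return b

beta_hat = fit_conditional_logit(simulate_strata())
```

With 2000 strata, the estimate lands close to the simulated coefficient of 1.0.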


2020
Author(s):  
C. H. Fleming ◽  
J. Drescher-Lehman ◽  
M. J. Noonan ◽  
T. S. B. Akre ◽  
D. J. Brown ◽  
...  

Abstract Animal tracking data are being collected more frequently, in greater detail, and on smaller taxa than ever before. These data hold the promise to increase the relevance of animal movement for understanding ecological processes, but this potential will only be fully realized if their accompanying location error is properly addressed. Historically, coarsely-sampled movement data have proved invaluable for understanding large-scale processes (e.g., home range, habitat selection, etc.), but modern fine-scale data promise to unlock far more ecological information. While location error can often be ignored in coarsely sampled data, fine-scale data require much more care, and tools to do this have been lacking. Current approaches to dealing with location error largely fall into two categories: either discarding the least accurate location estimates prior to analysis or simultaneously fitting movement and error parameters in a hidden-state model. Unfortunately, both of these approaches have serious flaws. Here, we provide a general framework to account for location error in the analysis of animal tracking data, so that their potential can be unlocked. We apply our error-model-selection framework to 190 GPS, cellular, and acoustic devices representing 27 models from 14 manufacturers. Collectively, these devices are used to track a wide range of animal species comprising birds, fish, reptiles, and mammals of different sizes and with different behaviors, in urban, suburban, and wild settings. Then, using empirical data on tracked individuals from multiple species, we provide an overview of modern, error-informed movement analyses, including continuous-time path reconstruction, home-range distribution, home-range overlap, and speed and distance estimation. Adding to these techniques, we introduce new error-informed estimators for outlier detection and autocorrelation visualization.
We furthermore demonstrate how error-informed analyses on calibrated tracking data can be necessary to ensure that estimates are accurate and insensitive to location error, and to allow researchers to use all of their data. Because error-induced biases depend on many factors (sampling schedule, movement characteristics, tracking device, habitat, etc.), differential bias can easily confound biological inference and lead researchers to draw false conclusions.
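A concrete instance of the simple outlier screening that error-informed estimators improve on is a speed filter: flag any fix whose implied speed from the last retained fix exceeds a biologically plausible maximum. A minimal sketch with made-up coordinates and threshold:

```python
import math

def flag_speed_outliers(fixes, max_speed):
    """fixes: list of (t_seconds, x_m, y_m). Returns indices of fixes
    whose implied speed from the previous retained fix exceeds
    max_speed (m/s); flagged fixes are not used as reference points."""
    flagged = []
    last = fixes[0]
    for i in range(1, len(fixes)):
        t, x, y = fixes[i]
        dt = t - last[0]
        dist = math.hypot(x - last[1], y - last[2])
        if dt > 0 and dist / dt > max_speed:
            flagged.append(i)    # keep comparing against last good fix
        else:
            last = fixes[i]
    return flagged

# toy track: the third fix implies an impossible jump of ~50 m/s
track = [(0, 0.0, 0.0), (10, 5.0, 0.0), (20, 500.0, 0.0), (30, 15.0, 0.0)]
outliers = flag_speed_outliers(track, max_speed=2.0)
```

Because the reference point is the last retained fix, a single large error does not cause the following valid fix to be discarded as well.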


2010
Vol 365 (1550)
pp. 2233-2244
Author(s):  
John Fieberg ◽  
Jason Matthiopoulos ◽  
Mark Hebblewhite ◽  
Mark S. Boyce ◽  
Jacqueline L. Frair

With the advent of new technologies, animal locations are being collected at ever finer spatio-temporal scales. We review analytical methods for dealing with correlated data in the context of resource selection, including post hoc variance inflation techniques, ‘two-stage’ approaches based on models fit to each individual, generalized estimating equations, and hierarchical mixed-effects models. These methods are applicable to a wide range of correlated data problems, but can be difficult to apply and remain especially challenging for use–availability sampling designs because the correlation structure for combinations of used and available points is not likely to follow common parametric forms. We also review emerging approaches to studying habitat selection that use fine-scale temporal data to arrive at biologically based definitions of available habitat, while naturally accounting for autocorrelation by modelling animal movement between telemetry locations. Sophisticated analyses that explicitly model correlation rather than treat it as a nuisance, such as mixed-effects and state-space models, offer potentially novel insights into the process of resource selection, but additional work is needed to make them more generally applicable to large datasets based on use–availability designs. Until then, variance inflation techniques and two-stage approaches should offer pragmatic and flexible approaches to modelling correlated data.
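The ‘two-stage’ idea the review describes can be sketched in a few lines: fit a coefficient to each individual separately, then treat individuals (not fixes) as the unit of replication when averaging. The toy example below uses a least-squares slope as a stand-in for a fitted selection coefficient; all data and values are simulated for illustration only.

```python
import random

def ols_slope(xs, ys):
    """Closed-form least-squares slope for one individual's data."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    return sxy / sxx

def two_stage(individuals):
    """Stage 1: one coefficient per individual. Stage 2: mean and its
    standard error across individuals, so inference treats animals
    rather than (autocorrelated) fixes as replicates."""
    slopes = [ols_slope(xs, ys) for xs, ys in individuals]
    k = len(slopes)
    mean = sum(slopes) / k
    var = sum((s - mean) ** 2 for s in slopes) / (k - 1)
    return mean, (var / k) ** 0.5

rng = random.Random(1)
individuals = []
for _ in range(20):                  # 20 simulated animals
    b = rng.gauss(2.0, 0.3)          # individual-specific true slope
    xs = [rng.uniform(0.0, 1.0) for _ in range(50)]
    ys = [b * x + rng.gauss(0.0, 0.1) for x in xs]  # independent noise per fix
    individuals.append((xs, ys))
mean_slope, se_slope = two_stage(individuals)
```

The stage-2 standard error reflects between-individual variation, which is what makes this approach robust to within-individual correlation.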


Author(s):  
Natasha J. Klappstein ◽  
Jonathan Potts ◽  
Théo Michelot ◽  
Luca Börger ◽  
Nicholas Pilfold ◽  
...  

1. Energetics are a key driver of animal decision-making, as survival depends on the balance between foraging benefits and movement costs. This fundamental perspective is often missing from habitat selection studies, which mainly describe correlations between space use and environmental features, rather than the mechanisms behind these correlations. To address this gap, we present a new modelling framework, the energy selection function (ESF), to assess how moving animals choose habitat based on energetic considerations.

2. The ESF considers that the likelihood of an animal selecting a movement step depends directly on the corresponding energetic gains and costs. The parameters of the ESF measure selection for energetic gains and against energetic costs; when estimated jointly, these provide inferences about foraging and movement strategies. The ESF can be implemented easily with standard conditional logistic regression software, allowing for fast inference. We outline a workflow, from data-gathering to statistical analysis, and use a case study of polar bears (Ursus maritimus) as an illustrative example.

3. We show how defining gains and costs at the scale of the movement step allows us to include detailed information about resource distribution, landscape resistance, and movement patterns. We demonstrate this in the polar bear case study, in which the results show how cost-minimization may arise in species that inhabit environments with an unpredictable distribution of energetic gains.

4. The ESF combines the energetic consequences of both movement and resource selection, thus incorporating a key aspect of evolutionary behaviour into habitat selection analysis. Because of its close links to existing habitat selection models, the ESF is widely applicable to any study system where energetic gains and costs can be derived, and has immense potential for methodological extensions.
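The step-scale bookkeeping behind the ESF can be sketched as a data-prep routine: each observed step is paired with alternative candidate steps, and every candidate endpoint is annotated with an energetic gain and cost before conditional logistic regression. The gain and cost definitions below (resource value at the endpoint; distance times terrain resistance) are placeholder assumptions for illustration, not the authors' derivations.

```python
import math

def step_energetics(start, end, resource, resistance, cost_per_m=1.0):
    """Placeholder definitions: gain = resource value at the step
    endpoint; cost = step length x terrain resistance at the endpoint."""
    dist = math.hypot(end[0] - start[0], end[1] - start[1])
    gain = resource.get(end, 0.0)
    cost = cost_per_m * dist * resistance.get(end, 1.0)
    return gain, cost

def build_strata(steps, resource, resistance):
    """steps: list of (start, used_end, [alternative_ends]).
    Returns conditional-logit rows: (stratum_id, used_flag, gain, cost)."""
    rows = []
    for sid, (start, used, alts) in enumerate(steps):
        for end, flag in [(used, 1)] + [(a, 0) for a in alts]:
            gain, cost = step_energetics(start, end, resource, resistance)
            rows.append((sid, flag, gain, cost))
    return rows

# toy landscape: resource value and resistance on integer cells
resource = {(1, 0): 5.0, (0, 1): 1.0}
resistance = {(1, 0): 1.0, (0, 1): 3.0}
steps = [((0, 0), (1, 0), [(0, 1), (1, 1)])]
rows = build_strata(steps, resource, resistance)
```

The resulting rows are exactly the matched-strata layout that standard conditional logistic regression software expects, with gain and cost as the two covariates.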


2020
Author(s):  
Pratik Rajan Gupte ◽  
Christine E Beardsworth ◽  
Orr Spiegel ◽  
Emmanuel Lourie ◽  
Sivan Toledo ◽  
...  

Modern, high-throughput animal tracking studies collect increasingly large volumes of data at very fine temporal scales. At these scales, location error can exceed the animal's step size, confounding inferences from tracking data. Excluding positions with large location errors prior to analysis is one of the main ways movement ecologists deal with location error; such cleaning is widely recommended, and ecologists routinely treat cleaned data as the ground truth. Nonetheless, uniform guidance on this crucial step is scarce. Cleaning high-throughput data must reject location errors without discarding valid animal movements. Additionally, users of high-throughput systems face challenges resulting from the high volume of data itself, since processing large data volumes is computationally intensive and difficult without a common set of efficient tools. Furthermore, many methods that cluster movement tracks for ecological inference are based on statistical phenomena and may not be intuitive to interpret in terms of the tracked animal's biology. In this article we introduce a pipeline to pre-process high-throughput animal tracking data in order to prepare them for subsequent analysis. We demonstrate this pipeline on simulated movement data to which we have randomly added location errors. We further suggest how large volumes of cleaned data may be synthesized into biologically meaningful residence patches. We then use calibration data to show how the pipeline improves data quality, and to verify that the residence patch synthesis accurately captures animal space use. Finally, turning to real tracking data from Egyptian fruit bats (Rousettus aegyptiacus), we demonstrate the pre-processing pipeline and residence patch method in a fully worked-out example. To help with fast implementation of our pipeline, and to help standardise methods, we developed the R package atlastools, which we introduce here. Our pre-processing pipeline and atlastools can be used with any high-throughput animal movement data in which the high data volume, combined with knowledge of the tracked individuals' biology, can be used to reduce location errors. The use of common pre-processing steps that are simple yet robust promotes standardised methods in the field of movement ecology and better inferences from data.
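A stripped-down version of residence-patch synthesis might group consecutive cleaned fixes that stay within a distance threshold of the running patch centroid, discarding patches with too few points. This is an illustrative sketch only, with made-up thresholds; the actual atlastools method differs in its details.

```python
import math

def residence_patches(points, dist_threshold, min_points):
    """points: list of (x, y) in metres, in time order. Start a new
    patch whenever a fix is farther than dist_threshold from the
    centroid of the current patch; keep patches with >= min_points."""
    patches, current = [], [points[0]]
    for p in points[1:]:
        cx = sum(q[0] for q in current) / len(current)
        cy = sum(q[1] for q in current) / len(current)
        if math.hypot(p[0] - cx, p[1] - cy) <= dist_threshold:
            current.append(p)
        else:
            if len(current) >= min_points:
                patches.append(current)
            current = [p]
    if len(current) >= min_points:
        patches.append(current)
    return patches

# toy track: two tight clusters separated by transit, plus a lone fix
track = [(0, 0), (1, 0), (0, 1), (50, 50), (51, 50), (50, 51), (100, 0)]
patches = residence_patches(track, dist_threshold=5.0, min_points=3)
```

The min_points rule is what separates biologically meaningful residence from brief transit through a location.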


Author(s):  
Sarah Davidson ◽  
Gil Bohrer ◽  
Andrea Kölzsch ◽  
Candace Vinciguerra ◽  
Roland Kays

Movebank, a global platform for animal tracking and other animal-borne sensor data, is used by over 3,000 researchers globally to harmonize, archive and share nearly 3 billion animal occurrence records and more than 3 billion other animal-borne sensor measurements that document the movements and behavior of over 1,000 species. Movebank's publicly described data model (Kranstauber et al. 2011), vocabulary and application programming interfaces (APIs) provide services for users to automate data import and retrieval. Near-live data feeds are maintained in cooperation with over 20 manufacturers of animal-borne sensors, who provide data in agreed-upon formats for accurate data import. Data acquisition by API complies with public or controlled-access sharing settings, defined within the database by data owners. The Environmental Data Automated Track Annotation System (EnvDATA, Dodge et al. 2013) allows users to link animal tracking data with hundreds of environmental parameters from remote sensing and weather reanalysis products through the Movebank website, and offers an API for advanced users to automate the submission of annotation requests. Movebank's mobile apps, the Animal Tracker and Animal Tagger, use APIs to support reporting and monitoring while in the field, as well as communication with citizen scientists. The recently launched MoveApps platform connects with Movebank data using an API to allow users to build, execute and share repeatable workflows for data exploration and analysis through a user-friendly interface. A new API, currently under development, will allow calls to retrieve data from Movebank filtered according to criteria defined by "reduction profiles", which can greatly reduce the volume of data transferred for many use cases. In addition to making this core set of Movebank services possible, Movebank's APIs enable the development of external applications, including the widely used R programming packages 'move' (Kranstauber et al. 2012) and 'ctmm' (Calabrese et al. 2016), and user-specific workflows to efficiently execute collaborative analyses and automate tasks such as syncing with local organizational and governmental websites and archives. The APIs also support large-scale data acquisition, including for projects under development to visualize, map and analyze bird migrations led by the British Trust for Ornithology, the coordinating organisation for European bird ringing (banding) schemes (EURING), Georgetown University, National Audubon Society, Smithsonian Institution and United Nations Convention on Migratory Species. Our API development is constrained by a lack of standardization in data reporting across animal-borne sensors and a need to ensure adequate communication with data users (e.g., how to properly interpret data; expectations for use and attribution) and data owners (e.g., who is using publicly-available data and how) when allowing automated data access. As interest in data linking, harvesting, mirroring and integration grows, we recognize needs to coordinate API development across animal tracking and biodiversity databases, and to develop a shared system for unique organism identifiers. Such a system would allow linking of information about individual animals within and across repositories and publications in order to recognize data for the same individuals across platforms, retain provenance and attribution information, and ensure beneficial and biologically meaningful data use.
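As an illustration of API-driven access, a client can construct a request against Movebank's direct-read endpoint and parse the CSV it returns. The endpoint and parameter names below follow Movebank's public API description but should be verified against the current documentation; no network call is made here, and the response is a stub rather than live data.

```python
import csv
import io
from urllib.parse import urlencode

BASE = "https://www.movebank.org/movebank/service/direct-read"

def build_study_request(study_id):
    """URL for one study's metadata (parameter names per the public
    Movebank API description; verify against current documentation)."""
    return BASE + "?" + urlencode({"entity_type": "study", "id": study_id})

def parse_csv_response(text):
    """Direct-read responses are CSV with a header row; return dicts."""
    return list(csv.DictReader(io.StringIO(text)))

url = build_study_request(123456)

# stub response standing in for what a live call would return
stub = "id,name,principal_investigator_name\n123456,Demo study,J. Doe\n"
records = parse_csv_response(stub)
```

A real client would also handle authentication and the controlled-access sharing settings described above before issuing the request.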


2018
Author(s):  
Pavel Pokhilko ◽  
Evgeny Epifanovsky ◽  
Anna I. Krylov

Using single-precision floating-point representation reduces the size of data and computation time by a factor of two relative to the double precision conventionally used in electronic structure programs. For large-scale calculations, such as those encountered in many-body theories, the reduced memory footprint alleviates memory and input/output bottlenecks. Reduced data size can lead to additional gains due to improved parallel performance on CPUs and various accelerators. However, using single precision can potentially reduce the accuracy of computed observables. Here we report an implementation of coupled-cluster and equation-of-motion coupled-cluster methods with single and double excitations in single precision. We consider both the standard implementation and one using Cholesky decomposition or resolution-of-the-identity representations of the electron-repulsion integrals. Numerical tests illustrate that when single precision is used in correlated calculations, the loss of accuracy is insignificant, and a pure single-precision implementation can be used for computing energies, analytic gradients, excited states, and molecular properties. In addition to pure single-precision calculations, our implementation allows one to follow a single-precision calculation by clean-up iterations, fully recovering double-precision results while retaining significant savings.
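The precision trade-off can be demonstrated without any electronic-structure machinery: keeping an accumulator in single precision halves the storage per number but accumulates rounding error that double precision avoids. The sketch below emulates float32 with struct round-trips; it is a generic illustration of the effect, not the authors' implementation.

```python
import struct

def to_f32(x):
    """Round a Python float (binary64) to the nearest binary32 value."""
    return struct.unpack("f", struct.pack("f", x))[0]

def naive_sum(values, single=False):
    """Sequential accumulation; optionally round the accumulator to
    single precision after every addition."""
    total = 0.0
    for v in values:
        total += v
        if single:
            total = to_f32(total)
    return total

values = [0.1] * 100_000
exact = 10_000.0
err64 = abs(naive_sum(values) - exact)               # double precision
err32 = abs(naive_sum(values, single=True) - exact)  # emulated single

# single precision stores each number in half the bytes
bytes32, bytes64 = struct.calcsize("f"), struct.calcsize("d")
```

The clean-up strategy described above exploits exactly this asymmetry: the cheap single-precision pass does the bulk of the work, and a few double-precision iterations remove the accumulated error.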


Author(s):  
Irina Gaus ◽  
Klaus Wieczorek ◽  
Juan Carlos Mayor ◽  
Thomas Trick ◽  
José-Luis García Siñeriz ◽  
...  

The evolution of the engineered barrier system (EBS) of geological repositories for radioactive waste has been the subject of many research programmes during the last decade. The emphasis of these research activities was on elaborating a detailed understanding of the complex thermo-hydro-mechanical-chemical (THM-C) processes expected to evolve in the near field during the early post-closure period. It is important to understand the coupled THM-C processes and their evolution in the EBS during this phase so that it can be confirmed that the safety functions will be fulfilled. In particular, it must be ensured that interactions during the resaturation phase (heat pulse, gas generation, non-uniform water uptake from the host rock) do not affect the performance of the EBS in terms of its safety-relevant parameters (e.g. swelling pressure, hydraulic conductivity, diffusivity). The 7th Framework PEBS project (Long Term Performance of Engineered Barrier Systems) aims at providing in-depth process understanding for constraining the conceptual and parametric uncertainties in the context of long-term safety assessment. As part of the PEBS project, a series of laboratory and URL experiments is envisaged to describe the EBS behaviour after repository closure, when resaturation is taking place. In this paper the very early post-closure period is targeted, when the EBS is subjected to high temperatures and unsaturated conditions with a low but increasing moisture content. So far, the detailed thermo-hydraulic behaviour of a bentonite EBS in a clay host rock has not been evaluated at a large scale in response to temperatures of up to 140°C at the canister surface, produced by HLW (and spent fuel), as anticipated in some of the designs considered. Furthermore, earlier THM experiments have shown that upscaling of thermal conductivity and its dependence on water content and/or humidity from the laboratory scale to the field scale needs further attention.
This early post-closure thermal behaviour will be elucidated by the HE-E experiment, a 1:2-scale heating experiment set up at the Mont Terri rock laboratory that started in June 2011. It will characterise in detail the thermal conductivity at a large scale in both pure bentonite and a bentonite-sand mixture, as well as in the Opalinus Clay host rock. The HE-E experiment is specifically designed as a large-scale model validation experiment, and a modelling programme was launched in parallel to the different experimental steps. Scoping calculations were run to support the experimental design, and prediction exercises taking the final design into account are foreseen. Calibration and prediction/validation will follow, making use of the obtained THM dataset. This benchmarking of THM process models and codes should enhance confidence in the predictive capability of the recently developed numerical tools. The ultimate aim is to be able to extrapolate the key parameters that might influence the fulfilment of the safety functions defined for the long-term steady state.
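Scoping calculations of the kind mentioned can start from something as simple as an explicit finite-difference solution of 1D heat conduction, dT/dt = alpha * d2T/dx2, with the explicit-scheme stability limit checked up front. The material values below are generic placeholders for illustration, not PEBS or HE-E parameters.

```python
def heat_1d(alpha, dx, dt, t_end, n_cells, t_hot, t_ambient):
    """Explicit FTCS scheme for 1D conduction; requires
    r = alpha*dt/dx**2 <= 0.5 for stability. Heater boundary held at
    t_hot, far boundary held at t_ambient."""
    r = alpha * dt / dx ** 2
    assert r <= 0.5, "explicit scheme unstable for this dt/dx"
    T = [t_ambient] * n_cells
    T[0] = t_hot
    for _ in range(int(t_end / dt)):
        new = T[:]                       # boundaries stay fixed
        for i in range(1, n_cells - 1):
            new[i] = T[i] + r * (T[i + 1] - 2 * T[i] + T[i - 1])
        T = new
    return T

# placeholder diffusivity ~3e-7 m^2/s over a 1 m buffer, ~58 days
profile = heat_1d(alpha=3e-7, dx=0.05, dt=1000.0, t_end=5e6,
                  n_cells=21, t_hot=140.0, t_ambient=15.0)
```

Even a sketch like this shows why the thermal conductivity (through the diffusivity) and its dependence on water content dominate the predicted temperature profile, which is what the HE-E benchmarking exercise probes at full scale.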

