Ecological inference using data from accelerometers needs careful protocols

2021
Author(s): Baptiste Garde, Rory P Wilson, Adam Fell, Nik Cole, Vikash Tatayah, et al.

1. Accelerometers in animal-attached tags have proven to be powerful tools in behavioural ecology, being used to determine behaviour and provide proxies for movement-based energy expenditure. Researchers are collecting and archiving data across systems, seasons and device types. However, in order to use data repositories to draw ecological inference, we need to establish the error introduced according to sensor type and position on the study animal, and establish protocols for error assessment and minimization.
2. Using laboratory trials, we examine the absolute accuracy of tri-axial accelerometers and determine how inaccuracies impact measurements of dynamic body acceleration (DBA), the main acceleration-based proxy for energy expenditure. We then examine how tag type and placement affect the acceleration signal in birds using (i) pigeons Columba livia flying in a wind tunnel, with tags mounted simultaneously in two positions, and (ii) back- and tail-mounted tags deployed on wild kittiwakes Rissa tridactyla. Finally, we (iii) present a case study where two generations of tag were deployed using different attachment procedures on red-tailed tropicbirds Phaethon rubricauda foraging in different seasons.
3. Bench tests showed that individual acceleration axes required a two-level correction (representing up to 4.3% of the total value) to eliminate measurement error. This resulted in DBA differences of up to 5% between calibrated and uncalibrated tags for humans walking at different speeds. Device position was associated with greater variation in DBA, with upper- and lower-back-mounted tags in pigeons varying by 9%, and tail- and back-mounted tags varying by 13% in kittiwakes. Finally, DBA varied by 25% in tropicbirds between seasons, which may be attributable to tag attachment procedures.
4. Accelerometer accuracy, tag placement and attachment details critically affect the signal amplitude and thereby the ability of the system to detect biologically meaningful phenomena. We propose a simple method to calibrate accelerometers that should be used prior to deployments and archived with the resulting data, suggest a way for researchers to assess accuracy in previously collected data, and caution that variable tag placement and attachment can increase sensor noise and even generate trends that have no biological meaning.
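As a rough illustration of the calibration-then-DBA workflow described above, the Python sketch below applies a per-axis offset-and-gain correction (one plausible form of a two-level correction) and computes ODBA by subtracting a running-mean estimate of the static, gravitational component. The function names, window length, sampling rate and synthetic data are illustrative assumptions, not the authors' protocol.

```python
import numpy as np

def calibrate(raw, offset, gain):
    """Per-axis offset-and-gain correction for raw tri-axial acceleration.

    raw    : (n, 3) array of raw acceleration in g
    offset : length-3 per-axis bias (from a bench calibration)
    gain   : length-3 per-axis scale factor
    """
    return (raw - offset) * gain

def odba(acc, fs, window_s=2.0):
    """Overall dynamic body acceleration (ODBA): the static (gravitational)
    component is estimated with a running mean over window_s seconds and
    subtracted; ODBA sums the absolute dynamic components over the axes."""
    w = max(1, int(window_s * fs))
    kernel = np.ones(w) / w
    static = np.column_stack(
        [np.convolve(acc[:, i], kernel, mode="same") for i in range(3)]
    )
    return np.abs(acc - static).sum(axis=1)

# Synthetic example at 40 Hz: a 1 g gravity signal plus movement-like noise
rng = np.random.default_rng(0)
raw = np.column_stack([rng.normal(m, 0.2, 4000) for m in (0.0, 0.0, 1.0)])
acc = calibrate(raw, offset=np.array([0.02, -0.01, 0.03]),
                gain=np.array([1.01, 0.99, 1.02]))
print(odba(acc, fs=40).mean())
```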

2015, Vol 770, pp. 540-546
Author(s): Yuri Eremenko, Dmitry Poleshchenko, Anton Glushchenko

The use of modern intelligent information-processing methods to evaluate the filling level of a ball mill is considered. For this purpose, the vibration acceleration signal was measured on a laboratory mill model with an accelerometer attached to a mill pin. We conclude that the mill filling level cannot be determined from the amplitude of this signal alone, so its spectrum, processed by a neural network, is used instead. A training set for the neural network is formed with the help of spectral analysis methods, and the trained network finds the correlation between the mill pin vibration acceleration signal and the mill filling level. A test set, formed from data not included in the training set, is used to evaluate the network's ability to estimate the mill filling degree. The neural network estimates the mill filling level with no more than 7% error.
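A hedged sketch of this kind of pipeline (not the authors' implementation): band-averaged FFT amplitudes serve as spectral features, and a small multilayer perceptron regresses the filling level on them. The data here are synthetic stand-ins; in the study, each segment would be a vibration acceleration recording from the mill pin at a known filling level.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def spectrum_features(signal, n_bands=32):
    """Average the one-sided amplitude spectrum into coarse frequency bands."""
    amp = np.abs(np.fft.rfft(signal * np.hanning(len(signal))))
    return np.array([b.mean() for b in np.array_split(amp, n_bands)])

# Synthetic stand-in data: one vibration segment per known fill level (%)
rng = np.random.default_rng(0)
fill_levels = rng.uniform(10, 90, size=200)
segments = [rng.normal(0, 1 + f / 100, 4096) for f in fill_levels]

X = np.stack([spectrum_features(s) for s in segments])
model = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000).fit(X, fill_levels)
print(model.predict(X[:3]))  # estimated filling levels for the first segments
```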


2018, Vol 14 (1), pp. 43-50
Author(s): Anna Fitzpatrick, Joseph A Stone, Simon Choppin, John Kelley

Performance analysis and identifying performance characteristics associated with success are of great importance to players and coaches in any sport. However, while large amounts of data are available within elite tennis, very few players employ an analyst or attempt to exploit the data to enhance their performance; this is partly attributable to the considerable time and complex techniques required to interpret these large datasets. Using data from the 2016 and 2017 French Open tournaments, we tested the agreement between the results of a simple new method for identifying important performance characteristics (the Percentage of matches in which the Winner Outscored the Loser, PWOL) and the results of two standard statistical methods, to establish the validity of the simple method. Spearman's rank-order correlations between the results of the three methods demonstrated excellent agreement, with all methods identifying the same three performance characteristics (points won of 0–4 rally length, baseline points won and first serve points won) as strongly associated with success. Consequently, we propose that the PWOL method is valid for identifying performance characteristics associated with success in tennis, and is therefore a suitable alternative to more complex statistical methods, as it is simpler to calculate, interpret and contextualise.
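The PWOL statistic itself is simple to compute. A minimal Python sketch, assuming per-match totals of a characteristic for the match winner and loser (the data layout is an assumption, not the authors' format):

```python
def pwol(matches, stat):
    """Percentage of matches in which the Winner Outscored the Loser (PWOL)
    on a given performance characteristic."""
    outscored = sum(1 for m in matches if m["winner"][stat] > m["loser"][stat])
    return 100.0 * outscored / len(matches)

# Hypothetical per-match totals for one characteristic
matches = [
    {"winner": {"baseline_points_won": 31}, "loser": {"baseline_points_won": 24}},
    {"winner": {"baseline_points_won": 22}, "loser": {"baseline_points_won": 27}},
    {"winner": {"baseline_points_won": 35}, "loser": {"baseline_points_won": 30}},
]
print(pwol(matches, "baseline_points_won"))  # ~66.7: winner outscored loser in 2 of 3
```

Characteristics with PWOL well above 50% are those on which outscoring the opponent most consistently coincides with winning the match.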


1980, Vol 48 (3), pp. 518-522
Author(s): W. N. Stainsby, L. B. Gladden, J. K. Barclay, B. A. Wilson

In evaluating the efficiency of humans performing exercise, baseline subtractions have been used in an attempt to determine the efficiency of the muscles in performing the external work. Although baselines have been criticized previously, they have been widely used without adequate analysis of the implications involved. Calculations of efficiencies using data available in the literature for isolated muscle preparations revealed that baseline subtractions result in unreasonably high efficiencies. This strongly suggests that the baselines are invalid. To be valid, a baseline must continue unchanged under all the conditions in which it is applied. Previously published data indicate clearly that exercise baselines change with increasing work rate and are therefore invalid. The use of baselines is further complicated by elastic energy storage in some types of exercise. Although exercise efficiencies computed with baseline subtractions may be useful, they do not indicate muscle efficiency. Perhaps future studies of exercise metabolism should be directed less at refining baselines and more toward describing and quantifying the determinants of energy expenditure.
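A toy calculation shows why baseline subtraction can produce implausible efficiencies. With made-up but representative numbers, subtracting a resting baseline that is large relative to the work-induced increment leaves a tiny denominator:

```python
# Illustrative (made-up) numbers: external work rate 20 W, gross energy
# expenditure during exercise 120 W, resting "baseline" 100 W.
work = 20.0
gross = 120.0
baseline = 100.0

gross_efficiency = work / gross             # 0.17, i.e. about 17%
net_efficiency = work / (gross - baseline)  # 1.00, i.e. 100%
print(gross_efficiency, net_efficiency)
```

If the true baseline rises with work rate, the subtracted resting value is too large, and the computed "muscle" efficiency exceeds anything measured in isolated muscle preparations.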


2019, Vol 8 (1), pp. 42-53
Author(s): Audhi Ahmad Balya, Marcella Alika Hutabarat, Djoni Hartono

The main objectives of this study are to check whether Indonesian households suffer from energy poverty, to determine their access to modern energy sources (LPG and electricity), and to quantify the energy cost burden that Indonesian households must bear. Using data from SUSENAS 2014, the research employs descriptive statistics and cross-section OLS to achieve these objectives. We find that no island cluster in Indonesia suffers from an excessive energy cost burden. There are, however, differences in access to modern energy and in its relation to energy expenditure, especially in Maluku and Papua.
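A sketch of the cross-section OLS step, under assumed variable names (the abstract does not specify the regressors used); the data here are synthetic stand-ins for SUSENAS microdata:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical household variables; not the study's actual specification.
rng = np.random.default_rng(1)
n = 500
df = pd.DataFrame({
    "log_income": rng.normal(15, 1, n),
    "household_size": rng.integers(1, 8, n),
    "urban": rng.integers(0, 2, n),
})
df["energy_share"] = 0.3 - 0.015 * df["log_income"] + rng.normal(0, 0.02, n)

# Cross-section OLS of the household energy cost burden on characteristics
model = smf.ols("energy_share ~ log_income + household_size + urban", data=df).fit()
print(model.summary())
```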


Web Services, 2019, pp. 803-821
Author(s): Thiago Poleto, Victor Diogho Heuer de Carvalho, Ana Paula Cabral Seixas Costa

Big Data represents either a radical shift or an incremental change for existing digital infrastructures, which include the toolset used to aid the decision-making process, such as information systems, data repositories, formal modeling, and analysis of decisions. This work aims to provide a theoretical approach to the elements necessary to apply the big data concept in the decision-making process. It identifies key components of big data in order to define an integrated model of decision making using data mining, business intelligence, decision support systems, and organizational learning, all working together to provide decision support with a reliable visualization of decision-related opportunities. The concepts of data integration and semantics are also explored in order to demonstrate that, once mined, data must be integrated, ensuring conceptual connections and conveying meaning so that the data can be used appropriately for problem solving in decision making.


2018, Vol 2018, pp. 1-7
Author(s): Sang Hoon Jang, Hyung Jin Shim

A simple method using the time-dependent Monte Carlo (TDMC) neutron transport calculation is presented to determine an effective detector position for measuring the prompt neutron decay constant (α) in a pulsed-neutron-source (PNS) experiment. In the proposed method, the optimum detector position is found by comparing the amplitudes of detector signals at different positions once their α estimates from slope fitting have converged. The method is applied to the Pb-Bi-zoned ADS experimental benchmark at the Kyoto University Critical Assembly. The α convergence time estimated by the TDMC PNS simulation agrees well with the experimental results. The α convergence time map and the corresponding signal amplitude map predicted by the method show that polyethylene moderator regions adjacent to the fuel region are better positions for the PNS α measurement than the other candidates.
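The slope-fitting step is a log-linear fit to the decaying part of the PNS time histogram. A minimal sketch, with synthetic counts standing in for the TDMC or measured detector signal:

```python
import numpy as np

def fit_alpha(t, counts):
    """Estimate the prompt neutron decay constant alpha by a log-linear
    slope fit, assuming counts ~ A * exp(-alpha * t) on the fit window.

    t      : bin centres (s) within the fitting window after the pulse
    counts : detector counts per bin (background already subtracted)
    """
    slope, _ = np.polyfit(t, np.log(counts), 1)
    return -slope

# Synthetic PNS time histogram with an illustrative decay constant
alpha_true = 150.0                    # 1/s, illustrative only
t = np.linspace(0.001, 0.02, 50)      # s after the pulse
counts = 1e5 * np.exp(-alpha_true * t)
print(fit_alpha(t, counts))           # ~150
```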


2007, Vol 16 (07n08), pp. 2476-2483
Author(s): Ming Shao, Liang Li

A Time-Of-Flight (TOF) system based on Multi-gap Resistive Plate Chamber (MRPC) detectors has been operating successfully at the STAR experiment since 2003 [2,3]. The MRPC time resolution, however, is found to be significantly worse (80-90 ps) [2] than that previously obtained in a beam test (60 ps) [4]. In order to fully understand MRPC working principles and operating requirements, an extensive calibration study is performed using data collected by STAR in 200 GeV Au + Au collisions in 2004. The relations between MRPC timing, signal amplitude, incident angle and momentum are discussed. Contributions from the tracking properties of the STAR-TPC are also studied by simulation. The intrinsic time resolution of the MRPCs used in STAR-TOF, after taking all factors into consideration, is found to be in good agreement with the beam test results.
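One standard ingredient of such a timing calibration is a slewing (time-walk) correction, in which the amplitude-dependent timing shift is fitted and removed. The sketch below is a generic version of that idea, not the STAR procedure; the polynomial form and variables are assumptions:

```python
import numpy as np

def slewing_correction(dt, adc, degree=3):
    """Fit the timing residual as a polynomial in pulse amplitude and
    subtract it (time-walk removal). In practice the calibration is
    iterated and performed in bins of amplitude.

    dt  : measured time minus expected time from tracking (ns)
    adc : pulse amplitude (e.g. charge or time-over-threshold) per hit
    """
    coeffs = np.polyfit(adc, dt, degree)
    return dt - np.polyval(coeffs, adc)

# Synthetic hits: amplitude-dependent walk plus Gaussian jitter
rng = np.random.default_rng(2)
adc = rng.uniform(5, 100, 1000)
dt = 0.4 / np.sqrt(adc) + rng.normal(0, 0.08, 1000)     # ns
print(np.std(dt), np.std(slewing_correction(dt, adc)))  # resolution improves
```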


2005, Vol 42, pp. 269-276
Author(s): Mariusz Grabiec

Winter precipitation in the form of snow is the major factor determining accumulation on Arctic glaciers. In this paper, I present a simple method to assess snow accumulation on the glaciers of Svalbard. I deduce snow accumulation from the sum of winter precipitation and the fraction of precipitation of different types at a reference weather station. The accumulation is then converted to a relevant point on the glacier, using an accumulation gradient and a location coefficient. I apply this algorithm of accumulation assessment to eight glaciers of southern and central Spitsbergen using data from 23 seasons. On the basis of measured accumulation data, the mean error of the calculated accumulation, with no distinction of precipitation types, amounted to 23%. When the distinction between precipitation types is used for glaciers of southern Spitsbergen, the average error of estimation was 19%. Errors result from factors influencing accumulation distribution over the glacier elevation profile (e.g. glacier topography, orography of its surroundings, precipitation inversion). Application of this accumulation algorithm may provide a crucial method of estimating mass balance for glaciers not included in permanent monitoring.
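The abstract does not give the exact functional form, but one plausible reading of the algorithm is sketched below: station winter precipitation is reduced to its accumulation-relevant (snow) fraction, scaled along an elevation gradient, and adjusted by a location coefficient. All names and the formula itself are assumptions for illustration:

```python
def accumulation_at_point(winter_precip, snow_fraction, gradient,
                          z_point, z_ref, location_coef):
    """One plausible reading of the accumulation conversion.

    winter_precip : winter precipitation total at the station (mm w.e.)
    snow_fraction : fraction of that precipitation falling as snow
    gradient      : fractional change in accumulation per metre of elevation
    z_point, z_ref: elevations (m a.s.l.) of the glacier point and station
    location_coef : dimensionless correction for the glacier's setting
    """
    base = winter_precip * snow_fraction
    return location_coef * base * (1.0 + gradient * (z_point - z_ref))

# Illustrative values only
print(accumulation_at_point(400.0, 0.8, 0.001, 450.0, 10.0, 1.1))
```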


2017, Vol 9 (1), pp. 16-31
Author(s): Thiago Poleto, Victor Diogho Heuer de Carvalho, Ana Paula Cabral Seixas Costa

Big Data represents either a radical shift or an incremental change for existing digital infrastructures, which include the toolset used to aid the decision-making process, such as information systems, data repositories, formal modeling, and analysis of decisions. This work aims to provide a theoretical approach to the elements necessary to apply the big data concept in the decision-making process. It identifies key components of big data in order to define an integrated model of decision making using data mining, business intelligence, decision support systems, and organizational learning, all working together to provide decision support with a reliable visualization of decision-related opportunities. The concepts of data integration and semantics are also explored in order to demonstrate that, once mined, data must be integrated, ensuring conceptual connections and conveying meaning so that the data can be used appropriately for problem solving in decision making.


2005, Vol 2005, pp. 16-16
Author(s): B. J. Tolkamp, J. M. Yearsley, I. Kyriazakis

Food intake (FI) can be predicted on the basis of variables that describe food quality and the animal. Live weight (LW) is usually the only variable that is used to describe the animal. Animal fatness, as estimated by condition score (CS), can affect FI at a given LW. Body lipid produces signals (leptin) that affect energy intake and energy expenditure. If fatness acts on intake via its effect on energy expenditure, the effects of body lipid content on food intake can be incorporated into an existing intake model. Our objectives were to construct and test models that predict effects of fatness on intake and performance, using data obtained with ewe lambs to parameterise and test the models.

