A ‘How-to’ Guide for Interpreting Parameters in Resource- and Step-Selection Analyses

2020 ◽  
Author(s):  
John Fieberg ◽  
Johannes Signer ◽  
Brian Smith ◽  
Tal Avgar

Abstract
Resource-selection and step-selection analyses allow researchers to link animals to their environment and are commonly used to address questions related to wildlife management and conservation efforts. Step-selection analyses that incorporate movement characteristics, referred to as integrated step-selection analyses, are particularly appealing because they allow modeling of both movement and habitat-selection processes.
Despite their popularity, many users struggle with interpreting parameters in resource-selection and step-selection functions. Integrated step-selection analyses also require several additional steps to translate model parameters into a full-fledged movement model, and the mathematics supporting this approach can be challenging for biologists to understand.
Using simple examples, we demonstrate how weighted distribution theory and the inhomogeneous Poisson point-process model can facilitate parameter interpretation in resource-selection and step-selection analyses. Further, we provide a “how to” guide illustrating the steps required to implement integrated step-selection analyses using the amt package.
By providing clear examples with open-source code, we hope to make resource-selection and integrated step-selection analyses more understandable and accessible to end users.
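The amt workflow referenced above is implemented in R; as a language-neutral illustration of the weighted-distribution interpretation of selection coefficients, the sketch below shows how a fitted coefficient β translates into a relative selection strength exp(β Δx). The covariate name and coefficient value are hypothetical, not from the paper.

```python
import numpy as np

def relative_selection_strength(beta, x1, x2):
    """Under an exponential habitat-selection function w(x) = exp(beta * x),
    the ratio of use between two locations with covariate values x1 and x2
    is exp(beta * (x1 - x2)), independent of availability."""
    return np.exp(beta * (x1 - x2))

# Hypothetical example: beta = 0.5 for a forest-cover covariate means a
# location with cover 4 is used exp(0.5 * 2) ~ 2.72 times as often as an
# otherwise identical location with cover 2.
```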

2015 ◽  
Vol 2015 ◽  
pp. 1-10 ◽  
Author(s):  
Xiao Zhang ◽  
Hongduo Zhao

The objective of this paper is to investigate the characterization of moisture diffusion inside early-age concrete slabs subjected to curing. Time-dependent relative humidity (RH) distributions of three mixture proportions subjected to three different curing methods (i.e., air curing, water curing, and membrane-forming compound curing) and a sealed condition were measured for 28 days. A one-dimensional nonlinear moisture diffusion partial differential equation (PDE) based on Fick’s second law, which incorporates the effect of curing into the Dirichlet boundary condition through the concept of a curing factor, is developed to simulate the diffusion process. Model parameters are calibrated with a genetic algorithm (GA). Experimental results show that the rate of RH reduction inside concrete under air curing is greater than the rates under membrane-forming compound curing and water curing. The effect of the water-to-cement (w/c) ratio on self-desiccation is significant: a lower w/c ratio tends to result in a larger RH reduction. RH reduction accounting for both diffusion and self-desiccation in early-age concrete is not sensitive to the w/c ratio, but is sensitive to the curing method. Comparison between model simulations and experimental results indicates that the improved model is able to reflect the effect of curing on moisture diffusion in early-age concrete slabs.
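A minimal explicit finite-difference sketch of the kind of 1-D Fick diffusion model described above. The diffusivity, slab depth, and the way the curing factor enters the Dirichlet surface boundary are illustrative assumptions, not the paper's calibrated values.

```python
import numpy as np

def simulate_rh(D=1e-5, depth=0.1, nx=21, dt=0.5, steps=2000,
                H0=1.0, H_ambient=0.6, curing_factor=0.5):
    """Explicit finite-difference solution of dH/dt = D * d2H/dx2 for
    relative humidity H(x, t) in a slab.  The exposed surface is a
    Dirichlet boundary pulled toward ambient RH, moderated by a
    hypothetical curing factor in [0, 1] (1 = fully sealed, 0 = bare);
    the bottom is treated as sealed (zero flux)."""
    dx = depth / (nx - 1)
    r = D * dt / dx**2
    assert r <= 0.5, "explicit scheme unstable; reduce dt or increase dx"
    H = np.full(nx, H0)
    H_surface = H_ambient + curing_factor * (H0 - H_ambient)
    for _ in range(steps):
        H[0] = H_surface                                  # cured surface
        H[1:-1] = H[1:-1] + r * (H[2:] - 2 * H[1:-1] + H[:-2])
        H[-1] = H[-2]                                     # no-flux bottom
    return H
```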


2018 ◽  
Vol 2018 ◽  
pp. 1-11 ◽  
Author(s):  
Saša Milojević ◽  
Radivoje Pešić

The compression ratio has a very important influence on the fuel economy, emissions, and other performance characteristics of internal combustion engines. Application of a variable compression ratio in diesel engines has a number of benefits, such as limiting the maximal in-cylinder pressure and extending the optimal operating regime with respect to the prime requirements: consumption, power, emissions, noise, and multifuel capability. The manuscript also presents a patented mechanism for automatically changing the engine compression ratio using a two-piece connecting rod. Besides experimental research, modeling of the combustion process of a direct-injection diesel engine has been performed. The basic problem, selection of the parameters of the double Vibe function used to model the diesel combustion process, was also solved for different compression ratio values. The optimal compression ratio was defined with respect to minimal fuel consumption and exhaust emissions. For this purpose, the test bench in the Laboratory for Engines of the Faculty of Engineering, University of Kragujevac, was brought into operation.
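The double Vibe (Wiebe) function mentioned above superposes a premixed and a diffusion burn phase; a minimal sketch follows, where every parameter value in the example is illustrative rather than a calibrated value from the study.

```python
import numpy as np

def wiebe(theta, theta0, dtheta, a=6.9, m=2.0):
    """Single Wiebe mass-fraction-burned curve over crank angle theta:
    x_b = 1 - exp(-a * ((theta - theta0) / dtheta)^(m + 1))."""
    x = np.clip((theta - theta0) / dtheta, 0.0, None)
    return 1.0 - np.exp(-a * x ** (m + 1))

def double_wiebe(theta, frac_premixed, premixed, diffusion):
    """Double Vibe function: weighted sum of a premixed and a diffusion
    phase.  `premixed`/`diffusion` are (theta0, dtheta, a, m) tuples."""
    return (frac_premixed * wiebe(theta, *premixed)
            + (1.0 - frac_premixed) * wiebe(theta, *diffusion))

# Illustrative phases: a short premixed burn starting at 350 deg CA and a
# longer diffusion burn starting at 355 deg CA, 30 % premixed fraction.
```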


1993 ◽  
Vol 28 (11-12) ◽  
pp. 163-171 ◽  
Author(s):  
Weibo (Weber) Yuan ◽  
David Okrent ◽  
Michael K. Stenstrom

A model calibration algorithm is developed for the high-purity oxygen activated sludge process (HPO-ASP). The algorithm is evaluated under different conditions to determine the effect of the following factors on its performance: data quality, number of observations, and number of parameters to be estimated. The process model used in this investigation is the first HPO-ASP model based upon the IAWQ (formerly IAWPRC) Activated Sludge Model No. 1. The objective function is formulated as a relative least-squares function, and the nonlinear, constrained minimization problem is solved by the Complex method. The stoichiometric and kinetic coefficients of the IAWQ activated sludge model are the parameters focused on in this investigation. Observations are generated numerically but are made close to the observations from a full-scale high-purity oxygen treatment plant. The calibration algorithm is capable of correctly estimating model parameters even if the observations are severely noise-corrupted. The accuracy of estimation deteriorates gradually as observation errors increase. The accuracy of calibration improves as the number of observations (n) increases, but the improvement becomes insignificant when n > 96. It is also found that there exists an optimal number of parameters that can be rigorously estimated from a given set of information/data. A sensitivity analysis is conducted to determine which parameters to estimate and to evaluate the potential benefits resulting from collecting additional measurements.
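The relative least-squares objective described above can be sketched in a few lines. Here a first-order decay model and a simple grid search stand in for the full IAWQ model and the Complex method; all names and values are illustrative.

```python
import numpy as np

def relative_sse(params, model, observations):
    """Relative least-squares objective: squared residuals scaled by the
    observed values, so state variables of very different magnitudes
    contribute comparably to the fit."""
    residuals = (model(params) - observations) / observations
    return float(np.sum(residuals ** 2))

# Toy calibration: recover the decay rate k of C(t) = 10 * exp(-k * t)
# from noise-free "observations" (a stand-in for plant measurements).
t = np.linspace(0.0, 5.0, 10)
obs = 10.0 * np.exp(-0.8 * t)
model = lambda p: 10.0 * np.exp(-p[0] * t)
k_grid = np.linspace(0.1, 2.0, 191)   # grid search replaces the Complex method
best_k = k_grid[np.argmin([relative_sse([k], model, obs) for k in k_grid])]
```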


2019 ◽  
Author(s):  
Mohsen Yaghoubi ◽  
Amin Adibi ◽  
Zafar Zafari ◽  
J Mark FitzGerald ◽  
Shawn D. Aaron ◽  
...  

Abstract
Background: Asthma diagnosis in the community is often made without objective testing.
Objective: The aim of this study was to evaluate the cost-effectiveness of implementing a stepwise objective diagnostic verification algorithm among patients with community-diagnosed asthma in the United States (US).
Methods: We developed a probabilistic time-in-state cohort model that compared a stepwise asthma verification algorithm based on spirometry and the methacholine challenge test against the current standard of care over 20 years. Model input parameters were informed from the literature and with original data analyses when required. The target population was US adults (≥15 years old) with physician-diagnosed asthma. The final outcomes were costs (in 2018 $) and quality-adjusted life years (QALYs), discounted at 3% annually. Deterministic and probabilistic analyses were undertaken to examine the effect of alternative assumptions and uncertainty in model parameters on the results.
Results: In a simulated cohort of 10,000 adults with diagnosed asthma, the stepwise algorithm resulted in the removal of the diagnosis in 3,366. This was projected to be associated with savings of $36.26 million in direct costs and a gain of 4,049.28 QALYs over 20 years. Extrapolating these results to the US population indicated an undiscounted potential savings of $56.48 billion over 20 years. Results were robust against alternative assumptions and plausible changes in the values of input parameters.
Conclusion: Implementation of a simple diagnostic testing algorithm to verify asthma diagnosis might result in substantial savings and improvement in patients’ quality of life.
Key messages: Compared with current standards of practice, the implementation of an asthma verification algorithm among US adults with diagnosed asthma can be associated with a reduction in costs and a gain in quality of life. There is substantial room for improving patient care and outcomes by promoting objective asthma diagnosis.
Capsule summary: Asthma ‘overdiagnosis’ is common among US adults. An objective, stepwise verification algorithm for re-evaluation of asthma diagnosis can result in substantial savings in costs and improvements in quality of life.
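The 3% annual discounting of costs and QALYs used in the model follows the standard net-present-value formula; a minimal sketch, where the 20-year stream of one QALY per year is a made-up example rather than a value from the study:

```python
def discounted_total(annual_values, rate=0.03):
    """Present value of a stream of annual costs or QALYs, discounting
    the value in year t by (1 + rate)**t, with year 0 undiscounted."""
    return sum(v / (1.0 + rate) ** t for t, v in enumerate(annual_values))

# One QALY per year for 20 years at 3% is worth about 15.32 discounted
# QALYs rather than 20 undiscounted ones.
```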


2013 ◽  
Vol 554-557 ◽  
pp. 1045-1054 ◽  
Author(s):  
Welf Guntram Drossel ◽  
Reinhard Mauermann ◽  
Raik Grützner ◽  
Danilo Mattheß

In this study, a numerical simulation model was designed to represent the process of joining carbon fiber-reinforced plastics (CFRP) and an aluminum alloy with a semi-tubular self-piercing rivet. The first step towards this goal is to analyze the piercing process of CFRP numerically and experimentally. Thereby the essential process parameters, tool geometries, and material characteristics are determined and represented in a finite element model. Subsequently, the finite element model is verified and calibrated through experimental studies. The next step is the integration of the calibrated model parameters from the piercing process into the comprehensive simulation model of the self-piercing riveting process. The comparison between measured and computed values, e.g., process parameters and the geometric characteristics of the joint, shows the quality reached by the process model. The presented method provides an experimentally reliable characterization of the damage to the composite material and an evaluation of the joint performance with regard to the anisotropic properties of CFRP.


Sensors ◽  
2019 ◽  
Vol 19 (11) ◽  
pp. 2467 ◽  
Author(s):  
Hery Mwenegoha ◽  
Terry Moore ◽  
James Pinchin ◽  
Mark Jabbal

The dominant navigation system for low-cost, mass-market Unmanned Aerial Vehicles (UAVs) is based on an Inertial Navigation System (INS) coupled with a Global Navigation Satellite System (GNSS). However, problems tend to arise during periods of GNSS outage, when the navigation solution degrades rapidly. Therefore, this paper details a model-based integration approach for fixed-wing UAVs, using the Vehicle Dynamics Model (VDM) as the main process model, aided by low-cost Micro-Electro-Mechanical Systems (MEMS) inertial sensors and GNSS measurements, with moment of inertia calibration using an Unscented Kalman Filter (UKF). Results show that the position error does not exceed 14.5 m in any direction after 140 s of GNSS outage. Roll and pitch errors are bounded to 0.06 degrees, and the yaw error grows slowly to 0.65 degrees after 140 s of GNSS outage. The filter is able to estimate the model parameters, including the moment of inertia terms, even with significant coupling between them. The pitch and yaw moment coefficient terms exhibit significant cross-coupling, while the roll moment terms appear decorrelated from all other terms; more dynamic manoeuvres could help improve the overall observability of the parameters.
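The UKF used above rests on the unscented transform, which propagates a Gaussian state through a nonlinear model via deterministic sigma points. The sketch below is a generic implementation with standard weights, not the paper's VDM-specific state or tuning.

```python
import numpy as np

def unscented_transform(mean, cov, f, alpha=1e-3, beta=2.0, kappa=0.0):
    """Propagate a Gaussian (mean, cov) through a nonlinear function f
    using the standard 2n+1 sigma-point scheme; returns the transformed
    mean and covariance."""
    n = len(mean)
    lam = alpha**2 * (n + kappa) - n
    S = np.linalg.cholesky((n + lam) * cov)       # matrix square root
    sigmas = ([mean]
              + [mean + S[:, i] for i in range(n)]
              + [mean - S[:, i] for i in range(n)])
    Wm = np.full(2 * n + 1, 1.0 / (2.0 * (n + lam)))
    Wc = Wm.copy()
    Wm[0] = lam / (n + lam)
    Wc[0] = Wm[0] + (1.0 - alpha**2 + beta)
    ys = np.array([f(s) for s in sigmas])
    y_mean = Wm @ ys
    d = ys - y_mean
    y_cov = (Wc[:, None] * d).T @ d
    return y_mean, y_cov
```

For a linear function the transform is exact, which makes a convenient sanity check.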


Ecology ◽  
2018 ◽  
Vol 100 (1) ◽  
Author(s):  
Théo Michelot ◽  
Paul G. Blackwell ◽  
Jason Matthiopoulos

Author(s):  
Yanwen Xu ◽  
Pingfeng Wang

Abstract
The Gaussian Process (GP) model has become one of the most popular methods for developing computationally efficient surrogate models in many engineering design applications, including simulation-based design optimization and uncertainty analysis. As more observations are used for high-dimensional problems, estimating the best model parameters of a Gaussian Process model remains an essential yet challenging task due to the considerable computational cost. One of the most commonly used methods for estimating model parameters is Maximum Likelihood Estimation (MLE). A common bottleneck arising in MLE is computing the log-determinant and inverse of a large positive definite matrix. In this paper, a comparison of five commonly used gradient-based and non-gradient-based optimizers for likelihood function optimization in high-dimensional GP surrogate modeling is conducted: Sequential Quadratic Programming (SQP), the Quasi-Newton method, the Interior Point method, the Trust Region method, and Pattern Line Search. The comparison focuses on the accuracy of estimation, the efficiency of computation, and the robustness of the method for different types of kernel functions.
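The log-determinant/inverse bottleneck described above is usually handled with a single Cholesky factorization, which yields both quantities from one O(n³) decomposition. A minimal zero-mean RBF-kernel sketch (the kernel choice and jitter value are illustrative):

```python
import numpy as np

def gp_neg_log_likelihood(theta, X, y, noise=1e-6):
    """Negative log marginal likelihood of a zero-mean GP with an RBF
    kernel.  The Cholesky factor L gives log|K| = 2 * sum(log(diag(L)))
    and K^{-1} y via two triangular solves, avoiding an explicit inverse."""
    lengthscale, signal_var = theta
    d2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    K = signal_var * np.exp(-0.5 * d2 / lengthscale**2) + noise * np.eye(len(X))
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))   # K^{-1} y
    log_det = 2.0 * np.sum(np.log(np.diag(L)))
    return 0.5 * (y @ alpha + log_det + len(y) * np.log(2.0 * np.pi))
```

Any of the five optimizers compared in the paper can then minimize this function over (lengthscale, signal variance).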


2020 ◽  
Vol 2020 ◽  
pp. 1-12 ◽  
Author(s):  
Hongmei Shi ◽  
Jinsong Yang ◽  
Jin Si

Many freight trains for special lines have in common the characteristics of a fixed group. Centralized Condition-Based Maintenance (CCBM) of key components on the same freight train can reduce maintenance costs and enhance transportation efficiency. To this end, an optimization algorithm based on the nonlinear Wiener process is proposed for predicting the Remaining Useful Life (RUL) of train wheels and the timing of centralized maintenance. First, the Hodrick–Prescott (HP) filtering algorithm is employed to process the raw monitoring data of wheel tread wear, extracting its trend component. Then, a nonlinear Wiener process model is constructed; model parameters are calculated with maximum likelihood estimation, yielding the general deterioration parameters of wheel tread wear. Next, the updating algorithm for the drift coefficient is derived using Bayes’ formula, realizing online updating of the model from individual wheel monitoring data and yielding a probability density function of each wheel’s RUL. A method for predicting the RUL for centralized maintenance is proposed, based on two set thresholds: a “maintenance limit” and “the ratio of limit-arriving.” Meanwhile, a CCBM timing prediction algorithm is proposed, based on the expected distribution of individual wheel RUL. Finally, the model is validated using 500 days of online monitoring data from a fixed group consisting of 54 freight train cars. The validation shows that the model can predict the wheels’ RUL for CCBM. The proposed method can be used to predict maintenance timing when a large number of components operate under the same working conditions and follow the same degradation path.
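For a linear Wiener degradation process the RUL density has a closed inverse-Gaussian first-passage form, and the Bayesian drift update is conjugate. The sketch below simplifies the paper's nonlinear model to the linear case, and all numerical values are illustrative.

```python
import numpy as np

def rul_pdf(t, margin, drift, sigma):
    """Inverse-Gaussian first-passage density: the probability density
    that a linear Wiener process X(t) = drift*t + sigma*B(t) first
    consumes the remaining wear margin at time t."""
    t = np.asarray(t, dtype=float)
    return (margin / np.sqrt(2.0 * np.pi * sigma**2 * t**3)
            * np.exp(-(margin - drift * t) ** 2 / (2.0 * sigma**2 * t)))

def update_drift(prior_mean, prior_var, increments, dt, sigma):
    """Conjugate Bayesian update of the drift coefficient from observed
    wear increments, each distributed N(drift*dt, sigma**2 * dt);
    returns the posterior mean and variance."""
    n = len(increments)
    post_prec = 1.0 / prior_var + n * dt / sigma**2
    post_mean = (prior_mean / prior_var + sum(increments) / sigma**2) / post_prec
    return post_mean, 1.0 / post_prec
```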

