Correction of Measurement Error in Monthly USDA Pig Crop: Generating Alternative Data Series

2008
Vol 40 (2)
pp. 697-711
Author(s):
In-Seck Kim
Ronald L. Plain
J. Bruce Bullock
Sang Young Jei

The imputed pig death loss contained in the reported monthly U.S. Department of Agriculture (USDA) pig crop data over the December 1995-June 2006 period ranged from −4.93% to 12.75%. Clearly, there are substantial measurement errors in the USDA monthly pig crop data. In this paper, we present alternative monthly U.S. pig crop data using the biological production process, which is compatible with prior knowledge of the U.S. hog industry. The alternative pig crop data are applied to a slaughter hog model and tested against the USDA pig crop data. Test results reject the validity of the USDA pig crop data in favor of the alternative data.
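A negative imputed death loss is the red flag here: more hogs cannot reach slaughter than were reported born. As a minimal sketch (a hypothetical accounting identity with illustrative numbers, not the paper's full biological model, and ignoring exports and inventory changes):

```python
def imputed_death_loss_pct(pig_crop, later_slaughter):
    """Imputed death loss as a percentage of the reported pig crop.

    Toy accounting identity: pigs reported born minus pigs later
    slaughtered, as a share of the crop. A negative value signals
    measurement error, since losses cannot be negative.
    """
    return 100.0 * (pig_crop - later_slaughter) / pig_crop

# A reported crop of 100 followed by 105 slaughtered implies -5% "loss".
print(imputed_death_loss_pct(100.0, 105.0))  # -5.0
print(imputed_death_loss_pct(100.0, 90.0))   # 10.0
```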

2017
Vol 928 (10)
pp. 58-63
Author(s):
V.I. Salnikov

The initial subject of study is sums of measurement errors. It is assumed that the errors follow the normal law, but with a restriction on the magnitude of the marginal error, Δpred = 2m. It is known that for each number of terms ni there is a confidence interval within which the value of the sum equals zero. The paradox is that the probability of such an event is zero; therefore, the value of ni at which the sum becomes zero cannot be determined. The article proposes instead to consider the event that the sum of errors stays within the ±2m limits with a confidence level of 0.954. Within the group, all the sums then have a limit error. These tolerances are proposed for use as discrepancy limits in geodesy in place of 2m·√(ni). The concept of "the law of the truncated normal distribution with Δpred = 2m" is suggested to be introduced.
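The effect of the ±2m truncation can be checked numerically. A sketch (all parameter values illustrative) draws individual errors from a normal law truncated at ±2m, forms sums of n of them, and compares the spread of the sums against the classical 2m·√(n) discrepancy tolerance:

```python
import math
import random

def truncated_normal(m, limit, rng):
    """Draw from N(0, m^2) truncated at +/- limit by rejection sampling."""
    while True:
        x = rng.gauss(0.0, m)
        if abs(x) <= limit:
            return x

def sum_of_errors(n, m, rng):
    """Sum of n errors, each truncated at the marginal error 2m."""
    return sum(truncated_normal(m, 2.0 * m, rng) for _ in range(n))

rng = random.Random(0)
m, n = 1.0, 10
sums = [sum_of_errors(n, m, rng) for _ in range(20_000)]

classic_tol = 2.0 * m * math.sqrt(n)  # the classical 2m*sqrt(n) tolerance
within = sum(abs(s) <= classic_tol for s in sums) / len(sums)
print(within)  # well above 0.954: truncation shrinks each error's spread
```

Because truncation reduces the standard deviation of each error below m, the classical tolerance is conservative for truncated errors, which is the motivation for replacing it.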


2021
pp. 1-22
Author(s):
Daisuke Kurisu
Taisuke Otsu

This paper studies the uniform convergence rates of Li and Vuong’s (1998, Journal of Multivariate Analysis 65, 139–165; hereafter LV) nonparametric deconvolution estimator and its regularized version by Comte and Kappus (2015, Journal of Multivariate Analysis 140, 31–46) for the classical measurement error model, where repeated noisy measurements on the error-free variable of interest are available. In contrast to LV, our assumptions allow unbounded supports for the error-free variable and measurement errors. Compared to Bonhomme and Robin (2010, Review of Economic Studies 77, 491–533) specialized to the measurement error model, our assumptions do not require existence of the moment generating functions of the square and product of repeated measurements. Furthermore, by utilizing a maximal inequality for the multivariate normalized empirical characteristic function process, we derive uniform convergence rates that are faster than the ones derived in these papers under such weaker conditions.
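A simplified cousin of these estimators (not LV's exact construction) illustrates the mechanics. With two repeated measurements Y1 = X + U1 and Y2 = X + U2 and i.i.d. symmetric errors independent of X, the empirical characteristic function of Y1 − Y2 identifies |φU(t)|², from which φX(t) can be recovered by division:

```python
import numpy as np

# Simulate an error-free variable X and two repeated noisy measurements.
rng = np.random.default_rng(0)
n = 50_000
x = rng.normal(0.0, 1.0, n)
y1 = x + rng.normal(0.0, 0.5, n)
y2 = x + rng.normal(0.0, 0.5, n)

def ecf(t, data):
    """Empirical characteristic function at a single point t."""
    return np.mean(np.exp(1j * t * data))

t = 1.0
# For i.i.d. errors, phi_{Y1-Y2}(t) = |phi_U(t)|^2.
phi_u = np.sqrt(max(ecf(t, y1 - y2).real, 1e-12))  # symmetric errors assumed
phi_x_hat = ecf(t, y1) / phi_u                     # deconvolution step
phi_x_true = np.exp(-0.5 * t**2)                   # cf of N(0, 1), for comparison
```

In practice the division is unstable where φU(t) is near zero, which is exactly the regularization issue that Comte and Kappus address.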


2000
Vol 30 (2)
pp. 306-310
Author(s):
M S Williams
H T Schreuder

Assuming volume equations with multiplicative errors, we derive simple conditions for determining when measurement error in total height is large enough that only using tree diameter, rather than both diameter and height, is more reliable for predicting tree volumes. Based on data for different tree species of excurrent form, we conclude that measurement errors up to ±40% of the true height can be tolerated before inclusion of estimated height in volume prediction is no longer warranted.
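The trade-off can be reproduced in a toy simulation (all coefficients and scatter levels are hypothetical, chosen only to show the mechanism, not the paper's species-specific ±40% threshold): with small height measurement error, using the measured height beats a diameter-only prediction that substitutes the expected height for that diameter; with large error the ranking reverses:

```python
import math
import random

rng = random.Random(1)

def simulate_rmse(height_err_frac, n=5_000):
    """RMSE of volume prediction using (D, noisy H) vs. D alone.

    Toy model: true volume V = 5e-5 * D^2 * H, with expected height
    H = 20 * (D/30)^0.7 plus natural tree-to-tree scatter.
    All numbers are illustrative.
    """
    se_with_h, se_d_only = 0.0, 0.0
    for _ in range(n):
        d = rng.uniform(15.0, 60.0)                 # diameter, cm
        h_mean = 20.0 * (d / 30.0) ** 0.7           # expected height, m
        h = h_mean * math.exp(rng.gauss(0.0, 0.1))  # natural scatter
        v_true = 5e-5 * d**2 * h
        h_meas = h * (1.0 + rng.gauss(0.0, height_err_frac))
        se_with_h += (5e-5 * d**2 * h_meas - v_true) ** 2
        se_d_only += (5e-5 * d**2 * h_mean - v_true) ** 2
    return math.sqrt(se_with_h / n), math.sqrt(se_d_only / n)

small = simulate_rmse(0.02)  # 2% height error: measured height helps
big = simulate_rmse(0.5)     # 50% height error: diameter alone is better
```

The crossover point depends on how tightly height tracks diameter in the population, which is why the tolerable error is species- and form-specific.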


Author(s):
Mohammad-Reza Ashory
Farhad Talebi
Heydar R Ghadikolaei
Morad Karimpour

This study investigated the vibrational behaviour of a rotating two-blade propeller at different rotational speeds by using self-tracking laser Doppler vibrometry. Given that a self-tracking method necessitates the accurate adjustment of test setups to reduce measurement errors, a test table with sufficient rigidity was designed and built to enable the adjustment and repair of test components. The results of the self-tracking test on the rotating propeller indicated an increase in natural frequency and a decrease in the amplitude of the normalized mode shapes as rotational speed increased. To assess the test results, a numerical model created in ABAQUS was used. The model parameters were tuned so that the natural frequencies and associated mode shapes were in good agreement with those derived from a hammer test on a stationary propeller. The mode shapes obtained from the hammer test and the numerical (ABAQUS) modelling were compared using the modal assurance criterion. The comparison indicated a strong resemblance between the hammer test results and the numerical findings. Hence, the model can be employed to determine other mechanical properties of two-blade propellers in test scenarios.
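The modal assurance criterion used for that comparison is a standard formula: a normalized squared inner product of two mode-shape vectors, equal to 1 for identical (or proportional) shapes and 0 for orthogonal ones. A minimal sketch:

```python
import numpy as np

def mac(phi_a, phi_b):
    """Modal assurance criterion between two mode-shape vectors.

    Returns a value in [0, 1]; scale-invariant, so the vectors need
    not share a normalization.
    """
    num = abs(np.vdot(phi_a, phi_b)) ** 2
    den = np.vdot(phi_a, phi_a).real * np.vdot(phi_b, phi_b).real
    return num / den

# Proportional shapes give MAC = 1; orthogonal shapes give MAC = 0.
print(mac(np.array([1.0, 2.0, 3.0]), np.array([2.0, 4.0, 6.0])))  # 1.0
print(mac(np.array([1.0, 0.0]), np.array([0.0, 1.0])))            # 0.0
```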


2021
Vol 6 (1)
Author(s):
Javier Ho
Paul Bernal

This study attempts to fit a global demand model for soybean traffic through the Panama Canal using Ordinary Least Squares (OLS). Most of the soybean cargo through the interoceanic waterway is loaded at U.S. Gulf and East Coast ports, mainly destined for East Asia (especially China), and represented about 34% of total Panama Canal grain traffic between fiscal years 2010-19. To estimate the global demand model for soybean traffic, we consider explanatory variables such as effective toll rates through the Panama Canal, U.S. Gulf-Asia and U.S. Pacific Northwest-Asia freight rates, the Baltic Dry Index, bunker costs, soybean export inspections from the U.S. Gulf and Pacific Northwest, U.S. Gulf soybean basis levels, Brazil's soybean exports, and the average U.S. dollar index. As part of the research, we pursue the estimation of the toll rate elasticity of vessels transporting soybeans via the Panama Canal. Data come mostly from several U.S. Department of Agriculture sources, Brazil's Secretariat of Foreign Trade (SECEX), and Panama Canal transit information. Finally, after estimating the global demand model for soybean traffic, we discuss the implications for future soybean traffic through the waterway, evaluating alternative routes and sources for this trade.
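In a log-log OLS specification, the coefficient on the toll variable is read directly as the toll elasticity. A sketch with simulated data (the variable names, magnitudes, and the −0.8 elasticity are assumptions for illustration only, not the paper's estimates):

```python
import numpy as np

rng = np.random.default_rng(42)
n = 120  # hypothetical monthly observations

# Simulated regressors (logs) and a known "true" elasticity to recover.
log_toll = rng.normal(1.0, 0.3, n)
log_freight_spread = rng.normal(0.5, 0.2, n)
true_elasticity = -0.8  # assumed for illustration
log_traffic = (10.0 + true_elasticity * log_toll
               + 0.4 * log_freight_spread
               + rng.normal(0.0, 0.05, n))

# OLS via least squares: intercept, toll, freight-spread columns.
X = np.column_stack([np.ones(n), log_toll, log_freight_spread])
beta, *_ = np.linalg.lstsq(X, log_traffic, rcond=None)
toll_elasticity = beta[1]  # coefficient on log toll = elasticity
```

A 1% toll increase then implies roughly a `toll_elasticity`% change in soybean traffic, holding the other regressors fixed.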


2002
pp. 323-332
Author(s):
A Sartorio
G De Nicolao
D Liberati

OBJECTIVE: The quantitative assessment of gland responsiveness to exogenous stimuli is typically carried out using the peak value of the hormone concentrations in plasma, the area under its curve (AUC), or through deconvolution analysis. However, none of these methods is satisfactory, due to either sensitivity to measurement errors or various sources of bias. The objective was to introduce and validate an easy-to-compute responsiveness index, robust in the face of measurement errors and interindividual variability of kinetics parameters. DESIGN: The new method has been tested on responsiveness tests for the six pituitary hormones (using GH-releasing hormone, thyrotrophin-releasing hormone, gonadotrophin-releasing hormone and corticotrophin-releasing hormone as secretagogues), for a total of 174 tests. Hormone concentrations were assayed in six to eight samples between -30 min and 120 min from the stimulus. METHODS: An easy-to-compute direct formula has been worked out to assess the 'stimulated AUC', that is the part of the AUC of the response curve depending on the stimulus, as opposed to pre- and post-stimulus spontaneous secretion. The weights of the formula have been reported for the six pituitary hormones and some popular sampling protocols. RESULTS AND CONCLUSIONS: The new index is less sensitive to measurement error than the peak value. Moreover, it provides results that cannot be obtained from a simple scaling of either the peak value or the standard AUC. Future studies are needed to show whether the reduced sensitivity to measurement error and the proportionality to the amount of released hormone render the stimulated AUC indeed a valid alternative to the peak value for the diagnosis of the different pathophysiological states, such as, for instance, GH deficits.
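The AUC component of such an index can be computed with the trapezoidal rule. A minimal sketch that subtracts a pre-stimulus baseline (here simply the first, pre-stimulus sample; the paper's index instead uses tuned per-hormone weights, which are not reproduced):

```python
def stimulated_auc(times, conc):
    """Area under the concentration curve above a pre-stimulus baseline.

    Trapezoidal rule; `times` in minutes, `conc` in concentration units.
    Baseline is taken as the first (pre-stimulus) sample, a deliberate
    simplification of the paper's weighted formula.
    """
    baseline = conc[0]
    auc = 0.0
    for i in range(1, len(times)):
        dt = times[i] - times[i - 1]
        auc += 0.5 * ((conc[i] - baseline) + (conc[i - 1] - baseline)) * dt
    return auc

# A symmetric peak above baseline 1 contributes area 2; a flat curve, 0.
print(stimulated_auc([0.0, 1.0, 2.0], [1.0, 3.0, 1.0]))  # 2.0
print(stimulated_auc([0.0, 1.0, 2.0], [2.0, 2.0, 2.0]))  # 0.0
```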


1999
Vol 56 (7)
pp. 1234-1240
Author(s):
W R Gould
L A Stefanski
K H Pollock

All catch-effort estimation methods implicitly assume catch and effort are known quantities, whereas in many cases they have been estimated and are subject to error. We evaluate the application of a simulation-based estimation procedure for measurement error models (J.R. Cook and L.A. Stefanski. 1994. J. Am. Stat. Assoc. 89: 1314-1328) in catch-effort studies. The technique involves a simulation component and an extrapolation step, hence the name SIMEX estimation. We describe SIMEX estimation in general terms and illustrate its use with applications to real and simulated catch and effort data. Correcting for measurement error with SIMEX estimation resulted in population size and catchability coefficient estimates that were, in some cases, substantially less than the naive estimates that ignored measurement error. In a simulation of the procedure, we compared SIMEX estimators with "naive" estimators that ignore measurement errors in catch and effort to determine the ability of SIMEX to produce bias-corrected estimates. The SIMEX estimators were less biased than the naive estimators but in some cases were also more variable. Despite the bias reduction, the SIMEX estimator had a larger mean squared error than the naive estimator for one of the two artificial populations studied. However, our results suggest the SIMEX estimator may outperform the naive estimator in terms of bias and precision for larger populations.
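The simulation-and-extrapolation idea is easiest to see in a plain errors-in-variables regression rather than a catch-effort model. The sketch below (all data simulated; a quadratic extrapolant is assumed) adds extra measurement error at increasing multiples λ, tracks how the slope attenuates, and extrapolates the trend back to λ = −1, the "no measurement error" point:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 4_000
x = rng.normal(0.0, 1.0, n)          # true covariate (unobserved)
sigma_u = 0.6                        # known measurement-error sd
w = x + rng.normal(0.0, sigma_u, n)  # observed, error-prone covariate
y = 2.0 + 1.5 * x + rng.normal(0.0, 0.3, n)

def slope(xv, yv):
    """OLS slope of yv on xv."""
    return np.cov(xv, yv)[0, 1] / np.var(xv, ddof=1)

# Simulation step: inflate the error variance by (1 + lambda), refit.
lambdas = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
slopes = []
for lam in lambdas:
    reps = [slope(w + rng.normal(0.0, np.sqrt(lam) * sigma_u, n), y)
            for _ in range(50)]
    slopes.append(np.mean(reps))

# Extrapolation step: fit slope(lambda) quadratically, read off lambda = -1.
coef = np.polyfit(lambdas, slopes, 2)
simex_slope = np.polyval(coef, -1.0)
naive_slope = slopes[0]  # attenuated toward zero (true slope is 1.5)
```

The quadratic extrapolant only approximates the true attenuation curve, which is why SIMEX reduces bias without removing it entirely, at the cost of extra variance, mirroring the bias/MSE trade-off reported above.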


Dose-Response
2005
Vol 3 (4)
pp. dose-response.0
Author(s):
Kenny S. Crump

Although statistical analyses of epidemiological data usually treat the exposure variable as being known without error, estimated exposures in epidemiological studies often involve considerable uncertainty. This paper investigates the theoretical effect of random errors in exposure measurement upon the observed shape of the exposure response. The model utilized assumes that true exposures are log-normally distributed, and multiplicative measurement errors are also log-normally distributed and independent of the true exposures. Under these conditions it is shown that whenever the true exposure response is proportional to exposure to a power r, the observed exposure response is proportional to exposure to a power K, where K < r. This implies that the observed exposure response exaggerates risk, and by arbitrarily large amounts, at sufficiently small exposures. It also follows that a truly linear exposure response will appear to be supra-linear, i.e., a linear function of exposure raised to the K-th power, where K is less than 1.0. These conclusions hold generally under the stated log-normal assumptions whenever there is any amount of measurement error, including, in particular, when the measurement error is unbiased either in the natural or log scales. Equations are provided that express the observed exposure response in terms of the parameters of the underlying log-normal distribution. A limited investigation suggests that these conclusions do not depend upon the log-normal assumptions, but hold more widely. Because of this problem, in addition to other problems in exposure measurement, shapes of exposure responses derived empirically from epidemiological data should be treated very cautiously. In particular, one should be cautious in concluding that the true exposure response is supra-linear on the basis of an observed supra-linear form.
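The central attenuation result can be verified numerically. Under joint log-normality, regressing the log response on the observed log exposure recovers the apparent power K = r·σx²/(σx² + σu²), which is strictly less than r whenever there is any error. A sketch (parameter values arbitrary):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 200_000
sigma_x, sigma_u, r = 1.0, 0.5, 2.0

log_x = rng.normal(0.0, sigma_x, n)          # true log exposure
log_z = log_x + rng.normal(0.0, sigma_u, n)  # observed log exposure
log_y = r * log_x                            # true response ~ x^r (noise-free)

# Slope of log-response on observed log-exposure = apparent power K.
k_hat = np.cov(log_z, log_y)[0, 1] / np.var(log_z, ddof=1)
k_theory = r * sigma_x**2 / (sigma_x**2 + sigma_u**2)  # = 1.6 here
```

With r = 2 the observed response looks like exposure to the power 1.6; with r = 1 the same mechanism yields K < 1, the apparent supra-linearity described above.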

