Axial variation in flexural stiffness of plant stem segments: measurement methods and the influence of measurement uncertainty

Plant Methods ◽  
2021 ◽  
Vol 17 (1) ◽  
Author(s):  
Nathanael Martin-Nelson ◽  
Brandon Sutherland ◽  
Michael Yancey ◽  
Chung Shan Liao ◽  
Christopher J. Stubbs ◽  
...  

Abstract Background: Flexural three-point bending tests are useful for characterizing the mechanical properties of plant stems. These tests require minimal sample preparation and can therefore be performed relatively quickly. Best practice for such tests involves long spans, with supports and load placed at nodes. This approach typically provides only one flexural stiffness measurement per specimen. However, by combining flexural tests with analytic equations, it is possible to solve for the mechanical characteristics of individual stem segments. Results: A method is presented for using flexural tests to obtain estimates of the flexural stiffness of individual segments. This method pairs physical test data with analytic models to obtain a system of equations whose solution provides values of flexural stiffness for individual stalk segments. Uncertainty in the solved values of flexural stiffness was found to depend strongly upon measurement errors. Row-wise scaling of the system of equations reduced the influence of measurement error. Of many possible test combinations, the most advantageous set of tests for performing these measurements was identified. Relationships between measurement uncertainty and solution uncertainty were provided for two different testing methods. Conclusions: The methods presented in this paper can be used to measure the axial variation in flexural stiffness of plant stem segments. However, care must be taken to account for the influence of measurement error, as the individual-segment method amplifies it. An alternative method involving aggregate flexural stiffness values does not amplify measurement error, but provides lower spatial resolution.
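A hedged sketch of the individual-segment idea described above (the coefficients are invented for illustration, not taken from the paper): each bending-test configuration yields an aggregate measurement that is a linear combination of unknown per-segment values, so stacking several tests gives a linear system A x = b, and row-wise scaling keeps any single noisy equation from dominating the solution.

```python
import numpy as np

# Hypothetical test matrix: each row is one bending-test configuration and
# each column one stem segment; entries are assumed span coefficients.
A = np.array([[4.0, 4.0, 0.0],       # test 1 spans segments 1-2 (assumed)
              [0.0, 3.0, 3.0],       # test 2 spans segments 2-3 (assumed)
              [2.0, 2.0, 2.0]])      # test 3 spans all three segments (assumed)
b = np.array([8.4, 5.7, 6.0])        # aggregate measurements (assumed)

# Row-wise scaling: divide each equation by its largest coefficient magnitude
# so equations measured on different scales contribute comparably.
scale = np.abs(A).max(axis=1)
x, *_ = np.linalg.lstsq(A / scale[:, None], b / scale, rcond=None)
print(x)  # per-segment estimates: [1.1, 1.0, 0.9]
```

With noisy measurements the same least-squares machinery applies; scaling only changes how each equation's error is weighted, not the exact solution of a consistent system.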


2021 ◽  
Vol 88 (2) ◽  
pp. 71-77
Author(s):  
Andreas Michael Müller ◽  
Tino Hausotte

Abstract The measurement uncertainty characteristics of a measurement system are an important parameter when evaluating its suitability for a specific measurement task. The measurement uncertainty can be calculated from observed measurement errors, which consist of both systematic and random components. While the unfavourable influence of systematic components can be compensated by calibration, random components are inherently not correctable. Different measurement principles are affected by different measurement error characteristics depending on specific properties of the measurement task, e.g. the optical surface properties of the measurement object when using fringe projection, or the material properties when using industrial X-ray computed tomography. In certain scenarios it can therefore be helpful to determine the spatial distribution of the acquisition quality and of the uncertainty characteristics on the captured surface for a given measurement task. This article demonstrates a methodology to determine the random measurement error solely from a series of measurement repetitions, without the need for additional information such as a reference measurement or the nominal geometry of the examined part.
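A sketch of the core idea on synthetic data: the random measurement error at each surface point can be estimated purely from repeated measurements, without a reference measurement or nominal geometry. Rows are repetitions, columns are points on the captured surface; the noise level of 0.05 is an assumption for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
nominal = np.linspace(10.0, 12.0, 5)              # assumed true point values
reps = nominal + rng.normal(0.0, 0.05, (50, 5))   # 50 noisy repetitions

# The per-point sample standard deviation estimates the random error; any
# systematic offset is identical in every repetition and cancels out here.
random_error = reps.std(axis=0, ddof=1)
print(random_error)  # close to 0.05 at every point
```

Note that this statistic says nothing about the systematic component, which is exactly why the abstract treats calibration and repetition analysis as complementary.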


2018 ◽  
Vol 11 (2) ◽  
pp. 1233-1250 ◽  
Author(s):  
Cheng Wu ◽  
Jian Zhen Yu

Abstract. Linear regression techniques are widely used in atmospheric science, but they are often improperly applied due to lack of consideration or inappropriate handling of measurement uncertainty. In this work, numerical experiments are performed to evaluate the performance of five linear regression techniques, significantly extending previous works by Chu and Saylor. The five techniques are ordinary least squares (OLS), Deming regression (DR), orthogonal distance regression (ODR), weighted ODR (WODR), and York regression (YR). We first introduce a new data generation scheme that employs the Mersenne twister (MT) pseudorandom number generator. The numerical simulations are also improved by (a) refining the parameterization of nonlinear measurement uncertainties, (b) inclusion of a linear measurement uncertainty, and (c) inclusion of WODR for comparison. Results show that DR, WODR and YR produce an accurate slope, but the intercept by WODR and YR is overestimated, and the degree of bias is more pronounced with a low-R² XY dataset. The importance of a proper weighting parameter λ in DR is investigated by sensitivity tests, and it is found that an improper λ in DR can lead to a bias in both the slope and intercept estimation. Because the λ calculation depends on the actual form of the measurement error, it is essential to determine the exact form of measurement error in the XY data during the measurement stage. If the a priori error in one of the variables is unknown, or the described measurement error cannot be trusted, DR, WODR and YR provide the least biases in slope and intercept among all tested regression techniques. For these reasons, DR, WODR and YR are recommended for atmospheric studies when both X and Y data have measurement errors. An Igor Pro-based program (Scatter Plot) was developed to facilitate the implementation of error-in-variables regressions.
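A minimal Deming regression sketch (not the authors' Igor Pro code): errors in both x and y, with the weighting parameter lam, the ratio of y-error variance to x-error variance, assumed known. With lam = 1 this reduces to orthogonal regression; an improper lam biases both slope and intercept, as the abstract stresses.

```python
import numpy as np

def deming(x, y, lam=1.0):
    # lam = var(y errors) / var(x errors), assumed known a priori
    xm, ym = x.mean(), y.mean()
    sxx = ((x - xm) ** 2).mean()
    syy = ((y - ym) ** 2).mean()
    sxy = ((x - xm) * (y - ym)).mean()
    slope = ((syy - lam * sxx + np.sqrt((syy - lam * sxx) ** 2
              + 4.0 * lam * sxy ** 2)) / (2.0 * sxy))
    return slope, ym - slope * xm

rng = np.random.default_rng(1)
truth = rng.uniform(0.0, 10.0, 500)
x = truth + rng.normal(0.0, 0.5, 500)               # x with measurement error
y = 2.0 * truth + 1.0 + rng.normal(0.0, 0.5, 500)   # true slope 2, intercept 1
est_slope, est_intercept = deming(x, y, lam=1.0)
print(est_slope, est_intercept)  # close to (2.0, 1.0); OLS would attenuate the slope
```

Because both error standard deviations are 0.5 here, lam = 1 is the correct weighting; feeding a wrong lam into the same formula reproduces the bias the sensitivity tests describe.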


2017 ◽  
Vol 928 (10) ◽  
pp. 58-63 ◽  
Author(s):  
V.I. Salnikov

The initial subjects of study are consistent sums of measurement errors. The errors are assumed to follow the normal law, but with a limit on the marginal error, Δpred = 2m. It is known that there is some number of terms ni, corresponding to a confidence interval, at which the value of the sum equals zero. The paradox is that the probability of such an event is zero; therefore, it is impossible to determine the value of ni at which the sum becomes zero. The article proposes instead to consider the event that a sum of errors stays within the 2m limits with a confidence level of 0.954. Within the group, all the sums then have a limiting error. These tolerances are proposed for use as discrepancy tolerances in geodesy instead of 2m·√ni. The concept of "the law of the truncated normal distribution with Δpred = 2m" is suggested to be introduced.
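A hedged simulation sketch (not from the article): draw measurement errors from a normal law truncated at the marginal error ±2m, sum n of them, and compare the empirical 95.4 % bound of the sums with the classical geodetic tolerance 2m·√n. The truncated law yields a tighter bound.

```python
import numpy as np

rng = np.random.default_rng(2)
m, n, trials = 1.0, 9, 50_000

# Rejection sampling: generate extra draws, discard those beyond ±2m,
# then keep the first n accepted errors in each trial and sum them.
draws = rng.normal(0.0, m, (trials, 4 * n))
accepted = np.abs(draws) <= 2 * m
keep = accepted & (np.cumsum(accepted, axis=1) <= n)
sums = np.where(keep, draws, 0.0).sum(axis=1)

bound = np.quantile(np.abs(sums), 0.954)
print(bound, 2 * m * np.sqrt(n))  # empirical bound vs classical 2m*sqrt(n)
```

The truncated distribution has a smaller variance than the untruncated normal, which is why the simulated tolerance falls below 2m·√n.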


2021 ◽  
pp. 1-22
Author(s):  
Daisuke Kurisu ◽  
Taisuke Otsu

This paper studies the uniform convergence rates of Li and Vuong’s (1998, Journal of Multivariate Analysis 65, 139–165; hereafter LV) nonparametric deconvolution estimator and its regularized version by Comte and Kappus (2015, Journal of Multivariate Analysis 140, 31–46) for the classical measurement error model, where repeated noisy measurements on the error-free variable of interest are available. In contrast to LV, our assumptions allow unbounded supports for the error-free variable and measurement errors. Compared to Bonhomme and Robin (2010, Review of Economic Studies 77, 491–533) specialized to the measurement error model, our assumptions do not require existence of the moment generating functions of the square and product of repeated measurements. Furthermore, by utilizing a maximal inequality for the multivariate normalized empirical characteristic function process, we derive uniform convergence rates that are faster than the ones derived in these papers under such weaker conditions.


2000 ◽  
Vol 30 (2) ◽  
pp. 306-310 ◽  
Author(s):  
M S Williams ◽  
H T Schreuder

Assuming volume equations with multiplicative errors, we derive simple conditions for determining when measurement error in total height is large enough that only using tree diameter, rather than both diameter and height, is more reliable for predicting tree volumes. Based on data for different tree species of excurrent form, we conclude that measurement errors up to ±40% of the true height can be tolerated before inclusion of estimated height in volume prediction is no longer warranted.
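An illustrative simulation of the trade-off described above (the model form and coefficients are assumed, not the paper's): with large enough random error in measured height, a diameter-only volume model predicts better than a diameter-and-height model fed the erroneous heights.

```python
import numpy as np

rng = np.random.default_rng(3)
D = rng.uniform(20.0, 60.0, 5000)                          # diameters, cm
H = 1.3 * D ** 0.8 * np.exp(rng.normal(0.0, 0.1, D.size))  # heights with natural scatter
V = 1e-4 * D ** 1.8 * H ** 1.1                             # "true" volumes (assumed model)

H_meas = H * (1.0 + rng.uniform(-0.4, 0.4, H.size))        # up to ±40 % height error
V_dh = 1e-4 * D ** 1.8 * H_meas ** 1.1                     # D-and-H prediction
V_d = 1e-4 * D ** 1.8 * (1.3 * D ** 0.8) ** 1.1            # D-only, mean H-D relation

rmse_dh = np.sqrt(np.mean((V_dh - V) ** 2))
rmse_d = np.sqrt(np.mean((V_d - V) ** 2))
print(rmse_dh, rmse_d)  # the D-only model wins at this error level
```

The comparison hinges on the same two quantities the paper balances: the natural scatter of height about its diameter relation versus the size of the height measurement error.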


2002 ◽  
pp. 323-332 ◽  
Author(s):  
A Sartorio ◽  
G De Nicolao ◽  
D Liberati

OBJECTIVE: The quantitative assessment of gland responsiveness to exogenous stimuli is typically carried out using the peak value of the hormone concentrations in plasma, the area under its curve (AUC), or through deconvolution analysis. However, none of these methods is satisfactory, due to either sensitivity to measurement errors or various sources of bias. The objective was to introduce and validate an easy-to-compute responsiveness index, robust in the face of measurement errors and interindividual variability of kinetics parameters. DESIGN: The new method has been tested on responsiveness tests for the six pituitary hormones (using GH-releasing hormone, thyrotrophin-releasing hormone, gonadotrophin-releasing hormone and corticotrophin-releasing hormone as secretagogues), for a total of 174 tests. Hormone concentrations were assayed in six to eight samples between -30 min and 120 min from the stimulus. METHODS: An easy-to-compute direct formula has been worked out to assess the 'stimulated AUC', that is the part of the AUC of the response curve depending on the stimulus, as opposed to pre- and post-stimulus spontaneous secretion. The weights of the formula have been reported for the six pituitary hormones and some popular sampling protocols. RESULTS AND CONCLUSIONS: The new index is less sensitive to measurement error than the peak value. Moreover, it provides results that cannot be obtained from a simple scaling of either the peak value or the standard AUC. Future studies are needed to show whether the reduced sensitivity to measurement error and the proportionality to the amount of released hormone render the stimulated AUC indeed a valid alternative to the peak value for the diagnosis of the different pathophysiological states, such as, for instance, GH deficits.
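A sketch of a "stimulated AUC" in the spirit of the abstract: the trapezoidal area of the post-stimulus response curve minus the area attributable to the pre-stimulus baseline, expressed as a weighted sum of the sampled concentrations. Sampling times and hormone values are illustrative, not taken from the study.

```python
import numpy as np

t = np.array([-30.0, 0.0, 15.0, 30.0, 60.0, 90.0, 120.0])  # minutes from stimulus
c = np.array([2.0, 2.1, 9.0, 14.0, 10.0, 6.0, 3.5])        # hormone concentration

baseline = c[t <= 0].mean()               # spontaneous pre-stimulus secretion level
tp, cp = t[t >= 0], c[t >= 0]
auc_total = ((cp[1:] + cp[:-1]) / 2 * np.diff(tp)).sum()   # trapezoidal post-stimulus AUC
auc_stim = auc_total - baseline * (tp[-1] - tp[0])         # subtract baseline area
print(auc_stim)
```

Averaging several samples into the baseline, rather than using a single peak value, is what makes an index like this less sensitive to the measurement error in any one assay.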


1999 ◽  
Vol 56 (7) ◽  
pp. 1234-1240
Author(s):  
W R Gould ◽  
L A Stefanski ◽  
K H Pollock

All catch-effort estimation methods implicitly assume catch and effort are known quantities, whereas in many cases, they have been estimated and are subject to error. We evaluate the application of a simulation-based estimation procedure for measurement error models (J.R. Cook and L.A. Stefanski. 1994. J. Am. Stat. Assoc. 89: 1314-1328) in catch-effort studies. The technique involves a simulation component and an extrapolation step, hence the name SIMEX estimation. We describe SIMEX estimation in general terms and illustrate its use with applications to real and simulated catch and effort data. Correcting for measurement error with SIMEX estimation resulted in population size and catchability coefficient estimates that were substantially less than naive estimates, which ignored measurement errors in some cases. In a simulation of the procedure, we compared estimators from SIMEX with "naive" estimators that ignore measurement errors in catch and effort to determine the ability of SIMEX to produce bias-corrected estimates. The SIMEX estimators were less biased than the naive estimators but in some cases were also more variable. Despite the bias reduction, the SIMEX estimator had a larger mean squared error than the naive estimator for one of two artificial populations studied. However, our results suggest the SIMEX estimator may outperform the naive estimator in terms of bias and precision for larger populations.
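A hedged sketch of the SIMEX idea on a toy regression (not the paper's catch-effort model): measurement error in x attenuates the naive OLS slope; SIMEX re-adds noise at multiples lam of the known error variance, refits the naive estimator each time, and extrapolates the trend back to lam = -1, the hypothetical error-free case.

```python
import numpy as np

rng = np.random.default_rng(4)
n, sigma_u = 2000, 1.0                         # known measurement-error s.d.
x_true = rng.normal(0.0, 2.0, n)
y = 3.0 * x_true + rng.normal(0.0, 0.5, n)     # true slope is 3.0
x_obs = x_true + rng.normal(0.0, sigma_u, n)   # error-prone covariate

def naive_slope(x):
    return np.polyfit(x, y, 1)[0]              # OLS, ignoring the x error

# Simulation step: average naive slopes over re-noised copies of x_obs.
lams = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
slopes = [np.mean([naive_slope(x_obs + np.sqrt(l) * sigma_u * rng.normal(size=n))
                   for _ in range(50)]) for l in lams]

# Extrapolation step: quadratic fit of slope(lam), evaluated at lam = -1.
simex = np.polyval(np.polyfit(lams, slopes, 2), -1.0)
naive = naive_slope(x_obs)
print(naive, simex)  # SIMEX estimate lies closer to the true slope 3.0
```

The extra variability of the SIMEX estimator noted in the abstract shows up here too: the extrapolation step magnifies the simulation noise in the fitted slopes.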


Dose-Response ◽  
2005 ◽  
Vol 3 (4) ◽  
Author(s):  
Kenny S. Crump

Although statistical analyses of epidemiological data usually treat the exposure variable as being known without error, estimated exposures in epidemiological studies often involve considerable uncertainty. This paper investigates the theoretical effect of random errors in exposure measurement upon the observed shape of the exposure response. The model utilized assumes that true exposures are log-normally distributed, and multiplicative measurement errors are also log-normally distributed and independent of the true exposures. Under these conditions it is shown that whenever the true exposure response is proportional to exposure to a power r, the observed exposure response is proportional to exposure to a power K, where K < r. This implies that the observed exposure response exaggerates risk, and by arbitrarily large amounts, at sufficiently small exposures. It also follows that a truly linear exposure response will appear to be supra-linear—i.e., a linear function of exposure raised to the K-th power, where K is less than 1.0. These conclusions hold generally under the stated log-normal assumptions whenever there is any amount of measurement error, including, in particular, when the measurement error is unbiased either in the natural or log scales. Equations are provided that express the observed exposure response in terms of the parameters of the underlying log-normal distribution. A limited investigation suggests that these conclusions do not depend upon the log-normal assumptions, but hold more widely. Because of this problem, in addition to other problems in exposure measurement, shapes of exposure responses derived empirically from epidemiological data should be treated very cautiously. In particular, one should be cautious in concluding that the true exposure response is supra-linear on the basis of an observed supra-linear form.
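A simulation of the attenuation result (distribution parameters assumed for illustration): the true response is proportional to exposure to the power r, measurement error is multiplicative log-normal, and the observed power K from a log-log regression against measured exposure comes out below r, so a truly linear response (r = 1) looks supra-linear.

```python
import numpy as np

rng = np.random.default_rng(5)
r, n = 1.0, 200_000
x = rng.lognormal(0.0, 1.0, n)           # true exposures, log-normal
z = x * rng.lognormal(0.0, 0.5, n)       # measured exposures, unbiased in log scale
response = x ** r                        # truly linear exposure response

K = np.polyfit(np.log(z), np.log(response), 1)[0]
print(K)  # about sigma_x^2 / (sigma_x^2 + sigma_u^2) = 1 / 1.25 = 0.8 < r
```

In log scale this is the classical regression-attenuation factor, which is why K < r holds for any amount of measurement error under the stated log-normal assumptions.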


2021 ◽  
Author(s):  
Simon Schüppler ◽  
Roman Zorn ◽  
Hagen Steger ◽  
Philipp Blum

The measurement of the undisturbed ground temperature (UGT) serves to design low-temperature geothermal systems, in particular borehole heat exchangers (BHEs), and to monitor shallow aquifers. Wireless and miniaturized probes such as the Geosniff (GS) measurement sphere, which are characterized by a self-sufficient energy supply and equipped with pressure and temperature sensors, are increasingly being used for the measurement of highly resolved vertical temperature profiles. The measurement probe sinks along the course of the BHE with a selectable measurement frequency to the bottom of the BHE and is usable for initial measurements as well as long-term groundwater monitoring. To ensure quality assurance and further improvement of this emerging technology, the analysis of measurement errors and uncertainties of wireless temperature measurements (WTMs) is indispensable. Thus, we provide an empirical laboratory analysis of random, systematic, and dynamic measurement errors, which lead to the measurement uncertainty of WTMs using the GS as a representative device. We subsequently transfer the analysed uncertainty to measured vertical temperature profiles of the undisturbed ground at a BHE site in Karlsruhe, Germany. The precision and accuracy of 0.011 K and -0.11 K, respectively, ensure a high reliability of the GS measurements. The largest measurement uncertainty is obtained within the first five meters of descent, resulting from the thermal time constant τ of 4 s. The measured temperature profiles are qualitatively compared with common Distributed Temperature Sensing (DTS) using fiber optic cables and punctual Pt-100 sensors. Wireless probes are also suitable for correcting temperature profiles recorded with fiber optics, which showed systematic errors of up to -0.93 K. Various boundary conditions, such as the inclination of the BHE pipes or changes in the viscosity and density of the BHE fluid, affect the descent rate of the GS by up to 40 %. We additionally provide recommendations for technical implementations of future measurement probes and contribute to an improved understanding and further development of WTMs.
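A sketch of why the thermal time constant matters during descent (only τ = 4 s is taken from the abstract; the temperature profile is assumed): a sensor behaving as a first-order lag smooths the true temperature along the descent, so the largest dynamic error appears where the temperature changes fastest, i.e. in the first metres.

```python
import numpy as np

tau, dt = 4.0, 0.1                       # time constant and integration step, s
t = np.arange(0.0, 60.0, dt)
true_T = 12.0 + 3.0 * np.exp(-t / 10.0)  # assumed profile seen by the sinking probe

# First-order sensor lag, integrated with an explicit Euler step:
# dT_meas/dt = (T_true - T_meas) / tau
meas = np.empty_like(true_T)
meas[0] = true_T[0]
for i in range(1, t.size):
    meas[i] = meas[i - 1] + dt * (true_T[i] - meas[i - 1]) / tau

err = np.abs(meas - true_T)
print(err.max(), t[err.argmax()])        # peak error occurs a few seconds in
```

The quasi-steady error is roughly τ times the local temperature gradient, which matches the abstract's observation that the first few metres of descent dominate the uncertainty.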

