error sources
Recently Published Documents

Total documents: 823 (last five years: 175)
H-index: 40 (last five years: 5)

Author(s): Manuel Rodrigues, Pierre Touboul, Gilles Metris, Alain Robert, Oceane Dhuicque, ...

Abstract The MICROSCOPE mission aims to test the Weak Equivalence Principle (WEP) in orbit with an unprecedented precision of 10⁻¹⁵ on the Eötvös parameter, thanks to electrostatic accelerometers on board a drag-free microsatellite. The precision of the test is determined by statistical errors, due to environmental and instrument noise, and by systematic errors, to which this paper is devoted. Systematic error sources can be divided into three categories: external perturbations, such as the residual atmospheric drag or the gravity gradient at the satellite altitude; perturbations linked to the satellite design, such as thermal or magnetic perturbations; and perturbations from sources internal to the instrument. Each systematic error is evaluated or bounded in order to set a reliable upper bound on the uncertainty of the WEP parameter estimate.
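As a point of reference for the quantity being bounded, the following is a minimal Python sketch (placeholder numbers, not MICROSCOPE data) of the standard Eötvös parameter, the relative difference of the free-fall accelerations of two test masses of different composition:

# Standard Eotvos-parameter definition: eta = 2 * (a_A - a_B) / (a_A + a_B),
# where a_A and a_B are the measured accelerations of the two test masses.
def eotvos_parameter(a_A: float, a_B: float) -> float:
    return 2.0 * (a_A - a_B) / (a_A + a_B)

# Placeholder example: at ~8 m/s^2 of Earth gravity at orbital altitude, an acceleration
# difference of ~8e-15 m/s^2 corresponds to eta ~ 1e-15, the precision targeted by MICROSCOPE.
print(eotvos_parameter(8.0 + 8e-15, 8.0))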


2022
Author(s): Mingjie Luo, Yinqiu Ji, Douglas W. Yu

The accurate extraction of species-abundance information from DNA-based data (metabarcoding, metagenomics) could contribute usefully to diet reconstruction and quantitative food webs, the inference of species interactions, the modelling of population dynamics and species distributions, the biomonitoring of environmental state and change, and the inference of false positives and negatives. However, capture bias, capture noise, species pipeline biases, and pipeline noise all combine to inject error into DNA-based datasets. We focus on methods for correcting the latter two error sources, as the first two are addressed extensively in the ecological literature. To extract abundance information, it is useful to distinguish two concepts. (1) Across-species quantification describes relative species abundances within one sample. (2) In contrast, within-species quantification describes how the abundance of each individual species varies from sample to sample, as in a time series, an environmental gradient, or different experimental treatments. Firstly, we review methods to remove species pipeline biases and pipeline noise. Secondly, we demonstrate experimentally (with a detailed protocol) how to use a 'DNA spike-in' to remove pipeline noise and recover within-species abundance information. We also introduce a statistical estimator that can partially remove pipeline noise from datasets that lack a physical DNA spike-in.
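As a minimal illustration of the spike-in idea (not the authors' protocol or statistical estimator), the sketch below divides each species' read count by the read count of a physical spike-in added in a fixed amount to every sample, which removes sample-to-sample pipeline noise such as differences in sequencing depth:

import numpy as np

# reads[i, j]: read count of species j in sample i (placeholder numbers);
# spike[i]: read count of the fixed-amount DNA spike-in in sample i.
reads = np.array([[120, 30, 5],
                  [240, 55, 12],
                  [ 60, 18, 2]], dtype=float)
spike = np.array([50.0, 100.0, 25.0])

# Dividing by the spike-in recovers within-species abundance information that is
# comparable across samples (rows), despite varying total pipeline output per sample.
within_species_index = reads / spike[:, None]
print(within_species_index)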


Author(s): Goeun Kim, In-Ung Song, Hagyong Kihm, Ho-Soon Yang

Abstract We propose an astigmatism correction method for the subaperture-stitching Hindle test used to measure hyperbolic convex aspheres. Astigmatic wavefront errors arise from a misaligned Hindle setup, mechanical runout errors of the rotational motion used for stitching, and the surface error of the target itself. Because these errors are combined, they cannot be separated in conventional subaperture-stitching Hindle tests. We exploited the rotational periodicity of each error to distinguish the surface figure error from the other astigmatic error sources and rectified the Hindle test results with respect to third-order astigmatism. Using the subaperture-stitching Hindle test, we averaged two sets of measurement data with a 180° rotational phase difference between them to calculate the astigmatic surface error. The proposed method was verified experimentally by comparing it with results from a commercial stitching interferometer from QED Technologies; only subnanometer differences in the root-mean-square values were obtained. The proposed method therefore calibrates the system errors arising from the test-surface wedge and the rotational decenter easily, reducing mechanical costs and alignment effort and making the test more accessible than one relying on a sophisticated alignment mechanism.
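As a generic illustration of the rotational-periodicity argument (this is not the paper's derivation), averaging two maps acquired with a 180° rotational phase difference cancels angular components of odd order while leaving even-order components such as astigmatism untouched, since cos(n(θ + π)) = (−1)ⁿ cos(nθ):

import numpy as np

# Placeholder azimuthal profiles, not measured wavefronts.
theta = np.linspace(0.0, 2.0 * np.pi, 360, endpoint=False)
once_per_rev = 0.8 * np.cos(theta)            # odd order (n = 1), e.g. runout-type error
astigmatism  = 0.3 * np.cos(2.0 * theta)      # even order (n = 2), surface astigmatism

map_0   = once_per_rev + astigmatism
map_180 = 0.8 * np.cos(theta + np.pi) + 0.3 * np.cos(2.0 * (theta + np.pi))

average = 0.5 * (map_0 + map_180)
print(np.allclose(average, astigmatism))      # True: the odd-order term has cancelled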


Materials, 2021, Vol. 15 (1), pp. 279
Author(s): Farshad Abbasi, Alex Sarasua, Javier Trinidad, Nagore Otegi, Eneko Saenz de Argandoña, ...

Today’s stamping simulations ignore the elastic deformation of the press and tooling system by assuming rigid behavior and a perfect press stroke. In reality, however, the press and tool components deform elastically, and this deformation is one of the major error sources in the final adjustment and blue-spotting of the dies. To tackle this issue, this study proposes a new approach that represents the press stiffness by a substitute model composed of cost-effective shell and beam elements. The substitute model was calibrated using full-scale measurements, in which a 20,000 kN trial press was experimentally characterized by measuring its deformation under static loads. To examine the robustness of the substitute model, a medium-size tool and a large-size tool were simulated together with it. To this end, a B-pillar tool was re-machined based on the substitute-model results, and a new cambering procedure was proposed and validated through the blue-painting procedure. The newly developed substitute model replicated the global stiffness of the press with high accuracy and an affordable calculation time. Implementing these findings can help toolmakers eliminate most reworking and home-line trials.
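As a minimal sketch of the kind of calibration step implied by the static-load tests (placeholder numbers, not the paper's measurements), the global press stiffness that the shell/beam substitute model must reproduce can be fitted from measured deflections via F = kδ:

import numpy as np

forces_kN     = np.array([2000.0, 5000.0, 10000.0, 15000.0, 20000.0])  # applied static loads
deflection_mm = np.array([0.11,   0.27,   0.55,    0.83,    1.10])      # measured deflections

# Least-squares slope through the origin: equivalent global stiffness in kN/mm.
k = np.sum(forces_kN * deflection_mm) / np.sum(deflection_mm ** 2)
print(f"equivalent press stiffness ~ {k:.0f} kN/mm")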


2021, Vol. 96 (1)
Author(s): Iván Herrera Pinzón, Markus Rothacher, Stefan Riepl

Abstract The precise estimation of geodetic parameters using single- and double-differenced SLR observations is investigated. While differencing of observables is a standard approach in GNSS processing, double differences of simultaneous SLR observations are practically impossible to obtain because SLR observes only one satellite at a time. Nevertheless, the availability of co-located SLR telescopes and the alternative concept of quasi-simultaneity allow SLR differences to be formed under certain assumptions, thus enabling the use of these processing strategies. These differences are in principle almost free of both satellite- and station-specific error sources, and they are shown to be a valuable tool for obtaining relative coordinates and range biases and for validating local ties. Tested with the two co-located SLR telescopes at the Geodetic Observatory Wettzell (Germany), using SLR observations to GLONASS and LAGEOS, the developed differencing approach shows that single- and double-difference residuals at the millimetre level can be obtained, and that parameters such as station range biases and the local baseline vector can be estimated with millimetre-level precision and an accuracy comparable to traditional terrestrial survey methods. The presented SLR differences thus constitute a valuable alternative for monitoring local baselines and estimating geodetic parameters.
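The differencing concept itself is compact enough to sketch (illustrative placeholder ranges, not the authors' processing chain): a single difference between two co-located stations observing the same satellite at quasi-simultaneous epochs removes satellite-specific errors, and a double difference between two satellites additionally removes station-specific biases:

def single_difference(range_station_a: float, range_station_b: float) -> float:
    # One satellite, two stations: common satellite-related errors cancel.
    return range_station_a - range_station_b

def double_difference(sd_satellite_1: float, sd_satellite_2: float) -> float:
    # Two satellites: common station-related biases cancel as well.
    return sd_satellite_1 - sd_satellite_2

# A 5 mm range bias at station A enters both single differences identically
# and therefore drops out of the double difference (placeholder ranges in metres).
bias_a = 0.005
sd1 = single_difference(19405123.412 + bias_a, 19405120.318)
sd2 = single_difference(22117884.207 + bias_a, 22117880.991)
print(double_difference(sd1, sd2))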


Metrologia, 2021
Author(s): Ellie Molloy, Peter Saunders, Annette Koo

Abstract Goniometric measurements are essential for the determination of many optical quantities, and quantifying the effects of errors in the rotation axes on these quantities is a complex task. In this paper, we show how a measurement model for a four-axis goniometric system can be developed to allow the effects of alignment and rotation errors to be included in the uncertainty of the measurement. We use three different computational methods to propagate the uncertainties due to several error sources through the model to the rotation angles and then to the measurement of bidirectional reflectance and integrated diffuse reflectance, a task that would otherwise be intractable. While all three methods give the same result, the GTC Python package is the simplest and intrinsically provides a full uncertainty budget, including all correlations between measurement parameters. We then demonstrate how the development of a measurement model and the use of GTC has improved our understanding of the system. As a consequence, taking advantage of negative correlations between measurements in different geometries allows us to minimise the total uncertainty in integrated diffuse reflectance, lowering the standard uncertainty from 0.0029 to 0.0015.
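The propagation task the abstract describes can be illustrated with a minimal Monte Carlo sketch (a toy measurement equation with assumed numbers, not the paper's four-axis goniometer model); the GTC package performs the analogous first-order GUM propagation analytically, including correlations:

import numpy as np

rng = np.random.default_rng(0)
n = 100000

# Assumed 0.05 deg standard uncertainty on an axis-alignment error around a 45 deg setting.
theta = np.deg2rad(45.0 + rng.normal(0.0, 0.05, n))

# Toy measurement equation: a cosine-weighted reflectance-like signal.
signal = 0.92 * np.cos(theta)
print(signal.mean(), signal.std())   # propagated value and its standard uncertainty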


Author(s): Américo Scotti, Márcio Andrade Batista, Mehdi Eshagh

Abstract Power is an indirect measurand, determined by processing voltage and current analogue signals through calculations. Using arc welding as a case study, the objective of this work was to provide a sound basis for power calculation. Based on the definitions of correlation and covariance in statistics, a mathematical demonstration was developed to point out the difference between the product of the two averages, $P = \overline{U}\,\overline{I}$, and the average of the instantaneous products, $P = \overline{UI}$. Complementarily, a brief review of U and I waveform distortion sources was presented, emphasising the difference between signal standard deviations and measurement errors. It was demonstrated that the product of the two averages equals the average of the products only in specific conditions, namely when the covariance between U and I is zero. It was concluded that statistical correlation easily flags the interrelation between the signals and, assisted by covariance, quantifies the discrepancy between the two approaches. Finally, although these statistics are easy to compute, it is proposed that power always be calculated as the average of the instantaneous U and I products. It is also proposed that measurement error sources be identified and mitigated, since they predictably affect the accuracy of the power calculation.
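A short numerical check of the distinction (synthetic pulsed-arc-style waveforms, not measured data) shows that the average of the instantaneous products exceeds the product of the averages by exactly the covariance between U and I:

import numpy as np

t = np.linspace(0.0, 1.0, 10000, endpoint=False)
pulse = (np.sin(2.0 * np.pi * 100.0 * t) > 0).astype(float)   # 100 Hz pulsed waveform

U = 22.0 + 8.0 * pulse      # voltage rises during the pulse
I = 120.0 + 200.0 * pulse   # current rises during the pulse, so U and I are correlated

P_instantaneous = np.mean(U * I)             # average of the products (recommended)
P_of_averages   = np.mean(U) * np.mean(I)    # product of the averages (biased here)

print(P_instantaneous, P_of_averages)
print(P_instantaneous - P_of_averages)       # equals cov(U, I):
print(np.cov(U, I, bias=True)[0, 1])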


2021, Vol. 39 (4), pp. 571-586
Author(s): German Moreno, Julio M. Singer, Edward J. Stanek III

We develop best linear unbiased predictors (BLUP) of the latent values of labeled sample units selected from a finite population when there are two distinct sources of measurement error: endogenous, exogenous, or both. Usual target parameters are the population mean, the latent value associated with a labeled unit, or the latent value of the unit that will appear in a given position in the sample. We show how both types of measurement error affect the within-unit covariance matrices and indicate how the finite population BLUP may be obtained via standard software packages for fitting mixed models, in situations with either heteroskedastic or homoskedastic exogenous and endogenous measurement errors.
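As a minimal sketch of the route the abstract points to, namely obtaining unit-level predictions from a standard mixed-model fit, the following uses statsmodels' MixedLM on synthetic data with a random intercept per unit (this is not the paper's finite-population estimator, and the endogenous/exogenous error structure is not modelled here):

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
units = np.repeat(np.arange(8), 5)                  # 8 labeled units, 5 measurements each
latent = rng.normal(0.0, 2.0, 8)                    # latent value of each unit
y = 10.0 + latent[units] + rng.normal(0.0, 1.0, units.size)   # add measurement error

df = pd.DataFrame({"y": y, "unit": units})
fit = smf.mixedlm("y ~ 1", data=df, groups=df["unit"]).fit()

print(fit.params)            # estimated population mean (fixed intercept)
print(fit.random_effects)    # per-unit predicted effects (empirical BLUPs)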


2021, Vol. 3
Author(s): Simona Celi, Emanuele Vignali, Katia Capellini, Emanuele Gasparotti

The assessment of cardiovascular hemodynamics with computational techniques is establishing its fundamental contribution to modern clinical practice. Much research interest has focused on the aortic vessel: the study of aortic flow, pressure, and stresses underlies the understanding of complex pathologies such as aneurysms. Nevertheless, computational approaches are still affected by sources of error and uncertainty. These arise at different levels of the computational analysis and depend strongly on the type of approach adopted. In the current study, the effect of error sources was characterized for an aortic case. In particular, the geometry of a patient-specific aorta was segmented at different phases of the cardiac cycle for use in computational analysis. Different levels of surface smoothing were imposed to assess their influence on the numerical results. Then, three different simulation methods were applied to the same geometry: a rigid-wall computational fluid dynamics (CFD) simulation, a moving-wall CFD simulation based on radial basis functions (RBF), and a fluid-structure interaction (FSI) simulation. The differences between the implemented methods were quantified in terms of wall shear stress (WSS). In particular, for all cases, the systolic WSS and the time-averaged WSS (TAWSS) were computed.
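The time-averaged wall shear stress mentioned at the end is simply the cycle average of the WSS magnitude, TAWSS = (1/T) ∫ |WSS(t)| dt; a minimal sketch with a toy waveform (not simulation output) is:

import numpy as np

T = 0.8                                           # assumed cardiac period in seconds
t = np.linspace(0.0, T, 200, endpoint=False)
wss = 1.5 + 1.2 * np.sin(2.0 * np.pi * t / T)     # toy WSS history at one wall point, in Pa

# With uniform sampling over one full cycle, the mean of |WSS| approximates (1/T) * integral.
tawss = np.mean(np.abs(wss))
print(f"TAWSS ~ {tawss:.2f} Pa")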

