Improving of the Type A Uncertainty Evaluation by Refining the Measurement Data from a Priori Unknown Systematic Influences

Author(s):  
Zygmunt L. Warsza ◽  
Jerzy M. Korczyński


Author(s):  
Alessandro Ferrero ◽  
Simona Salicone ◽  
Harsha Vardhana Jetti

Since the GUM was published, measurement uncertainty has been defined in terms of the standard deviation of the probability distribution of the values that can reasonably be attributed to the measurand, and it has been evaluated using statistical or probabilistic methods. A debate has long been active among metrologists on whether a frequentist or a Bayesian approach should be followed to evaluate uncertainty. Nowadays the Bayesian approach, based on available a priori knowledge about the measurand, seems to prevail. This paper starts from the consideration that the Bayesian approach rests on the well-known Bayes theorem which, like all mathematical theorems, is valid only to the extent that the assumptions made to prove it hold. The main question, when following the Bayesian approach, is hence whether these assumptions are satisfied in practical cases, especially when the a priori information is combined with the information coming from the measurement data to refine the uncertainty evaluation. This paper considers a number of case studies to analyze when the Bayesian approach can be usefully and reliably employed, discussing the amount and pertinence of the available a priori knowledge.
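
As a minimal illustration of the kind of prior/data combination discussed above (not the authors' own procedure), the Python sketch below performs the textbook conjugate Gaussian update of a prior belief about a measurand with repeated observations; all names and numerical values are hypothetical.

```python
import numpy as np

# Hypothetical a priori knowledge about the measurand (Gaussian assumption)
mu_prior, sigma_prior = 10.00, 0.05        # prior mean and standard uncertainty

# Hypothetical repeated observations with known single-observation sigma
observations = np.array([10.03, 10.01, 10.04, 10.02])
sigma_obs = 0.02

# Conjugate normal-normal update: the posterior is again Gaussian
n = observations.size
w_prior = 1.0 / sigma_prior**2             # precision of the prior
w_data = n / sigma_obs**2                  # precision of the sample mean
mu_post = (w_prior * mu_prior + w_data * observations.mean()) / (w_prior + w_data)
sigma_post = (w_prior + w_data) ** -0.5    # posterior standard uncertainty

print(f"posterior estimate: {mu_post:.4f} +/- {sigma_post:.4f}")
```

The validity of such an update rests exactly on the assumptions the paper questions: that the prior really describes the measurand and that the data follow the assumed noise model.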


2007 ◽  
Vol 56 (8) ◽  
pp. 95-106 ◽  
Author(s):  
P. Grau ◽  
S. Beltrán ◽  
M. de Gracia ◽  
E. Ayesa

This paper proposes a new methodology for the automatic characterization of the influent wastewater in WWTPs. With this methodology, model components are automatically estimated by means of optimization algorithms that combine a priori knowledge of the expected wastewater composition with experimental information from the available measurement data. The characterization is carried out on the basis of an extended list of model components in which each component is described by means of its elemental mass fractions. This makes it easy to establish relationships between model components and experimental data, and yields a general methodology applicable to any model used for biological wastewater treatment. The characterization of the Galindo-Bilbao wastewater influent according to this methodology has demonstrated its validity and its straightforward application to ASM1 influent characterization.
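
Under simplifying assumptions, the kind of optimization described above can be sketched as a regularized nonnegative least-squares problem mapping model component concentrations to measured elemental totals; the fraction matrix, measurement vector, prior vector, and weight below are hypothetical and not taken from the paper.

```python
import numpy as np
from scipy.optimize import lsq_linear  # bounded (nonnegative) least squares

# Hypothetical elemental mass fractions (rows: C, H, O, N; columns: model components)
F = np.array([[0.50, 0.45, 0.30],
              [0.07, 0.06, 0.05],
              [0.35, 0.40, 0.50],
              [0.08, 0.09, 0.15]])

y = np.array([120.0, 16.0, 95.0, 22.0])   # measured elemental totals (g/m3), hypothetical
x_prior = np.array([150.0, 60.0, 40.0])   # expected component concentrations, hypothetical
lam = 0.1                                 # weight of the a priori term

# Stack the measurement equations and the (weighted) prior equations,
# then solve a nonnegative least-squares problem for the component vector x.
A = np.vstack([F, lam * np.eye(3)])
b = np.concatenate([y, lam * x_prior])
res = lsq_linear(A, b, bounds=(0.0, np.inf))
print("estimated component concentrations:", res.x)
```

Increasing `lam` pulls the estimate towards the expected composition; decreasing it lets the measurement data dominate.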


2014 ◽  
Vol 613 ◽  
pp. 173-181 ◽  
Author(s):  
Anton Ionov ◽  
Boris Ionov ◽  
Nadezhda Chernysheva ◽  
Egor Plotkin

The article presents a technique for on-line uncertainty calculation in non-contact temperature measurements, which can be used as a basic algorithm for smart measuring systems, e.g. intelligent radiation thermometers. As the initial data for the uncertainty evaluation we use a priori information about the heat detector characteristics, calibration curves along with their related uncertainties, the estimated ambient temperature, and external information on the correction factor, which should be supplied in probabilistic form. We suggest using models based on characteristic functions in order to evaluate the combined uncertainty. In our opinion, the discussed principles are applicable to many other areas of measurement, especially where it is critical to improve the effectiveness of subsequent decision-making.
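
The characteristic-function approach mentioned above can be illustrated with a short sketch: the characteristic function (CF) of a sum of independent input quantities is the product of their CFs, and the combined distribution is recovered by numerical inversion. The Gaussian and rectangular contributions below are hypothetical placeholders, not the paper's actual uncertainty budget.

```python
import numpy as np

# Hypothetical input quantities: a Gaussian calibration contribution and a
# rectangular (uniform) correction-factor contribution.
mu_g, sig_g = 0.0, 0.10                  # Gaussian: mean and standard uncertainty
a_u = 0.20                               # uniform on [-a_u, +a_u]

cf_gauss = lambda t: np.exp(1j * mu_g * t - 0.5 * (sig_g * t) ** 2)
cf_unif = lambda t: np.sinc(a_u * t / np.pi)   # sin(a t)/(a t); np.sinc(x) = sin(pi x)/(pi x)

# CF of the sum of independent quantities = product of the individual CFs.
t = np.linspace(-150.0, 150.0, 3001)
dt = t[1] - t[0]
cf_sum = cf_gauss(t) * cf_unif(t)

# Numerical inversion of the CF on a grid of output values gives the combined PDF.
x = np.linspace(-1.0, 1.0, 401)
dx = x[1] - x[0]
pdf = np.real((cf_sum[None, :] * np.exp(-1j * np.outer(x, t))).sum(axis=1)) * dt / (2 * np.pi)

# Combined standard uncertainty from the reconstructed distribution.
mean = np.sum(x * pdf) * dx
u_c = np.sqrt(np.sum((x - mean) ** 2 * pdf) * dx)
print(f"combined standard uncertainty ~ {u_c:.4f}")   # analytic: sqrt(0.10**2 + 0.20**2/3) ~ 0.153
```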


2017 ◽  
Author(s):  
Si-Wan Kim ◽  
Vijay Natraj ◽  
Seoyoung Lee ◽  
Hyeong-Ahn Kwon ◽  
Rokjin Park ◽  
...  

Abstract. Formaldehyde (HCHO) is either directly emitted from sources or produced during the oxidation of volatile organic compounds in the troposphere. It is possible to infer atmospheric HCHO concentrations using space-based observations, which may be useful for studying emissions and tropospheric chemistry at urban to global scales, depending on the quality of the retrievals. In the near future, an unprecedented volume of satellite-based HCHO measurement data will be available from both geostationary and polar-orbiting platforms. Therefore, it is essential to develop retrieval methods appropriate for the next-generation satellites that measure at higher spatial and temporal resolution than the current ones. In this study, we examine the importance of fine spatial and temporal resolution a priori profile information for the retrieval by conducting approximately 45 000 radiative transfer model calculations in the Los Angeles Basin megacity. Our analyses suggest that an air mass factor (AMF, ratio of slant columns to vertical columns) based on fine spatial and temporal resolution a priori profiles can better capture the spatial distributions of the enhanced HCHO plumes in an urban area than the nearly constant AMFs used for current operational products. For this urban area, the AMF values are inversely proportional to the magnitude of the HCHO mixing ratios in the boundary layer. Using our optimized model HCHO results in the Los Angeles Basin, which mimic the HCHO retrievals from future geostationary satellites, we illustrate the effectiveness of HCHO data from geostationary measurements for understanding and predicting tropospheric ozone and its precursors.
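
For readers unfamiliar with the AMF definition used above, the sketch below shows the usual layer-wise computation: the AMF is the scattering-weight average over the a priori profile, and the fitted slant column is divided by it to obtain the vertical column. The numbers are hypothetical; the example also illustrates why a profile enhanced in the boundary layer, where sensitivity is low, yields a lower AMF.

```python
import numpy as np

# Hypothetical a priori HCHO partial columns per model layer (molecules/cm^2),
# ordered surface -> top, and hypothetical scattering weights from a radiative
# transfer calculation (smaller near the surface, where sensitivity is lower).
partial_columns = np.array([4.0e15, 3.0e15, 1.5e15, 0.8e15, 0.3e15])
scattering_weights = np.array([0.45, 0.70, 0.95, 1.10, 1.20])

# Air mass factor: profile-weighted mean of the scattering weights.
amf = np.sum(scattering_weights * partial_columns) / np.sum(partial_columns)

# A fitted slant column is converted to a vertical column with this AMF.
slant_column = 6.5e15                     # hypothetical slant column density
vertical_column = slant_column / amf
print(f"AMF = {amf:.3f}, vertical column = {vertical_column:.3e} molec/cm^2")
```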


2008 ◽  
Vol 8 (6) ◽  
pp. 19063-19121 ◽  
Author(s):  
A. Stohl ◽  
P. Seibert ◽  
J. Arduini ◽  
S. Eckhardt ◽  
P. Fraser ◽  
...  

Abstract. A new analytical inversion method has been developed to determine the regional and global emissions of long-lived atmospheric trace gases. It exploits in situ measurement data from a global network and builds on backward simulations with a Lagrangian particle dispersion model. The emission information is extracted from the observed concentration increases over a baseline that is itself objectively determined by the inversion algorithm. The method was applied to two hydrofluorocarbons (HFC-134a, HFC-152a) and a hydrochlorofluorocarbon (HCFC-22) for the period from January 2005 to March 2007. Detailed sensitivity studies with synthetic as well as real measurement data were performed to quantify the influence of the a priori emissions and their uncertainties, as well as of the observation and model errors, on the results. It was found that the global a posteriori emissions of HFC-134a, HFC-152a and HCFC-22 all increased from 2005 to 2006. Large increases (21%, 16% and 18%, respectively) from 2005 to 2006 were found for China, whereas the emission changes in North America and Europe were modest. For Europe, the a posteriori emissions of HFC-134a and HFC-152a were slightly higher than the a priori emissions reported to the United Nations Framework Convention on Climate Change (UNFCCC). For HCFC-22, the a posteriori emissions for Europe were substantially (by almost a factor of 2) higher than the a priori emissions used, which were based on HCFC consumption data reported to the United Nations Environment Programme (UNEP). Combined with the reported strongly decreasing HCFC consumption in Europe, this suggests a substantial time lag between the reported timing of the HCFC-22 consumption and the actual timing of the HCFC-22 emission. Conversely, in China, where HCFC consumption is increasing rapidly according to the UNEP data, the a posteriori emissions are only about 40% of the a priori emissions. This reveals a substantial storage of HCFC-22 and a potential for future emissions in China. Deficiencies of the current global halocarbon measurement network's station locations with respect to estimating regional emissions are also discussed. Applications of the inversion algorithm to other greenhouse gases such as methane, nitrous oxide or carbon dioxide are foreseen for the future.
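
In generic notation (not necessarily the paper's exact formulation, which for instance includes the baseline in the inversion), an analytical Bayesian inversion of this type minimizes a quadratic cost function and has a closed-form a posteriori solution:

```latex
% Generic analytical Bayesian inversion (standard least-squares form)
J(\mathbf{x}) =
  (\mathbf{x}-\mathbf{x}_a)^{\mathrm{T}} \mathbf{S}_a^{-1} (\mathbf{x}-\mathbf{x}_a)
+ (\mathbf{H}\mathbf{x}-\mathbf{y})^{\mathrm{T}} \mathbf{S}_o^{-1} (\mathbf{H}\mathbf{x}-\mathbf{y})
\qquad\Longrightarrow\qquad
\hat{\mathbf{x}} = \mathbf{x}_a
+ \mathbf{S}_a \mathbf{H}^{\mathrm{T}}
  \bigl(\mathbf{H}\mathbf{S}_a\mathbf{H}^{\mathrm{T}} + \mathbf{S}_o\bigr)^{-1}
  (\mathbf{y} - \mathbf{H}\mathbf{x}_a)
```

Here x_a denotes the a priori emissions, S_a their error covariance, y the observed concentration enhancements over the baseline, H the source-receptor relationship obtained from the backward Lagrangian simulations, and S_o the combined observation and model error covariance.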


Author(s):  
Yanyan Wu ◽  
Prabhjot Singh

Registration refers to the process of aligning corresponding features in images or point data sets in the same coordinate system. Multimodal inspection is a growing trend in which an accurate measurement of the part is made by fusing data from different modalities, and registration is a key task in multimodal data fusion. The main obstacles to high-accuracy registration are the noise inherent in the measurement data and the lack of one-to-one correspondence between data from different modalities. We present methods to deal with outliers and noise in the measurement data in order to improve registration accuracy. The proposed algorithms operate on point sets. Our method distinguishes noise from accurate measurements using a new metric based on intrinsic geometric characteristics of the point set, including distance, surface normal, and curvature. It is unique in that it does not require a priori knowledge of the noise in the measurement data, so fully automatic registration is enabled. The proposed methods can be incorporated into any point-based registration technique; they were tested with the traditional ICP (Iterative Closest Point) algorithm and applied to registration among point, image, and mesh data. The proposed method can be applied to both rigid and non-rigid registration.
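
As a rough sketch of how a geometric-consistency metric could gate correspondences before a point-based registration step (the specific weighting below is illustrative, not the authors' metric), assuming per-point normals and curvatures have already been estimated:

```python
import numpy as np
from scipy.spatial import cKDTree

def correspondence_weights(src_pts, src_nrm, src_cur, dst_pts, dst_nrm, dst_cur,
                           d_scale=1.0, n_scale=0.5, c_scale=0.5):
    """Down-weight likely outliers/noise using distance, normal and curvature agreement."""
    tree = cKDTree(dst_pts)
    d, idx = tree.query(src_pts)                                       # nearest-neighbour matches
    normal_err = 1.0 - np.abs(np.sum(src_nrm * dst_nrm[idx], axis=1))  # 0 if normals align
    curv_err = np.abs(src_cur - dst_cur[idx])
    # Combined geometric consistency score -> weight in (0, 1]
    score = d / d_scale + normal_err / n_scale + curv_err / c_scale
    return idx, np.exp(-score)

# The returned weights can multiply the per-pair residuals inside each ICP
# iteration, so noisy points and non-overlapping regions barely influence
# the estimated rigid (or non-rigid) transformation.
```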


2010 ◽  
Vol 22 (05) ◽  
pp. 351-365 ◽  
Author(s):  
Junpeng Zhang ◽  
Sarang S. Dalal ◽  
Srikantan S. Nagarajan ◽  
Dezhong Yao

In some cases, different brain regions give rise to strongly coherent electrical neural activities; for example, pure-tone evoked activations of the bilateral auditory cortices exhibit strong coherence. Conventional second-order-statistics-based spatio-temporal algorithms, such as MUSIC (MUltiple SIgnal Classification) and beamforming, encounter difficulties in localizing such activities. In this paper, we propose a novel solution for this case. The key idea is to map the measurement data into a new data space through a transformation prior to localization. The orthogonal complement of the lead field matrix of the region to be suppressed is used as the transformation matrix. Using a priori knowledge or an independent imaging method, such as sLORETA (standardized LOw REsolution brain electromagnetic TomogrAphy), the coherent source regions can be preliminarily identified. Then, in the transformed data space, a conventional spatio-temporal method such as MUSIC can be used to localize the remaining coherent sources. Applying the method repeatedly localizes all the coherent sources. The algorithm was validated by simulation experiments as well as by reconstructions of real coherent bilateral auditory cortical activities.
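
A minimal sketch of the suppression transform described above, assuming a sensors-by-sources lead field matrix for the region identified a priori; the array sizes and random data below are hypothetical, and the subsequent MUSIC scan itself is not shown.

```python
import numpy as np

def suppression_transform(L_suppress):
    """Projector onto the orthogonal complement of the lead-field columns
    of the source region to be suppressed (shape: sensors x region sources)."""
    # SVD-based construction is numerically safer than the normal-equation form.
    U, s, _ = np.linalg.svd(L_suppress, full_matrices=False)
    rank = int(np.sum(s > s[0] * 1e-10))
    Ur = U[:, :rank]
    return np.eye(L_suppress.shape[0]) - Ur @ Ur.T

# Hypothetical example: 64 sensors, 3 lead-field columns for the identified
# region, 500 time samples of measurement data.
rng = np.random.default_rng(0)
L_region = rng.standard_normal((64, 3))
data = rng.standard_normal((64, 500))

T = suppression_transform(L_region)
data_transformed = T @ data        # contribution of the suppressed region is removed
# MUSIC (or another spatio-temporal scan) is then run on data_transformed,
# using T @ lead_field for the remaining candidate source locations.
```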

