Quantifying and Accounting for Uncertainty

Author(s):  
Yoram Rubin

This chapter deals with a wide range of issues with a common theme: coping with uncertainty. To this end, we look at the sources of uncertainty and the types of errors we need to deal with. We then explore methods for identifying these errors and for incorporating them into our predictions. This chapter extends our discussion of these topics in chapter 1, the discussion in chapter 3 of estimation under conditions of uncertainty, and image simulation using Monte Carlo (MC) techniques. A comprehensive treatment of uncertainty needs to address two different types of errors. The first type is model error, which arises from incorrect hypotheses and unmodeled processes (Gaganis and Smith, 2001), for example, from a poor choice of governing equations, incorrect boundary conditions and zonation geometry, and inappropriate selection of forcing functions (Carrera and Neuman, 1986b). The second type is parameter error. The parameters of groundwater models are always in error because of measurement errors, heterogeneity, and scaling issues. Ignoring the effects of model and parameter errors is likely to lead to errors in model selection, in the estimation of prediction uncertainty, and in the assessment of risk. Parameter error is treated extensively in the literature: once a model is defined, it is common practice to quantify the errors associated with estimating its parameters (cf. Kitanidis and Vomvoris, 1983; Carrera and Neuman, 1986a, b; Rubin and Dagan, 1987a, b; McLaughlin and Townley, 1996; Poeter and Hill, 1997). Modeling error is well recognized but more difficult to quantify. Consider, for example, an aquifer that appears to be of uniform conductivity. Parameter error quantifies the error in estimating this conductivity. Modeling error, on the other hand, includes elusive factors such as missing a meandering channel somewhere in the aquifer. This, in essence, is the difficulty in determining modeling error: parameter error can be roughly quantified from measurements if one assumes that the model is correct, but modeling error must represent all that the measurements and/or the modeler fail to capture. To evaluate model error, the perfect model would need to be known, which is never possible.
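As a rough illustration of the distinction drawn here, the sketch below propagates parameter error through a toy one-dimensional Darcy model by Monte Carlo: the conductivity estimate carries a stated uncertainty, and the resulting prediction interval reflects parameter error only. The model, numbers, and variable names are hypothetical, not taken from the chapter.

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy forward model: steady 1-D Darcy flux q = K * dh / L for a
# nominally uniform aquifer (hypothetical numbers for illustration).
def darcy_flux(K, dh=2.0, L=100.0):
    return K * dh / L

# Parameter error: log-conductivity estimated from sparse data,
# summarized by a mean and a standard deviation.
logK_mean, logK_std = np.log(1e-4), 0.5

# Monte Carlo propagation of parameter uncertainty to the prediction.
K_samples = np.exp(rng.normal(logK_mean, logK_std, size=10_000))
q_samples = darcy_flux(K_samples)

print(f"flux mean = {q_samples.mean():.2e} m/s, "
      f"95% interval = [{np.percentile(q_samples, 2.5):.2e}, "
      f"{np.percentile(q_samples, 97.5):.2e}]")

# Note: this interval reflects parameter error only. If the real aquifer
# contains an unmodeled meandering channel (model error), the true flux
# may fall outside it no matter how many samples are drawn.
```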

Author(s):  
Andrea Harris

This chapter explores the international and interdisciplinary backdrop of Lincoln Kirstein’s efforts to form an American ballet in the early 1930s. The political, economic, and cultural conditions of the Depression reinvigorated the search for an “American” culture. In this context, new openings for a modernist theory of ballet were created as intellectuals and artists from a wide range of disciplines endeavored to define the role of the arts in protecting against the dangerous effects of mass culture. Chapter 1 sheds new light on well-known critical debates in dance history between Kirstein and John Martin over whether ballet, with its European roots, could truly become “American” in contrast to modern dance. Was American dance going to be conceived in nationalist or transnationalist terms? That was the deeper conflict that underlay the ballet vs. modern dance debates of the early 1930s.


Metrologia ◽  
2021 ◽  
Author(s):  
Ralf D Geckeler ◽  
Matthias Schumann ◽  
Andreas Just ◽  
Michael Krause ◽  
Antti Lassila ◽  
...  

Abstract Autocollimators are versatile devices for angle metrology used in a wide range of applications in engineering and manufacturing. A modern electronic autocollimator generally features two measuring axes and can thus fully determine the surface normal of an optical surface relative to the autocollimator in space. Until recently, however, the calibration capabilities of the national metrology institutes were limited to plane angles: although it was possible to calibrate both measuring axes independently of each other, it was not feasible to determine their crosstalk when angular deflections were present in both axes simultaneously. To expand autocollimator calibrations from plane angles to spatial angles, PTB and VTT MIKES have created dedicated calibration devices which are based on different measurement principles and establish traceability of the measurand in different ways. Comparing calibrations of a transfer standard makes it possible to detect systematic measurement errors of the two devices and to evaluate the validity of their uncertainty budgets. The importance of measurand traceability via calibration for a broad spectrum of autocollimator applications is one of the motivating factors behind the creation of both devices and behind this comparison of the calibration capabilities of the two national metrology institutes; the comparison is the focus of the work presented here.
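As a hedged illustration of how such a comparison can be evaluated, the sketch below computes the normalized error E_n commonly used in interlaboratory comparisons; the abstract does not state which consistency statistic the authors use, and all numerical values here are hypothetical.

```python
import numpy as np

# Hypothetical calibration results for the same transfer standard:
# angle deviations (arcsec) measured by the two devices at a set of
# spatial-angle positions, with expanded uncertainties (k = 2).
dev_a = np.array([0.12, -0.05, 0.30, 0.08])   # device A (e.g. PTB)
U_a   = np.array([0.10,  0.10, 0.12, 0.10])
dev_b = np.array([0.05, -0.02, 0.10, 0.15])   # device B (e.g. VTT MIKES)
U_b   = np.array([0.15,  0.15, 0.15, 0.15])

# Normalized error: |E_n| <= 1 means the difference between the devices
# is covered by the combined expanded uncertainties, i.e. the
# uncertainty budgets look consistent at that position.
E_n = (dev_a - dev_b) / np.sqrt(U_a**2 + U_b**2)

for i, e in enumerate(E_n):
    flag = "ok" if abs(e) <= 1 else "inconsistent"
    print(f"position {i}: E_n = {e:+.2f} ({flag})")
```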


2021 ◽  
Author(s):  
Charles Enweugwu ◽  
Aghogho Monorien ◽  
Ikechukwu Mbeledogu ◽  
Adewale Dosunmu ◽  
Omowunmi Illedare

Abstract Most unitized pipelines in Nigeria are trunk lines that carry crude oil from flow stations to the terminals. Very few international oil and gas companies own and operate trunk lines in Nigeria; as a result, marginal field owners, independent producers, and some JV partners share the trunk lines for the sale of their crude. Because the injectors into the trunk lines use a wide range of non-compliant meters, substantial line losses due to measurement errors are introduced. Trunk lines are also exposed to leakages from sabotage, aged pipelines, and valve failures. The issue is how the owner of the trunk line should back-allocate these losses to the respective injectors. The Reverse Mass Balanced Methodology (RMBM) is currently in use, having replaced the Interim Methodology (IM) in 2017. Under RMBM, crude trunk-line losses have been found to be unaccountable, and its proportionate rule for distributing the losses to producers is inequitable; field owners have expressed dissatisfaction with unfair deductions by trunk-line operators. This study developed a procedure and an algorithm for estimating each producer's crude contribution at the terminal and for equitably distributing trunk-line losses to the producers irrespective of meter type, meter factor, leakages, and sporadic theft on the trunk lines. The study also identified two alternatives to the RMBM: the use of artificial intelligence (AI) and flow-based models. The results showed that the flow-based model accounts for both individual and group losses not accounted for in the RMBM, and allocates and corrects for leak volumes at the point of leak instead of at the terminal. This is a significant improvement over the RMBM.
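The sketch below renders the proportional (pro-rata) allocation rule that the study criticizes, under purely hypothetical injection and terminal volumes; it is not the RMBM or the study's algorithm, only a minimal illustration of proportional loss sharing.

```python
# Minimal sketch of pro-rata back-allocation of trunk-line losses
# (the proportional rule the study finds inequitable). All volumes
# are hypothetical illustration values in barrels.
injections = {"producer_A": 50_000, "producer_B": 30_000, "producer_C": 20_000}

terminal_receipt = 95_000                        # metered volume at the terminal
total_injected = sum(injections.values())
total_loss = total_injected - terminal_receipt   # 5,000 bbl line loss

# Proportional rule: each producer bears losses in proportion to its
# injected share, regardless of where along the line the loss occurred.
for producer, vol in injections.items():
    share = vol / total_injected
    allocated = vol - share * total_loss
    print(f"{producer}: injected {vol:,}, allocated {allocated:,.0f} "
          f"(bears {share * total_loss:,.0f} of the loss)")

# A flow-based model would instead locate each leak along the line and
# charge it only to producers whose crude had passed the leak point.
```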


2021 ◽  
Vol 6 (5) ◽  
pp. 38-44
Author(s):  
O. D. Saliuk ◽  
P. H. Gerasimchuk ◽  
L. O. Zaitsev ◽  
I. I. Samoilenko ◽  
...  

This article presents a review of foreign and domestic literature sources devoted to a pressing problem of modern dentistry: the treatment of inflammatory diseases of periodontal tissues, gingivitis and periodontitis. The complex approach to their treatment involves prescribing a significant number of pharmacotherapeutic drugs. Therapeutic failures and iatrogenic complications have considerably increased the interest of both doctors and the public in plant-based medicinal products. The purpose of the study is to analyze the scientific literature of the past 10 years on the use of plant-based medicinal products for the treatment of periodontal inflammatory diseases. Materials and methods. Comprehensive and systematic analysis of the literature. Review and discussion. The analysis of information sources on the use of plant-based medicinal products in dentistry, both on their own and as components of therapeutic and prophylactic agents, established that the modern assortment of plant-based preparations on the pharmaceutical market of Ukraine is rather limited. The emergence of new plant-based formulations that have been tested under conditions of experimental pathology and still require an evidence-based clinical foundation is noted. Plant-based preparations used for the treatment of inflammatory periodontal diseases contain vitamins, biologically active substances, glycosides, and alkaloids, and offer a wide range of actions: antiseptic, anti-inflammatory, regenerating, hemostatic, and antioxidative. Data are summarized on the most frequently used plant-based preparations, such as extracts of chamomile, calendula, hypericum, plantain, kalanchoe, aloe, eucalyptus, milfoil, nettle, and calamus, as well as herbal mixtures. The medicinal agents considered are mainly recommended for local treatment of periodontal diseases in the form of dental care products, mouth rinses, gels, chewing gums, and herbal elixirs. It is known that the complex treatment of periodontal diseases includes a general influence on the body. The properties of green tea, with its wide range of actions, are examined; owing to its antioxidant properties, it can be a healthy alternative for controlling destructive changes in periodontal diseases. Attention is drawn to the proposed unique natural complex "Resverazin", notable for its wide range of pharmacological action, low toxicity, and relative safety: the drug produces antioxidant, anti-inflammatory, immunostimulating, vasodilative, and neuroprotective actions. Conclusion. Based on the literature analysis, it can be concluded that the accumulated experimental and clinical data on the therapeutic properties of plants demonstrate the promise of their use in the complex treatment of inflammatory periodontal diseases. Further studies are needed to confirm the effectiveness of these medicinal plants.


Author(s):  
Patricia Penabad Durán ◽  
Paolo Di Barba ◽  
Xose Lopez-Fernandez ◽  
Janusz Turowski

Purpose – The purpose of this paper is to describe a parameter identification method based on multiobjective (MO) deterministic and non-deterministic optimization algorithms to compute the temperature distribution on transformer tank covers. Design/methodology/approach – The strategy for implementing the parameter identification process consists of three main steps. The first step is to define the most appropriate objective function, and the identification problem is solved for the chosen parameters using single-objective (SO) optimization algorithms. Then the sensitivity of the computational model to measurement error is assessed, and finally it is included as an additional objective function, making the identification problem an MO one. Findings – Computations with the identified/optimal parameters yield accurate results for a wide range of current values and different conductor arrangements. From the numerical solution of the temperature field, decisions on dimensions and materials can be taken to avoid overheating of transformer covers. Research limitations/implications – The accuracy of the model depends on its parameters, such as heat exchange coefficients and material properties, which are difficult to determine from formulae or from the literature. Thus the goal of the presented technique is to achieve the best possible agreement between measured and numerically calculated temperature values. Originality/value – Differing from previous works found in the literature, sensitivity to measurement error is considered in the parameter identification technique as an additional objective function. Thus, solutions less sensitive to measurement errors, at the expense of a degradation in accuracy, are identified by means of MO optimization algorithms.
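A minimal sketch of the three-step strategy follows, on a toy temperature model with hypothetical parameters: a single-objective fit first, then a measurement-error sensitivity proxy folded in through an ad hoc weighted sum. The paper's actual model, objective functions, and MO algorithms differ; everything below, including the scaling of the second objective, is illustrative.

```python
import numpy as np
from scipy.optimize import minimize

# Toy surrogate for the tank-cover temperature model, with hypothetical
# parameters c (loss coefficient) and h (cooling coefficient):
#   T(I) = T_amb + c * I**2 / (1 + h * I)
I_meas = np.array([200.0, 400.0, 600.0, 800.0])   # load current, A
T_meas = np.array([29.0, 40.5, 59.0, 84.5])       # measured temps, degC
T_AMB = 25.0

def model(p, I):
    c, h = p
    return T_AMB + c * I**2 / (1.0 + h * I)

def misfit(p):                   # objective 1: agreement with measurements
    return np.sum((model(p, I_meas) - T_meas) ** 2)

def sensitivity(p):              # objective 2: sensitivity to meas. error
    c, h = p
    # Jacobian of model outputs w.r.t. (c, h); by Gauss-Newton
    # linearization, the identified-parameter shift per unit of
    # measurement error scales with the pseudo-inverse norm.
    J = np.column_stack([I_meas**2 / (1 + h * I_meas),
                         -c * I_meas**3 / (1 + h * I_meas) ** 2])
    return np.linalg.norm(np.linalg.pinv(J))

# Step 1: single-objective identification.
p_so = minimize(misfit, x0=[1e-4, 1e-4], method="Nelder-Mead").x

# Steps 2-3: fold sensitivity in via a weighted sum (an ad hoc
# scalarization; the paper uses dedicated MO algorithms).
for w in (0.0, 0.5, 0.9):
    obj = lambda p: (1 - w) * misfit(p) + w * 1e4 * sensitivity(p)
    p_mo = minimize(obj, x0=p_so, method="Nelder-Mead").x
    print(f"w={w}: p={p_mo}, misfit={misfit(p_mo):.3f}, "
          f"sens={sensitivity(p_mo):.2e}")
```

Sweeping the weight w traces approximate Pareto-optimal parameter sets, from best fit (w = 0) toward least sensitivity to measurement error.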


2019 ◽  
Vol 11 (6) ◽  
Author(s):  
John Papayanopoulos ◽  
Kevin Webb ◽  
Jonathan Rogers

Abstract Unmanned aerial vehicles are increasingly being tasked to connect to payload objects or docking stations for the purposes of package transport or recharging. However, autonomous docking creates challenges in that the air vehicle must precisely position itself with respect to the dock, oftentimes in the presence of uncertain winds and measurement errors. This paper describes an autonomous docking mechanism comprising a static ring and actuated legs, coupled with an infrared tracking device for closed-loop docking maneuvers. The dock’s unique mechanical design enables precise passive positioning such that the air vehicle slides into a precise location and orientation in the dock from a wide range of entry conditions. This leads to successful docking in the presence of winds and sensor measurement errors. A closed-loop infrared tracking system is also described in which the vehicle tracks an infrared beacon located on the dock during the descent to landing. A detailed analysis is presented describing the interaction dynamics between the aircraft and the dock, and system parameters are optimized through the use of trade studies and Monte Carlo analysis with a three degree-of-freedom simulation model. Experimental results are presented demonstrating successful docking maneuvers of an autonomous air vehicle in both indoor and outdoor environments. These repeatable docking experiments verify the robustness and practical utility of the dock design for a variety of emerging applications.
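As a hedged illustration of the Monte Carlo trade-study idea, the sketch below samples wind drift and tracker error in a crude touchdown-offset model and reports capture probability against an assumed passive-capture radius; the parameters and the offset model itself are hypothetical stand-ins, not the paper's three degree-of-freedom simulation.

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical dock/vehicle parameters for a Monte Carlo capture study.
CAPTURE_RADIUS = 0.25     # m: offsets the passive ring/legs can funnel in
DESCENT_TIME = 4.0        # s: final descent duration
N_TRIALS = 20_000

def landing_offset(wind_std, sensor_std):
    """Crude touchdown-offset model: wind drift integrated over the
    descent plus a position error from the infrared tracker."""
    wind_drift = rng.normal(0.0, wind_std, size=(N_TRIALS, 2)) * DESCENT_TIME
    sensor_err = rng.normal(0.0, sensor_std, size=(N_TRIALS, 2))
    return np.linalg.norm(wind_drift + sensor_err, axis=1)

# Trade study: capture probability over a grid of disturbance levels.
for wind_std in (0.01, 0.03, 0.05):          # m/s equivalent drift rate
    for sensor_std in (0.02, 0.05, 0.10):    # m tracker error (1-sigma)
        p = np.mean(landing_offset(wind_std, sensor_std) < CAPTURE_RADIUS)
        print(f"wind={wind_std}, sensor={sensor_std}: capture prob={p:.3f}")
```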


1993 ◽  
Vol 264 (6) ◽  
pp. E902-E911 ◽  
Author(s):  
D. C. Bradley ◽  
G. M. Steil ◽  
R. N. Bergman

We introduce a novel technique for estimating measurement error in time courses and other continuous curves. This error estimate is used to reconstruct the original (error-free) curve. The measurement error of the data is initially assumed, and the data are smoothed with "Optimal Segments" such that the smooth curve misses the data points by an average amount consistent with the assumed measurement error. Thus the differences between the smooth curve and the data points (the residuals) are tentatively assumed to represent the measurement error. This assumption is checked by testing the residuals for randomness. If the residuals are nonrandom, it is concluded that they do not resemble measurement error, and a new measurement error is assumed. This process continues reiteratively until a satisfactory (i.e., random) group of residuals is obtained. In this case the corresponding smooth curve is taken to represent the original curve. Monte Carlo simulations of selected typical situations demonstrated that this new method ("OOPSEG") estimates measurement error accurately and consistently in 30- and 15-point time courses (r = 0.91 and 0.78, respectively). Moreover, smooth curves calculated by OOPSEG were shown to accurately recreate (predict) original, error-free curves for a wide range of measurement errors (2-20%). We suggest that the ability to calculate measurement error and reconstruct the error-free shape of data curves has wide applicability in data analysis and experimental design.
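A minimal sketch of the reiterative scheme follows, with a smoothing spline standing in for the "Optimal Segments" smoother and a Wald-Wolfowitz runs test as the randomness check; the acceptance threshold and the grid of candidate error levels are assumptions for illustration, not the published algorithm.

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

def runs_test_z(residuals):
    """z-statistic of the Wald-Wolfowitz runs test on residual signs;
    |z| < 2 is taken here as 'random enough'."""
    signs = residuals > 0
    n1, n2 = signs.sum(), (~signs).sum()
    runs = 1 + np.count_nonzero(signs[1:] != signs[:-1])
    mu = 2 * n1 * n2 / (n1 + n2) + 1
    var = (mu - 1) * (mu - 2) / (n1 + n2 - 1)
    return (runs - mu) / np.sqrt(var)

def estimate_error(t, y, sigma_grid):
    """Try candidate error levels from large to small; accept the first
    whose residuals pass the runs test, mimicking the reiterative scheme."""
    for sigma in sigma_grid:
        # With weights 1/sigma, s = len(t) makes the spline miss the
        # data points by about sigma on average.
        spl = UnivariateSpline(t, y, w=np.full_like(y, 1.0 / sigma), s=len(t))
        resid = y - spl(t)
        if abs(runs_test_z(resid)) < 2.0:
            return sigma, spl
    return sigma_grid[-1], spl

# Synthetic 30-point time course: smooth decay plus ~5% noise.
rng = np.random.default_rng(1)
t = np.linspace(0, 180, 30)
truth = 10 * np.exp(-t / 60.0) + 2
y = truth + rng.normal(0, 0.05 * truth.mean(), size=t.size)

sigma_hat, smooth = estimate_error(t, y, sigma_grid=np.linspace(1.0, 0.05, 20))
print(f"estimated measurement error ~ {sigma_hat:.2f}")
```

Oversmoothing leaves trended (nonrandom) residuals, so the loop stops near the largest assumed error whose residuals look like noise, and the corresponding smooth curve approximates the error-free curve.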


1998 ◽  
Vol 15 (2) ◽  
pp. 129-137
Author(s):  
Sylvia J. Hunt

Although Muslim Communities in the New Europe is long and complex, it is not obscure, and each of its sixteen chapters can be read as a separate entity. The contributors are seventeen academics from universities in various countries of Eastern and Western Europe, as well as the three editors, who are based at three English universities. A short preface is followed by the first chapter, which is also the first part of the book, appropriately titled "Themes and Puzzles." The remaining chapters examine selected countries individually in Eastern and Western Europe in parts II and III, respectively. Each chapter has helpful and clear endnotes, and a useful index is also included. Tables analyzing the Muslim populations in East European countries are given in chapter 2 and those of Belgium and The Netherlands in chapter 10.

In the Preface, the book is described as the "final outcome of a three-year project" to "produce a coherent comparative overview of ... the role and position of these Muslim communities." The material was gathered from two international conferences on the subject and from researchers throughout Europe. Professor Gerd Nonneman modestly states: "This volume cannot claim to be comprehensive, but ... it is hoped that it may contribute to a better understanding of the trends and dynamics involved, and provide the basis for further work."

Chapter 1 outlines the events leading up to the present general situation in the new Europe. The continent is divided into (1) Eastern Europe, where, after the collapse of Communism at the end of the 1980s, strong nationalist and religious feelings erupted; and (2) Western Europe, which, during a long economic recession, absorbed a sudden large influx of migrants from African and Asian countries suffering serious political and economic upheaval.

In parts II and III the contributors seek to answer a wide range of important questions concerning the relationship between Muslims and non-Muslims in Europe generally, and between Muslims and non-Muslim governments in particular. How significant is the influence of history, the current economy, the origins of the Muslims and the level of their adherence to Islam, local and central government policies, local customs, international relations, public opinion, and so on? How does the reaction of the younger generation of Muslims to their situation compare with that of their parents? Throughout the studies of the selected countries, the fear of the perceived loss of security and identity seems to be at the root of action and reaction by both Muslims and non-Muslims. How far can the minority and majority societies adapt to each other without either side losing its identity and security? Possible solutions to the problems of integrating Muslims into non-Muslim societies are suggested by some of the contributors.

Chapter 2 examines the links between religion and ethnicity in Eastern Europe, where Islam has been "an indigenous presence for centuries." Although Islam is independent of race, color, and language, "around the fringes of the Islamic world" it is the basis of the identity of certain groups within nationalities, such as the Bosnian Muslims and Bulgarian Pomaks. The contributors then tackle one of the puzzles, that of how to define ethnicity. They describe the current theories, which put varying emphasis on the objective elements of kinship, physical appearance, culture, and language, and the subjective elements, namely, the "feeling of community" and the "representations which the group has of itself" (p. 28) ...


2017 ◽  
Author(s):  
Zhikai Liang ◽  
Piyush Pandey ◽  
Vincent Stoerger ◽  
Yuhang Xu ◽  
Yumou Qiu ◽  
...  

Abstract Maize (Zea mays ssp. mays) is one of three crops, along with rice and wheat, responsible for more than half of all calories consumed around the world. Increasing the yield and stress tolerance of these crops is essential to meet the growing need for food. The cost and speed of plant phenotyping are currently the largest constraint on plant breeding efforts. Datasets linking new types of high-throughput phenotyping data collected from plants to the performance of the same genotypes under agronomic conditions across a wide range of environments are essential for developing new statistical approaches and computer-vision-based tools. A set of maize inbreds, primarily recently off-patent lines, was phenotyped using a high-throughput platform at the University of Nebraska-Lincoln. These lines have previously been subjected to high-density genotyping and scored for a core set of 13 phenotypes in field trials across 13 North American states in two years by the Genomes to Fields consortium. A total of 485 GB of image data, including RGB, hyperspectral, fluorescence, and thermal infrared photos, has been released. Correlations between image-based measurements and manual measurements demonstrated the feasibility of quantifying variation in plant architecture using image data. However, naive approaches to measuring traits such as biomass can introduce nonrandom measurement errors confounded with genotype variation. Analysis of hyperspectral image data demonstrated unique signatures from stem tissue. Integrating heritable phenotypes from high-throughput phenotyping data with field data from different environments can reveal previously unknown factors influencing yield plasticity.


2016 ◽  
Author(s):  
N. A. J. Schutgens ◽  
E. Gryspeerdt ◽  
N. Weigum ◽  
S. Tsyro ◽  
D. Goto ◽  
...  

Abstract. The spatial resolution of global climate models with interactive aerosol and that of the observations used to evaluate them are very different. Current models use grid-spacings of ∼200 km, while satellite observations of aerosol use so-called pixels of ∼10 km. Ground-site or airborne observations concern even smaller spatial scales. We study the errors incurred due to these different resolutions by aggregating high-resolution simulations (10 km grid-spacing) over either the large areas of global model grid-boxes ("perfect" model data) or the small areas corresponding to the pixels of satellite measurements or the field of view of ground sites ("perfect" observations). Our analysis suggests that instantaneous RMS differences between these perfect observations and perfect global models can easily amount to 30–160% for a range of observables like AOT (aerosol optical thickness), extinction, black carbon mass concentrations, PM2.5, number densities, and CCN (cloud condensation nuclei). These differences, due entirely to the different spatial sampling of models and observations, are often larger than measurement errors in real observations. Temporal averaging over a month of data reduces these differences more strongly for some observables (e.g. a three-fold reduction in the case of AOT) than for others (e.g. a two-fold reduction for surface black carbon concentrations), but significant RMS differences remain (10–75%). Note that this study ignores the issue of temporal sampling of real observations, which is likely to affect our present monthly error estimates. We examine several other strategies (e.g. spatial aggregation of observations, interpolation of model data) for reducing these differences and show their effectiveness. Finally, we examine the consequences for the use of flight campaign data in global model evaluation and show that significant biases may be introduced depending on the flight strategy used.
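The aggregation experiment is easy to mimic on synthetic data: the sketch below builds a high-resolution AOT field, averages it over coarse grid-boxes ("perfect" model) and samples one pixel per box ("perfect" observation), then reports the RMS relative difference due to spatial sampling alone. The field construction and box sizes are assumptions for illustration, not the study's simulations.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic high-resolution "truth": a 10 km AOT field on a 640x640 km
# domain, built as a smooth background plus pixel-scale noise (a crude
# stand-in for the 10 km simulations used in the study).
n = 64                                   # 64 x 64 pixels of ~10 km
x, y = np.meshgrid(np.linspace(0, 1, n), np.linspace(0, 1, n))
aot = 0.15 + 0.1 * np.sin(2 * np.pi * x) * np.cos(2 * np.pi * y)
aot += 0.05 * rng.standard_normal((n, n))
aot = np.clip(aot, 0.01, None)

BOX = 16   # 16 x 16 pixels ~ 160 km: one coarse model grid-box

# "Perfect model": average over each coarse grid-box.
coarse = aot.reshape(n // BOX, BOX, n // BOX, BOX).mean(axis=(1, 3))

# "Perfect observation": a single 10 km pixel sampled inside each box
# (here, the box centre), mimicking a satellite retrieval location.
obs = aot[BOX // 2 :: BOX, BOX // 2 :: BOX]

# Instantaneous RMS relative difference due to spatial sampling alone.
rms = np.sqrt(np.mean(((obs - coarse) / coarse) ** 2))
print(f"RMS difference from resolution mismatch: {100 * rms:.0f}%")
```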

