Cloud and cluster computing in uncertainty analysis of integrated flood models

2012
Vol 15 (1)
pp. 55-70
Author(s):
V. Moya Quiroga
I. Popescu
D. P. Solomatine
L. Bociort

There is increased awareness of the importance of flood management aimed at preventing human and material losses. A wide variety of numerical modelling tools have been developed to make decision-making more efficient and to better target management actions. Hydroinformatics takes a holistic, integrated approach to managing the information that propagates through models, and the analysis of uncertainty propagation through models is an important part of such studies. Popular approaches to uncertainty analysis typically involve some strategy of Monte Carlo sampling of the uncertain variables and/or parameters and running the model a large number of times, so that for complex river systems the procedure becomes very time-consuming. In this study the widely used modelling systems HEC-HMS, HEC-RAS and Sobek1D2D were applied to model the hydraulics of the Timis–Bega basin in Romania. We studied how flood inundation is influenced by uncertainties in the water levels of the reservoirs in the catchment and by uncertainties in the digital elevation model (DEM) used in the 2D hydraulic model. For this we used cloud computing (the Amazon Elastic Compute Cloud platform) and cluster computing based on a number of office desktop computers, and we showed their efficiency, which led to a considerable reduction of the computer time required for the uncertainty analysis of complex models. The experiments allowed us to associate probabilities with the various areas prone to flooding. We conclude that cloud and cluster computing offer an effective and efficient technology that makes uncertainty-aware modelling a practical possibility even with complex models.
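As a minimal sketch of the Monte Carlo workflow described above, the Python snippet below perturbs a reservoir level and a synthetic DEM, evaluates a crude stand-in for the 2D inundation model many times in parallel across local worker processes, and aggregates the runs into a per-cell flood probability. The model function, grid size, and parameter values are hypothetical placeholders, not the HEC-HMS/HEC-RAS/Sobek1D2D setup used in the study; the same pattern scales out to cloud or cluster nodes by replacing the local process pool with a distributed one.

```python
import numpy as np
from multiprocessing import Pool

# Synthetic 200 x 200 DEM in metres; a stand-in for the real terrain data.
BASE_DEM = np.random.default_rng(42).uniform(90.0, 110.0, size=(200, 200))

def run_inundation(task):
    """One Monte Carlo realisation: perturb the uncertain inputs, run the
    (surrogate) inundation model, return a binary flood map (1 = flooded)."""
    seed, level_mean, level_std, dem_error_std = task
    rng = np.random.default_rng(seed)
    level = rng.normal(level_mean, level_std)                  # reservoir level [m]
    dem = BASE_DEM + rng.normal(0.0, dem_error_std, BASE_DEM.shape)
    return (dem < level).astype(np.uint8)                      # crude surrogate model

if __name__ == "__main__":
    tasks = [(seed, 100.0, 1.5, 0.5) for seed in range(500)]
    with Pool() as pool:              # each worker plays the role of a cluster node
        flood_maps = pool.map(run_inundation, tasks)
    # Flood probability per cell = fraction of realisations that inundate it.
    flood_probability = np.mean(np.stack(flood_maps), axis=0)
    print("maximum cell flood probability:", flood_probability.max())
```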

Climate
2021
Vol 9 (9)
pp. 144
Author(s):  
Harleen Kaur
Mohammad Afshar Alam
Saleha Mariyam
Bhavya Alankar
Ritu Chauhan
...  

Recently, awareness of the significance of water management has risen as population growth and global warming increase and as economic activities and land use continue to stress our water resources. In addition, global water sustenance efforts are hampered by capital-intensive water treatment and water reclamation projects. In this paper, we study water bodies in order to predict the amount of water in each one from identifiable unique features and to assess how a shock to one feature affects the others. A comparative study was conducted among the parametric Vector Autoregression (VAR) and Vector Error Correction Model (VECM) and the Long Short-Term Memory (LSTM) model for determining the change in water level and water flow of water bodies. Besides computing orthogonalized impulse responses (OIR) and forecast error variance decompositions (FEVD) to explain the evolution of water levels and flow rates, the study shows the advantage of the VAR/VECM models over LSTM. It was found that for some water bodies the VAR model gave reliable results, whereas water bodies such as water springs gave mixed results with VAR/VECM.
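As a rough illustration of the VAR part of this comparison, the sketch below fits a two-variable VAR with statsmodels and extracts the orthogonalized impulse responses and the forecast error variance decomposition. The water-level and flow series are synthetic, and the simple differencing step only stands in for the stationarity and cointegration analysis that would decide between VAR and VECM in practice.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.api import VAR

# Synthetic daily series for one water body: level [m] and flow [m^3/s].
rng = np.random.default_rng(1)
n = 400
level = 10.0 + np.cumsum(rng.normal(0.0, 0.05, n))
flow = 5.0 + 0.8 * level + rng.normal(0.0, 0.3, n)
data = pd.DataFrame({"water_level": level, "water_flow": flow})

# Fit a VAR on first differences (a shortcut for the stationarity checks;
# a VECM would be used instead if the series turned out to be cointegrated).
model = VAR(data.diff().dropna())
results = model.fit(maxlags=8, ic="aic")

irf = results.irf(10)    # orthogonalized impulse responses over a 10-step horizon
fevd = results.fevd(10)  # forecast error variance decomposition over 10 steps
print(results.summary())
print(fevd.summary())
# irf.orth_irfs holds the OIR array; irf.plot(orth=True) plots the responses.
```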


Author(s):  
Andrea Notaristefano
Paolo Gaetani
Vincenzo Dossena
Alberto Fusetti

Abstract In the frame of the continuous improvement of performance and accuracy in the experimental testing of turbomachines, the uncertainty analysis of measurement instrumentation and techniques is of paramount importance. For this reason, since the beginning of the experimental activities at the Laboratory of Fluid Machines (LFM) at Politecnico di Milano (Italy), this issue has been addressed and different methodologies have been applied. This paper presents a comparison of the results obtained by applying two methods of measurement uncertainty quantification to two different aerodynamic pressure probes; sensor calibration, aerodynamic calibration and probe application are all considered. The first uncertainty evaluation method is the so-called “Uncertainty Propagation” method (UPM); the second is based on the “Monte Carlo” method (MCM). Two miniaturized pressure probes were selected for this investigation: a pneumatic 5-hole probe and a spherical fast-response aerodynamic pressure probe (sFRAPP), the latter applied as a virtual 4-hole probe. Since the sFRAPP is equipped with two miniaturized pressure transducers installed inside the probe head, a specific calibration procedure and a dedicated uncertainty analysis are required.
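To illustrate how the two methods compare, the sketch below propagates uncertainty through a simple, hypothetical measurement model (flow velocity computed from total pressure, static pressure, and density), first with a first-order GUM-style propagation (UPM) and then with Monte Carlo sampling (MCM). The model and the numerical values are illustrative only and are unrelated to the probes and calibrations treated in the paper.

```python
import numpy as np

# Hypothetical measurement model: flow velocity from total and static
# pressure and density, V = sqrt(2 * (p_tot - p_stat) / rho).
def velocity(p_tot, p_stat, rho):
    return np.sqrt(2.0 * (p_tot - p_stat) / rho)

# Nominal values and standard uncertainties (illustrative numbers).
p_tot, u_p_tot = 101650.0, 15.0      # Pa
p_stat, u_p_stat = 101325.0, 15.0    # Pa
rho, u_rho = 1.20, 0.01              # kg/m^3

# --- UPM: first-order (GUM) propagation with analytical sensitivities ---
V = velocity(p_tot, p_stat, rho)
dV_dpt = 1.0 / (rho * V)
dV_dps = -1.0 / (rho * V)
dV_drho = -V / (2.0 * rho)
u_V_upm = np.sqrt((dV_dpt * u_p_tot) ** 2 +
                  (dV_dps * u_p_stat) ** 2 +
                  (dV_drho * u_rho) ** 2)

# --- MCM: Monte Carlo propagation with normally distributed inputs ---
rng = np.random.default_rng(0)
N = 200_000
samples = velocity(rng.normal(p_tot, u_p_tot, N),
                   rng.normal(p_stat, u_p_stat, N),
                   rng.normal(rho, u_rho, N))
print(f"V = {V:.2f} m/s, u(UPM) = {u_V_upm:.3f}, u(MCM) = {samples.std(ddof=1):.3f}")
```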


Author(s):  
Seyede Fatemeh Ghoreishi
Mahdi Imani

Abstract Engineering systems are often composed of many subsystems that interact with each other. These subsystems, referred to as disciplines, contain many types of uncertainty and in many cases are feedback-coupled with each other. In designing these complex systems, one needs to assess their stationary behavior for the sake of stability and reliability. This requires system-level uncertainty analysis of the multidisciplinary system, which is often computationally intractable. To overcome this issue, techniques have been developed for capturing the stationary behavior of coupled multidisciplinary systems from the available data of the individual disciplines. The accuracy and convergence of the existing techniques depend on a large amount of data from all disciplines, which is not available in many practical problems. To this end, we have developed an adaptive methodology that adds the minimum possible number of samples from the individual disciplines to achieve accurate and reliable uncertainty propagation in coupled multidisciplinary systems. The proposed method models each discipline function via Gaussian process (GP) regression to derive a closed-form policy, which sequentially selects the new sample point that results in the highest uncertainty reduction over the distribution of the coupling design variables. The effectiveness of the proposed method is demonstrated in the uncertainty analysis of an aerostructural system and of a coupled numerical example.
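A minimal, hypothetical illustration of GP-based adaptive sampling for a single discipline is sketched below: starting from a handful of evaluations, it repeatedly adds the candidate coupling-variable value with the largest predictive standard deviation. This maximum-variance rule is only a simple stand-in for the closed-form uncertainty-reduction policy derived in the paper, and the discipline function and bounds are invented for the example.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

# Hypothetical discipline function y = f(coupling variable); expensive in practice.
def discipline(u):
    return np.sin(2.0 * u) + 0.1 * u ** 2

rng = np.random.default_rng(3)
u_train = rng.uniform(-3, 3, size=(4, 1))          # tiny initial data set
y_train = discipline(u_train).ravel()

u_grid = np.linspace(-3, 3, 400).reshape(-1, 1)    # candidate coupling values

# Sequentially add the sample with the largest predictive standard deviation
# (a simple surrogate for an uncertainty-reduction acquisition policy).
for it in range(6):
    gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), normalize_y=True)
    gp.fit(u_train, y_train)
    _, std = gp.predict(u_grid, return_std=True)
    u_next = u_grid[np.argmax(std)]
    u_train = np.vstack([u_train, u_next])
    y_train = np.append(y_train, discipline(u_next)[0])
    print(f"iteration {it}: new sample at u = {u_next[0]:+.2f}")
```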


2012
Vol 24 (10)
pp. 2351-2354
Author(s):  
宋天明 Song Tianming
李三伟 Li Sanwei
易荣清 Yi Rongqing
杨家敏 Yang Jiamin
江少恩 Jiang Shao’en

2020
Vol 12 (4)
pp. 705
Author(s):  
Zhaoning Ma
Guorui Jia
Michael E. Schaepman
Huijie Zhao

Quantitative uncertainty analysis is generally taken as an indispensable step in the calibration of a remote sensor. However, a full uncertainty propagation chain has not yet been established to set up the metrological traceability of surface reflectance retrieved from remotely sensed images. As a step toward this goal, we proposed an uncertainty analysis method for the two typical semi-empirical topographic correction models, i.e., the C and Minnaert models, according to the ‘Guide to the Expression of Uncertainty in Measurement’ (GUM). We studied the data link and analyzed the uncertainty propagation chain from the digital elevation model (DEM) and the at-sensor radiance data to the topographically corrected radiance. Using a set of Earth Observing-1 (EO-1) Hyperion data acquired over a rugged soil surface partly covered with snow, we obtained the spectral uncertainty characteristics of the topographically corrected radiance as well as its uncertainty components associated with all of the input quantities. Firstly, the relative uncertainty was larger for cover types with lower radiance values, for both the C and Minnaert corrections. Secondly, the trend of the at-sensor radiance contributed to a spectral feature whereby the uncertainty of the topographically corrected radiance was relatively large in bands below 1400 nm. Thirdly, the uncertainty components associated with the at-sensor radiance, slope, and aspect dominated the total combined uncertainty of the corrected radiance. Reducing the uncertainties of the at-sensor radiance, slope, and aspect is therefore a meaningful way to reduce the uncertainty of the corrected radiance and improve the data quality. We also offer suggestions for reducing the uncertainty of the slope and aspect data.
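As a sketch of a GUM-style propagation for a single pixel, the snippet below applies the C-correction to an at-sensor radiance value and combines first-order uncertainty contributions from the radiance, slope, and aspect using finite-difference sensitivity coefficients. The illumination geometry, the C parameter, and all input values and uncertainties are hypothetical; the paper's analysis additionally covers the Minnaert model and the full data link.

```python
import numpy as np

# C-correction of at-sensor radiance L for terrain illumination:
#   L_corr = L * (cos(theta_z) + c) / (cos(i) + c),
#   cos(i) = cos(theta_z) cos(s) + sin(theta_z) sin(s) cos(phi_a - a),
# with solar zenith theta_z, solar azimuth phi_a, slope s, aspect a.
THETA_Z, PHI_A, C = np.radians(55.0), np.radians(150.0), 0.4   # illustrative values

def c_correction(L, slope, aspect):
    cos_i = (np.cos(THETA_Z) * np.cos(slope) +
             np.sin(THETA_Z) * np.sin(slope) * np.cos(PHI_A - aspect))
    return L * (np.cos(THETA_Z) + C) / (cos_i + C)

# Input estimates and standard uncertainties (hypothetical pixel values):
# radiance, slope, aspect.
x = np.array([42.0, np.radians(18.0), np.radians(210.0)])
u = np.array([0.8, np.radians(1.5), np.radians(5.0)])

# GUM first-order propagation with finite-difference sensitivity coefficients.
contrib = []
for k in range(3):
    h = 1e-6 * max(abs(x[k]), 1.0)
    xp, xm = x.copy(), x.copy()
    xp[k] += h
    xm[k] -= h
    dfdx = (c_correction(*xp) - c_correction(*xm)) / (2.0 * h)
    contrib.append((dfdx * u[k]) ** 2)

u_combined = np.sqrt(sum(contrib))
print("corrected radiance:", c_correction(*x))
print("combined standard uncertainty:", u_combined)
print("relative contributions:", [c / sum(contrib) for c in contrib])
```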


2018
Vol 15 (2)
pp. 171-217
Author(s):  
Hendrik Schoukens

The concept of adaptive management is generally defined as a flexible decision-making process that can be adjusted in the face of uncertainties as the outcomes of management actions and other events become better understood. These experimental management strategies, which may grant permitting agencies more discretion to authorise economic developments, have become increasingly popular as tools to overcome deadlock scenarios in the context of the EU Nature Directives. One notable application is the Dutch Programmatic Approach to Nitrogen (Programma Aanpak Stikstof, PAS), which puts forward a more reconciliatory and integrated approach to permitting additional nitrogen emissions in the vicinity of Natura 2000 sites. The purpose of this paper is to use the Dutch PAS as a benchmark to explore the margins available within the EU Nature Directives for implementing more flexible adaptive management strategies. This paper argues that the Dutch PAS, especially in view of the immediate trade-off it provides between future restoration actions and ongoing harmful effects, appears to stand at odds with the substantive underpinning of the EU Nature Directives. As a result, its concrete application might be stalled through legal actions advocating a more restrictive approach to the authorisation of additional impacts on vulnerable EU-protected nature. It therefore remains highly doubtful whether the Dutch PAS can be presented as a textbook example of a genuinely sustainable management strategy within the context of EU environmental law.


Author(s):  
Markus Mäck
Michael Hanss

Abstract The early design stage of mechanical structures is often characterized by unknown or only partially known boundary conditions and environmental influences. Particularly in the case of safety-relevant components, such as the crumple zone structure of a car, these uncertainties must be appropriately quantified and accounted for in the design process. For this purpose, possibility theory provides a suitable tool for the modeling of incomplete information and for uncertainty propagation. However, the numerical propagation of uncertainty described by possibility theory comes with high computational costs: the repeated model evaluations it requires make the uncertainty analysis difficult to carry out when the model is complex and large-scale. Oftentimes, simplified and idealized models are used for the uncertainty analysis to speed up the simulation while accepting a loss of accuracy. The proposed multifidelity scheme for possibilistic uncertainty analysis instead takes advantage of the low cost of an inaccurate low-fidelity model and the accuracy of an expensive high-fidelity model. For this purpose, the functional dependency between the high- and low-fidelity models is exploited and captured in a possibilistic way. This results in a significant speedup of the uncertainty analysis while ensuring accuracy, since only a small number of expensive high-fidelity model evaluations are needed. The proposed approach is applied to an automotive car crash scenario in order to emphasize its versatility and applicability.
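The snippet below gives a much simplified, hypothetical flavour of a multifidelity possibilistic analysis: the dependency between a cheap, biased low-fidelity model and a few expensive high-fidelity evaluations is captured by a simple correction, and a triangular fuzzy input is then propagated through the corrected low-fidelity model by alpha-cut interval analysis. Both models and all numbers are invented, and the paper captures the high/low-fidelity dependency possibilistically rather than with a deterministic fit.

```python
import numpy as np

# Hypothetical models of a crash-box peak force vs. wall thickness t [mm]:
def high_fidelity(t):      # expensive model (only a few evaluations affordable)
    return 12.0 * t ** 1.8 + 3.0 * np.sin(4.0 * t)

def low_fidelity(t):       # cheap but biased model (many evaluations)
    return 10.5 * t ** 1.8

# Capture the functional dependency HF ~ g(LF) from a handful of HF runs
# (here: a simple linear fit in place of the possibilistic description).
t_hf = np.linspace(1.0, 3.0, 5)
coeff = np.polyfit(low_fidelity(t_hf), high_fidelity(t_hf), 1)

def corrected(t):
    return np.polyval(coeff, low_fidelity(t))

# Fuzzy (triangular) thickness: kernel 2.0 mm, support [1.5, 2.5] mm.
# Alpha-cut propagation: min/max of the corrected model on each interval.
for alpha in (0.0, 0.5, 1.0):
    lo = 1.5 + alpha * (2.0 - 1.5)
    hi = 2.5 - alpha * (2.5 - 2.0)
    t_grid = np.linspace(lo, hi, 200)          # cheap: only corrected LF evaluations
    y = corrected(t_grid)
    print(f"alpha = {alpha:.1f}: peak force in [{y.min():.2f}, {y.max():.2f}] kN")
```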


Author(s):  
Z. Xia
J. Tang

Uncertainty analysis is an important part of structural dynamic analysis in various applications. When a large, complex structure is under consideration, component mode synthesis (CMS) is frequently used for reduced-order numerical analysis. Even so, in some situations the computational cost of the repeated code runs required for uncertainty analysis remains high. Gaussian processes offer an emulation approach that enables fast sampling over a given parameter configuration space. However, both the low-fidelity data obtained by CMS and the corresponding samples obtained by Gaussian process emulation need to be assessed against high-fidelity data, which can be obtained but are usually very expensive. When an obvious bias exists in the low-fidelity data, two-level Gaussian processes are introduced to process the low- and high-fidelity data simultaneously and make more accurate predictions of the quantities of interest. CMS can serve not only to provide low-fidelity data but also to locate problematic areas of complex structures. Comparisons with results obtained by Monte Carlo sampling, performed using both a full finite element model and a CMS model, indicate that two-level Gaussian processes can be an efficient tool for emulating high-fidelity sampling with guaranteed accuracy.
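The sketch below illustrates the two-level (multi-fidelity) Gaussian-process idea in its simplest form: a first GP emulates many cheap low-fidelity (CMS-like) runs, and a scale factor plus a second GP trained on a few expensive high-fidelity (full finite-element-like) runs corrects its bias. The two analytic stand-in models, the sampling plan, and the least-squares scale estimate are assumptions made for the example, not the formulation used by the authors.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

# Hypothetical response level vs. a single design parameter x in [0, 1]:
def lofi(x):   # e.g. from a CMS reduced-order model (cheap, biased)
    return np.sin(8.0 * x) * x

def hifi(x):   # e.g. from the full finite element model (expensive, sparse)
    return 1.2 * lofi(x) + 0.3 * (x - 0.5)

x_lo = np.linspace(0, 1, 25).reshape(-1, 1)     # many cheap low-fidelity runs
x_hi = np.linspace(0, 1, 5).reshape(-1, 1)      # few expensive high-fidelity runs

# Level 1: GP emulator of the low-fidelity response.
gp_lo = GaussianProcessRegressor(RBF(0.1), normalize_y=True)
gp_lo.fit(x_lo, lofi(x_lo).ravel())

# Level 2: scale the level-1 prediction (factor rho) and model the remaining
# discrepancy at the high-fidelity points with a second GP.
mu_lo_at_hi = gp_lo.predict(x_hi)
rho = np.linalg.lstsq(mu_lo_at_hi.reshape(-1, 1), hifi(x_hi).ravel(), rcond=None)[0][0]
gp_delta = GaussianProcessRegressor(RBF(0.2), normalize_y=True)
gp_delta.fit(x_hi, hifi(x_hi).ravel() - rho * mu_lo_at_hi)

# Emulated high-fidelity prediction anywhere in the parameter space.
x_new = np.linspace(0, 1, 200).reshape(-1, 1)
y_pred = rho * gp_lo.predict(x_new) + gp_delta.predict(x_new)
print("max abs. emulation error:", np.max(np.abs(y_pred - hifi(x_new).ravel())))
```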


2017
Vol 2017
pp. 1-10
Author(s):  
Thomas Frosio
Thomas Bonaccorsi
Patrick Blaise

A nuclear data-based uncertainty propagation methodology is extended to enable the propagation of manufacturing/technological data (TD) uncertainties in a burn-up calculation problem, taking into account the correlations between the Boltzmann and Bateman terms. The methodology is applied to reactivity and power distributions in a Material Testing Reactor benchmark. Because of the inherently statistical behavior of manufacturing tolerances, a Monte Carlo sampling method is used to determine the output perturbations of the integral quantities. A global sensitivity analysis (GSA) is performed for each manufacturing parameter, which makes it possible to identify and rank the influential parameters whose tolerances need to be better controlled. We show that the overall impact of some TD uncertainties, such as uranium enrichment or fuel plate thickness, on the reactivity is negligible because the different core areas induce compensating effects on this global quantity. Local quantities such as power distributions, however, are strongly affected by the propagation of TD uncertainties. For the isotopic concentrations, no clear trend appears in the results.
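As a toy illustration of Monte Carlo propagation of manufacturing tolerances combined with a global sensitivity analysis, the snippet below samples three hypothetical technological parameters, evaluates a made-up reactivity response, and estimates first-order Sobol indices with the standard pick-freeze estimator. The response function, parameter names, and tolerance bounds are all invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical response: reactivity as a smooth function of three
# manufacturing parameters (enrichment, plate thickness, clad density).
def reactivity(x):
    e, t, d = x[..., 0], x[..., 1], x[..., 2]
    return 500.0 * e - 120.0 * t + 30.0 * d + 80.0 * e * t

N, k = 10_000, 3
# Tolerances sampled uniformly around nominal values (illustrative bounds).
lo = np.array([0.195, 1.27, 0.98])
hi = np.array([0.205, 1.33, 1.02])

A = lo + (hi - lo) * rng.random((N, k))
B = lo + (hi - lo) * rng.random((N, k))
fA, fB = reactivity(A), reactivity(B)
var = np.var(np.concatenate([fA, fB]), ddof=1)

# First-order Sobol indices via the pick-freeze (Saltelli) estimator.
for i, name in enumerate(["enrichment", "plate thickness", "clad density"]):
    AB = A.copy()
    AB[:, i] = B[:, i]
    S1 = np.mean(fB * (reactivity(AB) - fA)) / var
    print(f"S1[{name}] = {S1:.3f}")
```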

