Uncertainty, sensitivity analysis and the role of data based mechanistic modeling in hydrology

2006 ◽  
Vol 3 (5) ◽  
pp. 3099-3146 ◽  
Author(s):  
M. Ratto ◽  
P. C. Young ◽  
R. Romanowicz ◽  
F. Pappenberger ◽  
A. Saltelli ◽  
...  

Abstract. In this paper, we discuss the problem of calibration and uncertainty estimation for hydrologic systems from two points of view: a bottom-up, reductionist approach; and a top-down, data-based mechanistic (DBM) approach. The two approaches are applied to the modelling of the River Hodder catchment in North-West England. The bottom-up approach is developed using the TOPMODEL, whose structure is evaluated by global sensitivity analysis (GSA) in order to specify the most sensitive and important parameters; and the subsequent exercises in calibration and validation are carried out in the light of this sensitivity analysis. GSA helps to improve the calibration of hydrological models, making their properties more transparent and highlighting mis-specification problems. The DBM model provides a quick and efficient analysis of the rainfall-flow data, revealing important characteristics of the catchment-scale response, such as the nature of the effective rainfall nonlinearity and the partitioning of the effective rainfall into different flow pathways. TOPMODEL calibration takes more time, and it explains the flow data a little less well than the DBM model. The main differences in the modelling results are in the nature of the models and the flow decomposition they suggest. The "quick" (63%) and "slow" (37%) components of the decomposed flow identified in the DBM model show a clear partitioning of the flow, with the quick component apparently accounting for the effects of surface and near-surface processes, and the slow component arising from the displacement of groundwater into the river channel (base flow). On the other hand, the two output flow components in TOPMODEL have a different physical interpretation, with a single flow component (95%) accounting for both slow (subsurface) and fast (surface) dynamics, while the other, very small component (5%) is interpreted as an instantaneous surface runoff generated by rainfall falling on areas of saturated soil. The results of the exercise show that the two modelling methodologies have good synergy, combining well to produce a complete modelling approach that has the kinds of checks-and-balances required in practical data-based modelling of rainfall-flow systems. Such a combined approach also produces models that are suitable for different kinds of application. As such, the DBM model can provide an immediate vehicle for flow and flood forecasting; while TOPMODEL, suitably calibrated (and perhaps modified) in the light of the DBM and GSA results, immediately provides a simulation model with a variety of potential applications, in areas such as catchment management and planning.
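The flow decomposition described above can be pictured with a minimal sketch (Python), assuming a linear parallel-pathway structure rather than the authors' identified DBM transfer functions: effective rainfall is routed through a quick and a slow first-order store, with the steady-state split set to the reported 63%/37% partitioning. All function names, residence times, and the synthetic rainfall are illustrative assumptions.

```python
import numpy as np

def parallel_pathway_flow(effective_rain, dt=1.0,
                          tau_quick=5.0, tau_slow=60.0, frac_quick=0.63):
    """Route effective rainfall through two parallel first-order linear
    stores (a quick and a slow pathway) and return total flow plus the
    two components.  tau_* are residence times in the same units as dt;
    frac_quick is the steady-state share of flow taken by the quick
    pathway (0.63/0.37 mirrors the split reported for the DBM model).
    All numerical values here are illustrative, not calibrated."""
    a_q = np.exp(-dt / tau_quick)
    a_s = np.exp(-dt / tau_slow)
    n = len(effective_rain)
    q_quick, q_slow = np.zeros(n), np.zeros(n)
    for t in range(1, n):
        q_quick[t] = a_q * q_quick[t - 1] + (1 - a_q) * frac_quick * effective_rain[t]
        q_slow[t] = a_s * q_slow[t - 1] + (1 - a_s) * (1 - frac_quick) * effective_rain[t]
    return q_quick + q_slow, q_quick, q_slow

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    rain = rng.gamma(shape=0.3, scale=5.0, size=500)   # synthetic effective rainfall
    total, quick, slow = parallel_pathway_flow(rain)
    print(f"quick share of flow volume: {quick.sum() / total.sum():.2f}")
```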

2007 ◽  
Vol 11 (4) ◽  
pp. 1249-1266 ◽  
Author(s):  
M. Ratto ◽  
P. C. Young ◽  
R. Romanowicz ◽  
F. Pappenberger ◽  
A. Saltelli ◽  
...  

Abstract. In this paper, we discuss a joint approach to calibration and uncertainty estimation for hydrologic systems that combines a top-down, data-based mechanistic (DBM) modelling methodology and a bottom-up, reductionist modelling methodology. The combined approach is applied to the modelling of the River Hodder catchment in North-West England. The top-down DBM model provides a well identified, statistically sound yet physically meaningful description of the rainfall-flow data, revealing important characteristics of the catchment-scale response, such as the nature of the effective rainfall nonlinearity and the partitioning of the effective rainfall into different flow pathways. These characteristics are defined inductively from the data without prior assumptions about the model structure, other than that it lies within the generic class of nonlinear differential-delay equations. The bottom-up modelling is developed using the TOPMODEL, whose structure is assumed a priori and is evaluated by global sensitivity analysis (GSA) in order to specify the most sensitive and important parameters. The subsequent exercises in calibration and validation, performed with Generalized Likelihood Uncertainty Estimation (GLUE), are carried out in the light of the GSA and DBM analyses. This allows for the pre-calibration of the priors used for GLUE, in order to eliminate dynamical features of the TOPMODEL that have little effect on the model output and would be rejected at the structure identification phase of the DBM modelling analysis. In this way, the elements of meaningful subjectivity in the GLUE approach, which allow the modeler to interact in the modelling process by constraining the model to have a specific form prior to calibration, are combined with other more objective, data-based benchmarks for the final uncertainty estimation. GSA plays a major role in building a bridge between the hypothetico-deductive (bottom-up) and inductive (top-down) approaches and helps to improve the calibration of mechanistic hydrological models, making their properties more transparent. It also helps to highlight possible mis-specification problems. The results of the exercise show that the two modelling methodologies have good synergy, combining well to produce a complete joint modelling approach that has the kinds of checks-and-balances required in practical data-based modelling of rainfall-flow systems. Such a combined approach also produces models that are suitable for different kinds of application. As such, the DBM model considered in the paper is developed specifically as a vehicle for flow and flood forecasting (although the generality of DBM modelling means that a simulation version of the model could be developed if required); while TOPMODEL, suitably calibrated (and perhaps modified) in the light of the DBM and GSA results, immediately provides a simulation model with a variety of potential applications, in areas such as catchment management and planning.
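As a rough illustration of the GLUE step mentioned above (not the authors' implementation, and with a toy single-store model standing in for TOPMODEL), the sketch below samples parameters from uniform priors, scores each set with Nash-Sutcliffe efficiency as the informal likelihood, retains the behavioural sets, and extracts simple prediction bounds. Every name and prior range is an assumption for illustration only.

```python
import numpy as np

def linear_store(rain, tau, gain, dt=1.0):
    """Toy rainfall-flow model (a single first-order store); it stands
    in for the hydrological model being calibrated and is not TOPMODEL."""
    q = np.zeros_like(rain)
    a = np.exp(-dt / tau)
    for t in range(1, len(rain)):
        q[t] = a * q[t - 1] + (1 - a) * gain * rain[t]
    return q

def glue(rain, q_obs, n_samples=5000, threshold=0.6, seed=1):
    """Minimal GLUE sketch: Monte Carlo sampling from the priors,
    Nash-Sutcliffe efficiency as the informal likelihood measure,
    retention of behavioural parameter sets, and 5-95% bounds
    across the behavioural simulations."""
    rng = np.random.default_rng(seed)
    taus = rng.uniform(1.0, 50.0, n_samples)    # prior on residence time
    gains = rng.uniform(0.1, 1.5, n_samples)    # prior on gain
    var_obs = np.var(q_obs)
    sims, likelihoods = [], []
    for tau, gain in zip(taus, gains):
        q_sim = linear_store(rain, tau, gain)
        nse = 1.0 - np.mean((q_sim - q_obs) ** 2) / var_obs
        if nse > threshold:                     # behavioural parameter set
            sims.append(q_sim)
            likelihoods.append(nse)
    sims = np.array(sims)
    lower = np.percentile(sims, 5, axis=0)
    upper = np.percentile(sims, 95, axis=0)
    return np.array(likelihoods), lower, upper

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    rain = rng.gamma(shape=0.3, scale=5.0, size=300)
    q_obs = linear_store(rain, tau=10.0, gain=0.8) + rng.normal(0, 0.02, 300)
    likelihoods, lower, upper = glue(rain, q_obs)
    print(f"behavioural sets: {likelihoods.size}, "
          f"mean 90% band width: {np.mean(upper - lower):.3f}")
```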


Water ◽  
2021 ◽  
Vol 13 (12) ◽  
pp. 1682
Author(s):  
Yoonja Kang ◽  
Yeongji Oh

The interactive roles of zooplankton grazing (top-down) and nutrient (bottom-up) processes on phytoplankton distribution in a temperate estuary were investigated via dilution and nutrient addition experiments. The responses of size-fractionated phytoplankton and major phytoplankton groups, as determined by flow cytometry, were examined in association with zooplankton grazing and nutrient availability. The summer bloom was attributed to nanoplankton, and microplankton was largely responsible for the winter bloom, whereas the picoplankton biomass was relatively consistent throughout the sampling periods, except for the fall. The nutrient addition experiments illustrated that nanoplankton responded more quickly to phosphate than the other groups in the summer, whereas microplankton had a faster response to most nutrients in the winter. The dilution experiments indicated that the grazing mortality rates of eukaryotes were low compared to those of the other groups, whereas autotrophic cyanobacteria were more palatable to zooplankton than cryptophytes and eukaryotes. Our experimental results indicate that efficient escape from zooplankton grazing and fast response to nutrient availability synergistically caused the microplankton to bloom in the winter, whereas the bottom-up process (i.e., the phosphate effect) largely governed the nanoplankton bloom in the summer.
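The grazing mortality rates mentioned above typically come from the standard dilution-experiment regression (apparent growth rate versus the fraction of unfiltered, grazer-containing water); the sketch below, with entirely synthetic numbers, shows that calculation rather than the authors' actual incubation data or protocol.

```python
import numpy as np

def dilution_rates(dilution_fraction, chl_initial, chl_final, days=1.0):
    """Classic dilution-experiment regression: the apparent growth rate
    k = ln(C_final / C_initial) / t declines linearly with the fraction
    of unfiltered water D, k = mu - g * D, so the intercept estimates
    intrinsic growth mu and the slope magnitude estimates grazing
    mortality g (both per day)."""
    k = np.log(np.asarray(chl_final) / np.asarray(chl_initial)) / days
    slope, intercept = np.polyfit(dilution_fraction, k, 1)
    return intercept, -slope   # mu, g

if __name__ == "__main__":
    # synthetic example: true mu = 0.9 /d, true g = 0.5 /d
    D = np.array([0.2, 0.4, 0.6, 0.8, 1.0])
    c0 = np.full_like(D, 2.0)                      # initial chlorophyll (ug/L)
    c1 = c0 * np.exp((0.9 - 0.5 * D) * 1.0)        # after a 1-day incubation
    mu, g = dilution_rates(D, c0, c1)
    print(f"growth mu = {mu:.2f} /d, grazing g = {g:.2f} /d")
```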


Geophysics ◽  
2010 ◽  
Vol 75 (4) ◽  
pp. WA179-WA188 ◽  
Author(s):  
Alan Yusen Ley-Cooper ◽  
James Macnae ◽  
Andrea Viezzoli

Most airborne electromagnetic (AEM) data are processed using successive 1D approximations to produce stitched conductivity-depth sections. Because the current induced in the near surface by an AEM system preferentially circulates at some radial distance from a horizontal loop transmitter (sometimes called the footprint), the section plotted directly below a concentric transmitter-receiver system actually arises from currents induced in the vicinity rather than directly underneath. Detection of paleochannels as conduits for groundwater flow is a common geophysical exploration goal, where locally 2D approximations may be valid for an extinct riverbed or filled valley. Separate from effects of salinity, these paleochannels may be conductive if clay filled or resistive if sand filled and incised into a clay host. Because of the wide system footprint, using stitched 1D approximations or inversions may lead to misleading conductivity-depth images or sections. Near abrupt edges of an extensive conductive layer, the lateral falloff in AEM amplitudes tends to produce a drooping tail in a conductivity section, sometimes coupled with a local peak where the AEM system is maximally coupled to currents constrained to flow near the conductor edge. Once the width of a conductive ribbon model is less than the system footprint, small amplitudes result, and the source is imaged too deeply in the stitched 1D section. On the other hand, a narrow resistive gap in a conductive layer is incorrectly imaged as a drooping region within the layered conductor; below, the image falsely contains a blocklike poor conductor extending to depth. Additionally, edge-effect responses often are imaged as deep conductors with an inverted horseshoe shape. Incorporating lateral constraints in 1D AEM inversion (LCI) software, designed to improve resolution of continuous layers, more accurately recovers the depth to extensive conductors. The LCI, however, as with any AEM modeling methodology based on 1D forward responses, has limitations in detecting and imaging in the presence of strong 3D lateral discontinuities of dimensions smaller than the annulus of resolution. The isotropic, horizontally slowly varying layered-earth assumption devalues and limits AEM’s 3D detection capabilities. The need for smart, fast algorithms that account for 3D varying electrical properties remains.
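As a rough sketch of the lateral-constraint idea behind LCI (under the strong simplifying assumption of a fixed, linearised 1D forward operator; real AEM inversion is nonlinear and iterative, and this is not the published LCI code), adjacent soundings can be tied together with a roughness penalty and solved jointly:

```python
import numpy as np

def lci_linear(data, G, lam=1.0):
    """Laterally constrained inversion sketch for a line of soundings.

    data : (n_soundings, n_data) observed responses
    G    : (n_data, n_layers) linearised 1D forward operator, assumed
           identical for every sounding (a strong simplification)
    lam  : weight of the lateral smoothness constraint

    Minimises sum_i ||d_i - G m_i||^2 + lam * sum_i ||m_i - m_{i+1}||^2
    by assembling a single normal-equation system over all soundings,
    so each 1D model is pulled towards its neighbours."""
    n_s = data.shape[0]
    n_l = G.shape[1]
    N = n_s * n_l
    A = np.zeros((N, N))
    b = np.zeros(N)
    GtG, Gt = G.T @ G, G.T
    I = np.eye(n_l)
    for i in range(n_s):
        sl = slice(i * n_l, (i + 1) * n_l)
        A[sl, sl] += GtG
        b[sl] += Gt @ data[i]
        if i + 1 < n_s:                        # lateral tie to the next sounding
            sr = slice((i + 1) * n_l, (i + 2) * n_l)
            A[sl, sl] += lam * I
            A[sr, sr] += lam * I
            A[sl, sr] -= lam * I
            A[sr, sl] -= lam * I
    A += 1e-8 * np.eye(N)                      # tiny ridge for numerical stability
    return np.linalg.solve(A, b).reshape(n_s, n_l)

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    G = rng.standard_normal((8, 4))                 # hypothetical linear kernel
    true = np.ones((20, 4)); true[8:12, 2] = 5.0    # a paleochannel-like anomaly
    data = true @ G.T + rng.normal(0, 0.1, (20, 8))
    print(lci_linear(data, G, lam=0.5).round(2)[6:14])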


2018 ◽  
Vol 859 ◽  
pp. 516-542 ◽  
Author(s):  
Calum S. Skene ◽  
Peter J. Schmid

A linear numerical study is conducted to quantify the effect of swirl on the response behaviour of premixed lean flames to general harmonic excitation in the inlet, upstream of combustion. This study considers axisymmetric M-flames and is based on the linearised compressible Navier–Stokes equations augmented by a simple one-step irreversible chemical reaction. Optimal frequency response gains for both axisymmetric and non-axisymmetric perturbations are computed via a direct–adjoint methodology and singular value decompositions. The high-dimensional parameter space, containing perturbation and base-flow parameters, is explored by taking advantage of generic sensitivity information gained from the adjoint solutions. This information is then tailored to specific parametric sensitivities by first-order perturbation expansions of the singular triplets about the respective parameters. Valuable flow information, at a negligible computational cost, is gained by simple weighted scalar products between direct and adjoint solutions. We find that for non-swirling flows, a mode with azimuthal wavenumber $m=2$ is the most efficiently driven structure. The structural mechanism underlying the optimal gains is shown to be the Orr mechanism for $m=0$ and a blend of Orr and other mechanisms, such as lift-up, for other azimuthal wavenumbers. Further to this, velocity and pressure perturbations are shown to make up the optimal input and output, showing that the thermoacoustic mechanism is crucial in large energy amplifications. For $m=0$ these velocity perturbations are mainly longitudinal, but for higher wavenumbers azimuthal velocity fluctuations become prominent, especially in the non-swirling case. Sensitivity analyses are carried out with respect to the Mach number, Reynolds number and swirl number, and the accuracy of parametric gradients of the frequency response curve is assessed. The sensitivity analysis reveals that increases in Reynolds and Mach numbers yield higher gains, through a decrease in temperature diffusion. A rise in mean-flow swirl is shown to diminish the gain, with increased damping for higher azimuthal wavenumbers. This leads to a reordering of the most effectively amplified mode, with the axisymmetric ($m=0$) mode becoming the dominant structure at moderate swirl numbers.
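The optimal-gain and sensitivity machinery described above can be mimicked on a toy linear operator (nothing like the authors' linearised compressible reacting-flow equations; the operator and perturbation below are assumptions for illustration): the optimal harmonic gain is the leading singular value of the resolvent, and a first-order perturbation of that singular triplet gives a parametric sensitivity from quantities already in hand.

```python
import numpy as np

def optimal_gain(A, omega):
    """Leading singular triplet of the resolvent R = (i*omega*I - A)^{-1}:
    sigma is the optimal harmonic gain, u/v the output/input directions."""
    n = A.shape[0]
    R = np.linalg.inv(1j * omega * np.eye(n) - A)
    U, s, Vh = np.linalg.svd(R)
    return s[0], U[:, 0], Vh[0].conj()

def gain_sensitivity(A, omega, dA):
    """First-order change of the optimal gain under A -> A + dA, using
    only the leading singular triplet: d(sigma) = Re(u^H dR v) with
    dR = R dA R (no extra singular value decompositions needed)."""
    n = A.shape[0]
    R = np.linalg.inv(1j * omega * np.eye(n) - A)
    _, u, v = optimal_gain(A, omega)
    return np.real(u.conj() @ (R @ dA @ (R @ v)))

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    n = 50
    A = rng.standard_normal((n, n)) / np.sqrt(n) - 1.5 * np.eye(n)  # stable toy operator
    dA = 1e-6 * rng.standard_normal((n, n))                         # small "parameter" change
    s0, _, _ = optimal_gain(A, 1.0)
    s1, _, _ = optimal_gain(A + dA, 1.0)
    print("finite-difference change:", s1 - s0)
    print("first-order prediction :", gain_sensitivity(A, 1.0, dA))
```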


2021 ◽  
Vol 2021 ◽  
pp. 1-12
Author(s):  
Ahmad Gholami ◽  
Jassem Azizpoor ◽  
Elham Aflaki ◽  
Mehdi Rezaee ◽  
Khosro Keshavarz

Introduction. Rheumatoid arthritis (RA) is a chronic progressive inflammatory disease that causes joint destruction. The condition imposes a significant economic burden on patients and societies. The present study is aimed at evaluating the cost-effectiveness of Infliximab, Adalimumab, and Etanercept in treating rheumatoid arthritis in Iran. Methods. This is a cost-effectiveness economic evaluation in which a Markov model was used. The study was carried out on 154 patients with rheumatoid arthritis in Fars province taking Infliximab, Adalimumab, or Etanercept. The patients were selected through sampling. The cost data were collected from a community perspective, and the outcomes were the mean reduction in DAS-28 and QALYs. A cost data collection form and the EQ-5D questionnaire were used to collect the required data. The results were presented as an incremental cost-effectiveness ratio, and sensitivity analysis was used to measure the robustness of the study results. TreeAge Pro and Excel software were used to analyze the collected data. Results. The results showed that the mean costs and QALYs in the Infliximab, Adalimumab, and Etanercept arms were $79,518.33 and 12.34, $91,695.59 and 13.25, and $87,440.92 and 11.79, respectively. The one-way sensitivity analysis confirmed the robustness of the results. In addition, the results of the probabilistic sensitivity analysis (PSA) indicated that, on the cost-effectiveness acceptability curve, Infliximab was in the acceptance area and below the threshold in 77% of simulations. The scatter plot was in this area in 81% and 91% of simulations compared with Adalimumab and Etanercept, respectively, implying lower costs and higher effectiveness than the other two alternatives; the strategy was therefore more cost-effective. Conclusion. According to the results of this study, Infliximab was more cost-effective than the other two medications. Therefore, it is recommended that physicians use this medication as the priority in treating rheumatoid arthritis. It is also suggested that health policymakers consider the present study results in preparing treatment guidelines for RA.
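To make the comparison above concrete, here is a short worked calculation using only the mean costs and QALYs reported in the abstract; it screens for dominance and computes the remaining incremental cost-effectiveness ratio, but does not reproduce the underlying Markov model or the probabilistic sensitivity analysis.

```python
# Mean cost (USD) and QALYs per strategy, as reported in the abstract.
strategies = {
    "Infliximab": (79_518.33, 12.34),
    "Adalimumab": (91_695.59, 13.25),
    "Etanercept": (87_440.92, 11.79),
}

# A strategy is (strongly) dominated if another costs less and yields more QALYs.
for name, (cost, qaly) in sorted(strategies.items(), key=lambda kv: kv[1][0]):
    dominated = any(c2 <= cost and q2 > qaly
                    for n2, (c2, q2) in strategies.items() if n2 != name)
    print(f"{name:>10}: ${cost:>10,.2f}, {qaly:5.2f} QALY"
          + ("  <- dominated" if dominated else ""))

# ICER of the costlier non-dominated option relative to the cheaper one.
c_lo, q_lo = strategies["Infliximab"]
c_hi, q_hi = strategies["Adalimumab"]
icer = (c_hi - c_lo) / (q_hi - q_lo)
print(f"ICER, Adalimumab vs Infliximab: ${icer:,.2f} per QALY gained")
```

On these figures Etanercept is dominated by Infliximab, and whether the roughly $13,400 per QALY increment of Adalimumab over Infliximab is worth paying depends on the willingness-to-pay threshold, which is where the acceptability-curve result quoted above comes in.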


2020 ◽  
Author(s):  
Susana Barbosa ◽  
Mauricio Camilo ◽  
Carlos Almeida ◽  
José Almeida ◽  
Guilherme Amaral ◽  
...  

The study of the electrical properties of the atmospheric marine boundary layer is important as the effect of natural radioactivity in driving near-surface ionisation is significantly reduced over the ocean, and the concentration of aerosols is also typically lower than over continental areas, allowing a clearer examination of space-atmosphere interactions. Furthermore, cloud cover over the ocean is dominated by low-level clouds and most of the atmospheric charge lies near the Earth's surface, at low-altitude cloud tops.

The relevance of electric field observations in the marine boundary layer is enhanced by the fact that the electrical conductivity of the ocean air is clearly linked to global atmospheric pollution and aerosol content. The increase in aerosol pollution since the original observations made in the early 20th century by the survey ship Carnegie is a pressing and timely motivation for modern measurements of the atmospheric electric field in the marine boundary layer. Project SAIL (Space-Atmosphere-Ocean Interactions in the marine boundary Layer) addresses this challenge by means of a unique monitoring campaign on board the ship-rigged sailing ship NRP Sagres during its 2020 circumnavigation expedition.

The Portuguese Navy ship NRP Sagres departed from Lisbon on January 5th on a journey around the globe that will take 371 days. Two identical field mill sensors (CS110, Campbell Scientific) are installed on the mizzen mast, one at a height of 22 m and the other at a height of 5 m. A visibility sensor (SWS050, Biral) was also set up on the same mast in order to measure the extinction coefficient of the atmosphere and assess fair-weather conditions. Further observations include gamma radiation measured with a NaI(Tl) scintillator from 475 keV to 3 MeV, cosmic radiation up to 17 MeV, and atmospheric ionisation from a cluster ion counter (Airel). The 1 Hz measurements of the atmospheric electric field and from all the other sensors are linked to the same rigorous temporal reference frame and precise positioning through kinematic GNSS observations.

Here the first results of the SAIL project will be presented, focusing on the fair-weather electric field over the Atlantic. The observations obtained in the first three sections of the circumnavigation journey, including Lisbon (Portugal) - Tenerife (Spain), from 5 to 10 January, Tenerife - Praia (Cape Verde), from 13 to 19 January, and across the Atlantic from Cape Verde to Rio de Janeiro (Brazil), from January 22nd to February 14th, will be presented and discussed.


1991 ◽  
Vol 81 (3) ◽  
pp. 796-817
Author(s):  
Nitzan Rabinowitz ◽  
David M. Steinberg

Abstract We propose a novel multi-parameter approach for conducting seismic hazard sensitivity analysis. This approach allows one to assess the importance of each input parameter at a variety of settings of the other input parameters and thus provides a much richer picture than standard analyses, which assess each input parameter only at the default settings of the other parameters. We illustrate our method with a sensitivity analysis of seismic hazard for Jerusalem. In this example, we find several input parameters whose importance depends critically on the settings of other input parameters. This phenomenon, which cannot be detected by a standard sensitivity analysis, is easily diagnosed by our method. The multi-parameter approach can also be used in the context of a probabilistic assessment of seismic hazard that incorporates subjective probability distributions for the input parameters.
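The contrast drawn above can be illustrated with a small script (the hazard function is a made-up placeholder with an interaction, not the Jerusalem hazard model): a one-at-a-time analysis at default settings misses a parameter whose effect only appears at non-default settings of another parameter, whereas sweeping each parameter across a grid of the other settings exposes it.

```python
import itertools

def hazard(a, b, c):
    """Placeholder hazard model with an interaction: parameter a only
    matters when c is large (purely illustrative, not a seismic model)."""
    return 0.1 * b + a * max(c - 1.0, 0.0)

levels = {"a": [0.5, 1.0, 2.0], "b": [1.0, 3.0], "c": [0.5, 1.5]}
defaults = {"a": 1.0, "b": 1.0, "c": 0.5}

# One-at-a-time: vary one parameter, hold the others at their defaults.
for p, vals in levels.items():
    outs = [hazard(**{**defaults, p: v}) for v in vals]
    print(f"OAT  effect of {p}: {max(outs) - min(outs):.2f}")

# Multi-parameter: vary each parameter at EVERY setting of the others.
for p in levels:
    others = [n for n in levels if n != p]
    effects = []
    for combo in itertools.product(*(levels[n] for n in others)):
        setting = dict(zip(others, combo))
        outs = [hazard(**{**setting, p: v}) for v in levels[p]]
        effects.append(max(outs) - min(outs))
    print(f"grid effect of {p}: min {min(effects):.2f}, max {max(effects):.2f}")
```

In this toy case the one-at-a-time range for `a` is zero, while the grid sweep reveals a large effect of `a` whenever `c` takes its high value, which is exactly the kind of interaction the multi-parameter approach is designed to diagnose.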


Author(s):  
Sarbani Basu ◽  
William J. Chaplin

This chapter considers some of the fundamentals associated with the basic datasets from which the asteroseismic and other intrinsic stellar parameters are extracted (usually lightcurves of photometric observations or time series of Doppler velocity observations). In particular, the chapter looks at how the observational technique affects the amplitudes of the observed oscillations. It also introduces the other intrinsic stellar signals that manifest in the data, specifically those due to granulation (signatures of near-surface convection) and magnetic activity. The chapter's aim is to familiarize the reader with the basic content of the typical data and lay some important groundwork for the detailed presentations that follow in the next two chapters.


Author(s):  
Hiroki Fukushima

In this chapter, the author attempts to define the verbs used in describing Japanese sake taste by employing 1) a usage-based approach, 2) “encyclopedic semantics” rather than a “dictionary view,” and 3) sense-making theory, drawing on data from a “sake tasting description corpus” (approximately 120,000 words). The chapter selects eight high-frequency verbs (e.g., hirogaru ‘spread’) and defines their sense(s) in a bottom-up and abductive fashion, based on a score indicating the strength of co-occurrence between terms. The focus is on verbs of “Understanding” or “Interpretation”, that is, verbs that contribute to narrating the personal, individual story (contents) of the tasters. The study suggests that the verbs of understanding have senses related to [Timeline] and [Space]; on the other hand, they do not tend to collocate with [Movement] or, interestingly, [Structure], matching the tendency observed for adjectival nouns.
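The co-occurrence scoring mentioned above is not spelled out in the abstract, so the sketch below uses pointwise mutual information over a tiny invented English corpus simply to show the general shape of such a computation; the chapter's actual measure, corpus, and Japanese tokenisation are not reproduced.

```python
import math
from collections import Counter
from itertools import combinations

def pmi_scores(sentences):
    """Pointwise mutual information for word pairs co-occurring within a
    sentence: one common way of scoring co-occurrence strength."""
    word_counts, pair_counts = Counter(), Counter()
    n_sent = len(sentences)
    for sent in sentences:
        words = set(sent.split())
        word_counts.update(words)
        pair_counts.update(combinations(sorted(words), 2))
    scores = {}
    for (w1, w2), c in pair_counts.items():
        p_pair = c / n_sent
        p1, p2 = word_counts[w1] / n_sent, word_counts[w2] / n_sent
        scores[(w1, w2)] = math.log2(p_pair / (p1 * p2))
    return scores

corpus = [
    "the aroma spreads slowly on the palate",
    "a gentle sweetness spreads and lingers",
    "the acidity cuts sharply and finishes fast",
]
for pair, score in sorted(pmi_scores(corpus).items(), key=lambda kv: -kv[1])[:5]:
    print(pair, round(score, 2))
```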


2017 ◽  
pp. 681-691
Author(s):  
Nilanjan Ghosh ◽  
Somnath Hazra

This chapter compares two quantitative frameworks, namely Computable General Equilibrium (CGE) and econometric models, for studying the impacts of climate change on the human economy. As the chapter argues, the CGE framework is fraught with unrealistic assumptions and fails to capture the impacts of climate change and extreme events on ecosystem services. The econometric framework, on the other hand, can be customised and does not rest on the unrealistic assumptions of CGE. The advantages and disadvantages of the two methods are discussed critically in light of the overall objective of understanding sustainability science.

