Application of distributed sensitivity analysis to a model of turbulent open channel flow in a natural river channel

Sensitivity analysis is a widely applied tool for investigating the predictions of numerical models of environmental processes. This paper illustrates the importance of undertaking a sensitivity analysis that considers the spatially distributed nature of model predictions, rather than simply assessing the sensitivity of one or more model parameters that are assumed to represent the distributed behaviour of the system. A spatially distributed sensitivity analysis is applied to the output of a distributed model of turbulent river flow, used to simulate the flow processes in a natural river channel bifurcation. Examples are provided for a single input parameter, illustrating the importance of sensitivity analysis for model assessment and error analysis. The distributed nature of the analysis reveals the importance of spatial feedbacks in environmental systems, which more traditional approaches to sensitivity analysis cannot capture.

2020, Vol 13 (10), pp. 4691-4712
Author(s): Chia-Te Chien, Markus Pahlow, Markus Schartau, Andreas Oschlies

Abstract. We analyse 400 perturbed-parameter simulations for two configurations of an optimality-based plankton–ecosystem model (OPEM), implemented in the University of Victoria Earth System Climate Model (UVic-ESCM), using a Latin hypercube sampling method for setting up the parameter ensemble. A likelihood-based metric is introduced for model assessment and selection of the model solutions closest to observed distributions of NO3−, PO43−, O2, and surface chlorophyll a concentrations. The simulations closest to the data with respect to our metric exhibit very low rates of global N2 fixation and denitrification, indicating that in order to achieve rates consistent with independent estimates, additional constraints have to be applied in the calibration process. For identifying the reference parameter sets, we therefore also consider the model's ability to represent current estimates of water-column denitrification. We employ our ensemble of model solutions in a sensitivity analysis to gain insights into the importance and role of individual model parameters as well as correlations between various biogeochemical processes and tracers, such as POC export and the NO3− inventory. Global O2 varies by a factor of 2 and NO3− by more than a factor of 6 among all simulations. Remineralisation rate is the most important parameter for O2, which is also affected by the subsistence N quota of ordinary phytoplankton (Q^N_0,phy) and the zooplankton maximum specific ingestion rate. Q^N_0,phy is revealed as a major determinant of the oceanic NO3− pool. This indicates that unravelling the driving forces of variations in phytoplankton physiology and elemental stoichiometry, which are tightly linked via Q^N_0,phy, is a prerequisite for understanding the marine nitrogen inventory.
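The ensemble-construction step described above can be sketched with SciPy's quasi-Monte Carlo module. The parameter names and bounds below are hypothetical placeholders for illustration, not the actual OPEM configuration:

```python
import numpy as np
from scipy.stats import qmc

# Hypothetical bounds for three parameters (illustration only, not the OPEM set)
bounds = {
    "remineralisation_rate": (0.01, 0.2),   # 1/day
    "subsistence_N_quota":   (0.01, 0.05),  # mol N (mol C)^-1
    "max_ingestion_rate":    (0.5, 5.0),    # 1/day
}

def lhs_ensemble(bounds, n_samples, seed=0):
    """Latin hypercube sample scaled to the given parameter bounds."""
    sampler = qmc.LatinHypercube(d=len(bounds), seed=seed)
    unit = sampler.random(n=n_samples)        # stratified points in [0, 1)^d
    lo = [b[0] for b in bounds.values()]
    hi = [b[1] for b in bounds.values()]
    return qmc.scale(unit, lo, hi)            # shape (n_samples, d)

ensemble = lhs_ensemble(bounds, n_samples=400)
```

Each of the 400 rows is one candidate parameter set; the stratification guarantees that every marginal range is covered evenly, which is the point of Latin hypercube over plain random sampling.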


2020
Author(s): Lucie Pheulpin, Vito Bacchi

<p>Hydraulic models are increasingly used to assess flooding hazard. However, all numerical models are affected by uncertainties related to model parameters, which can be quantified through Uncertainty Quantification (UQ) and Global Sensitivity Analysis (GSA). Traditional UQ and GSA methods treat the input parameters of numerical models as independent, which is rarely the case in practice. The objective of this work is to carry out UQ and GSA with dependent inputs and to compare different methodologies. To our knowledge, no such application exists in the field of 2D hydraulic modelling.</p><p>First, the uncertain parameters of the hydraulic model are classified into groups of dependent parameters. To this end, it is necessary to define the copulas that best represent these groups. Finally, UQ and GSA based on copulas are performed. The proposed methodology is applied to a large-scale 2D hydraulic model of the Loire River. However, as the model is computationally expensive, a meta-model was used instead of the initial model. We compared the results from the traditional UQ and GSA methods (<em>i.e.</em> neglecting the dependencies between inputs) with those from the new copula-based methods. The results show that the dependence between inputs should not always be neglected in UQ and GSA.</p>
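The copula-based sampling idea can be illustrated with a Gaussian copula: correlated standard normals are mapped to uniforms and then to arbitrary marginals. The correlation value and the roughness-type marginals below are purely illustrative assumptions, not the Loire model's actual inputs:

```python
import numpy as np
from scipy import stats

def gaussian_copula_sample(corr, marginals, n, seed=0):
    """Dependent inputs via a Gaussian copula: correlated standard normals
    are mapped to uniforms, then to the target marginal distributions."""
    rng = np.random.default_rng(seed)
    L = np.linalg.cholesky(corr)
    z = rng.standard_normal((n, len(marginals))) @ L.T  # rows ~ N(0, corr)
    u = stats.norm.cdf(z)                               # uniforms sharing the copula
    return np.column_stack([m.ppf(u[:, i]) for i, m in enumerate(marginals)])

# Illustrative only: two correlated roughness-type coefficients,
# not the actual inputs of the Loire River model
corr = np.array([[1.0, 0.8],
                 [0.8, 1.0]])
marginals = [stats.uniform(0.02, 0.03),    # support [0.02, 0.05]
             stats.uniform(0.025, 0.035)]  # support [0.025, 0.06]
sample = gaussian_copula_sample(corr, marginals, n=5000)
```

The resulting sample keeps the prescribed marginals while reproducing the dependence structure, which is exactly what the traditional independence assumption discards.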


Water, 2018, Vol 10 (10), pp. 1339
Author(s): Mun-Ju Shin, Yun Choi

The hydrological model assessment and development (hydromad) package is an R-based toolkit that can be applied to simulate hydrological models and optimize their parameters. As hydromad is incompatible with hydrological models outside the package, the parameters of such models cannot be optimized directly. Hence, we propose a method for optimizing hydrological-model parameters by combining the executable (EXE) file of the hydrological model with the shuffled complex evolution (SCE) algorithm provided by hydromad. A physically based, spatially distributed, grid-based rainfall–runoff model (GRM) was employed. After the GRM parameters were calibrated, the model's performance was found to be reasonable. Thus, hydromad can be used to optimize the parameters of hydrological models outside the package, provided the EXE file of the model is available. The time required to optimize the parameters depends on the type of event, even for the same catchment area.
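The wrapper pattern described above can be sketched as follows. This is a minimal Python illustration, not the R/hydromad implementation: a toy linear reservoir stands in for the GRM executable, and SciPy's differential evolution stands in for hydromad's SCE algorithm as the global optimizer:

```python
import numpy as np
from scipy.optimize import differential_evolution

rain = np.array([0.0, 5.0, 20.0, 10.0, 2.0, 0.0, 0.0, 0.0, 1.0, 0.0])

def toy_model(k):
    """Stand-in for the external model: a single linear reservoir.
    In the real setup this function would write a parameter file, invoke
    the model EXE with subprocess.run, and parse the simulated flows."""
    flows, storage = [], 0.0
    for p in rain:
        storage += p
        out = k * storage
        storage -= out
        flows.append(out)
    return np.array(flows)

obs = toy_model(0.3)  # synthetic "observations" with known parameter k = 0.3

def neg_nse(theta):
    """Objective for the optimizer: negative Nash-Sutcliffe efficiency."""
    sim = toy_model(theta[0])
    return -(1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2))

# differential_evolution stands in here for hydromad's SCE-UA
result = differential_evolution(neg_nse, bounds=[(0.01, 0.99)], seed=1)
```

The key point is that the optimizer only ever sees a scalar objective; the model behind it can be any external program, as the abstract argues.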


1994, Vol 25 (1-2), pp. 1-24
Author(s): R. Kirnbauer, G. Blöschl, D. Gutknecht

Traditionally, snowmelt modelling has been governed by the operational need for runoff forecasts. Parsimony in terms of model complexity and data requirements was a major concern. More recently, the increased importance of analyzing environmental problems and extreme conditions has motivated the development of distributed snow models. Unfortunately, the use of such models is limited by a number of factors, including a) the extreme heterogeneity of the hydrologic environment, b) the mismatch of scales between observed variables and model state variables, c) the large number of model parameters, and d) the observability/testability problem. This paper discusses the implications of these constraints for the use of site- and catchment-scale concepts, regionalisation techniques, and calibration methods. In particular, the point is made that in many cases model parameters are poorly defined or not unique when optimized on the basis of runoff data. Snow cover depletion patterns are shown to be vastly superior to runoff data for discriminating between alternative model assumptions. The patterns are capable of addressing individual model components representing snow deposition and albedo, whereas the respective parameters are highly intercorrelated in terms of catchment runoff. The paper concludes that site-scale models of snow cover processes are fairly advanced, but much is left to be done at the catchment scale. Specifically, more emphasis needs to be directed towards measuring and representing spatial variability in catchments, as well as towards spatially distributed model evaluation.


Author(s): B. Arheimer, G. Lindström

Abstract. The most radical anthropogenic impact on water systems in Sweden originates from the years 1900–1970, when the electricity network was developed in the country and almost all rivers were regulated. The construction of dams and changes in water flow caused problems for ecosystems. Therefore, when implementing the EU Water Framework Directive (WFD), hydro-morphological indicators and targets were developed for rivers and lakes to achieve good ecological potential. The hydrological regime is one such indicator. To understand the change in flow regime, we quantified the hydropower impact on river flow across Sweden by using the S-HYPE model and observations. The results show that the average redistribution of water during a year due to regulation is 19 % of the total discharge from Sweden. A distinct impact was found in seasonal flow patterns and flow duration curves. Moreover, we quantified the model's skill in predicting hydropower impact on flow. The median NSE for simulating the change in flow regime was 0.71 for the eight dams studied. Results from the spatially distributed model are available for 37 000 sub-basins across the country and will be used by the Swedish water authorities for reporting hydro-morphological indicators to the EU and for guiding the allocation of river restoration measures.
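The model-skill metric quoted above, the Nash-Sutcliffe efficiency (NSE), is straightforward to compute. A minimal sketch with made-up discharge values (not the S-HYPE data):

```python
import numpy as np

def nse(sim, obs):
    """Nash-Sutcliffe efficiency: 1 is a perfect fit; 0 means the
    simulation is no better than the mean of the observations."""
    sim = np.asarray(sim, dtype=float)
    obs = np.asarray(obs, dtype=float)
    return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - np.mean(obs)) ** 2)

# Invented observed and simulated discharge series for illustration
obs = np.array([1.0, 3.0, 2.0, 5.0, 4.0])
sim = np.array([1.2, 2.7, 2.1, 4.6, 4.3])
score = nse(sim, obs)  # ≈ 0.961
```

A reported median NSE of 0.71 across dams therefore means the regulated-flow simulations explain roughly 70 % of the observed variance relative to the climatological mean.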


2020
Author(s): Gerardo Zegers, Pablo A. Mendoza, Alex Garces, Santiago Montserrat

Abstract. Over the past decades, several numerical models have been developed to understand, simulate and predict debris flow events. Typically, these models simplify the complex interactions between water and solids using a single-phase approach and different rheological models to represent flow resistance. In this study, we perform a sensitivity analysis on the parameters of a debris flow numerical model (FLO-2D) for a suite of relevant variables (i.e., maximum flood area, maximum flow velocity, and deposit volume). Our aims are to (i) examine the degree of model overparameterization, and (ii) assess the effectiveness of observational constraints to improve parameter identifiability. We use the Distributed Evaluation of Local Sensitivity Analysis (DELSA) method, which is a hybrid local-global technique. Specifically, we analyze two creeks in northern Chile that were affected by debris flows on March 25, 2015. Our results show that SD and β1 – a parameter related to viscosity – exhibit the largest sensitivities. Further, our results demonstrate that equifinality is present in FLO-2D, and that final deposited volume and maximum flood area contain considerable information for identifying model parameters.
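DELSA combines local, derivative-based sensitivities evaluated at many globally sampled points. A minimal sketch of the idea follows; the linear test function, sample sizes and prior variances are invented for illustration and have nothing to do with the FLO-2D setup:

```python
import numpy as np

def delsa(f, samples, prior_var, h=1e-6):
    """DELSA-style indices: finite-difference local derivatives at each
    globally sampled point, weighted by prior parameter variances and
    normalised so the indices at each point sum to one."""
    n, d = samples.shape
    indices = np.empty((n, d))
    for i, x in enumerate(samples):
        fx = f(x)
        grad = np.empty(d)
        for j in range(d):
            xp = x.copy()
            xp[j] += h
            grad[j] = (f(xp) - fx) / h
        contrib = grad ** 2 * prior_var   # first-order variance contributions
        indices[i] = contrib / contrib.sum()
    return indices

rng = np.random.default_rng(0)
samples = rng.uniform(0.0, 1.0, size=(100, 3))
f = lambda x: 4.0 * x[0] + x[1]          # x2 is inert by construction
S = delsa(f, samples, prior_var=np.full(3, 1.0 / 12.0))
```

Because the indices are computed per sample point, their spread across the parameter space is what reveals equifinality: a parameter that dominates in one region may be irrelevant in another.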


2013, Vol 17 (12), pp. 5109-5125
Author(s): J. D. Herman, J. B. Kollat, P. M. Reed, T. Wagener

Abstract. Distributed watershed models are now widely used in practice to simulate runoff responses at high spatial and temporal resolutions. Counter to this purpose, diagnostic analyses of distributed models currently aggregate performance measures in space and/or time and are thus disconnected from the models' operational and scientific goals. To address this disconnect, this study contributes a novel approach for computing and visualizing time-varying global sensitivity indices for spatially distributed model parameters. The high-resolution model diagnostics employ the method of Morris to identify evolving patterns in dominant model processes at sub-daily timescales over a six-month period. The method is demonstrated on the United States National Weather Service's Hydrology Laboratory Research Distributed Hydrologic Model (HL-RDHM) in the Blue River watershed, Oklahoma, USA. Three hydrologic events are selected from within the six-month period to investigate the patterns in spatiotemporal sensitivities that emerge as a function of forcing patterns as well as wet-to-dry transitions. Events with similar magnitudes and durations exhibit significantly different performance controls in space and time, indicating that the diagnostic inferences drawn from representative events will be heavily biased by the a priori selection of those events. By contrast, this study demonstrates high-resolution time-varying sensitivity analysis, requiring no assumptions regarding representative events and allowing modelers to identify transitions between sets of dominant parameters or processes a posteriori. The proposed approach details the dynamics of parameter sensitivity in nearly continuous time, providing critical diagnostic insights into the underlying model processes driving predictions. Furthermore, the approach offers the potential to identify transition points between dominant parameters and processes in the absence of observations, such as under nonstationarity.
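The method of Morris screens parameters with one-at-a-time "elementary effects". A compact sketch of the sampling scheme on a toy response function (not the HL-RDHM model, and simplified relative to the trajectory design used in practice):

```python
import numpy as np

def morris_screening(f, d, r=50, delta=0.1, seed=0):
    """Randomised one-at-a-time sampling: elementary effect of each
    parameter at r base points; mu* (mean |effect|) ranks importance,
    sigma flags non-linearity and interactions."""
    rng = np.random.default_rng(seed)
    ee = np.empty((r, d))
    for i in range(r):
        x = rng.uniform(0.0, 1.0 - delta, size=d)   # keep x + delta inside [0, 1]
        fx = f(x)
        for j in range(d):
            xp = x.copy()
            xp[j] += delta
            ee[i, j] = (f(xp) - fx) / delta
    return np.abs(ee).mean(axis=0), ee.std(axis=0)  # mu*, sigma

# Toy response: strong linear effect of x0, mild non-linear effect of x1, x2 inert
f = lambda x: 5.0 * x[0] + 0.5 * x[1] ** 2
mu_star, sigma = morris_screening(f, d=3)
```

Repeating this screen in a moving time window, parameter by parameter and cell by cell, is essentially what the time-varying analysis in the abstract does at scale.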


2009, Vol 13 (4), pp. 503-517
Author(s): W. Castaings, D. Dartus, F.-X. Le Dimet, G.-M. Saulnier

Abstract. Variational methods are widely used for the analysis and control of computationally intensive spatially distributed systems. In particular, the adjoint state method enables a very efficient calculation of the derivatives of an objective function (response function to be analysed or cost function to be optimised) with respect to model inputs. In this contribution, it is shown that variational methods hold considerable potential for distributed catchment-scale hydrology. A distributed flash flood model, coupling kinematic wave overland flow and Green-Ampt infiltration, is applied to a small catchment of the Thoré basin and used as a relatively simple (synthetic observations) but didactic application case. It is shown that forward and adjoint sensitivity analysis provide a local but extensive insight into the relation between the assigned model parameters and the simulated hydrological response. Spatially distributed parameter sensitivities can be obtained for a very modest computational effort (about six times the computing time of a single model run), and the singular value decomposition (SVD) of the Jacobian matrix provides an interesting perspective for the analysis of the rainfall-runoff relation. For the estimation of model parameters, adjoint-based derivatives were found to be exceedingly efficient in driving a bound-constrained quasi-Newton algorithm. The reference parameter set is retrieved independently of the optimization initial condition when the very common dimension reduction strategy (i.e. scalar multipliers) is adopted. Furthermore, the sensitivity analysis results suggest that most of the variability in this high-dimensional parameter space can be captured with a few orthogonal directions. A parametrization based on the SVD leading singular vectors was found very promising, but it should be combined with another regularization strategy in order to prevent overfitting.
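The SVD-based dimension reduction mentioned above can be illustrated on a synthetic Jacobian built from a few dominant modes. All dimensions and values below are invented for illustration, not taken from the flash flood model:

```python
import numpy as np

# Synthetic Jacobian of simulated discharge w.r.t. distributed parameters:
# 50 time steps x 200 grid-cell parameters, generated from three dominant
# modes plus weak noise, so its effective rank is low by construction.
rng = np.random.default_rng(0)
modes = rng.standard_normal((3, 200))
weights = rng.standard_normal((50, 3)) * np.array([10.0, 3.0, 1.0])
J = weights @ modes + 0.01 * rng.standard_normal((50, 200))

U, s, Vt = np.linalg.svd(J, full_matrices=False)
explained = np.cumsum(s ** 2) / np.sum(s ** 2)
k = int(np.searchsorted(explained, 0.999) + 1)  # directions for 99.9 % of variability

# Low-rank reconstruction from the k leading singular vectors
J_k = (U[:, :k] * s[:k]) @ Vt[:k]
```

A few leading right-singular vectors then span the directions in parameter space that actually influence the response, which is the rationale for the SVD-based parametrization discussed in the abstract.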

