Hydrological Modelling in Data Sparse Environment: Inverse Modelling of a Historical Flood Event

Water ◽  
2020 ◽  
Vol 12 (11) ◽  
pp. 3242
Author(s):  
András Bárdossy ◽  
Faizan Anwar ◽  
Jochen Seidel

We dealt with a rather frequent and difficult situation in modelling extreme floods, namely, model output uncertainty in data-sparse regions. A historical extreme flood event was chosen to illustrate the challenges involved. Our aim was to understand what the causes might have been and, specifically, to show how input and model parameter uncertainties affect the output. For this purpose, a conceptual model was calibrated and validated on a recent, data-rich time period. The resulting model parameters were used to model the historical event, which produced a rather poor simulated hydrograph. Given this poor performance, a spatial simulation technique was used to invert the model for precipitation. Constraints, such as honouring the precipitation values at historical observation locations, reproducing the correct spatial structure, and following the observed regional distributions, were used to generate realistic precipitation fields. Results showed that the inverted precipitation improved the performance significantly, even across many different model parameter sets. We conclude that when modelling under data-sparse conditions, both model input and parameter uncertainties have to be dealt with simultaneously to obtain meaningful results.
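The inversion idea can be illustrated with a toy sketch. A hypothetical single linear reservoir stands in for the authors' conceptual model, and a precipitation series is recovered by constrained random search so that the simulated hydrograph matches the observed one; the non-negativity constraint is a simple stand-in for the spatial and distributional constraints used in the study.

```python
import random

def linear_reservoir(precip, k=0.5):
    """Toy conceptual model: a single linear reservoir, Q = k * S."""
    storage, q = 0.0, []
    for p in precip:
        storage += p
        out = k * storage
        storage -= out
        q.append(out)
    return q

def sse(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

# "Observed" hydrograph produced by a hidden precipitation series.
true_p = [0.0, 5.0, 12.0, 3.0, 0.0, 0.0]
q_obs = linear_reservoir(true_p)

# Invert the model for precipitation: coordinate-wise random search,
# accepting a perturbation only if it improves the hydrograph fit and
# clipping candidates at zero (the constraint).
random.seed(42)
p = [2.0] * len(true_p)                      # flat first guess
err = sse(linear_reservoir(p), q_obs)
for _ in range(50000):
    trial = p[:]
    i = random.randrange(len(trial))
    trial[i] = max(0.0, trial[i] + random.gauss(0.0, 1.0))
    e = sse(linear_reservoir(trial), q_obs)
    if e < err:
        p, err = trial, e
```

After many accepted-if-better perturbations the recovered series reproduces the observed hydrograph closely; the study applies the same idea with spatially structured simulation rather than independent perturbations.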

2017 ◽  
Author(s):  
Maurizio Mazzoleni ◽  
Vivian Juliette Cortes Arevalo ◽  
Uta Wehn ◽  
Leonardo Alfonso ◽  
Daniele Norbiato ◽  
...  

Abstract. Accurate flood predictions are essential to reduce risk and damage over large urbanized areas. To improve prediction capabilities, hydrological measurements derived from traditional physical sensors are integrated in real time into mathematical models. Recently, traditional sensors have been complemented with low-cost social sensors. However, measurements derived from social sensors (i.e. crowdsourced observations) can be more spatially distributed but less accurate. In this study, we assess the usefulness for model performance of assimilating crowdsourced observations from a heterogeneous network of static physical, static social and dynamic social sensors. We assess the potential effects on model predictions for the extreme flood event that occurred in the Bacchiglione catchment in May 2013. Flood predictions are estimated at the target point of Ponte degli Angeli (Vicenza), the outlet of the Bacchiglione catchment, by means of a semi-distributed hydrological model. The contribution of the upstream sub-catchment is calculated using a conceptual hydrological model, and the flow is propagated along the river reach using a hydraulic model. In both models, a Kalman filter is implemented to assimilate the real-time crowdsourced observations. We synthetically derived crowdsourced observations for the static social and dynamic social sensors because crowdsourced measurements were not available. We consider three sets of experiments: (1) only physical sensors are available; (2) crowdsourced observations arrive with a given probability; and (3) a realistic scenario of citizen engagement based on population distribution. The results demonstrate the importance of integrating crowdsourced observations. Observations from upstream sub-catchments assimilated into the hydrological model ensure high model performance at long lead times, whereas observations next to the outlet of the catchment provide good results at short lead times.
Furthermore, citizen engagement scenarios driven by a feeling of belonging to a community of friends indicated flood prediction improvements when such small communities are located upstream of a particular target point. Effective communication and feedback between water authorities and citizens are required to ensure minimum engagement levels and to mitigate the intrinsically low and variable accuracy of crowdsourced observations.
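The core of the assimilation step above is the Kalman filter's variance-weighted blending of forecast and observation. A minimal scalar sketch (illustrative only; the study uses the filter inside a semi-distributed hydrological and a hydraulic model, and all numbers below are made up):

```python
def kalman_update(x_prior, var_prior, z, var_obs):
    """Scalar Kalman filter update: weight forecast and observation
    by the inverse of their error variances."""
    gain = var_prior / (var_prior + var_obs)   # Kalman gain
    x_post = x_prior + gain * (z - x_prior)
    var_post = (1.0 - gain) * var_prior
    return x_post, var_post

# Model forecast of water level: 2.0 m with variance 0.25.
# A physical sensor reads 2.4 m (variance 0.04); a noisier
# crowdsourced report gives 2.6 m (variance 0.50).
x, var = kalman_update(2.0, 0.25, 2.4, 0.04)
x, var = kalman_update(x, var, 2.6, 0.50)
```

The accurate physical reading pulls the estimate strongly, while the uncertain crowdsourced report only nudges it, which is why assimilating many low-accuracy observations can still help without any single one dominating the update.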


2020 ◽  
Vol 126 (4) ◽  
pp. 559-570 ◽  
Author(s):  
Ming Wang ◽  
Neil White ◽  
Jim Hanan ◽  
Di He ◽  
Enli Wang ◽  
...  

Abstract Background and Aims Functional–structural plant (FSP) models provide insights into the complex interactions between plant architecture and underlying developmental mechanisms. However, parameter estimation of FSP models remains challenging. We therefore used pattern-oriented modelling (POM) to test whether parameterization of FSP models can be made more efficient, systematic and powerful. With POM, a set of weak patterns is used to determine uncertain parameter values, instead of measuring them in experiments or observations, which is often infeasible. Methods We used an existing FSP model of avocado (Persea americana ‘Hass’) and tested whether POM parameterization would converge to an existing manual parameterization. The model was run for 10 000 parameter sets and model outputs were compared with verification patterns. Each verification pattern served as a filter for rejecting unrealistic parameter sets. The model was then validated by running it with the surviving parameter sets that passed all filters and comparing their pooled model outputs with additional validation patterns that were not used for parameterization. Key Results POM calibration led to 22 surviving parameter sets. Within these sets, most individual parameters varied over a large range. One of the resulting sets was similar to the manually parameterized set. Using the entire suite of surviving parameter sets, the model successfully predicted all validation patterns; however, two of the surviving parameter sets on their own could not make the model predict all validation patterns. Conclusions Our findings suggest strong interactions among model parameters and their corresponding processes. Using all surviving parameter sets takes these interactions fully into account, thereby improving model performance regarding validation and model output uncertainty.
We conclude that POM calibration allows FSP models to be developed in a timely manner without having to rely on field or laboratory experiments, or on cumbersome manual parameterization. POM also increases the predictive power of FSP models.
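The POM filtering loop is simple to sketch. Below, a made-up two-parameter toy plant model (not the avocado FSP model from the study) is run over random parameter sets, and pattern ranges act as filters that only realistic sets survive:

```python
import random

random.seed(0)

# Hypothetical toy plant model with two uncertain parameters.
def grow(rate, branch, steps=10):
    height, shoots = 1.0, 1.0
    for _ in range(steps):
        height *= 1.0 + rate     # relative elongation per step
        shoots *= 1.0 + branch   # relative branching per step
    return height, shoots

# Verification patterns act as filters: a parameter set survives only
# if every model output lies inside the observed pattern range.
patterns = [
    lambda h, s: 2.0 < h < 6.0,      # plausible final height
    lambda h, s: 5.0 <= s <= 200.0,  # plausible shoot count
]

survivors = []
for _ in range(10000):
    rate = random.uniform(0.0, 0.3)
    branch = random.uniform(0.0, 1.0)
    if all(f(*grow(rate, branch)) for f in patterns):
        survivors.append((rate, branch))
```

As in the study, each surviving set is individually consistent with all patterns, yet the surviving parameters still span wide ranges; it is the pooled ensemble, not any single set, that carries the predictive power.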


2016 ◽  
Vol 16 (10) ◽  
pp. 2195-2210 ◽  
Author(s):  
Luis A. Bastidas ◽  
James Knighton ◽  
Shaun W. Kline

Abstract. Development and simulation of synthetic hurricane tracks is a common methodology used to estimate hurricane hazards in the absence of empirical coastal surge and wave observations. Such methods typically rely on numerical models to translate stochastically generated hurricane wind and pressure forcing into coastal surge and wave estimates. The model output uncertainty associated with the selection of appropriate model parameters must therefore be addressed. The computational burden of probabilistic surge hazard estimates is exacerbated by the high dimensionality of numerical surge and wave models. We present a model parameter sensitivity analysis of the Delft3D model for the simulation of hazards posed by Hurricane Bob (1991) utilizing three theoretical wind distributions (NWS23, modified Rankine, and Holland). The sensitive model parameters (of 11 total considered) include the wind drag, the depth-induced breaking γB, and the bottom roughness. Several parameters show no sensitivity (threshold depth, eddy viscosity, wave triad parameters, and depth-induced breaking αB) and can therefore be excluded to reduce the computational burden of probabilistic surge hazard estimates. The sensitive model parameters also demonstrate a large number of interactions between parameters and a nonlinear model response. While model outputs showed sensitivity to several parameters, the ability of these parameters to act as tuning parameters for calibration is somewhat limited, as proper model calibration is strongly reliant on accurate wind and pressure forcing data. A comparison of the model performance with forcings from the different wind models is also presented.
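The screening logic can be illustrated with a toy one-at-a-time (OAT) sensitivity sketch. The response function below is a made-up stand-in, not Delft3D, and OAT, unlike the study's analysis, cannot see parameter interactions; it only shows how insensitive parameters are identified and frozen:

```python
# Toy surge response: rises with wind drag, falls with bottom roughness;
# eddy viscosity deliberately has no effect in this made-up function.
def surge(p):
    return 2000.0 * p["wind_drag"] + 0.5 / p["roughness"]

base = {"wind_drag": 0.0025, "roughness": 0.02, "eddy_visc": 1.0}

sensitivity = {}
for name, value in base.items():
    hi = dict(base, **{name: value * 1.1})   # +10 % perturbation
    lo = dict(base, **{name: value * 0.9})   # -10 % perturbation
    # Relative output change per relative input change
    sensitivity[name] = abs(surge(hi) - surge(lo)) / (0.2 * surge(base))
```

Parameters scoring near zero (here eddy_visc) can be fixed at nominal values, shrinking the dimensionality of the probabilistic hazard runs exactly as the abstract proposes.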


2015 ◽  
Vol 3 (10) ◽  
pp. 6491-6534 ◽  
Author(s):  
L. A. Bastidas ◽  
J. Knighton ◽  
S. W. Kline

Abstract. Development and simulation of synthetic hurricane tracks is a common methodology used to estimate hurricane hazards in the absence of empirical coastal surge and wave observations. Such methods typically rely on numerical models to translate stochastically generated hurricane wind and pressure forcing into coastal surge and wave estimates. The model output uncertainty associated with the selection of appropriate model parameters must therefore be addressed. The computational burden of probabilistic surge hazard estimates is exacerbated by the high dimensionality of numerical surge and wave models. We present a model parameter sensitivity analysis of the Delft3D model for the simulation of hazards posed by Hurricane Bob (1991) utilizing three theoretical wind distributions (NWS23, modified Rankine, and Holland). The sensitive model parameters (of 11 total considered) include the wind drag, the depth-induced breaking γB, and the bottom roughness. Several parameters show no sensitivity (threshold depth, eddy viscosity, wave triad parameters and depth-induced breaking αB) and can therefore be excluded to reduce the computational burden of probabilistic surge hazard estimates. The sensitive model parameters also demonstrate a large number of interactions between parameters and a non-linear model response. While model outputs showed sensitivity to several parameters, the ability of these parameters to act as tuning parameters for calibration is somewhat limited, as proper model calibration is strongly reliant on accurate wind and pressure forcing data. A comparison of the model performance with forcings from the different wind models is also presented.


2018 ◽  
Vol 22 (1) ◽  
pp. 391-416 ◽  
Author(s):  
Maurizio Mazzoleni ◽  
Vivian Juliette Cortes Arevalo ◽  
Uta Wehn ◽  
Leonardo Alfonso ◽  
Daniele Norbiato ◽  
...  

Abstract. To improve hydrological predictions, real-time measurements derived from traditional physical sensors are integrated within mathematical models. Recently, traditional sensors are being complemented with crowdsourced data (social sensors). Although measurements from social sensors can be low cost and more spatially distributed, other factors such as the spatial variability of citizen involvement, decreasing involvement over time, variable observation accuracy and feasibility for model assimilation play an important role in accurate flood predictions. Only a few studies have investigated the benefit of assimilating uncertain crowdsourced data in hydrological and hydraulic models. In this study, we investigate the usefulness of assimilating crowdsourced observations from a heterogeneous network of static physical, static social and dynamic social sensors. We assess improvements in the model prediction performance for different spatial–temporal scenarios of citizen involvement levels. To that end, we simulate an extreme flood event that occurred in the Bacchiglione catchment (Italy) in May 2013 using a semi-distributed hydrological model with the station at Ponte degli Angeli (Vicenza) as the prediction–validation point. A conceptual hydrological model implemented by the Alto Adriatico Water Authority is used to estimate runoff from the different sub-catchments, while a hydraulic model propagates the flow along the river reach. In both models, a Kalman filter is implemented to assimilate the crowdsourced observations. Synthetic crowdsourced observations are generated for the static social and dynamic social sensors because these measurements were not available at the time of the study. We consider two sets of experiments: (i) assuming a random probability of receiving crowdsourced observations and (ii) using theoretical scenarios of citizen motivations, and consequent involvement levels, based on population distribution.
The results demonstrate the usefulness of integrating crowdsourced observations. First, the assimilation of crowdsourced observations located at upstream points of the Bacchiglione catchment ensures high model performance at long lead times, whereas observations at the outlet of the catchment provide good results at short lead times. Second, biased and inaccurate crowdsourced observations can significantly affect model results. Third, the theoretical scenario of citizens motivated by a feeling of belonging to a community of friends has the best effect on model performance; however, flood prediction improves only when such small communities are located in the upstream portion of the Bacchiglione catchment. Finally, decreasing involvement over time leads to a reduction in model performance and, consequently, inaccurate flood forecasts.


1996 ◽  
Vol 33 (2) ◽  
pp. 79-90 ◽  
Author(s):  
Jian Hua Lei ◽  
Wolfgang Schilling

Physically-based urban rainfall-runoff models are mostly applied without parameter calibration. Given some preliminary estimates of the uncertainty of the model parameters, the associated model output uncertainty can be calculated. Monte-Carlo simulation followed by multi-linear regression is used for this analysis. The calculated model output uncertainty can then be compared to the uncertainty estimated by comparing model output with observed data. Based on this comparison, systematic or spurious errors can be detected in the observation data, the validity of the model structure can be confirmed, and the most sensitive parameters can be identified. If the calculated model output uncertainty is unacceptably large, the most sensitive parameters should be calibrated to reduce it. Observation data for which systematic and/or spurious errors have been detected should be discarded from the calibration data. This procedure, referred to as preliminary uncertainty analysis, is illustrated with an example. The HYSTEM program is applied to predict the runoff volume from an experimental catchment with a total area of 68 ha and an impervious area of 20 ha. Based on the preliminary uncertainty analysis, for 7 of 10 events the measured runoff volume is within the calculated uncertainty range, i.e. less than or equal to the calculated model predictive uncertainty. The remaining 3 events most likely include systematic or spurious errors in the observation data (in either the rainfall or the runoff measurements). These events are then discarded from further analysis. After calibrating the model, its predictive uncertainty is estimated.
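The Monte-Carlo propagation step can be sketched with a toy runoff-volume model (illustrative; not HYSTEM, and all parameter distributions below are assumed). Since the parameters are sampled independently, simple correlations serve here as a cheap stand-in for the multi-linear regression coefficients used to rank parameter sensitivity:

```python
import random, statistics

random.seed(7)

# Toy runoff-volume model: volume from an impervious-runoff
# coefficient and an initial-loss depth (hypothetical parameters).
def runoff_volume(c_imp, loss_mm, rain_mm=20.0, area_imp_ha=20.0):
    effective = max(0.0, rain_mm - loss_mm)
    return c_imp * area_imp_ha * effective * 10.0   # m3 (10 m3 per ha.mm)

# Monte-Carlo propagation of assumed parameter uncertainty.
cs, losses, vols = [], [], []
for _ in range(5000):
    c = random.gauss(0.85, 0.05)    # runoff coefficient
    l = random.gauss(2.0, 0.5)      # initial loss, mm
    cs.append(c)
    losses.append(l)
    vols.append(runoff_volume(c, l))

mean_v, sd_v = statistics.mean(vols), statistics.stdev(vols)
band = (mean_v - 2.0 * sd_v, mean_v + 2.0 * sd_v)   # predictive range

def pearson(xs, ys):
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / len(xs)
    return cov / (statistics.pstdev(xs) * statistics.pstdev(ys))

sens_c = pearson(cs, vols)          # sensitivity to runoff coefficient
sens_loss = pearson(losses, vols)   # sensitivity to initial loss
```

An observed event falling outside `band` would be flagged as containing possible systematic or spurious measurement errors, mirroring the screening of the 3 discarded events in the abstract.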


Limnology ◽  
2021 ◽  
Vol 22 (2) ◽  
pp. 169-177
Author(s):  
Yo Miyake ◽  
Hiroto Makino ◽  
Kenta Fukusaki

2021 ◽  
Vol 13 (12) ◽  
pp. 2405
Author(s):  
Fengyang Long ◽  
Chengfa Gao ◽  
Yuxiang Yan ◽  
Jinling Wang

Precise modeling of the weighted mean temperature (Tm) is critical for realizing real-time conversion from zenith wet delay (ZWD) to precipitable water vapor (PWV) in Global Navigation Satellite System (GNSS) meteorology applications. Empirical Tm models developed with neural network techniques have been shown to perform better on the global scale; they also have fewer model parameters and are thus easy to operate. This paper aims to deepen the research on Tm modeling with neural networks, expand the application scope of Tm models and provide global users with more solutions for the real-time acquisition of Tm. An enhanced neural network Tm model (ENNTm) has been developed with globally distributed radiosonde data. Compared with other empirical models, the ENNTm has several advanced features in both model design and model performance. First, the data used for modeling cover the whole troposphere rather than just the region near the Earth’s surface. Second, ensemble learning was employed to weaken the impact of sample disturbance on model performance, and elaborate data preprocessing, including up-sampling and down-sampling, was adopted to achieve better model performance on the global scale. Furthermore, the ENNTm was designed to meet the requirements of three different application conditions by providing three sets of model parameters, i.e., Tm estimation without measured meteorological elements, Tm estimation with only measured temperature, and Tm estimation with both measured temperature and water vapor pressure. The validation was carried out using globally distributed radiosonde data; the results show that the ENNTm performs better than other competing models from different perspectives under the same application conditions. The proposed model expands the application scope of Tm estimation and provides global users with more choices in real-time GNSS-PWV retrieval.
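The ZWD-to-PWV conversion that motivates Tm modeling can be written down directly. This is a generic Bevis-style formulation with commonly used literature constants, not the paper's own implementation:

```python
def pwv_from_zwd(zwd_mm, tm_kelvin):
    """ZWD -> PWV via the Tm-dependent dimensionless factor Pi
    (Bevis-style formulation; constants are common literature values)."""
    rho_w = 1000.0   # kg m-3, density of liquid water
    r_v = 461.5      # J kg-1 K-1, specific gas constant of water vapor
    k2p = 0.221      # K Pa-1  (22.1 K hPa-1)
    k3 = 3739.0      # K2 Pa-1 (3.739e5 K2 hPa-1)
    pi = 1.0e6 / (rho_w * r_v * (k3 / tm_kelvin + k2p))
    return pi * zwd_mm

# Pi is roughly 0.15-0.16, so a 200 mm ZWD maps to about 31 mm of PWV;
# a 1 K shift in Tm moves the result by only a few tenths of a percent,
# which is why empirical Tm models are accurate enough for real-time use.
pwv_a = pwv_from_zwd(200.0, 275.0)
pwv_b = pwv_from_zwd(200.0, 276.0)
```

The small sensitivity of PWV to Tm is exactly what makes a globally fitted empirical Tm model, rather than collocated radiosonde measurements, sufficient for real-time GNSS-PWV retrieval.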


2016 ◽  
Vol 15 (2) ◽  
pp. 196-208 ◽  
Author(s):  
Nicole M. Masters ◽  
Aaron Wiegand ◽  
Jasmin M. Thompson ◽  
Tara L. Vollmerhausen ◽  
Eva Hatje ◽  
...  

We investigated Escherichia coli populations in a metropolitan river after an extreme flood event. Between 9 and 15 of the 23 selected sites along the river were sampled fortnightly over three rounds. In all, 307 E. coli isolates were typed using the PhP typing method and grouped into common (C) or single (S) biochemical phenotypes (BPTs). A representative from each of the 31 identified C-BPTs was tested for 58 virulence genes (VGs) associated with intestinal and extra-intestinal E. coli, resistance to 22 antibiotics, biofilm production and cytotoxicity to Vero cells. The number of E. coli in the first sampling round was significantly (P < 0.01) higher than in subsequent rounds, whereas the number of VGs was significantly (P < 0.05) higher in isolates from the last sampling round than in previous rounds. Comparison of the C-BPTs with an existing database from wastewater treatment plants (WWTPs) in the same catchment showed that 40.6% of the river isolates were identical to WWTP isolates. The relatively high number of VGs and antibiotic resistances among the C-BPTs suggests that possessing and retaining these genes may provide niche advantages for naturalised and/or persistent E. coli populations, which may pose a health risk to the community.

