Modelling design flood hydrographs in catchments with mixed urban and rural land cover

2013
Vol 44 (6)
pp. 1040-1057
Author(s):
T. R. Kjeldsen
J. D. Miller
J. C. Packman

The effect of urban land cover on catchment flood response is evaluated using a lumped rainfall–runoff model to analyse flood events from selected UK catchments with mixed urban and rural land use. The present study proposes and evaluates three extensions to an existing model to better represent urban effects: an increase in runoff volume, a reduced response time, and a decrease in baseflow (resulting from decreased infiltration). Based on observed flood events from seven catchments, cross-validation methods are used to compare the predictive ability of the model variants with that of the original, unmodified model. The results show that including urban effects increases the predictive ability of the model across catchments, despite large between-event variability in model performance. More detailed investigation of the relationship between model performance and individual event characteristics (antecedent soil moisture, rainfall duration, depth, and intensity) did not reveal any systematic inability of the model to reproduce certain types of events. Finally, it is demonstrated that the new extended model can simulate urban effects in accordance with the expected changes in storm runoff patterns.
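
A minimal Python sketch of how two of the three proposed extensions (increased runoff volume, faster response) might be expressed in a lumped event model. The adjustment rules, coefficients, and triangular unit hydrograph are illustrative assumptions, not the authors' formulation:

```python
import numpy as np

def simulate_event(rain_mm, urban_frac, base_pr=0.3, base_tp_hr=10.0, dt_hr=1.0):
    """Toy lumped event model: urban cover raises the percentage runoff
    and shortens the time-to-peak of a triangular unit hydrograph.
    The 0.7 and 0.5 factors are hypothetical, for illustration only."""
    pr = base_pr + 0.7 * urban_frac * (1 - base_pr)  # more runoff volume
    tp = base_tp_hr * (1 - 0.5 * urban_frac)         # faster response
    net_rain = pr * np.asarray(rain_mm, float)

    # Triangular unit hydrograph with time base 2 * tp, normalised to sum to 1
    t = np.arange(int(round(2 * tp / dt_hr)) + 1) * dt_hr
    uh = np.where(t <= tp, t / tp, np.clip(2 - t / tp, 0, None))
    uh /= uh.sum()

    return np.convolve(net_rain, uh)  # event runoff series, mm per time step

storm = [0, 2, 8, 15, 6, 1, 0]  # mm/h
rural = simulate_event(storm, urban_frac=0.0)
urban = simulate_event(storm, urban_frac=0.4)
print(f"peak: rural {rural.max():.2f} mm/h, urban {urban.max():.2f} mm/h")
```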

2021
Vol 21 (2)
pp. 559-575
Author(s):
Oliver E. J. Wing
Andrew M. Smith
Michael L. Marston
Jeremy R. Porter
Mike F. Amodeo
...  

Abstract. Continental–global-scale flood hazard models simulate design floods, i.e. theoretical flood events of a given probability. Since they output phenomena unobservable in reality, large-scale models are typically compared to more localised engineering models to evidence their accuracy. However, both types of model may share the same biases and so not validly illustrate their predictive skill. Here, we adapt an existing continental-scale design flood framework of the contiguous US to simulate historical flood events. A total of 35 discrete events are modelled and compared to observations of flood extent, water level, and inundated buildings. Model performance was highly variable, depending on the flood event chosen and validation data used. While all events were accurately replicated in terms of flood extent, some modelled water levels deviated substantially from those measured in the field. Despite this, the model generally replicated the observed flood events in the context of terrain data vertical accuracy, extreme discharge measurement uncertainties, and observational field data errors. This analysis highlights the continually improving fidelity of large-scale flood hazard models, yet also evidences the need for considerable advances in the accuracy of routinely collected field and high-river flow data in order to interrogate flood inundation models more comprehensively.
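
Flood-extent agreement in studies of this kind is commonly scored with binary wet/dry metrics such as the critical success index (CSI); the abstract does not specify the exact scores used, so the following is a generic sketch for co-registered grids:

```python
import numpy as np

def extent_fit(modelled, observed):
    """Binary flood-extent agreement between two co-registered grids.
    CSI is the hit rate penalised by both misses and false alarms;
    bias > 1 indicates over-prediction of the wet area."""
    m, o = np.asarray(modelled, bool), np.asarray(observed, bool)
    hits = np.sum(m & o)
    misses = np.sum(~m & o)
    false_alarms = np.sum(m & ~o)
    csi = hits / (hits + misses + false_alarms)
    bias = (hits + false_alarms) / (hits + misses)
    return csi, bias

modelled = np.array([[1, 1, 0], [1, 0, 0]])
observed = np.array([[1, 1, 0], [0, 1, 0]])
print(extent_fit(modelled, observed))  # (0.5, 1.0)
```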


2021
Author(s):
Paul C. Astagneau
François Bourgin
Vazken Andréassian
Charles Perrin

To improve the predictive capability of a model, one must identify the situations in which it fails to provide satisfactory results. We sought to identify the deficiencies of a lumped rainfall-runoff model used for flood simulation (the hourly GR5H-I model) by evaluating it over a large set of 229 French catchments and 11,054 flood events. Evaluating model simulations separately for individual flood events allowed us to identify a seasonal pattern: while the model yielded good performance in terms of aggregated statistics, grouping the results by season revealed clear underestimation of most floods occurring in summer. The largest underestimations of flood volume occurred during high-intensity precipitation events and when the precipitation field was highly spatially variable. Low antecedent soil moisture conditions were also found to be strongly correlated with model bias. Overall, this study pinpoints the need to better account for short-duration processes to improve the GR5H-I model for flood simulation.
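
A short sketch of the event-by-event seasonal grouping described above, applied to an invented results table (the column names and values are hypothetical):

```python
import pandas as pd

# Hypothetical per-event simulation results
events = pd.DataFrame({
    "start": pd.to_datetime(["2004-01-12", "2004-07-03", "2005-08-21",
                             "2006-02-02", "2006-06-30"]),
    "sim_volume_mm": [42.0, 11.0, 8.5, 55.0, 9.0],
    "obs_volume_mm": [40.0, 19.0, 15.0, 52.0, 16.5],
})

# Relative volume bias per event: negative means underestimation
events["bias"] = (events.sim_volume_mm - events.obs_volume_mm) / events.obs_volume_mm

season = events.start.dt.month.map(
    lambda m: {12: "DJF", 1: "DJF", 2: "DJF", 3: "MAM", 4: "MAM", 5: "MAM",
               6: "JJA", 7: "JJA", 8: "JJA"}.get(m, "SON"))
print(events.groupby(season)["bias"].mean())  # summer (JJA) underestimation stands out
```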


2017
Vol 21 (2)
pp. 879-896
Author(s):
Tirthankar Roy
Hoshin V. Gupta
Aleix Serrat-Capdevila
Juan B. Valdes

Abstract. Daily, quasi-global (50° N–S and 180° W–E), satellite-based estimates of actual evapotranspiration at 0.25° spatial resolution have recently become available, generated by the Global Land Evaporation Amsterdam Model (GLEAM). We investigate the use of these data to improve the performance of a simple lumped catchment-scale hydrologic model driven by satellite-based precipitation estimates to generate streamflow simulations for a poorly gauged basin in Africa. In one approach, we use GLEAM to constrain the evapotranspiration estimates generated by the model, thereby modifying daily water balance and improving model performance. In an alternative approach, we instead change the structure of the model to improve its ability to simulate actual evapotranspiration (as estimated by GLEAM). Finally, we test whether the GLEAM product is able to further improve the performance of the structurally modified model. Results indicate that while both approaches can provide improved simulations of streamflow, the second approach also improves the simulation of actual evapotranspiration significantly, which substantiates the importance of making diagnostic structural improvements to hydrologic models whenever possible.
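
A minimal sketch of the first approach (constraining the model's evapotranspiration with GLEAM) in a toy single-bucket water balance; the structure, parameter values, and nudging rule are illustrative assumptions, not the authors' model:

```python
import numpy as np

def run_bucket(precip, pet, gleam_et=None, capacity=150.0, k=0.05, alpha=0.5):
    """Single-bucket daily water balance with linear-reservoir outflow.
    If a GLEAM ET series is supplied, the model's own ET estimate is
    nudged toward it (weight alpha), altering storage and simulated flow."""
    if gleam_et is None:
        gleam_et = [None] * len(precip)
    storage, flows = 0.5 * capacity, []
    for p, e_pot, e_obs in zip(precip, pet, gleam_et):
        et = e_pot * storage / capacity              # model ET estimate
        if e_obs is not None:
            et = (1 - alpha) * et + alpha * e_obs    # constrain with GLEAM
        storage = min(max(storage + p - et, 0.0), capacity)
        q = k * storage                              # outflow, mm/day
        storage -= q
        flows.append(q)
    return np.array(flows)

precip = [0, 12, 3, 0, 0, 25, 0]
pet = [4, 3, 5, 6, 6, 3, 5]
gleam = [3.5, 2.8, 4.0, 4.5, 4.2, 2.5, 3.8]
print(run_bucket(precip, pet))
print(run_bucket(precip, pet, gleam))
```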


Author(s):  
Zhi Li
Mengye Chen
Shang Gao
Berry Wen
Jonathan Gourley
...  

Coupled Hydrologic and Hydraulic (H&H) models have been widely applied to simulate both discharge and flood inundation because of their complementary advantages, yet H&H models often suffer from weak, one-way coupling and, in particular, disregard run-on infiltration (re-infiltration). This can compromise model accuracy, for example through under-prediction of subsurface water content and over-prediction of surface runoff. In this study, we examine differences in H&H model performance between scenarios with and without the re-infiltration process for two extreme events (a 100-year design rainfall and the 500-year Hurricane Harvey event), from the perspective of flood depth, inundation extent, and timing. Results from both events underline that re-infiltration has discernible, non-negligible effects on predictions of flood depth and extent, flood wave timing, and inundation duration. Saturated hydraulic conductivity and antecedent soil moisture are found to be the prime contributors to these differences. For the Hurricane Harvey event, model performance is verified against stream gauges and high-water marks, against which the re-infiltration scheme increases the Nash–Sutcliffe efficiency score by 140% on average and reduces maximum depth differences by 17%. This study highlights that the re-infiltration process should not be disregarded even in extreme flood simulations. Meanwhile, the new version of the H&H model, the Coupled Routing and Excess STorage inundation MApping and Prediction (CREST-iMAP) Version 1.1, which incorporates this two-way coupling and re-infiltration scheme, is released for public access.
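
The abstract reports gains in the Nash–Sutcliffe efficiency (NSE); for reference, a minimal implementation of the score, with invented discharge series standing in for the two model runs:

```python
import numpy as np

def nse(sim, obs):
    """Nash–Sutcliffe efficiency: 1 is a perfect fit, 0 matches the skill
    of always predicting the observed mean, negative is worse than that."""
    sim, obs = np.asarray(sim, float), np.asarray(obs, float)
    return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

obs = [10, 40, 160, 120, 60, 30]       # gauged discharge, m^3/s (invented)
no_reinf = [12, 70, 230, 150, 55, 20]  # run without re-infiltration (invented)
with_reinf = [11, 48, 175, 125, 58, 27]
print(f"NSE without: {nse(no_reinf, obs):.2f}, with: {nse(with_reinf, obs):.2f}")
```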


2021
Author(s):
Marina Z. Joel
Sachin Umrao
Enoch Chang
Rachel Choi
Daniel Yang
...  

Abstract. Background: Deep learning (DL) models have shown promise for automating the classification of medical images used for cancer detection. Unfortunately, recent studies have found that DL models are vulnerable to adversarial attacks, which manipulate images with small pixel-level perturbations designed to cause models to misclassify them. A better understanding is needed of how adversarial attacks affect the predictive ability of DL models in the medical image domain. Methods: We examined adversarial attacks on DL classification models trained separately on three medical imaging modalities commonly used in oncology: computed tomography (CT), mammography, and magnetic resonance imaging (MRI). We investigated how iterative adversarial training can be employed to increase model robustness against three first-order attack methods. Results: On unmodified images, we achieved classification accuracies of 75.4% for CT, 76.4% for mammography, and 93.6% for MRI. Under adversarial attack, model accuracy showed a maximum absolute decrease of 49.8% for CT, 52.9% for mammography, and 87.3% for MRI. Adversarial training increased model accuracy on adversarial images by up to 42.9% for CT, 35.7% for mammography, and 73.2% for MRI. Conclusions: Our results indicate that DL models for oncologic images are highly sensitive to adversarial attacks: visually imperceptible degrees of perturbation are sufficient to deceive the model the majority of the time. Adversarial training mitigated the effect of adversarial attacks on model performance but was less successful against stronger attacks. Our findings provide a useful basis for designing more robust and accurate medical DL models, as well as techniques to defend models from adversarial attack.
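
The abstract does not name the three first-order attack methods; the fast gradient sign method (FGSM) is a canonical member of that class. A minimal PyTorch sketch of the attack and of one adversarial-training step, with a stand-in model and data:

```python
import torch

def fgsm_attack(model, x, y, epsilon=0.01):
    """FGSM: perturb each pixel by +/-epsilon along the sign of the
    loss gradient, then clip back to the valid image range."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = torch.nn.functional.cross_entropy(model(x_adv), y)
    loss.backward()
    with torch.no_grad():
        x_adv = (x_adv + epsilon * x_adv.grad.sign()).clamp(0.0, 1.0)
    return x_adv.detach()

def adversarial_training_step(model, optimiser, x, y, epsilon=0.01):
    """One step of adversarial training: fit the model on perturbed inputs."""
    x_adv = fgsm_attack(model, x, y, epsilon)
    optimiser.zero_grad()
    loss = torch.nn.functional.cross_entropy(model(x_adv), y)
    loss.backward()
    optimiser.step()
    return loss.item()

# Stand-in classifier and fake single-channel images, for illustration only
model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(28 * 28, 2))
x, y = torch.rand(4, 1, 28, 28), torch.randint(0, 2, (4,))
optimiser = torch.optim.SGD(model.parameters(), lr=0.1)
print(adversarial_training_step(model, optimiser, x, y))
```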


PLoS ONE
2016
Vol 11 (11)
pp. e0166206
Author(s):
Tianshu Han
Shuang Tian
Li Wang
Xi Liang
Hongli Cui
...  

Stroke
2015
Vol 46 (suppl_1)
Author(s):
Blessing Jaja
Hester Lingsma
Ewout Steyerberg
R. Loch Macdonald

Background: Aneurysmal subarachnoid hemorrhage (SAH) is a cerebrovascular emergency. Currently, clinicians have limited tools to estimate outcomes early after hospitalization. We aimed to develop novel prognostic scores using large cohorts of patients reflecting experience from different settings. Methods: Logistic regression analysis was used to develop prediction models for mortality and unfavorable outcome, according to the 3-month Glasgow Outcome Scale after SAH, based on parameters readily obtained at hospital admission. The development cohort was derived from 10 prospective studies involving 10,936 patients in the Subarachnoid Hemorrhage International Trialists (SAHIT) repository. Model performance was assessed by bootstrap internal validation and by cross-validation with omission of each of the 10 studies, using the R² statistic, the area under the receiver operating characteristic curve (AUC), and calibration plots. Prognostic scores were developed from the regression coefficients. Results: The predictor with the strongest prognostic strength was neurologic status (partial R² = 12.03%), followed by age (1.91%), treatment modality (1.25%), Fisher grade of CT clot burden (0.65%), history of hypertension (0.37%), aneurysm size (0.12%), and aneurysm location (0.06%). These predictors were combined to develop three sets of hierarchical scores based on the coefficients of the regression models. The AUC was 0.79-0.80 at bootstrap validation and 0.64-0.85 at cross-validation. Calibration plots demonstrated satisfactory agreement between predicted and observed probabilities of the outcomes. Conclusions: The novel prognostic scores have good predictive ability and potential for broad application, as they were developed from prospective cohorts reflecting experience from different centers globally.
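
A minimal sketch of the bootstrap internal-validation idea (optimism-corrected AUC for a logistic model); the data below are invented stand-ins, not the SAHIT cohort or its coefficients:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Invented data: rows are patients, columns are admission predictors,
# y is a binary unfavourable outcome
X = rng.normal(size=(500, 4))
y = (X @ np.array([1.2, 0.4, 0.3, 0.1]) + rng.normal(size=500) > 0).astype(int)

model = LogisticRegression().fit(X, y)
apparent_auc = roc_auc_score(y, model.predict_proba(X)[:, 1])

# Bootstrap validation: refit on resamples, compare resample AUC with
# full-sample AUC, and subtract the average optimism (Harrell-style)
optimism = []
for _ in range(200):
    idx = rng.integers(0, len(y), len(y))
    m = LogisticRegression().fit(X[idx], y[idx])
    optimism.append(roc_auc_score(y[idx], m.predict_proba(X[idx])[:, 1])
                    - roc_auc_score(y, m.predict_proba(X)[:, 1]))
print(f"optimism-corrected AUC: {apparent_auc - np.mean(optimism):.3f}")
```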


2021
Author(s):
Simon Vale
Andrew Swales
Hugh Smith
Greg Olsen
Ben Woodward

Sediment fingerprinting is a technique for determining the proportional contributions of sediment from erosion sources delivered to downstream locations. It involves selecting tracers that discriminate sediment sources and determining the contributions from those sources using those tracers. Tracers can include geochemical properties, fallout radionuclides, magnetic properties, and compound-specific stable isotope (CSSI) values of plant-derived biotracers that label soils and sediment. A range of tracer applications and developments in source un-mixing have been demonstrated in the literature and, while the basis for discriminating sediment sources is reasonably well understood, research has drawn increasing attention to the limitations and uncertainties associated with source apportionment. Numerical mixtures provide a way to test model performance using idealized mixtures with known source proportions. Although this approach has been applied previously, it has not been used to test and compare model performance across a range of tracer types with varied source-contribution dominance and numbers of sources.

We used numerical mixtures to examine the ability of two different tracer sets (geochemical and CSSI), each with two tracer selections, to discriminate sources using a common source dataset. Sources were sampled according to erosion process and land cover in the Aroaro catchment (22 km²), New Zealand. We sampled topsoils and subsoils from pasture (n = 12 sites), harvested pine (12), kanuka scrub (7), and native forest (4) locations. Composite soil samples were collected at 0-2 and 40-50 cm depth increments to represent surface and shallow-landslide (subsoil) erosion sources. Stream sediment samples (11) were also collected for initial un-mixing. Here, we focus on using numerical mixtures with geochemical and CSSI tracers for an increasing number of sources (3 to 6), where each individual and pairwise combination of sources was systematically set as the dominant source. Since mixing models for CSSI tracers produce source contributions based on isotopic proportions (Isotopic%) rather than soil contributions (Soil%), CSSI numerical mixtures were created for both Isotopic% and Soil% to assess the impact this correction factor may have on model performance. In total, over 400 model scenarios were tested.

Numerical mixture testing indicated that the dominant source can have a significant impact on model performance. If the dominant source is well discriminated, the model performs well, but accuracy declines significantly as discrimination of the dominant source is reduced. This occurs more frequently as the number of sources increases. The geochemical dataset performed well for erosion-based sources, while both tracer sets produced larger apportionment errors for land-cover sources. CSSI model performance was generally poorer for Soil% than for Isotopic%, indicating high sensitivity to the percentage of soil organic carbon in each source, especially when there are large differences in organic matter between sources.
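
A minimal sketch of the numerical-mixture test: build a mixture from known source proportions, then check how well an un-mixing step recovers them. The tracer values are invented, and non-negative least squares is a simple stand-in for a full mixing model:

```python
import numpy as np
from scipy.optimize import nnls

# Invented tracer means: three sources (rows) x four tracers (columns)
sources = np.array([[12.0, 0.8, 35.0, 4.1],   # pasture topsoil
                    [ 9.5, 1.9, 28.0, 6.3],   # pine subsoil
                    [15.2, 0.5, 41.0, 2.2]])  # native forest
true_p = np.array([0.6, 0.3, 0.1])            # known mixture proportions

mixture = true_p @ sources                    # the numerical mixture

# Un-mix with non-negative least squares, then renormalise to sum to 1
p_hat, _ = nnls(sources.T, mixture)
p_hat /= p_hat.sum()
print(np.round(p_hat, 3))  # should recover [0.6, 0.3, 0.1]
```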


2020
Vol 9 (7)
pp. 458
Author(s):
Rafael M. Navarro Cerrillo
Guillermo Palacios Rodríguez
Inmaculada Clavero Rumbao
Miguel Ángel Lara
Francisco Javier Bonet
...  

The effective and efficient planning of rural land-use changes and their impact on the environment is critical for land-use managers. Many land-use growth models have been proposed for forecasting growth patterns in recent years. In this work, a cellular automata (CA)-based land-use model (Metronamica) was tested to simulate (1999–2007) and predict (2007–2035) land-use dynamics and land-use changes in Andalusia (Spain). The model was calibrated using temporal changes in land-use cover and was evaluated with the Kappa index. GIS-based maps were generated to study major rural land-use changes (agriculture and forests). The change matrix for 1999–2007 showed an overall area change of 674,971 ha. The dominant land uses in 2007 were shrubs (30.7%), woody crops on dry land (17.3%), and herbaceous crops on dry land (12.7%). The comparison between the reference and simulated land-use maps of 2007 yielded a Kappa index of 0.91. The land-cover maps for the projected PRELUDE scenarios provided the land-cover characteristics of Andalusia in 2035, developed within the Metronamica model scenarios (Great Escape, Evolved Society, Clustered Network, Lettuce Surprise U, and Big Crisis). The greatest differences were found between the Great Escape scenario and the Clustered Network and Lettuce Surprise U scenarios. The observed trend (1999–2007–2035) showed the greatest similarity with the Big Crisis scenario. Land-use projections facilitate the understanding of the future dynamics of land-use change in rural areas, and hence the development of more appropriate plans and policies.
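
Map agreement here is assessed with the Kappa index; a minimal implementation of Cohen's kappa for two categorical rasters:

```python
import numpy as np

def kappa(map_a, map_b):
    """Cohen's kappa: cell-by-cell agreement between two categorical
    maps, corrected for the agreement expected by chance."""
    a, b = np.ravel(map_a), np.ravel(map_b)
    p_obs = np.mean(a == b)
    p_exp = sum(np.mean(a == c) * np.mean(b == c) for c in np.union1d(a, b))
    return (p_obs - p_exp) / (1.0 - p_exp)

reference = np.array([[1, 1, 2], [3, 2, 2], [3, 3, 1]])  # toy land-use codes
simulated = np.array([[1, 1, 2], [3, 2, 1], [3, 3, 1]])
print(round(kappa(reference, simulated), 2))  # 0.83
```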

