A Comparison of Continuous and Event-Based Rainfall–Runoff (RR) Modelling Using EPA-SWMM

Water ◽  
2019 ◽  
Vol 11 (3) ◽  
pp. 611 ◽  
Author(s):  
Sharif Hossain ◽  
Guna Alankarage Hewa ◽  
Subhashini Wella-Hewage

This study investigates the comparative performance of event-based and continuous simulation modelling with a stormwater management model (EPA-SWMM) in calculating total runoff hydrographs and direct runoff hydrographs. The Myponga upstream and Scott Creek catchments in South Australia were selected as case study catchments, and model performance was assessed using a total of 36 streamflow events from the period 2001 to 2004. Goodness-of-fit of the EPA-SWMM models developed using automatic calibration was assessed using eight measures, including Nash–Sutcliffe efficiency (NSE), NSE of daily high flows (ANSE), and Kling–Gupta efficiency (KGE). The results of this study suggest that event-based modelling with EPA-SWMM outperforms the continuous simulation approach in producing both the total runoff hydrograph (TRH) and direct runoff hydrograph (DRH).
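As an illustrative aside (not code from the study), the two headline goodness-of-fit measures can be computed from paired observed and simulated flows as in this minimal NumPy sketch:

```python
import numpy as np

def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 is a perfect fit, 0 means the
    simulation is no better than the mean of the observations."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def kge(obs, sim):
    """Kling-Gupta efficiency: combines correlation (r), variability
    ratio (alpha) and bias ratio (beta); 1 is a perfect fit."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    r = np.corrcoef(obs, sim)[0, 1]
    alpha = sim.std() / obs.std()
    beta = sim.mean() / obs.mean()
    return 1.0 - np.sqrt((r - 1) ** 2 + (alpha - 1) ** 2 + (beta - 1) ** 2)

# Example with synthetic hydrograph ordinates (m^3/s)
obs = [1.2, 3.4, 8.9, 5.1, 2.0]
sim = [1.0, 3.9, 8.1, 5.6, 2.3]
print(f"NSE = {nse(obs, sim):.3f}, KGE = {kge(obs, sim):.3f}")
```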

2020 ◽  
Vol 15 (4) ◽  
pp. 351-361
Author(s):  
Liwei Huang ◽  
Arkady Shemyakin

Skewed t-copulas have recently become popular as a tool for modeling non-linear dependence in statistics. In this paper we consider three different versions of skewed t-copulas, introduced by Demarta and McNeil; Smith, Gan and Kohn; and Azzalini and Capitanio. Each of these versions generalizes the symmetric t-copula model, allowing for a different treatment of the lower and upper tails. Each has certain advantages in mathematical construction, inferential tools and interpretability. Our objective is to apply models based on the different types of skewed t-copulas to the same financial and insurance applications. We consider comovements of stock index returns and times-to-failure of related vehicle parts under the warranty period. In both cases the treatment of both the lower and upper tails of the joint distributions is of special importance. Skewed t-copula model performance is compared to the benchmark cases of Gaussian and symmetric Student t-copulas. Instruments of comparison include information criteria, goodness-of-fit and tail dependence. Special attention is paid to methods of estimation of the copula parameters. Technical problems with the implementation of the maximum likelihood method and the method of moments suggest the use of Bayesian estimation, and we discuss the accuracy and computational efficiency of Bayesian estimation versus MLE. A Metropolis-Hastings algorithm with block updates is suggested to deal with the intractability of the conditionals.
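The block-update idea can be illustrated with a generic random-walk sampler; the following is a hedged sketch on a toy log-posterior (the block structure and proposal scales are illustrative assumptions, not the authors' implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

def metropolis_blocks(log_post, theta0, blocks, scales, n_iter=5000):
    """Random-walk Metropolis-Hastings with block updates: each block of
    parameters (e.g., correlation vs. skewness of a skewed t-copula) is
    proposed and accepted/rejected separately within one sweep."""
    theta = np.array(theta0, float)
    lp = log_post(theta)
    draws = np.empty((n_iter, theta.size))
    for i in range(n_iter):
        for block, scale in zip(blocks, scales):
            prop = theta.copy()
            prop[block] += rng.normal(0.0, scale, size=len(block))
            lp_prop = log_post(prop)
            if np.log(rng.uniform()) < lp_prop - lp:  # accept the block move
                theta, lp = prop, lp_prop
        draws[i] = theta
    return draws

# Toy target: independent standard normals, split into two blocks
log_post = lambda t: -0.5 * np.sum(t ** 2)
draws = metropolis_blocks(log_post, theta0=[0.0, 0.0, 0.0],
                          blocks=[[0, 1], [2]], scales=[0.5, 0.5])
print(draws[1000:].mean(axis=0))  # posterior means after burn-in
```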


2020 ◽  
Vol 41 (S1) ◽  
pp. s521-s522
Author(s):  
Debarka Sengupta ◽  
Vaibhav Singh ◽  
Seema Singh ◽  
Dinesh Tewari ◽  
Mudit Kapoor ◽  
...  

Background: The rising trend of antibiotic resistance imposes a heavy clinical and economic burden on healthcare (an estimated US$55 billion and 23,000 deaths annually in the United States), as well as increased length of stay and morbidity. Machine-learning–based methods have, of late, been used to leverage patients' clinical histories and demographic information to predict antimicrobial resistance. We developed a machine-learning model ensemble that maximizes the accuracy of such a drug sensitivity versus resistivity classification system compared to the existing best-practice methods. Methods: We first performed a comprehensive analysis of the association between infecting bacterial species and patient factors, including patient demographics, comorbidities, and certain healthcare-specific features. We leveraged the predictable nature of these complex associations to infer patient-specific antibiotic sensitivities. Various base learners, including k-NN (k-nearest neighbors) and gradient boosting machine (GBM), were used to train an ensemble model for confident prediction of antimicrobial susceptibilities. Base learner selection and model performance evaluation were performed carefully using a variety of standard metrics, namely accuracy, precision, recall, F1 score, and Cohen κ. Results: We validated performance on the MIMIC-III database, which harbors deidentified clinical data of 53,423 distinct patient admissions between 2001 and 2012 in the intensive care units (ICUs) of the Beth Israel Deaconess Medical Center in Boston, Massachusetts. From ~11,000 positive cultures, we used 4 major specimen types, namely urine, sputum, blood, and pus swab, to evaluate model performance. Figure 1 shows the receiver operating characteristic (ROC) curves obtained for bloodstream infection cases upon model building and prediction on a 70:30 split of the data. We obtained area under the curve (AUC) values of 0.88, 0.92, 0.92, and 0.94 for urine, sputum, blood, and pus swab samples, respectively. Figure 2 shows the comparative performance of our proposed method as well as some off-the-shelf classification algorithms. Conclusions: Highly accurate, patient-specific predictive antibiogram (PSPA) data can aid clinicians significantly in antibiotic recommendation in the ICU, thereby accelerating patient recovery and curbing antimicrobial resistance. Funding: This study was supported by Circle of Life Healthcare Pvt. Ltd. Disclosures: None
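A minimal sketch of such an evaluation pipeline, assuming scikit-learn, synthetic data in place of MIMIC-III, and illustrative hyperparameters (not the authors' ensemble):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier, VotingClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import (accuracy_score, precision_score, recall_score,
                             f1_score, cohen_kappa_score, roc_auc_score)

# Synthetic stand-in for a binary sensitive-vs-resistant label
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Soft-voting ensemble of k-NN and GBM base learners
ensemble = VotingClassifier(
    estimators=[("knn", KNeighborsClassifier(n_neighbors=15)),
                ("gbm", GradientBoostingClassifier(random_state=0))],
    voting="soft").fit(X_tr, y_tr)

pred = ensemble.predict(X_te)
prob = ensemble.predict_proba(X_te)[:, 1]
for name, score in [("accuracy", accuracy_score(y_te, pred)),
                    ("precision", precision_score(y_te, pred)),
                    ("recall", recall_score(y_te, pred)),
                    ("F1", f1_score(y_te, pred)),
                    ("Cohen kappa", cohen_kappa_score(y_te, pred)),
                    ("AUC", roc_auc_score(y_te, prob))]:
    print(f"{name}: {score:.3f}")
```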


Circulation ◽  
2018 ◽  
Vol 138 (Suppl_2) ◽  
Author(s):  
Anne V Grossestreuer ◽  
Tuyen Yankama ◽  
Ari Moskowitz ◽  
Anthony Mahoney-Pacheco ◽  
Varun Konanki ◽  
...  

Introduction: Cardiac arrest (CA) outcomes, when dichotomized as survival/non-survival, limit the statistical power of interventional studies and do not acknowledge hospital-level factors independent of post-CA sequelae. We explored the Sequential Organ Failure Assessment (SOFA) score at 72 hours post-CA as a surrogate outcome measure for mortality. We also assessed methods to account for death <72 hours post-CA in SOFA score computation. Methods: This was a single-center retrospective study of post-CA patients from 1/08-12/17. SOFA score components were abstracted at baseline, 24, 48, and 72h post-CA. Thirteen ways of accounting for missing data were assessed. The outcome was mortality at hospital discharge. Model performance was assessed using the area under the receiver operating characteristic curve (AUC) and Hosmer-Lemeshow goodness-of-fit statistics. Results: Of 847 patients, 528 (62%) had complete baseline SOFA scores and 205 (24%) had complete scores at 72h. Death <72h occurred in 28%; 45% survived to hospital discharge. The SOFA score at 72h without accounting for death had an AUC of 0.62. The best-performing SOFA model at 72h with good calibration imputed a 20% increase over the last observed SOFA score in patients who expired <72h, with an AUC of 0.79 (95% CI: 0.74-0.83). In terms of change in SOFA at 72h from baseline, the best-performing model with good calibration imputed death <72h as the highest possible score (AUC: 0.88 [95% CI: 0.84-0.92]). These results were consistent when analyzing in- and out-of-hospital CA separately, although the change-from-baseline model was not well calibrated in in-hospital arrests. Conclusions: Without consideration of death, SOFA scores at 72 hours post-CA perform poorly. Imputing for early mortality improved the model. If this imputation structure is validated prospectively, SOFA could provide a scoring system to predict death at hospital discharge and serve as a surrogate outcome measure in interventional studies.
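The best-performing imputation rule can be illustrated with a short sketch (hypothetical records; the 1.2 factor is the 20% increase described above):

```python
import numpy as np
from sklearn.metrics import roc_auc_score

# Hypothetical records: last observed SOFA, 72h SOFA (NaN if death <72h),
# and in-hospital mortality outcome.
last_sofa  = np.array([6.0, 10.0, 4.0, 12.0, 8.0])
sofa_72h   = np.array([5.0, np.nan, 3.0, np.nan, 9.0])
died_lt72h = np.isnan(sofa_72h)
mortality  = np.array([0, 1, 0, 1, 1])

# Best-performing rule per the abstract: patients who expired before
# 72 h get a 20% increase over their last observed SOFA score.
sofa_imputed = np.where(died_lt72h, 1.2 * last_sofa, sofa_72h)

# Discrimination of the imputed 72 h score for hospital mortality
print("AUC:", roc_auc_score(mortality, sofa_imputed))
```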


2016 ◽  
Vol 2016 ◽  
pp. 1-28 ◽  
Author(s):  
Charles Onyutha

Five hydrological models were applied to data from the Blue Nile Basin. Optimal parameters of each model were obtained by automatic calibration. Model performance was tested under both moderate and extreme flow conditions. Extreme events for the model performance evaluation were extracted based on seven criteria. Apart from graphical techniques, nine statistical “goodness-of-fit” metrics were used to judge model performance. It was found that whereas the influence of model selection may be minimal in the simulation of normal flow events, it can lead to large under- and/or overestimations of extreme events. Moreover, the selection of the best model for extreme events may be influenced by the choice of the statistical “goodness-of-fit” measures as well as the criteria for extracting high and low flows. It was noted that the use of an overall water-balance-based objective function not only is suitable for moderate flow conditions but also leads the models to perform better for high flows than for low flows. Thus, the choice of a particular model is recommended to be made on a case-by-case basis with respect to the objectives of the modeling as well as the results from evaluation of the intermodel differences.
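As an illustration of judging a fit separately under moderate and extreme conditions, here is a hedged sketch using percentile-threshold extraction (one of many possible extraction criteria) and percent bias as one of many possible metrics:

```python
import numpy as np

def pbias(obs, sim):
    """Percent bias: positive values indicate underestimation."""
    return 100.0 * np.sum(obs - sim) / np.sum(obs)

def regime_metrics(obs, sim, hi_q=0.9, lo_q=0.1):
    """Evaluate a fit separately for high-flow and low-flow events,
    extracted here with simple percentile thresholds."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    hi = obs >= np.quantile(obs, hi_q)
    lo = obs <= np.quantile(obs, lo_q)
    return {"overall": pbias(obs, sim),
            "high flows": pbias(obs[hi], sim[hi]),
            "low flows": pbias(obs[lo], sim[lo])}

rng = np.random.default_rng(1)
obs = rng.gamma(2.0, 5.0, size=365)          # synthetic daily flows
sim = obs * rng.normal(0.95, 0.1, size=365)  # model with a slight low bias
print(regime_metrics(obs, sim))
```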


2017 ◽  
Vol 21 (7) ◽  
pp. 3325-3352 ◽  
Author(s):  
Christa Kelleher ◽  
Brian McGlynn ◽  
Thorsten Wagener

Abstract. Distributed catchment models are widely used tools for predicting hydrologic behavior. While distributed models require many parameters to describe a system, they are expected to simulate behavior that is more consistent with observed processes. However, obtaining a single set of acceptable parameters can be problematic, as parameter equifinality often results in several behavioral sets that fit observations (typically streamflow). In this study, we investigate the extent to which equifinality impacts a typical distributed modeling application. We outline a hierarchical approach to reduce the number of behavioral sets based on regional, observation-driven, and expert-knowledge-based constraints. For our application, we explored how each of these constraint classes reduced the number of behavioral parameter sets and altered distributions of spatiotemporal simulations, simulating a well-studied headwater catchment, Stringer Creek, Montana, using the distributed hydrology–soil–vegetation model (DHSVM). As a demonstrative exercise, we investigated model performance across 10 000 parameter sets. Constraints on regional signatures, the hydrograph, and two internal snow water equivalent time series reduced the number of behavioral parameter sets but still left a small number with similar goodness of fit. This subset was ultimately further reduced by incorporating expectations of the groundwater table depth pattern across the catchment. Our results suggest that a hierarchical approach based on regional datasets, observations, and expert knowledge to identify behavioral parameter sets can reduce equifinality and support more careful application and simulation of spatiotemporal processes via distributed modeling at the catchment scale.
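The hierarchical filtering idea can be sketched generically; the constraint names, thresholds, and random stand-in "diagnostics" below are illustrative assumptions, not DHSVM outputs:

```python
import numpy as np

rng = np.random.default_rng(2)

# One stand-in diagnostic per Monte Carlo parameter set
n_sets = 10_000
diag = {
    "runoff_ratio": rng.uniform(0.1, 0.9, n_sets),   # regional signature
    "nse_flow":     rng.uniform(-1.0, 1.0, n_sets),  # hydrograph fit
    "swe_error":    rng.uniform(0.0, 1.0, n_sets),   # snow observations
    "wt_pattern":   rng.uniform(0.0, 1.0, n_sets),   # expert expectation
}

# Apply constraint classes in sequence, keeping only behavioral sets
behavioral = np.ones(n_sets, dtype=bool)
constraints = [
    ("regional signature", (diag["runoff_ratio"] > 0.3)
                          & (diag["runoff_ratio"] < 0.6)),
    ("hydrograph fit",      diag["nse_flow"] > 0.7),
    ("snow (SWE) fit",      diag["swe_error"] < 0.2),
    ("water table pattern", diag["wt_pattern"] > 0.8),
]
for name, keep in constraints:
    behavioral &= keep
    print(f"after {name}: {behavioral.sum()} behavioral sets remain")
```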


Forests ◽  
2019 ◽  
Vol 10 (7) ◽  
pp. 605
Author(s):  
Peter F. Newton

The objective of this study was to specify, parameterize, and evaluate an acoustic-based inferential framework for estimating commercially relevant wood attributes within standing jack pine (Pinus banksiana Lamb.) trees. The analytical framework consisted of a suite of models for predicting the dynamic modulus of elasticity (me), microfibril angle (ma), oven-dried wood density (wd), tracheid wall thickness (wt), radial and tangential tracheid diameters (dr and dt, respectively), fibre coarseness (co), and specific surface area (sa) from dilatational stress wave velocity (vd). Data acquisition consisted of (1) in-forest collection of acoustic velocity measurements on 61 sample trees situated within 10 variable-sized plots established in four mature jack pine stands in boreal Canada, followed by the removal of breast-height cross-sectional disk samples, and (2) given (1), in-laboratory extraction of radial-based transverse xylem samples from the 61 disks and subsequent attribute determination via Silviscan-3. Statistically, attribute-specific acoustic prediction models were specified, parameterized, and subsequently evaluated on their goodness-of-fit, lack-of-fit, and predictive ability. The results indicated that significant (p ≤ 0.05) and unbiased relationships could be established for all attributes but dt. The models explained 71%, 66%, 61%, 42%, 30%, 19%, and 13% of the variation in me, wt, sa, co, wd, ma, and dr, respectively. Simulated model performance when deploying an acoustic-based wood density estimate indicated that the expected magnitude of the error arising from the prediction of dt, co, sa, wt, me, and ma would be in the order of ±8%, ±12%, ±12%, ±13%, ±20%, and ±39% of their true values, respectively. Assessment of the utility of predicting the prerequisite wd estimate using micro-drill resistance measures revealed that the amplitude-based wd estimate was inconsequentially more precise (<2%) than that obtained from vd. A discourse regarding the potential utility and limitations of the acoustic-based computational suite for forecasting jack pine end-product potential was also articulated.
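A hedged sketch of fitting attribute-specific linear models against vd and reporting explained variance, in the spirit of the relationships above (synthetic data; the coupling strengths and units are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(3)
vd = rng.uniform(3.0, 5.0, 61)               # km/s, one value per tree

# Synthetic attributes with different degrees of acoustic coupling
attributes = {
    "me": 2.5 * vd + rng.normal(0, 0.6, 61),    # strongly coupled
    "wd": 40.0 * vd + rng.normal(0, 60.0, 61),  # weakly coupled
}
for name, y in attributes.items():
    slope, intercept = np.polyfit(vd, y, 1)     # attribute-specific model
    y_hat = slope * vd + intercept
    r2 = 1 - np.sum((y - y_hat) ** 2) / np.sum((y - y.mean()) ** 2)
    print(f"{name}: R^2 = {r2:.2f}")
```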


2001 ◽  
Vol 5 (4) ◽  
pp. 554-562 ◽  
Author(s):  
R. Ragab ◽  
D. Moidinis ◽  
J. Albergel ◽  
J. Khouri ◽  
A. Drubi ◽  
...  

Abstract. The objective of this work was to assess the performance of the newly developed HYDROMED model. Three catchments with hill reservoirs were selected: El-Gouazine and Kamech in Tunisia and Es Sindiany in Syria. The rainfall, the spillway flow and the volume of water in the reservoirs were used as input to the model. Events that generated spillway flow were preferred for calibration. The results confirmed that the HYDROMED model is capable of reproducing the runoff volume at all three sites. In calibrating single events, model performance was high as measured by the Nash-Sutcliffe criterion for goodness of fit; in some events this value was as high as 98%. In simulation mode, the highest Nash-Sutcliffe criterion value was close to 70% in the El-Gouazine and Kamech catchments and close to 50% in the Es Sindiany catchment. Given the limited information available, especially on the unrecorded releases in the three catchments, the hydrological impact of site geology (e.g. Kamech), the unrecorded operator intervention during spillway flow (e.g. Es Sindiany) and other unaccounted factors (e.g. siltation, evaporation), these results are by and large very encouraging. However, they could be further improved as and when more information on the unrecorded parameters becomes available. Additionally, the results of this work highlighted the need for long-term records with a large number of significant spillway-generating events to obtain more consistent and reliable parameter values, as well as the need for more accurately recorded releases for irrigation and other uses. As these results are encouraging, more tests on these three and other sites are planned.

Keywords: HYDROMED, rainfall-runoff model, Mediterranean, conceptual model
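For intuition only, a minimal conceptual water balance for a hill-reservoir event (not HYDROMED itself; the runoff coefficient and all values are illustrative assumptions):

```python
def reservoir_event(rain_mm, area_km2, v0_m3, capacity_m3, runoff_coef=0.3):
    """Route one storm event through a hill reservoir: event runoff
    fills storage, and any volume above capacity leaves as spillway
    flow (releases and evaporation are ignored in this toy sketch)."""
    inflow = runoff_coef * (rain_mm / 1000.0) * (area_km2 * 1e6)  # m^3
    v = v0_m3 + inflow
    spill = max(0.0, v - capacity_m3)
    return min(v, capacity_m3), spill

v, spill = reservoir_event(rain_mm=45.0, area_km2=18.0,
                           v0_m3=150_000.0, capacity_m3=230_000.0)
print(f"storage: {v:,.0f} m^3, spillway flow: {spill:,.0f} m^3")
```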


1977 ◽  
Vol 9 (9) ◽  
pp. 1067-1079 ◽  
Author(s):  
S Openshaw ◽  
C J Connolly

The relationship between the choice of deterrence function and the goodness of fit of a singly constrained spatial interaction model is examined as a basis for improving model performance. The results show that there is no significant improvement in model goodness of fit until a deterrence-function characterisation is used which is based on a family of functions, with the spatial domain of each function being determined in an approximately optimal manner. These findings are consistent with theoretical research on microlevel trip behaviour and can be used to identify descriptive models which possess maximum levels of performance.
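A singly constrained model with a swappable deterrence function can be sketched as follows (toy cost matrix and parameters; the power and exponential forms stand in for the family of functions the abstract refers to):

```python
import numpy as np

def singly_constrained(O, W, cost, deterrence):
    """Origin-constrained spatial interaction model:
    T_ij = O_i * A_i * W_j * f(c_ij), where the balancing factor A_i
    normalizes each row so trips from origin i sum to O_i."""
    f = deterrence(cost)
    A = 1.0 / (f @ W)                       # balancing factor per origin
    return (O * A)[:, None] * W[None, :] * f

O = np.array([100.0, 200.0])                # trips produced at each origin
W = np.array([1.0, 2.0, 1.5])               # attractiveness of destinations
cost = np.array([[1.0, 2.0, 3.0],
                 [2.0, 1.0, 2.5]])          # travel cost matrix c_ij

# Two candidate deterrence functions from the usual family
power = lambda c: c ** -2.0
expon = lambda c: np.exp(-0.8 * c)
for name, f in [("power", power), ("exponential", expon)]:
    T = singly_constrained(O, W, cost, f)
    print(name, np.round(T, 1), "row sums:", T.sum(axis=1))
```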


Water ◽  
2021 ◽  
Vol 13 (14) ◽  
pp. 1931
Author(s):  
Alvaro Sordo-Ward ◽  
Ivan Gabriel-Martín ◽  
Paola Bianucci ◽  
Giuseppe Mascaro ◽  
Enrique R. Vivoni ◽  
...  

This study proposes a methodology that combines the advantages of event-based and continuous models for the derivation of maximum flow and maximum hydrograph volume frequency curves, by coupling a stochastic continuous weather generator (the advanced weather generator, abbreviated as AWE-GEN) with a fully distributed, physically based hydrological model (the TIN-based real-time integrated basin simulator, abbreviated as tRIBS) that runs in both event-based and continuous modes. The methodology is applied to Peacheater Creek, a 64 km2 basin located in Oklahoma, United States. First, a continuous 5000-year hourly weather forcing series is generated using the stochastic weather generator AWE-GEN. Second, a continuous hydrological simulation of 50 years of the climate series is run with the hydrological model tRIBS. Simultaneously, storm events are separated by applying the exponential method to the 5000- and 50-year climate series. From the continuous 50-year simulation, the mean soil moisture in the top 10 cm of the soil layer of the basin (MSM10) is extracted at an hourly time step. Afterwards, from the time series of hourly MSM10, the values associated with all the storm events within the 50 years of hourly weather series are extracted, so that each storm event has an associated initial soil moisture value (MSM10Event). From these, the probability distribution of MSM10Event for each month of the year is obtained. Third, the five major events of each of the 5000 years in terms of total depth are simulated in an event-based framework in tRIBS, assigning an initial moisture state for the basin within a Monte Carlo framework. Finally, the maximum annual hydrographs are obtained in terms of maximum peak-flow and volume, and the associated frequency curves are derived. To validate the method, the results obtained by the hybrid method are compared to those obtained by deriving the flood frequency curves from the continuous simulation of 5000 years, analyzing the maximum annual peak-flow, the maximum annual volume, and the dependence between peak-flow and volume. Independence between rainfall events and antecedent soil moisture conditions was verified. The proposed hybrid method reproduces the univariate flood frequency curves in good agreement with those obtained by continuous simulation: the maximum annual peak-flow frequency curve is obtained with a Nash–Sutcliffe coefficient of 0.98, and the maximum annual volume frequency curve with a Nash–Sutcliffe value of 0.97. The proposed hybrid method thus allows hydrological forcing to be generated with a fully distributed, physically based model while reducing computation times from the order of months to hours.
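The Monte Carlo step and the derivation of the annual-maximum frequency curve can be sketched as follows (the event response is a placeholder for tRIBS, and all distributions and coefficients are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(4)
n_years, events_per_year = 5000, 5

def event_peak(depth_mm, msm10):
    """Placeholder event response: peak flow grows with storm depth
    and with wetter antecedent conditions (illustrative only)."""
    return 0.05 * depth_mm * (0.5 + msm10)

depths = rng.gamma(4.0, 12.0, (n_years, events_per_year))  # storm depth, mm
msm10 = rng.beta(2.0, 3.0, (n_years, events_per_year))     # sampled MSM10Event

# Annual maxima of the five major events per year
annual_max = event_peak(depths, msm10).max(axis=1)
annual_max.sort()

# Weibull plotting positions -> empirical return periods (ascending)
T = (n_years + 1) / (n_years - np.arange(n_years))
for rp in (10, 100, 1000):
    print(f"~{rp}-yr peak flow: {np.interp(rp, T, annual_max):.1f} m^3/s")
```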

