Hydrodynamic Flood Modelling of Large Regions Under Data-Poor Situations

2021 ◽  
Vol 8 (2) ◽  
pp. 1-16
Author(s):  
Mohit Prakash Mohanty ◽  
Subhankar Karmakar

A serious constraint on data availability over flood-prone areas in India limits the potential for carrying out hydrodynamic flood modelling studies. Such difficulties arise from a lack of high-resolution topography, river cross-section data, and sufficient, accurate calibration and validation data sets. The present study addresses the problems faced in performing comprehensive 1D-2D coupled flood modelling over the flood-ravaged district of Jagatsinghpur, Odisha, India. The constraints on hydraulic parameters, such as water level and discharge, and on geometric parameters, such as river geometry, are investigated. The simulations were performed for a severe flood event in 2011 that incurred heavy socio-economic losses in the district. The establishment of a modelling platform for flood simulation elucidates the major constraints on hydrodynamic modelling in such data-poor, flood-prone areas.

2008 ◽  
Vol 26 (6) ◽  
pp. 877-883 ◽  
Author(s):  
Zhifu Sun ◽  
Dennis A. Wigle ◽  
Ping Yang

Purpose Gene expression profiling for outcome prediction of non–small-cell lung cancer (NSCLC) remains clouded by heterogeneous and unvalidated results. This study applied multivariate approaches to identify and evaluate value-added gene expression signatures in two types of NSCLC. Materials and Methods Two NSCLC oligonucleotide microarray data sets of adenocarcinoma and squamous cell carcinoma were used as training sets to select prognostic genes independent of conventional predictors. The top 50 genes from each set were used to predict the outcomes of two independent validation data sets of 84 and 91 NSCLC cases. Results Adenocarcinomas with the 50-gene signature from adenocarcinoma in both validation data sets had a 2.4-fold (95% CI, 1.3 to 4.4 and 1.0 to 5.8) increased mortality after adjustment for conventional predictors. Squamous cell carcinomas with this high-risk signature had an adjusted risk of 1.1 (95% CI, 0.4 to 3.2) in one data set and 2.5 (95% CI, 1.1 to 5.8) in another consisting of stage I tumors. Adenocarcinomas with the 50-gene signature from squamous cell carcinoma had an elevated risk of 3.5 (95% CI, 1.4 to 9.0) after adjustment for conventional predictors. Squamous cell carcinomas with this high-risk signature had an adjusted risk of 1.8 (95% CI, 0.7 to 4.6). Despite little overlap in individual genes, the two gene signatures had significant functional connectedness in molecular pathways. Conclusion Two non-overlapping but functionally related gene expression signatures provide consistently improved survival prediction for NSCLC regardless of histologic cell type. Multiple gene sets with predictive value may exist for NSCLC, but those with predictive value independent of clinical predictors will be required for clinical translation.
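
The adjusted hazard ratios above come from multivariate survival modelling. As a rough illustration of that evaluation step (not the authors' actual pipeline), the following Python sketch fits a Cox proportional hazards model on a simulated validation cohort, where `high_risk` stands in for the 50-gene signature call and age and stage for the conventional predictors; all data and names are hypothetical.

```python
# Sketch: Cox regression of survival on a gene-signature risk call,
# adjusted for conventional predictors. Hypothetical simulated cohort;
# not the authors' data or pipeline.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n = 200
high_risk = rng.integers(0, 2, n)            # 50-gene signature call (0/1)
age = rng.normal(65, 8, n)                   # conventional predictor
stage_II = rng.integers(0, 2, n)             # conventional predictor
# Simulate survival so that high-risk tumours fail faster.
months = rng.exponential(60, n) / np.exp(0.9 * high_risk + 0.02 * (age - 65))
death = (months < 36).astype(int)            # administrative censoring at 3 years
months = np.minimum(months, 36)

df = pd.DataFrame({"months": months, "death": death, "age": age,
                   "stage_II": stage_II, "high_risk": high_risk})
cph = CoxPHFitter()
cph.fit(df, duration_col="months", event_col="death")
# exp(coef) for `high_risk` is the adjusted hazard ratio, the analogue of
# the 2.4-fold mortality increase reported for the adenocarcinoma signature.
print(cph.summary["exp(coef)"])
```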


Author(s):  
Yen-Ming Tseng ◽  
Hsi-Shan Huang ◽  
Chen JiQuan ◽  
Wang Xuefei ◽  
Chen Han ◽  
...  

The IEEE-1159 definitions and classifications of power quality are applied at the supply and demand sides of the power system to evaluate the power quality of high-voltage key customers, using voltage event records (VERs) captured at the low-voltage supply level. The VER voltage parameters are collected with Fluke's Event View software, which records event data and plots event data sets; data mining techniques are then applied to analyse voltage swells, voltage sags, power outages, frequency events, and three-phase load unbalance for high-voltage key customers. Statistics of abnormal voltage events, discriminated by quantization, allow high-voltage key customers to evaluate their own power quality; they also provide a reference for the quality requirements of semiconductor factories with higher demands, helping them avoid significant economic losses.
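
As a minimal illustration of the classification step, the sketch below assigns recorded events to IEEE 1159 magnitude categories (sag 0.1–0.9 pu, swell above 1.1 pu, interruption below 0.1 pu). The record layout is a hypothetical stand-in, not the actual Fluke Event View export schema, and the duration dimension of IEEE 1159 is omitted for brevity.

```python
# Sketch: assigning VER events to IEEE 1159 magnitude categories.
# Thresholds: interruption < 0.1 pu, sag 0.1-0.9 pu, swell > 1.1 pu.
# Hypothetical record layout; event duration classes are ignored here.

def classify_event(v_pu: float) -> str:
    """Classify one event by its residual RMS voltage in per unit."""
    if v_pu < 0.1:
        return "interruption"   # power outage
    if v_pu < 0.9:
        return "sag"
    if v_pu > 1.1:
        return "swell"
    return "normal"

# Hypothetical VER records: (timestamp, per-unit RMS voltage).
events = [("2020-03-01 02:14", 0.62),
          ("2020-03-05 11:03", 1.22),
          ("2020-03-09 23:40", 0.04)]

for timestamp, v_pu in events:
    print(timestamp, v_pu, classify_event(v_pu))
```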


10.29007/fbh3 ◽  
2018 ◽  
Author(s):  
Xiaohan Li ◽  
Patrick Willems

Urban flood pre-warning decisions based on urban flood modelling are crucial for protecting people and property in urban areas. However, urbanization, changing environmental conditions, and climate change challenge the adaptability of urban sewer models. While hydraulic models are capable of making accurate flood predictions, they are less flexible and more computationally expensive than conceptual models, which are simpler and more efficient. In an era of exploding data availability and computing techniques, data-driven models are gaining popularity in urban flood modelling, but they suffer from data sparseness. To overcome this issue, a hybrid urban flood modelling approach is proposed in this study. It incorporates a conceptual model to account for the dominant sewer hydrological processes and a logistic regression model that predicts the probability of flooding on a sub-urban scale. This approach is demonstrated for a highly urbanized area in Antwerp, Belgium. Compared against a 1D/0D hydrodynamic model, it shows promising ability to make probabilistic flood predictions regardless of rainfall type or seasonal variation. In addition, the model is more tolerant of input data quality and is fully adaptable for real-time applications.
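
As an illustration of the statistical half of such a hybrid, the sketch below fits a logistic regression that turns outputs of a (here simulated) conceptual sewer model into flood probabilities for a sub-area; the features, coefficients, and data are hypothetical, not the paper's calibrated model.

```python
# Sketch: logistic-regression half of a hybrid flood model. The conceptual
# sewer model is replaced by simulated features (rainfall intensity and a
# storage filling degree); all names and values are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 500
rain_intensity = rng.gamma(2.0, 5.0, n)        # mm/h, from the rainfall input
storage_fill = rng.uniform(0.0, 1.0, n)        # from the conceptual sewer model
X = np.column_stack([rain_intensity, storage_fill])

# Synthetic truth: flooding becomes likely when intensity and filling are high.
p_true = 1.0 / (1.0 + np.exp(-(0.15 * rain_intensity + 3.0 * storage_fill - 5.0)))
flooded = rng.random(n) < p_true

clf = LogisticRegression().fit(X, flooded)
# Probabilistic prediction for a new event: 20 mm/h on a system 90% full.
print(clf.predict_proba([[20.0, 0.9]])[0, 1])
```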


2021 ◽  
Author(s):  
Tobias Pilz

The megacity of Lagos, Nigeria, is subject to recurrent severe flood events as a consequence of extreme rainfall. In addition, climate change might exacerbate this problem by increasing rainfall intensities. To study the hazard of pluvial flooding in urban areas, several complex hydraulic models exist, with high demands in terms of input data, manual preprocessing, and computational power. However, for many regions of the world, only insufficient local information is available. Moreover, the complexity of model setup prevents reproducible model initialisation and application. This conference contribution addresses these issues through an example application of the complex hydrodynamic model TELEMAC-2D for the city of Lagos. The complex initialisation procedure is simplified by the new package 'telemac' for the statistical environment R. A workflow will be presented that illustrates the functionality of the package and the use of publicly available information, such as free DEMs and OpenStreetMap data, to cope with the problem of insufficient local information. Further analysis and visualisation steps along the workflow illustrate the increasing hazard of pluvial flooding for Lagos. The workflow makes model initialisation, application, and the analysis of results reproducible and applicable to other regions, with relatively little need for manual user intervention and without additional software other than R and TELEMAC-2D.
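
The paper's workflow itself runs through the R package 'telemac'; as a language-agnostic sketch of the public-data preprocessing it relies on, the snippet below clips a freely available DEM tile to a study-area bounding box with rasterio before mesh generation. The file name and coordinates are hypothetical.

```python
# Sketch of the public-data preprocessing step: clip a free DEM tile to a
# study-area bounding box before mesh generation. Generic Python (rasterio)
# stand-in; the paper drives this workflow through the R package 'telemac'.
# The file name and bounding box are hypothetical.
import rasterio
from rasterio.windows import from_bounds

with rasterio.open("srtm_lagos.tif") as src:             # e.g. a free SRTM tile
    window = from_bounds(3.2, 6.4, 3.6, 6.7, src.transform)  # lon/lat box
    dem = src.read(1, window=window)                     # elevation array
    transform = src.window_transform(window)             # georeferencing of clip

print(dem.shape, float(dem.min()), float(dem.max()))
```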


Author(s):  
Uzma Raja ◽  
Marietta J. Tretter

Open Source Software (OSS) has reached new levels of sophistication and acceptance by users and commercial software vendors. This research creates, tests, and validates a model for predicting the successful development of OSS projects. Widely available archival data on OSS projects was obtained from Sourceforge.net and analyzed with multiple data mining techniques. Initially, three competing models were created using logistic regression, decision trees, and neural networks. These models were compared for precision and refined in several phases. Text mining was used to create new variables that improve the predictive power of the models. The final model was chosen based on the best fit to separate training and validation data sets and on its ability to explain the relationships among variables. Model robustness was determined by testing it on a new dataset extracted from the SF repository. The results indicate that end-user involvement, project age, functionality, usage, project management techniques, project type, and team communication methods have a significant impact on the development of OSS projects.
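
A minimal sketch of that model comparison in Python, assuming hypothetical SourceForge-style records: structured fields plus a TF-IDF text-mining feature block from the project description, scored across the three classifier families with cross-validation. None of the field names or values come from the paper.

```python
# Sketch: the three competing classifier families on archival project data,
# with text-mining features from project descriptions. All fields and
# values are hypothetical stand-ins for the SourceForge data.
import numpy as np
from scipy.sparse import csr_matrix, hstack
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier

ages = np.array([48, 3, 60, 12, 36, 6, 54, 24])          # months since registration
downloads = np.array([9000, 40, 15000, 200, 5000, 10, 12000, 800])
descriptions = ["active cli tool frequent releases", "early prototype",
                "mature library many contributors", "stalled gui app",
                "popular plugin steady commits", "single developer demo",
                "widely used framework active list", "inactive fork"]
success = np.array([1, 0, 1, 0, 1, 0, 1, 0])             # development outcome

text = TfidfVectorizer().fit_transform(descriptions)
X = hstack([csr_matrix(np.column_stack([ages, downloads])), text]).tocsr()

for model in (LogisticRegression(max_iter=1000),
              DecisionTreeClassifier(max_depth=3),
              MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000)):
    scores = cross_val_score(model, X, success, cv=2)
    print(type(model).__name__, scores.mean())
```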


2014 ◽  
Vol 8 (2) ◽  
pp. 471-485 ◽  
Author(s):  
S. Jörg-Hess ◽  
F. Fundel ◽  
T. Jonas ◽  
M. Zappa

Abstract. Gridded snow water equivalent (SWE) data sets are valuable for estimating snow water resources and for verifying different model systems, e.g. hydrological, land surface or atmospheric models. However, changing data availability represents a considerable challenge when trying to derive consistent time series for SWE products. In an attempt to improve product consistency, we first evaluated the differences between two climatologies of SWE grids that were calculated on the basis of data from 110 and 203 stations, respectively. The "shorter" climatology (2001–2009) was produced using 203 stations (map203) and the "longer" one (1971–2009) using 110 stations (map110). Relative to map203, map110 underestimated SWE, especially at higher elevations and at the end of the winter season. We tested the potential of quantile mapping to compensate for mapping errors in map110 relative to map203. During the 9 yr calibration period from 2001 to 2009, for which both map203 and map110 were available, the method successfully refined the spatial and temporal SWE representation in map110 by making seasonal, regional and altitude-related distinctions. Expanding the calibration to the full 39 yr showed that the general underestimation of map110 with respect to map203 could be removed for the whole winter. The calibrated SWE maps fitted the reference (map203) well when averaged over regions and time periods, with a mean error of approximately zero. However, deviations between the calibrated maps and map203 were observed at single grid cells and in single years. Looking at three regions in more detail, we found that the calibration had the largest effect in the region with the highest proportion of catchment area above 2000 m a.s.l., and that the general underestimation of map110 compared to map203 could be removed for the entire snow season. The added value of the calibrated SWE climatology is illustrated with practical examples: the verification of a hydrological model, the estimation of snow resource anomalies, and the predictability of runoff from SWE.
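
The core of the correction is standard quantile mapping. A minimal numpy sketch, assuming hypothetical co-located SWE values for the 2001–2009 overlap (the paper additionally stratifies the transfer functions by season, region, and altitude):

```python
# Sketch of the quantile-mapping correction: transfer map110-like values onto
# the distribution of the map203 reference using the overlap period. Plain
# empirical CDF matching; all SWE values (mm) are hypothetical.
import numpy as np

rng = np.random.default_rng(2)
swe110_cal = rng.gamma(2.0, 60.0, 3000)                       # biased-low product
swe203_cal = np.maximum(1.25 * swe110_cal + rng.normal(0, 10, 3000), 0)  # reference

quantiles = np.linspace(0.0, 1.0, 101)
q110 = np.quantile(swe110_cal, quantiles)
q203 = np.quantile(swe203_cal, quantiles)

def quantile_map(x):
    """Map values from the map110 distribution onto the map203 distribution."""
    return np.interp(x, q110, q203)

print(quantile_map(np.array([50.0, 150.0, 300.0])))
```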


Water ◽  
2020 ◽  
Vol 12 (6) ◽  
pp. 1562
Author(s):  
Gamze Koç ◽  
Theresia Petrow ◽  
Annegret Thieken

The most severe flood events in Turkey were determined for the period 1960–2014 by considering the number of fatalities, the number of affected people, and the total economic losses as indicators. The potential triggering mechanisms (i.e., atmospheric circulations and precipitation amounts) and aggravating pathways (i.e., topographic features, catchment size, land use types, and soil properties) of these 25 events were analyzed. On this basis, a new approach was developed to identify the main influencing factor per event and to provide additional information for determining the dominant flood occurrence pathways for severe floods. The events were then classified through hierarchical cluster analysis. As a result, six different clusters were found and characterized. Cluster 1 comprised flood events that were mainly influenced by drainage characteristics (e.g., catchment size and shape); Cluster 2 comprised events aggravated predominantly by urbanization; steep topography was identified to be the dominant factor for Cluster 3; extreme rainfall was determined as the main triggering factor for Cluster 4; saturated soil conditions were found to be the dominant factor for Cluster 5; and orographic effects of mountain ranges characterized Cluster 6. This study determined pathway patterns of the severe floods in Turkey with regard to their main causal or aggravating mechanisms. Accordingly, geomorphological properties are of major importance in large catchments in eastern and northeastern Anatolia. In addition, in small catchments, the share of urbanized area seems to be an important factor for the extent of flood impacts. This paper presents an outcome that could be used for future urban planning and flood risk prevention studies to understand the flood mechanisms in different regions of Turkey.
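
A compact sketch of the clustering step: standardize the per-event trigger and pathway descriptors, build a hierarchical (Ward) linkage, and cut the tree into a fixed number of clusters. The feature set and values below are hypothetical placeholders for the descriptors named above.

```python
# Sketch: Ward hierarchical clustering of flood events on standardized
# trigger/pathway descriptors, then cutting the tree into clusters.
# Rows are events; columns and values are hypothetical placeholders
# (event rainfall, catchment size, mean slope, urban share, soil saturation).
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from sklearn.preprocessing import StandardScaler

events = np.array([
    [180,  1500,  4, 10, 0.50],
    [ 60, 12000,  2,  5, 0.90],
    [220,   300, 25,  8, 0.40],
    [ 90,   800,  6, 55, 0.60],
    [ 70,  9000,  3,  4, 0.85],
    [200,   250, 30, 12, 0.30],
])

Z = linkage(StandardScaler().fit_transform(events), method="ward")
labels = fcluster(Z, t=3, criterion="maxclust")   # 3 clusters for 6 toy events
print(labels)
```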


Soil Research ◽  
2001 ◽  
Vol 39 (5) ◽  
pp. 1015 ◽  
Author(s):  
T. H. Webb ◽  
L. R. Lilburne ◽  
G. S. Francis

Simulation models require testing and calibration prior to their application to regions beyond those involved in their development. This paper reports on the calibration and testing of the Groundwater Loading Effects of Agricultural Management Systems (GLEAMS) model for the simulation of nitrate leaching under cropping in Canterbury. The GLEAMS model was first calibrated using crop and nitrogen leaching data collected from 4 consecutive years (1991–94) of spring-sown cereals following the ploughing of a temporary grass/clover pasture. Nitrate leaching losses were calculated from a combination of measured soil-solution nitrate concentration at 0.6 m depth, estimated drainage, and mineral N from soil cores. These calculated leached-N values were then used to calibrate the GLEAMS model. Parameters controlling denitrification and mineralisation rates in the model needed modification to provide sufficient mineral N for plant growth and nitrate leaching. The calibrated model was then tested against 3 independent validation data sets that were collected over 3 years from an adjacent experimental site under the same management practices. Predictions from the calibrated GLEAMS model were in close agreement with measured values of mineralisation and leached N for the validation data sets. The amount of leached N averaged 43 kg N/ha.year and varied from 14 to 104 kg N/ha.year. The annual amount of drainage accounted for 97% of the variance in leached N, but the period in arable cropping was poorly correlated with leached N.
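
A worked sketch of the leached-N bookkeeping and the drainage relationship reported above: leached N is concentration times drainage depth (1 mg NO3-N/L over 1 mm of drainage deposits 0.01 kg N/ha), and the r² of leached N against drainage plays the role of the 97% figure. All values are hypothetical.

```python
# Sketch of the leached-N bookkeeping: leached N = concentration x drainage,
# with 1 mg NO3-N/L over 1 mm of drainage equal to 0.01 kg N/ha. The r^2 of
# leached N against drainage mirrors the 97% of variance reported above.
# All values are hypothetical.
import numpy as np

drainage = np.array([150, 420, 260, 560, 210, 330, 480])   # mm/year
conc = np.array([9.0, 11.0, 8.0, 13.0, 7.0, 10.0, 12.0])   # mg NO3-N/L

leached_n = conc * drainage * 0.01                          # kg N/ha/year
print(leached_n)

r = np.corrcoef(drainage, leached_n)[0, 1]
print(r ** 2)                                               # variance explained
```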


2018 ◽  
Author(s):  
Carla Márquez-Luna ◽  
Steven Gazal ◽  
Po-Ru Loh ◽  
Samuel S. Kim ◽  
Nicholas Furlotte ◽  
...  

Abstract. Genetic variants in functional regions of the genome are enriched for complex trait heritability. Here, we introduce a new method for polygenic prediction, LDpred-funct, that leverages trait-specific functional priors to increase prediction accuracy. We fit priors using the recently developed baseline-LD model, which includes coding, conserved, regulatory and LD-related annotations. We analytically estimate posterior mean causal effect sizes and then use cross-validation to regularize these estimates, improving prediction accuracy for sparse architectures. LDpred-funct attained higher prediction accuracy than other polygenic prediction methods in simulations using real genotypes. We applied LDpred-funct to predict 21 highly heritable traits in the UK Biobank. We used association statistics from British-ancestry samples as training data (avg N=373K) and samples of other European ancestries as validation data (avg N=22K), to minimize confounding. LDpred-funct attained a +4.6% relative improvement in average prediction accuracy (avg prediction R2=0.144; highest R2=0.413 for height) compared to SBayesR (the best method that does not incorporate functional information). For height, meta-analyzing training data from UK Biobank and 23andMe cohorts (total N=1107K; higher heritability in UK Biobank cohort) increased prediction R2 to 0.431. Our results show that incorporating functional priors improves polygenic prediction accuracy, consistent with the functional architecture of complex traits.
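
A simplified illustration of the central shrinkage idea (ignoring LD, and omitting LDpred-funct's cross-validation regularization): under a normal prior whose per-SNP variance is set by functional annotations, the posterior mean effect is the marginal estimate shrunk toward zero in proportion to the prior variance. All numbers below are hypothetical.

```python
# Sketch of the central shrinkage idea, ignoring LD and the method's
# cross-validation regularization: with beta_hat_j ~ N(beta_j, 1/N) and an
# annotation-informed prior beta_j ~ N(0, prior_var_j), the posterior mean is
# beta_hat_j * prior_var_j / (prior_var_j + 1/N). All numbers are hypothetical.
import numpy as np

N = 373_000                                    # training sample size
beta_hat = np.array([0.012, -0.004, 0.020])    # marginal per-SNP effects
prior_var = np.array([1e-4, 1e-6, 5e-4])       # functional-prior variances

beta_post = beta_hat * prior_var / (prior_var + 1.0 / N)
print(beta_post)                               # SNPs in depleted annotations shrink more

# The polygenic score is then the genotype-weighted sum of posterior effects.
genotypes = np.array([[0, 1, 2],               # two individuals, three SNPs
                      [2, 0, 1]])
print(genotypes @ beta_post)
```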

