A regional comparative analysis of empirical and theoretical flood peak-volume relationships

2016
Vol 64 (4)
pp. 367-381
Author(s):
Ján Szolgay
Ladislav Gaál
Tomáš Bacigál
Silvia Kohnová
Kamila Hlavčová
...  

Abstract This paper analyses the bivariate relationship between flood peaks and corresponding flood event volumes modelled by empirical and theoretical copulas in a regional context, with a focus on flood generation processes in general, their regional differentiation, and the effect of the sample size on reliable discrimination among models. A total of 72 catchments in the north-west of Austria are analysed for the period 1976–2007. From the hourly runoff data set, 25 697 flood events were isolated and assigned to one of three flood process types: synoptic floods (including long- and short-rain floods), flash floods, or snowmelt floods (both rain-on-snow and snowmelt floods). The first step of the analysis examines whether the empirical peak-volume copulas of different flood process types are regionally statistically distinguishable, separately for each catchment, and how the sample size affects the strength of these statements. The results indicate that the empirical copulas of flash floods tend to be different from those of the synoptic and snowmelt floods. The second step examines how similar the empirical flood peak-volume copulas are between catchments for a given flood type across the region. Empirical copulas of synoptic floods are the least similar between catchments; however, as the sample size decreases, the differences between the process types become small. The third step examines the goodness-of-fit of commonly used copula types to data samples that represent the annual maxima of flood peaks and the respective volumes, both regardless of the flood generating processes (the traditional engineering approach) and with the three process-based classes considered. Extreme value copulas (Galambos, Gumbel and Hüsler-Reiss) show the best performance for both synoptic and flash floods, while the Frank copula shows the best performance for snowmelt floods. It is concluded that there is merit in treating flood types separately when analysing and estimating flood peak-volume dependence copulas; however, even the enlarged data set gained by the process-based analysis in this study does not provide sufficient information for a reliable model choice in the multivariate statistical analysis of flood peaks and volumes.
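As an illustration of the kind of peak-volume dependence modelling described above, the sketch below fits a Gumbel (extreme value) copula by inverting Kendall's tau on rank-based pseudo-observations. It is a minimal sketch with synthetic placeholder data, not the estimation procedure used in the study itself.

```python
# Minimal sketch: fit a Gumbel (extreme value) copula to flood peak-volume pairs
# by inverting Kendall's tau. Data arrays are synthetic placeholders.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
peaks = rng.gumbel(loc=50.0, scale=15.0, size=200)        # hypothetical flood peaks
volumes = 0.8 * peaks + rng.normal(0.0, 10.0, size=200)   # hypothetical event volumes

# Pseudo-observations: ranks rescaled to (0, 1), i.e. the empirical copula sample
u = stats.rankdata(peaks) / (len(peaks) + 1)
v = stats.rankdata(volumes) / (len(volumes) + 1)

# Kendall's tau and its inversion for the Gumbel copula: tau = 1 - 1/theta
tau, _ = stats.kendalltau(peaks, volumes)
theta = 1.0 / (1.0 - tau)

def gumbel_copula_cdf(u, v, theta):
    """Gumbel-Hougaard copula C(u,v) = exp(-[(-ln u)^theta + (-ln v)^theta]^(1/theta))."""
    return np.exp(-((-np.log(u)) ** theta + (-np.log(v)) ** theta) ** (1.0 / theta))

print(f"Kendall's tau = {tau:.3f}, fitted Gumbel theta = {theta:.3f}")
```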

2016
Vol 46 (3)
pp. 155-178
Author(s):
Ladislav Gaál
Ján Szolgay
Tomáš Bacigál
Silvia Kohnová
Kamila Hlavčová
...  

Abstract This paper analyses the bivariate relationship between flood peaks and corresponding flood event volumes modelled by empirical copulas in a regional context in the north-west of Austria. Flood data from a total of 69 catchments in the region are analysed for the period 1976–2007. In order to increase the sample size and the homogeneity of the samples for the statistical analysis, 24 872 hydrologically independent flood events were isolated and assigned to one of three flood process types: synoptic floods, flash floods or snowmelt floods, in contrast to the more traditional engineering approach of selecting annual maxima of flood peaks and corresponding flood volumes. The first major part of the paper examines whether the empirical peak-volume copulas of different flood process types are statistically distinguishable, separately for each catchment. The results indicate that the empirical copulas of flash floods tend to be different from those of the synoptic and snowmelt floods in the target region. The second part examines how similar the empirical flood peak-volume copulas are between catchments for a given flood type. For the majority of catchment pairs, the empirical copulas of all flood types are indeed statistically similar. The flash floods show the largest degree of spatial heterogeneity. It is concluded that there is merit in treating flood types separately and in pooling events of the same type in a region when analysing and estimating flood peak-volume dependence copulas; however, the sample size of the analysed events remains a limiting factor in spite of the introduced event selection procedure.
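The comparison of empirical copulas between flood types can be illustrated as below: each empirical copula is evaluated on a grid of the unit square and the mean squared difference is taken as a distance. This is an illustrative distance on synthetic placeholder data, not the formal similarity test applied in the paper.

```python
# Illustrative sketch: compare the empirical copulas of two flood-type samples
# with a Cramer-von Mises-type distance on a grid. Data are synthetic placeholders.
import numpy as np
from scipy.stats import rankdata

def empirical_copula(u_grid, v_grid, peaks, volumes):
    """Evaluate the empirical copula C_n(u, v) on a grid from peak-volume pairs."""
    n = len(peaks)
    u = rankdata(peaks) / (n + 1)
    v = rankdata(volumes) / (n + 1)
    # C_n(u0, v0) = (1/n) * #{i: u_i <= u0 and v_i <= v0}
    return np.array([[np.mean((u <= u0) & (v <= v0)) for v0 in v_grid] for u0 in u_grid])

def copula_distance(sample_a, sample_b, m=20):
    """Mean squared difference between two empirical copulas on an m x m grid."""
    grid = np.linspace(0.05, 0.95, m)
    c_a = empirical_copula(grid, grid, *sample_a)
    c_b = empirical_copula(grid, grid, *sample_b)
    return float(np.mean((c_a - c_b) ** 2))

rng = np.random.default_rng(1)
flash = (rng.gamma(2.0, 10.0, 150), rng.gamma(2.0, 8.0, 150))      # hypothetical flash-flood events
synoptic = (rng.gamma(3.0, 12.0, 300), rng.gamma(3.0, 15.0, 300))  # hypothetical synoptic events
print(f"Copula distance (flash vs synoptic): {copula_distance(flash, synoptic):.5f}")
```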


2016
Vol 46 (4)
pp. 245-268
Author(s):
Silvia Kohnová
Ladislav Gaál
Tomáš Bacigál
Ján Szolgay
Kamila Hlavčová
...  

Abstract The case study aims at selecting optimal bivariate copula models of the relationships between flood peaks and flood volumes from a regional perspective, with a particular focus on flood generation processes. Besides the traditional approach that deals with the annual maxima of flood events, the current analysis also includes all independent flood events. The target region is located in the northwest of Austria; it consists of 69 small and mid-sized catchments. On the basis of the hourly runoff data from the period 1976–2007, independent flood events were identified and assigned to one of three flood categories: synoptic floods, flash floods and snowmelt floods. Flood events in a given catchment are considered independent when they originate from different synoptic situations. Nine commonly used copula types were fitted to the flood peak-flood volume pairs at each site. In this step, two databases were used: i) a process-based selection of all the independent flood events (three data samples at each catchment) and ii) the annual maxima of the flood peaks and the respective flood volumes regardless of the flood processes (one data sample per catchment). The goodness-of-fit of the nine copula types was examined on a regional basis across all the catchments. It was concluded that (1) the copula models for the flood processes are discernible locally; (2) the Clayton copula provides an unacceptable performance for all three processes as well as in the case of the annual maxima; (3) the rejection of the other copula types depends on the flood type and the sample size; (4) there are differences in the copulas with the best fits: for synoptic and flash floods, the best performance is associated with the extreme value copulas; for snowmelt floods, the Frank copula fits best; while in the case of the annual maxima, no firm conclusion could be drawn due to the number of copulas with similarly acceptable overall performances. The general conclusion from this case study is that treating flood processes separately is beneficial; however, the sample size usually available in such real-life studies is not sufficient to give generally valid recommendations for engineering design tasks.
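For reference, the standard bivariate forms of three of the copula families compared above (Clayton, Gumbel-Hougaard and Frank), together with their Kendall's tau relations, can be written as follows; D_1 denotes the first Debye function.

```latex
\begin{align*}
  C^{\mathrm{Clayton}}_{\theta}(u,v) &= \left(u^{-\theta} + v^{-\theta} - 1\right)^{-1/\theta},
    & \tau &= \frac{\theta}{\theta + 2},\\
  C^{\mathrm{Gumbel}}_{\theta}(u,v) &= \exp\!\left\{-\left[(-\ln u)^{\theta} + (-\ln v)^{\theta}\right]^{1/\theta}\right\},
    & \tau &= 1 - \frac{1}{\theta},\\
  C^{\mathrm{Frank}}_{\theta}(u,v) &= -\frac{1}{\theta}\,
    \ln\!\left(1 + \frac{\left(e^{-\theta u}-1\right)\left(e^{-\theta v}-1\right)}{e^{-\theta}-1}\right),
    & \tau &= 1 - \frac{4}{\theta}\left[1 - D_{1}(\theta)\right].
\end{align*}
```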


2011
Vol 42 (2-3)
pp. 193-216
Author(s):
Hemant Chowdhary
Luis A. Escobar
Vijay P. Singh

Multivariate flood frequency analysis, involving flood peak flow, volume and duration, has traditionally been accomplished by employing available functional bivariate and multivariate frequency distributions, which restrict the marginals to be from the same family of distributions. The copula concept overcomes this restriction by allowing a combination of arbitrarily chosen marginal types. It also provides a wider choice of admissible dependence structures compared with the conventional approach. The availability of a vast variety of copula types makes the selection of an appropriate copula family for different hydrological applications a non-trivial task. Graphical and analytical goodness-of-fit tests for assessing the suitability of copulas are still being developed, and there is limited experience with their use at present, especially in the hydrological field. This paper provides a step-wise procedure for copula selection and illustrates its application to bivariate flood frequency analysis involving flood peak flow and volume data. Several graphical procedures, tail dependence characteristics, and formal goodness-of-fit tests involving a parametric bootstrap-based technique are considered while investigating the relative applicability of six copula families. The Clayton copula is identified as a valid model for the particular flood peak flow and volume data set considered in the study.
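A hedged sketch of a parametric-bootstrap goodness-of-fit test of the kind mentioned above is given below, using the Clayton copula that the study retains. The Cramér-von Mises statistic compares the empirical copula with the fitted Clayton copula, and its null distribution is approximated by resampling from the fitted model; data, estimator and bootstrap size are illustrative assumptions, not the paper's exact settings.

```python
# Hedged sketch of a parametric-bootstrap Cramer-von Mises goodness-of-fit test
# for the Clayton copula. Data and bootstrap size are illustrative placeholders.
import numpy as np
from scipy.stats import kendalltau, rankdata

def pseudo_obs(x, y):
    """Rank-based pseudo-observations in (0, 1)."""
    n = len(x)
    return rankdata(x) / (n + 1), rankdata(y) / (n + 1)

def clayton_cdf(u, v, theta):
    return (u ** (-theta) + v ** (-theta) - 1.0) ** (-1.0 / theta)

def clayton_sample(n, theta, rng):
    """Conditional-inversion sampler for the Clayton copula."""
    u, w = rng.uniform(size=n), rng.uniform(size=n)
    v = ((w ** (-theta / (1.0 + theta)) - 1.0) * u ** (-theta) + 1.0) ** (-1.0 / theta)
    return u, v

def cvm_statistic(u, v, theta):
    """S_n: squared distance between the empirical and the fitted parametric copula."""
    emp = np.array([np.mean((u <= ui) & (v <= vi)) for ui, vi in zip(u, v)])
    return float(np.sum((emp - clayton_cdf(u, v, theta)) ** 2))

def gof_clayton(peaks, volumes, n_boot=200, seed=0):
    rng = np.random.default_rng(seed)
    u, v = pseudo_obs(peaks, volumes)
    tau, _ = kendalltau(peaks, volumes)
    theta = 2.0 * tau / (1.0 - tau)            # moment estimate; Clayton: tau = theta/(theta+2)
    s_obs = cvm_statistic(u, v, theta)
    s_boot = np.empty(n_boot)
    for b in range(n_boot):
        ub, vb = clayton_sample(len(peaks), theta, rng)
        tb, _ = kendalltau(ub, vb)
        s_boot[b] = cvm_statistic(*pseudo_obs(ub, vb), 2.0 * tb / (1.0 - tb))
    return s_obs, float(np.mean(s_boot >= s_obs))   # statistic and bootstrap p-value
```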


Author(s):  
Ján Szolgay
Ladislav Gaál
Tomáš Bacigál
Silvia Kohnová
Kamila Hlavčová
...  

Abstract. Recent research on bivariate flood peak/volume frequency analysis has mainly focused on the statistical aspects of the use of various copula models. The interplay of climatic and catchment processes in discriminating among these models has attracted less interest. In this paper we analyse the influence of climatic and hydrological controls on flood peak-volume relationships and their models, based on the concept of comparative hydrology, in the catchments of a selected region in Austria. Independent flood events have been isolated and assigned to one of three flood process types: synoptic floods, flash floods and snowmelt floods. First, empirical copulas are regionally compared in order to verify whether any flood processes are discernible in terms of the corresponding bivariate flood peak-volume relationships. Next, copula types frequently used in hydrology are fitted, and their goodness-of-fit is examined at the regional scale. The spatial similarity of the copulas and their rejection rates, depending on flood type, region and sample size, are also examined. In particular, the most remarkable difference is observed between flash floods and the other two flood types. It is concluded that treating flood processes separately in such an analysis is beneficial, both hydrologically and statistically, since flood processes and the relationships associated with them are discernible both locally and regionally in the pilot region. However, uncertainties inherent in the copula-based bivariate frequency analysis itself (caused, among other factors, by the relatively small sample sizes available for consistent copula model selection, upper tail dependence characterization and reliable predictions) may not be overcome within the scope of such a regional comparative analysis.
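The upper tail dependence characterization mentioned above can be sketched with a simple empirical estimator of the conditional exceedance probability; the threshold choice is itself a source of the sample-size uncertainty noted in the abstract, and all data below are placeholders.

```python
# Illustrative sketch: empirical estimate of upper tail dependence,
# lambda_U = lim_{t -> 1} P(V > t | U > t), from peak-volume pseudo-observations.
import numpy as np
from scipy.stats import rankdata

def upper_tail_dependence(peaks, volumes, t=0.9):
    """Empirical conditional exceedance estimate at threshold t."""
    n = len(peaks)
    u = rankdata(peaks) / (n + 1)
    v = rankdata(volumes) / (n + 1)
    joint = np.mean((u > t) & (v > t))        # P(U > t, V > t)
    return joint / (1.0 - t)                  # divide by P(U > t) ~ 1 - t

rng = np.random.default_rng(7)
peaks = rng.gumbel(30.0, 10.0, 500)                      # hypothetical flood peaks
volumes = peaks * rng.lognormal(0.0, 0.3, 500)           # hypothetical dependent volumes
print(f"lambda_U estimate at t=0.9: {upper_tail_dependence(peaks, volumes):.3f}")
```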


Mathematics
2021
Vol 9 (7)
pp. 788
Author(s):  
Jurgita Arnastauskaitė
Tomas Ruzgas
Mindaugas Bražėnas

A goodness-of-fit test is a frequently used tool in modern statistics. However, it is still unclear what the most reliable approach is for checking assumptions about the normality of a data set. A particular data set (especially one with a small number of observations) only partly describes the underlying process, which leaves many options for the interpretation of its true distribution. As a consequence, many goodness-of-fit statistical tests have been developed, whose power depends on particular circumstances (i.e., sample size, outliers, etc.). With the aim of developing a more universal goodness-of-fit test, we propose an approach based on an N-metric with our chosen kernel function. To compare the power of 40 normality tests, the goodness-of-fit hypothesis was tested for 15 data distributions with 6 different sample sizes. Based on exhaustive comparative research results, we recommend the use of our test for samples of size .
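The kind of Monte Carlo power comparison described above can be sketched as follows: the rejection rate of one classical normality test (Shapiro-Wilk, as one of the many candidates) is estimated against a non-normal alternative at several sample sizes. The proposed N-metric kernel test itself is not reproduced here; the alternative distribution and replication count are assumptions for illustration.

```python
# Minimal sketch of a Monte Carlo power estimate for a normality test.
import numpy as np
from scipy import stats

def power_of_test(test, sampler, n, alpha=0.05, n_rep=2000, seed=0):
    """Fraction of replications in which `test` rejects H0: the data are normal."""
    rng = np.random.default_rng(seed)
    rejections = 0
    for _ in range(n_rep):
        x = sampler(rng, n)
        if test(x).pvalue < alpha:
            rejections += 1
    return rejections / n_rep

# Non-normal alternative: a skewed lognormal distribution (placeholder choice)
lognormal = lambda rng, n: rng.lognormal(0.0, 1.0, n)
for n in (20, 50, 100):
    print(f"n = {n}: estimated power = {power_of_test(stats.shapiro, lognormal, n):.3f}")
```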


Author(s):  
Michael Schatz
Joachim Jäger
Marin van Heel

Lumbricus terrestris erythrocruorin is a giant oxygen-transporting macromolecule in the blood of the common earthworm (worm "hemoglobin"). In our current study, we use specimens (kindly provided by Drs W.E. Royer and W.A. Hendrickson) embedded in vitreous ice (1) to avoid artefacts encountered with the negative stain preparation technique used in previous studies (2-4). Although the molecular structure is well preserved in vitreous ice, the low contrast and high noise level in the micrographs represent a serious problem for image interpretation. Moreover, in this type of preparation the molecules can exhibit many different orientations relative to the object plane of the microscope. Existing analysis techniques, which require alignment of the molecular views relative to one or more reference images, thus often yield unsatisfactory results. We use a new method in which rotation-, translation- and mirror-invariant functions (5) are first derived from the large set of input images; these functions are subsequently classified automatically using multivariate statistical techniques (6). The different molecular views in the data set can thereby be found without bias (5). Within each class, all images are aligned relative to the member of the class that contributes least to the class's internal variance (6). This reference image is thus the most typical member of the class. Finally, the aligned images from each class are averaged, resulting in molecular views with enhanced statistical resolution.
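A hedged illustration of the general workflow (not the authors' exact invariant functions or classification method) is sketched below: a translation-invariant representation of each particle image is taken from the magnitude of its Fourier transform, the representations are classified by PCA followed by k-means, and per-class averages are formed. Image data and class counts are placeholders.

```python
# Illustrative sketch: invariant features -> multivariate classification -> class averages.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
images = rng.normal(size=(500, 64, 64))          # placeholder stack of noisy particle images

# Translation-invariant features: |FFT| is unchanged by in-plane shifts
features = np.abs(np.fft.fft2(images)).reshape(len(images), -1)

# Multivariate statistical classification: reduce dimensionality, then cluster
scores = PCA(n_components=20).fit_transform(features)
labels = KMeans(n_clusters=8, n_init=10, random_state=0).fit_predict(scores)

# Class averages: the mean image per class enhances the signal-to-noise ratio
class_averages = np.stack([images[labels == k].mean(axis=0) for k in range(8)])
print(class_averages.shape)
```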


2014
Vol 13 (1)
Author(s):  
Konrad Nering

Abstract This paper describes a fully functional short-term flood prediction system. Its performance has been tested on the watershed of the Lubieńka River in Małopolska. The system requires a data set, which is also described in this paper. A modification of the system adapting it for flash flood prediction is also described. The full operation of the system is demonstrated on the example of a real flood on the Lubieńka River in June 2011.


Author(s):  
Michael S. Danielson

The first empirical task is to identify the characteristics of municipalities which US-based migrants have come together to support financially. Using a nationwide, municipal-level data set compiled by the author, the chapter estimates several multivariate statistical models to compare municipalities that did not benefit from the 3x1 Program for Migrants with those that did, and seeks to explain variation in the number and value of 3x1 projects. The analysis shows that migrants are more likely to contribute where migrant civil society has become more deeply institutionalized at the state level and in municipalities with longer histories as migrant-sending communities. Furthermore, the results suggest that political factors are at play, as projects have disproportionately benefited states and municipalities where the PAN had a stronger presence, with fewer occurring elsewhere.
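A purely hypothetical sketch of the kind of multivariate count model described above is given below: a Poisson regression of the number of 3x1 projects per municipality on migration-history and political covariates. All variable names and data are placeholders, not the author's data set or specification.

```python
# Hypothetical count-model sketch; covariate names and data are placeholders.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "n_projects": rng.poisson(2, 500),                      # outcome: count of 3x1 projects
    "migrant_civil_society_index": rng.normal(size=500),    # hypothetical covariate
    "years_as_sending_municipality": rng.integers(0, 60, 500),
    "pan_vote_share": rng.uniform(0, 1, 500),
})

X = sm.add_constant(df[["migrant_civil_society_index",
                        "years_as_sending_municipality",
                        "pan_vote_share"]])
poisson_fit = sm.GLM(df["n_projects"], X, family=sm.families.Poisson()).fit()
print(poisson_fit.summary())
```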


2019
Vol 73 (8)
pp. 893-901
Author(s):  
Sinead J. Barton
Bryan M. Hennelly

Cosmic ray artifacts may be present in all photo-electric readout systems. In spectroscopy, they present as random unidirectional sharp spikes that distort spectra and may have an effect on post-processing, possibly affecting the results of multivariate statistical classification. A number of methods have previously been proposed to remove cosmic ray artifacts from spectra, but the goal of removing the artifacts while making no other change to the underlying spectrum is challenging. One of the most successful and commonly applied methods for the removal of cosmic ray artifacts involves the capture of two sequential spectra that are compared in order to identify spikes. The disadvantage of this approach is that at least two recordings are necessary, which may be problematic for dynamically changing spectra, and which can reduce the signal-to-noise (S/N) ratio when compared with a single recording of equivalent duration due to the inclusion of two instances of read noise. In this paper, a cosmic ray artifact removal algorithm is proposed that works in a similar way to the double acquisition method but requires only a single capture, so long as a data set of similar spectra is available. The method employs normalized covariance in order to identify a similar spectrum in the data set, from which a direct comparison reveals the presence of cosmic ray artifacts, which are then replaced with the corresponding values from the matching spectrum. The advantage of the proposed method over the double acquisition method is investigated in the context of the S/N ratio, and the method is applied to various data sets of Raman spectra recorded from biological cells.
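The single-capture approach described above can be sketched, under simplifying assumptions, as follows: the most similar spectrum in a reference set is found by normalized covariance (here the Pearson correlation coefficient), spikes are flagged where the target exceeds that match by a robust threshold, and the flagged samples are replaced by the matching values. The thresholding rule and data are illustrative assumptions, not the paper's exact implementation.

```python
# Sketch: single-capture cosmic ray spike removal using a matching reference spectrum.
import numpy as np

def remove_cosmic_spikes(target, reference_set, k=5.0):
    """Replace spike samples in `target` using the best-matching reference spectrum."""
    # Normalized covariance (correlation coefficient) against every reference spectrum
    corrs = [np.corrcoef(target, ref)[0, 1] for ref in reference_set]
    match = reference_set[int(np.argmax(corrs))]

    # Spikes: positive residuals exceeding k robust standard deviations (MAD-based)
    residual = target - match
    mad = np.median(np.abs(residual - np.median(residual)))
    spikes = residual > k * 1.4826 * mad

    cleaned = target.copy()
    cleaned[spikes] = match[spikes]          # substitute values from the matching spectrum
    return cleaned, spikes
```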

