A practical, objective, robust technique to directly estimate time of concentration

Author(s):  
Giulia Giani ◽  
Miguel Angel Rico-Ramirez ◽  
Ross Woods

Time of concentration is one of the key time variables in hydrology and is essential for hydrograph design and hydrological modelling. Uncertainty in its estimation can cause errors in the peak discharge rate and timing of flood events.

A single recognized definition and estimation methodology is lacking, and the multiple definitions and estimation procedures available in the literature can give numerical predictions that differ by up to 500% (Grimaldi et al., 2012). This result is not surprising given the high subjectivity of the method traditionally used to directly estimate time of concentration, which is also used to calibrate the widely applied empirical formulae.

Given the importance of this time parameter in hydrology and the lack of a recognized, easily reproducible procedure for its estimation, here we propose a practical, objective, robust methodology to directly estimate time of concentration from rainfall and streamflow observations only. It is a timeseries analysis technique already used in Economics (Kristoufek, 2014), which we have adapted to estimate time of concentration.

Compared to the traditionally used method, which is event based and requires hyetograph and hydrograph separation, the proposed methodology is designed to find the time delay from the original continuous timeseries, but it can also be applied to individual events by creating a timeseries of copies of the same event.

First, the median of the time of concentration distribution obtained with the proposed methodology was evaluated against that obtained with the traditional method in 79 catchments across the UK, showing that at most sites the estimates from the two methods are very similar (correlation of 0.82). This means that it is possible to avoid separating the hydrograph, a highly subjective procedure required by the traditional method.

Second, we show that, when the proposed methodology alone is considered, the time of concentration estimate obtained from the continuous timeseries of each catchment shows only a small discrepancy from the median of the distribution of single-event estimates (correlation of 0.94). Therefore, rainfall-streamflow event selection is not necessary, and a reliable estimate of time of concentration can be obtained by applying the proposed methodology to the continuous timeseries at once, reducing the computational cost.

The proposed timeseries analysis technique is easy to automate and reproducible, and it makes it possible to objectively compare time of concentration estimates across all catchments where the resolution of the rainfall and streamflow timeseries is high enough to capture the runoff process.
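As an illustration of the kind of detrending moving-average cross-correlation (DMCA) scan the abstract refers to, here is a minimal Python sketch. It assumes the response time is read from the moving-average window length that minimises the DMCA correlation coefficient, with the function names, window range, and the (L*-1)/2 conversion being illustrative choices rather than the exact published procedure.

```python
import numpy as np

def dmca_rho(rain, flow, window):
    """DMCA-based correlation coefficient for one (odd) window length."""
    # Profiles (cumulative sums) of the two series
    x = np.cumsum(rain)
    y = np.cumsum(flow)
    # Centred moving averages act as the local trends
    kernel = np.ones(window) / window
    x_trend = np.convolve(x, kernel, mode="valid")
    y_trend = np.convolve(y, kernel, mode="valid")
    half = (window - 1) // 2
    x_fluct = x[half:len(x) - half] - x_trend
    y_fluct = y[half:len(y) - half] - y_trend
    # Bivariate fluctuation normalised by the univariate ones
    return np.mean(x_fluct * y_fluct) / np.sqrt(np.mean(x_fluct**2) * np.mean(y_fluct**2))

def estimate_response_time(rain, flow, max_window=199):
    """Scan odd window lengths; the minimising window gives the time delay (in timesteps)."""
    windows = np.arange(3, max_window + 1, 2)  # must stay well below the series length
    rhos = np.array([dmca_rho(rain, flow, w) for w in windows])
    w_star = windows[np.argmin(rhos)]
    return (w_star - 1) // 2
```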

2006 ◽  
Vol 88 (6) ◽  
pp. 540-542 ◽  
Author(s):  
TDA Cosker ◽  
A Ghandour ◽  
T Naresh ◽  
K Visvakumar ◽  
SR Johnson

INTRODUCTION A consultant-led service for trauma in the UK has become the accepted norm. Practice in fracture clinics may vary widely between consultants and has an impact on the number of patients seen and, therefore, the time devoted to each patient. PATIENTS AND METHODS A total of 945 patients attending our unit's fracture clinics were analysed over a 6-week period, representing one complete cycle of our trauma system. RESULTS The overall discharge rate was 38% but this differed significantly between consultants. Patients re-presenting for the same complaint were evenly distributed between those discharging aggressively and those re-reviewing regularly. CONCLUSIONS Re-reviewing patients has a significant impact on the number of patients seen in future clinics and, therefore, the time that can be devoted to each patient, individual consultant workload and teaching of junior staff. Since the re-presentation rate between those discharging aggressively and those re-reviewing more frequently was the same, discharge protocols are recommended for common trauma conditions to standardise the process. Specialist clinics are recommended for more complex trauma cases.


Water ◽  
2018 ◽  
Vol 10 (12) ◽  
pp. 1779 ◽  
Author(s):  
Weilin Xu ◽  
Chunqi Chen ◽  
Wangru Wei

There is a lack of knowledge on the air concentration distribution in plunge pools affected by aerated jets. A set of physical experiments was performed on vertical submerged aerated jet flows impinging on a plunge pool. The air concentration distribution in the plunge pool was analyzed under different inflow air concentrations, flow velocities, and discharge rate conditions. The experimental results show that the air concentration distribution follows a power law along the jet axis, and that it is independent of the initial flow conditions. A new hypothetical analysis model was proposed for air diffusion in the plunge pool, in which the air concentration distribution in the plunge pool is the superposition of the lateral diffusion from three stages of the aerated jet motion. A set of formulas was proposed to predict the air concentration distribution in the plunge pool, the results of which showed good agreement with the experimental data.
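As a small illustration of checking the reported power-law behaviour along the jet axis, the following snippet fits C(z) = a·z^b by least squares in log-log space; the depth and concentration values are placeholders, not data from the study.

```python
import numpy as np

# Hypothetical measurements: depth below the impingement point (m) and air concentration (-)
depth = np.array([0.05, 0.10, 0.20, 0.40, 0.80])
conc = np.array([0.30, 0.21, 0.15, 0.10, 0.07])

# Fit C = a * z**b by linear least squares in log-log space
b, log_a = np.polyfit(np.log(depth), np.log(conc), 1)
a = np.exp(log_a)
print(f"C(z) ~ {a:.3f} * z^{b:.2f}")
```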


2011 ◽  
Vol 4 (5) ◽  
pp. 965-972 ◽  
Author(s):  
A. E. Bourassa ◽  
C. A. McLinden ◽  
C. E. Sioris ◽  
S. Brohede ◽  
A. F. Bathgate ◽  
...  

Abstract. The feasibility of retrieving vertical profiles of NO2 from space-based measurements of limb scattered sunlight has been demonstrated using several different data sets since the 1980s. The NO2 data product routinely retrieved from measurements made by the Optical Spectrograph and InfraRed Imaging System (OSIRIS) instrument onboard the Odin satellite uses a spectral fitting technique over the 437 to 451 nm range, over which there are 36 individual wavelength measurements. In this work we present a proof of concept technique for the retrieval of NO2 using only 4 of the 36 OSIRIS measurements in this wavelength range, which reduces the computational cost by almost an order of magnitude. The method is an adaptation of a triplet analysis technique that is currently used for the OSIRIS retrievals of ozone at Chappuis band wavelengths. The results obtained are shown to be in very good agreement with the spectral fit method, and provide an important alternative for applications where the computational burden is very high. Additionally this provides a baseline for future instrument design in terms of cost effectiveness and reducing spectral range requirements.
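To make the wavelength-combination idea concrete, here is a schematic Python sketch of a generic triplet/vector-type measurement: a strongly absorbing wavelength is differenced against reference wavelengths and normalised to a high tangent altitude. The indices, the weighting (a simple mean of the references), and the normalisation choice are assumptions for illustration and do not reproduce the OSIRIS implementation.

```python
import numpy as np

def triplet_vector(radiance, i_abs, i_refs, i_norm_alt):
    """
    Schematic wavelength-combination measurement vector for limb scatter.
    radiance: array (n_tangent_altitudes, n_wavelengths) of limb radiances
    i_abs: column index of the strongly absorbing wavelength
    i_refs: column indices of weakly absorbing reference wavelengths
    i_norm_alt: row index of a high tangent altitude used for normalisation
    """
    log_rad = np.log(radiance)
    # Difference the absorbing wavelength against the mean of the references
    y = log_rad[:, i_abs] - log_rad[:, i_refs].mean(axis=1)
    # Normalise to the high-altitude measurement to remove instrument constants
    return y - y[i_norm_alt]
```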


2020 ◽  
Author(s):  
Anna E. Sikorska-Senoner ◽  
Bettina Schaefli ◽  
Jan Seibert

Abstract. For extreme flood estimation, simulation-based approaches represent an interesting alternative to purely statistical approaches, particularly if hydrograph shapes are required. Such simulation-based methods are adapted within continuous simulation frameworks that rely on statistical analyses of continuous streamflow time series derived from a hydrologic model fed with long precipitation time series. These frameworks are, however, affected by high computational demands, particularly if floods with return periods > 1000 years are of interest or if modelling uncertainty due to different sources (meteorological input or hydrologic model) is to be quantified. Here, we propose three methods for reducing the computational requirements for the hydrological simulations for extreme flood estimation, so that long streamflow time series can be analysed at a reduced computational cost. These methods rely on simulation of annual maxima and on analyzing their simulated range to downsize the hydrological parameter ensemble to a small number suitable for continuous simulation frameworks. The methods are tested in a Swiss catchment with 10 000 years of synthetic streamflow data simulated with a weather generator. Our results demonstrate the reliability of the proposed downsizing methods for robust simulations of extreme floods with uncertainty. The methods are readily transferable to other situations where ensemble simulations are needed.
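A minimal sketch of the downsizing idea is given below: each hydrological parameter set is reduced to a statistic of its simulated annual maxima, and a handful of sets is kept at chosen quantiles of the ensemble. The selection rule (medians matched to fixed quantiles) and the synthetic Gumbel data are illustrative assumptions, not the criteria used in the paper.

```python
import numpy as np

def downsize_ensemble(annual_maxima, quantiles=(0.05, 0.5, 0.95)):
    """
    annual_maxima: array (n_param_sets, n_years) of simulated annual maximum flows.
    Returns indices of the parameter sets whose median annual maximum sits closest
    to the chosen quantiles of the ensemble of medians.
    """
    medians = np.median(annual_maxima, axis=1)
    targets = np.quantile(medians, quantiles)
    return [int(np.argmin(np.abs(medians - t))) for t in targets]

# Hypothetical use: 100 parameter sets, 10 000 synthetic years each
rng = np.random.default_rng(0)
maxima = rng.gumbel(loc=50, scale=10 + rng.random((100, 1)) * 20, size=(100, 10_000))
subset = downsize_ensemble(maxima)  # e.g. three representative parameter sets
```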


2021 ◽  
Author(s):  
Giulia Giani ◽  
Miguel Angel Rico-Ramirez ◽  
Ross Woods

A widely accepted, objective methodology to select individual rainfall-streamflow events is missing, and this makes it difficult to synthesize findings from independent research initiatives. In fact, the selection of individual events is a fundamental step in many hydrological studies, but the importance and impact of the choices made at this stage are largely unrecognised.

The event selection methods found in the literature start by looking at either the rainfall timeseries or the streamflow timeseries. Moreover, most of the methodologies involve hydrograph separation, which is a highly uncertain step and can be performed using many different algorithms. Further increasing the subjectivity of the procedure, a wide range of ad hoc conditions is usually applied (e.g. peak-over-threshold, minimum duration of rainfall event, minimum duration of dry spell, minimum rainfall intensity, …).

For these reasons, we present a new methodology to extract rainfall-streamflow events which minimizes the conceptual hypotheses and user choices, and which bases the identification of events mainly on the joint fluctuations of the two signals. The proposed methodology builds upon a timeseries analysis technique used to estimate catchment response time, the Detrending Moving-average Cross-correlation Analysis-based method.

The proposed method has the advantage of looking simultaneously at the evolution in time of the rainfall and streamflow timeseries, providing a more systemic detection of events. Moreover, the presented method can easily be adapted to extract events at different time resolutions (provided the resolution is fine enough to capture the delay between the rainfall and streamflow responses).

Properties of the events extracted with the proposed method are compared with those of the events extracted with the most traditional approach (based on hydrograph separation), to show the strengths and weaknesses of the two techniques and to suggest in which situations the proposed method can be most useful.
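The sketch below illustrates one way such joint-fluctuation event detection could look in Python: both series are detrended with a centred moving average of their cumulative profiles, and candidate events are taken as spans where the two fluctuation signals are simultaneously positive. This pairing rule is an illustrative assumption; the published method's rules for matching, shifting, and trimming events differ in detail.

```python
import numpy as np

def fluctuations(series, window):
    """Deviation of the cumulative series from its centred moving average."""
    profile = np.cumsum(series)
    kernel = np.ones(window) / window
    trend = np.convolve(profile, kernel, mode="valid")
    half = (window - 1) // 2
    return profile[half:len(profile) - half] - trend

def candidate_events(rain, flow, window):
    """Spans where rainfall and streamflow fluctuations are jointly positive."""
    joint = (fluctuations(rain, window) > 0) & (fluctuations(flow, window) > 0)
    # Convert the boolean mask into (start, end) index pairs
    edges = np.diff(joint.astype(int))
    starts = np.where(edges == 1)[0] + 1
    ends = np.where(edges == -1)[0] + 1
    if joint[0]:
        starts = np.r_[0, starts]
    if joint[-1]:
        ends = np.r_[ends, joint.size]
    return list(zip(starts, ends))
```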


Author(s):  
M.J. Anderson

The colonization of microscopic organisms, commonly called a biofilm, was examined on fibreglass panels situated intertidally at Quibray Bay of Botany Bay in New South Wales, Australia. Panels were examined by incident light microscopy, measuring percentage cover, and by a computer image analysis technique, measuring optical density. Optical density was positively correlated with and was therefore a reliable estimate of total percentage cover of the biofilm. Optical density has not been used before in this application and, although some drawbacks are discussed, it is a much more efficient sampling method than microscopic examination of panels.


2014 ◽  
Vol 142 (10) ◽  
pp. 3713-3733 ◽  
Author(s):  
Xinrong Wu ◽  
Wei Li ◽  
Guijun Han ◽  
Shaoqing Zhang ◽  
Xidong Wang

Abstract While fixed covariance localization can greatly increase the reliability of the background error covariance in filtering by suppressing the long-distance spurious correlations evaluated by a finite ensemble, it may degrade the assimilation quality in an ensemble Kalman filter (EnKF) as a result of restricted longwave information. Tuning an optimal cutoff distance is usually very expensive and time consuming, especially for a general circulation model (GCM). Here the authors present an approach to compensate for this shortcoming of fixed localization. At each analysis step, after the standard EnKF is done, a multiple-scale analysis technique is used to extract longwave information from the observational residual (relative to the EnKF ensemble mean). Within a biased twin-experiment framework consisting of a global barotropic spectral model and an idealized observing system, the performance of the new method is examined. Compared to a standard EnKF, the hybrid method is superior when an overly small/large cutoff distance is used, and it has less dependence on cutoff distance. The new scheme is also able to improve short-term weather forecasts, especially when an overly large cutoff distance is used. Sensitivity studies show that caution should be taken when the new scheme is applied to a dense observing system with an overly small cutoff distance in filtering. In addition, the new scheme has a nearly equivalent computational cost to the standard EnKF; thus, it is particularly suitable for GCM applications.
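As a toy illustration of the compensation step, the snippet below applies a single broad Gaussian filter to the observation-minus-analysis residual and adds the smoothed (longwave) part back to the EnKF analysis mean. It assumes observations at every grid point of a 1-D periodic domain and stands in for the full multiple-scale analysis, whose details are not reproduced here.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def longwave_correction(analysis_mean, obs, sigma=20.0):
    """
    Add back large-scale information filtered out by tight localization.
    analysis_mean: EnKF analysis ensemble mean on a 1-D periodic grid
    obs: observations, assumed here to exist at every grid point (dense-network toy case)
    sigma: smoothing scale in grid points, selecting the longwave part of the residual
    """
    # Observation-space residual relative to the EnKF analysis mean
    residual = obs - analysis_mean
    # Keep only the large-scale (longwave) part of the residual field
    longwave = gaussian_filter1d(residual, sigma=sigma, mode="wrap")
    return analysis_mean + longwave
```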


2010 ◽  
Vol 3 (6) ◽  
pp. 5499-5519 ◽  
Author(s):  
A. E. Bourassa ◽  
C. A. McLinden ◽  
C. E. Sioris ◽  
S. Brohede ◽  
E. J. Llewellyn ◽  
...  

Abstract. The feasibility of retrieving vertical profiles of NO2 from space-based measurements of limb scattered sunlight has been demonstrated using several different data sets since the 1980s. The NO2 data product routinely retrieved from measurements made by the Optical Spectrograph and InfraRed Imaging System (OSIRIS) instrument onboard the Odin satellite uses a spectral fitting technique over the 437 to 451 nm range, over which there are 36 individual wavelength measurements. In this work we present a proof of concept technique for the retrieval of NO2 using only 4 of the 36 OSIRIS measurements in this wavelength range, which reduces the computational cost by almost an order of magnitude. The method is an adaptation of a triplet analysis technique that is currently used for the OSIRIS retrievals of ozone at Chappuis band wavelengths. The results obtained are shown to be in very good agreement with the spectral fit method, and provide an important alternative for two dimensional tomographic algorithms where the computational burden is very high. Additionally this provides a baseline for future instrument design in terms of cost effectiveness and boosting signal to noise by reducing spectral resolution requirements.


2018 ◽  
Author(s):  
Alex Diaz-Papkovich ◽  
Luke Anderson-Trocmé ◽  
Simon Gravel

Abstract. Genetic structure in large cohorts results from technical, sampling and demographic variation. Visualisation is therefore a first step in most genomic analyses. However, existing data exploration methods struggle with unbalanced sampling and the many scales of population structure. We investigate an approach to dimension reduction of genomic data that combines principal components analysis (PCA) with uniform manifold approximation and projection (UMAP) to succinctly illustrate population structure in large cohorts and capture their relationships on local and global scales. Using data from large-scale genomic datasets, we demonstrate that PCA-UMAP effectively clusters closely related individuals while placing them in a global continuum of genetic variation. This approach reveals previously overlooked subpopulations within the American Hispanic population and fine-scale relationships between geography, genotypes, and phenotypes in the UK population. This opens new lines of investigation for demographic research and statistical genetics. Given its small computational cost, PCA-UMAP also provides a general-purpose approach to exploratory analysis in population-scale datasets.

Author summary. Because of geographic isolation, individuals tend to be more genetically related to people living nearby than to people living far away. This is an example of population structure, a situation where a large population contains subgroups that share more than the average amount of DNA. This structure can tell us about human history, and it can also have a large effect on medical studies. We use a newly developed method (UMAP) to visualize population structure from three genomic datasets. Using genotype data alone, we reveal numerous subgroups related to ancestry and correlated with traits such as white blood cell count, height, and FEV1, a measure used to detect airway obstruction. We demonstrate that UMAP reveals previously unobserved patterns and fine-scale structure. We show that visualizations work especially well in large datasets containing populations with diverse backgrounds, which are rapidly becoming more common, and that, unlike other visualization methods, we can preserve intuitive connections between populations that reflect their shared ancestries. The combination of these results and the effectiveness of the strategy on large and diverse datasets makes this an important approach for exploratory analysis for geneticists studying ancestral events and phenotype distributions.
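For readers who want to try the pipeline, a minimal PCA-UMAP sketch in Python is shown below (using scikit-learn and the umap-learn package). The genotype matrix is random placeholder data, and the number of principal components, neighbours, and min_dist values are illustrative settings, not those used in the study.

```python
import numpy as np
from sklearn.decomposition import PCA
import umap  # provided by the umap-learn package

# Placeholder genotype matrix: (n_individuals, n_variants) allele counts in {0, 1, 2}
rng = np.random.default_rng(42)
genotypes = rng.integers(0, 3, size=(1_000, 5_000)).astype(float)

# Step 1: PCA down to a modest number of leading components (15 is an arbitrary choice)
pcs = PCA(n_components=15).fit_transform(genotypes)

# Step 2: UMAP on the PC scores for a 2-D visualisation of population structure
embedding = umap.UMAP(n_neighbors=15, min_dist=0.5, random_state=42).fit_transform(pcs)
```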


Geophysics ◽  
2018 ◽  
Vol 83 (3) ◽  
pp. R213-R226 ◽  
Author(s):  
Sérgio A. M. Oliveira ◽  
Igor L. S. Braga ◽  
Murillo B. Lacerda ◽  
Geovane F. Ouverney ◽  
Anderson W. P. de Franco

We have developed the amplitude versus angle full-waveform inversion (AVA-FWI) method. This method considers the complete seismic response of the layered medium, and so it is capable of correctly handling seismic amplitudes from prestack data with a wide angle range. This capability is very important because a reliable estimate of the elastic parameters and the density requires an incidence angle that goes beyond 30°. Our method inputs seismic traces from prestack time-migrated gathers ordered by angle of incidence and works under the local 1D assumption. AVA-FWI is a nonlinear inversion based on forward modeling by the reflectivity method, which substantially increases its computational cost with respect to conventional AVA inversion. To address this problem, we developed an efficient routine for angle gather modeling and a new method for differential seismogram generation that greatly reduces the amount of computation involved in this task. The AVA-FWI method was applied to synthetic data and to a geophysical reservoir characterization case study using the North Viking Graben open data set.
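For contrast with the reflectivity-method forward modelling used in AVA-FWI, the snippet below implements the linearised Aki-Richards PP reflection coefficient on which conventional AVA inversion is commonly built. It is a deliberately simplified single-interface approximation, not the method of the paper, and the interface properties in the example are arbitrary.

```python
import numpy as np

def aki_richards_pp(vp1, vs1, rho1, vp2, vs2, rho2, theta_deg):
    """Linearised PP reflection coefficient (Aki & Richards approximation)."""
    theta = np.radians(theta_deg)
    # Interface averages and contrasts
    vp, vs, rho = (vp1 + vp2) / 2, (vs1 + vs2) / 2, (rho1 + rho2) / 2
    dvp, dvs, drho = vp2 - vp1, vs2 - vs1, rho2 - rho1
    k = (vs / vp) ** 2
    return (0.5 * (1 - 4 * k * np.sin(theta) ** 2) * drho / rho
            + dvp / (2 * vp * np.cos(theta) ** 2)
            - 4 * k * np.sin(theta) ** 2 * dvs / vs)

# Example: arbitrary two-layer interface evaluated over a 0-40 degree angle range
angles = np.arange(0, 41, 5)
refl = aki_richards_pp(3000, 1500, 2.40, 2800, 1700, 2.20, angles)
```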

