A new approach for suspended sediment load calculation based on generated flow discharge considering climate change

Author(s):  
Arash Adib ◽  
Ozgur Kisi ◽  
Shekoofeh Khoramgah ◽  
Hamid Reza Gafouri ◽  
Ali Liaghat ◽  
...  

Abstract. Use of general circulation models (GCMs) is common for forecasting hydrometric and meteorological parameters, but the uncertainty of these models is high. This study developed a new approach for calculating suspended sediment load (SSL) using historical flow discharge and SSL data of the Idanak hydrometric station on the Marun River (southwestern Iran) from 1968 to 2014. The approach derives a sediment rating relation from observed data and determines the trend of the flow discharge time series using the Mann-Kendall (MK) nonparametric trend test and the Theil-Sen approach (TSA). The SSL was then calculated for a future period from flow discharge forecasted by TSA. In addition, one hundred annual and monthly flow discharge time series (each 40 years long) were generated by the Markov chain and Monte Carlo (MC) methods, and 90% total prediction uncertainty bounds for the flow discharge series were calculated by Latin Hypercube Sampling (LHS) within the MC framework. The results indicate that flow discharge and SSL will increase in summer and decrease in spring, and that the annual SSL will decline from 2,811.15 ton/day to 1,341.25 and 962.05 ton/day in the near and far future, respectively.
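The two trend tools named in the abstract are standard and easy to sketch. The following minimal NumPy implementation (not the authors' code) computes the Mann-Kendall S statistic and the Theil-Sen slope for a discharge series:

```python
import numpy as np

def mann_kendall_s(x):
    """Mann-Kendall S statistic: the sum of signs of all pairwise
    later-minus-earlier differences. S > 0 suggests an increasing trend,
    S < 0 a decreasing one."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    s = 0
    for i in range(n - 1):
        s += int(np.sign(x[i + 1:] - x[i]).sum())
    return s

def theil_sen_slope(x):
    """Theil-Sen estimator: the median slope over all pairs of points."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    slopes = [(x[j] - x[i]) / (j - i)
              for i in range(n - 1) for j in range(i + 1, n)]
    return float(np.median(slopes))

# A strictly increasing series of length 10 has S = 10*9/2 = 45 and slope 1.
q = np.arange(10.0)
print(mann_kendall_s(q), theil_sen_slope(q))  # 45 1.0
```

In practice the S statistic is converted to a significance level via its variance, and the Theil-Sen slope extrapolates the series into the future period.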

2021 ◽  
Vol 1 (1) ◽  
pp. 13-20
Author(s):  
Meiske Shabrina Pesik ◽  
Didi Suhaedi ◽  
M. Yusuf Fajar

Abstract. The Cikeruh River is a source of water for the people who live in its watershed. A shift in land management has caused problems in the availability of water resources. To inform policy on this problem, the flow discharge of the Cikeruh River was estimated. The discharge records are monthly observations forming a time series with a seasonal pattern, so the Thomas-Fiering method is a suitable way to predict them. To estimate the discharge for 2018, 84 historical monthly records of the Cikeruh River from 2011 to 2017 were used. The resulting mean error of the Thomas-Fiering estimates for 2018 was 0.0291.
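The Thomas-Fiering model generates each month's flow from the previous month via a lag-1 regression plus a random normal component. A minimal sketch (not the authors' implementation; the month-to-month statistics and the normal noise term are the standard textbook form):

```python
import numpy as np

def thomas_fiering(monthly, n_years, seed=None):
    """Generate synthetic monthly flows with the Thomas-Fiering model.

    `monthly` is a (years x 12) array of historical flows. Each synthetic
    month is regressed on the previous month plus a random normal term."""
    rng = np.random.default_rng(seed)
    monthly = np.asarray(monthly, dtype=float)
    mean = monthly.mean(axis=0)
    std = monthly.std(axis=0, ddof=1)
    # Lag-1 correlation between each month and the following month.
    r = np.empty(12)
    for j in range(11):
        r[j] = np.corrcoef(monthly[:, j], monthly[:, j + 1])[0, 1]
    r[11] = np.corrcoef(monthly[:-1, 11], monthly[1:, 0])[0, 1]  # Dec -> next Jan

    out = np.empty((n_years, 12))
    q_prev, j_prev = mean[11], 11          # start from an average December
    for y in range(n_years):
        for j in range(12):
            b = r[j_prev] * std[j] / std[j_prev]
            noise = rng.standard_normal() * std[j] * np.sqrt(max(0.0, 1.0 - r[j_prev] ** 2))
            q = mean[j] + b * (q_prev - mean[j_prev]) + noise
            out[y, j] = max(q, 0.0)        # discharge cannot be negative
            q_prev, j_prev = q, j
    return out

# Hypothetical 30-year record with a seasonal cycle plus noise.
rng = np.random.default_rng(0)
hist = 20 + 5 * np.sin(2 * np.pi * np.arange(12) / 12) + rng.normal(0, 2, (30, 12))
synth = thomas_fiering(hist, n_years=40, seed=1)
print(synth.shape)  # (40, 12)
```

The mean error reported in the abstract would then be computed by comparing such estimates against the observed 2018 discharges.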


2018 ◽  
Vol 7 (2) ◽  
pp. 139-150 ◽  
Author(s):  
Adekunlé Akim Salami ◽  
Ayité Sénah Akoda Ajavon ◽  
Mawugno Koffi Kodjo ◽  
Seydou Ouedraogo ◽  
Koffi-Sa Bédja

In this article, we introduce a new approach based on the graphical method (GPM), maximum likelihood method (MLM), energy pattern factor method (EPFM), empirical method of Justus (EMJ), empirical method of Lysen (EML) and moment method (MOM), using the even or odd classes of the wind speed distribution histogram (with a 1 m/s bin size) to estimate the Weibull parameters. This new approach is compared on the basis of the resulting mean wind speed and its standard deviation using seven reliable statistical indicators (RPE, RMSE, MAPE, MABE, R², RRMSE and IA). The results indicate that this new approach is adequate for estimating Weibull parameters and can outperform GPM, MLM, EPFM, EMJ, EML and MOM applied to the full wind speed time series collected for one period. The study also found a linear relationship between the Weibull parameters K and C estimated by MLM, EPFM, EMJ, EML and MOM using odd- or even-class wind speed time series and those obtained by applying these methods to all classes (both even and odd bins). Another interesting feature of this approach is the reduction in data size, which leads to a reduced processing time.
Article History: Received February 16th 2018; Received in revised form May 5th 2018; Accepted May 27th 2018; Available online.
How to Cite This Article: Salami, A.A., Ajavon, A.S.A., Kodjo, M.K., Ouedraogo, S. and Bédja, K. (2018) The Use of Odd and Even Class Wind Speed Time Series of Distribution Histogram to Estimate Weibull Parameters. Int. Journal of Renewable Energy Development 7(2), 139-150. https://doi.org/10.14710/ijred.7.2.139-150
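As one concrete instance of the methods listed, the empirical method of Justus (EMJ) estimates the Weibull shape k from the coefficient of variation and the scale c from the mean. The sketch below (illustrative only; the histogram counts are hypothetical) also shows how the mean and standard deviation can be computed directly from binned wind-speed classes, as in the odd/even-class approach:

```python
import math

def binned_stats(centers, counts):
    """Mean and standard deviation of wind speed from histogram classes."""
    n = sum(counts)
    mean = sum(b * c for b, c in zip(centers, counts)) / n
    var = sum(c * (b - mean) ** 2 for b, c in zip(centers, counts)) / n
    return mean, math.sqrt(var)

def weibull_justus(mean, std):
    """Empirical method of Justus: shape k from the coefficient of
    variation, scale c from the mean via the gamma function."""
    k = (std / mean) ** -1.086
    c = mean / math.gamma(1.0 + 1.0 / k)
    return k, c

# Odd classes only (centers 1, 3, 5, 7, 9 m/s) of a hypothetical histogram.
mean, std = binned_stats([1, 3, 5, 7, 9], [12, 30, 25, 10, 3])
k, c = weibull_justus(mean, std)
print(round(k, 2), round(c, 2))
```

Using only the odd (or only the even) classes halves the number of bins processed, which is the data-size reduction the abstract refers to.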


2021 ◽  
Vol 24 ◽  
pp. 100618
Author(s):  
Philipe Riskalla Leal ◽  
Ricardo José de Paula Souza e Guimarães ◽  
Fábio Dall Cortivo ◽  
Rayana Santos Araújo Palharini ◽  
Milton Kampel

2021 ◽  
Author(s):  
Christoph Klingler ◽  
Mathew Herrnegger ◽  
Frederik Kratzert ◽  
Karsten Schulz

Open large-sample datasets are important for various reasons: i) they enable large-sample analyses, ii) they democratize access to data, iii) they enable large-sample comparative studies and foster reproducibility, and iv) they are a key driver for recent developments of machine-learning based modelling approaches.

Recently, various large-sample datasets have been released (e.g. different country-specific CAMELS datasets); however, all of them contain only data of individual catchments distributed across entire countries, not connected river networks.

Here, we present LamaH, a new dataset covering all of Austria and the foreign upstream areas of the Danube, spanning a total of 170,000 km² across 9 countries, with discharge observations for 882 gauges. The dataset also includes 15 meteorological time series, derived from ERA5-Land, for two basin delineations: first, the entire upstream area of a particular gauge, and second, only the area between a particular gauge and its upstream gauges. Both the meteorological and discharge time series are provided at hourly and daily resolution and cover a period of over 35 years (with some exceptions in the discharge data for a couple of gauges).

Sticking closely to the CAMELS datasets, LamaH also contains more than 60 catchment attributes, derived for both types of basin delineation. The attributes include climatic, hydrological and vegetation indices, land cover information, as well as soil, geological and topographical properties. Additionally, the runoff gauges are classified by over 20 attributes, including information about human impact and indicators for data quality and completeness. Lastly, LamaH contains attributes for the river network itself, such as gauge topology, stream length and the slope between two sequential gauges.

Given the scope of LamaH, we hope that this dataset will serve as a solid database for further investigations in various tasks of hydrology. The extent of the data, combined with the interconnected river network and the high temporal resolution of the time series, may reveal deeper insights into water transfer and storage with appropriate modelling methods.
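The second basin delineation described above (the area between a gauge and its directly upstream gauges) can be derived from the first by simple subtraction along the gauge topology. A minimal sketch with hypothetical gauge IDs (the actual LamaH field names may differ):

```python
def intermediate_area(gauge, upstream_area, upstream_gauges):
    """Area between a gauge and its directly upstream gauges: the gauge's
    total upstream area minus the upstream areas of its parent gauges."""
    parents = upstream_gauges.get(gauge, [])
    return upstream_area[gauge] - sum(upstream_area[p] for p in parents)

# Hypothetical topology: g3 lies downstream of g1 and g2.
area = {"g1": 120.0, "g2": 80.0, "g3": 250.0}   # km², total upstream area
topo = {"g3": ["g1", "g2"]}                     # directly upstream gauges
print(intermediate_area("g3", area, topo))      # 50.0
print(intermediate_area("g1", area, topo))      # 120.0 (headwater gauge)
```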


Mathematics ◽  
2020 ◽  
Vol 8 (7) ◽  
pp. 1078
Author(s):  
Ruxandra Stoean ◽  
Catalin Stoean ◽  
Miguel Atencia ◽  
Roberto Rodríguez-Labrada ◽  
Gonzalo Joya

Uncertainty quantification in deep learning models is especially important for the medical applications of this complex and successful type of neural architectures. One popular technique is Monte Carlo dropout that gives a sample output for a record, which can be measured statistically in terms of average probability and variance for each diagnostic class of the problem. The current paper puts forward a convolutional–long short-term memory network model with a Monte Carlo dropout layer for obtaining information regarding the model uncertainty for saccadic records of all patients. These are next used in assessing the uncertainty of the learning model at the higher level of sets of multiple records (i.e., registers) that are gathered for one patient case by the examining physician towards an accurate diagnosis. Means and standard deviations are additionally calculated for the Monte Carlo uncertainty estimates of groups of predictions. These serve as a new collection where a random forest model can perform both classification and ranking of variable importance. The approach is validated on a real-world problem of classifying electrooculography time series for an early detection of spinocerebellar ataxia 2 and reaches an accuracy of 88.59% in distinguishing between the three classes of patients.
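Monte Carlo dropout itself is simple to illustrate: dropout stays active at prediction time, and many stochastic forward passes are averaged. A minimal NumPy sketch of the idea (a toy two-layer network with random weights, not the paper's convolutional-LSTM model):

```python
import numpy as np

def mc_dropout_predict(x, W1, W2, p_drop=0.5, n_samples=200, seed=None):
    """Monte Carlo dropout: run many stochastic forward passes with dropout
    active and report the per-class mean probability (the prediction) and
    variance (an uncertainty estimate)."""
    rng = np.random.default_rng(seed)
    probs = []
    for _ in range(n_samples):
        h = np.maximum(x @ W1, 0.0)               # hidden layer, ReLU
        mask = rng.random(h.shape) >= p_drop      # fresh dropout mask each pass
        h = h * mask / (1.0 - p_drop)             # inverted-dropout scaling
        logits = h @ W2
        e = np.exp(logits - logits.max())         # numerically stable softmax
        probs.append(e / e.sum())
    probs = np.array(probs)
    return probs.mean(axis=0), probs.var(axis=0)

rng = np.random.default_rng(0)
x = rng.normal(size=4)                                     # one input record
W1, W2 = rng.normal(size=(4, 8)), rng.normal(size=(8, 3))  # 3 diagnostic classes
mean, var = mc_dropout_predict(x, W1, W2, seed=1)
print(mean.sum())  # ~1.0: the mean of softmax vectors still sums to one
```

Aggregating these per-record means and variances over all records of a register yields the group-level statistics that the paper feeds to the random forest.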


2013 ◽  
Vol 280 (1768) ◽  
pp. 20131389 ◽  
Author(s):  
Jiqiu Li ◽  
Andy Fenton ◽  
Lee Kettley ◽  
Phillip Roberts ◽  
David J. S. Montagnes

We propose that delayed predator–prey models may provide superficially acceptable predictions for spurious reasons. Through experimentation and modelling, we offer a new approach: using a model experimental predator–prey system (the ciliates Didinium and Paramecium), we determine the influence of past-prey abundance at a fixed delay (approx. one generation) on both functional and numerical responses (i.e. the influence of present:past-prey abundance on ingestion and growth, respectively). We reveal a nonlinear influence of past-prey abundance on both responses, with the two responding differently. Including these responses in a model indicated that delay in the numerical response drives population oscillations, supporting the accepted (but untested) notion that reproduction, not feeding, is highly dependent on the past. We next indicate how delays impact short- and long-term population dynamics. Critically, we show that although superficially the standard (parsimonious) approach to modelling can reasonably fit independently obtained time-series data, it does so by relying on biologically unrealistic parameter values. By contrast, including our fully parametrized delayed density dependence provides a better fit, offering insights into underlying mechanisms. We therefore present a new approach to explore time-series data and a revised framework for further theoretical studies.
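The core idea, a numerical response driven by past prey abundance while the functional response uses present abundance, can be sketched with a toy discrete-time model (illustrative parameter values only, not the paper's fitted model):

```python
import numpy as np

def simulate(n0, p0, steps, tau, dt=0.1, r=1.0, K=1000.0,
             a=0.01, e=0.5, d=0.3):
    """Toy predator-prey model: the functional response (ingestion) uses
    present prey abundance; the numerical response (predator growth) uses
    prey abundance `tau` steps in the past."""
    N = [float(n0)] * (tau + 1)   # prey history, seeded with the start value
    P = [float(p0)]
    for _ in range(steps):
        n, p = N[-1], P[-1]
        n_past = N[-1 - tau]              # prey abundance tau steps ago
        ingestion = a * n * p             # functional response (present prey)
        growth = e * a * n_past * p       # numerical response (past prey)
        N.append(max(n + dt * (r * n * (1.0 - n / K) - ingestion), 0.0))
        P.append(max(p + dt * (growth - d * p), 0.0))
    return np.array(N[tau:]), np.array(P)

prey, pred = simulate(500.0, 10.0, steps=2000, tau=10)
print(prey.shape, pred.shape)  # (2001,) (2001,)
```

With the delay only in the growth term, such models readily produce the population oscillations the abstract attributes to the delayed numerical response.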

