Analysis of Precipitation Variability using Memory Based Artificial Neural Networks

2019 ◽  
Vol 10 (1) ◽  
pp. 29-42
Author(s):  
Shyama Debbarma ◽  
Parthasarathi Choudhury ◽  
Parthajit Roy ◽  
Ram Kumar

This article analyzes the variability in precipitation of the Barak river basin using memory-based ANN models, the Gamma Memory Neural Network (GMNN) and a genetically optimized GMNN called GMNN-GA, for precipitation downscaling. The GMNN, having an adaptive memory depth, is capable of modeling time-varying inputs with unknown input characteristics, while integrating the model with a genetic algorithm (GA) can further improve its performance. NCEP reanalysis and HadCM3 A2(a) scenario data are used for downscaling and forecasting precipitation series for the Barak river basin. Model performance is analyzed using the statistical criteria RMSE and mean error and is compared with the standard SDSM model. Results obtained using 24 years of daily data sets show that GMNN-GA is efficient in downscaling daily precipitation series, with a maximum daily annual mean error of 6.78%. The outcomes of the study demonstrate that the performance of the GMNN-GA model is superior to that of the GMNN and comparable with that of the standard SDSM.
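
The abstract does not spell out the gamma memory structure; as a rough, hedged illustration, the sketch below implements a generic gamma memory line (a cascade of leaky integrators whose decay parameter mu sets an adaptive memory depth) with a simple linear readout. The depth, mu, the synthetic predictor, and the function name are illustrative assumptions rather than the authors' configuration; in a GMNN-GA setup a genetic algorithm would tune parameters such as mu and the network weights.

import numpy as np

def gamma_memory(u, depth=5, mu=0.6):
    """Generic gamma memory line (illustrative, not the paper's exact model).

    Each tap is a leaky integrator of the previous tap:
        x_0(t) = u(t),  x_k(t) = (1 - mu) * x_k(t-1) + mu * x_{k-1}(t-1)
    Returns an array of shape (len(u), depth + 1) holding the tap signals.
    """
    taps = np.zeros((len(u), depth + 1))
    for t in range(len(u)):
        taps[t, 0] = u[t]
        if t > 0:
            for k in range(1, depth + 1):
                taps[t, k] = (1.0 - mu) * taps[t - 1, k] + mu * taps[t - 1, k - 1]
    return taps

# Toy usage: relate a stand-in large-scale predictor to a lagged target series
# through the memory taps with a least-squares readout.
rng = np.random.default_rng(0)
predictor = rng.normal(size=500)
target = 0.8 * np.roll(predictor, 3)
X = gamma_memory(predictor, depth=5, mu=0.6)
weights = np.linalg.lstsq(X, target, rcond=None)[0]
rmse = float(np.sqrt(np.mean((X @ weights - target) ** 2)))
print(f"toy RMSE: {rmse:.3f}")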


PAMM ◽  
2021 ◽  
Vol 20 (1) ◽  
Author(s):  
Gerson C. Kroiz ◽  
Reetam Majumder ◽  
Matthias K. Gobbert ◽  
Nagaraj K. Neerchal ◽  
Kel Markert ◽  
...  

Entropy ◽  
2021 ◽  
Vol 23 (4) ◽  
pp. 484
Author(s):  
Claudiu Vințe ◽  
Marcel Ausloos ◽  
Titus Felix Furtună

Grasping the historical volatility of stock market indices and accurately estimating it are two of the major focuses of those involved in the financial securities industry and derivative instruments pricing. This paper presents the results of employing the intrinsic entropy model as an alternative way of estimating the volatility of stock market indices. Diverging from the widely used volatility models that take into account only the elements related to the traded prices, namely the open, high, low, and close prices of a trading day (OHLC), the intrinsic entropy model takes into account the volumes traded during the considered time frame as well. We adjust the intraday intrinsic entropy model that we introduced earlier for exchange-traded securities in order to connect daily OHLC prices with the ratio of the corresponding daily volume to the overall volume traded in the considered period. The intrinsic entropy model conceptualizes this ratio as an entropic probability, or market credence, assigned to the corresponding price level. The intrinsic entropy is computed using historical daily data for traded market indices (S&P 500, Dow 30, NYSE Composite, NASDAQ Composite, Nikkei 225, and Hang Seng Index). We compare the results produced by the intrinsic entropy model with the volatility estimates obtained for the same data sets using widely employed industry volatility estimators. The intrinsic entropy model proves to consistently deliver reliable estimates for various time frames while showing peculiarly high values for the coefficient of variation, with the estimates falling in a significantly lower interval range compared with those provided by the other advanced volatility estimators.
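
The paper's exact estimator is not reproduced in the abstract, so the sketch below only illustrates the ingredient it describes: each day's share of the total traded volume is treated as an entropic probability ("market credence") attached to that day's OHLC price level and used to weight a daily range term. The specific weighting and the Garman-Klass-style range term are simplifying assumptions for illustration, not the intrinsic entropy formula itself.

import numpy as np
import pandas as pd

def volume_weighted_entropy_volatility(ohlcv: pd.DataFrame) -> float:
    """Illustrative volume-weighted, entropy-style dispersion measure.

    Expects columns 'Open', 'High', 'Low', 'Close', 'Volume'. The daily
    volume share q_t = V_t / sum(V) plays the role of the abstract's
    'market credence'; this is a simplified stand-in, not the paper's
    intrinsic entropy estimator.
    """
    q = ohlcv["Volume"] / ohlcv["Volume"].sum()
    hl = np.log(ohlcv["High"] / ohlcv["Low"])                 # daily log range
    co = np.log(ohlcv["Close"] / ohlcv["Open"])               # daily open-to-close log move
    daily_term = hl ** 2 - (2.0 * np.log(2.0) - 1.0) * co ** 2
    weights = -q * np.log(q)                                  # entropy-style weights
    weights = weights / weights.sum()
    return float(np.sqrt((weights * daily_term).sum()))

# Toy usage on synthetic data; real use would feed an index's daily OHLCV history.
rng = np.random.default_rng(1)
close = 100.0 * np.exp(np.cumsum(rng.normal(0.0, 0.01, 250)))
open_ = close * (1.0 + rng.normal(0.0, 0.002, 250))
df = pd.DataFrame({
    "Open": open_,
    "High": np.maximum(open_, close) * 1.005,
    "Low": np.minimum(open_, close) * 0.995,
    "Close": close,
    "Volume": rng.integers(1_000, 10_000, 250),
})
print(volume_weighted_entropy_volatility(df))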


2014 ◽  
Vol 45 (5-6) ◽  
pp. 1325-1354 ◽  
Author(s):  
Emilia Paula Diaconescu ◽  
Philippe Gachon ◽  
John Scinocca ◽  
René Laprise

2007 ◽  
Vol 4 (5) ◽  
pp. 3413-3440 ◽  
Author(s):  
E. P. Maurer ◽  
H. G. Hidalgo

Abstract. Downscaling of climate model data is essential to most impact analysis. We compare two methods of statistical downscaling to produce continuous, gridded time series of precipitation and surface air temperature at a 1/8-degree (approximately 140 km² per grid cell) resolution over the western U.S. We use NCEP/NCAR Reanalysis data from 1950–1999 as a surrogate General Circulation Model (GCM). The two methods included are constructed analogues (CA) and bias correction and spatial downscaling (BCSD); both have been shown to be skillful in different settings, and BCSD has been used extensively in hydrologic impact analysis. Both methods use the coarse-scale Reanalysis fields of precipitation and temperature as predictors of the corresponding fine-scale fields. CA downscales daily large-scale data directly, while BCSD downscales monthly data and uses a random resampling technique to generate daily values. The methods show comparable skill in producing downscaled, gridded fields of precipitation and temperature at the monthly and seasonal level. For daily precipitation, both methods exhibit some skill in reproducing observed wet and dry extremes, and the difference between the methods is not significant, reflecting the generally low skill for daily precipitation variability in the reanalysis data. For low temperature extremes, the CA method produces greater downscaling skill than BCSD for the fall and winter seasons. For high temperature extremes, CA demonstrates higher skill than BCSD in summer. We find that the choice of the most appropriate downscaling technique depends on the variables, seasons, and regions of interest, on the availability of daily data, and on whether the day-to-day correspondence of weather from the GCM needs to be reproduced for some applications. The ability to produce skillful downscaled daily data depends primarily on the ability of the climate model to show daily skill.
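
The bias-correction step of BCSD is commonly implemented as empirical quantile mapping of the coarse-scale field against observations aggregated to the model grid; the sketch below shows that generic step only (the spatial disaggregation and the daily resampling are omitted), with illustrative variable names and synthetic data.

import numpy as np

def quantile_map(model_hist, obs_hist, model_future):
    """Empirical quantile mapping, the usual bias-correction step in BCSD.

    Each value to be corrected is located on the model's historical CDF and
    replaced by the observed value at the same quantile. A generic sketch,
    not the exact implementation used in the paper.
    """
    model_sorted = np.sort(model_hist)
    obs_sorted = np.sort(obs_hist)
    quantiles = np.linspace(0.0, 1.0, len(model_sorted))
    # Quantile of each value on the model's historical distribution...
    q = np.interp(model_future, model_sorted, quantiles)
    # ...mapped back through the observed distribution at the same quantile.
    return np.interp(q, np.linspace(0.0, 1.0, len(obs_sorted)), obs_sorted)

# Toy usage: correct a wet-biased monthly precipitation series.
rng = np.random.default_rng(0)
obs = rng.gamma(2.0, 30.0, 600)          # "observed" monthly precipitation (mm)
model = rng.gamma(2.0, 45.0, 600)        # biased "GCM/reanalysis" series
corrected = quantile_map(model, obs, model)
print(model.mean(), corrected.mean(), obs.mean())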


Author(s):  
Abou_el_ela Abdou Hussein

Day by day, advanced web technologies have led to a tremendous growth in the volume of data generated daily. This mountain of huge and distributed data sets leads to the phenomenon called big data, a collection of massive, heterogeneous, unstructured, and complex data sets. The big data life cycle can be represented as collecting (capturing), storing, distributing, manipulating, interpreting, analyzing, investigating, and visualizing the data. Traditional techniques such as the Relational Database Management System (RDBMS) cannot handle big data because of their inherent limitations, so advances in computing architecture are required to handle both the data storage requirements and the heavy processing needed to analyze huge volumes and varieties of data economically. There are many technologies for manipulating big data; one of them is Hadoop. Hadoop can be understood as an open-source distributed data processing framework that is one of the prominent and well-known solutions to the problem of handling big data. Apache Hadoop is based on the Google File System and the MapReduce programming paradigm. In this paper we survey big data characteristics, starting from the first three V's, which have been extended over time by researchers to more than fifty-six V's, and compare researchers' work to reach the best representation and a precise clarification of all big data V characteristics. We highlight the challenges that face big data processing and how to overcome them using Hadoop, and we discuss its use in processing big data sets as a solution for resolving various problems in a distributed, cloud-based environment. This paper mainly focuses on different components of Hadoop, such as Hive, Pig, and HBase. It also gives a thorough description of Hadoop's pros and cons and of improvements that address Hadoop's problems, including a proposed cost-efficient scheduler algorithm for heterogeneous Hadoop systems.
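
As a concrete illustration of the MapReduce paradigm the abstract refers to, below is the classic word-count job written as two Hadoop Streaming scripts in Python; the input/output paths and the streaming jar location in the invocation comment are placeholders.

# mapper.py -- emit "<word>\t1" for every word read from standard input.
import sys

for line in sys.stdin:
    for word in line.strip().split():
        print(f"{word}\t1")

# reducer.py -- sum the counts per word (Hadoop delivers the keys sorted).
import sys

current_word, current_count = None, 0
for line in sys.stdin:
    word, count = line.rstrip("\n").split("\t", 1)
    if word == current_word:
        current_count += int(count)
    else:
        if current_word is not None:
            print(f"{current_word}\t{current_count}")
        current_word, current_count = word, int(count)
if current_word is not None:
    print(f"{current_word}\t{current_count}")

# Placeholder invocation (paths depend on the installation):
# hadoop jar /path/to/hadoop-streaming.jar \
#   -input /data/text -output /data/wordcount \
#   -mapper mapper.py -reducer reducer.py -file mapper.py -file reducer.py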


Author(s):  
František Pavlík ◽  
Miroslav Dumbrovský

In a survey of landscape retention capability, results of measurements obtained during the disastrous flood of June 2009 were used. An original method, based on the balance between the daily precipitation falling on the basin and the discharges at the final (outlet) profile, was applied, by analogy with the transformation of a flood discharge through a reservoir. The following basin retentions are defined: dynamic Rd, static Rs (including underground retention Rug and evaporation E), and total Rt. The main criteria were the effective static retention of the basin Rsef and the coefficient of effective static basin retention ρsef (3). The coefficient of flood-culmination reduction λcul (4) was calculated as well. The factors having the greatest influence on the retention capacity of a basin are also investigated. A summary of the results is shown in Tab. I, where values of the most important criterion quantities are shaded. The results show, for example, that the coefficient ρsef is 0.52, meaning that the soil (and, to a small extent, evaporation) in the basin retained 52% of the flood-wave volume up to the time of the culmination discharge. Some further findings of interest are presented in the results and conclusions.
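
Equations (3) and (4) are not reproduced in the abstract, so the sketch below only illustrates the kind of balance involved: the precipitation volume fallen on the basin minus the volume discharged at the outlet, expressed as a fraction of the inflow volume. The variable names, the daily rectangle integration, and the toy numbers are assumptions for illustration, not the authors' definitions.

def effective_static_retention_ratio(precip_mm, discharge_m3s, area_km2, dt_s=86_400):
    """Illustrative water balance up to the culmination discharge.

    precip_mm      -- daily precipitation depths over the basin (mm/day)
    discharge_m3s  -- mean daily discharges at the outlet profile (m^3/s)
    area_km2       -- basin area (km^2)
    Returns the retained fraction of the inflow volume, a rough analogue of
    the paper's rho_sef, not its exact definition.
    """
    precip_volume = sum(precip_mm) / 1000.0 * area_km2 * 1e6   # m^3 fallen on the basin
    outflow_volume = sum(q * dt_s for q in discharge_m3s)      # m^3 discharged
    retained = precip_volume - outflow_volume
    return retained / precip_volume

# Toy numbers: roughly half of the fallen volume leaves the outlet by the
# culmination, giving about 0.50, comparable in spirit to the reported 0.52.
print(effective_static_retention_ratio([30.0, 45.0, 20.0], [27.0, 70.0, 40.0], 250.0))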


2021 ◽  
Author(s):  
Beatrix Izsák ◽  
Mónika Lakatos ◽  
Rita Pongrácz ◽  
Tamás Szentimrey ◽  
Olivér Szentes

Climate studies, in particular those related to climate change, require long, high-quality, controlled data sets that are representative both spatially and temporally. Changes in the conditions under which the measurements were taken, for example relocating the station, a change in the frequency and time of measurements, or a change in the instruments used, may result in a fractured time series. To avoid these problems, data errors and inhomogeneities are eliminated for Hungary and data gaps are filled in by using the MASH (Multiple Analysis of Series for Homogenization; Szentimrey) homogenization procedure. Homogenization of the data series raises the problem of how to homogenize long and short data series together within the same process, since the meteorological observation network was upgraded significantly in the last decades. It is possible to solve these problems with the MASH method owing to its mathematical principles, which are adequate for such purposes. The solution includes the synchronization of the common parts' inhomogeneities within three (or more) different MASH processings of the three (or more) data sets with different lengths. Then, the homogenized station data series are interpolated to the whole area of Hungary, on a 0.1-degree regular grid. For this purpose, the MISH (Meteorological Interpolation based on Surface Homogenized Data Basis; Szentimrey and Bihari) program system is used. The MISH procedure was developed specifically for the interpolation of various meteorological elements. Hungarian time series of daily average temperature and precipitation sums for the period 1870-2020 were used in this study, thus providing the longest homogenized, gridded daily data sets in the region with up-to-date information already included.

Supported by the ÚNKP-20-3 New National Excellence Program of the Ministry for Innovation and Technology from the source of the National Research, Development and Innovation Fund.
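
MASH and MISH have their own mathematical formulations that are not reproduced in the abstract; purely as a generic illustration of the final gridding step, the sketch below interpolates homogenized station values onto a 0.1-degree regular grid with simple inverse-distance weighting. The station coordinates and values are made up, and this scheme is a stand-in for, not an implementation of, MISH.

import numpy as np

def idw_to_grid(stat_lon, stat_lat, stat_val, lon_grid, lat_grid, power=2.0):
    """Inverse-distance weighting of station values onto a regular grid.

    A generic gridding illustration; MISH uses its own interpolation based on
    modelled spatial statistics, not this simple scheme.
    """
    glon, glat = np.meshgrid(lon_grid, lat_grid)
    out = np.zeros_like(glon)
    for i in range(glat.shape[0]):
        for j in range(glon.shape[1]):
            d = np.hypot(stat_lon - glon[i, j], stat_lat - glat[i, j])
            if d.min() < 1e-9:                      # grid point coincides with a station
                out[i, j] = stat_val[d.argmin()]
            else:
                w = 1.0 / d ** power
                out[i, j] = np.sum(w * stat_val) / np.sum(w)
    return out

# Toy usage: a handful of "homogenized" station temperatures gridded at 0.1 degree.
lon = np.arange(16.0, 23.0, 0.1)                    # rough longitude span of Hungary
lat = np.arange(45.7, 48.6, 0.1)
stations_lon = np.array([17.9, 19.0, 20.3, 21.6])
stations_lat = np.array([47.7, 47.5, 46.4, 47.9])
stations_val = np.array([10.4, 11.2, 11.8, 10.1])   # e.g. daily mean temperature (degC)
grid = idw_to_grid(stations_lon, stations_lat, stations_val, lon, lat)
print(grid.shape)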

