Analysis of Precipitation Variability using Memory Based Artificial Neural Networks

2022 ◽  
pp. 955-970
Author(s):  
Shyama Debbarma ◽  
Parthasarathi Choudhury ◽  
Parthajit Roy ◽  
Ram Kumar

This article analyzes the variability in precipitation of the Barak river basin using memory-based ANN models: the Gamma Memory Neural Network (GMNN) and a genetically optimized variant, GMNN-GA, for precipitation downscaling. With its adaptive memory depth, GMNN is capable of modeling time-varying inputs with unknown input characteristics, while integrating the model with a genetic algorithm (GA) can further improve its performance. NCEP reanalysis and HadCM3 A2(a) scenario data are used for downscaling and forecasting the precipitation series for the Barak river basin. Model performance is evaluated using statistical criteria, RMSE and mean error, and compared with the standard SDSM model. Results obtained using 24 years of daily data show that GMNN-GA is efficient in downscaling daily precipitation series, with a maximum daily annual mean error of 6.78%. The outcomes of the study demonstrate that the performance of the GMNN-GA model is superior to that of the GMNN and comparable to that of the standard SDSM.
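
A minimal sketch of the gamma memory structure behind GMNN, assuming a single input series, K taps and a leak parameter mu (all names illustrative, not from the article); the taps would feed a feed-forward network, and GMNN-GA would let a genetic algorithm tune mu and the network weights:

```python
import numpy as np

def gamma_memory(series, K=4, mu=0.5):
    """Run a K-tap gamma memory over a 1-D series.

    Each tap k is a leaky integrator fed by tap k-1:
        x_k(t) = (1 - mu) * x_k(t-1) + mu * x_{k-1}(t-1)
    Tap 0 is the raw input; effective memory depth is roughly K / mu,
    which is what makes the depth adaptive when mu is trainable.
    Returns an array of shape (len(series), K + 1) of tap outputs.
    """
    series = np.asarray(series, dtype=float)
    T = len(series)
    taps = np.zeros((T, K + 1))
    taps[:, 0] = series
    for t in range(1, T):
        for k in range(1, K + 1):
            taps[t, k] = (1 - mu) * taps[t - 1, k] + mu * taps[t - 1, k - 1]
    return taps

# Toy usage: taps for a synthetic daily signal; in the article these would be
# built from NCEP/HadCM3 predictors and mapped to daily precipitation.
demo = gamma_memory(np.sin(np.linspace(0, 10, 100)), K=3, mu=0.6)
```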


Entropy ◽  
2021 ◽  
Vol 23 (4) ◽  
pp. 484
Author(s):  
Claudiu Vințe ◽  
Marcel Ausloos ◽  
Titus Felix Furtună

Grasping the historical volatility of stock market indices and accurately estimating it are two of the major focuses of those involved in the financial securities industry and derivative instruments pricing. This paper presents the results of employing the intrinsic entropy model as an alternative means of estimating the volatility of stock market indices. Diverging from the widely used volatility models that take into account only the elements related to the traded prices, namely the open, high, low, and close prices of a trading day (OHLC), the intrinsic entropy model takes into account the traded volumes during the considered time frame as well. We adjust the intraday intrinsic entropy model that we introduced earlier for exchange-traded securities in order to connect daily OHLC prices with the ratio of the corresponding daily volume to the overall volume traded in the considered period. The intrinsic entropy model conceptualizes this ratio as entropic probability or market credence assigned to the corresponding price level. The intrinsic entropy is computed using historical daily data for traded market indices (S&P 500, Dow 30, NYSE Composite, NASDAQ Composite, Nikkei 225, and Hang Seng Index). We compare the results produced by the intrinsic entropy model with the volatility estimates obtained for the same data sets using widely employed industry volatility estimators. The intrinsic entropy model proves to consistently deliver reliable estimates for various time frames while showing peculiarly high values for the coefficient of variation, with the estimates falling in a significantly lower interval range compared with those provided by the other advanced volatility estimators.
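
A simplified toy illustration of the core idea, not the authors' actual estimator: treat each day's share of total traded volume as an entropic probability ("market credence") and combine it with daily log returns into an entropy-and-dispersion pair. The function name and the exact weighting are assumptions for illustration only:

```python
import numpy as np

def volume_weighted_entropy(close, volume):
    """Return (H, vw_vol): the Shannon entropy of the traded-volume
    distribution and a volume-weighted return dispersion, as a toy
    stand-in for the intrinsic entropy volatility estimate."""
    r = np.diff(np.log(np.asarray(close, dtype=float)))   # daily log returns
    v = np.asarray(volume, dtype=float)[1:]
    p = v / v.sum()                         # daily volume share as probability
    H = -np.sum(p * np.log(p))              # entropy of the volume distribution
    vw_vol = np.sqrt(np.sum(p * (r - np.sum(p * r)) ** 2))  # credence-weighted dispersion
    return H, vw_vol

# Toy usage on four days of index data (prices, volumes):
print(volume_weighted_entropy([100, 101, 99, 102], [1e6, 2e6, 1.5e6, 3e6]))
```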


2021 ◽  
Vol 7 (1) ◽  
Author(s):  
Serdar Neslihanoglu

This research investigates the appropriateness of the linear specification of the market model for modeling and forecasting the cryptocurrency prices during the pre-COVID-19 and COVID-19 periods. Two extensions are offered to compare the performance of the linear specification of the market model (LMM), which allows for the measurement of the cryptocurrency price beta risk. The first is the generalized additive model, which permits flexibility in the rigid shape of the linearity of the LMM. The second is the time-varying linearity specification of the LMM (Tv-LMM), which is based on the state space model form via the Kalman filter, allowing for the measurement of the time-varying beta risk of the cryptocurrency price. The analysis is performed using daily data from both time periods on the top 10 cryptocurrencies by adjusted market capitalization, using the Crypto Currency Index 30 (CCI30) as a market proxy and 1-day and 7-day forward predictions. Such a comparison of cryptocurrency prices has yet to be undertaken in the literature. The empirical findings favor the Tv-LMM, which outperforms the others in terms of modeling and forecasting performance. This result suggests that the relationship between each cryptocurrency price and the CCI30 index should be locally instead of globally linear, especially during the COVID-19 period.
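
A minimal sketch of the Tv-LMM idea, assuming a random-walk state for (alpha_t, beta_t) and observation r_t = alpha_t + beta_t * m_t + noise, where r_t is a cryptocurrency return and m_t the CCI30 return; the noise variances q and h are placeholders that would be estimated by maximum likelihood in practice:

```python
import numpy as np

def kalman_tv_beta(r, m, q=1e-4, h=1e-2):
    """Filter a time-varying (alpha, beta) pair by a standard Kalman recursion.

    r, m: 1-D arrays of asset and market returns.
    q: state innovation variance (how fast alpha/beta may drift).
    h: observation noise variance.
    Returns the filtered beta path.
    """
    r, m = np.asarray(r, dtype=float), np.asarray(m, dtype=float)
    x = np.zeros(2)               # state estimate [alpha_t, beta_t]
    P = np.eye(2)                 # state covariance
    Q = q * np.eye(2)
    betas = np.zeros(len(r))
    for t in range(len(r)):
        P = P + Q                             # predict (random-walk state)
        Ht = np.array([1.0, m[t]])            # observation vector
        S = Ht @ P @ Ht + h                   # innovation variance
        K = P @ Ht / S                        # Kalman gain
        x = x + K * (r[t] - Ht @ x)           # update with the forecast error
        P = P - np.outer(K, Ht) @ P
        betas[t] = x[1]
    return betas

# A 1-day-ahead forecast at time t+1 would use the filtered state:
# r_hat = x[0] + x[1] * m[t + 1]
```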


2021 ◽  
Vol 7 (1) ◽  
Author(s):  
Begüm Yurteri Kösedağlı ◽  
Gül Huyugüzel Kışla ◽  
A. Nazif Çatık

This study analyzes oil price exposure of the oil–gas sector stock returns for the fragile five countries based on a multi-factor asset pricing model using daily data from 29 May 1996 to 27 January 2020. The endogenous structural break test suggests the presence of serious parameter instabilities due to fluctuations in the oil and stock markets over the period under study. Moreover, the time-varying estimates indicate that the oil–gas sectors of these countries are riskier than the overall stock market. The results further suggest that, except for Indonesia, oil prices have a positive impact on the sectoral returns of all markets, whereas the impact of the exchange rates on the oil–gas sector returns varies across time and countries.
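
A hedged sketch of the kind of multi-factor pricing regression behind such a study, with illustrative variable names (market, oil and exchange-rate factors; the paper additionally allows the coefficients to vary over time):

```python
import numpy as np

def multifactor_betas(sector_r, market_r, oil_r, fx_r):
    """OLS exposures of oil-gas sector returns to market, oil and FX factors."""
    sector_r = np.asarray(sector_r, dtype=float)
    X = np.column_stack([np.ones(len(sector_r)),
                         np.asarray(market_r, dtype=float),
                         np.asarray(oil_r, dtype=float),
                         np.asarray(fx_r, dtype=float)])
    coef, *_ = np.linalg.lstsq(X, sector_r, rcond=None)
    return dict(zip(["alpha", "beta_market", "beta_oil", "beta_fx"], coef))
```

Re-estimating this regression over rolling windows of the 1996-2020 daily sample is one simple way to trace the kind of time variation and parameter instability the authors report.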


2019 ◽  
Vol 19 (1) ◽  
pp. 3-23
Author(s):  
Aurea Soriano-Vargas ◽  
Bernd Hamann ◽  
Maria Cristina F de Oliveira

We present an integrated interactive framework for the visual analysis of time-varying multivariate data sets. As part of our research, we performed in-depth studies concerning the applicability of visualization techniques to obtain valuable insights. We consolidated the analysis and visualization methods under consideration into one framework, called TV-MV Analytics. TV-MV Analytics effectively combines visualization and data mining algorithms, providing the following capabilities: (1) visual exploration of multivariate data at different temporal scales, and (2) a hierarchical small multiples visualization combined with interactive clustering and multidimensional projection to detect temporal relationships in the data. We demonstrate the value of our framework for specific scenarios, by studying three use cases that were validated and discussed with domain experts.
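
A hedged sketch of the clustering-plus-projection step such a small-multiples pipeline combines (not the TV-MV Analytics code itself; KMeans and PCA here stand in for whatever interactive clustering and multidimensional projection the framework uses):

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA

def cluster_and_project(X, n_clusters=4):
    """X: (n_samples, n_variables) records at one temporal scale.

    Returns 2-D coordinates for one scatter panel plus cluster labels;
    calling this once per temporal scale yields the small multiples."""
    X = np.asarray(X, dtype=float)
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(X)
    coords = PCA(n_components=2).fit_transform(X)
    return coords, labels
```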


2021 ◽  
Vol 37 (4) ◽  
pp. 631-643
Author(s):  
Tayyaba Yousaf ◽  
Sadia Farooq ◽  
Ahmed Muneeb Mehta

Purpose – The purpose of this study is to investigate whether the STOXX Europe Christian price index (SECI) follows the premise of the efficient market hypothesis (EMH).

Design/methodology/approach – The study used daily data of SECI for a period of 15 years, from its launch date of 31 December 2004 to 31 December 2019. Data are analyzed over both the full-length sample and fixed-length subsamples; for the latter, the data are divided into five subsamples of three years each. Subsample analysis is important for analyzing the time-varying efficiency of the series, as the market is said to follow the EMH only if it is efficient throughout the sample. Both types of samples are examined through linear tests, including the autocorrelation test and the variance ratio (VR) test.

Findings – The tests applied conclude that SECI is weak-form efficient, meaning that the prices of the index incorporate all relevant past information and react immediately to new information. Hence, investors cannot earn abnormal returns.

Originality/value – Religion-based indices have attracted the attention of investors, policymakers and academic researchers because of increased concern over ethics in business. Though the impact of religion on the economy has been studied in many ways, the efficiency of religion-based indices has been less explored. The current study is primary in its nature as it analyzes the efficiency of SECI. This index is important to explore because Christianity is the world's largest religion, with 2.3 billion followers around the globe.
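
A minimal sketch of the Lo-MacKinlay variance ratio statistic used in weak-form efficiency tests like the one applied here (omitting the bias corrections and standard errors a full test adds); under a random walk, VR(q) should be close to 1 for every lag q:

```python
import numpy as np

def variance_ratio(prices, q=2):
    """VR(q) = Var(q-period log return) / (q * Var(1-period log return))."""
    r = np.diff(np.log(np.asarray(prices, dtype=float)))
    rq = np.convolve(r, np.ones(q), mode="valid")  # overlapping q-period returns
    return rq.var(ddof=1) / (q * r.var(ddof=1))
```

Values near 1 across lags are consistent with weak-form efficiency; the paper pairs this with autocorrelation tests over the full sample and each 3-year subsample.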


Author(s):  
Abou_el_ela Abdou Hussein

Day by day, advances in web technologies have led to tremendous growth in the volume of data generated. This mountain of huge and dispersed data sets leads to the phenomenon called big data: a collection of massive, heterogeneous, unstructured, enormous and complex data sets. The big data life cycle can be represented as collecting (capture), storing, distributing, manipulating, interpreting, analyzing, investigating and visualizing big data. Traditional techniques such as the Relational Database Management System (RDBMS) cannot handle big data because of their inherent limitations, so advances in computing architecture are required to handle both the data storage requisites and the weighty processing needed to analyze huge volumes and varieties of data economically. There are many technologies for manipulating big data; one of them is Hadoop. Hadoop can be understood as an open-source distributed data processing framework that is one of the prominent and well-known solutions to the problem of handling big data. Apache Hadoop was based on the Google File System and the MapReduce programming paradigm. In this paper we survey all big data characteristics, starting from the first three V's, which have been extended over time through research to more than fifty-six V's, and compare researchers' accounts to reach the best representation and the most precise clarification of all big data V characteristics. We highlight the challenges that face big data processing and how to overcome them using Hadoop, and its use in processing big data sets as a solution for resolving various problems in a distributed cloud-based environment. This paper mainly focuses on the different components of Hadoop, such as Hive, Pig and HBase. We also give a full description of Hadoop's pros and cons, and improvements to address Hadoop's problems by means of a proposed cost-efficient scheduler algorithm for heterogeneous Hadoop systems.
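
The classic word-count example illustrates the MapReduce paradigm the paper discusses, here written for Hadoop Streaming in Python (file name and invocation are illustrative; Hadoop runs it roughly as `hadoop jar hadoop-streaming.jar -input ... -output ... -mapper "python wordcount.py map" -reducer "python wordcount.py reduce"`):

```python
import sys

def mapper():
    # Emit (word, 1) for every word on stdin; Hadoop shuffles and sorts by key.
    for line in sys.stdin:
        for word in line.split():
            print(f"{word}\t1")

def reducer():
    # Input arrives sorted by key, so all counts for a word are contiguous.
    current, count = None, 0
    for line in sys.stdin:
        word, n = line.rstrip("\n").split("\t")
        if word == current:
            count += int(n)
        else:
            if current is not None:
                print(f"{current}\t{count}")
            current, count = word, int(n)
    if current is not None:
        print(f"{current}\t{count}")

if __name__ == "__main__":
    role = sys.argv[1] if len(sys.argv) > 1 else "map"
    mapper() if role == "map" else reducer()
```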


2021 ◽  
Author(s):  
Beatrix Izsák ◽  
Mónika Lakatos ◽  
Rita Pongrácz ◽  
Tamás Szentimrey ◽  
Olivér Szentes

Climate studies, in particular those related to climate change, require long, high-quality, controlled data sets that are representative both spatially and temporally. Changes in the conditions in which the measurements were taken, for example relocating the station, or a change in the frequency and time of measurements, or in the instruments used, may result in a fractured time series. To avoid these problems, data errors and inhomogeneities are eliminated for Hungary and data gaps are filled in by using the MASH (Multiple Analysis of Series for Homogenization; Szentimrey) homogenization procedure. Homogenization of the data series raises the problem of how to homogenize long and short data series together within the same process, since the meteorological observation network was upgraded significantly in the last decades. These problems can be solved with the MASH method thanks to its adequate mathematical principles for such purposes. The solution includes the synchronization of the common parts' inhomogeneities within three (or more) different MASH processings of the three (or more) data sets with different lengths. Then, the homogenized station data series are interpolated to the whole area of Hungary, on a 0.1 degree regular grid. For this purpose, the MISH (Meteorological Interpolation based on Surface Homogenized Data Basis; Szentimrey and Bihari) program system is used. The MISH procedure was developed specifically for the interpolation of various meteorological elements. Hungarian time series of daily average temperature and precipitation sums for the period 1870-2020 were used in this study, thus providing the longest homogenized, gridded daily data sets in the region, with up-to-date information already included.

Supported by the ÚNKP-20-3 New National Excellence Program of the Ministry for Innovation and Technology from the source of the National Research, Development and Innovation Fund.
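
MASH itself is a full homogenization system; as a simpler illustration of the break-detection step such systems automate, here is a standard normal homogeneity test (SNHT) on a candidate-minus-reference difference series (a stand-in, not the MASH algorithm):

```python
import numpy as np

def snht(diff):
    """Return (T_max, k): the SNHT statistic and the most likely break position.

    diff: candidate station series minus a homogeneous reference (e.g. a
    neighbor average). With z1, z2 the standardized means of the segments
    before and after a candidate break at k:
        T(k) = k * z1**2 + (n - k) * z2**2
    A large T_max flags an inhomogeneity to be adjusted."""
    d = np.asarray(diff, dtype=float)
    d = (d - d.mean()) / d.std()
    n = len(d)
    T = np.array([k * d[:k].mean() ** 2 + (n - k) * d[k:].mean() ** 2
                  for k in range(1, n)])
    k = int(np.argmax(T)) + 1
    return float(T[k - 1]), k
```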

