Supervised Temporal Autoencoder for Stock Return Time-series Forecasting

Author(s):  
Steven Y. K. Wong ◽  
Jennifer S. K. Chan ◽  
Lamiae Azizi ◽  
Richard Y. D. Xu


2016 ◽  
Vol 17 (4) ◽  
pp. 456-472 ◽  
Author(s):  
Ourania Theodosiadou ◽  
Vassilis Polimenis ◽  
George Tsaklidis

Purpose: This paper aims to present the results of further investigating the Polimenis (2012) stochastic model, which decomposes the stock return evolution into positive and negative jumps and a Brownian (white) noise, taking into account different noise levels. The paper provides a sensitivity analysis of the model (through the analysis of its parameters) and applies this analysis to Google and Yahoo returns during the periods 2006-2008 and 2008-2010, by means of the third central moment of the Nasdaq index. Moreover, the paper studies the behavior of the calibrated jump sensitivities of a single stock as the market skew changes. Finally, simulations are provided for the estimation of the jump beta coefficients, assuming that the jumps follow Gamma distributions.

Design/methodology/approach: The model proposed in Polimenis (2012) is considered and further investigated. The sensitivity of the parameters estimated for the Google and Yahoo stocks during 2006-2008 by means of the third (central) moment of the Nasdaq index is examined, and consequently the calibration of the model to the returns is studied. The associated robustness is also examined for the period 2008-2010. A similar sensitivity analysis was conducted in Polimenis and Papantonis (2014); unlike that reference, where the analysis keeps the market skew constant and emphasizes jointly estimating jump sensitivities for many stocks, here the authors study the behavior of the calibrated jump sensitivities of a single stock as the market skew changes. Finally, simulations are carried out for the estimation of the jump beta coefficients, assuming that the jumps follow Gamma distributions.

Findings: A sensitivity analysis of the model proposed in Polimenis (2012) is presented. In Section 2, the paper ascertains the sensitivity of the calibrated parameters related to Google and Yahoo returns as the third (central) market moment varies. The authors demonstrate the limits within which the third moment of the stock and its mixed third moment with the market must lie in order to obtain real solutions from (S1). In addition, the authors conclude that (S1) cannot have real solutions when the stock return time series exhibits a highly positive third moment while the third moment of the market is significantly negative. Generally, a positive third moment of the stock combined with a negative third moment of the market can only be explained by assuming an adequate degree of asymmetry in the values of the beta coefficients. In such situations, the model may be expanded to include a correction for the idiosyncratic third moment in the fourth equation of (S1). Finally, in Section 4, it is observed that the distribution of the estimation errors of the coefficients cannot be considered normal, and the variance of these errors increases as the variance of the noise increases.

Originality/value: As noted in the Findings, the paper demonstrates the limits within which the third moment of the stock and its mixed third moment with the market must lie in order to obtain real solutions from the main system of equations (S1). It is concluded that (S1) cannot have real solutions when the stock return time series exhibits a highly positive third moment while the third moment of the market is significantly negative. Generally, a positive third moment of the stock combined with a negative third moment of the market can only be explained by assuming an adequate degree of asymmetry in the values of the beta coefficients. In such situations, the proposed model should be expanded to include a correction for the idiosyncratic third moment in the fourth equation of (S1). Finally, it is observed that the distribution of the estimation errors of the coefficients cannot be considered normal, and the variance of these errors increases as the variance of the noise increases.
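The jump-plus-noise decomposition described in the abstract can be illustrated with a small simulation. This is a hedged sketch, not the paper's calibration procedure: the parameter values (`beta_pos`, `beta_neg`, `sigma`, the Gamma shape and scale) are invented for illustration, and the only modelling fact used is that a Gamma(k, θ) jump has third central moment 2kθ³ while the Gaussian noise term contributes no skew.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical parameters (not from the paper): asymmetric jump betas,
# noise level, Gamma jump shape/scale, and sample size.
beta_pos, beta_neg, sigma = 0.8, 1.2, 0.02
shape, scale, n = 2.0, 0.01, 100_000

# Simulate returns as positive jumps minus negative jumps plus Brownian noise.
U = rng.gamma(shape, scale, n)        # positive jump component
D = rng.gamma(shape, scale, n)        # negative jump component
eps = rng.normal(0.0, 1.0, n)         # white noise
r = beta_pos * U - beta_neg * D + sigma * eps

# Empirical third central moment of the simulated returns; the Gaussian term
# contributes nothing, so the skew is driven by the asymmetry of the betas.
m3 = np.mean((r - r.mean()) ** 3)

# A Gamma(k, theta) jump has third central moment 2*k*theta**3, so
# E[(r - E r)^3] = (beta_pos**3 - beta_neg**3) * 2 * shape * scale**3.
m3_theory = (beta_pos**3 - beta_neg**3) * 2 * shape * scale**3
print(m3, m3_theory)
```

With `beta_neg > beta_pos` as above, the simulated returns show a negative third moment even though each jump source is positively skewed, matching the abstract's point that the sign pattern of the moments constrains the admissible asymmetry of the beta coefficients.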


2020 ◽  
Author(s):  
Pathikkumar Patel ◽  
Bhargav Lad ◽  
Jinan Fiaidhi

During the last few years, RNN models have been used extensively and have proven well suited to sequence and text data. RNNs have achieved state-of-the-art performance in several applications such as text classification, sequence-to-sequence modelling and time series forecasting. In this article we review different machine learning and deep learning approaches for text data and examine the results obtained from these methods. This work also explores the use of transfer learning in NLP and how it affects model performance on a specific application: sentiment analysis.
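As a reminder of the mechanism underlying the recurrent models surveyed here, the following is a minimal vanilla-RNN forward pass for binary sentiment scoring. The weights are random and untrained, so the score itself is meaningless; the sketch only shows how a recurrent hidden state summarises a token sequence before a logistic output, and all dimensions and names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy, untrained parameters: embedding table and RNN weights. In practice
# these would be learned, and the embeddings are often initialised from a
# transferred language model rather than at random.
vocab, emb_dim, hid_dim = 50, 8, 16
E = rng.normal(0, 0.1, (vocab, emb_dim))     # embedding table
Wx = rng.normal(0, 0.1, (hid_dim, emb_dim))  # input-to-hidden weights
Wh = rng.normal(0, 0.1, (hid_dim, hid_dim))  # hidden-to-hidden weights
w = rng.normal(0, 0.1, hid_dim)              # hidden-to-output weights

def sentiment_score(token_ids):
    """Run the recurrence over a token sequence; return P(positive)."""
    h = np.zeros(hid_dim)
    for t in token_ids:                      # one step per token
        h = np.tanh(Wx @ E[t] + Wh @ h)      # update the hidden state
    return 1.0 / (1.0 + np.exp(-(w @ h)))    # logistic readout

score = sentiment_score([3, 17, 42, 8])
```

The same read-then-classify shape underlies the text-classification results discussed above; LSTM/GRU cells replace the `tanh` update to ease gradient flow over long sequences.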


Entropy ◽  
2019 ◽  
Vol 21 (5) ◽  
pp. 455 ◽  
Author(s):  
Hongjun Guan ◽  
Zongli Dai ◽  
Shuang Guan ◽  
Aiwu Zhao

In time series forecasting, information presentation directly affects prediction efficiency. Most existing time series forecasting models follow logical rules based on the relationships between neighboring states, without considering the inconsistency of fluctuations over a related period. In this paper, we propose a new perspective on the prediction problem, in which inconsistency is quantified and regarded as a key characteristic of prediction rules. First, a time series is converted to a fluctuation time series by comparing each current value with the corresponding previous value. Then, the upward trend of each fluctuation value is mapped to the truth-membership of a neutrosophic set, while a falsity-membership is used for the downward trend. The information entropy of the high-order fluctuation time series is introduced to describe the inconsistency of historical fluctuations and is mapped to the indeterminacy-membership of the neutrosophic set. Finally, an existing similarity measurement method for neutrosophic sets is used to find similar states during the forecasting stage, and a weighted arithmetic averaging (WAA) aggregation operator is applied to obtain the forecasting result according to the corresponding similarity. Compared to existing forecasting models, the neutrosophic forecasting model based on information entropy (NFM-IE) can represent both fluctuation trend and fluctuation consistency information. To test its performance, we used the proposed model to forecast several real-world time series, such as the Taiwan Stock Exchange Capitalization Weighted Stock Index (TAIEX), the Shanghai Stock Exchange Composite Index (SHSECI), and the Hang Seng Index (HSI). The experimental results show that the proposed model predicts stably across different datasets. At the same time, comparison of the prediction errors with those of other approaches shows that the model has outstanding prediction accuracy and universality.
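The pipeline described above (fluctuation series, then truth/falsity memberships from the trend direction, then entropy-based indeterminacy) can be sketched roughly as follows. This is a simplified illustration of the general idea, not the NFM-IE model's exact formulas; the normalisation and the sliding-window entropy are assumptions made for the sketch.

```python
import numpy as np

def neutrosophic_features(series, order=3):
    """Map a time series to per-step (truth, indeterminacy, falsity) values.

    Truth/falsity come from the sign and relative size of each fluctuation;
    indeterminacy is the Shannon entropy of the up/down pattern over the last
    `order` fluctuations (maximal when ups and downs are evenly mixed).
    """
    x = np.asarray(series, dtype=float)
    fluct = np.diff(x)                        # fluctuation time series
    scale = float(np.mean(np.abs(fluct))) or 1.0
    g = np.clip(fluct / (2.0 * scale), -1.0, 1.0)
    truth = np.where(g > 0, g, 0.0)           # upward-trend membership
    falsity = np.where(g < 0, -g, 0.0)        # downward-trend membership

    indet = np.zeros_like(fluct)
    ups = (fluct > 0).astype(float)
    for t in range(len(fluct)):
        w = ups[max(0, t - order + 1): t + 1]
        p = w.mean()                          # share of upward moves in window
        if 0.0 < p < 1.0:                     # entropy of the up/down mix
            indet[t] = -p * np.log2(p) - (1 - p) * np.log2(1 - p)
    return truth, indet, falsity

truth, indet, falsity = neutrosophic_features([10, 11, 10, 12, 11, 13], order=3)
```

In the full model these triples would feed a neutrosophic-set similarity measure and the WAA operator at the forecasting stage; the sketch stops at the feature construction.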

