Forecasting Variation Trends of Stocks via Multiscale Feature Fusion and Long Short-Term Memory Learning

2021
Vol 2021
pp. 1-9
Author(s):
Yezhen Liu
Xilong Yu
Yanhua Wu
Shuhong Song

Forecasting stock price trends accurately is a huge challenge because the environment of stock markets is extremely stochastic and complicated. This challenge persistently motivates us to seek reliable pathways to guide stock trading. Because the Long Short-Term Memory (LSTM) network has a dedicated gate structure well suited to prediction based on contextual features, we propose a novel LSTM-based model. We also devise a multiscale convolutional feature fusion mechanism for the model to extensively exploit the contextual relationships hidden in consecutive time steps. The significance of the designed scheme is twofold. (1) Benefiting from the gate structure designed for both long- and short-term memories, our model can use the given stock history data more adaptively than traditional models, which safeguards prediction performance in financial time series (FTS) scenarios and thus benefits the prediction of stock trends. (2) The multiscale convolutional feature fusion mechanism diversifies the feature representation and captures the essence of FTS features more extensively than traditional models, which improves generalizability. Empirical studies conducted on three classic stock history data sets, i.e., S&P 500, DJIA, and VIX, demonstrated the superior effectiveness and stability of the suggested method over several state-of-the-art models under multiple validity indices. For example, our method achieved the highest average directional accuracy (around 0.71) on the three employed stock data sets.
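The multiscale convolutional feature fusion idea can be illustrated with a minimal NumPy sketch (the filter widths, random filters, and function names below are illustrative assumptions, not the authors' implementation): convolve the same series with filters of several widths and stack the per-time-step responses.

```python
import numpy as np

def conv1d_same(x, kernel):
    # 'same'-length 1-D convolution via zero padding
    k = len(kernel)
    pad = k // 2
    xp = np.pad(x, (pad, k - 1 - pad))
    return np.array([np.dot(xp[i:i + k], kernel) for i in range(len(x))])

def multiscale_fuse(x, kernel_sizes=(3, 5, 7), seed=0):
    # one (here random) filter per scale; stack responses per time step
    rng = np.random.default_rng(seed)
    feats = [conv1d_same(x, rng.standard_normal(k)) for k in kernel_sizes]
    return np.stack(feats, axis=-1)        # shape (T, n_scales)

prices = np.sin(np.linspace(0, 6, 50))     # toy "stock" series
fused = multiscale_fuse(prices)
print(fused.shape)                         # (50, 3)
```

In a trained model, the random filters would be learned jointly with the LSTM; the fused (T, n_scales) tensor is what feeds the gated recurrent layers.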

2018
Vol 10 (12)
pp. 168781401881718
Author(s):
Wentao Mao
Jianliang He
Jiamei Tang
Yuan Li

For the bearing remaining useful life prediction problem, traditional machine-learning-based methods generally lack feature-representation ability and are incapable of adaptive feature extraction. Although the deep-learning-based remaining useful life prediction methods proposed in recent years can effectively extract discriminative features of bearing faults, they tend to overlook the temporal information of the fault degradation process. To solve this problem, a new remaining useful life prediction approach based on deep feature representation and a long short-term memory neural network is proposed in this article. First, a new criterion, named the support vector data normalized correlation coefficient, is proposed to automatically divide the whole bearing life into a normal state and a fast-degradation state. Second, deep features of bearing faults with good representation ability are obtained from a convolutional neural network by means of the marginal spectrum of the Hilbert–Huang transform of the raw vibration signals together with the health-state label. Finally, to account for the temporal information of the degradation process, these features are fed into a long short-term memory neural network to construct a remaining useful life prediction model. Experiments are conducted on the bearing data sets of the IEEE PHM Challenge 2012. The results show significant performance improvement of the proposed method in terms of predictive accuracy and numerical stability.
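The state-division step can be sketched schematically. The toy criterion below is an assumption for illustration, not the authors' support vector data normalized correlation coefficient: it flags the first window whose normalized correlation with a healthy-state reference drops below a threshold as the start of the fast-degradation state.

```python
import numpy as np

def norm_corr(a, b):
    # normalized correlation coefficient between two feature vectors
    a = a - a.mean()
    b = b - b.mean()
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def split_point(windows, ref, thresh=0.9):
    # first window whose correlation with the healthy reference falls
    # below `thresh` marks the start of the fast-degradation state
    for i, w in enumerate(windows):
        if norm_corr(w, ref) < thresh:
            return i
    return len(windows)

rng = np.random.default_rng(1)
healthy = rng.standard_normal(16)                     # healthy-state feature vector
windows = [healthy + 0.05 * rng.standard_normal(16) for _ in range(20)]   # normal state
windows += [healthy + 2.0 * rng.standard_normal(16) for _ in range(5)]    # degraded
idx = split_point(np.array(windows), healthy)
```

The detected split index `idx` then labels each window as normal or fast-degradation before the CNN/LSTM stages are trained.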


Water
2019
Vol 11 (7)
pp. 1387
Author(s):
Le
Ho
Lee
Jung

Flood forecasting is an essential requirement in integrated water resource management. This paper proposes a Long Short-Term Memory (LSTM) neural network model for flood forecasting, with daily discharge and rainfall used as input data. Moreover, characteristics of the data sets that may influence model performance were also of interest. The Da River basin in Vietnam was chosen as the study area, and two different combinations of input data sets from before 1985 (when the Hoa Binh dam was built) were used for one-, two-, and three-day-ahead flowrate forecasting at Hoa Binh Station. The predictive ability of the model is quite impressive: the Nash–Sutcliffe efficiency (NSE) reached 99%, 95%, and 87% for the three forecasting horizons, respectively. The findings of this study suggest a viable option for flood forecasting on the Da River in Vietnam, where the river basin stretches across several countries and downstream flows (in Vietnam) may fluctuate suddenly due to flood discharge from upstream hydroelectric reservoirs.
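The skill metric reported above, the Nash–Sutcliffe efficiency, is easy to compute directly; a minimal sketch with toy discharge values (not the paper's data):

```python
import numpy as np

def nse(obs, sim):
    # Nash–Sutcliffe efficiency: 1 - SSE / variance of the observations;
    # 1.0 is a perfect forecast, 0.0 matches the observed-mean baseline
    obs = np.asarray(obs, dtype=float)
    sim = np.asarray(sim, dtype=float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

obs = np.array([10.0, 12.0, 15.0, 14.0, 11.0])   # toy daily flowrates
perfect = nse(obs, obs)                           # 1.0
baseline = nse(obs, np.full(5, obs.mean()))       # 0.0
```

An NSE of 0.99, as reported for the one-day-ahead case, thus means the model explains 99% of the observed flowrate variance.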


2020
Vol 224 (1)
pp. 669-681
Author(s):
Sihong Wu
Qinghua Huang
Li Zhao

SUMMARY Late-time transient electromagnetic (TEM) data contain deep subsurface information and are important for resolving deeper electrical structures. However, due to their relatively small signal amplitudes, TEM responses later in time are often dominated by ambient noise. Therefore, noise removal is critical to the application of TEM data in imaging electrical structures at depth. De-noising techniques for TEM data have developed rapidly in recent years. Although strong efforts have been made to improve the quality of TEM responses, it remains a challenge to effectively extract the signals in the presence of unpredictable and irregular noise. In this study, we develop a new type of neural network architecture by combining the long short-term memory (LSTM) network with the autoencoder structure to suppress noise in TEM signals. The resulting LSTM-autoencoder yields excellent performance on synthetic data sets, including horizontal components of the electric field and the vertical component of the magnetic field generated by different sources such as dipole, loop and grounded-line sources. The relative errors between the de-noised data sets and the corresponding noise-free transients are below 1% at most of the sampling points. Notable improvement in the resistivity structure inversion result is achieved using the TEM data de-noised by the LSTM-autoencoder in comparison with several widely used neural networks, especially for later-arriving signals that are important for constraining deeper structures. We demonstrate the effectiveness and general applicability of the LSTM-autoencoder through de-noising experiments using synthetic 1-D and 3-D TEM signals as well as field data sets. The field data from a fixed-loop survey with multiple receivers are greatly improved after de-noising by the LSTM-autoencoder, resulting in more consistent inversion models with significantly increased exploration depth. The LSTM-autoencoder is thus capable of enhancing the quality of TEM signals at later times, which enables us to better resolve deeper electrical structures.
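The relative-error criterion quoted above (below 1% at most sampling points) can be checked directly. The sketch below uses an idealized power-law decay as a stand-in for a noise-free transient; the curve and the 0.5% perturbation are toy assumptions, not TEM data from the study.

```python
import numpy as np

def relative_error(denoised, clean):
    # pointwise relative error between de-noised and noise-free transients
    return np.abs(denoised - clean) / (np.abs(clean) + 1e-30)

t = np.logspace(-5, -2, 40)          # toy time gates (s)
clean = t ** -1.5                    # idealized late-time decay curve
denoised = clean * (1 + 0.005 * np.sin(10 * t / t[0]))   # within 0.5% of clean
err = relative_error(denoised, clean)
print(bool(err.max() < 0.01))        # True: every sample is under 1%
```

A de-noiser meeting the paper's criterion would keep `err` below 0.01 across the whole transient, including the small-amplitude late-time gates.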


Author(s):
Debani Prasad Mishra
Sanhita Mishra
Rakesh Kumar Yadav
Rishabh Vishnoi
Surender Reddy Salkuti

For a power supplier, maintaining the demand-supply equilibrium is of utmost importance. Electrical energy must be generated according to demand, because large amounts of electrical energy cannot be stored. For the proper functioning of a power supply system, an adequate load-prediction model is a necessity. In today's world, growing digitization and automation are prominent features of almost every industry, whether healthcare, agriculture, or consulting. As a result, large data sets related to these industries are being generated, which, when subjected to rigorous analysis, yield novel methods to optimize the business and services offered. This paper aims to ascertain the viability of long short-term memory (LSTM) neural networks, a type of recurrent neural network capable of handling both long-term and short-term dependencies in data sets, for predicting the load to be met by a Dispatch Center located in a major city. The results show appreciable accuracy in forecasting future demand.
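Load forecasting with an LSTM requires turning the load history into supervised (window, target) pairs. A minimal sliding-window sketch (the window length and toy series are assumptions, not the paper's configuration):

```python
import numpy as np

def make_windows(series, lookback, horizon=1):
    # supervised pairs: `lookback` past loads -> load `horizon` steps ahead
    X, y = [], []
    for i in range(len(series) - lookback - horizon + 1):
        X.append(series[i:i + lookback])
        y.append(series[i + lookback + horizon - 1])
    return np.array(X), np.array(y)

load = np.arange(10.0)                  # toy hourly demand series
X, y = make_windows(load, lookback=3)
print(X.shape, y.shape)                 # (7, 3) (7,)
```

Each row of `X` would be fed to the recurrent layer as one input sequence, with the matching entry of `y` as the load to predict.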


2019
Vol 11 (11)
pp. 237
Author(s):
Jingren Zhang
Fang’ai Liu
Weizhi Xu
Hui Yu

Convolutional neural networks (CNN) and long short-term memory (LSTM) networks have gained wide recognition in the field of natural language processing. However, because of the forward and backward dependencies in natural language structure, relying solely on a CNN for text categorization ignores the contextual meaning of words; we therefore combine a CNN with a bidirectional recurrent structure in a feature fusion model. The feature fusion model is divided into a multiple-attention (MATT) CNN model and a bidirectional gated recurrent unit (BiGRU) model. The CNN model takes as input word vectors labeled by the attention mechanism (word-vector attention, part-of-speech attention, and position attention) and obtains the influence intensity of the target keyword on the sentiment polarity of the sentence, forming the first dimension of the sentiment classification. The BiGRU model replaces the original bidirectional long short-term memory (BiLSTM) network and extracts the global semantic features at the sentence level to form the second dimension of the sentiment classification. Then, using PCA to reduce the dimension of the two-dimensional fusion vector, we finally obtain a classification result combining the two dimensions of keywords and sentences. The experimental results show that the proposed MATT-CNN+BiGRU fusion model achieves 5.94% and 11.01% higher classification accuracy on the MRD and SemEval2016 datasets, respectively, than the mainstream CNN+BiLSTM method.
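The PCA dimension-reduction step applied to the fused vector can be sketched with a minimal SVD-based implementation (the matrix sizes and random features are toy assumptions, not the paper's fused keyword/sentence vectors):

```python
import numpy as np

def pca_reduce(X, n_components):
    # centre the fused feature matrix, then project onto the top
    # right singular vectors (the principal components)
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T

rng = np.random.default_rng(0)
fused = rng.standard_normal((100, 8))   # toy fused feature vectors
reduced = pca_reduce(fused, 2)
print(reduced.shape)                    # (100, 2)
```

The reduced vectors would then go to the final classifier, combining the keyword-level and sentence-level dimensions in a compact representation.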

