Modeling Analysis and Comparison of Neural Network Simulation Based on ECM and LSTM

2021 ◽  
Vol 2068 (1) ◽  
pp. 012041
Author(s):  
Lingyun Duan ◽  
Ziyuan Liu ◽  
Wen Yu ◽  
Wei Chen ◽  
Dongyan Jin ◽  
...  

Abstract Comparing the prediction effects of a traditional econometric model and a deep learning model, and taking regional GDP as an example, two prediction models, ARMA-ECM and LSTM-SVR, are established, and their prediction results are compared and analyzed. The results show some deviation between the predictions of the two models, but the predicted trends are the same. The prediction accuracy of the LSTM-SVR model decreases significantly as the number of time series samples is reduced, while the ARMA-ECM model is less sensitive.
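The autoregressive core of the ARMA side can be illustrated with a minimal least-squares fit of an AR(1) coefficient; the series and names below are invented for the sketch, not the paper's data.

```python
def fit_ar1(series):
    # least-squares estimate of phi in the AR(1) model x_t = phi * x_{t-1}
    num = sum(series[t] * series[t - 1] for t in range(1, len(series)))
    den = sum(series[t - 1] ** 2 for t in range(1, len(series)))
    return num / den

series = [1.0, 0.9, 0.81, 0.729]  # toy geometric series with phi = 0.9
phi = fit_ar1(series)
one_step_forecast = phi * series[-1]  # next-period prediction
```

A full ARMA-ECM model adds moving-average and error-correction terms on top of this autoregressive backbone.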

2020 ◽  
Vol 9 (2) ◽  
pp. 135-142
Author(s):  
Di Mokhammad Hakim Ilmawan ◽  
Budi Warsito ◽  
Sugito Sugito

Bitcoin is a digital asset that can be used to make a profit, and one way to profit from Bitcoin is to trade it. In trading, the decision whether or not to buy is crucial: if we can predict the price of Bitcoin in a future period, we can decide whether to buy. An Artificial Neural Network can be used to predict Bitcoin prices, which are time series data. Among the many learning algorithms for Artificial Neural Networks, the Modified Artificial Bee Colony is an optimization algorithm used to find the optimal network weights. This study uses the Bitcoin exchange rate against the Rupiah from September 1, 2017 to January 4, 2019. Training yields a MAPE of 3.12% and testing a MAPE of 2.02%, indicating that the predictions of the Artificial Neural Network optimized by the Modified Artificial Bee Colony algorithm are quite accurate, given the small MAPE values.
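The accuracy metric the abstract reports, MAPE, can be sketched directly; the actual/predicted values below are illustrative, not the study's data.

```python
def mape(actual, predicted):
    """Mean Absolute Percentage Error, in percent; actual values must be non-zero."""
    errors = [abs((a - p) / a) for a, p in zip(actual, predicted)]
    return 100.0 * sum(errors) / len(errors)

actual = [100.0, 200.0, 400.0]
predicted = [98.0, 204.0, 392.0]
error_pct = mape(actual, predicted)  # each point is off by 2%, so MAPE is 2%
```

A MAPE of 2.02%, as reported for testing, means predictions deviate from the true price by about 2% on average.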


Computers ◽  
2020 ◽  
Vol 9 (4) ◽  
pp. 99
Author(s):  
Sultan Daud Khan ◽  
Louai Alarabi ◽  
Saleh Basalamah

COVID-19 caused the largest economic recession in history by placing more than one third of the world's population under lockdown. The prolonged restrictions on economic and business activities caused huge economic turmoil that significantly affected financial markets. To ease the growing pressure on the economy, scientists proposed intermittent lockdowns, commonly known as "smart lockdowns". Under a smart lockdown, areas that contain infected clusters of population, namely hotspots, are placed under lockdown, while economic activities are allowed to operate in uninfected areas. In this study, we propose a novel deep learning framework for the accurate prediction of hotspots. We exploit the benefits of two deep learning models, the Convolutional Neural Network (CNN) and Long Short-Term Memory (LSTM), and propose a hybrid framework that can extract multi-time-scale features from the convolutional layers of the CNN. The multi-time-scale features are then concatenated and provided as input to a 2-layer LSTM model, which identifies short-, medium-, and long-term dependencies by learning representations of the time-series data. We perform a series of experiments and compare the proposed framework with state-of-the-art statistical and machine-learning-based prediction models. The experimental results demonstrate that the proposed framework beats the existing methods by a clear margin.
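The idea of multi-time-scale features can be sketched with sliding windows of different widths over one series; the window sizes and data here are invented for illustration, and the real framework uses learned convolutional kernels rather than plain averages.

```python
def moving_average(series, window):
    # one "time scale": the average over a sliding window (valid positions only)
    return [sum(series[i:i + window]) / window
            for i in range(len(series) - window + 1)]

series = [1, 2, 3, 4, 5, 6]
# three time scales extracted in parallel, as a CNN with different
# kernel sizes would; their concatenation would feed the 2-layer LSTM
multi_scale = [moving_average(series, w) for w in (2, 3, 4)]
features = [v for scale in multi_scale for v in scale]
```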


2019 ◽  
Vol 16 (10) ◽  
pp. 4059-4063
Author(s):  
Ge Li ◽  
Hu Jing ◽  
Chen Guangsheng

Based on the consideration of complementary advantages, different wavelet, fractal, and statistical methods are integrated to extract classification features from time series. Combined with the advantage of process neural networks in handling time-varying information, we propose a fusion classifier based on a process neural network oriented to time series. By taking advantage of multi-fractal processing of the nonlinear features of time series, the strong adaptability of wavelet techniques to time series data, and the effect of statistical features on time series classification, we achieve the classification feature extraction of time series. Additionally, using the time-varying input characteristics of process neural networks, pattern matching of time-varying input information and a space-time aggregation operation are realized. The features extracted by the three methods above are fused into the distance calculation between time-varying inputs and cluster space in the process neural network. We provide a learning algorithm for the fused process neural network and optimize the computation of the time series classifier. Finally, we report the performance of our classification method on the Synthetic Control Chart data from the UCI repository and illustrate the advantages and validity of the proposed method.
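The statistical branch of such a feature-fusion pipeline can be sketched as a small descriptor vector per series; the particular features chosen here (mean, standard deviation, range) are an illustrative guess, not the paper's exact feature set.

```python
import math

def statistical_features(series):
    # statistical branch of the fusion: summarize a series as a short
    # feature vector usable in the classifier's distance calculation
    n = len(series)
    mean = sum(series) / n
    var = sum((x - mean) ** 2 for x in series) / n  # population variance
    return [mean, math.sqrt(var), max(series) - min(series)]

vec = statistical_features([2, 4, 4, 4, 5, 5, 7, 9])
```

The wavelet and fractal branches would contribute analogous vectors, and the concatenation of all three is what the process neural network compares against cluster centers.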


2021 ◽  
Vol 2021 ◽  
pp. 1-13
Author(s):  
Yao Li

Faults occurring in a production line can cause many losses. Predicting fault events before they occur, or identifying their causes, can effectively reduce such losses. A modern production line can provide enough data to address the problem; however, for complex industrial processes the problem becomes very difficult with traditional methods. In this paper, we propose a new approach based on a deep learning (DL) algorithm. First, we treat the process data as a spatial sequence following the production process, which differs from traditional time series data. Second, we improve the long short-term memory (LSTM) neural network in an encoder-decoder model to adapt to the branch structure corresponding to the spatial sequence; meanwhile, an attention mechanism (AM) algorithm is used for fault detection and cause identification. Third, instead of traditional binary classification, the output is defined as a sequence of fault types. The proposed approach has two advantages. On the one hand, treating the data as a spatial sequence rather than a time sequence overcomes multidimensionality problems and improves prediction accuracy. On the other hand, in the trained neural network, the weight vectors generated by the AM algorithm represent the correlation between faults and the input data, which can help engineers identify the causes of faults. The proposed approach is compared with several well-developed fault-diagnosis methods on the Tennessee Eastman process. Experimental results show that the approach has higher prediction accuracy and that the weight vector accurately labels the factors that cause faults.
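The attention weights that flag fault causes come from a softmax over alignment scores; this is a minimal sketch of that step, with invented scores, not the paper's trained model.

```python
import math

def attention_weights(scores):
    # softmax over alignment scores; the weights sum to 1, and a large
    # weight marks an input strongly correlated with the predicted fault
    m = max(scores)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

weights = attention_weights([2.0, 1.0, 0.1])  # first input dominates
</imports>```

Reading off the largest weights in the trained network is how engineers would trace a predicted fault back to the responsible input variables.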


2021 ◽  
Vol 10 (2) ◽  
pp. 870-878
Author(s):  
Zainuddin Z. ◽  
P. Akhir E. A. ◽  
Hasan M. H.

Time series data often involves a big-data environment that leads to high-dimensionality problems. Many industries generate time series data that updates continuously, every second. Machine learning can help manage such data: it can forecast future instances while handling large-data issues. Forecasting means predicting an upcoming event so that adverse circumstances in the current environment can be avoided. It helps sectors such as production foresee the state of a machine, saving the cost of sudden breakdowns, since unplanned machine failure can disrupt operations and cause losses of up to millions. Thus, this paper applies a deep learning algorithm, the recurrent neural network-gated recurrent unit (RNN-GRU), to forecast the state of machines producing time series data in the oil and gas sector. RNN-GRU is a member of the recurrent neural network (RNN) family that can handle sequential data thanks to its update and reset gates, which decide what information is kept in memory. RNN-GRU is a simpler structure than long short-term memory (RNN-LSTM) and achieved 87% prediction accuracy.
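The update/reset-gate mechanics the abstract describes can be sketched with a scalar GRU step; real implementations use weight matrices and bias terms, and the weights below are arbitrary illustrative numbers.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def gru_step(x, h, w):
    # scalar GRU cell: w holds the input/recurrent weights for the
    # update gate (z), reset gate (r), and candidate state (c)
    z = sigmoid(w["zx"] * x + w["zh"] * h)                # update gate: how much to rewrite
    r = sigmoid(w["rx"] * x + w["rh"] * h)                # reset gate: how much history to use
    h_cand = math.tanh(w["cx"] * x + w["ch"] * (r * h))   # candidate state
    return (1 - z) * h + z * h_cand                       # blend old state and candidate

w = {"zx": 1.0, "zh": 0.5, "rx": 1.0, "rh": 0.5, "cx": 1.0, "ch": 0.5}
h = 0.0
for x in [0.1, 0.2, 0.3]:  # toy input sequence
    h = gru_step(x, h, w)
```

The two gates are exactly what lets the GRU keep or discard memory; an LSTM adds a third gate and a separate cell state, which is why the GRU counts as the simpler structure.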


Author(s):  
Takeru Aoki ◽  
Keiki Takadama ◽  
Hiroyuki Sato

The cortical learning algorithm (CLA) is a time-series prediction method designed on the model of the human neocortex. The CLA has multiple columns that are associated with the input data bits by synapses, and the input data are converted into an internal column representation based on this synapse relation. Because the synapse relation between columns and input data bits is fixed for the entire prediction process in the conventional CLA, it cannot adapt to biases in the input data; consequently, columns that are never used for internal representations arise, resulting in low prediction accuracy. To improve the prediction accuracy of the CLA, we propose a CLA that self-adaptively arranges the column synapses according to input data tendencies, and we verify its effectiveness on several artificial time-series datasets and real-world electricity-load prediction data from New York City. Experimental results show that, by arranging column synapses according to the input data tendency, the proposed CLA achieves higher prediction accuracy than the conventional CLA and than LSTMs trained with different network optimization algorithms.
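CLA columns connect to the bits of a sparse encoding of the input; a minimal CLA-style scalar encoder can be sketched as below. The bit widths and the contiguous-run scheme are illustrative assumptions, not the paper's exact encoder.

```python
def encode_scalar(value, min_v, max_v, n_bits=12, n_active=3):
    # map a scalar into a sparse bit array: a contiguous run of n_active
    # bits whose position tracks the value; columns then form synapses
    # to these bits, and an input-dependent bias in the values skews
    # which bits (and hence which columns) ever become active
    span = n_bits - n_active
    start = round((value - min_v) / (max_v - min_v) * span)
    return [1 if start <= i < start + n_active else 0 for i in range(n_bits)]

bits = encode_scalar(5.0, 0.0, 10.0)
```

If the input distribution concentrates in one part of the range, only the bits there are ever set, which is exactly the bias that leaves some columns unused in the conventional CLA.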


Electronics ◽  
2019 ◽  
Vol 8 (8) ◽  
pp. 876 ◽  
Author(s):  
Renzhuo Wan ◽  
Shuping Mei ◽  
Jun Wang ◽  
Min Liu ◽  
Fan Yang

Multivariate time series prediction has been widely studied in power energy, aerology, meteorology, finance, transportation, and other fields. Traditional modeling methods involve complex patterns and are inefficient at capturing the long-term multivariate dependencies needed for the desired forecasting accuracy. To address these concerns, various deep learning models based on Recurrent Neural Network (RNN) and Convolutional Neural Network (CNN) methods have been proposed. To improve prediction accuracy and minimize the dependence on multivariate time series structure for aperiodic data, in this article the Beijing PM2.5 and ISO-NE datasets are analyzed with a novel Multivariate Temporal Convolution Network (M-TCN) model. In this model, multivariate time series prediction is cast as a sequence-to-sequence scenario for non-periodic datasets, and multichannel residual blocks in parallel with an asymmetric structure, based on a deep convolutional neural network, are proposed. The results are compared with strong competing algorithms, namely long short-term memory (LSTM), convolutional LSTM (ConvLSTM), the Temporal Convolution Network (TCN), and Multivariate Attention LSTM-FCN (MALSTM-FCN), and indicate significant improvements in the prediction accuracy, robustness, and generalization of our model.
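The building block of a TCN is the causal dilated convolution, sketched below for one channel; the kernel and dilation values are illustrative, and a full M-TCN stacks such layers inside multichannel residual blocks.

```python
def causal_dilated_conv(series, kernel, dilation):
    # causal: the output at time t sees only inputs at t, t-d, t-2d, ...
    # so no future values leak into the prediction; out-of-range taps
    # are zero-padded. Stacking layers with growing dilation gives an
    # exponentially wide receptive field for long-term dependencies.
    out = []
    for t in range(len(series)):
        acc = 0.0
        for k, wk in enumerate(kernel):
            j = t - k * dilation
            acc += wk * (series[j] if j >= 0 else 0.0)
        out.append(acc)
    return out

layer1 = causal_dilated_conv([1, 2, 3, 4], [1.0, 1.0], dilation=1)
```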


Information ◽  
2021 ◽  
Vol 12 (11) ◽  
pp. 442
Author(s):  
Seol-Hyun Noh

A recurrent neural network (RNN) combines variable-length input data with a hidden state that depends on previous time steps to generate output data. RNNs have been widely used in time-series data analysis, and various RNN algorithms have been proposed, such as the standard RNN, long short-term memory (LSTM), and gated recurrent units (GRUs). In particular, it has been experimentally shown that LSTM and GRU achieve higher validation and prediction accuracy than the standard RNN. Learning ability is a measure of how effectively the gradient of the error information is backpropagated. This study provides a theoretical and experimental basis for the result that LSTM and GRU have more efficient gradient descent than the standard RNN, by analyzing, and experimenting with, gradient vanishing in the standard RNN, LSTM, and GRU. The analysis shows that LSTM and GRU are robust to the degradation of gradient descent even when learning long-range input data, which means that their learning ability on such data exceeds that of the standard RNN; this is why LSTM and GRU attain higher validation and prediction accuracy. In addition, it was verified that the experimental results of river-level prediction models, solar-power-generation prediction models, and speech-signal models using the standard RNN, LSTM, and GRUs are consistent with the gradient-vanishing analysis.
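The gradient-vanishing argument can be illustrated numerically: the backpropagated error through T standard-RNN steps shrinks like a product of per-step factors |w · tanh′(a)|, while the gated (LSTM/GRU) cell path keeps a multiplicative factor near 1. The weights and sequence length below are toy numbers, not a trained network.

```python
import math

def tanh_prime(x):
    # derivative of tanh, always <= 1 and < 1 away from x = 0
    return 1.0 - math.tanh(x) ** 2

w, a, T = 0.9, 0.5, 50          # recurrent weight, pre-activation, sequence length
rnn_gradient = (w * tanh_prime(a)) ** T   # shrinks geometrically: vanishes
gated_gradient = 0.99 ** T                # near-unity forget-gate path: survives
```

After 50 steps the standard-RNN factor is effectively zero while the gated path retains most of its magnitude, which is the mechanism behind the higher accuracy of LSTM and GRU on long-range inputs.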


A new open-box, nonlinear model, the cosine and sigmoid higher order neural network (CS-HONN), is presented in this chapter. A new learning algorithm for CS-HONN is also developed, and a time series data simulation and analysis system, the CS-HONN simulator, is built on the CS-HONN models. Test results show that the average error of the CS-HONN models ranges from 2.3436% to 4.6857%, while the average errors of the polynomial higher order neural network (PHONN), trigonometric higher order neural network (THONN), and sigmoid polynomial higher order neural network (SPHONN) models range from 2.8128% to 4.9077%. This suggests that the CS-HONN models are 0.1174% to 0.4917% better than the PHONN, THONN, and SPHONN models.


Author(s):  
Ming Zhang

A new open-box, nonlinear model of the Cosine and Sigmoid Higher Order Neural Network (CS-HONN) is presented in this paper. A new learning algorithm for CS-HONN is also developed in this study, and a time series data simulation and analysis system, the CS-HONN Simulator, is built on the CS-HONN models. Test results show that the average error of the CS-HONN models ranges from 2.3436% to 4.6857%, while the average errors of the Polynomial Higher Order Neural Network (PHONN), Trigonometric Higher Order Neural Network (THONN), and Sigmoid polynomial Higher Order Neural Network (SPHONN) models range from 2.8128% to 4.9077%. This means that the CS-HONN models are 0.1174% to 0.4917% better than the PHONN, THONN, and SPHONN models.
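The "open box" character of a HONN comes from its output being an explicit weighted sum of basis terms. Below is a minimal second-order sketch mixing cosine and sigmoid terms; the specific basis chosen here is an illustrative guess at the CS-HONN form, not the paper's exact model.

```python
import math

def cs_honn_output(x, weights):
    # explicit basis: constant, cosine, sigmoid, and their second-order
    # product; the learned weights stay directly interpretable ("open box")
    sig = 1.0 / (1.0 + math.exp(-x))
    basis = [1.0, math.cos(x), sig, math.cos(x) * sig]
    return sum(w * b for w, b in zip(weights, basis))

y = cs_honn_output(0.0, [1.0, 1.0, 1.0, 1.0])
```

Training such a model means fitting the weight vector, e.g. by gradient descent on a squared-error loss over the time series.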

