LTR-MDTS structure - a structure for multiple dependent time series prediction

2017 ◽  
Vol 14 (2) ◽  
pp. 467-490 ◽  
Author(s):  
Predrag Pecev ◽  
Milos Rackovic

The subject of the research presented in this paper is to model a neural network structure and an appropriate training algorithm best suited to multiple dependent time series prediction / deduction. The basic idea is to take advantage of neural networks in solving the problem of predicting the synchronized movement of basketball referees during a basketball action. Representing the time series stemming from this problem with traditional Multilayer Perceptron (MLP) neural networks leads to a paradoxical backward time lapse effect, in which certain input and hidden layer nodes influence output nodes that correspond to earlier moments in time. This paper describes the research and analysis conducted on different methods of overcoming this problem. The paper is essentially split into two parts. The first part describes efforts put into training set configuration for standard MLP backpropagation neural networks in order to decrease the backward time lapse effect that certain input and hidden layer nodes have on output nodes. The second part focuses on the results provided by a new neural network structure called LTR-MDTS. The design of LTR-MDTS builds on a standard MLP neural network with selective left-to-right synapse removal to eliminate the backward time lapse effect on the output nodes.
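The core of the LTR-MDTS idea described above is that synapses are removed so that outputs for earlier time steps cannot depend on inputs for later time steps. Below is a minimal sketch of that left-to-right masking idea in Python/NumPy; the layer sizes, the block-triangular mask, and the forward pass are illustrative assumptions, not the authors' exact architecture.

```python
# A toy left-to-right masked layer: outputs for time step t only see inputs
# from time steps <= t. Sizes and the masking scheme are illustrative assumptions.
import numpy as np

T, d_in, d_out = 4, 3, 2                      # time steps, features and outputs per step
rng = np.random.default_rng(0)
W = rng.normal(size=(T * d_in, T * d_out))    # dense MLP-style weight matrix

# Block lower-triangular mask: input block i may connect to output block j only if i <= j
mask = np.zeros_like(W)
for i in range(T):
    for j in range(T):
        if i <= j:
            mask[i * d_in:(i + 1) * d_in, j * d_out:(j + 1) * d_out] = 1.0

x = rng.normal(size=(T * d_in,))              # one flattened multi-step input
h = np.tanh(x @ (W * mask))                   # masked synapses remove "future -> past" influence
print(h.reshape(T, d_out))
```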

Author(s):  
Muhammad Faheem Mushtaq ◽  
Urooj Akram ◽  
Muhammad Aamir ◽  
Haseeb Ali ◽  
Muhammad Zulqarnain

Predicting time series is important because many prediction problems, such as health prediction, climate change prediction, and weather prediction, include a time component. Over the years, various techniques have been developed to solve the time series prediction problem and enhance forecasting accuracy. This paper presents a review of the prediction of physical time series applications using neural network models. Neural Networks (NN) have emerged as an effective tool for forecasting time series. Moreover, to address problems related to time series data, there is a need for a network with a single layer of trainable weights, namely the Higher Order Neural Network (HONN), which can perform nonlinear input-output mapping. Developers are therefore focusing on HONN, which has recently been considered as a way to broaden the input representation space. The functional mapping ability of the HONN model has been demonstrated on several time series problems, and it shows more benefits compared to conventional Artificial Neural Networks (ANN). The goal of this research is to make the reader aware of HONN for physical time series prediction and to highlight some of the benefits and challenges of using HONN.
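As a concrete illustration of the single-trainable-layer idea behind HONN described above, the sketch below expands the inputs with second-order (pairwise product) terms and trains only one linear layer on the expanded representation. The expansion order, learning rate, and toy target are assumptions for illustration, not taken from the review.

```python
# A minimal second-order (functional-link style) network: nonlinearity comes from
# the input expansion, so only a single layer of weights is trained.
import numpy as np
from itertools import combinations_with_replacement

def expand_second_order(x):
    """Append all pairwise products x_i * x_j to the raw inputs."""
    pairs = [x[i] * x[j] for i, j in combinations_with_replacement(range(len(x)), 2)]
    return np.concatenate([x, pairs])

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 3))
y = X[:, 0] * X[:, 1] + 0.5 * X[:, 2]          # toy target with an interaction term

Phi = np.array([expand_second_order(x) for x in X])
w = np.zeros(Phi.shape[1])
for _ in range(500):                           # plain gradient descent on squared error
    grad = Phi.T @ (Phi @ w - y) / len(y)
    w -= 0.1 * grad
print("training MSE:", np.mean((Phi @ w - y) ** 2))
```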


2020 ◽  
Vol 12 (6) ◽  
pp. 21-32
Author(s):  
Muhammad Zulqarnain ◽  
Rozaida Ghazali ◽  
Muhammad Ghulam Ghouse ◽  
Yana Mazwin Mohmad Hassim ◽  
...  

Financial time-series prediction has long been one of the most challenging issues in financial market analysis. Deep neural networks are an excellent data mining approach that has received great attention from researchers in several areas of time-series prediction over the last ten years. Convolutional neural network (CNN) and recurrent neural network (RNN) models have become mainstream methods for financial prediction. In this paper, we propose a combined architecture that exploits the advantages of CNN and RNN simultaneously for the prediction of trading signals. Our model essentially passes the financial time series through a CNN layer, whose output is fed directly into a gated recurrent unit (GRU) layer to capture long-term dependencies in the signals. The GRU model performs better on sequential learning tasks and alleviates the vanishing and exploding gradient issues of standard RNNs. We evaluate our model on three stock index datasets, the Hang Seng Index (HSI), the Deutscher Aktienindex (DAX), and the S&P 500 Index, covering 2008 to 2016, and compare the GRU-CNN based approach with existing deep learning models. Experimental results show that the proposed GRU-CNN model obtained the best prediction accuracy: 56.2% on the HSI dataset, 56.1% on the DAX dataset, and 56.3% on the S&P 500 dataset, respectively.
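A minimal sketch of the CNN-followed-by-GRU arrangement described above, written with tf.keras; the window length, filter counts, unit counts, and three-class signal output are illustrative assumptions rather than the paper's exact configuration.

```python
# A minimal CNN + GRU trading-signal classifier sketch, assuming input windows of
# shape (window_len, n_features); all layer sizes are illustrative, not the paper's.
import tensorflow as tf

def build_cnn_gru(window_len=30, n_features=5, n_classes=3):
    inputs = tf.keras.Input(shape=(window_len, n_features))
    # 1-D convolution extracts local patterns from the price/indicator window
    x = tf.keras.layers.Conv1D(filters=32, kernel_size=3, activation="relu")(inputs)
    x = tf.keras.layers.MaxPooling1D(pool_size=2)(x)
    # GRU captures longer-range dependencies in the convolved sequence
    x = tf.keras.layers.GRU(64)(x)
    outputs = tf.keras.layers.Dense(n_classes, activation="softmax")(x)
    model = tf.keras.Model(inputs, outputs)
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

model = build_cnn_gru()
model.summary()
```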


Complexity ◽  
2021 ◽  
Vol 2021 ◽  
pp. 1-10
Author(s):  
Lingfeng Wang

A TV show rating analysis and prediction system can collect and transmit information more quickly and upload it to the database. The convolutional neural network is a multilayer neural network structure that simulates the operating mechanism of biological vision systems; it is composed of multiple convolutional layers and downsampling layers connected in sequence. It can obtain useful feature descriptions from raw data and is an effective method for extracting features from data. At present, convolutional neural networks have become a research hotspot in speech recognition, image recognition and classification, natural language processing, and other fields, and have been widely and successfully applied in these fields. Therefore, this paper introduces a convolutional neural network structure to predict TV program rating data. First, it briefly introduces artificial neural networks and deep learning methods, focusing on the algorithmic principles of convolutional neural networks and support vector machines. Then, we adapt the convolutional neural network to fit the TV program rating data and finally apply the two prediction models to TV program rating prediction. The improved convolutional neural network rating prediction model combines the CNN's ability to extract effective features with its strong classification and prediction capabilities to improve prediction accuracy. Through simulation comparison, we verify the feasibility and effectiveness of the TV program rating prediction model given in this article.
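Since the paper compares a convolutional model with a support vector machine on rating data, the sketch below shows how such a baseline comparison might be set up: a rating series is turned into lag windows and a support vector regression baseline is fitted. The synthetic series, window length, and SVR parameters are assumptions for illustration only.

```python
# A minimal sketch of windowing a rating series and fitting an SVR baseline,
# assuming a 1-D array `ratings` of rating values; parameters are illustrative.
import numpy as np
from sklearn.svm import SVR

def make_windows(series, window=14):
    """Turn a rating series into (lag-window, next-value) pairs."""
    X = np.array([series[i:i + window] for i in range(len(series) - window)])
    y = np.array(series[window:])
    return X, y

rng = np.random.default_rng(0)
ratings = np.sin(np.linspace(0, 20, 400)) + rng.normal(0, 0.1, 400)  # synthetic stand-in
X, y = make_windows(ratings)
svr = SVR(kernel="rbf", C=10.0).fit(X[:300], y[:300])
print("SVR test MAE:", np.mean(np.abs(svr.predict(X[300:]) - y[300:])))
```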


2017 ◽  
Vol 26 (4) ◽  
pp. 625-639 ◽  
Author(s):  
Gang Wang

Currently, most artificial neural networks (ANNs), such as the back-propagation neural network, represent relations in the manner of functional approximation. This kind of ANN is good at representing numeric relations or ratios between things. However, these ANNs are at a disadvantage when representing logical relations, because their representation takes the form of ratios. Therefore, to represent logical relations directly, we propose a novel ANN model called the probabilistic logical dynamical neural network (PLDNN). Inhibitory links are introduced that connect to excitatory links rather than to neurons, so as to inhibit the connected excitatory links conditionally and make them represent logical relations correctly. Probabilities are assigned to the weights of links to indicate the degree of belief in logical relations under uncertain situations. Moreover, the network structure of PLDNN is less topologically limited than that of traditional ANNs, and it is built dynamically and entirely from the data, which makes it adaptive. PLDNN uses both the weights of links and the interconnection structure to memorize more information. The model could be applied to represent logical relations as a complement to numeric ANNs.
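To make the idea of inhibitory links attaching to excitatory links more concrete, here is a toy sketch in Python. The data structures, the gating rule, and the belief handling are illustrative assumptions, not the authors' exact formulation.

```python
# Toy sketch of the PLDNN idea: inhibitory links attach to excitatory links
# (not to neurons) and conditionally block them; each link carries a belief
# probability. All names and rules here are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class ExcitatoryLink:
    src: str
    dst: str
    belief: float                                    # probability the logical relation holds
    inhibitors: list = field(default_factory=list)   # neurons that gate this link

    def fires(self, active: set) -> bool:
        """Link transmits only if its source is active and no inhibitor is active."""
        return self.src in active and not any(n in active for n in self.inhibitors)

# "A and not B -> C" encoded as one excitatory link A->C inhibited by B
link = ExcitatoryLink("A", "C", belief=0.9, inhibitors=["B"])
print(link.fires({"A"}))        # True  (C is inferred with belief 0.9)
print(link.fires({"A", "B"}))   # False (inhibitor blocks the excitatory link)
```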


2009 ◽  
Vol 19 (06) ◽  
pp. 437-448 ◽  
Author(s):  
MD. ASADUZZAMAN ◽  
MD. SHAHJAHAN ◽  
KAZUYUKI MURASE

Multilayer feed-forward neural networks are widely used and are trained by minimizing an error function. Back-propagation (BP) is a well-known training method for multilayer networks, but it often suffers from slow convergence. To make learning faster, we propose the 'Fusion of Activation Functions' (FAF), in which different conventional activation functions (AFs) are combined to compute the final activation. This has not yet been studied extensively. One of the sub-goals of the paper is to examine the role of linear AFs in the combination. We investigate whether FAF can make learning faster. The validity of the proposed method is examined by performing simulations on nine challenging real benchmark classification and time series prediction problems. FAF has been applied to the 2-bit, 3-bit and 4-bit parity, breast cancer, diabetes, heart disease, iris, wine, glass, and soybean classification problems. The algorithm is also tested on the Mackey-Glass chaotic time series prediction problem. The algorithm is shown to work better than other AFs used independently in BP, such as the sigmoid (SIG), arctangent (ATAN), and logarithmic (LOG) functions.
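The central idea described above is to compute a unit's final activation as a combination of several conventional activation functions. Below is a minimal sketch of one such fused activation in Python/NumPy; the equal weighting and the particular member functions (including the bounded logarithmic form) are illustrative assumptions, not the authors' exact formulation.

```python
# A minimal "fusion of activation functions" sketch: the final activation is a
# weighted mix of conventional AFs. Weights and member functions are assumptions.
import numpy as np

def sig(x):    return 1.0 / (1.0 + np.exp(-x))
def atan(x):   return np.arctan(x)
def log_af(x): return np.sign(x) * np.log1p(np.abs(x))   # a bounded logarithmic AF

def fused_activation(x, weights=(1/3, 1/3, 1/3)):
    """Combine several conventional activations into one final activation."""
    w1, w2, w3 = weights
    return w1 * sig(x) + w2 * atan(x) + w3 * log_af(x)

print(fused_activation(np.array([-2.0, 0.0, 2.0])))
```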


Author(s):  
Hao-Yun Chen

Traditionally, software programmers write a series of hard-coded rules to instruct a machine, step by step. However, with the ubiquity of neural networks, instead of giving specific instructions, programmers can write a skeleton of code to build a neural network structure, and then feed the machine with data sets, in order to have the machine write code by itself. Software containing the code written in this manner changes and evolves over time as new data sets are input and processed. This characteristic distinguishes it markedly from traditional software, and is partly the reason why it is referred to as ‘software 2.0’. Yet the vagueness of the scope of such software might make it ineligible for protection by copyright law. To properly understand and address this issue, this chapter will first review the current scope of computer program protection under copyright laws, and point out the potential inherent issues arising from the application of copyright law to software 2.0. After identifying related copyright law issues, this chapter will then examine the possible justification for protecting computer programs in the context of software 2.0, aiming to explore whether new exclusivity should be granted or not under copyright law, and if not, what alternatives are available to provide protection for the investment in the creation and maintenance of software 2.0.


Author(s):  
Atsushi Shibata ◽  
Fangyan Dong ◽  
Kaoru Hirota ◽  

A hierarchical force-directed graph drawing method is proposed for analyzing a neural network structure that expresses the relationship between multiple tasks and the processes in neural networks, represented as neuron clusters. The process revealed by our proposal indicates which neurons are related to each task and how many neurons or learning epochs are sufficient. The proposal is evaluated by visualizing neural networks trained on the Mixed National Institute of Standards and Technology (MNIST) database of handwritten digits. The results show that inactive neurons, namely those that do not have a close relationship with any task, are located on the periphery of the visualized network, and that cutting half of the training data for one specific task (out of ten) causes a 15% increase in the variance of neurons in the clusters that react to that specific task compared to the reaction to all tasks. The proposal is intended to be developed further to support the design of neural networks that handle multitasking across different categories, for example, one neural network for both the vision and motion systems of a robot.
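To make the force-directed idea above concrete, the sketch below lays out a toy neuron-to-task graph with networkx's spring (force-directed) layout, so that weakly connected neurons drift toward the periphery. The toy weight matrix and the thresholding rule are illustrative assumptions, not the authors' hierarchical method.

```python
# A minimal force-directed visualization sketch of neuron-to-task relations,
# using a toy weight matrix; the strength threshold is an illustrative assumption.
import numpy as np
import networkx as nx

rng = np.random.default_rng(0)
W = rng.normal(size=(8, 4))                  # toy weights: 8 hidden neurons -> 4 task outputs

G = nx.Graph()
G.add_nodes_from(f"h{h}" for h in range(W.shape[0]))
for h in range(W.shape[0]):
    for t in range(W.shape[1]):
        if abs(W[h, t]) > 1.0:               # keep only strong neuron-task relations
            G.add_edge(f"h{h}", f"task{t}", weight=abs(W[h, t]))

pos = nx.spring_layout(G, weight="weight", seed=0)  # force-directed placement
# Neurons with no strong task relation stay isolated and drift toward the periphery
print({n: tuple(round(c, 2) for c in xy) for n, xy in pos.items()})
```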


2015 ◽  
Vol 734 ◽  
pp. 642-645
Author(s):  
Yan Hui Liu ◽  
Zhi Peng Wang

To address the low accuracy of letter identification using neural networks, this paper designs an optimal neural network structure in which a genetic algorithm is used to optimize the number of hidden-layer neurons. English letters can then be identified by the optimized neural network. The results obtained from the genetic algorithm optimization are very satisfactory. Experiments show that the identification system has higher accuracy and achieves a good letter identification effect.
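A minimal sketch of how a genetic algorithm can search over the hidden-layer size, as described above. The `fitness` function is a stand-in for the validation accuracy of a letter classifier trained with the given hidden size; here it is only a toy proxy, and all parameters are illustrative assumptions.

```python
# Toy genetic algorithm over hidden-layer sizes; selection, crossover, and mutation
# are deliberately simple, and fitness() is a placeholder for validation accuracy.
import random

def fitness(hidden_size: int) -> float:
    return -abs(hidden_size - 24)            # toy proxy: pretend 24 neurons is best

def genetic_search(pop_size=10, generations=20, lo=4, hi=64):
    pop = [random.randint(lo, hi) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]                      # selection: keep the fitter half
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            child = (a + b) // 2                            # crossover: average the sizes
            if random.random() < 0.2:                       # mutation: small perturbation
                child = min(hi, max(lo, child + random.randint(-3, 3)))
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

print("selected hidden-layer size:", genetic_search())
```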


2013 ◽  
Vol 479-480 ◽  
pp. 445-450
Author(s):  
Sung Yun Park ◽  
Sangjoon Lee ◽  
Jae Hoon Jeong ◽  
Sung Min Kim

The purpose of this study is to develop an appendicitis diagnosis system using artificial neural networks (ANNs). Acute appendicitis is one of the most common surgical emergencies of the abdomen. Various methods have been developed to diagnose appendicitis, but these methods have not shown good performance in the Middle East and Asia, or even in the West. We applied several ANN structures to data from 801 patients: a multilayer neural network (MLNN), a radial basis function neural network (RBF), and a probabilistic neural network (PNN). The Alvarado clinical scoring system was used for comparison with the ANNs. The accuracy of MLNN, RBF, PNN, and Alvarado was 97.84%, 99.80%, 99.41%, and 72.19%, respectively. The AUC of MLNN, RBF, PNN, and Alvarado was 0.985, 0.998, 0.993, and 0.633, respectively. The performance of the ANNs was significantly better than that of the Alvarado clinical scoring system (P<0.001). The models developed to diagnose appendicitis using ANNs showed good performance. We consider that the developed models can help junior clinical surgeons diagnose appendicitis.
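As a rough illustration of how the accuracy and AUC comparison above could be computed, the sketch below trains a small MLP classifier on synthetic data standing in for the 801-patient dataset (which is not available here); the feature count and network settings are assumptions for illustration.

```python
# Minimal sketch of an accuracy / AUC evaluation of an MLP diagnostic classifier
# on synthetic data; settings are illustrative, not the study's configuration.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score, roc_auc_score

X, y = make_classification(n_samples=801, n_features=8, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

mlp = MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000, random_state=0).fit(X_tr, y_tr)
proba = mlp.predict_proba(X_te)[:, 1]
print("accuracy:", accuracy_score(y_te, mlp.predict(X_te)))
print("AUC:", roc_auc_score(y_te, proba))
```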

