First-order versus second-order single-layer recurrent neural networks

1994 ◽  
Vol 5 (3) ◽  
pp. 511-513 ◽  
Author(s):  
M.W. Goudreau ◽  
C.L. Giles ◽  
S.T. Chakradhar ◽  
D. Chen
Author(s):  
Clifford B. Miller ◽  
C. Lee Giles

There has been much interest in increasing the computational power of neural networks, and also in "designing" neural networks better suited to particular problems. Increasing the "order" of the connectivity of a neural network permits both. Although order has played a significant role in feedforward neural networks, its role in dynamically driven recurrent networks is still being understood. This work explores the effect of order in learning grammars. We present an experimental comparison of first-order and second-order recurrent neural networks, as applied to the task of grammatical inference. We show that for the small grammars studied these two architectures have comparable learning and generalization power, and that both are reasonably capable of extracting the correct finite state automata for the language in question. However, for a larger, randomly generated ten-state grammar, second-order networks significantly outperformed the first-order networks, both in convergence time and in generalization capability. We show that these networks learn faster the more neurons they have (our experiments used up to 10 hidden neurons), but that the solutions found by smaller networks are usually of better quality (in terms of generalization performance after training). Second-order nets have the advantage that they converge to a solution more quickly and find it more reliably than first-order nets, but the second-order solutions tend to be of poorer quality than the first-order ones if both architectures are trained to the same error tolerance. Despite this, second-order nets can more successfully extract finite state machines using heuristic clustering techniques applied to the internal state representations. We speculate that this may be due to restrictions on the ability of the first-order architecture to fully exploit its internal state representation power, and that this may have implications for the performance of the two architectures when scaled up to larger problems.
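
The abstract does not spell out the two update rules, but the distinction it draws is commonly formalized as follows: a first-order recurrent network combines state and input additively, while a second-order network (as in Giles-style grammar-inference work) uses multiplicative state-input terms with one weight per (state neuron, input symbol) pair. A minimal NumPy sketch of both steps, with toy dimensions chosen for illustration only:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def first_order_step(s, x, Ws, Wx, b):
    # s(t+1) = g(Ws @ s(t) + Wx @ x(t) + b): state and input enter additively.
    return sigmoid(Ws @ s + Wx @ x + b)

def second_order_step(s, x, W, b):
    # s_i(t+1) = g(sum_{j,k} W[i,j,k] * s_j(t) * x_k(t) + b_i):
    # multiplicative terms couple each state neuron with each input symbol.
    return sigmoid(np.einsum('ijk,j,k->i', W, s, x) + b)

# Toy setup: 4 state neurons, one-hot input over a 2-symbol alphabet.
rng = np.random.default_rng(0)
n, m = 4, 2
s = rng.random(n)
x = np.array([1.0, 0.0])          # symbol '0' as a one-hot vector
Ws, Wx, b = rng.normal(size=(n, n)), rng.normal(size=(n, m)), np.zeros(n)
W = rng.normal(size=(n, n, m))

print(first_order_step(s, x, Ws, Wx, b))
print(second_order_step(s, x, W, b))
```

With a one-hot input, the second-order step effectively selects a different state-transition matrix per input symbol, which is why these networks map so naturally onto finite state automata.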


2019 ◽  
Vol 9 (1) ◽  
Author(s):  
Laura Gagliano ◽  
Elie Bou Assi ◽  
Dang K. Nguyen ◽  
Mohamad Sawan

Abstract This work proposes a novel approach for the classification of interictal and preictal brain states based on bispectrum analysis and recurrent Long Short-Term Memory (LSTM) neural networks. Two features were first extracted from bilateral intracranial electroencephalography (iEEG) recordings of dogs with naturally occurring focal epilepsy. Single-layer LSTM networks were trained to classify 5-min long feature vectors as preictal or interictal. Classification performance was compared to previous work involving multilayer perceptron networks and higher-order spectral (HOS) features on the same dataset. The proposed LSTM network proved superior to the multilayer perceptron network and achieved an average classification accuracy of 86.29% on held-out data. The results suggest the feasibility of forecasting epileptic seizures using recurrent neural networks with minimal feature extraction.
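
A hedged PyTorch sketch of the architecture described (single-layer LSTM over a per-segment feature sequence, binary preictal/interictal output). The hidden size, the number of epochs per 5-min segment, and the assumption of two bispectrum-derived features per time step are illustrative placeholders, not values from the paper:

```python
import torch
import torch.nn as nn

class SeizureStateLSTM(nn.Module):
    """Single-layer LSTM mapping a feature sequence to a preictal logit."""
    def __init__(self, n_features=2, hidden_size=32):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden_size,
                            num_layers=1, batch_first=True)
        self.head = nn.Linear(hidden_size, 1)    # single logit: P(preictal)

    def forward(self, x):                         # x: (batch, time, n_features)
        _, (h_n, _) = self.lstm(x)                # h_n: (1, batch, hidden_size)
        return self.head(h_n[-1]).squeeze(-1)     # logits, shape (batch,)

# One 5-min segment as a sequence of hypothetical per-epoch feature vectors.
model = SeizureStateLSTM()
segment = torch.randn(1, 300, 2)                  # e.g. 300 one-second epochs
prob_preictal = torch.sigmoid(model(segment))
loss_fn = nn.BCEWithLogitsLoss()                  # train against 0/1 labels
```

Using only the final hidden state for classification is one common design choice for sequence-level labels; pooling over all time steps would be a reasonable alternative.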


2021 ◽  
Vol 27 (11) ◽  
pp. 1193-1202
Author(s):  
Ashot Baghdasaryan ◽  
Hovhannes Bolibekyan

There are three main problems for theorem proving with a standard cut-free system for first-order minimal logic. The first is the possibility of looping. The second is that the system may generate proofs that are permutations of one another. The third is that, during the proof, choices must be made about which rules to apply and where to apply them. Systems with history mechanisms have been introduced to solve the looping problem of automated theorem provers for first-order minimal logic. To address the rule-selection problem, recurrent neural networks are deployed to determine which formula from the context should be used in subsequent steps. As a result, the time spent on theorem proving is reduced.
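
The abstract does not describe how formulas are encoded or scored, so the following is only a speculative sketch of one plausible realization: each candidate formula in the context is tokenized, encoded with a GRU, and given a scalar score, and the prover applies its next rule to the highest-scoring formula. The vocabulary size, dimensions, and class name are all invented for illustration:

```python
import torch
import torch.nn as nn

class FormulaSelector(nn.Module):
    """Scores candidate formulas; the prover picks the top-scoring one."""
    def __init__(self, vocab_size=64, embed_dim=32, hidden=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.rnn = nn.GRU(embed_dim, hidden, batch_first=True)
        self.score = nn.Linear(hidden, 1)

    def forward(self, token_ids):                 # (n_formulas, max_len)
        _, h = self.rnn(self.embed(token_ids))    # h: (1, n_formulas, hidden)
        return self.score(h[-1]).squeeze(-1)      # one score per formula

# Hypothetical context of three tokenized formulas, padded to length 8.
selector = FormulaSelector()
context = torch.randint(0, 64, (3, 8))
best = torch.argmax(selector(context))            # index of the formula to use
```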


Author(s):  
Sarat Chandra Nayak ◽  
Bijan Bihari Misra ◽  
Himansu Sekhar Behera

Financial time series forecasting is regarded as a challenging task because successful prediction can yield significant profit, and it therefore requires an efficient prediction system. Conventional ANN-based models are not competent enough for this. Higher-order neural networks have several advantages over traditional neural networks, such as stronger approximation capability, higher fault tolerance, and faster convergence. With the aim of achieving improved forecasting accuracy, this article develops and evaluates the performance of an adaptive single-layer second-order neural network with GA-based training (ASONN-GA). The global search ability of the GA is combined with the better generalization ability of a second-order neural network, and the resulting model proves quite capable of handling the uncertainties and nonlinearities associated with financial time series. The model takes minimal input data and reuses the partially optimized weight set from previous training, hence achieving a significant reduction in training time. The efficiency of the model has been evaluated by forecasting one-step-ahead closing prices and exchange rates of five real stock markets, and the results reveal that the ASONN-GA model achieves better forecasting accuracy than other state-of-the-art models.
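
A minimal sketch of the two ingredients the abstract names: a single-layer second-order network (here realized as a linear unit over a quadratic input expansion, one common reading of "second order") trained by a simple genetic algorithm. The GA settings, expansion scheme, and synthetic data are assumptions for illustration, not the paper's configuration:

```python
import numpy as np

def second_order_features(x):
    # Augment inputs with pairwise products: the "second order" terms.
    quad = np.outer(x, x)[np.triu_indices(len(x))]
    return np.concatenate(([1.0], x, quad))       # bias + linear + quadratic

def predict(w, x):
    return np.tanh(w @ second_order_features(x))

def fitness(w, X, y):
    preds = np.array([predict(w, x) for x in X])
    return -np.mean((preds - y) ** 2)             # negative MSE: higher is fitter

# Tiny GA: truncation selection plus Gaussian mutation.
rng = np.random.default_rng(1)
X = rng.random((50, 3))
y = np.tanh(X @ np.array([0.5, -0.2, 0.1]))       # synthetic stand-in series
dim = len(second_order_features(X[0]))
pop = rng.normal(size=(30, dim))
for _ in range(100):
    scores = np.array([fitness(w, X, y) for w in pop])
    elite = pop[np.argsort(scores)[-10:]]         # keep the 10 fittest
    children = elite[rng.integers(0, 10, 20)] + 0.1 * rng.normal(size=(20, dim))
    pop = np.vstack([elite, children])            # elitism preserves the best
best = pop[np.argmax([fitness(w, X, y) for w in pop])]
```

Seeding the population with the elite of a previous run would mirror the abstract's point about reusing partially optimized weights to cut training time.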

