Expressive power of first-order recurrent neural networks determined by their attractor dynamics

2016 ◽  
Vol 82 (8) ◽  
pp. 1232-1250 ◽  
Author(s):  
Jérémie Cabessa ◽  
Alessandro E.P. Villa

2012 ◽  
Vol 436 ◽  
pp. 23-34 ◽  
Author(s):  
Jérémie Cabessa ◽  
Alessandro E.P. Villa

2011 ◽  
Vol 74 (17) ◽  
pp. 2716-2724 ◽  
Author(s):  
Louiza Dehyadegary ◽  
Seyyed Ali Seyyedsalehi ◽  
Isar Nejadgholi

Author(s):  
CLIFFORD B. MILLER ◽  
C. LEE GILES

There has been much interest in increasing the computational power of neural networks, and in “designing” neural networks better suited to particular problems. Increasing the “order” of the connectivity of a neural network permits both. Though order has played a significant role in feedforward neural networks, its role in dynamically driven recurrent networks is still being understood. This work explores the effect of order in learning grammars. We present an experimental comparison of first order and second order recurrent neural networks applied to the task of grammatical inference. We show that for the small grammars studied these two architectures have comparable learning and generalization power, and that both are reasonably capable of extracting the correct finite state automaton for the language in question. However, for a larger, randomly generated ten-state grammar, second order networks significantly outperformed first order networks in both convergence time and generalization capability. We show that these networks learn faster the more neurons they have (our experiments used up to 10 hidden neurons), but that the solutions found by smaller networks are usually of better quality (in terms of generalization performance after training). Second order nets converge to a solution more quickly and more reliably than first order nets, but their solutions tend to be of poorer quality than first order solutions when both architectures are trained to the same error tolerance. Despite this, second order nets more successfully extract finite state machines when heuristic clustering techniques are applied to their internal state representations. We speculate that this may be due to restrictions on the ability of the first order architecture to fully exploit its internal state representation power, and that this may have implications for the performance of the two architectures when scaled up to larger problems.
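For concreteness, here is a minimal NumPy sketch contrasting the two state updates the abstract compares: a first order cell, in which hidden state and input enter additively, and a second order cell, in which a weight tensor couples each (state unit, input symbol) pair so the input gates the transition. The layer sizes, alphabet, and random weights are illustrative assumptions, not the paper's setup.

# Minimal sketch contrasting first-order and second-order recurrent
# state updates for grammatical inference over symbol strings.
# All names and sizes are illustrative, not taken from the paper.
import numpy as np

rng = np.random.default_rng(0)
n_hidden, n_symbols = 5, 2          # e.g. a binary alphabet {0, 1}

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# First order: state and input contribute additively to the next state.
W1 = rng.normal(scale=0.5, size=(n_hidden, n_hidden))   # state -> state
U1 = rng.normal(scale=0.5, size=(n_hidden, n_symbols))  # input -> state
def first_order_step(h, x):
    return sigmoid(W1 @ h + U1 @ x)

# Second order: a multiplicative weight tensor W[i, j, k] couples each
# (state unit j, input symbol k) pair, so the input gates the transition.
W2 = rng.normal(scale=0.5, size=(n_hidden, n_hidden, n_symbols))
def second_order_step(h, x):
    return sigmoid(np.einsum('ijk,j,k->i', W2, h, x))

# Run both cells over a one-hot-encoded string such as "0110".
string = [0, 1, 1, 0]
h1 = h2 = np.full(n_hidden, 0.5)
for s in string:
    x = np.eye(n_symbols)[s]
    h1, h2 = first_order_step(h1, x), second_order_step(h2, x)
print(h1, h2)

Because the second order update lets the current symbol directly select a state-transition matrix, it closely mirrors the transition table of a finite state automaton, which is one plausible reading of why finite state machines are extracted more successfully from second order nets.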


2021 ◽  
Vol 27 (11) ◽  
pp. 1193-1202
Author(s):  
Ashot Baghdasaryan ◽  
Hovhannes Bolibekyan

There are three main problems for theorem proving with a standard cut-free system for first-order minimal logic. The first is the possibility of looping. The second is that the system may generate proofs that are permutations of one another. The third is that, during the proof, choices must be made about which rules to apply and where to apply them. New systems with history mechanisms were introduced to solve the looping problem of automated theorem provers in first-order minimal logic. To address the rule selection problem, recurrent neural networks are deployed to determine which formula from the context should be used in subsequent steps. As a result, theorem proving time is reduced.
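As a rough illustration of the rule selection idea described above, the following sketch scores each formula in the proof context with a small recurrent network and returns the highest-scoring candidate. The tokenization, network sizes, and untrained random weights are placeholder assumptions, not the authors' system.

# Hedged sketch: rank context formulas with a recurrent scorer during
# proof search. Vocabulary, sizes, and weights are illustrative only.
import numpy as np

rng = np.random.default_rng(1)
vocab = {tok: i for i, tok in enumerate(['p', 'q', 'r', '->', '&', '(', ')'])}
d_emb, d_hid = 8, 16

E  = rng.normal(scale=0.1, size=(len(vocab), d_emb))   # token embeddings
Wx = rng.normal(scale=0.1, size=(d_hid, d_emb))
Wh = rng.normal(scale=0.1, size=(d_hid, d_hid))
w  = rng.normal(scale=0.1, size=d_hid)                 # readout to a scalar score

def score_formula(tokens):
    """Run a plain tanh RNN over the formula's tokens and read out one score."""
    h = np.zeros(d_hid)
    for tok in tokens:
        h = np.tanh(Wx @ E[vocab[tok]] + Wh @ h)
    return float(w @ h)

def pick_next_formula(context):
    """Choose the context formula the (trained) scorer ranks highest."""
    scores = [score_formula(f) for f in context]
    return context[int(np.argmax(scores))]

context = [['p', '->', 'q'], ['q', '->', 'r'], ['p']]
print(pick_next_formula(context))   # formula the prover would try first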


2022 ◽  
Author(s):  
Leo Kozachkov ◽  
John Tauber ◽  
Mikael Lundqvist ◽  
Scott L Brincat ◽  
Jean-Jacques Slotine ◽  
...  

Working memory has long been thought to arise from sustained spiking/attractor dynamics. However, recent work has suggested that short-term synaptic plasticity (STSP) may help maintain attractor states over gaps in time with little or no spiking. To determine whether STSP confers additional functional advantages, we trained artificial recurrent neural networks (RNNs) with and without STSP to perform an object working memory task. We found that RNNs with and without STSP were both able to maintain memories over distractors presented in the middle of the memory delay. However, RNNs with STSP showed activity that was similar to that seen in the cortex of monkeys performing the same task. By contrast, RNNs without STSP showed activity that was less brain-like. Further, RNNs with STSP were more robust to noise and network degradation than RNNs without STSP. These results show that STSP not only helps maintain working memories but also makes neural networks more robust.
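One standard way to equip a rate RNN with short-term synaptic plasticity is to modulate its recurrent weights with Tsodyks-Markram-style facilitation and depression variables, as in earlier modeling work. The sketch below uses that formulation with placeholder time constants and network size, and should not be read as the trained networks from this study.

# Hedged sketch of a rate RNN whose recurrent weights are modulated by
# short-term synaptic plasticity (facilitation u and depression x).
# Time constants, sizes, and the task input are placeholders.
import numpy as np

rng = np.random.default_rng(2)
N, dt = 50, 0.01                        # neurons, integration step (s)
tau_r, tau_f, tau_d, U = 0.1, 1.5, 0.2, 0.3
W_rec = rng.normal(scale=1.0 / np.sqrt(N), size=(N, N))
W_in  = rng.normal(scale=0.5, size=N)

r = np.zeros(N)          # firing rates
u = np.full(N, U)        # facilitation (release probability)
x = np.ones(N)           # depression (available resources)

def step(r, u, x, inp):
    # Effective weights carry a per-presynaptic-neuron factor u * x,
    # so recent activity leaves a hidden memory trace even as r decays.
    W_eff = W_rec * (u * x)[None, :]
    r = r + dt / tau_r * (-r + np.maximum(W_eff @ r + W_in * inp, 0.0))
    u = u + dt * ((U - u) / tau_f + U * (1.0 - u) * r)
    x = x + dt * ((1.0 - x) / tau_d - u * x * r)
    return r, u, np.clip(x, 0.0, 1.0)

# Present a brief "object" cue, then a silent delay: u and x retain the trace.
for t in range(500):
    inp = 1.0 if t < 50 else 0.0
    r, u, x = step(r, u, x, inp)
print(r.mean(), u.mean(), x.mean())

During the silent delay the rates relax, but the facilitation and depression variables retain a trace of the cue, which is the kind of low-spiking memory maintenance the abstract describes.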

