The Crystallizing Substochastic Sequential Machine Extractor: CrySSMEx

2006 ◽  
Vol 18 (9) ◽  
pp. 2211-2255 ◽  
Author(s):  
Henrik Jacobsson

This letter presents an algorithm, CrySSMEx, for extracting minimal finite state machine descriptions of dynamic systems such as recurrent neural networks. Unlike previous algorithms, CrySSMEx is parameter free and deterministic, and it efficiently generates a series of increasingly refined models. A novel finite stochastic model of dynamic systems and a novel vector quantization function have been developed to take into account the state-space dynamics of the system. The experiments show that (1) extraction from systems that can be described as regular grammars is trivial, (2) extraction from high-dimensional systems is feasible, and (3) extraction of approximate models from chaotic systems is possible. The results are promising, and an analysis of shortcomings suggests some possible further improvements. Some largely overlooked connections between the field of rule extraction from recurrent neural networks and other fields are also identified.
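CrySSMEx itself refines its quantization iteratively and builds a stochastic machine; purely as a hedged sketch of the underlying idea (not the published algorithm), the following quantizes a one-dimensional state trajectory into fixed bins and tabulates the observed transitions. The function name, the fixed-bin quantizer, and the determinism check are illustrative assumptions.

```python
from collections import defaultdict

def extract_fsm(states, inputs, outputs, n_bins=4):
    """Quantize a 1-D state trajectory into n_bins equal intervals and
    tabulate the observed (state_bin, input) -> (next_bin, output) moves.
    The extracted machine is deterministic iff each key has one outcome."""
    lo, hi = min(states), max(states)
    width = (hi - lo) / n_bins or 1.0

    def bin_of(s):
        return min(int((s - lo) / width), n_bins - 1)

    trans = defaultdict(set)
    for t in range(len(states) - 1):
        trans[(bin_of(states[t]), inputs[t])].add(
            (bin_of(states[t + 1]), outputs[t]))
    deterministic = all(len(v) == 1 for v in trans.values())
    return dict(trans), deterministic
```

For a trajectory that flips between two activation levels on input 1 and stays put on input 0, the tabulated machine is a two-state deterministic flip-flop.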

1996 ◽  
Vol 8 (6) ◽  
pp. 1135-1178 ◽  
Author(s):  
Mike Casey

Recurrent neural networks (RNNs) can learn to perform finite state computations. It is shown that an RNN performing a finite state computation must organize its state space to mimic the states in the minimal deterministic finite state machine that can perform that computation, and a precise description of the attractor structure of such systems is given. This knowledge effectively predicts activation space dynamics, which allows one to understand RNN computation dynamics in spite of complexity in activation dynamics. This theory provides a theoretical framework for understanding finite state machine (FSM) extraction techniques and can be used to improve training methods for RNNs performing FSM computations. This provides an example of a successful approach to understanding a general class of complex systems that have not been explicitly designed, e.g., systems that have evolved or learned their internal structure.
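The minimal deterministic machine the abstract refers to is the one whose states are the Myhill-Nerode equivalence classes of the computation. As a brief self-contained illustration (not code from the article), Moore-style partition refinement collapses behaviourally equivalent states:

```python
def minimize_dfa(states, alphabet, delta, accept):
    """Moore-style partition refinement: repeatedly split states by
    (own block, blocks of successors) until the partition stabilizes.
    Returns a mapping state -> equivalence-class index."""
    part = {s: (s in accept) for s in states}  # accepting vs non-accepting
    while True:
        sig = {s: (part[s],) + tuple(part[delta[s][a]] for a in alphabet)
               for s in states}
        blocks = {v: i for i, v in enumerate(sorted(set(sig.values())))}
        new = {s: blocks[sig[s]] for s in states}
        if len(set(new.values())) == len(set(part.values())):
            return new
        part = new
```

In the hypothetical three-state DFA used in the test below, states 1 and 2 accept exactly the same continuations and therefore collapse into one class, mirroring how an RNN's activation clusters must merge into the minimal machine's states.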


1998 ◽  
Vol 10 (5) ◽  
pp. 1067-1069 ◽  
Author(s):  
Mike Casey

Our earlier article, “The Dynamics of Discrete-Time Computation, with Application to Recurrent Neural Networks and Finite State Machine Extraction” (Casey, 1996), contains a corollary that shows that finite dimensional recurrent neural networks with noise in their state variables that perform algorithmic computations can perform only finite state machine computations. The proof of the corollary is technically incorrect. The problem resulted from the fact that the proof of the theorem on which the corollary is based was more general than the statement of the theorem, and it was the contents of the proof rather than the statement that were used to prove the corollary. In this note, we state the theorem in the necessary generality and then give the corrected proof of the corollary.


Technologies ◽  
2018 ◽  
Vol 6 (4) ◽  
pp. 110 ◽  
Author(s):  
Gadelhag Mohmed ◽  
Ahmad Lotfi ◽  
Amir Pourabdollah

Human activity recognition and modelling comprise an area of research interest that has been tackled by many researchers. The application of different machine learning techniques, including regression analysis, deep learning neural networks, and fuzzy rule-based models, has already been investigated. In this paper, a novel method based on a Fuzzy Finite State Machine (FFSM) integrated with the learning capabilities of Neural Networks (NNs) is proposed to represent human activities in an intelligent environment. The proposed approach, called Neuro-Fuzzy Finite State Machine (N-FFSM), learns the parameters of a rule-based fuzzy system, which processes the numerical input/output data gathered from the sensors and/or human experts' knowledge. The generated fuzzy rules represent transitions between states, each assigned a degree of transition from one state to another. Experimental results are presented to demonstrate the effectiveness of the proposed method. The model is tested and evaluated using a dataset collected from a real home environment. The results show the effectiveness of using this method for modelling activities of daily living based on ambient sensory datasets. The performance of the proposed method is compared with standard NN and FFSM techniques.
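The N-FFSM learns its transition degrees with a neural network; purely to illustrate the fuzzy state update itself (a hedged sketch using standard max-min composition, not the authors' code), one step of a fuzzy finite state machine can be written as:

```python
import numpy as np

def ffsm_step(membership, transition):
    """One fuzzy-FSM update via max-min composition: the new degree of
    membership in state j is max_i min(membership[i], transition[i, j]),
    where transition[i, j] is the degree of moving from state i to j."""
    pairwise = np.minimum(membership[:, None], transition)  # broadcast min
    return pairwise.max(axis=0)
```

With crisp membership `[1, 0]` and transition degrees `[[0.2, 0.8], [0.5, 0.1]]`, the machine moves to state 1 with degree 0.8 while retaining state 0 with degree 0.2; unlike a crisp FSM, several states can be partially active at once.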


2003 ◽  
Vol 15 (8) ◽  
pp. 1931-1957 ◽  
Author(s):  
Peter Tiňo ◽  
Barbara Hammer

We have recently shown that when initialized with “small” weights, recurrent neural networks (RNNs) with standard sigmoid-type activation functions are inherently biased toward Markov models; even prior to any training, RNN dynamics can be readily used to extract finite memory machines (Hammer & Tiňo, 2002; Tiňo, Čerňanský, & Beňušková, 2002a, 2002b). Following Christiansen and Chater (1999), we refer to this phenomenon as the architectural bias of RNNs. In this article, we extend our work on the architectural bias in RNNs by performing a rigorous fractal analysis of recurrent activation patterns. We assume the network is driven by sequences obtained by traversing an underlying finite-state transition diagram, a scenario that has been frequently considered in the past, for example, when studying RNN-based learning and implementation of regular grammars and finite-state transducers. We obtain lower and upper bounds on various types of fractal dimensions, such as box-counting and Hausdorff dimensions. It turns out that not only can the recurrent activations inside RNNs with small initial weights be exploited to build Markovian predictive models, but the activations also form fractal clusters whose dimension can be bounded by the scaled entropy of the underlying driving source. The scaling factors are fixed and are given by the RNN parameters.
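The fractal clusters arise because a small-weight RNN driven by symbols behaves like an iterated function system: each input symbol applies a fixed contraction, so activations end up grouped by recent input history. A minimal one-dimensional sketch (the affine maps and the contraction ratio 1/3, which generate a Cantor-set attractor, are illustrative assumptions, not the article's construction):

```python
def drive(symbols, maps, x0=0.5):
    """Iterate x <- a*x + b for the affine contraction (a, b) assigned to
    each input symbol. With contractive maps, the final activation is
    determined (up to a shrinking tolerance) by the recent symbol history,
    which is the Markovian 'architectural bias'."""
    x = x0
    for s in symbols:
        a, b = maps[s]
        x = a * x + b
    return x

# two contractions with ratio 1/3: the attractor is a Cantor set
maps = {'0': (1 / 3, 0.0), '1': (1 / 3, 2 / 3)}
```

Sequences sharing a suffix of length n land within (1/3)^n of each other, while sequences with different final symbols land in different thirds of the unit interval, so clusters of activations code the driving source's recent history.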


2020 ◽  
Author(s):  
Laércio Oliveira Junior ◽  
Florian Stelzer ◽  
Liang Zhao

Echo State Networks (ESNs) are recurrent neural networks that map an input signal to a high-dimensional dynamical system, called the reservoir, and possess adaptive output weights. The output weights are trained such that the ESN’s output signal fits the desired target signal. Classical reservoirs are sparse, randomly connected networks. In this article, we investigate the effect of different network topologies on the performance of ESNs. Specifically, we use two types of networks to construct clustered reservoirs for ESNs: the clustered Erdős–Rényi and the clustered Barabási–Albert network models. We then compare the performance of these clustered ESNs (CESNs) with that of classical ESNs using a random reservoir by applying them to two different tasks: frequency filtering and the reconstruction of chaotic signals. By using a clustered topology, one can achieve a significant increase in the ESN’s performance.
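The clustered reservoirs change only how the reservoir matrix is wired; the classical baseline they are compared against can be sketched as below. This is a hedged minimal ESN, where the one-step-ahead prediction task, reservoir size, spectral-radius heuristic, and ridge parameter are all illustrative assumptions rather than the article's setup.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_reservoir(n, density=0.1, spectral_radius=0.9):
    """Sparse random reservoir, rescaled to the target spectral radius
    (a common heuristic for obtaining the echo-state property)."""
    W = rng.standard_normal((n, n)) * (rng.random((n, n)) < density)
    return W * (spectral_radius / max(abs(np.linalg.eigvals(W))))

def run_esn(u, n=50, washout=20, ridge=1e-6):
    """Drive the reservoir with input u, discard a washout period, then
    fit linear output weights by ridge regression to predict u[t+1]."""
    W, W_in = make_reservoir(n), rng.standard_normal(n)
    x = np.zeros(n)
    X, y = [], []
    for t in range(len(u) - 1):
        x = np.tanh(W @ x + W_in * u[t])   # reservoir update
        if t >= washout:
            X.append(x)
            y.append(u[t + 1])
    X, y = np.array(X), np.array(y)
    # ridge regression on the collected reservoir states
    W_out = np.linalg.solve(X.T @ X + ridge * np.eye(n), X.T @ y)
    return X @ W_out, y
```

Only the output weights `W_out` are trained; the reservoir and input weights stay fixed, which is what makes ESN training a single linear solve.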


1999 ◽  
Vol 09 (04) ◽  
pp. 705-711 ◽  
Author(s):  
Giuseppe Grassi ◽  
Saverio Mascolo

In this paper, a method for synchronizing high-dimensional chaotic systems is developed. The objective is to generate linear error dynamics between the master and the slave systems, so that synchronization becomes achievable by exploiting the controllability property of linear systems. The suggested approach is applied to Cellular Neural Networks (CNNs), which can be considered a tool for generating complex hyperchaotic behaviors. Numerical simulations are carried out for synchronizing CNNs constituted by Chua's circuits.
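The key step is to feed the slave the master's nonlinearity plus linear state feedback, so that the error e = m - s obeys the linear, stabilizable dynamics e' = (A - kI)e. The paper does this for CNNs built from Chua's circuits; as an illustrative stand-in (the Lorenz system, the full-state coupling, and the gain k are assumptions, not the paper's construction):

```python
import numpy as np

SIGMA, RHO, BETA = 10.0, 28.0, 8.0 / 3.0
# linear part of the Lorenz vector field
A = np.array([[-SIGMA, SIGMA, 0.0],
              [RHO, -1.0, 0.0],
              [0.0, 0.0, -BETA]])

def nonlinear(s):
    """Nonlinear part of the Lorenz vector field."""
    x, y, z = s
    return np.array([0.0, -x * z, x * y])

def simulate(k=20.0, dt=1e-3, steps=20000):
    """Euler-integrate master and slave; the slave receives the master's
    nonlinearity and linear feedback k*(m - s), so the synchronization
    error follows e' = (A - k*I)e, which is linear and stable for this k."""
    m = np.array([1.0, 1.0, 1.0])    # master initial state
    s = np.array([5.0, -5.0, 10.0])  # slave starts far away
    for _ in range(steps):
        dm = A @ m + nonlinear(m)
        ds = A @ s + nonlinear(m) + k * (m - s)
        m, s = m + dt * dm, s + dt * ds
    return np.linalg.norm(m - s)
```

Because the nonlinear terms cancel in the error equation, stability of A - kI alone guarantees the slave converges to the master's chaotic trajectory.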

