A Computational Model of Computer Worms Based on Persistent Turing Machines

Author(s): Jingbo Hao, Jianping Yin, Boyun Zhang

2002, Vol. 14 (11), pp. 2531-2560
Author(s): Wolfgang Maass, Thomas Natschläger, Henry Markram

A key challenge for neural modeling is to explain how a continuous stream of multimodal input from a rapidly changing environment can be processed by stereotypical recurrent circuits of integrate-and-fire neurons in real time. We propose a new computational model for real-time computing on time-varying input that provides an alternative to paradigms based on Turing machines or attractor neural networks. It does not require a task-dependent construction of neural circuits. Instead, it is based on principles of high-dimensional dynamical systems in combination with statistical learning theory and can be implemented on generic evolved or found recurrent circuitry. It is shown that the inherent transient dynamics of the high-dimensional dynamical system formed by a sufficiently large and heterogeneous neural circuit may serve as universal analog fading memory. Readout neurons can learn to extract in real time from the current state of such a recurrent neural circuit information about current and past inputs that may be needed for diverse tasks. Stable internal states are not required for giving a stable output, since transient internal states can be transformed by readout neurons into stable target outputs due to the high dimensionality of the dynamical system. Our approach is based on a rigorous computational model, the liquid state machine, that, unlike Turing machines, does not require sequential transitions between well-defined discrete internal states. It is supported, as the Turing machine is, by rigorous mathematical results that predict universal computational power under idealized conditions, but for the biologically more realistic scenario of real-time processing of time-varying inputs. Our approach provides new perspectives for the interpretation of neural coding, the design of experiments and data analysis in neurophysiology, and the solution of problems in robotics and neurotechnology.
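The core mechanism described above lends itself to a compact numerical illustration. The sketch below is not the authors' spiking-circuit model; it is a minimal stand-in, assuming a fixed random recurrent "liquid" of leaky tanh units, a delayed-recall target, and a ridge-regression readout (all illustrative choices), to show how the transient state of an untrained circuit can be read out linearly to recover information about past inputs.

import numpy as np

rng = np.random.default_rng(0)

N = 200        # size of the random "liquid" (reservoir)
T = 1000       # number of time steps
leak = 0.3     # leak rate of the analog units standing in for spiking neurons

# Fixed, task-independent circuit: random input and recurrent weights.
W_in = rng.normal(0.0, 1.0, size=N)
W = rng.normal(0.0, 1.0, size=(N, N))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # keep the dynamics in a fading-memory regime

# Time-varying input stream and a target that depends on past inputs
# (a delayed copy of the input, i.e. a simple fading-memory task).
u = rng.uniform(-1.0, 1.0, size=T)
delay = 5
y_target = np.roll(u, delay)

# Run the liquid and record its transient states.
x = np.zeros(N)
states = np.zeros((T, N))
for t in range(T):
    x = (1 - leak) * x + leak * np.tanh(W_in * u[t] + W @ x)
    states[t] = x

# Train a linear readout on the transient states by ridge regression.
reg = 1e-4
w_out = np.linalg.solve(states.T @ states + reg * np.eye(N), states.T @ y_target)

# The readout recovers the delayed input from the current liquid state alone.
y_pred = states @ w_out
rmse = np.sqrt(np.mean((y_pred[delay:] - y_target[delay:]) ** 2))
print(f"readout RMSE on the delayed-recall task: {rmse:.3f}")

The point that mirrors the abstract is that the recurrent weights are never trained; only the memoryless linear readout is, yet it can report on past inputs because the liquid's transient state retains a fading trace of the input history.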


Author(s):  
Susan Ella George

As we shall see, the “theology of technology” can help inform the philosophical underpinnings of AI. We start by elucidating the idea of computation and describe Turing machine computation. Its equivalence with Post systems and the lambda calculus is explained, and the way in which these systems may be regarded as “rule based” and “generative” is brought out. All of the equivalent formal models define enumerable languages. However, as Turing’s original definition demonstrated, there are definable numbers that are not computable: such numbers exist, yet no computer could be used to write them down. The presence of “unsolvable” computational problems likewise reveals the limitations of Turing machines and suggests the current limits of computation. Given that the “intuitive” understanding of computation is one of a “step-by-step” algorithmic procedure, it is hard to conceive of any other computational model.
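To make the “step-by-step” picture of computation concrete, here is a minimal sketch of a single-tape Turing machine simulator; the increment machine and its transition table are illustrative assumptions, not examples taken from the text.

from collections import defaultdict

def run_tm(tape, transitions, start, accept, blank="_", max_steps=10_000):
    """Simulate a single-tape Turing machine and return the final tape contents."""
    cells = defaultdict(lambda: blank, enumerate(tape))  # infinite tape, blank by default
    state, head = start, 0
    for _ in range(max_steps):
        if state == accept:
            break
        symbol = cells[head]
        state, write, move = transitions[(state, symbol)]  # one "step" of the machine
        cells[head] = write
        head += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells)).strip(blank)

# Toy machine that increments a binary number: scan to the rightmost bit,
# then propagate a carry to the left.
trans = {
    ("right", "0"): ("right", "0", "R"),
    ("right", "1"): ("right", "1", "R"),
    ("right", "_"): ("carry", "_", "L"),
    ("carry", "1"): ("carry", "0", "L"),
    ("carry", "0"): ("done", "1", "L"),
    ("carry", "_"): ("done", "1", "L"),
}
print(run_tm("1011", trans, start="right", accept="done"))  # binary 11 + 1 -> "1100"

Every run is just a finite table of rules applied one symbol at a time, which is exactly the rule-based, step-by-step character of computation discussed above.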


Author(s): Paul Van Den Broek, Yuhtsuen Tzeng, Sandy Virtue, Tracy Linderholm, Michael E. Young

1992
Author(s): William A. Johnston, Kevin J. Hawley, James M. Farnham
