On the Computational Power of Threshold Circuits with Sparse Activity

2006 ◽  
Vol 18 (12) ◽  
pp. 2994-3008 ◽  
Author(s):  
Kei Uchizawa ◽  
Rodney Douglas ◽  
Wolfgang Maass

Circuits composed of threshold gates (McCulloch-Pitts neurons, or perceptrons) are simplified models of neural circuits with the advantage that they are theoretically more tractable than their biological counterparts. However, when such threshold circuits are designed to perform a specific computational task, they usually differ in one important respect from computations in the brain: they require very high activity. On average, every second threshold gate fires (outputs a 1) during a computation. By contrast, the activity of neurons in the brain is much sparser, with only about 1% of neurons firing. This mismatch between threshold and neuronal circuits is due to the particular complexity measures (circuit size and circuit depth) that have been minimized in previous threshold circuit constructions. In this letter, we investigate a new complexity measure for threshold circuits, energy complexity, whose minimization yields computations with sparse activity. We prove that all computations by threshold circuits of polynomial size with entropy O(log n) can be restructured so that their energy complexity is reduced to a level near the entropy of circuit states. This entropy of circuit states is a novel circuit complexity measure, which is of interest not only in the context of threshold circuits but for circuit complexity in general. As an example of how this measure can be applied, we show that any polynomial-size threshold circuit with entropy O(log n) can be simulated by a polynomial-size threshold circuit of depth 3. Our results demonstrate that the structure of circuits that result from a minimization of their energy complexity is quite different from the structure that results from a minimization of previously considered complexity measures, and potentially closer to the structure of neural circuits in the nervous system. In particular, different pathways are activated in these circuits for different classes of inputs. This letter shows that such circuits with sparse activity have a surprisingly large computational power.
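As a rough illustration of the two quantities this abstract centers on, the following Python sketch evaluates a tiny threshold circuit on all of its inputs and reports the maximum number of gates that fire (one common reading of energy complexity) together with the Shannon entropy of the distribution of internal firing patterns (one reading of the entropy of circuit states). The three-gate circuit, its weights, and these exact definitions are illustrative assumptions, not constructions from the letter.

```python
from collections import Counter
from itertools import product
from math import log2

def threshold_gate(inputs, weights, theta):
    """McCulloch-Pitts gate: outputs 1 iff the weighted sum reaches the threshold."""
    return 1 if sum(w * x for w, x in zip(weights, inputs)) >= theta else 0

def run_circuit(x):
    """A made-up two-level circuit on 3 Boolean inputs; returns the tuple of all gate outputs."""
    g1 = threshold_gate(x, (1, 1, 1), 2)       # fires iff at least two inputs are 1
    g2 = threshold_gate(x, (-1, -1, -1), 0)    # fires iff all inputs are 0
    out = threshold_gate((g1, g2), (1, 1), 1)  # OR of the two hidden gates
    return (g1, g2, out)

states = {x: run_circuit(x) for x in product((0, 1), repeat=3)}

# Energy of the computation on one input: number of gates that output a 1.
max_energy = max(sum(s) for s in states.values())

# Entropy of circuit states: Shannon entropy of the distribution of internal
# firing patterns when the input is drawn uniformly at random.
counts = Counter(states.values())
total = len(states)
entropy = -sum(c / total * log2(c / total) for c in counts.values())

print("maximum energy:", max_energy)                     # at most 2 of the 3 gates fire
print("entropy of circuit states: %.3f bits" % entropy)
```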

1988 ◽  
Vol 17 (239) ◽  
Author(s):  
Joan Boyar ◽  
Gudmund Skovbjerg Frandsen ◽  
Carl Sturtivant

We define a new structured and general model of computation: circuits using arbitrary fan-in arithmetic gates over the finite fields of characteristic two (F_{2^n}). These circuits have only one input and one output. We show how they correspond naturally to Boolean computations with n inputs and n outputs. We show that if circuit sizes are polynomially related, then the arithmetic circuit depth and the threshold circuit depth needed to compute a given function differ by at most a constant factor. We use threshold circuits that allow arbitrary integer weights; however, we show that, compared to the usual threshold model, the depth measure of this generalised model differs by at most a constant factor (at polynomial size). The fan-in of our arithmetic model is also unbounded in the most generous sense: circuit size is measured as the number of sum and product gates; there is no bound on the number of "wires". We show that these results hold for any "reasonable" correspondence between n-bit strings and elements of F_{2^n}, and we find two distinct characterizations of "reasonable". Thus, we have shown that arbitrary fan-in arithmetic computations over F_{2^n} constitute a precise abstraction of Boolean threshold computations, with the pleasant property that various algebraic laws have been recovered.
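The correspondence between n-bit strings and elements of F_{2^n} that the abstract relies on can be made concrete with a small sketch: field elements are stored as n-bit integers, addition is bitwise XOR, and multiplication is carry-less multiplication reduced modulo an irreducible polynomial. The choice n = 8 and the AES polynomial 0x11B below are illustrative assumptions, not the representation used in the paper.

```python
def gf_add(a: int, b: int) -> int:
    """Addition in F_{2^n}: coefficient-wise XOR (characteristic 2)."""
    return a ^ b

def gf_mul(a: int, b: int, mod_poly: int = 0x11B, n: int = 8) -> int:
    """Multiply two field elements: carry-less multiply, then reduce mod the polynomial."""
    result = 0
    for i in range(n):
        if (b >> i) & 1:
            result ^= a << i          # XOR-accumulate shifted copies of a
    # reduce the up-to-(2n-1)-bit product modulo the irreducible polynomial
    for bit in range(2 * n - 2, n - 1, -1):
        if (result >> bit) & 1:
            result ^= mod_poly << (bit - n)
    return result

assert gf_mul(0x53, 0xCA) == 0x01     # a well-known inverse pair in this field
print(hex(gf_add(0x57, 0x83)))        # 0xd4
print(hex(gf_mul(0x57, 0x83)))        # 0xc1
```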


1991 ◽  
Vol 20 (343) ◽  
Author(s):  
Gudmund Skovbjerg Frandsen ◽  
Mark Valence ◽  
David Mix Barrington

We introduce a natural set of arithmetic expressions and define the complexity class AE to consist of all those arithmetic functions (over the fields F_{2^n}) that are described by these expressions. We show that AE coincides with the class of functions that are computable with constant-depth, polynomial-size, unbounded fan-in arithmetic circuits satisfying a natural uniformity constraint (DLOGTIME-uniformity). A 1-input and 1-output arithmetic function over the fields F_{2^n} may be identified with an n-input and n-output Boolean function when field elements are represented as bit strings. We prove that if some such representation is X-uniform (where X is P or DLOGTIME), then the arithmetic complexity of a function (measured with X-uniform unbounded fan-in arithmetic circuits) is identical to the Boolean complexity of this function (measured with X-uniform threshold circuits). We show the existence of a P-uniform representation, and we give partial results concerning the existence of representations with more restrictive uniformity properties.


2010 ◽  
Vol 24 (2) ◽  
pp. 131-135 ◽  
Author(s):  
Włodzimierz Klonowski ◽  
Pawel Stepien ◽  
Robert Stepien

Over 20 years ago, Watt and Hameroff (1987) suggested that consciousness may be described as a manifestation of deterministic chaos in the brain/mind. To analyze EEG-signal complexity, we used Higuchi's fractal dimension in the time domain and symbolic analysis methods. Our results of analysis of EEG signals under anesthesia, during physiological sleep, and during epileptic seizures lead to a conclusion similar to that of Watt and Hameroff: brain activity, measured by the complexity of the EEG signal, diminishes (becomes less chaotic) when consciousness is being “switched off”. So, consciousness may be described as a manifestation of deterministic chaos in the brain/mind.
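For reference, the time-domain complexity measure named in the abstract, Higuchi's fractal dimension, can be sketched as follows. The implementation follows the standard formulation (Higuchi, 1988); the test signals and the k_max value are placeholders, not the authors' EEG data or settings.

```python
import numpy as np

def higuchi_fd(x, k_max=8):
    """Estimate the fractal dimension of a 1-D signal x in the time domain."""
    x = np.asarray(x, dtype=float)
    N = len(x)
    log_k, log_L = [], []
    for k in range(1, k_max + 1):
        lengths = []
        for m in range(k):                         # k subsampled curves, offsets m = 0..k-1
            idx = np.arange(m, N, k)
            if len(idx) < 2:
                continue
            diff = np.abs(np.diff(x[idx])).sum()
            norm = (N - 1) / ((len(idx) - 1) * k)  # makes curves of different k comparable
            lengths.append(diff * norm / k)
        log_k.append(np.log(1.0 / k))
        log_L.append(np.log(np.mean(lengths)))
    # the fractal dimension is the slope of log L(k) versus log(1/k)
    slope, _ = np.polyfit(log_k, log_L, 1)
    return slope

# sanity check: white noise should score near 2, a smooth sine near 1
rng = np.random.default_rng(0)
print(higuchi_fd(rng.standard_normal(2000)))
print(higuchi_fd(np.sin(np.linspace(0, 8 * np.pi, 2000))))
```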


Author(s):  
Hiroki MANIWA ◽  
Takayuki OKI ◽  
Akira SUZUKI ◽  
Kei UCHIZAWA ◽  
Xiao ZHOU

2014 ◽  
Vol 16 (28) ◽  
pp. 14928-14946 ◽  
Author(s):  
Meressa A. Welearegay ◽  
Robert Balawender ◽  
Andrzej Holas

The usefulness of the information and complexity measure in molecular reactivity studies.


2004 ◽  
Vol 27 (3) ◽  
pp. 377-396 ◽  
Author(s):  
Rick Grush

The emulation theory of representation is developed and explored as a framework that can revealingly synthesize a wide variety of representational functions of the brain. The framework is based on constructs from control theory (forward models) and signal processing (Kalman filters). The idea is that in addition to simply engaging with the body and environment, the brain constructs neural circuits that act as models of the body and environment. During overt sensorimotor engagement, these models are driven by efference copies in parallel with the body and environment, in order to provide expectations of the sensory feedback, and to enhance and process sensory information. These models can also be run off-line in order to produce imagery, estimate outcomes of different actions, and evaluate and develop motor plans. The framework is initially developed within the context of motor control, where it has been shown that inner models running in parallel with the body can reduce the effects of feedback delay problems. The same mechanisms can account for motor imagery as the off-line driving of the emulator via efference copies. The framework is extended to account for visual imagery as the off-line driving of an emulator of the motor-visual loop. I also show how such systems can provide for amodal spatial imagery. Perception, including visual perception, results from such models being used to form expectations of, and to interpret, sensory input. I close by briefly outlining other cognitive functions that might also be synthesized within this framework, including reasoning, theory of mind phenomena, and language.
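The control-theoretic core of this framework, a forward model driven by an efference copy of the motor command and corrected by a Kalman-filter update when noisy sensory feedback arrives, can be sketched in a few lines. The 1-D linear plant, noise variances, and motor command below are illustrative assumptions, not part of Grush's proposal.

```python
import numpy as np

rng = np.random.default_rng(1)

a, b = 1.0, 0.5          # simple linear "body": x' = a*x + b*u + process noise
q, r = 0.01, 0.1         # process and sensory noise variances

x_true, x_est, p = 0.0, 0.0, 1.0
for t in range(20):
    u = np.sin(0.3 * t)                       # motor command; its copy drives the emulator

    # the real body/environment evolves and produces noisy sensory feedback
    x_true = a * x_true + b * u + rng.normal(0, np.sqrt(q))
    z = x_true + rng.normal(0, np.sqrt(r))

    # emulator: predict the next state (and hence the expected sensation) from the efference copy
    x_pred = a * x_est + b * u
    p_pred = a * p * a + q

    # Kalman update: blend the prediction with the actual sensory signal
    k_gain = p_pred / (p_pred + r)
    x_est = x_pred + k_gain * (z - x_pred)
    p = (1 - k_gain) * p_pred

    print(f"t={t:2d}  predicted={x_pred:+.3f}  sensed={z:+.3f}  estimate={x_est:+.3f}")
```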


2019 ◽  
Vol 7 (18) ◽  
pp. 3085-3089
Author(s):  
Massimo Fioranelli ◽  
Alireza Sepehri ◽  
Maria Grazia Roccia ◽  
Cota Linda ◽  
Chiara Rossi ◽  
...  

Two methods are presented for repairing brain damage in chick embryos. In both, somatic cells of an embryo are introduced into an egg cell and an embryo emerges. In the first method, the injured part of the brain in the head of an embryo is replaced with a healthy part of a brain. In the second method, the heart of an embryo whose brain has died is transplanted together with the embryo's heart. In this mechanism, new blood cells emerge in the bone marrow and transmit information about the transplantation to the subventricular zone (SVZ) of the brain through the circulatory system. The SVZ then produces new neural stem cells, which subsequently divide into neurons. These neurons form new neural circuits within the brain and repair the injured brain. To examine the model, the hearts of two embryos are connected, and their effects on neural circuits are observed.


Author(s):  
Samantha Hughes ◽  
Tansu Celikel

From single-cell organisms to complex neural networks, all evolved to provide control solutions that generate context- and goal-specific actions. Neural circuits performing sensorimotor computation to drive navigation employ inhibitory control as a gating mechanism as they hierarchically transform (multi)sensory information into motor actions. Here, we focus on this literature to critically discuss the proposition that prominent inhibitory projections form sensorimotor circuits. After reviewing the neural circuits of navigation across various invertebrate species, we argue that with increased neural circuit complexity and the emergence of parallel computations, inhibitory circuits acquire new functions. The contribution of inhibitory neurotransmission to navigation goes beyond shaping the communication that drives motor neurons; instead, it includes the encoding of emergent sensorimotor representations. A mechanistic understanding of the neural circuits performing sensorimotor computations in invertebrates will unravel the minimum circuit requirements driving adaptive navigation.

