On the Spatiotemporal Behavior in Biology-Mimicking Computing Systems

2021 ◽  
Author(s):  
Janos Vegh ◽  
Ádám József Berki

Abstract Both the growing demand to cope with "big data" (based on, or assisted by, artificial intelligence) and the interest in understanding the operation of our brain more completely have stimulated efforts to build biology-mimicking computing systems from inexpensive conventional components and to build different ("neuromorphic") computing systems. On one side, those systems require an unusually large number of processors, which introduces performance limitations and nonlinear scaling. On the other side, neuronal operation drastically differs from conventional workloads. The conduction time (transfer time) is ignored both in conventional computing and in "spatiotemporal" computational models of neural networks, although von Neumann warned: "In the human nervous system the conduction times along the lines (axons) can be longer than the synaptic delays, hence our above procedure of neglecting them aside of τ [the processing time] would be unsound" [1], section 6.3. This difference alone makes imitating biological behavior in a technical implementation hard. Besides, recent issues in computing have called attention to the fact that temporal behavior is a general feature of computing systems, too. Some of its effects in both biological and technical systems have already been noticed. Instead of introducing some "looks like" models, the correct handling of the transfer time is suggested here. Introducing a temporal logic based on the Minkowski transform gives quantitative insight into the operation of both kinds of computing systems and, furthermore, provides a natural explanation of decades-old empirical phenomena. Without considering their temporal behavior correctly, neither an effective implementation nor a true imitation of biological neural systems is possible.
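To see why the conduction (transfer) time matters, a minimal back-of-the-envelope sketch follows; the axon length, conduction velocity, and timing values are illustrative assumptions, not figures from the paper.

```python
# Minimal sketch: compare signal transfer time with processing (synaptic) delay.
# All numeric values below are illustrative assumptions, not data from the paper.

def transfer_time(distance_m: float, velocity_m_per_s: float) -> float:
    """Time for a signal to travel a given distance at a given conduction velocity."""
    return distance_m / velocity_m_per_s

# Biological example: an unmyelinated axon (~1 m/s) spanning 10 cm,
# compared with a typical synaptic (processing) delay of ~1 ms.
axon_transfer = transfer_time(0.10, 1.0)   # 0.1 s
synaptic_delay = 1e-3                      # 1 ms

# Technical example: an electrical signal (~2e8 m/s) crossing a 10 cm board,
# compared with a 1 GHz clock period of 1 ns.
wire_transfer = transfer_time(0.10, 2e8)   # 0.5 ns
clock_period = 1e-9                        # 1 ns

print(f"axon: transfer/processing = {axon_transfer / synaptic_delay:.0f}x")
print(f"wire: transfer/processing = {wire_transfer / clock_period:.1f}x")
# In both regimes the transfer time is comparable to, or much larger than,
# the processing time, so neglecting it distorts the timing of the model.
```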


Informatics ◽  
2021 ◽  
Vol 8 (4) ◽  
pp. 71
Author(s):  
János Végh

Today's computing is based on the classic paradigm proposed by John von Neumann three-quarters of a century ago. That paradigm, however, was justified only for (the timing relations of) vacuum tubes. Technological development invalidated the classic paradigm (but not the model!). It led to catastrophic performance losses in computing systems, from the gate level to large networks, including neuromorphic ones. The model is perfect, but the paradigm is applied outside of its range of validity. The classic paradigm is completed here by providing the "procedure" missing from the "First Draft" that enables computing science to handle cases where the transfer time is not negligible compared to the processing time. The paper reviews whether we can describe the implemented computing processes by using the accurate interpretation of the computing model, and whether we can explain the issues experienced in different fields of today's computing by correcting the unjustified omissions. Furthermore, it discusses some of the consequences of improper technological implementations, from shared media to parallelized operation, suggesting ideas on how computing performance could be improved to meet the growing societal demands.
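A minimal sketch of the kind of timing argument involved, assuming a simple shared-medium model (not the paper's own formulation): once the transfer time is not negligible, parallel speedup saturates regardless of the processor count.

```python
# Minimal sketch of how non-negligible transfer time limits parallel speedup.
# The timing model below is an illustrative assumption, not the paper's formulation.

def speedup(n_processors: int, processing_time: float, transfer_time: float) -> float:
    """Idealized speedup when data must reach each processor over a shared
    medium (transfers serialize) before the processors compute in parallel."""
    serial_time = n_processors * processing_time
    parallel_time = n_processors * transfer_time + processing_time
    return serial_time / parallel_time

# With transfer_time = 0 the speedup grows linearly with the processor count;
# with any nonzero transfer_time it saturates near processing_time / transfer_time.
for n in (10, 100, 1000, 10000):
    print(n, round(speedup(n, processing_time=1.0, transfer_time=0.01), 1))
```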


Author(s):  
Janos Vegh

Classic science seemed to be complete more than a century ago, facing only a few (but a growing number of!) unexplained issues. Introducing time-dependence into classic science explained those issues, and its consistent use led to the birth of a series of modern sciences, including relativistic and quantum physics. Classic computing is based on the paradigm proposed by von Neumann for vacuum tubes only, and it seems to be complete in the same sense. Von Neumann warned, however, that implementing computers under more advanced technological conditions, using the paradigm without considering the transfer time (and especially attempting to imitate neural operation), would be unsound. Nevertheless, classic computing science persists in neglecting the transfer time; it faces a few (but a growing number of!) unexplained issues, and its development has stalled in most of its fields. Introducing time-dependence into classic computing science explains those issues and reveals the reasons for the stalling. It can lead to a revolution in computing, resulting in a modern computing science, in the same way that introducing time-dependence led to the birth of modern science.


Electronics ◽  
2020 ◽  
Vol 9 (9) ◽  
pp. 1526 ◽  
Author(s):  
Choongmin Kim ◽  
Jacob A. Abraham ◽  
Woochul Kang ◽  
Jaeyong Chung

Crossbar-based neuromorphic computing to accelerate neural networks is a popular alternative to conventional von Neumann computing systems. It is also referred to as processing-in-memory and in-situ analog computing. The crossbars have a fixed number of synapses per neuron, so it is necessary to decompose neurons to map networks onto the crossbars. This paper proposes the k-spare decomposition algorithm, which can trade off predictive performance against neuron usage during the mapping. The proposed algorithm performs a two-level hierarchical decomposition. In the first, global decomposition, it decomposes the neural network such that each crossbar has k spare neurons. These neurons are used to improve the accuracy of the partially mapped network in the subsequent local decomposition. Our experimental results using modern convolutional neural networks show that the proposed method can improve the accuracy substantially with only about 10% extra neurons.
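A minimal illustrative sketch of the global decomposition step, assuming a greedy packing scheme and hypothetical crossbar dimensions (not the authors' implementation):

```python
# Minimal illustrative sketch: split each neuron's fan-in across crossbars of limited
# size while reserving k spare neuron slots per crossbar for the later local
# decomposition. The data layout and the function below are assumptions for
# illustration, not the authors' algorithm.

def global_decompose(fan_ins: list[int], crossbar_size: int, k: int) -> list[list[int]]:
    """Greedily pack (possibly split) neurons into crossbars.

    fan_ins       -- number of input synapses of each logical neuron
    crossbar_size -- synapse rows available per crossbar column
    k             -- neuron columns per crossbar left unused, reserved for the
                     subsequent local decomposition that refines accuracy
    Returns a list of crossbars, each a list of fan-in chunk sizes assigned to it.
    """
    usable_columns = crossbar_size - k        # neuron slots available per crossbar
    crossbars: list[list[int]] = [[]]
    for fan_in in fan_ins:
        # Split a neuron whose fan-in exceeds the crossbar height into column chunks.
        chunks = [min(crossbar_size, fan_in - i)
                  for i in range(0, fan_in, crossbar_size)]
        for chunk in chunks:
            if len(crossbars[-1]) >= usable_columns:
                crossbars.append([])           # open a new crossbar
            crossbars[-1].append(chunk)
    return crossbars

# Example: neurons with varying fan-in mapped onto 128x128 crossbars, with k = 13
# spare neurons (~10%) reserved per crossbar.
print(global_decompose([300, 128, 700, 90], crossbar_size=128, k=13))
```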


2018 ◽  
Vol 157 ◽  
pp. 02054 ◽  
Author(s):  
Milan Vaško ◽  
Marián Handrik ◽  
Alžbeta Sapietová ◽  
Jana Handriková

The paper presents an analysis of the use of optimization algorithms in parallel solutions and distributed computing systems. The primary goal is to use evolutionary algorithms and to implement them in parallel calculations. Parallelization of computational algorithms is suitable for the following cases: computational models with a large number of design variables, or cases where the objective function evaluation is time consuming (e.g., FE analysis). MATLAB and the BOINC software package are used as the software platform for applying the distributed optimization algorithms.
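A minimal sketch of the pattern described above, with Python's multiprocessing standing in for the MATLAB/BOINC platform; the objective function and evolutionary operators are illustrative assumptions.

```python
# Minimal sketch: evaluate an expensive objective function for a whole
# evolutionary-algorithm population in parallel. The objective, selection and
# mutation scheme below are illustrative assumptions, not the paper's setup.

import random
from multiprocessing import Pool

def objective(design: list[float]) -> float:
    # Placeholder for a time-consuming evaluation (e.g., an FE analysis).
    return sum(x * x for x in design)

def evolve(pop_size: int = 20, n_vars: int = 10, generations: int = 50) -> list[float]:
    population = [[random.uniform(-5, 5) for _ in range(n_vars)] for _ in range(pop_size)]
    with Pool() as pool:
        for _ in range(generations):
            fitness = pool.map(objective, population)   # parallel evaluations
            ranked = [p for _, p in sorted(zip(fitness, population))]
            parents = ranked[: pop_size // 2]           # truncation selection
            # Mutate parents to refill the population.
            population = parents + [
                [x + random.gauss(0, 0.1) for x in random.choice(parents)]
                for _ in range(pop_size - len(parents))
            ]
    return min(population, key=objective)

if __name__ == "__main__":
    print(evolve())
```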


Electronics ◽  
2018 ◽  
Vol 7 (12) ◽  
pp. 396 ◽  
Author(s):  
Errui Zhou ◽  
Liang Fang ◽  
Binbin Yang

Neuromorphic computing systems are promising alternatives in fields such as pattern recognition and image processing, especially when conventional von Neumann architectures face several bottlenecks. Memristors play vital roles in neuromorphic computing systems and are usually used as synaptic devices. Memristive spiking neural networks (MSNNs) are considered to be more efficient and biologically plausible than other systems due to their spike-based working mechanism. In contrast to previous SNNs with complex architectures, we propose a hardware-friendly architecture and an unsupervised spike-timing-dependent plasticity (STDP) learning method for MSNNs in this paper. The architecture includes an input layer, a feature-learning layer and a voting circuit. To reduce hardware complexity, some constraints are enforced: the proposed architecture has no lateral inhibition and is purely feedforward; it uses the voting circuit as a classifier and does not use additional classifiers; each neuron generates at most one spike, so firing rates and refractory periods need not be considered; and all neurons have the same fixed threshold voltage for classification. The presented unsupervised STDP learning method is time-dependent and uses no homeostatic mechanism. The MNIST dataset is used to demonstrate our proposed architecture and learning method. Simulation results show that our proposed architecture with the learning method achieves a classification accuracy of 94.6%, which outperforms other unsupervised SNNs that use time-based encoding schemes.
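A minimal sketch of a generic time-dependent STDP weight update of the kind referred to above; the exact rule, time constants, and learning rates are illustrative assumptions, not the paper's formulation.

```python
# Minimal sketch of a time-dependent STDP weight update. The rule, time constants
# and learning rates below are illustrative assumptions, not the paper's method.

import math

def stdp_update(w: float, t_pre: float, t_post: float,
                a_plus: float = 0.01, a_minus: float = 0.012,
                tau: float = 20.0, w_min: float = 0.0, w_max: float = 1.0) -> float:
    """Return the updated synaptic weight for one pre/post spike pair (times in ms)."""
    dt = t_post - t_pre
    if dt >= 0:
        # Pre-synaptic spike precedes the post-synaptic spike: potentiate.
        w += a_plus * math.exp(-dt / tau)
    else:
        # Post precedes pre: depress.
        w -= a_minus * math.exp(dt / tau)
    return min(w_max, max(w_min, w))   # keep the weight in a memristor-like range

# Example: a causal pairing (pre 5 ms before post) strengthens the synapse,
# an anti-causal pairing weakens it.
print(stdp_update(0.5, t_pre=10.0, t_post=15.0))
print(stdp_update(0.5, t_pre=15.0, t_post=10.0))
```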


2001 ◽  
Vol 24 (5) ◽  
pp. 812-813
Author(s):  
Roman Borisyuk

Experimental evidence and mathematical/computational models show that in many cases chaotic, irregular oscillations adequately describe the dynamical behaviour of neural systems. Further work is needed to understand the meaning of this dynamical regime for modelling information processing in the brain.


1998 ◽  
Vol 4 (3) ◽  
pp. 229-235 ◽  
Author(s):  
Pierre Marchal

Aside from being known for his contributions to mathematics and physics, John von Neumann is considered one of the founding fathers of computer science and engineering. Not only did he do pioneering work on sequential computing systems, but he also carried out a major investigation of parallel architectures, leading to his work on cellular automata. His exceptional vision and daring, borrowing from biology the concept of genomic information even before the discovery of DNA's double helix, led him to propose the concept of self-reproducing automata.


Author(s):  
Patricia L Lockwood ◽  
Miriam C Klein-Flügge

Abstract Social neuroscience aims to describe the neural systems that underpin social cognition and behaviour. Over the past decade, researchers have begun to combine computational models with neuroimaging to link social computations to the brain. Inspired by approaches from reinforcement learning theory, which describes how decisions are driven by the unexpectedness of outcomes, accounts of the neural basis of prosocial learning, observational learning, mentalizing and impression formation have been developed. Here we provide an introduction for researchers who wish to use these models in their studies. We consider both theoretical and practical issues related to their implementation, with a focus on specific examples from the field.
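A minimal sketch of the prediction-error learning rule such models build on, where the "unexpectedness of outcomes" drives updating; the scenario and parameter values are illustrative assumptions, not taken from the review.

```python
# Minimal sketch of a prediction-error learning rule of the kind these computational
# models build on (e.g., for prosocial or observational learning). The scenario and
# parameters are illustrative assumptions, not taken from the review.

def update_value(value: float, outcome: float, learning_rate: float = 0.2) -> float:
    """Update an expectation from the prediction error (outcome unexpectedness)."""
    prediction_error = outcome - value
    return value + learning_rate * prediction_error

# Example: learning how often an action benefits another person (1 = reward, 0 = none).
outcomes = [1, 1, 0, 1, 1, 1, 0, 1]
value = 0.5                        # initial expectation
for outcome in outcomes:
    value = update_value(value, outcome)
print(round(value, 3))             # expectation drifts toward the observed rate
```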

