Revising the Classic Computing Paradigm and Its Technological Implementations

Informatics ◽  
2021 ◽  
Vol 8 (4) ◽  
pp. 71
Author(s):  
János Végh

Today’s computing is based on the classic paradigm proposed by John von Neumann three-quarters of a century ago. That paradigm, however, was justified only for (the timing relations of) vacuum tubes. Technological development has invalidated the classic paradigm (but not the model!), leading to catastrophic performance losses in computing systems, from the gate level to large networks, including neuromorphic ones. The model is perfect, but the paradigm is applied outside its range of validity. The classic paradigm is completed here by providing the “procedure” missing from the “First Draft”, which enables computing science to handle cases where the transfer time is not negligible alongside the processing time. The paper reviews whether we can describe the implemented computing processes using an accurate interpretation of the computing model, and whether we can explain the issues experienced in different fields of today’s computing by eliminating the unjustified omissions. Furthermore, it discusses some of the consequences of improper technological implementations, from shared media to parallelized operation, and suggests ideas on how computing performance could be improved to meet growing societal demands.
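The quantitative stake of the missing “procedure” can be illustrated with a toy timing model: once the transfer time is comparable to the processing time, the payload efficiency of a processing unit collapses, however fast its gates are. A minimal sketch in Python (the function names and the example numbers are illustrative assumptions, not from the paper):

```python
# Toy model of the paper's central point: once transfer time is not
# negligible next to processing time, the apparent operation time and
# the achievable efficiency degrade, no matter how fast the gates are.

def apparent_time(t_proc: float, t_transfer: float) -> float:
    """Total wall-clock time of one operation when operands must first
    travel to the processing unit (transfer is not overlapped)."""
    return t_proc + t_transfer

def temporal_efficiency(t_proc: float, t_transfer: float) -> float:
    """Fraction of the time the unit spends doing payload work."""
    return t_proc / apparent_time(t_proc, t_transfer)

# Vacuum-tube era: transfer negligible vs. processing -> paradigm holds.
print(temporal_efficiency(t_proc=1.0, t_transfer=0.001))  # ~0.999

# Modern gates: processing shrank by orders of magnitude, wiring did not.
print(temporal_efficiency(t_proc=0.001, t_transfer=1.0))  # ~0.001
```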

1998 ◽  
Vol 4 (3) ◽  
pp. 229-235 ◽  
Author(s):  
Pierre Marchal

Aside from being known for his contributions to mathematics and physics, John von Neumann is considered one of the founding fathers of computer science and engineering. Not only did he do pioneering work on sequential computing systems, but he also carried out a major investigation of parallel architectures, leading to his work on cellular automata. His exceptional vision and daring, borrowing from biology the concept of genomic information even before the discovery of DNA's double helix, led him to propose the concept of self-reproducing automata.


2021 ◽  
Author(s):  
János Végh ◽
Ádám József Berki

Abstract Both the growing demand to cope with “big data” (based on, or assisted by, artificial intelligence) and the interest in understanding the operation of our brain more completely have stimulated efforts to build biology-mimicking computing systems from inexpensive conventional components and to build different (“neuromorphic”) computing systems. On one side, those systems require an unusually large number of processors, which introduces performance limitations and nonlinear scaling. On the other side, neuronal operation drastically differs from conventional workloads. The conduction time (transfer time) is ignored in both conventional computing and the “spatiotemporal” computational models of neural networks, although von Neumann warned: “In the human nervous system the conduction times along the lines (axons) can be longer than the synaptic delays, hence our above procedure of neglecting them aside of τ [the processing time] would be unsound” [1], section 6.3. This difference alone makes imitating biological behavior in a technical implementation hard. Besides, recent issues in computing have called attention to the fact that temporal behavior is a general feature of computing systems, too. Some of its effects in both biological and technical systems have already been noticed. Instead of introducing “looks like” models, the correct handling of the transfer time is suggested here. Introducing temporal logic, based on the Minkowski transform, gives quantitative insight into the operation of both kinds of computing systems, and furthermore provides a natural explanation of decades-old empirical phenomena. Without considering their temporal behavior correctly, neither effective implementation nor a true imitation of biological neural systems is possible.
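One way to gloss the Minkowski-style temporal logic mentioned above is as a causality check on (time, position) event pairs with a finite signal speed. The sketch below is an illustrative reading under that assumption, not the authors' formalism; the axon speed used is a typical textbook value for unmyelinated fibers:

```python
# Illustrative reading of the temporal logic: events are (time, position)
# pairs, and a signal propagating at finite speed v can link event a to
# event b only if it can cover the distance in the available time.

from dataclasses import dataclass

V_AXON = 1.0  # assumed conduction speed of an unmyelinated axon, m/s

@dataclass
class Event:
    t: float  # seconds
    x: float  # metres along the signal path

def can_influence(a: Event, b: Event, v: float = V_AXON) -> bool:
    """True if a signal emitted at event a can arrive by event b,
    i.e., b lies inside the forward 'cone' of a."""
    return b.t - a.t >= abs(b.x - a.x) / v

# A spike travelling 1 mm at 1 m/s needs 1 ms of conduction time, longer
# than a typical ~0.5 ms synaptic delay: exactly von Neumann's warning.
print(can_influence(Event(t=0.0, x=0.0), Event(t=0.5e-3, x=1e-3)))  # False
print(can_influence(Event(t=0.0, x=0.0), Event(t=1.5e-3, x=1e-3)))  # True
```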


Author(s):  
János Végh

Classic science seemed complete more than a century ago, facing only a few (but a growing number of!) unexplained issues. Introducing time-dependence into classic science explained those issues, and its consistent use led to the birth of a series of modern sciences, including relativistic and quantum physics. Classic computing is based on the paradigm von Neumann proposed for vacuum tubes only, and it seems complete in the same sense. Von Neumann warned, however, that implementing computers under more advanced technological conditions, using the paradigm without considering the transfer time (and especially attempting to imitate neural operation), would be unsound. Nevertheless, classic computing science persists in neglecting the transfer time; it faces a few (but a growing number of!) unexplained issues, and its development has stalled in most of its fields. Introducing time-dependence into classic computing science explains those issues and uncovers the reasons for the experienced stalling. It can lead to a revolution in computing, resulting in a modern computing science, in the same way as it led to the birth of modern science.


2004 ◽  
Vol 174 (12) ◽  
pp. 1371 ◽  
Author(s):  
Mikhail I. Monastyrskii

Electronics ◽  
2020 ◽  
Vol 9 (9) ◽  
pp. 1526 ◽  
Author(s):  
Choongmin Kim ◽  
Jacob A. Abraham ◽  
Woochul Kang ◽  
Jaeyong Chung

Crossbar-based neuromorphic computing to accelerate neural networks is a popular alternative to conventional von Neumann computing systems. It is also referred to as processing-in-memory and in-situ analog computing. The crossbars have a fixed number of synapses per neuron, so it is necessary to decompose neurons to map networks onto the crossbars. This paper proposes the k-spare decomposition algorithm, which can trade off predictive performance against neuron usage during the mapping. The proposed algorithm performs a two-level hierarchical decomposition. In the first, global decomposition, it decomposes the neural network such that each crossbar has k spare neurons. These neurons are used to improve the accuracy of the partially mapped network in the subsequent local decomposition. Our experimental results using modern convolutional neural networks show that the proposed method can improve accuracy substantially with only about 10% extra neurons.
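The global decomposition step can be pictured with a short sketch: pack neurons onto crossbars of fixed capacity, but fill only capacity minus k slots, reserving k spares per crossbar for the later accuracy-recovering local decomposition. A minimal illustration of that reserve-k-spares idea; the function name, the packing strategy, and the example sizes are our assumptions, not taken from the paper:

```python
# Illustrative sketch of the first (global) decomposition step: neurons
# are packed onto crossbars of fixed capacity, leaving k spare neurons
# per crossbar for the subsequent accuracy-improving local decomposition.

from typing import List

def global_decompose(n_neurons: int, crossbar_size: int, k: int) -> List[int]:
    """Return the number of mapped neurons per crossbar, reserving
    k spares on each. Requires 0 <= k < crossbar_size."""
    usable = crossbar_size - k
    full, rest = divmod(n_neurons, usable)
    return [usable] * full + ([rest] if rest else [])

# 1000 neurons on 128-wide crossbars: k=0 needs 8 crossbars; k=8 needs 9
# crossbars but leaves 8 spare neurons on each for accuracy recovery.
print(len(global_decompose(1000, 128, k=0)))  # 8
print(len(global_decompose(1000, 128, k=8)))  # 9
```

This makes the trade-off in the abstract concrete: larger k costs extra crossbars (neuron usage) but buys headroom for recovering predictive performance.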


2000 ◽  
Vol 6 (4) ◽  
pp. 347-361 ◽  
Author(s):  
Barry McMullin

In the late 1940s John von Neumann began to work on what he intended as a comprehensive “theory of [complex] automata.” He started to develop a book-length manuscript on the subject in 1952. However, he put it aside in 1953, apparently due to pressure of other work. Because of his tragically early death in 1957, he was never to return to it. The draft manuscript was eventually edited, and combined for publication with some related lecture transcripts, by Burks in 1966. It is clear from the time and effort that von Neumann invested in it that he considered this to be a very significant and substantial piece of work. However, subsequent commentators (beginning even with Burks) have found it surprisingly difficult to articulate this substance. Indeed, it has since been suggested that von Neumann's results in this area either are trivial, or, at the very least, could have been achieved by much simpler means. It is an enigma. In this paper I review the history of this debate (briefly) and then present my own attempt at resolving the issue by focusing on an analysis of von Neumann's problem situation. I claim that this reveals the true depth of von Neumann's achievement and influence on the subsequent development of this field, and further that it generates a whole family of new consequent problems, which can still serve to inform—if not actually define—the field of artificial life for many years to come.


Author(s):  
Ziling Wang ◽  
Li Luo ◽  
Jie Li ◽  
Lidan Wang ◽  
Shukai Duan

Abstract In-memory computing is widely expected to break the von Neumann bottleneck and the memory wall. The memristor, with its inherent nonvolatility, is considered a strong candidate for executing this new computing paradigm. In this work, we present a reconfigurable nonvolatile logic method based on a one-transistor-two-memristor (1T2M) device structure, which inhibits the sneak path in large-scale crossbar arrays. By merely adjusting the applied voltage signals, all 16 binary Boolean logic functions can be achieved in a single cell. More complex computing tasks, including a one-bit parallel full adder and a Set-Reset latch, have also been realized with optimization, showing a simple operation process, high flexibility, and low computational complexity. Circuit verification based on Cadence PSpice simulation is also provided, proving the feasibility of the proposed design. The work in this paper is intended to make progress in constructing architectures for the in-memory computing paradigm.
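For readers unfamiliar with the count, the “16 binary Boolean logic functions” are simply all possible truth tables over two inputs: four input combinations, each assigned an output bit, give 2^4 = 16 functions. A minimal sketch enumerating that function space (illustrative only; it does not model the 1T2M cell or its voltage signals):

```python
# Enumerate the 16 two-input Boolean functions: each of the 2**4 = 16
# ways to assign an output bit to the four input combinations is one
# function (FALSE, AND, XOR, OR, NAND, ..., TRUE). The 1T2M cell selects
# among these by the applied voltages; here we only list the space.

from itertools import product

inputs = list(product((0, 1), repeat=2))  # (p, q) in fixed order

for code in range(16):
    # Bit i of `code` is the output for the i-th input combination.
    table = {pq: (code >> i) & 1 for i, pq in enumerate(inputs)}
    row = " ".join(str(table[pq]) for pq in inputs)
    print(f"f{code:02d}: {row}")
# Prints 16 rows: one truth table per representable function.
```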

