analog computation
Recently Published Documents

TOTAL DOCUMENTS: 188 (last five years: 12)
H-INDEX: 21 (last five years: 1)

2021
Author(s): Corey J. Maley

Representation is typically taken to be importantly separate from its physical implementation. This is exemplified in Marr's three-level framework, widely cited and often adopted in neuroscience. However, the separation between representation and physical implementation is not a necessary feature of information-processing systems. In particular, when it comes to analog computational systems, Marr's representational/algorithmic level and implementational level collapse into a single level. Insofar as analog computation is a better way of understanding neural computation than other notions, Marr's three-level framework must be amended into a two-level framework. Far from being a problem or limitation, however, this sheds light on how physical media can be representational without a separate, medium-independent representational level.


2021, Vol 12 (1)
Author(s): Yin Wang, Hongwei Tang, Yufeng Xie, Xinyu Chen, Shunli Ma, ...

In-memory computing may enable multiply-accumulate (MAC) operations, which are the primary calculations used in artificial intelligence (AI). Performing MAC operations with high capacity in a small area with high energy efficiency remains a challenge. In this work, we propose a circuit architecture that integrates monolayer MoS2 transistors in a two-transistor–one-capacitor (2T-1C) configuration. In this structure, the memory portion is similar to a 1T-1C dynamic random access memory (DRAM) cell, so the cycling endurance and erase/write speed theoretically inherit the merits of DRAM. In addition, the ultralow leakage current of the MoS2 transistor enables the storage of multi-level voltages on the capacitor with a long retention time. The electrical characteristics of a single MoS2 transistor also allow analog computation, by multiplying the drain voltage by the voltage stored on the capacitor; the sum-of-products is then obtained by converging the currents from multiple 2T-1C units. Based on our experimental results, a neural network was trained ex situ for image recognition with 90.3% accuracy. In the future, such 2T-1C units could be integrated into three-dimensional (3D) circuits with dense logic and memory layers for low-power in-situ training of neural networks in hardware.
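
The MAC mechanism described in this abstract (an analog multiply in each unit, with the products summed as currents on a shared line) can be illustrated numerically. The sketch below is a deliberate idealization, not the authors' device model: it assumes each 2T-1C unit contributes a current proportional to the product of its drain voltage and its stored capacitor voltage, with a hypothetical proportionality constant G.

```python
import numpy as np

# Idealized sketch of an analog multiply-accumulate (MAC) using 2T-1C units.
# Assumption (not from the paper): each unit's output current is
# I = G * V_drain * V_stored, i.e., the transistor acts as an ideal
# analog multiplier with a hypothetical coefficient G.
G = 1e-6  # hypothetical transconductance-like coefficient (A/V^2)

def mac_current(v_drain, v_stored):
    """Return the summed output current of a row of 2T-1C units.

    v_drain  -- input voltages applied to the drains (the vector x)
    v_stored -- multi-level voltages held on the capacitors (the weights w)
    The total current is proportional to dot(x, w): each unit performs an
    analog multiply, and current summation on the shared line performs
    the accumulate.
    """
    return float(np.sum(G * v_drain * v_stored))

# Example: a 4-input MAC
x = np.array([0.2, 0.5, 0.1, 0.8])  # input (drain) voltages, in volts
w = np.array([0.6, 0.3, 0.9, 0.4])  # stored capacitor voltages, in volts
print(mac_current(x, w))  # ~ G * dot(x, w)
```

In this picture, the DRAM-like capacitor holds w between operations; the paper's point about the ultralow leakage of the MoS2 transistor corresponds to w decaying very slowly, which is what makes multi-level (analog) weight storage viable.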


Author(s): Diogo Poças, Jeffery Zucker

Analog computation attempts to capture any type of computation that can be realized by any type of physical system or physical process, including but not limited to computation over continuous measurable quantities. A pioneering model is the General Purpose Analog Computer (GPAC), introduced by Shannon in 1941. The GPAC is capable of manipulating real-valued data streams; however, it has been shown to be strictly less powerful than other models of computation on the reals, such as computable analysis. In previous work, we proposed an extension of the Shannon GPAC, denoted LGPAC, designed to overcome these limitations. Not only is the LGPAC capable of expressing computation over general data spaces $\mathcal{X}$, but it also directly incorporates approximating computations by means of a limit module. An important feature of this work is the generalisation of the underlying computational framework from Banach spaces to Fréchet spaces. In this paper, we compare the LGPAC with a digital model of computation based on effective representations (tracking computability), and establish general conditions under which LGPAC-generable functions are tracking computable.
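
As background for the abstract above: Shannon's GPAC composes a small set of analog units (constants, adders, multipliers, and integrators) in feedback loops. The classic textbook example, not part of this paper's formal development, is a pair of integrators wired in feedback to generate sine and cosine; the sketch below approximates the continuous integration with forward-Euler time stepping.

```python
import math

# Sketch of a Shannon GPAC circuit: two integrators in feedback realizing
#   y1' = y2,   y2' = -y1,   y1(0) = 0,  y2(0) = 1,
# whose solutions are y1 = sin(t) and y2 = cos(t).
# A physical GPAC integrates continuously; here we approximate with
# forward-Euler steps of size dt.
dt, T = 1e-4, 2 * math.pi
y1, y2 = 0.0, 1.0
t = 0.0
while t < T:
    # Each integrator accumulates its input; both updates use the old
    # values, mirroring the simultaneous operation of the analog units.
    y1, y2 = y1 + dt * y2, y2 - dt * y1
    t += dt

print(y1, math.sin(T))  # y1 tracks sin(t) up to O(dt) Euler error
print(y2, math.cos(T))  # y2 tracks cos(t)
```

Functions generable this way are solutions of ordinary differential equations; the LGPAC's limit module goes further by letting a channel carry the limit of a converging sequence of approximations, which is the feature the paper's tracking-computability comparison addresses.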


2020, pp. 297-316
Author(s): Gualtiero Piccinini

This chapter rejects the common assimilation of neural computation to either analog or digital computation, concluding that neural computation is sui generis. Analog computation requires continuous signals; digital computation requires strings of digits. But typical neural signals, such as spike trains, are graded like continuous signals while also being constituted by discrete functional elements (spikes); thus, typical neural signals are neither continuous signals nor strings of digits, and neural computation is sui generis. The chapter draws three important consequences of a proper understanding of neural computation for the theory of cognition. First, understanding neural computation requires a specially designed mathematical theory (or theories) rather than the mathematical theories of analog or digital computation. Second, several popular views about neural computation turn out to be incorrect. Third, computational theories of cognition that rely on non-neural notions of computation ought to be replaced or reinterpreted in terms of neural computation.


2020, pp. 2000199
Author(s): Eva Bestelink, Olivier de Sagazan, Lea Motte, Max Bateson, Benedikt Schultes, ...
