Computational Capacity of Complex Memcapacitive Networks

2021 ◽  
Vol 17 (2) ◽  
pp. 1-25
Author(s):  
Dat Tran ◽  
Christof Teuscher

Emerging memcapacitive nanoscale devices have the potential to perform computations in new ways. In this article, we systematically study, to the best of our knowledge for the first time, the computational capacity of complex memcapacitive networks, which function as reservoirs in reservoir computing, one of the brain-inspired computing architectures. Memcapacitive networks are composed of memcapacitive devices randomly connected through nanowires. Previous studies have shown that both regular and random reservoirs provide sufficient dynamics to perform simple tasks. What computational capability do complex memcapacitive networks offer, and which topological structures allow them to solve complex tasks efficiently? Studies show that small-world power-law (SWPL) networks offer an ideal trade-off between the communication properties and the wiring cost of networks. In this study, we illustrate the computing nature of SWPL memcapacitive reservoirs by exploring two essential properties, fading memory and linear separation, through measurements of kernel quality. Compared to ideal reservoirs, nanowire memcapacitive reservoirs had a better dynamic response and improved their performance by 4.67% on three tasks: MNIST, Isolated Spoken Digits, and CIFAR-10. On the same three tasks, compared to memristive reservoirs, nanowire memcapacitive reservoirs achieved comparable performance with far less power: on average about 99×, 17×, and 277× less, respectively. Simulation results of the topological transformation of memcapacitive networks reveal that the topological structures of the memcapacitive SWPL reservoirs did not affect their performance but contributed significantly to the wiring cost and the power consumption of the systems. The minimum trade-off between the wiring cost and the power consumption occurred at different network settings of α and β: 4.5 and 0.61 for Biolek reservoirs, 2.7 and 1.0 for Mohamed reservoirs, and 3.0 and 1.0 for Najem reservoirs. The results of our research illustrate the computational capacity of complex memcapacitive networks as reservoirs in reservoir computing. Such memcapacitive networks with an SWPL topology are energy-efficient systems suitable for low-power applications such as mobile devices and the Internet of Things.
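
The kernel-quality measure referenced above is commonly estimated as the rank of a matrix whose columns are reservoir states collected for distinct input streams. The sketch below shows that measurement in minimal form; the `make_esn_step` helper is a generic tanh echo-state reservoir used purely as a stand-in, not the authors' memcapacitive device models.

```python
import numpy as np

def kernel_quality(reservoir_step, n_nodes, n_streams, stream_len, rng):
    """Estimate kernel quality as the rank of the matrix whose columns are
    the final reservoir states reached for distinct random input streams."""
    states = np.zeros((n_nodes, n_streams))
    for j in range(n_streams):
        x = np.zeros(n_nodes)                          # reset the reservoir
        u = rng.uniform(-1.0, 1.0, size=stream_len)    # one random input stream
        for t in range(stream_len):
            x = reservoir_step(x, u[t])
        states[:, j] = x
    return np.linalg.matrix_rank(states)

def make_esn_step(n_nodes, spectral_radius=0.9, seed=0):
    """Hypothetical tanh echo-state reservoir, used only for illustration."""
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((n_nodes, n_nodes))
    W *= spectral_radius / np.abs(np.linalg.eigvals(W)).max()
    w_in = rng.uniform(-1.0, 1.0, size=n_nodes)
    return lambda x, u: np.tanh(W @ x + w_in * u)

if __name__ == "__main__":
    n = 100
    step = make_esn_step(n)
    rank = kernel_quality(step, n, n, 50, np.random.default_rng(1))
    print("kernel quality (rank):", rank)
```

A high rank indicates that the reservoir maps different input streams to linearly separable states, which is the linear-separation property the abstract refers to.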

Author(s):  
Matthew Dale ◽  
Julian F. Miller ◽  
Susan Stepney ◽  
Martin A. Trefzer

The reservoir computing (RC) framework states that any nonlinear, input-driven dynamical system (the reservoir) exhibiting properties such as a fading memory and input separability can be trained to perform computational tasks. This broad inclusion of systems has led to many new physical substrates for RC. Properties essential for reservoirs to compute are tuned through reconfiguration of the substrate, such as a change in virtual topology or physical morphology. As a result, each substrate possesses a unique ‘quality’—obtained through reconfiguration—to realize different reservoirs for different tasks. Here we describe an experimental framework to characterize the quality of potentially any substrate for RC. Our framework reveals that a definition of quality is not only useful to compare substrates, but can also help map the non-trivial relationship between properties and task performance. In the wider context, the framework offers a greater understanding of what makes a dynamical system compute, helping improve the design of future substrates for RC.
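
The fading-memory property mentioned above is often checked empirically by driving the reservoir from two different initial states with the same input stream and verifying that the state difference decays. A minimal sketch under that assumption follows; the toy tanh reservoir is illustrative only and does not correspond to any specific physical substrate from the paper.

```python
import numpy as np

def fading_memory_check(reservoir_step, n_nodes, stream_len, rng, tol=1e-6):
    """Drive the reservoir from two different initial states with the *same*
    input stream and check that the state difference decays towards zero."""
    x_a = rng.standard_normal(n_nodes)
    x_b = rng.standard_normal(n_nodes)
    u = rng.uniform(-1.0, 1.0, size=stream_len)
    gap = np.inf
    for t in range(stream_len):
        x_a = reservoir_step(x_a, u[t])
        x_b = reservoir_step(x_b, u[t])
        gap = np.linalg.norm(x_a - x_b)
    return gap < tol, gap

if __name__ == "__main__":
    n, rng = 100, np.random.default_rng(0)
    W = rng.standard_normal((n, n))
    W *= 0.9 / np.abs(np.linalg.eigvals(W)).max()   # spectral radius below 1
    w_in = rng.uniform(-1.0, 1.0, size=n)
    step = lambda x, u: np.tanh(W @ x + w_in * u)
    ok, gap = fading_memory_check(step, n, 500, rng)
    print("fading memory satisfied:", ok, "final state gap:", gap)
```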


Author(s):  
Felix Köster ◽  
Dominik Ehlert ◽  
Kathy Lüdge

We analyse the memory capacity of a delay-based reservoir computer with a Hopf normal form as nonlinearity and numerically compute the linear as well as the higher-order recall capabilities. A possible physical realization could be a laser with an external cavity, for which the information is fed via electrical injection. A task-independent quantification of the computational capability of the reservoir system is done via a complete orthonormal set of basis functions. Our results suggest that even for constant readout dimension the total memory capacity depends on the ratio between the information input period, also called the clock cycle, and the time delay in the system. Optimal performance is found for a time delay of about 1.6 times the clock cycle.
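
The linear part of the memory capacity discussed above is conventionally computed by training, for each delay k, a linear readout that reconstructs the delayed input u(t − k) and summing the squared correlation coefficients over k. The full task-independent quantification in the paper additionally uses higher-order (nonlinear) basis functions, which this sketch omits; a generic echo-state reservoir stands in for the delay system.

```python
import numpy as np

def linear_memory_capacity(states, inputs, max_delay, washout=100):
    """Linear memory capacity: for each delay k, fit a linear readout that
    reconstructs u(t - k) from the reservoir state x(t), score it with the
    squared correlation coefficient, and sum over all delays.

    states : (T, N) array of reservoir states x(t)
    inputs : (T,)   array of scalar inputs u(t)
    """
    T, N = states.shape
    total = 0.0
    for k in range(1, max_delay + 1):
        X = states[washout + k:, :]          # states x(t) for t >= washout + k
        target = inputs[washout:T - k]       # delayed inputs u(t - k)
        w = np.linalg.solve(X.T @ X + 1e-8 * np.eye(N), X.T @ target)
        total += np.corrcoef(X @ w, target)[0, 1] ** 2
    return total

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n, T = 50, 5000
    W = rng.standard_normal((n, n))
    W *= 0.9 / np.abs(np.linalg.eigvals(W)).max()
    w_in = rng.uniform(-1.0, 1.0, size=n)
    u = rng.uniform(-1.0, 1.0, size=T)
    x, states = np.zeros(n), np.zeros((T, n))
    for t in range(T):
        x = np.tanh(W @ x + w_in * u[t])
        states[t] = x
    print("linear memory capacity:", linear_memory_capacity(states, u, 40))
```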


Author(s):  
Sandrine Boumard ◽  
Mika Lasanen ◽  
Olli Apilo ◽  
Atso Hekkala ◽  
Cedric Cassan ◽  
...  

2016 ◽  
Vol 4 (4) ◽  
pp. 118-121
Author(s):  
Pankaj Prajapati ◽  
Dr. Shyam Akashe

At the beginning of the last decade, battery-powered hand-held devices such as mobile phones and laptop computers emerged. Such applications require devices that consume as little energy as possible, so this article focuses on power consumption and how to calculate it. We analyse different delay lines based on CMOS architectures and examine how the supply voltage affects the power consumption of a digital delay line. Based on this analysis of the performance parameters, a trade-off is made for better delay-line performance.
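
The power calculation referred to above is conventionally dominated by the dynamic switching term P_dyn = α · C · V_dd² · f plus a static leakage term V_dd · I_leak, so the supply voltage enters quadratically. A first-order sketch with illustrative parameter values (not figures from the paper):

```python
def cmos_power(c_load_f, v_dd, freq_hz, activity=0.5, i_leak_a=0.0):
    """First-order CMOS power estimate (illustrative values only).

    dynamic switching power: P_dyn  = activity * C_load * Vdd**2 * f
    static leakage power:    P_leak = Vdd * I_leak
    """
    p_dyn = activity * c_load_f * v_dd ** 2 * freq_hz
    p_leak = v_dd * i_leak_a
    return p_dyn + p_leak

# Example: 10 fF effective load, 0.9 V supply, 1 GHz switching rate.
print(cmos_power(10e-15, 0.9, 1e9))   # ~4.05e-06 W
```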


Photonics ◽  
2019 ◽  
Vol 6 (4) ◽  
pp. 124 ◽  
Author(s):  
Krishan Harkhoe ◽  
Guy Van der Sande

Reservoir computing has rekindled neuromorphic computing in photonics. One of the simplest technological implementations of reservoir computing consists of a semiconductor laser with delayed optical feedback. In this delay-based scheme, virtual nodes are distributed in time with a certain node distance and form a time-multiplexed network. The information processing performance of a semiconductor laser-based reservoir computing (RC) system is usually analysed by testing the laser-based reservoir computer on specific benchmark tasks. In this work, we will illustrate the optimal performance of the system on a chaotic time-series prediction benchmark. However, the goal is to analyse the reservoir’s performance in a task-independent way. This is done by calculating the computational capacity, a measure for the total number of independent calculations that the system can handle. We focus on the dependence of the computational capacity on the specifics of the masking procedure. We find that the computational capacity depends strongly on the virtual node distance, with an optimal node spacing of 30 ps. In addition, we show that the computational capacity can be further increased by allowing for a well-chosen mismatch between delay and input data sample time.
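
The masking procedure referred to above distributes virtual nodes in time: each input sample is held for one clock cycle T = N·θ and multiplied by a piecewise-constant mask with N segments of duration θ, the virtual node distance (found optimal here at about 30 ps). Below is a minimal sketch of generating the masked drive signal only; the laser and feedback dynamics are not modelled, and the random binary mask is an assumption for illustration.

```python
import numpy as np

def mask_input(samples, n_virtual_nodes, theta, dt, rng=None):
    """Time-multiplex a scalar input sequence for a delay-based reservoir.

    Each sample is held for one clock cycle T = n_virtual_nodes * theta and
    multiplied by a random piecewise-constant mask whose segments have
    duration theta (the virtual node distance).  The masked drive signal is
    returned on a time grid of step dt.
    """
    if rng is None:
        rng = np.random.default_rng(0)
    mask = rng.choice([-1.0, 1.0], size=n_virtual_nodes)   # binary mask values
    steps_per_node = int(round(theta / dt))
    per_cycle = np.repeat(mask, steps_per_node)            # mask over one clock cycle
    return np.concatenate([s * per_cycle for s in samples])

# Example: 50 virtual nodes, 30 ps node distance, 1 ps integration step.
u = np.sin(np.linspace(0.0, 2.0 * np.pi, 20))
drive = mask_input(u, n_virtual_nodes=50, theta=30e-12, dt=1e-12)
print(drive.shape)   # 20 samples * 50 nodes * 30 steps per node
```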

