computational memory
Recently Published Documents


TOTAL DOCUMENTS: 38 (five years: 19)

H-INDEX: 8 (five years: 1)

2021 ◽  
Vol 24 (5) ◽  
pp. 1356-1379
Author(s):  
Daegeun Yoon ◽  
Donghyun You

Abstract: A fractional derivative is a temporally nonlocal operation that is computationally intensive because it accumulates contributions from function values at all past times. To lessen the computational load while maintaining accuracy, a novel numerical method for the Caputo fractional derivative is proposed. The present adaptive memory method significantly reduces the memory required to store function values at past time points, and also significantly improves accuracy by computing convolution weights for past function values that can be non-uniformly distributed in time. The superior accuracy of the present method over previously reported methods is established by deriving the numerical errors analytically. The sub-diffusion process of a time-fractional diffusion equation is simulated to demonstrate both the accuracy and the computational efficiency of the present method.
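As an illustration of the memory burden this abstract describes, the classical L1 scheme for the Caputo derivative (a standard textbook discretization, not the adaptive method proposed in the paper) must retain every past function value, so each evaluation costs O(n) storage and work. A minimal sketch, with step size and order chosen arbitrarily:

```python
import math

def caputo_l1(f_vals, dt, alpha):
    """Caputo derivative of order alpha (0 < alpha < 1) at the last grid
    point via the classical L1 scheme. The sum runs over ALL past
    increments -- the O(n) memory cost that adaptive-memory methods target."""
    n = len(f_vals) - 1
    coeff = dt ** (-alpha) / math.gamma(2.0 - alpha)
    total = 0.0
    for k in range(n):
        # convolution weight for the increment ending k steps in the past
        b_k = (k + 1) ** (1 - alpha) - k ** (1 - alpha)
        total += b_k * (f_vals[n - k] - f_vals[n - k - 1])
    return coeff * total

# For f(t) = t the exact Caputo derivative is t**(1-alpha) / Gamma(2-alpha),
# and the L1 scheme reproduces it up to rounding (the weights telescope).
alpha, dt, n = 0.5, 0.01, 200
f_vals = [k * dt for k in range(n + 1)]
t_n = n * dt
approx = caputo_l1(f_vals, dt, alpha)
exact = t_n ** (1 - alpha) / math.gamma(2 - alpha)
```

Note that every call re-reads all `n` stored values; reducing that history, while keeping accuracy, is exactly the problem the paper addresses.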


Author(s):  
Yinuo Shi ◽  
Kequn Chi ◽  
Zhou Li ◽  
Wenbiao Zhang ◽  
Xiang Feng ◽  
...  

2021 ◽  
Vol 12 (1) ◽  
Author(s):  
Geethan Karunaratne ◽  
Manuel Schmuck ◽  
Manuel Le Gallo ◽  
Giovanni Cherubini ◽  
Luca Benini ◽  
...  

Abstract: Traditional neural networks require enormous amounts of data to build their complex mappings during a slow training procedure that hinders their ability to relearn and adapt to new data. Memory-augmented neural networks enhance neural networks with an explicit memory to overcome these issues. Access to this explicit memory, however, occurs via soft read and write operations involving every individual memory entry, resulting in a bottleneck when implemented using the conventional von Neumann computer architecture. To overcome this bottleneck, we propose a robust architecture that employs a computational memory unit as the explicit memory, performing analog in-memory computation on high-dimensional (HD) vectors while closely matching 32-bit software-equivalent accuracy. This is achieved by a content-based attention mechanism that represents unrelated items in the computational memory with uncorrelated HD vectors, whose real-valued components can be readily approximated by binary or bipolar components. Experimental results demonstrate the efficacy of our approach on few-shot image classification tasks on the Omniglot dataset using more than 256,000 phase-change memory devices. Our approach effectively merges the richness of deep neural network representations with HD computing, paving the way for robust vector-symbolic manipulations applicable in reasoning, fusion, and compression.
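The content-based attention over uncorrelated bipolar HD vectors can be sketched in plain Python. This is a minimal illustration of the principle only, not the authors' phase-change-memory implementation; the dimension, memory size, and sharpness parameter are arbitrary choices for the demo:

```python
import math
import random

def bipolar(d, rng):
    """Random bipolar (+1/-1) HD vector; independent high-dimensional
    draws are quasi-orthogonal (cosine similarity concentrates near 0)."""
    return [rng.choice((-1, 1)) for _ in range(d)]

def cosine(u, v):
    # every bipolar vector has norm sqrt(d), so cosine = dot / d
    return sum(a * b for a, b in zip(u, v)) / len(u)

def attend(query, keys, sharpness=10.0):
    """Content-based attention read: softmax over cosine similarities
    between the query and every memory row. In the in-memory-computing
    setting, each dot product maps to one analog matrix-vector operation
    on the device array instead of a loop over entries."""
    exps = [math.exp(sharpness * cosine(query, k)) for k in keys]
    z = sum(exps)
    return [e / z for e in exps]

rng = random.Random(0)
d = 10_000
memory = [bipolar(d, rng) for _ in range(8)]   # 8 stored (unrelated) items
weights = attend(memory[0], memory)            # query matches item 0
```

Because unrelated keys are quasi-orthogonal, the matching entry captures nearly all of the attention weight while every other entry receives a weight near zero, which is what makes the binary/bipolar approximation of the real-valued components viable.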


2021 ◽  
Vol 36 (2) ◽  
pp. 213-217
Author(s):  
Min Zhu

In this paper, a novel high-order method, Runge-Kutta Sinc (RK-Sinc), is proposed. The RK-Sinc scheme employs the strong stability preserving Runge-Kutta (SSP-RK) algorithm for the time derivative and the Sinc function for the spatial derivatives. The computational efficiency, numerical dispersion, and convergence of the RK-Sinc algorithm are analyzed. The proposed method exhibits better numerical dispersion and a faster convergence rate in both time and space. It is found that the computational memory of RK-Sinc is more than twice that of the FDTD method for the same stencil size. Compared with conventional FDTD, the new scheme provides higher accuracy and shows great potential for computational electromagnetics.
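The SSP-RK time integrator named above can be sketched in its common third-order Shu-Osher form. This is a generic scalar illustration under that assumption, not the RK-Sinc implementation; the coupling to Sinc spatial derivatives is omitted and `L` stands for whatever right-hand-side operator the scheme advances:

```python
import math

def ssp_rk3_step(u, dt, L):
    """One step of the third-order strong-stability-preserving Runge-Kutta
    scheme (Shu-Osher form): a convex combination of forward-Euler
    substeps, which is what preserves strong stability."""
    u1 = u + dt * L(u)
    u2 = 0.75 * u + 0.25 * (u1 + dt * L(u1))
    return u / 3.0 + (2.0 / 3.0) * (u2 + dt * L(u2))

# Scalar check on u' = -u: ten steps of dt = 0.1 should land close to
# exp(-1), with third-order accuracy in dt.
u, dt = 1.0, 0.1
for _ in range(10):
    u = ssp_rk3_step(u, dt, lambda x: -x)
```

In an RK-Sinc-style solver, `L(u)` would apply the Sinc-based spatial differentiation to the field unknowns at each stage, which is where the per-stage storage (and hence the roughly two-fold memory overhead versus FDTD noted above) comes from.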


2020 ◽  
Vol 28 (11) ◽  
pp. 2370-2382
Author(s):  
Khaled Alhaj Ali ◽  
Mostafa Rizk ◽  
Amer Baghdadi ◽  
Jean-Philippe Diguet ◽  
Jalal Jomaah ◽  
...  
Keyword(s):  

2020 ◽  
Author(s):  
Ada Aka ◽  
Sudeep Bhatia

Memory is a crucial component of everyday decision making, yet little is known about how memory and choice processes interact, and whether established memory regularities persist during memory-based decision making. In this paper, we introduce a novel experimental paradigm to study the differences between the memory processes at play in standard list recall versus preferential choice. Using computational memory models fit to data from two pre-registered experiments, we find that some established memory regularities (primacy, recency, semantic clustering) emerge in preferential choice, whereas others (temporal clustering) are significantly weakened relative to standard list recall. Notably, decision-relevant features, such as item desirability, play a stronger role in guiding retrieval during choice. Our results suggest that memory processes differ across preferential choice and standard memory tasks, and that choice modulates memory by differentially activating decision-relevant features such as what we like.


2020 ◽  
Vol 14 ◽  
Author(s):  
S. R. Nandakumar ◽  
Manuel Le Gallo ◽  
Christophe Piveteau ◽  
Vinay Joshi ◽  
Giovanni Mariani ◽  
...  
