Stochastic Data-driven Hardware Resilience to Efficiently Train Inference Models for Stochastic Hardware Implementations

Author(s):  
Bonan Zhang ◽  
Lung-Yen Chen ◽  
Naveen Verma

Materials ◽ 
2021 ◽  
Vol 14 (11) ◽  
pp. 2875
Author(s):  
Xiaoxin Lu ◽  
Julien Yvonnet ◽  
Leonidas Papadopoulos ◽  
Ioannis Kalogeris ◽  
Vissarion Papadopoulos

A stochastic data-driven multilevel finite-element (FE2) method is introduced for random nonlinear multiscale calculations. A hybrid neural-network–interpolation (NN–I) scheme is proposed to construct a surrogate model of the macroscopic nonlinear constitutive law, using the results of representative-volume-element calculations as input data. An FE2 method is then developed in which the nonlinear multiscale calculations are replaced by the NN–I surrogate. The NN–I scheme improves the accuracy of the neural-network surrogate model when insufficient data are available. The resulting reduction in computational time, several orders of magnitude compared with direct FE2, makes such a machine-learning method practical for performing Monte Carlo simulations of nonlinear heterogeneous structures, propagating uncertainties in this context, and identifying probabilistic models of quantities of interest at the macroscale. Applications to nonlinear electric conduction in graphene–polymer composites are presented.
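For illustration, the following minimal Python sketch shows one plausible form of a hybrid NN–I surrogate for a macroscopic constitutive law: an inverse-distance interpolant is trusted near the available RVE samples, and a small neural network takes over away from the data. The synthetic data, the blending rule, and all parameter values are assumptions made for the sketch, not the paper's actual construction.

```python
# Minimal sketch of a hybrid neural-network / interpolation (NN-I) surrogate
# for a macroscopic constitutive law, trained on precomputed RVE samples.
# Synthetic data stand in for RVE results; the blending rule is illustrative.
import numpy as np
from scipy.spatial import cKDTree
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Synthetic "RVE database": macroscopic field gradient E -> macroscopic flux J
# (a made-up nonlinear conduction law standing in for actual RVE solves).
E_train = rng.uniform(-1.0, 1.0, size=(200, 2))
J_train = np.tanh(3.0 * E_train) + 0.1 * E_train**3

# Neural-network part of the surrogate.
nn = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=5000, random_state=0)
nn.fit(E_train, J_train)

# Interpolation part: inverse-distance weighting over the k nearest samples.
tree = cKDTree(E_train)

def idw_interpolate(E_query, k=4, eps=1e-9):
    d, idx = tree.query(E_query, k=k)
    w = 1.0 / (d + eps)
    return (w[:, :, None] * J_train[idx]).sum(axis=1) / w.sum(axis=1, keepdims=True)

def nn_i_surrogate(E_query, d_ref=0.2):
    """Blend interpolation (trusted near data) with the NN (used elsewhere)."""
    d_near, _ = tree.query(E_query, k=1)
    alpha = np.clip(d_near / d_ref, 0.0, 1.0)[:, None]  # 0 near data, 1 far away
    return (1.0 - alpha) * idw_interpolate(E_query) + alpha * nn.predict(E_query)

E_test = rng.uniform(-1.0, 1.0, size=(5, 2))
print(nn_i_surrogate(E_test))
```

In an FE2 setting, a call like `nn_i_surrogate` would stand in for the nested microscale solve at each macroscopic integration point, which is where the orders-of-magnitude speedup for Monte Carlo sampling comes from.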


2019 ◽  
Vol 9 (1) ◽  
Author(s):  
Spyros Stathopoulos ◽  
Loukas Michalas ◽  
Ali Khiat ◽  
Alexantrou Serb ◽  
Themis Prodromakis

The emergence of memristor technologies brings new prospects for modern electronics by enabling novel in-memory computing solutions and energy-efficient, scalable, reconfigurable hardware implementations. Several competing memristor technologies have been presented, each bearing distinct performance metrics across multi-bit memory capacity, low-power operation, endurance, retention and stability. Application needs, however, constantly drive the push towards higher performance, which necessitates a standard benchmarking procedure for fair evaluation across these distinct key metrics. Here we present an electrical characterisation methodology that amalgamates several testing protocols in an appropriate sequence, adapted to memristor benchmarking needs in a technology-agnostic manner. Our approach is designed to extract information on all aspects of device behaviour, ranging from deciphering underlying physical mechanisms to assessing different aspects of electrical performance and even generating data-driven, device-specific models. Importantly, it relies solely on standard electrical characterisation instrumentation that is accessible in most electronics laboratories and can thus serve as an independent tool for understanding and designing new memristive device technologies.
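As an illustration of the kind of protocol sequence such a methodology chains together, here is a hedged Python sketch of switching, endurance, and retention tests written against a hypothetical source-measure-unit driver (`Instrument`). The API, voltages, and timings are assumptions for the sketch; the paper's actual sequence is more elaborate and technology-dependent.

```python
# Illustrative memristor benchmarking sequence (switching, endurance,
# retention) against a hypothetical instrument interface; not the paper's
# actual protocol stack or parameter set.
import time

class Instrument:
    """Stand-in for a source-measure-unit driver (hypothetical API)."""
    def pulse(self, voltage_v, width_s): ...
    def read_resistance(self, read_v=0.2): ...

def switching_test(smu, v_set=1.5, v_reset=-1.5, width_s=1e-6):
    """One SET/RESET cycle; returns the two programmed resistance states."""
    smu.pulse(v_set, width_s)
    r_on = smu.read_resistance()
    smu.pulse(v_reset, width_s)
    r_off = smu.read_resistance()
    return r_on, r_off

def endurance_test(smu, cycles=1000, **kw):
    """Repeated cycling; the ON/OFF window trace reveals degradation."""
    return [switching_test(smu, **kw) for _ in range(cycles)]

def retention_test(smu, duration_s=3600, interval_s=60):
    """Periodic low-voltage reads to track state drift over time."""
    samples, t0 = [], time.time()
    while time.time() - t0 < duration_s:
        samples.append((time.time() - t0, smu.read_resistance()))
        time.sleep(interval_s)
    return samples
```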


2020 ◽  
Vol 139 ◽  
pp. 106844 ◽  
Author(s):  
Eric Bradford ◽  
Lars Imsland ◽  
Dongda Zhang ◽  
Ehecatl Antonio del Rio Chanona

2021 ◽  
Vol 14 (4) ◽  
pp. 1-23
Author(s):  
José Romero Hung ◽  
Chao Li ◽  
Pengyu Wang ◽  
Chuanming Shao ◽  
Jinyang Guo ◽  
...  

ACE-GCN is a fast and resource/energy-efficient FPGA accelerator for graph convolutional embedding under data-driven and in-place processing conditions. Our accelerator exploits the power-law degree distribution and high sparsity commonly exhibited by real-world graph datasets. Contrary to other hardware implementations of GCNs, which employ traditional optimization techniques to work around dataset sparsity, our architecture is designed to take advantage of this very situation. We propose and implement an innovative acceleration approach built on our “implicit-processing-by-association” concept, in conjunction with a dataset-customized convolutional operator. The computational relief, and the consequent acceleration, comes from replacing rather complex convolutional operations with a faster estimation of the embedding result. Based on a computationally inexpensive and highly expedited similarity calculation, the accelerator decides, per vertex, between automatic embedding estimation and unavoidable direct convolution. Evaluations demonstrate that our approach offers broad applicability and competitive acceleration. Depending on the dataset and the targeted efficiency level, we achieve speedups between 23× and 4,930× over the PyG baseline, reaching 46% to 81% of AWB-GCN's performance on smaller datasets and noticeably surpassing AWB-GCN on larger datasets, with controllable accuracy loss. We further demonstrate the unique hardware optimization characteristics of our approach and discuss its multi-processing potential.
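The following Python sketch illustrates the “implicit-processing-by-association” idea in software: a cheap neighbourhood-similarity test decides, per vertex, between reusing a structurally similar vertex's embedding and computing the full convolution. The Jaccard measure, the threshold, and the mean aggregation are illustrative assumptions, not the accelerator's dataset-customized operator.

```python
# Software sketch of "implicit-processing-by-association": a cheap similarity
# check lets structurally similar nodes reuse an already-computed embedding
# instead of paying for the full convolution. Illustrative choices throughout.
import numpy as np

def jaccard(a, b):
    a, b = set(a), set(b)
    return len(a & b) / max(len(a | b), 1)

def gcn_layer_with_association(adj, X, W, threshold=0.9):
    n = len(adj)                       # adj: list of neighbour lists
    H = np.zeros((n, W.shape[1]))
    done = []                          # nodes already embedded directly
    for v in range(n):
        # Cheap check: is some processed node's neighbourhood nearly identical?
        match = next((u for u in done if jaccard(adj[v], adj[u]) >= threshold), None)
        if match is not None:
            H[v] = H[match]            # estimate by association: reuse embedding
        else:
            agg = X[adj[v]].mean(axis=0) if adj[v] else X[v]  # mean aggregation
            H[v] = np.maximum(agg @ W, 0.0)                   # one ReLU(AXW) row
            done.append(v)
    return H

# Tiny example: nodes 0 and 2 share the same neighbourhood, so node 2's
# embedding is estimated from node 0's instead of being recomputed.
adj = [[1, 3], [0, 2, 3], [1, 3], [0, 1, 2]]
X = np.random.default_rng(1).normal(size=(4, 8))
W = np.random.default_rng(2).normal(size=(8, 4))
print(gcn_layer_with_association(adj, X, W))
```

The linear scan over processed nodes is only for readability; a hardware realization would presumably keep the similarity test constant-time, which is precisely where the power-law structure of real graphs pays off.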

