Large Neural Network Simulations on Multiple Hardware Platforms

1997 ◽  
pp. 919-923 ◽  
Author(s):  
Per Hammarlund ◽  
Örjan Ekeberg ◽  
Tomas Wilhelmsson ◽  
Anders Lansner


2021 ◽
Vol 15 ◽  
Author(s):  
Wooseok Choi ◽  
Myonghoon Kwak ◽  
Seyoung Kim ◽  
Hyunsang Hwang

Hardware neural networks (HNNs) based on analog synapse arrays excel at accelerating parallel computation. Implementing an energy-efficient HNN with high accuracy requires high-precision synaptic devices and fully parallel array operations. However, existing resistive random-access memory (RRAM) devices can represent only a limited number of conductance states. Recent work has attempted to compensate for device nonidealities by using multiple devices per weight. While this helps, the existing parallel update scheme is difficult to apply to such multi-device synaptic units, which significantly increases the cost of the update process in terms of computation speed, energy, and complexity. Here, we propose an RRAM-based hybrid synaptic unit consisting of a “big” synapse and a “small” synapse, together with a matching training method. Unlike previous attempts, the proposed architecture supports array-wise, fully parallel learning with simple array-selection logic. To verify the hybrid synapse experimentally, we use Mo/TiOx RRAM, which shows promising synaptic properties and an areal dependency of conductance precision. By realizing the intrinsic gain through proportionally scaled device area, we show that the big and small synapses can be implemented at the device level without modifying the operational scheme. Neural network simulations confirm that the RRAM-based hybrid synapse with the proposed learning method reaches a maximum accuracy of 97%, comparable to a floating-point software implementation (97.92%), even with only 50 conductance states per device. These results show that efficient training and accurate inference are achievable with existing RRAM devices.
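To make the arithmetic of such a hybrid unit concrete, here is a minimal NumPy sketch of a big/small synapse pair with quantized conductance states. The gain factor, conductance range, quantization scheme, and transfer schedule are illustrative assumptions, not the parameters or update rule reported in the paper.

```python
import numpy as np

N_STATES = 50   # conductance states per device (as in the abstract)
GAIN = 50       # assumed intrinsic gain of the "big" synapse (illustrative)

def quantize(g, n_states=N_STATES, g_min=-1.0, g_max=1.0):
    """Clip a conductance value and snap it to one of n_states discrete levels."""
    step = (g_max - g_min) / (n_states - 1)
    g = np.clip(g, g_min, g_max)
    return np.round((g - g_min) / step) * step + g_min

class HybridSynapseArray:
    """Effective weight W = GAIN * G_big + G_small, both stored with limited precision."""

    def __init__(self, shape, rng=None):
        self.rng = rng or np.random.default_rng(0)
        self.g_big = quantize(0.1 * self.rng.standard_normal(shape))
        self.g_small = np.zeros(shape)

    @property
    def weight(self):
        return GAIN * self.g_big + self.g_small

    def update(self, grad, lr=0.1):
        # Frequent, fully parallel updates are applied to the low-significance
        # "small" array only.
        self.g_small = quantize(self.g_small - lr * grad)

    def transfer(self):
        # Occasionally fold the accumulated small-synapse value into the
        # high-significance "big" array (assumed transfer schedule).
        self.g_big = quantize(self.g_big + self.g_small / GAIN)
        self.g_small = np.zeros_like(self.g_small)
```

In a training loop, `update` would run every step and `transfer` only occasionally; the paper's actual array-selection logic and schedule are not reproduced here.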


Perception ◽  
1997 ◽  
Vol 26 (1_suppl) ◽  
pp. 33-33
Author(s):  
G M Wallis ◽  
H H Bülthoff

The view-based approach to object recognition supposes that objects are stored as a series of associated views. Although representing these views as combinations of 2-D features allows generalisation to similar views, it remains unclear how very different views might be associated together to allow recognition from any viewpoint. One cue present in the real world, other than spatial similarity, is that we usually experience objects in a temporally constrained, coherent order, not as randomly ordered snapshots. In a series of recent neural-network simulations, Wallis and Baddeley (1997, Neural Computation, 9, 883 – 894) describe how associating views on the basis of temporal as well as spatial correlations is both theoretically advantageous and biologically plausible. We describe an experiment aimed at testing their hypothesis in human object-recognition learning. We investigated recognition performance for faces previously presented in sequences. These sequences consisted of five views of five different people's faces, presented in an orderly progression from left to right profile in 45° steps. According to the temporal-association hypothesis, the visual system should associate these images together and represent them as different views of the same person's face, although in truth they are images of different people's faces. In a same/different task, subjects were asked to say whether two faces seen from different viewpoints belonged to the same person or not. In accordance with the theory, discrimination errors increased for faces previously seen in the same sequence compared with faces that were not (p < 0.05).
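The temporal-association mechanism simulated by Wallis and Baddeley is commonly formalised as a "trace" learning rule, in which the Hebbian update is driven by a temporally low-pass-filtered output rather than the instantaneous one. Below is a minimal sketch of such a rule; the trace decay, learning rate, and network size are chosen arbitrarily rather than taken from their simulations.

```python
import numpy as np

def train_trace_rule(views, n_out=10, eta=0.01, decay=0.8, rng=None):
    """Hebbian learning with a temporal trace: inputs presented consecutively
    in a sequence reinforce the same output units, binding different views.

    views: array of shape (n_steps, n_in), rows in temporal order.
    """
    rng = rng or np.random.default_rng(0)
    n_in = views.shape[1]
    W = rng.normal(scale=0.1, size=(n_out, n_in))
    trace = np.zeros(n_out)
    for x in views:
        y = W @ x                                  # linear output units
        trace = decay * trace + (1 - decay) * y    # low-pass-filtered activity
        W += eta * np.outer(trace, x)              # trace-modulated Hebbian update
        W /= np.linalg.norm(W, axis=1, keepdims=True)  # keep weights bounded
    return W
```

Presenting the five 45°-separated views of one experimental sequence as consecutive rows of `views` would, under this rule, drive the same output units and thereby associate the views, whether or not they depict the same person.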


2019 ◽  
Vol 5 (12) ◽  
pp. eaay6946 ◽  
Author(s):  
Tyler W. Hughes ◽  
Ian A. D. Williamson ◽  
Momchil Minkov ◽  
Shanhui Fan

Analog machine learning hardware platforms promise to be faster and more energy efficient than their digital counterparts. Wave physics, as found in acoustics and optics, is a natural candidate for building analog processors for time-varying signals. Here, we identify a mapping between the dynamics of wave physics and the computation in recurrent neural networks. This mapping indicates that physical wave systems can be trained to learn complex features in temporal data, using standard training techniques for neural networks. As a demonstration, we show that an inverse-designed inhomogeneous medium can perform vowel classification on raw audio signals as their waveforms scatter and propagate through it, achieving performance comparable to a standard digital implementation of a recurrent neural network. These findings pave the way for a new class of analog machine learning platforms, capable of fast and efficient processing of information in its native domain.
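The mapping can be made concrete by noting that a finite-difference update of the scalar wave equation already has the form of a recurrent cell: the field at the two previous time steps acts as the hidden state and the injected audio sample as the input. The 1-D sketch below is only illustrative; the paper uses a 2-D inhomogeneous medium whose wave-speed distribution is the trainable quantity, whereas here the profile is fixed and arbitrary (and periodic boundaries are assumed for brevity).

```python
import numpy as np

def wave_rnn_step(u_prev, u_curr, source, c, dt=1e-3, dx=1e-2):
    """One recurrent step: leapfrog update of the 1-D scalar wave equation.

    u_prev, u_curr: field at time steps t-1 and t (the 'hidden state').
    source: scalar input sample injected at the centre grid point.
    c: wave speed per grid point; in the paper this is the trained quantity.
    """
    lap = np.roll(u_curr, -1) - 2 * u_curr + np.roll(u_curr, 1)  # discrete Laplacian
    u_next = 2 * u_curr - u_prev + (c * dt / dx) ** 2 * lap
    u_next[len(u_next) // 2] += source                           # inject input signal
    return u_curr, u_next

# Toy usage: propagate a short random signal through a fixed inhomogeneous medium.
rng = np.random.default_rng(0)
n = 128
c = 1.0 + 0.2 * rng.random(n)            # untrained wave-speed profile (assumed)
u_prev, u_curr = np.zeros(n), np.zeros(n)
for s in rng.standard_normal(200):
    u_prev, u_curr = wave_rnn_step(u_prev, u_curr, s, c)
readout = np.sum(u_curr ** 2)            # e.g. integrated intensity at probe points
```

Training then amounts to adjusting `c` (the material distribution) by gradient descent through this recurrence, exactly as one would adjust the weight matrices of a conventional RNN.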


1994 ◽  
Vol 02 (03) ◽  
pp. 385-399 ◽  
Author(s):  
A. J. O’Toole ◽
T. H. T. Nguyen

The purpose of the present paper was to implement two computational models of structure-from-stereopsis in order to compare the computational utility of interval and distributed codings of image disparity. Interval codings represent disparity in discrete ranges, whereas distributed codings represent disparity using overlapping tuning curves similar to the response profiles of neurons found in the visual cortex of primates. We also relate this to recent neurophysiological work indicating that binocular neurons may be tuned to the relative phase of luminance input to the left and right eyes. A distributed coding of disparity has previously been shown to provide a good model for human stereoacuity. In this paper we show that it is also advantageous for solving the structure-from-stereopsis problem. Separate sets of neural network simulations were undertaken to learn a mapping function from (1) interval and (2) distributed disparity codings to a three-dimensional surface representation. Example pairs of disparity codings and surface representations were used to adjust the weights of a radial basis function network so that it learned the mapping function for the example surfaces. The generalizability of the learned mapping function was then tested with novel surfaces. While the networks performed equally well (perfectly) on the learned surfaces, the distributed disparity coding networks generalized better to novel surfaces than did the interval coding networks.
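The contrast between the two codings can be sketched as follows: an interval code assigns a disparity to a single discrete bin, whereas a distributed code represents it by the graded responses of units with overlapping Gaussian tuning curves, which then serve as input to a radial basis function network. All tuning widths, centres, and the toy target below are illustrative assumptions, not the paper's parameters.

```python
import numpy as np

def interval_code(disparity, edges):
    """One-hot code: index of the discrete disparity interval the value falls in."""
    code = np.zeros(len(edges) - 1)
    idx = np.clip(np.digitize(disparity, edges) - 1, 0, len(code) - 1)
    code[idx] = 1.0
    return code

def distributed_code(disparity, centres, sigma=0.5):
    """Graded responses of units with overlapping Gaussian tuning curves."""
    return np.exp(-((disparity - centres) ** 2) / (2 * sigma ** 2))

def rbf_features(X, rbf_centres, width=1.0):
    """Gaussian radial-basis hidden layer of the mapping network."""
    d2 = ((X[:, None, :] - rbf_centres[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-d2 / (2 * width ** 2))

# Toy usage: map a distributed disparity code to a depth value.
rng = np.random.default_rng(0)
centres = np.linspace(-2, 2, 8)               # tuning-curve centres (assumed)
disparities = rng.uniform(-2, 2, size=200)
X = np.stack([distributed_code(d, centres) for d in disparities])
y = np.sin(disparities)                       # stand-in for a surface depth map
rbf_centres = X[rng.choice(len(X), 20, replace=False)]
H = rbf_features(X, rbf_centres)
weights, *_ = np.linalg.lstsq(H, y, rcond=None)   # linear output layer
```

With the hidden layer fixed, fitting the output weights reduces to a linear least-squares problem, which is one reason radial basis function networks are a convenient choice for learning such mapping functions.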

