Reconstruction of sparse connectivity in neural networks from spike train covariances

2013 ◽ Vol 2013 (03) ◽ pp. P03008
Author(s): Volker Pernice, Stefan Rotter

Author(s): Jörg Bornschein

An FPGA-based coprocessor has been implemented that simulates the dynamics of a large recurrent neural network composed of binary neurons. The design has been used for unsupervised learning of receptive fields. Since the number of neurons to be simulated (>10^4) exceeds the available FPGA logic capacity for a direct implementation, a set of streaming processors has been designed. Given the state and activity vectors of the neurons at time t and a sparse connectivity matrix, these streaming processors calculate the state and activity vectors for time t + 1. The operation implemented by the streaming processors can be understood as a generalized form of a sparse matrix-vector product (SpMxV). The largest dataset, the sparse connectivity matrix, is stored and processed in a compressed format to better utilize the available memory bandwidth.
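To make the update concrete, here is a minimal software sketch of the operation the streaming processors pipeline: one time step of a binary recurrent network computed as a sparse matrix-vector product over a compressed (CSR) connectivity matrix. The threshold nonlinearity, the 0.5 firing threshold, and all variable names are illustrative assumptions, not details taken from the paper.

```python
# Sketch of one network time step as a generalized SpMxV, assuming a
# simple threshold binary neuron model (an assumption, not the paper's
# exact neuron dynamics).
import numpy as np
import scipy.sparse as sp

rng = np.random.default_rng(0)
n = 10_000        # number of neurons (>10^4 in the paper)
density = 0.001   # sparse connectivity

# Connectivity kept in a compressed format (CSR), analogous to the
# compressed matrix layout the design uses to save memory bandwidth.
W = sp.random(n, n, density=density, format="csr", random_state=0)

activity = (rng.random(n) < 0.05).astype(np.float64)  # activity at time t
threshold = 0.5                                       # assumed firing threshold

def step(W, activity, threshold):
    """One step: state(t+1) = W @ activity(t);
    activity(t+1) = 1[state(t+1) > threshold]."""
    state = W @ activity                  # sparse matrix-vector product
    return state, (state > threshold).astype(np.float64)

state, activity = step(W, activity, threshold)
```

Storing W in CSR keeps only the nonzero weights plus index arrays, which mirrors the point the abstract makes: the connectivity matrix dominates the data volume, so compressing it is what makes the memory bandwidth go further.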


2018 ◽ Vol 9 (1)
Author(s): Decebal Constantin Mocanu, Elena Mocanu, Peter Stone, Phuong H. Nguyen, Madeleine Gibescu, ...

Author(s): Xiaopeng Li, Zhourong Chen, Nevin L. Zhang

Sparse connectivity is an important factor behind the success of convolutional and recurrent neural networks. In this paper, we consider the problem of learning sparse connectivity for feedforward neural networks (FNNs). The key idea is that a unit should be connected to a small number of strongly correlated units at the next level below. We use the Chow-Liu algorithm to learn a tree-structured probabilistic model for the units at the current level, use the tree to identify subsets of units that are strongly correlated, and introduce, for each such subset, a new unit whose receptive field covers it. The procedure is repeated on the new units to build multiple layers of hidden units. The resulting model is called a TRF-net. Empirical results show that, compared to dense FNNs, TRF-nets achieve better or comparable classification performance with far fewer parameters and sparser structures. They are also more interpretable.
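As a rough illustration of the structure-learning step, the sketch below fits a Chow-Liu tree over binary unit activations (a maximum spanning tree on pairwise mutual information) and reads off strongly correlated pairs as candidate receptive fields. The plug-in MI estimator, the 0.1 threshold, and the pairwise grouping are simplified assumptions; the TRF-net construction in the paper is more elaborate.

```python
# Chow-Liu tree over binary unit activations: maximum spanning tree on
# pairwise mutual information. Grouping heuristic is a simplification.
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree

def mutual_information(x, y):
    """Plug-in MI estimate for two binary vectors."""
    mi = 0.0
    for a in (0, 1):
        for b in (0, 1):
            pxy = np.mean((x == a) & (y == b))
            px, py = np.mean(x == a), np.mean(y == b)
            if pxy > 0:  # pxy > 0 implies px > 0 and py > 0
                mi += pxy * np.log(pxy / (px * py))
    return mi

def chow_liu_edges(data):
    """data: (n_samples, n_units) binary array -> tree edges (i, j, mi)."""
    n_units = data.shape[1]
    mi = np.zeros((n_units, n_units))
    for i in range(n_units):
        for j in range(i + 1, n_units):
            mi[i, j] = mutual_information(data[:, i], data[:, j])
    # Maximum spanning tree == minimum spanning tree on negated weights
    # (pairs with exactly zero MI drop out, which is harmless here).
    tree = minimum_spanning_tree(-mi).tocoo()
    return [(int(i), int(j), -w) for i, j, w in zip(tree.row, tree.col, tree.data)]

rng = np.random.default_rng(0)
data = (rng.random((500, 8)) < 0.5).astype(int)
data[:, 1] = data[:, 0]  # make units 0 and 1 strongly correlated
edges = chow_liu_edges(data)
strong = [(i, j) for i, j, w in edges if w > 0.1]  # candidate subsets
print(strong)
```

Repeating this procedure, treating each strongly correlated subset as the receptive field of a new hidden unit and recursing on the new layer, is how the multi-layer construction described in the abstract proceeds.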

