Adversarial Training of Neural Encoding Models on Population Spike Trains

2019 ◽  
Author(s):  
Poornima Ramesh ◽  
Mohamad Atayi ◽  
Jakob H Macke
2021 ◽  
Vol 12 (1) ◽  
Author(s):  
Shiva Subbulakshmi Radhakrishnan ◽  
Amritanand Sebastian ◽  
Aaryan Oberoi ◽  
Sarbashis Das ◽  
Saptarshi Das

Abstract
Spiking neural networks (SNNs) promise to bridge the gap between artificial neural networks (ANNs) and biological neural networks (BNNs) by exploiting biologically plausible neurons that offer faster inference, lower energy expenditure, and event-driven information processing capabilities. However, implementation of SNNs in future neuromorphic hardware requires hardware encoders analogous to sensory neurons, which convert external/internal stimuli into spike trains based on specific neural algorithms along with inherent stochasticity. Unfortunately, conventional solid-state transducers are inadequate for this purpose, necessitating the development of neural encoders to serve the growing needs of neuromorphic computing. Here, we demonstrate a biomimetic device based on a dual-gated MoS2 field-effect transistor (FET) capable of encoding analog signals into stochastic spike trains following various neural encoding algorithms such as rate-based encoding, spike-timing-based encoding, and spike-count-based encoding. Two important aspects of neural encoding, namely, dynamic range and encoding precision, are also captured in our demonstration. Furthermore, the encoding energy was found to be as frugal as ≈1–5 pJ/spike. Finally, we show fast (≈200 timesteps) encoding of the MNIST data set using our biomimetic device, followed by more than 91% accurate inference using a trained SNN.
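The rate-based encoding scheme named in the abstract can be illustrated in software: an analog stimulus value sets the per-timestep spike probability, so a stronger stimulus produces a denser stochastic spike train. The sketch below is a minimal algorithmic illustration only (a Bernoulli/Poisson-style rate code), not a model of the MoS2 device physics; the function name and parameters are illustrative assumptions.

```python
import numpy as np

def rate_encode(values, n_timesteps=200, max_rate=1.0, rng=None):
    """Rate-based encoding sketch: each analog value in [0, 1] sets the
    per-timestep spike probability, yielding a stochastic binary spike
    train of length n_timesteps. Illustrative only, not the device model."""
    rng = np.random.default_rng(rng)
    values = np.clip(np.asarray(values, dtype=float), 0.0, 1.0)
    # Per-stimulus spike probability, broadcast across timesteps
    probs = values[..., None] * max_rate
    # Bernoulli draw at every timestep: spike with probability `probs`
    spikes = rng.random(probs.shape[:-1] + (n_timesteps,)) < probs
    return spikes.astype(np.uint8)

# A bright pixel (0.9) yields far more spikes than a dim one (0.1)
trains = rate_encode([0.1, 0.9], n_timesteps=1000, rng=0)
```

The stochasticity mirrors the "inherent stochasticity" the abstract attributes to sensory neurons: repeated encodings of the same stimulus give different spike trains with the same expected rate.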


2016 ◽  
Vol 12 (11) ◽  
pp. e1005189 ◽  
Author(s):  
Arno Onken ◽  
Jian K. Liu ◽  
P. P. Chamanthi R. Karunasekara ◽  
Ioannis Delis ◽  
Tim Gollisch ◽  
...  

2018 ◽  
Author(s):  
Lea Duncker ◽  
Maneesh Sahani

Abstract
We introduce a novel, scalable approach to identifying common latent structure in neural population spike trains, which allows for variability both in the trajectory and in the rate of progression of the underlying computation. Our approach is based on shared latent Gaussian processes (GPs), which are combined linearly, as in the Gaussian Process Factor Analysis (GPFA) algorithm. We extend GPFA to handle unbinned spike-train data by incorporating a continuous-time point-process likelihood model, achieving scalability with a sparse variational approximation. Shared variability is separated into terms that express condition dependence, as well as trial-to-trial variation in trajectories. Finally, we introduce a nested GP formulation to capture variability in the rate of evolution along the trajectory. We show that the new method learns to recover latent trajectories in synthetic data, and can accurately identify the trial-to-trial timing of movement-related parameters from motor cortical data without any supervision.
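The generative structure described above — shared latent GPs combined linearly, driving spiking observations — can be sketched as a forward simulation. This is a minimal sketch under simplifying assumptions: an RBF kernel for the latent GPs and a binned Poisson observation model stand in for the paper's continuous-time point-process likelihood, and all names and hyperparameters are illustrative.

```python
import numpy as np

def rbf_kernel(t, lengthscale=0.2):
    """Squared-exponential (RBF) covariance over time points t."""
    d = t[:, None] - t[None, :]
    return np.exp(-0.5 * (d / lengthscale) ** 2)

def sample_gpfa_spikes(n_neurons=20, n_latents=3, n_timesteps=100,
                       dt=0.01, rng=None):
    """Forward-sample a GPFA-style generative model: latent GP
    trajectories, a linear loading matrix, and per-bin Poisson spikes
    (a binned stand-in for the unbinned point-process likelihood)."""
    rng = np.random.default_rng(rng)
    t = np.linspace(0.0, 1.0, n_timesteps)
    K = rbf_kernel(t) + 1e-6 * np.eye(n_timesteps)  # jitter for stability
    # Each latent trajectory is an independent draw from the GP prior
    latents = rng.multivariate_normal(np.zeros(n_timesteps), K, size=n_latents)
    C = rng.normal(scale=0.5, size=(n_neurons, n_latents))  # loading matrix
    b = np.log(20.0)                                        # baseline ≈ 20 Hz
    # Linear combination of shared latents -> per-neuron firing rates
    rates = np.exp(C @ latents + b)          # shape (n_neurons, n_timesteps)
    spikes = rng.poisson(rates * dt)         # Poisson counts per time bin
    return latents, rates, spikes
```

Inference in the paper runs this model in reverse — recovering the latent trajectories from observed spikes via a sparse variational approximation — but the forward simulation above is what defines the shared-latent structure being fit.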

