The First Testbed Demonstration of Cognitive End-to-End Optical Service Provisioning with Hierarchical Learning across Multiple Autonomous Systems

Author(s): Gengchen Liu ◽ Kaiqi Zhang ◽ Xiaoliang Chen ◽ Hongbo Lu ◽ Jiannan Guo ◽ ...
2019 ◽ Vol 37 (1) ◽ pp. 218-225

2021 ◽ Vol 15
Author(s): Gopalakrishnan Srinivasan ◽ Kaushik Roy

Spiking neural networks (SNNs), with their inherent capability to learn sparse spike-based input representations over time, offer a promising solution for enabling the next generation of intelligent autonomous systems. Nevertheless, end-to-end training of deep SNNs is both compute- and memory-intensive because of the need to backpropagate error gradients through time. We propose BlocTrain, a scalable, complexity-aware incremental algorithm for memory-efficient training of deep SNNs. We divide a deep SNN into blocks, where each block consists of a few convolutional layers followed by a classifier, and train the blocks sequentially using local errors from their classifiers. Once a given block is trained, our algorithm dynamically identifies easy versus hard classes from the class-wise accuracy and trains the next, deeper block only on the hard-class inputs. In addition, we incorporate a hard class detector (HCD) per block that is used during inference to exit early for easy-class inputs and to activate the deeper blocks only for hard-class inputs. Using BlocTrain, we trained a ResNet-9 SNN divided into three blocks on CIFAR-10 and obtained 86.4% accuracy, with up to 2.95× lower memory requirement during training and 1.89× higher compute efficiency per inference (due to the early-exit strategy), at the cost of 1.45× memory overhead (primarily from the classifier weights), compared to the end-to-end network. We also trained a ResNet-11, divided into four blocks, on CIFAR-100 and obtained 58.21% accuracy, one of the first reported accuracies for an SNN trained entirely with spike-based backpropagation on CIFAR-100.
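To make the control flow of the abstract concrete, here is a minimal sketch of block-wise training with local classifier losses, hard-class selection, and HCD-based early exit. Two loudly-stated assumptions: standard (non-spiking) PyTorch layers stand in for the paper's spiking neurons and spike-based backpropagation through time, which are abstracted away; and every name here (Block, train_block, find_hard_classes, train_loader, the accuracy threshold) is illustrative, not the authors' implementation.

```python
# Hypothetical sketch of BlocTrain-style training/inference; not the authors' code.
import torch
import torch.nn as nn
import torch.nn.functional as F

NUM_CLASSES = 10  # e.g., CIFAR-10


class Block(nn.Module):
    """A few conv layers, a local classifier, and a hard class detector (HCD)."""

    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(),
        )
        self.classifier = nn.Linear(out_ch, NUM_CLASSES)
        self.hcd = nn.Linear(out_ch, 2)  # predicts easy (0) vs. hard (1)

    def forward(self, x):
        h = self.features(x)
        p = F.adaptive_avg_pool2d(h, 1).flatten(1)
        return h, self.classifier(p), self.hcd(p)


@torch.no_grad()
def through_frozen(blocks, x):
    """Features from already-trained blocks; no gradients flow back into them."""
    for b in blocks:
        x, _, _ = b(x)
    return x


def train_block(block, frozen, loader, hard_classes, epochs=5, lr=1e-3):
    """Train one block with its local classifier loss only, and only on
    samples whose labels the previous block found hard."""
    opt = torch.optim.Adam(block.parameters(), lr=lr)
    block.train()
    for _ in range(epochs):
        for x, y in loader:
            keep = torch.isin(y, hard_classes)
            if not keep.any():
                continue
            x, y = x[keep], y[keep]
            _, logits, _ = block(through_frozen(frozen, x))
            loss = F.cross_entropy(logits, y)  # local error; no end-to-end BPTT
            opt.zero_grad()
            loss.backward()
            opt.step()


@torch.no_grad()
def find_hard_classes(block, frozen, loader, threshold=0.9):
    """Classes whose class-wise accuracy falls below a threshold stay 'hard'
    and are routed to the next, deeper block."""
    correct = torch.zeros(NUM_CLASSES)
    total = torch.zeros(NUM_CLASSES)
    block.eval()
    for x, y in loader:
        _, logits, _ = block(through_frozen(frozen, x))
        pred = logits.argmax(1)
        for c in range(NUM_CLASSES):
            m = y == c
            correct[c] += (pred[m] == c).sum()
            total[c] += m.sum()
    acc = correct / total.clamp(min=1)
    return torch.nonzero(acc < threshold).flatten()


# Incremental, block-by-block training (train_loader is assumed to exist).
# Training each block's HCD on binary easy/hard labels derived from
# hard_classes is omitted here for brevity.
blocks = [Block(3, 64), Block(64, 128), Block(128, 256)]
hard = torch.arange(NUM_CLASSES)  # the first block sees every class
for i, blk in enumerate(blocks):
    train_block(blk, blocks[:i], train_loader, hard)
    hard = find_hard_classes(blk, blocks[:i], train_loader)


@torch.no_grad()
def predict(blocks, x):
    """Early-exit inference for a single input (batch size 1): stop at the
    first block whose HCD judges the input easy."""
    for blk in blocks:
        x, logits, hcd = blk(x)
        if hcd.argmax(1).item() == 0:  # 0 = easy: exit here
            break
    return logits.argmax(1)
```

The property this sketch preserves is that gradients never cross block boundaries: each block is trained against its own local classifier loss, which is what yields the memory savings relative to end-to-end backpropagation, while the per-block HCD provides the early-exit compute savings at inference.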

