A Study on Hardware Architecture to Improve Inference Accuracy of Convolutional Neural Networks

2021, Vol 19 (12), pp. 63-70
Author(s): Seong-Ho Jeon, Seung-Ho Ok
2021, Vol 17 (2), pp. 1-21
Author(s): Mohit Khatwani, Hasib-Al Rashid, Hirenkumar Paneliya, Mark Horton, Nicholas Waytowich, ...

This article presents an energy-efficient and flexible multichannel electroencephalogram (EEG) artifact identification network and its hardware implementation using depthwise and separable convolutional neural networks. EEG signals are recordings of brain activity; EEG recordings that do not originate from cerebral activity are termed artifacts. Our proposed model does not require expert knowledge for feature extraction or pre-processing of EEG data, and its highly efficient architecture can be implemented on mobile devices. The proposed network can be reconfigured for any number of EEG channels and artifact classes. Experiments were conducted with the goal of maximizing identification accuracy while minimizing the weight parameters and the required number of operations. Our proposed network achieves 93.14% classification accuracy on an EEG dataset collected with 64-channel BioSemi ActiveTwo headsets, averaged across 17 patients and 10 artifact classes. Our hardware architecture is fully parameterized in the number of input channels, filters, depth, and data bit-width. The number of processing engines (PEs) in the proposed hardware can vary from 1 to 16, yielding different latency, throughput, power, and energy-efficiency trade-offs. We implement our custom hardware architecture on a Xilinx Artix-7 FPGA, which on average consumes 1.4 to 4.7 mJ of dynamic energy depending on the PE configuration. Energy consumption is reduced by a further 16.7× by implementing the design as an application-specific integrated circuit (ASIC) at the post-layout level in 65-nm CMOS technology. Our FPGA implementation is 1.7× to 5.15× more energy efficient than some previous works, and our ASIC implementation is 8.47× to 25.79× more energy efficient than previous works. We also demonstrate that the proposed network can be reconfigured to detect artifacts in another EEG dataset, collected in our lab with a 14-channel Emotiv EPOC+ headset, achieving 93.5% accuracy for eye-blink artifact detection.
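
The depthwise and separable convolution idea behind this network can be sketched as follows. This is a minimal PyTorch illustration under stated assumptions: the names SeparableConv2d and EEGArtifactNet, the kernel sizes, and the layer counts are illustrative choices, not the authors' exact design; it only shows how a reconfigurable channel/class count and depthwise-plus-pointwise convolutions fit together.

import torch
import torch.nn as nn

class SeparableConv2d(nn.Module):
    # Depthwise step (one filter per input channel, groups=in_ch) followed
    # by a 1x1 pointwise convolution that mixes channels.
    def __init__(self, in_ch, out_ch, kernel_size):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size,
                                   padding="same", groups=in_ch)
        self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1)

    def forward(self, x):
        return self.pointwise(self.depthwise(x))

class EEGArtifactNet(nn.Module):
    # Expects input of shape (batch, 1, eeg_channels, samples).
    def __init__(self, eeg_channels=64, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            SeparableConv2d(1, 8, (1, 7)),               # temporal filtering per electrode
            nn.ReLU(),
            nn.AvgPool2d((1, 4)),                        # downsample along time
            SeparableConv2d(8, 16, (eeg_channels, 1)),   # spatial mixing across electrodes
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(16, num_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

# Reconfiguring for a 14-channel Emotiv EPOC+ setup only changes the arguments.
model = EEGArtifactNet(eeg_channels=14, num_classes=2)
logits = model(torch.randn(4, 1, 14, 256))

Because each depthwise filter touches only one channel and the pointwise step is a 1x1 convolution, this factorization needs far fewer weights and multiply-accumulate operations than a standard convolution, which is what makes the mobile and FPGA/ASIC mappings attractive.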


2020, Vol 2020 (10), pp. 28-1–28-7
Author(s): Kazuki Endo, Masayuki Tanaka, Masatoshi Okutomi

Classification of degraded images is important in practice because images are usually degraded by compression, noise, blurring, etc. Nevertheless, most image classification research focuses only on clean images without any degradation. Some papers have already proposed deep convolutional neural networks that combine an image restoration network with a classification network to classify degraded images. This paper proposes an alternative approach in which a degraded image and an additional degradation parameter are used for classification. The proposed classification network has two inputs: the degraded image and the degradation parameter. An estimation network for the degradation parameter is also incorporated when the degradation parameters of degraded images are unknown. The experimental results show that the proposed method outperforms a straightforward approach in which the classification network is trained with degraded images only.
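
The two-input arrangement can be sketched as follows. This is a minimal PyTorch illustration under stated assumptions: the name DegradationAwareClassifier, the backbone layers, and the parameter-embedding width are hypothetical, not the authors' exact network; it only shows one way to feed both a degraded image and a scalar degradation parameter (e.g. a noise level or compression quality) into a single classifier.

import torch
import torch.nn as nn

class DegradationAwareClassifier(nn.Module):
    def __init__(self, num_classes=10, param_dim=1, embed_dim=32):
        super().__init__()
        # CNN backbone that encodes the degraded image into a feature vector.
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Small MLP embedding the known (or estimated) degradation parameter.
        self.param_embed = nn.Sequential(
            nn.Linear(param_dim, embed_dim), nn.ReLU(),
        )
        self.classifier = nn.Linear(64 + embed_dim, num_classes)

    def forward(self, image, degradation_param):
        feat = self.backbone(image)
        p = self.param_embed(degradation_param)
        # Concatenate image features with the degradation embedding.
        return self.classifier(torch.cat([feat, p], dim=1))

# If the degradation level is unknown, a separate estimation network could
# regress it from the degraded image and supply it as the second input.
model = DegradationAwareClassifier()
logits = model(torch.randn(2, 3, 32, 32), torch.tensor([[0.1], [0.5]]))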

