EEG Classification by Minimalistic Convolutional Neural Network Utilizing Context Information

Author(s):  
Jakub Sebek ◽  
Hana Schaabova ◽  
Vladimir Krajca


Author(s):  
Qi Xin ◽  
Shaohao Hu ◽  
Shuaiqi Liu ◽  
Ling Zhao ◽  
Shuihua Wang

As one of the important tools for diagnosing epilepsy, the electroencephalogram (EEG) is noninvasive and causes no traumatic injury to patients, and it contains a wealth of easily obtainable physiological and pathological information. Automatic classification of epileptic EEG is important for diagnosing epilepsy and evaluating therapeutic efficacy. In this article, an explainable graph-feature convolutional neural network named WTRPNet is proposed for epileptic EEG classification. Because WTRPNet is constructed from a recurrence plot in the wavelet domain, it can fully capture the graph features of the EEG signal through an explainable graph-feature extraction layer called the WTRP block. The proposed method shows superior performance over state-of-the-art methods. Experimental results show that our algorithm achieves an accuracy of 99.67% in classifying focal and nonfocal epileptic EEG, demonstrating its effectiveness for the classification and detection of epileptic EEG.
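A minimal sketch of the idea the abstract describes: compute a recurrence plot from wavelet-domain coefficients of an EEG segment, then classify the resulting image with a small CNN. The wavelet family, threshold rule, and network layout below are illustrative assumptions; the paper's actual WTRP block and WTRPNet architecture are not specified in the abstract.

```python
# Sketch only: recurrence plot in the wavelet domain + tiny CNN classifier.
# Wavelet choice ('db4'), decomposition level, epsilon rule, and layer sizes
# are assumptions for illustration, not the published WTRPNet design.
import numpy as np
import pywt
import torch
import torch.nn as nn

def wavelet_recurrence_plot(signal, wavelet="db4", level=4, eps=None):
    """Binary recurrence plot of the approximation coefficients."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    x = coeffs[0]                               # approximation coefficients
    dist = np.abs(x[:, None] - x[None, :])      # pairwise distances
    if eps is None:
        eps = 0.1 * dist.max()                  # assumed threshold rule
    return (dist < eps).astype(np.float32)

class TinyRPClassifier(nn.Module):
    """Small CNN over the recurrence-plot image (focal vs. nonfocal)."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4),
        )
        self.head = nn.Linear(16 * 4 * 4, 2)

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

# Usage on a synthetic 512-sample EEG segment:
rp = wavelet_recurrence_plot(np.random.randn(512))
logits = TinyRPClassifier()(torch.from_numpy(rp)[None, None])
```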


2021 ◽  
pp. 1-13
Author(s):  
Hongshi Ou ◽  
Jifeng Sun

In deep learning-based video action recognition, the neural network must capture spatial information, motion information, and the association between the two over uneven time spans. This paper proposes a network that extracts video-sequence semantic information through deep integration of local spatial-temporal information. The network uses a 2D Convolutional Neural Network (2DCNN) and a Multi Spatial-Temporal scale 3D Convolutional Neural Network (MST_3DCNN) to extract spatial information and motion information, respectively. Spatial and motion information from the same time span are integrated by 3D convolution to generate the momentary spatial-temporal information for each instant. The spatial-temporal information of multiple single moments then enters a Temporal Pyramid Net (TPN) to generate local spatial-temporal information at multiple time scales. Finally, a bidirectional recurrent neural network acts on all of this spatial-temporal information to acquire context spanning the entire video, endowing the network with video-level context extraction capability. Experiments on three common video action recognition data sets, UCF101, UCF11, and UCFSports, show that the proposed spatial-temporal deep fusion network achieves a high recognition rate on the video action recognition task.
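A minimal PyTorch sketch of the fusion pattern described above: per-frame 2D features and 3D motion features are fused by a 3D convolution, pooled at several temporal scales in the spirit of the TPN, and summarized by a bidirectional GRU to capture video-level context. All layer sizes, the stand-in backbones, and the pyramid scales are illustrative assumptions, not the paper's exact design.

```python
# Sketch only: 2D spatial + 3D motion features -> 3D-conv fusion ->
# temporal pyramid pooling -> bidirectional GRU for video-level context.
import torch
import torch.nn as nn

class FusionSketch(nn.Module):
    def __init__(self, num_classes=101):
        super().__init__()
        self.spatial = nn.Conv2d(3, 32, 3, padding=1)   # stands in for the 2DCNN
        self.motion = nn.Conv3d(3, 32, 3, padding=1)    # stands in for MST_3DCNN
        self.fuse = nn.Conv3d(64, 64, (3, 1, 1), padding=(1, 0, 0))
        self.rnn = nn.GRU(64 * 3, 128, bidirectional=True, batch_first=True)
        self.head = nn.Linear(256, num_classes)

    def forward(self, clip):                            # clip: (B, 3, T, H, W)
        b, c, t, h, w = clip.shape
        frames = clip.transpose(1, 2).reshape(b * t, c, h, w)
        s = self.spatial(frames).reshape(b, t, 32, h, w).transpose(1, 2)
        m = self.motion(clip)
        f = self.fuse(torch.cat([s, m], dim=1))         # joint spatio-temporal cue
        f = f.mean(dim=(3, 4))                          # (B, 64, T) per-moment features
        # Temporal pyramid: pool the sequence at three scales, then concatenate.
        scales = [nn.functional.adaptive_avg_pool1d(f, k) for k in (t, t // 2, t // 4)]
        pyr = torch.cat([nn.functional.interpolate(p, size=t) for p in scales], dim=1)
        out, _ = self.rnn(pyr.transpose(1, 2))          # context over the full video
        return self.head(out.mean(dim=1))

# Usage: a batch of two 16-frame RGB clips at 32x32 resolution.
logits = FusionSketch()(torch.randn(2, 3, 16, 32, 32))
```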


IEEE Access ◽  
2020 ◽  
Vol 8 ◽  
pp. 66941-66950 ◽  
Author(s):  
Ji-Hoon Jeong ◽  
Byeong-Hoo Lee ◽  
Dae-Hyeok Lee ◽  
Yong-Deok Yun ◽  
Seong-Whan Lee

2018 ◽  
Vol 295 ◽  
pp. 46-58 ◽  
Author(s):  
Yanna Wang ◽  
Cunzhao Shi ◽  
Baihua Xiao ◽  
Chunheng Wang ◽  
Chengzuo Qi
