Knowledge-driven feature component interpretable network for motor imagery classification
Abstract Objective. End-to-end convolutional neural networks (CNNs) have achieved great success in motor imagery classification without manual feature design. However, existing deep network solutions are purely data-driven and lack interpretability, which makes it impossible to discover insightful knowledge from the learnt features, let alone to design specific network structures. The heavy computational cost of CNNs also makes it challenging to achieve real-time operation together with high classification performance. Approach. To address these problems, a novel Knowledge-driven Feature Component Interpretable Network (KFCNet) is proposed, which combines spatial and temporal convolution in analogy to the ICA and power-spectrum pipeline. Prior frequency-band knowledge of sensorimotor rhythms (SMR) is formulated as band-pass linear-phase digital FIR filters that initialize the temporal convolution kernels, enabling the knowledge-driven mechanism. To avoid signal distortion and to achieve linear phase and unimodality of the filters, a symmetry loss is proposed, which is combined with the cross-entropy classification loss for training. Beyond this general prior knowledge, subject-specific time-frequency properties of event-related desynchronization and synchronization (ERD/ERS) are employed to construct and initialize the network with significantly fewer parameters. Main results. Comparison experiments on two public datasets were performed. Interpretable feature components can be observed in the trained model, and these physically meaningful observations efficiently assist the network structure design. Excellent classification performance on motor imagery was obtained. Significance. The performance of KFCNet is comparable to that of state-of-the-art methods but with far fewer parameters, making real-time application possible.
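The two ingredients named in the abstract — band-pass linear-phase FIR initialization of the temporal kernels and a symmetry loss — can be sketched as follows. This is a minimal illustration, not the authors' implementation: the windowed-sinc design, the mean-squared symmetry penalty, and the tap count, frequency bands, and sampling rate are all assumptions chosen for concreteness.

```python
import numpy as np

def bandpass_fir(num_taps, low_hz, high_hz, fs):
    """Windowed-sinc band-pass FIR filter; symmetric taps give linear phase."""
    n = np.arange(num_taps) - (num_taps - 1) / 2.0
    def lowpass(fc):
        # Ideal low-pass impulse response with cutoff fc (Hz)
        return 2.0 * fc / fs * np.sinc(2.0 * fc / fs * n)
    h = lowpass(high_hz) - lowpass(low_hz)   # band-pass = difference of low-passes
    h *= np.hamming(num_taps)                # window to reduce passband ripple
    return h

def symmetry_loss(kernels):
    """Penalize deviation from even symmetry (kernel == time-reversed kernel),
    which keeps the learnt filters close to linear phase during training."""
    return float(np.mean((kernels - kernels[:, ::-1]) ** 2))

# Initialize temporal kernels from SMR prior knowledge,
# e.g. mu (8-12 Hz) and beta (13-30 Hz) bands at an assumed 250 Hz sampling rate.
bands = [(8.0, 12.0), (13.0, 30.0)]
kernels = np.stack([bandpass_fir(65, lo, hi, fs=250.0) for lo, hi in bands])
```

At initialization the kernels are exactly symmetric, so the symmetry penalty starts at zero and only activates as training perturbs the taps away from linear phase; in the full model this term would be added to the cross-entropy loss with a weighting coefficient.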