Discriminative and Robust Feature Learning for MIBCI-based Disability Rehabilitation

Author(s):  
Xiuyu Huang
Nan Zhou
Kup-Sze Choi

Abstract
Background: In the past few years, the motor imagery brain-computer interface (MIBCI) has become a valuable assistive technology for the disabled. However, effectively improving motor imagery (MI) classification performance by learning discriminative and robust features remains a challenging problem.
Methods: In this study, we propose a novel loss function, called correntropy-based center loss (CCL), as the supervision signal for training a convolutional neural network (CNN) model on the MI classification task. With joint supervision of the softmax loss and CCL, we can train a CNN model to acquire deep discriminative features with large inter-class dispersion and slight intra-class variation. Moreover, the CCL also effectively decreases the negative effect of noise during training, which is essential to accurate MI classification.
Results: We perform extensive experiments on two well-known public MI datasets, BCI competition IV-2a and IV-2b, to demonstrate the effectiveness of the proposed loss. The results show that our CNNs (with such joint supervision) achieve 78.65% and 86.10% accuracy on IV-2a and IV-2b, respectively, and outperform other baseline approaches.
Conclusion: The proposed CCL helps the CNN model obtain both discriminative and robust deeply learned features for the MI classification task in BCI rehabilitation applications.
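The abstract does not state the exact formulation of the correntropy-based center loss, but a plausible minimal sketch, assuming a Gaussian correntropy kernel applied to the distance between each deep feature and its class center, looks like the following (the function name, the σ parameter, and the toy data are illustrative, not taken from the paper):

```python
import numpy as np

def correntropy_center_loss(features, labels, centers, sigma=1.0):
    """Sketch of a correntropy-based center loss: penalize each deep
    feature's distance to its class center through a Gaussian kernel.
    The kernel saturates for large errors, which bounds the influence
    of noisy trials (the robustness property the abstract describes)."""
    diffs = features - centers[labels]        # (N, D) offsets from class centers
    sq_dist = np.sum(diffs ** 2, axis=1)      # squared Euclidean distances
    # Correntropy-induced metric: each term lies in [0, 1), unlike the
    # unbounded quadratic term of the classical center loss.
    return np.mean(1.0 - np.exp(-sq_dist / (2.0 * sigma ** 2)))

# Toy example: two classes in 2-D feature space
feats = np.array([[0.0, 0.0], [1.0, 1.0], [10.0, 10.0]])
labels = np.array([0, 0, 1])
centers = np.array([[0.5, 0.5], [10.0, 10.0]])
loss = correntropy_center_loss(feats, labels, centers, sigma=1.0)
```

In joint supervision, this term would be added to the softmax loss with a weighting coefficient, and the centers updated alongside the network parameters.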

Entropy
2019
Vol 21 (12)
pp. 1199
Author(s):
Hyeon Kyu Lee
Young-Seok Choi

The motor imagery-based brain-computer interface (BCI) using electroencephalography (EEG) has been receiving attention from neural engineering researchers and is being applied to various rehabilitation applications. However, the performance degradation caused by motor imagery EEG with a very low signal-to-noise ratio raises several issues for practical use of a BCI system. In this paper, we propose a novel motor imagery classification scheme based on the continuous wavelet transform and the convolutional neural network. The continuous wavelet transform with three mother wavelets is used to capture a highly informative EEG image by combining time-frequency information and electrode location. A convolutional neural network is then designed both to classify motor imagery tasks and to reduce computational complexity. The proposed method was validated on two public BCI datasets, BCI competition IV dataset 2b and BCI competition II dataset III, and was found to achieve improved classification performance compared with existing methods, showcasing the feasibility of motor imagery BCI.
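As a rough illustration of the time-frequency step, the sketch below computes a minimal Morlet-based continuous wavelet transform in NumPy; the magnitude maps can be stacked across electrodes into the kind of EEG "image" a CNN would consume. The Morlet kernel, scale values, and sampling rate are assumptions (the paper uses three mother wavelets, which it does not name here):

```python
import numpy as np

def morlet_cwt(signal, scales, fs, w0=6.0):
    """Minimal continuous wavelet transform with a Morlet kernel
    (assumed wavelet). Returns a |scales| x |time| magnitude map."""
    n = len(signal)
    t = (np.arange(n) - n // 2) / fs          # centered time axis in seconds
    tfr = np.empty((len(scales), n))
    for i, s in enumerate(scales):
        # Complex Morlet wavelet at scale s, L2-ish normalized
        wavelet = np.exp(1j * w0 * t / s) * np.exp(-((t / s) ** 2) / 2)
        wavelet /= np.sqrt(s)
        tfr[i] = np.abs(np.convolve(signal, wavelet, mode="same"))
    return tfr

fs = 250.0                                    # typical EEG sampling rate
t = np.arange(0, 1, 1 / fs)
x = np.sin(2 * np.pi * 10 * t)                # 10 Hz mu-band-like oscillation
scales = np.array([0.02, 0.05, 0.1])          # illustrative scales
image = morlet_cwt(x, scales, fs)             # shape: (3, 250)
```

Repeating this per channel and concatenating along a spatial axis yields a multi-channel time-frequency image combining spectral content and electrode location.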


2019
Vol 29 (01)
pp. 1850014
Author(s):
Marie-Constance Corsi
Mario Chavez
Denis Schwartz
Laurent Hugueville
Ankit N. Khambhati
...

We adopted a fusion approach that combines features from simultaneously recorded electroencephalogram (EEG) and magnetoencephalogram (MEG) signals to improve classification performance in motor imagery-based brain–computer interfaces (BCIs). We applied our approach to a group of 15 healthy subjects and found a significant classification performance enhancement compared with standard single-modality approaches in the alpha and beta bands. Taken together, our findings demonstrate the advantage of considering multimodal approaches as complementary tools for improving the impact of noninvasive BCIs.


2021
Vol 15
Author(s):
Sangin Park
Jihyeon Ha
Da-Hye Kim
Laehyun Kim

The motor imagery (MI)-based brain-computer interface (BCI) is an intuitive interface that provides control over computer applications directly from brain activity. However, it has shown poor performance compared with other BCI systems such as P300 and SSVEP BCIs. This study therefore aimed to improve MI-BCI performance by training participants in MI with the help of sensory input from tangible objects (i.e., hard and rough balls), with a focus on poorly performing users. The proposed method is a hybrid of training and imagery, combining motor execution and somatosensory sensation from a ball-type stimulus. Fourteen healthy participants took part in the somatosensory-motor imagery (SMI) experiments (within-subject design), involving EEG data classification with a three-class system (signaling with left hand, right hand, or right foot). In a scenario of controlling a remote robot and moving it to a target point, the participants performed MI when faced with a three-way intersection. The SMI condition yielded better classification performance than the MI condition, achieving 68.88% classification accuracy averaged over all participants, 6.59% higher than in the MI condition (p < 0.05). In poor performers, the classification performance in the SMI condition was 10.73% higher than in the MI condition (62.18% vs. 51.45%), whereas good performers showed a slight performance decrement (0.86%) in the SMI condition (80.93% vs. 81.79%). By combining brain signals from the motor and somatosensory cortices, the proposed hybrid MI-BCI system demonstrated improved classification performance; this improvement was predominant in poor performers (eight out of nine subjects). Hybrid MI-BCI systems may significantly contribute to reducing the proportion of BCI-inefficient users and closing the performance gap with other BCI systems.


Author(s):  
Sangram Redkar

In this paper, several techniques used to perform EEG signal pre-processing, feature extraction and signal classification are discussed, implemented, validated and verified, and efficient supervised and unsupervised machine learning models for EEG motor imagery classification are identified. Brain-computer interfaces are becoming the next-generation controllers not only in medical devices for disabled individuals but also in the gaming and entertainment industries. In order to build an effective brain-computer interface, it is important to have robust signal processing and machine learning modules that operate on the EEG signals and estimate the current thought or intent of the user. Motor imagery (imagined hand and leg movement) signals are acquired using the Emotiv EEG headset. The signals are extracted and supplied to the machine learning (ML) stage, wherein several ML techniques are applied and validated. The performances of various ML techniques are compared and some important observations are reported. Further, deep learning techniques such as autoencoding are used to perform unsupervised feature learning, and the reliability of the learned features is analyzed by performing classification with the ML techniques. It is shown that hand-engineered 'ad-hoc' feature extraction techniques are less reliable than automated ('deep learning') feature learning techniques. All the findings in this research can be used by the BCI research community for building motor imagery-based BCI applications such as gaming, robot control and autonomous vehicles.


2020
Vol 5 (2)
pp. 85-92
Author(s):
Adi Wijaya
Teguh Bharata Adji
Noor Akhmad Setiawan

Multi-class motor imagery based on electroencephalogram (EEG) signals in brain-computer interface (BCI) systems still faces challenges, such as inconsistent accuracy and low classification performance due to inter-subject dependency. Therefore, this study aims to improve multi-class EEG motor imagery classification using a two-stage detection and voting scheme in a one-versus-one approach. The EEG features used in this research were extracted through statistical measures over a narrow sliding window. Furthermore, inter- and cross-subject schemes were investigated on BCI competition IV dataset 2a to evaluate the effectiveness of the proposed method. The experimental results showed that the proposed method produced enhanced inter- and cross-subject kappa coefficient values of 0.78 and 0.68, respectively, with a low standard deviation of 0.1 for both schemes. These results further indicate that the proposed method is able to address inter-subject dependency for promising and reliable BCI systems.
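The voting stage of a one-versus-one multi-class scheme can be sketched as below; the two-stage detection step the abstract mentions is not shown, and the pairwise predictions are hypothetical:

```python
import numpy as np

def ovo_vote(pairwise_preds, n_classes):
    """One-versus-one voting: each binary classifier, trained on one
    pair of classes, votes for one of its two classes; the class with
    the most votes wins (ties broken toward the lowest class index)."""
    votes = np.zeros(n_classes, dtype=int)
    for (a, b), pred in pairwise_preds.items():
        votes[a if pred == 0 else b] += 1     # pred 0 -> class a, pred 1 -> class b
    return int(np.argmax(votes))

# 4 MI classes (e.g., left hand, right hand, feet, tongue in IV-2a)
# -> 6 pairwise classifiers; predictions here are made up for illustration
preds = {(0, 1): 0, (0, 2): 1, (0, 3): 0, (1, 2): 1, (1, 3): 0, (2, 3): 0}
winner = ovo_vote(preds, 4)                   # class 2 collects the most votes
```

A two-stage variant would typically run a coarser detector first (e.g., to reject unreliable windows) before the pairwise vote decides the class.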


2013
Vol 133 (3)
pp. 635-641
Author(s):
Genzo Naito
Lui Yoshida
Takashi Numata
Yutaro Ogawa
Kiyoshi Kotani
...

Author(s):
Inzamam Mashood Nasir
Muhammad Rashid
Jamal Hussain Shah
Muhammad Sharif
Muhammad Yahiya Haider Awan
...

Background: Breast cancer is considered the most perilous disease among females worldwide, and the number of new cases is increasing yearly. Many researchers have proposed efficient algorithms to diagnose breast cancer at early stages, which have increased efficiency and performance by utilizing features learned from gold-standard histopathological images. Objective: Most of these systems have used either traditional handcrafted features or deep features containing considerable noise and redundancy, which ultimately decreases the performance of the system. Methods: A hybrid approach is proposed that fuses and optimizes the properties of handcrafted and deep features to classify breast cancer images. HOG and LBP features are serially fused with features from the pretrained models VGG19 and InceptionV3. PCR and ICR are used to evaluate the classification performance of the proposed method. Results: The method concentrates on histopathological images to classify breast cancer. The performance is compared with state-of-the-art techniques, where an overall patient-level accuracy of 97.2% and image-level accuracy of 96.7% are recorded. Conclusion: The proposed hybrid method achieves the best performance compared with previous methods and can be used for intelligent healthcare systems and early breast cancer detection.
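Serial fusion of handcrafted and deep descriptors can be sketched as concatenation after per-block normalization, so that neither block dominates by scale. The z-score step, the dimensionalities, and the stand-in vectors below are assumptions; the paper may fuse raw feature vectors directly:

```python
import numpy as np

def serial_fuse(handcrafted, deep, eps=1e-8):
    """Serial feature fusion sketch: z-score each block, then
    concatenate (handcrafted ++ deep), as when joining HOG/LBP
    descriptors with CNN embeddings into one feature vector."""
    def zscore(v):
        return (v - v.mean()) / (v.std() + eps)
    return np.concatenate([zscore(handcrafted), zscore(deep)])

rng = np.random.default_rng(0)
hog_lbp = rng.random(64)       # stand-in handcrafted descriptor
cnn_feat = rng.random(128)     # stand-in deep embedding (e.g., from VGG19)
fused = serial_fuse(hog_lbp, cnn_feat)   # 64 + 128 = 192-dimensional vector
```

An optimization step (e.g., feature selection over the fused vector) would then prune the noisy and redundant dimensions the abstract refers to before classification.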

