Spiking Neural Network for Augmenting Electroencephalographic Data for Brain Computer Interfaces

2021 ◽  
Vol 15 ◽  
Author(s):  
Sai Kalyan Ranga Singanamalla ◽  
Chin-Teng Lin

With the advent of advanced machine learning methods, the performance of brain–computer interfaces (BCIs) has improved unprecedentedly. However, electroencephalography (EEG), a commonly used brain imaging method for BCI, requires a tedious experimental setup, suffers frequent data loss due to artifacts, and makes bulk trial recording time consuming, which limits the ability to exploit deep learning classifiers. Some studies have tried to address this issue by generating artificial EEG signals. However, a few of these methods are limited in retaining the prominent features or biomarkers of the signal, while other deep learning-based generative methods require a huge number of samples for training, and most of these models can handle augmentation of only one category or class of data per training session. Therefore, there is a need for a generative model that can generate synthetic, multi-class EEG samples from as few available trials as possible while retaining the biomarkers of the signal. Since an EEG signal represents an accumulation of action potentials from neuronal populations beneath the scalp surface, and since a spiking neural network (SNN), a biologically closer artificial neural network, communicates via spiking behavior, we propose an SNN-based approach using surrogate-gradient descent learning to reconstruct and generate multi-class artificial EEG signals from just a few original samples. The network was employed to augment motor imagery (MI) and steady-state visually evoked potential (SSVEP) data. These artificial data were further validated through classification and correlation metrics to assess their resemblance to the original data, and in turn enhanced MI classification performance.
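
The surrogate-gradient mechanism at the core of this approach can be sketched briefly: the spike threshold is non-differentiable, so a smooth surrogate replaces its derivative during backpropagation. The PyTorch sketch below illustrates the idea only; the leaky integrate-and-fire dynamics, layer sizes, decay factor, and surrogate slope are illustrative assumptions, not the authors' network.

```python
import torch
import torch.nn as nn

class SurrogateSpike(torch.autograd.Function):
    """Heaviside spike in the forward pass, fast-sigmoid surrogate gradient in the backward pass."""
    @staticmethod
    def forward(ctx, v, slope=25.0):
        ctx.save_for_backward(v)
        ctx.slope = slope
        return (v > 0).float()

    @staticmethod
    def backward(ctx, grad_output):
        (v,) = ctx.saved_tensors
        # d(spike)/dv approximated by 1 / (slope*|v| + 1)^2
        surrogate = 1.0 / (ctx.slope * v.abs() + 1.0) ** 2
        return grad_output * surrogate, None

class LIFLayer(nn.Module):
    """Leaky integrate-and-fire layer unrolled over time; weights are trained with ordinary backprop."""
    def __init__(self, n_in, n_out, beta=0.9):
        super().__init__()
        self.fc = nn.Linear(n_in, n_out)
        self.beta = beta  # membrane decay (illustrative value)

    def forward(self, x):  # x: (time, batch, n_in)
        mem = torch.zeros(x.size(1), self.fc.out_features, device=x.device)
        spikes = []
        for t in range(x.size(0)):
            mem = self.beta * mem + self.fc(x[t])
            spk = SurrogateSpike.apply(mem - 1.0)  # fire when membrane exceeds threshold 1.0
            mem = mem - spk                        # soft reset after a spike
            spikes.append(spk)
        return torch.stack(spikes)                 # (time, batch, n_out)

# Example: 64-channel EEG encoded into 100 time steps, batch of 4 trials
x = torch.rand(100, 4, 64)
out = LIFLayer(64, 32)(x)
print(out.shape)  # torch.Size([100, 4, 32])
```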

2021 ◽  
Vol 15 ◽  
Author(s):  
Wonjun Ko ◽  
Eunjin Jeon ◽  
Seungwoo Jeong ◽  
Jaeun Phyo ◽  
Heung-Il Suk

Brain–computer interfaces (BCIs) utilizing machine learning techniques are an emerging technology that enables a communication pathway between a user and an external system, such as a computer. Owing to its practicality, electroencephalography (EEG) is one of the most widely used measurements for BCI. However, EEG has complex patterns, and EEG-based BCIs mostly involve a costly and time-consuming calibration phase; thus, acquiring sufficient EEG data is rarely possible. Recently, deep learning (DL) has had a theoretical and practical impact on BCI research because of its ability to learn representations of the complex patterns inherent in EEG. Moreover, algorithmic advances in DL facilitate short/zero-calibration BCI, thereby reducing the burden of the data acquisition phase. These advancements include data augmentation (DA), which increases the number of training samples without acquiring additional data, and transfer learning (TL), which takes advantage of representative knowledge obtained from one dataset to address the so-called data insufficiency problem in other datasets. In this study, we review DL-based short/zero-calibration methods for BCI. Further, we elaborate on methodological/algorithmic trends, highlight intriguing approaches in the literature, and discuss directions for further research. In particular, we survey generative model-based and geometric manipulation-based DA methods. Additionally, we categorize TL techniques in DL-based BCIs into explicit and implicit methods. Our systematization reveals advances in DA and TL methods. Among the studies reviewed herein, ~45% of DA studies used generative model-based techniques, whereas ~45% of TL studies used an explicit knowledge-transfer strategy. Moreover, based on our literature review, we recommend an appropriate DA strategy for DL-based BCIs and discuss trends in TL used in DL-based BCIs.
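
To make the two families reviewed here concrete, the sketch below shows a geometric-manipulation style augmentation for EEG epochs and an explicit transfer-learning step that freezes a pretrained feature extractor; the function names, noise level, and shift range are illustrative assumptions rather than methods from any specific reviewed study.

```python
import numpy as np
import torch.nn as nn

def augment_epoch(epoch, noise_std=0.05, max_shift=10, rng=np.random.default_rng()):
    """Geometric-manipulation style DA: additive Gaussian noise plus a random circular time shift.
    epoch: (channels, samples) EEG segment."""
    shifted = np.roll(epoch, rng.integers(-max_shift, max_shift + 1), axis=1)
    return shifted + rng.normal(0.0, noise_std, size=epoch.shape)

def transfer_classifier(pretrained: nn.Sequential, n_classes: int):
    """Explicit TL: freeze a pretrained feature extractor and train a new classification head."""
    for p in pretrained.parameters():
        p.requires_grad = False
    feat_dim = pretrained[-1].out_features  # assumes the extractor ends with an nn.Linear layer
    return nn.Sequential(pretrained, nn.ReLU(), nn.Linear(feat_dim, n_classes))
```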


IEEE Access ◽  
2020 ◽  
Vol 8 ◽  
pp. 162218-162229
Author(s):  
Fotis P. Kalaganis ◽  
Nikolaos A. Laskaris ◽  
Elisavet Chatzilari ◽  
Spiros Nikolopoulos ◽  
Ioannis Kompatsiaris

2020 ◽  
Vol 20 (1) ◽  
pp. 29
Author(s):  
R. Sandra Yuwana ◽  
Fani Fauziah ◽  
Ana Heryana ◽  
Dikdik Krisnandi ◽  
R. Budiarianto Suryo Kusumo ◽  
...  

Deep learning technology performs better when trained on an abundant amount of data. However, collecting such data is expensive and time consuming, and limited data are often an inevitable condition. To increase the amount of data, data augmentation is usually implemented: the original data are transformed, by rotating, shifting, or both, to generate new data artificially. In this paper, generative adversarial networks (GAN) and deep convolutional GAN (DCGAN) are used for data augmentation. Both approaches are applied to tea disease detection. The performance of tea disease detection on the augmented data is evaluated using various deep convolutional neural networks (DCNN), including AlexNet, DenseNet, ResNet, and Xception. The experimental results indicate that the highest accuracy with GAN-augmented data is obtained by the DenseNet architecture, at 88.84%, compared with a baseline accuracy of 86.30% on the same architecture. The DCGAN results on the same architecture show a similar trend, at 88.86%.
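
As a rough illustration of the DCGAN-style augmentation described here, the sketch below pairs a transposed-convolution generator with a convolutional discriminator in PyTorch; the 64x64 image size, latent dimension, and filter counts are assumptions for the sketch, not the settings reported in the paper.

```python
import torch
import torch.nn as nn

latent_dim = 100  # assumed latent size

# DCGAN-style generator: latent vector -> 64x64 RGB leaf image (sizes are illustrative)
generator = nn.Sequential(
    nn.ConvTranspose2d(latent_dim, 256, 4, 1, 0), nn.BatchNorm2d(256), nn.ReLU(),  # 4x4
    nn.ConvTranspose2d(256, 128, 4, 2, 1), nn.BatchNorm2d(128), nn.ReLU(),         # 8x8
    nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.BatchNorm2d(64), nn.ReLU(),           # 16x16
    nn.ConvTranspose2d(64, 32, 4, 2, 1), nn.BatchNorm2d(32), nn.ReLU(),            # 32x32
    nn.ConvTranspose2d(32, 3, 4, 2, 1), nn.Tanh(),                                 # 64x64
)

# Matching discriminator: real/fake probability for a 64x64 image
discriminator = nn.Sequential(
    nn.Conv2d(3, 64, 4, 2, 1), nn.LeakyReLU(0.2),                          # 32x32
    nn.Conv2d(64, 128, 4, 2, 1), nn.BatchNorm2d(128), nn.LeakyReLU(0.2),   # 16x16
    nn.Conv2d(128, 256, 4, 2, 1), nn.BatchNorm2d(256), nn.LeakyReLU(0.2),  # 8x8
    nn.Conv2d(256, 1, 8), nn.Sigmoid(),                                    # 1x1 score
)

# Once trained adversarially, the generator yields synthetic samples for augmentation:
fake_leaves = generator(torch.randn(16, latent_dim, 1, 1))  # 16 synthetic images in [-1, 1]
```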


2021 ◽  
Vol 11 (15) ◽  
pp. 7148
Author(s):  
Bedada Endale ◽  
Abera Tullu ◽  
Hayoung Shi ◽  
Beom-Soo Kang

Unmanned aerial vehicles (UAVs) are being widely utilized for various missions in both civilian and military sectors. Many of these missions require UAVs to perceive the environments they navigate in. This perception can be realized by training a computing machine to classify objects in the environment. One of the well-known machine training approaches is supervised deep learning, which enables a machine to classify objects. However, supervised deep learning comes at a high cost in time and computational resources: collecting large amounts of input data, pre-training processes such as labeling the training data, and the need for a high-performance computer for training are some of the challenges it poses. To address these setbacks, this study proposes mission-specific input data augmentation techniques and the design of a light-weight deep neural network architecture capable of real-time object classification. Semi-direct visual odometry (SVO) data of augmented images are used to train the network for object classification. Ten classes with 10,000 different images per class were used as input data, of which 80% were used for training the network and the remaining 20% for validation. For the optimization of the designed deep neural network, a sequential gradient descent algorithm was implemented. This algorithm has the advantage of handling redundancy in the data more efficiently than other algorithms.
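
As a sketch of the training recipe described above (a light-weight network trained with sequential, i.e. mini-batch, gradient descent), the PyTorch snippet below shows one possible layout; the layer widths, input size, and learning rate are assumptions, not the architecture designed in the paper.

```python
import torch
import torch.nn as nn

# Light-weight CNN for 10-class classification of small RGB frames (widths are illustrative)
net = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 16 * 16, 10),  # assumes 64x64 input images
)

optimizer = torch.optim.SGD(net.parameters(), lr=0.01)  # sequential (stochastic) gradient descent
loss_fn = nn.CrossEntropyLoss()

def train_step(images, labels):
    """One sequential update: gradients are computed and applied per mini-batch,
    which exploits redundancy in the data better than full-batch updates."""
    optimizer.zero_grad()
    loss = loss_fn(net(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```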


Sensors ◽  
2019 ◽  
Vol 19 (1) ◽  
pp. 210 ◽  
Author(s):  
Zied Tayeb ◽  
Juri Fedjaev ◽  
Nejla Ghaboosi ◽  
Christoph Richter ◽  
Lukas Everding ◽  
...  

Non-invasive, electroencephalography (EEG)-based brain-computer interfaces (BCIs) based on motor imagery translate the subject’s motor intention into control signals by classifying the EEG patterns caused by different imagination tasks, e.g., hand movements. This type of BCI has been widely studied and used as an alternative mode of communication and environmental control for disabled patients, such as those suffering from a brainstem stroke or a spinal cord injury (SCI). Notwithstanding the success of traditional machine learning methods in classifying EEG signals, these methods still rely on hand-crafted features. The extraction of such features is a difficult task due to the high non-stationarity of EEG signals, which is a major cause of the stagnating progress in classification performance. Remarkable advances in deep learning methods allow end-to-end learning without any feature engineering, which could benefit BCI motor imagery applications. We developed three deep learning models: (1) a long short-term memory network (LSTM); (2) a spectrogram-based convolutional neural network (CNN); and (3) a recurrent convolutional neural network (RCNN), for decoding motor imagery movements directly from raw EEG signals without any manual feature engineering. Results were evaluated on our own publicly available EEG dataset collected from 20 subjects and on the existing 2b EEG dataset from “BCI Competition IV”. Overall, better classification performance was achieved with the deep learning models compared to state-of-the-art machine learning techniques, which could chart a route ahead for developing new robust techniques for EEG signal decoding. We underpin this point by demonstrating the successful real-time control of a robotic arm using our CNN-based BCI.
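
Of the three models, the convolutional route over raw EEG is the easiest to sketch; the PyTorch snippet below shows a minimal 1D-CNN of that kind for two-class motor imagery, with the channel count, window length, and filter sizes chosen as illustrative assumptions rather than the authors' architectures.

```python
import torch
import torch.nn as nn

class RawEEGCNN(nn.Module):
    """Minimal 1D-CNN over raw EEG for two-class motor imagery (left vs. right hand)."""
    def __init__(self, n_channels=3, n_samples=1000, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(n_channels, 16, kernel_size=25, padding=12), nn.BatchNorm1d(16), nn.ELU(),
            nn.AvgPool1d(4),
            nn.Conv1d(16, 32, kernel_size=11, padding=5), nn.BatchNorm1d(32), nn.ELU(),
            nn.AvgPool1d(4),
        )
        self.classifier = nn.Linear(32 * (n_samples // 16), n_classes)

    def forward(self, x):                 # x: (batch, channels, samples), raw EEG
        return self.classifier(self.features(x).flatten(1))

model = RawEEGCNN()
logits = model(torch.randn(8, 3, 1000))  # 8 trials, 3 EEG channels, 1000 samples
print(logits.shape)                      # torch.Size([8, 2])
```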


2021 ◽  
Vol 4 (3) ◽  
pp. 23-29
Author(s):  
Areej H. Al-Anbary ◽  
Salih M. Al-Qaraawi

Recently, machine learning algorithms have been widely used in the field of electroencephalography (EEG)-based brain-computer interfaces (BCI). In this paper, a sign language software model based on the EEG brain signal was implemented to help speechless persons communicate their thoughts to others. The preprocessing stage for the EEG signals was performed by applying the Principal Component Analysis (PCA) algorithm to extract the important features and reduce data redundancy. A model for classifying ten classes of EEG signals, including facial expression (FE) and some motor execution (ME) processes, was designed. A deep learning classifier with three hidden layers was used in this work. Data sets from four different subjects were collected using a 14-channel Emotiv EPOC+ device. A classification accuracy of 95.75% was obtained for the collected samples. An optimization process was performed on the predicted class with the aid of the user, and the sign class is then mapped to the specified sentence via a predesigned lookup table.
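
The processing chain described above (PCA for redundancy reduction followed by a three-hidden-layer classifier) maps naturally onto a short scikit-learn pipeline; the number of retained components and the layer widths below are illustrative assumptions, not the values used in the paper.

```python
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline

# PCA for feature extraction / redundancy reduction, then a three-hidden-layer
# network for the ten FE/ME classes; component count and layer widths are illustrative.
pipeline = make_pipeline(
    PCA(n_components=30),
    MLPClassifier(hidden_layer_sizes=(128, 64, 32), max_iter=500),
)

# X: flattened 14-channel Emotiv EPOC+ epochs, y: integer labels for the ten classes
# pipeline.fit(X_train, y_train)
# accuracy = pipeline.score(X_test, y_test)
```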


2018 ◽  
Vol 8 (1) ◽  
Author(s):  
Zohreh Gholami Doborjeh ◽  
Nikola Kasabov ◽  
Maryam Gholami Doborjeh ◽  
Alexander Sumich
