fNIRS Signal Classification Based on Deep Learning in Rock-Paper-Scissors Imagery Task

2021 ◽  
Vol 11 (11) ◽  
pp. 4922
Author(s):  
Tengfei Ma ◽  
Wentian Chen ◽  
Xin Li ◽  
Yuting Xia ◽  
Xinhua Zhu ◽  
...  

To explore whether the brain contains pattern differences in the rock–paper–scissors (RPS) imagery task, this paper attempts to classify this task using fNIRS and deep learning. In this study, we designed an RPS task with a total duration of 25 min and 40 s and recruited 22 volunteers for the experiment. We used an fNIRS acquisition device (FOIRE-3000) to record the cerebral neural activity of these participants during the RPS task. The time series classification (TSC) algorithm was introduced into time-domain fNIRS signal classification. Experiments show that CNN-based TSC methods can achieve 97% accuracy in RPS classification. The CNN-based TSC method is therefore well suited to classifying fNIRS signals in RPS motor imagery tasks and may open new application directions for the development of brain–computer interfaces (BCIs).
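As a minimal sketch of what a CNN-based TSC pipeline does with a single fNIRS time series, the following runs one toy trial through convolution, ReLU, global average pooling, and a softmax head over the three RPS classes. The kernel widths, channel counts, and weights are arbitrary assumptions for illustration, not the paper's architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

def conv1d(x, kernels):
    # Valid-mode 1-D convolution: one feature map per kernel.
    return np.stack([np.convolve(x, k, mode="valid") for k in kernels])

def tsc_forward(x, kernels, w, b):
    # Conv -> ReLU -> global average pooling -> linear softmax head,
    # the typical shape of a CNN-based TSC pipeline.
    feats = np.maximum(conv1d(x, kernels), 0.0)   # (n_kernels, T')
    pooled = feats.mean(axis=1)                   # global average pooling
    logits = w @ pooled + b                       # 3 classes: rock/paper/scissors
    e = np.exp(logits - logits.max())
    return e / e.sum()

# Toy fNIRS-like trial: 260 samples (e.g. 26 s at 10 Hz); not real data.
x = rng.standard_normal(260)
kernels = rng.standard_normal((8, 15)) * 0.1      # 8 kernels of width 15
w = rng.standard_normal((3, 8)) * 0.1
b = np.zeros(3)

probs = tsc_forward(x, kernels, w, b)             # class probabilities
```

A trained version would learn `kernels`, `w`, and `b` from labeled trials; here they are random placeholders, so only the shapes and the softmax normalization are meaningful.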

2005 ◽  
Vol 17 (10) ◽  
pp. 2139-2175 ◽  
Author(s):  
Naoki Masuda ◽  
Brent Doiron ◽  
André Longtin ◽  
Kazuyuki Aihara

Oscillatory and synchronized neural activities are commonly found in the brain, and evidence suggests that many of them are caused by global feedback. Their mechanisms and roles in information processing have been discussed often using purely feedforward networks or recurrent networks with constant inputs. On the other hand, real recurrent neural networks are abundant and continually receive information-rich inputs from the outside environment or other parts of the brain. We examine how feedforward networks of spiking neurons with delayed global feedback process information about temporally changing inputs. We show that the network behavior is more synchronous as well as more correlated with and phase-locked to the stimulus when the stimulus frequency is resonant with the inherent frequency of the neuron or that of the network oscillation generated by the feedback architecture. The two eigenmodes have distinct dynamical characteristics, which are supported by numerical simulations and by analytical arguments based on frequency response and bifurcation theory. This distinction is similar to the class I versus class II classification of single neurons according to the bifurcation from quiescence to periodic firing, and the two modes depend differently on system parameters. These two mechanisms may be associated with different types of information processing.
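The architecture described above, a feedforward population receiving a time-varying input plus delayed global feedback, can be illustrated with a toy simulation. This is not the authors' model; all parameter values (delay, gain, input frequency) are assumptions chosen only to make the structure concrete:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy sketch: N leaky integrate-and-fire neurons driven by a sinusoid,
# plus delayed global feedback from the population's own past activity.
N, T, dt = 50, 2000, 0.1          # neurons, time steps, step size (ms)
tau, v_th, v_reset = 10.0, 1.0, 0.0
delay_steps = 100                  # 10 ms feedback delay (assumed value)
g_fb = 0.5                         # feedback gain (assumed value)
f_in = 0.05                        # input frequency, cycles/ms

v = np.zeros(N)
rate_hist = np.zeros(T)            # population activity history (delay line)
spike_counts = np.zeros(T)

for t in range(T):
    drive = 1.2 + 0.5 * np.sin(2 * np.pi * f_in * t * dt)
    fb = g_fb * rate_hist[t - delay_steps] if t >= delay_steps else 0.0
    noise = 0.3 * rng.standard_normal(N)
    v += dt / tau * (-v + drive + fb + noise)   # leaky integration
    spiked = v >= v_th
    v[spiked] = v_reset                         # fire and reset
    spike_counts[t] = spiked.sum()
    rate_hist[t] = spike_counts[t] / N

mean_rate = spike_counts.mean() / N
```

Sweeping `f_in` and measuring the phase locking of `spike_counts` to the drive would reproduce, qualitatively, the resonance effect the abstract describes when the input frequency matches the feedback-induced oscillation.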


Author(s):  
Ahmad Danial Abdul Rahman ◽  
Hanim Hussin

Neurotechnology has led to the development of Brain-Computer Interfaces (BCIs), or Brain-Machine Interfaces (BMIs), which are devices that operate using signals transmitted by the brain. Electroencephalography (EEG) is one of the recent methods that can safely retrieve these signals from the scalp. This paper discusses the development of a neuroprosthetic limb that uses patients' attention and meditation levels to produce movement. The main objective of this project is to restore mobility to patients who suffer from motor disabilities. The project is carried out by interfacing the data acquisition device, a NeuroSky MindWave headset, with a microcontroller that moves the prosthetic arm as the output. An Arduino Nano microcontroller handles data processing and controls the arm. The prosthetic arm is designed in SOLIDWORKS software and fabricated by 3D printing. With this system, the user is able to control the prosthetic arm, from rotating the hand to bending the fingers to create grasp and release gestures.
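The control flow such a project implies, mapping headset attention and meditation levels to arm commands, can be sketched as below. The threshold values and command names are illustrative assumptions, not those of the actual firmware:

```python
# Minimal sketch of the described control logic: the headset reports
# attention and meditation levels (0-100), and the microcontroller maps
# them to grasp/release commands. Thresholds here are illustrative.

ATTENTION_THRESHOLD = 60
MEDITATION_THRESHOLD = 60

def arm_command(attention, meditation):
    """Map eSense-style attention/meditation levels to an arm command."""
    if attention >= ATTENTION_THRESHOLD:
        return "grasp"        # focused: close the fingers
    if meditation >= MEDITATION_THRESHOLD:
        return "release"      # relaxed: open the hand
    return "hold"             # otherwise keep the current posture

# Example readings: focused, relaxed, neither.
commands = [arm_command(a, m) for a, m in [(80, 10), (20, 75), (30, 30)]]
```

On the actual device this decision would run in the Arduino loop, with the chosen command driving the servo motors of the printed arm.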


2020 ◽  
Author(s):  
Nicos Maglaveras ◽  
Georgios Petmezas ◽  
Vassilis Kilintzis ◽  
Leandros Stefanopoulos ◽  
Andreas Tzavelis ◽  
...  

BACKGROUND Electrocardiogram (ECG) recording and interpretation is the most common method used for the diagnosis of cardiac arrhythmias; nonetheless, this process requires significant expertise and effort from doctors. Automated ECG signal classification could be a useful technique for the accurate detection and classification of several types of arrhythmias within a short timeframe. OBJECTIVE To review current approaches using state-of-the-art CNNs and deep learning methodologies for arrhythmia detection via ECG feature classification, and to propose an optimised architecture capable of diagnosing different types of arrhythmia using publicly available annotated arrhythmia databases from the MIT-BIH collection at PhysioNet (physionet.org). METHODS A hybrid CNN-LSTM deep learning model is proposed to classify beats derived from two large ECG databases. The approach is proposed after a systematic review of current AI/DL methods applied to different types of arrhythmia diagnosis using the same public MIT-BIH databases. In the proposed architecture, the CNN part carries out feature extraction and dimensionality reduction, and the LSTM part performs classification of the encoded ECG beat signals. RESULTS In experimental studies conducted with the MIT-BIH Arrhythmia and the MIT-BIH Atrial Fibrillation databases, average accuracies of 96.82% and 96.65%, respectively, were noted. CONCLUSIONS The proposed system can be used for arrhythmia diagnosis in clinical and mHealth applications, managing a number of prevalent arrhythmias such as VT, AFIB, and LBBB. The capability of CNNs to reduce the ECG beat signal's size and extract its main features can be effectively combined with the LSTM's capability to learn the temporal dynamics of the input data for the accurate and automatic recognition of several types of cardiac arrhythmias. CLINICALTRIAL Not applicable.
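The division of labour described in METHODS, with the CNN encoding and downsampling a beat and an LSTM classifying the encoded sequence, can be sketched dependency-free as follows. Layer sizes, weights, and the class count are random placeholders, not the trained model:

```python
import numpy as np

rng = np.random.default_rng(2)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def conv_encode(beat, kernels, stride=4):
    # "CNN part": feature extraction + dimensionality reduction
    # via strided valid-mode convolution and ReLU.
    maps = np.stack([np.convolve(beat, k, mode="valid")[::stride]
                     for k in kernels])
    return np.maximum(maps, 0.0).T            # (T', n_features)

def lstm_classify(seq, Wx, Wh, b, Wo):
    # "LSTM part": step through the encoded beat, classify from the
    # final hidden state with a softmax head.
    H = Wh.shape[1]
    h, c = np.zeros(H), np.zeros(H)
    for x in seq:
        z = Wx @ x + Wh @ h + b               # stacked gate pre-activations
        i, f, o, g = np.split(z, 4)
        i, f, o, g = sigmoid(i), sigmoid(f), sigmoid(o), np.tanh(g)
        c = f * c + i * g                     # cell-state update
        h = o * np.tanh(c)
    logits = Wo @ h
    e = np.exp(logits - logits.max())
    return e / e.sum()

beat = rng.standard_normal(280)               # one toy ECG beat
kernels = rng.standard_normal((4, 9)) * 0.2
H, n_classes = 8, 5                           # 5 illustrative beat classes
Wx = rng.standard_normal((4 * H, 4)) * 0.1
Wh = rng.standard_normal((4 * H, H)) * 0.1
b = np.zeros(4 * H)
Wo = rng.standard_normal((n_classes, H)) * 0.1

probs = lstm_classify(conv_encode(beat, kernels), Wx, Wh, b, Wo)
```

The stride in `conv_encode` stands in for the pooling layers that give the CNN its dimensionality-reduction role; the LSTM then sees a short encoded sequence rather than the raw beat.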


Author(s):  
A. Vasantharaj ◽  
Pacha Shoba Rani ◽  
Sirajul Huque ◽  
K. S. Raghuram ◽  
R. Ganeshkumar ◽  
...  

Early identification of brain tumors (BTs) is essential to increase the survival rate of patients. The most commonly used imaging technique for BT diagnosis is magnetic resonance imaging (MRI). An automated BT classification model is required to assist radiologists, saving time and enhancing efficiency. The classification of BTs is difficult owing to the non-uniform shapes of tumors and their locations in the brain. Deep learning (DL) models can therefore be employed for the effective identification, prediction, and diagnosis of disease. In this view, this paper presents automated BT diagnosis using rat swarm optimization (RSO) with a deep learning based capsule network (DLCN), named the RSO-DLCN model. The presented RSO-DLCN model involves bilateral filtering (BF) based preprocessing to enhance the quality of the MRI. Next, a non-iterative GrabCut based segmentation (NIGCS) technique is applied to detect the affected tumor regions. The DLCN model then performs feature extraction, with its parameters optimized by the RSO algorithm. Finally, an extreme learning machine with stacked autoencoder (ELM-SA) classifier is employed for the effective classification of BTs. To validate the BT diagnostic performance of the presented RSO-DLCN model, an extensive set of simulations was carried out and the results were inspected under diverse dimensions. The simulation outcomes demonstrated the promising performance of the RSO-DLCN model on BT diagnosis, with a sensitivity of 98.4%, specificity of 99%, and accuracy of 98.7%.
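The bilateral filtering (BF) preprocessing step is the most self-contained piece of this pipeline and can be sketched as below. The parameter values are illustrative, the "slice" is synthetic, and a real pipeline would use an optimized library implementation:

```python
import numpy as np

def bilateral_filter(img, radius=2, sigma_s=2.0, sigma_r=0.2):
    """Edge-preserving smoothing: weight neighbours by spatial AND
    intensity distance, as in the BF preprocessing step described above."""
    h, w = img.shape
    pad = np.pad(img, radius, mode="edge")
    out = np.empty_like(img)
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(ys**2 + xs**2) / (2 * sigma_s**2))  # fixed kernel
    for i in range(h):
        for j in range(w):
            patch = pad[i:i + 2*radius + 1, j:j + 2*radius + 1]
            # Range weights: pixels with similar intensity count more,
            # so the filter smooths noise without blurring edges.
            rng_w = np.exp(-((patch - img[i, j])**2) / (2 * sigma_r**2))
            wgt = spatial * rng_w
            out[i, j] = (wgt * patch).sum() / wgt.sum()
    return out

# Toy "MRI slice": a noisy step edge; the filter reduces the noise
# while leaving the edge contrast largely intact.
rng = np.random.default_rng(3)
img = (np.where(np.arange(16)[None, :] < 8, 0.2, 0.8)
       + 0.05 * rng.standard_normal((16, 16)))
smooth = bilateral_filter(img)
```

This edge preservation is why BF is a popular choice before segmentation: tumor boundaries survive the denoising step.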


Sensors ◽  
2020 ◽  
Vol 20 (9) ◽  
pp. 2443
Author(s):  
Jayro Martínez-Cerveró ◽  
Majid Khalili Ardali ◽  
Andres Jaramillo-Gonzalez ◽  
Shizhe Wu ◽  
Alessandro Tonin ◽  
...  

Electrooculography (EOG) signals have been widely used in Human-Computer Interfaces (HCIs). The HCI systems proposed in the literature make use of self-designed or closed environments, which restrict the number of potential users and applications. Here, we present a system for classifying four directions of eye movement from EOG signals. The system is based on open-source ecosystems: the Raspberry Pi single-board computer, the OpenBCI biosignal acquisition device, and an open-source Python library. The designed system is cheap, compact, and easy to carry, and can be replicated or modified. We used the Maximum, Minimum, and Median values of each trial as features to train a Support Vector Machine (SVM) classifier. A mean accuracy of 90% was obtained from 7 out of 10 subjects for online classification of Up, Down, Left, and Right movements. This classification system can be used as an input for an HCI, e.g., for assisted communication in paralyzed people.
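The feature extraction described above, taking the Maximum, Minimum, and Median of each trial, can be sketched as follows. The trials are synthetic, and since the paper's classifier is an SVM, the nearest-centroid classifier here is only a dependency-free stand-in:

```python
import numpy as np

rng = np.random.default_rng(4)

def eog_features(trial):
    # The three per-trial features used in the paper:
    # Maximum, Minimum, Median.
    return np.array([trial.max(), trial.min(), np.median(trial)])

# Toy EOG trials for Up/Down/Left/Right: deflections of
# different sign and size (illustrative, not real recordings).
labels = ["Up", "Down", "Left", "Right"]
offsets = {"Up": 3.0, "Down": -3.0, "Left": 1.0, "Right": -1.0}
X, y = [], []
for name in labels:
    for _ in range(20):
        trial = offsets[name] + 0.3 * rng.standard_normal(100)
        X.append(eog_features(trial))
        y.append(name)
X, y = np.array(X), np.array(y)

# The paper trains an SVM on these features; a nearest-centroid
# classifier stands in here to keep the sketch dependency-free.
centroids = {name: X[y == name].mean(axis=0) for name in labels}

def classify(trial):
    f = eog_features(trial)
    return min(centroids, key=lambda n: np.linalg.norm(f - centroids[n]))

pred = classify(offsets["Left"] + 0.3 * rng.standard_normal(100))
```

Swapping the centroid rule for `sklearn.svm.SVC` trained on `X, y` would recover the paper's setup while keeping the same three-number feature vector per trial.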


Author(s):  
Ashwini S. R. ◽  
H. C. Nagaraj

A brain-computer interface (BCI), also referred to as a mind-machine interface, provides a non-muscular communication channel between a computer and the human brain. To measure brain activity, electroencephalography (EEG) has been widely utilized in BCI applications that operate in real time. It has been observed that identification performed with other methodologies does not provide optimal classification accuracy. It is therefore necessary to focus on the feature extraction process to achieve maximum classification accuracy. In this paper, a novel data-driven spatial filtering process is proposed to improve the detection of steady-state visually evoked potentials (SSVEPs) in BCI. The proposed EACA can improve the reproducibility of SSVEPs across many trials, and can further be used to enhance the SSVEP in a noisy signal by suppressing background EEG activity. In the simulation process, an SSVEP dataset recorded from 11 subjects is considered. To validate the performance, a state-of-the-art method is compared with the proposed EACA-based approach.
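The abstract does not fully specify the EACA algorithm, so the sketch below instead illustrates the standard reference-signal approach to SSVEP detection that such methods build on: score how much trial variance is explained by sinusoids at each candidate stimulus frequency. All signal parameters are assumptions:

```python
import numpy as np

rng = np.random.default_rng(5)
fs, dur = 250.0, 2.0                     # sampling rate (Hz), trial length (s)
t = np.arange(int(fs * dur)) / fs
stim_freqs = [8.0, 10.0, 12.0]           # candidate SSVEP target frequencies

def reference(f):
    # Sine/cosine references at the stimulus frequency and its 2nd harmonic.
    return np.column_stack([np.sin(2*np.pi*k*f*t) for k in (1, 2)] +
                           [np.cos(2*np.pi*k*f*t) for k in (1, 2)])

def score(x, f):
    # Fraction of signal variance explained by a least-squares fit to the
    # references: a simple stand-in for canonical-correlation detection.
    R = reference(f)
    coef, *_ = np.linalg.lstsq(R, x, rcond=None)
    resid = x - R @ coef
    return 1.0 - resid.var() / x.var()

# Toy single-channel EEG: a 10 Hz SSVEP buried in noise.
eeg = np.sin(2*np.pi*10.0*t) + 2.0 * rng.standard_normal(t.size)
detected = max(stim_freqs, key=lambda f: score(eeg, f))
```

Methods like the one proposed here improve on this baseline by learning spatial filters from the data so that the averaged, filtered multichannel signal is more reproducible across trials before scoring.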


2021 ◽  
Vol 11 (10) ◽  
pp. 1365
Author(s):  
Denis Hepbasli ◽  
Sina Gredy ◽  
Melanie Ullrich ◽  
Amelie Reigl ◽  
Marco Abeßer ◽  
...  

Vocalization is an important part of social communication, not only for humans but also for mice. Here, we show in a mouse model that functional deficiency of Sprouty-related EVH1 domain-containing 2 (SPRED2), a protein ubiquitously expressed in the brain, causes differences in social ultrasonic vocalizations (USVs), using an uncomplicated and reliable experimental setting: a short meeting of two individuals. SPRED2 mutant mice show OCD-like behaviour, accompanied by an increased release of stress hormones from the hypothalamic–pituitary–adrenal axis, both factors probably influencing USV usage. To determine genotype-related differences in USV usage, we analyzed call rate, subtype profile, and acoustic parameters (i.e., duration, bandwidth, and mean peak frequency) in young and old SPRED2-KO mice. We recorded USVs of interacting male and female mice and analyzed the calls with the deep-learning DeepSqueak software, which was trained to recognize and categorize the emitted USVs. Our findings provide the first classification of SPRED2-KO vs. wild-type mouse USVs using neural networks and reveal significant differences in the mice's development and use of calls. Our results show, first, that simple experimental settings combined with deep learning can identify genotype-dependent USV usage and, second, that SPRED2 deficiency negatively affects the vocalization usage and social communication of mice.
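The three acoustic parameters analyzed above (duration, bandwidth, mean peak frequency) can be computed from a synthetic downward-sweeping call as follows. The sampling rate, sweep range, and frame size are illustrative assumptions, not settings taken from the study:

```python
import numpy as np

fs = 250_000                          # 250 kHz sampling, adequate for mouse USVs
call_dur = 0.05                       # 50 ms synthetic call
t = np.arange(int(fs * call_dur)) / fs

# Synthetic USV: a downward frequency sweep from 75 kHz to 60 kHz.
f_inst = 75_000 - 15_000 * t / call_dur
call = np.sin(2 * np.pi * np.cumsum(f_inst) / fs)

# Per-frame peak frequency from short-time spectra:
frame = 1024
peaks = []
for start in range(0, call.size - frame, frame):
    seg = call[start:start + frame] * np.hanning(frame)
    spec = np.abs(np.fft.rfft(seg))
    freqs = np.fft.rfftfreq(frame, d=1/fs)
    peaks.append(freqs[spec.argmax()])
peaks = np.array(peaks)

duration_ms = 1000 * call.size / fs            # call duration
bandwidth_khz = (peaks.max() - peaks.min()) / 1000   # peak-frequency span
mean_peak_khz = peaks.mean() / 1000            # mean peak frequency
```

Tools like DeepSqueak compute these same quantities per detected call from the recording's spectrogram, after a network has located and categorized the calls.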


2020 ◽  
Author(s):  
Marthe Tibo ◽  
Simon Geirnaert ◽  
Alexander Bertrand

When listening to music, the brain generates a neural response that follows the amplitude envelope of the musical sound. Previous studies have shown that it is possible to decode this envelope-following response from electroencephalography (EEG) data during music perception. However, successful decoding and recognition of imagined music, without the physical presentation of a music stimulus, has not been established to date. During music imagination, the human brain internally replays a musical sound, which naturally leads to the hypothesis that a similar envelope-following response might be generated. In this study, we demonstrate that this response is indeed present during music imagination and that it can be decoded from EEG data. Furthermore, we show that the decoded envelope allows for classification of imagined music in a song recognition task containing tracks with lyrics as well as purely instrumental tracks. A two-song classifier achieves a median accuracy of 95%, while a 12-song classifier achieves a median accuracy of 66.7%. The results of this study demonstrate the feasibility of decoding imagined music, thereby setting the stage for new neuroscientific experiments in this area as well as for new types of brain-computer interfaces based on music imagination.
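A toy version of the envelope-matching idea, correlating a noisy "decoded" envelope against each candidate song's envelope and picking the best match, might look like this. The stimuli and the decoder output are simulated stand-ins, not the study's EEG pipeline:

```python
import numpy as np

rng = np.random.default_rng(6)
fs = 64                                   # working sampling rate (Hz)

def envelope(audio, win=8):
    # Amplitude envelope: rectify, then smooth with a moving average.
    kernel = np.ones(win) / win
    return np.convolve(np.abs(audio), kernel, mode="same")

# Toy "songs": noise carriers with different slow amplitude modulations.
n = fs * 30                               # 30 s excerpts
songs = [np.sin(2*np.pi*f*np.arange(n)/fs) * rng.standard_normal(n)
         for f in (0.5, 0.9, 1.3)]
envs = [envelope(s) for s in songs]

# Simulated "decoded" envelope: song 1's envelope plus decoding noise,
# standing in for the envelope reconstructed from EEG.
decoded = envs[1] + 0.5 * rng.standard_normal(n)

def pearson(a, b):
    a, b = a - a.mean(), b - b.mean()
    return (a @ b) / np.sqrt((a @ a) * (b @ b))

# Song recognition: pick the candidate whose envelope correlates best.
recognized = int(np.argmax([pearson(decoded, e) for e in envs]))
```

The real task is harder (the envelope must first be decoded from multichannel EEG with a learned linear model), but the final recognition step is this kind of correlation-based match against the candidate set.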


Sensors ◽  
2020 ◽  
Vol 20 (11) ◽  
pp. 3243 ◽  
Author(s):  
Nagaraj Yamanakkanavar ◽  
Jae Young Choi ◽  
Bumshik Lee

Magnetic resonance imaging (MRI) has been used to analyze many neurological diseases, delineate pathological regions, and research the anatomical structure of the brain. It is important to identify patients with Alzheimer's disease (AD) early so that preventative measures can be taken. A detailed analysis of the tissue structures in segmented MRI leads to a more accurate classification of specific brain disorders. Several segmentation methods of varying complexity have been proposed to diagnose AD. Segmentation of brain structures and classification of AD using deep learning approaches have gained attention because they can provide effective results over large datasets. Hence, deep learning methods are now preferred over state-of-the-art machine learning methods. We aim to provide an outline of current deep learning-based segmentation approaches for the quantitative analysis of brain MRI in the diagnosis of AD. Here, we report how convolutional neural network architectures are used to analyze the anatomical brain structure and diagnose AD, discuss how brain MRI segmentation improves AD classification, describe the state-of-the-art approaches, and summarize their results using publicly available datasets. Finally, we provide insight into current issues and discuss possible future research directions for building a computer-aided diagnostic system for AD.
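Segmentation quality in this literature is most often summarized with the Dice similarity coefficient; a minimal worked example on toy masks (not real MRI) makes the metric concrete:

```python
import numpy as np

def dice(pred, truth):
    """Dice similarity coefficient: 2|A∩B| / (|A|+|B|), the standard
    overlap metric for comparing a predicted segmentation mask with
    the ground-truth mask."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    return 2.0 * inter / denom if denom else 1.0

# Toy example: ground truth covers 4 voxels, the prediction covers 6,
# and they overlap on 4, giving Dice = 2*4 / (6+4) = 0.8.
truth = np.zeros((4, 4), dtype=bool); truth[1:3, 1:3] = True
pred = np.zeros((4, 4), dtype=bool);  pred[1:3, 1:4] = True
score = dice(pred, truth)
```

Reported Dice scores in the surveyed papers are averages of exactly this quantity over test volumes, usually per tissue class (grey matter, white matter, CSF) or per structure.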


Digital image processing is a rising field for the investigation of complicated diseases such as brain tumors, breast cancer, kidney stones, lung cancer, ovarian cancer, and cervical cancer. Recognition of brain tumors is considered a particularly critical task. A number of approaches are used for scanning a particular body part, such as CT scans, X-rays, and Magnetic Resonance Imaging (MRI). The resulting images are then examined by surgeons to address the problem. The main objective of examining these (mainly MRI) images is to extract meaningful information with high accuracy. Machine learning and deep learning algorithms are mainly used for analysing medical images; they can identify, localize, and classify brain tumors into subcategories, according to which diagnosis is carried out by professionals. In this paper, we discuss the different techniques used for tumor preprocessing, segmentation, localization, feature extraction, and classification, and summarize more than 30 contributions to this field. We also discuss the existing state of the art, literature gaps, open challenges, and future scope in this area.
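As one concrete instance of the segmentation step such pipelines begin with, the sketch below applies Otsu's threshold, a classic technique named here for illustration rather than attributed to any specific surveyed paper, to a toy slice containing a bright patch:

```python
import numpy as np

def otsu_threshold(img, bins=256):
    """Otsu's method: choose the grey-level threshold that maximises
    between-class variance, a common first step when separating a
    bright region from darker background in a medical image."""
    hist, edges = np.histogram(img, bins=bins)
    p = hist / hist.sum()
    centers = (edges[:-1] + edges[1:]) / 2
    w0 = np.cumsum(p)                        # background class weight
    w1 = 1.0 - w0                            # foreground class weight
    csum = np.cumsum(p * centers)
    mu_total = (p * centers).sum()
    mu0 = csum / np.where(w0 == 0, 1, w0)    # background class mean
    mu1 = (mu_total - csum) / np.where(w1 == 0, 1, w1)  # foreground mean
    between = w0 * w1 * (mu0 - mu1) ** 2     # between-class variance
    return centers[int(np.argmax(between))]

# Toy slice: dark background (~0.2) with a bright "tumor" patch (~0.9).
rng = np.random.default_rng(7)
img = 0.2 + 0.05 * rng.standard_normal((32, 32))
img[10:18, 10:18] = 0.9 + 0.05 * rng.standard_normal((8, 8))
thr = otsu_threshold(img)
mask = img > thr                             # binary segmentation mask
```

Real tumor segmentation is far less clean than this, which is exactly why the surveyed work moves from global thresholds to learned, context-aware models; the sketch only fixes what "segmentation" outputs: a binary mask.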

