User State Classification Based on Functional Brain Connectivity Using a Convolutional Neural Network

Electronics ◽  
2021 ◽  
Vol 10 (10) ◽  
pp. 1158
Author(s):  
Seung-Min Park ◽  
Hong-Gi Yeom ◽  
Kwee-Bo Sim

The brain–computer interface (BCI) is a promising technology in which a user controls a robot or computer by thought alone, without movement. Several underlying principles can be used to implement a BCI, such as sensorimotor rhythms, P300, steady-state visually evoked potentials, and directional tuning. Different principles are generally applied depending on the application, because each BCI method has its own strengths and weaknesses. A BCI should therefore be able to predict the user's state so that suitable principles can be applied to the system. This study measured electroencephalography signals in four states (resting, speech imagery, leg-motor imagery, and hand-motor imagery) from 10 healthy subjects. Mutual information across 64 channels was calculated as brain connectivity. We used a convolutional neural network to predict the user's state, with brain connectivity as the network input. We applied five-fold cross-validation to evaluate the proposed method. Mean accuracy for user state classification was 88.25 ± 2.34%. This implies that the system can switch the BCI principle based on brain connectivity, so that a BCI user can control various applications according to their intentions.
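
The connectivity input described here lends itself to a compact sketch. The following is an assumed implementation, not the authors' code: pairwise mutual information between discretized EEG channels, estimated with scikit-learn's mutual_info_score and arranged as a 64 x 64 matrix that a CNN can take as an image-like input.

```python
import numpy as np
from sklearn.metrics import mutual_info_score

def connectivity_matrix(eeg, n_bins=16):
    """eeg: array of shape (n_channels, n_samples) for one trial."""
    # Discretize each channel into equal-width bins before estimating MI.
    binned = np.stack([
        np.digitize(ch, np.histogram_bin_edges(ch, bins=n_bins))
        for ch in eeg
    ])
    n_channels = eeg.shape[0]
    mi = np.zeros((n_channels, n_channels))
    for i in range(n_channels):
        for j in range(i, n_channels):
            mi[i, j] = mi[j, i] = mutual_info_score(binned[i], binned[j])
    return mi  # a 64 x 64 image-like input for the CNN

trial = np.random.randn(64, 1000)        # simulated 64-channel EEG segment
print(connectivity_matrix(trial).shape)  # (64, 64)
```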

2020 ◽  
Vol 10 (6) ◽  
pp. 1999 ◽  
Author(s):  
Milica M. Badža ◽  
Marko Č. Barjaktarović

The classification of brain tumors is performed by biopsy, which is not usually conducted before definitive brain surgery. Improvements in technology and machine learning can help radiologists diagnose tumors without invasive measures. A machine-learning algorithm that has achieved substantial results in image segmentation and classification is the convolutional neural network (CNN). We present a new CNN architecture for the classification of three brain tumor types. The developed network is simpler than existing pre-trained networks and was tested on T1-weighted contrast-enhanced magnetic resonance images. Its performance was evaluated using four approaches: combinations of two 10-fold cross-validation methods and two databases. The generalization capability of the network was tested with one of the 10-fold methods, subject-wise cross-validation, and the improvement was tested using an augmented image database. The best 10-fold result was obtained with record-wise cross-validation on the augmented data set, with an accuracy of 96.56%. With good generalization capability and good execution speed, the newly developed CNN architecture could be used as an effective decision-support tool for radiologists in medical diagnostics.
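
The distinction between record-wise and subject-wise 10-fold cross-validation can be illustrated with scikit-learn's splitters; the data shapes and patient counts below are placeholders, not the paper's database.

```python
import numpy as np
from sklearn.model_selection import KFold, GroupKFold

X = np.random.randn(300, 4096)            # e.g. flattened MRI slices (placeholder)
y = np.random.randint(0, 3, 300)          # three tumor classes
subjects = np.repeat(np.arange(30), 10)   # 30 patients, 10 slices each (assumed)

# Record-wise CV: slices from the same patient can land in both train and test.
record_wise = KFold(n_splits=10, shuffle=True, random_state=0).split(X)

# Subject-wise CV: every slice of a patient stays in one fold, which is the
# stricter test of generalization described above.
subject_wise = GroupKFold(n_splits=10).split(X, y, groups=subjects)

for (tr, te), (gtr, gte) in zip(record_wise, subject_wise):
    assert set(subjects[gtr]).isdisjoint(subjects[gte])   # no patient overlap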


2020 ◽  
Vol 21 (16) ◽  
pp. 5710
Author(s):  
Xiao Wang ◽  
Yinping Jin ◽  
Qiuwen Zhang

Mitochondrial proteins are physiologically active in different compartments, and their abnormal localization triggers the pathogenesis of human mitochondrial pathologies. Correctly identifying submitochondrial locations can provide information for disease pathogenesis and drug design. A mitochondrion has four submitochondrial compartments, the matrix, the outer membrane, the inner membrane, and the intermembrane space, but various existing studies ignored the intermembrane space. Most researchers used traditional machine learning methods to predict mitochondrial protein localization. Those predictors required expert-level biological knowledge to be encoded as features rather than allowing the underlying predictor to extract features through a data-driven procedure. Moreover, few researchers have considered dataset imbalance. In this paper, we propose a novel end-to-end predictor employing deep neural networks, DeepPred-SubMito, for protein submitochondrial location prediction. First, we use random over-sampling to reduce the influence of the imbalanced datasets. Next, we train a multi-channel bilayer convolutional neural network on multiple subsequences to learn high-level features. Third, the prediction result is output through the fully connected layer. The performance of the predictor is measured by 10-fold cross-validation and 5-fold cross-validation on the SM424-18 dataset and the SubMitoPred dataset, respectively. Experimental results show that the predictor outperforms state-of-the-art predictors. In addition, predictions on the M983 dataset also confirmed its effectiveness in predicting submitochondrial locations.
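
Random over-sampling of the minority compartments can be sketched with the imbalanced-learn package; this is an assumption for illustration, as the paper may implement the resampling step directly.

```python
import numpy as np
from imblearn.over_sampling import RandomOverSampler

X = np.random.randn(500, 400)                              # encoded protein subsequences (placeholder shape)
y = np.random.choice(4, 500, p=[0.55, 0.25, 0.15, 0.05])   # four submitochondrial classes, imbalanced

ros = RandomOverSampler(random_state=0)
X_res, y_res = ros.fit_resample(X, y)        # duplicates minority-class samples
print(np.bincount(y), np.bincount(y_res))    # class counts equalized after resampling
```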


2020 ◽  
pp. 1-14
Author(s):  
Xiangmin Lun ◽  
Zhenglin Yu ◽  
Fang Wang ◽  
Tao Chen ◽  
Yimin Hou

In order to develop an efficient brain-computer interface system, the brain activity measured by electroencephalography needs to be accurately decoded. In this paper, a motor imagery classification approach is proposed that combines virtual electrodes on the cortex layer with a convolutional neural network; this can effectively improve the decoding performance of the brain-computer interface system. A three-layer (cortex, skull, and scalp) head volume conduction model was established using the symmetric boundary element method to map the scalp signal to the cortex area. Nine pairs of virtual electrodes were created on the cortex layer, and time- and frequency-domain features from the virtual electrodes were extracted by time-frequency analysis. Finally, the convolutional neural network was used to classify motor imagery tasks. The results show that the proposed approach converges for both the training and the test models. On the Physionet motor imagery database, the average accuracy reaches 98.32% for a single subject, while the group-level average values of accuracy, Kappa, precision, recall, and F1-score are 96.23%, 94.83%, 96.21%, 96.13%, and 96.14%, respectively. On the High Gamma database, the average accuracy reaches 96.37% and 91.21% at the subject and group levels, respectively. Moreover, this approach outperforms other studies on the same databases, which suggests robustness and adaptability to individual variability.
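
The time-frequency features extracted from each virtual electrode could look like the following sketch, which uses a short-time Fourier transform from SciPy; the exact transform and window settings in the paper may differ.

```python
import numpy as np
from scipy.signal import stft

fs = 160                               # PhysioNet motor imagery sampling rate (Hz)
signal = np.random.randn(fs * 4)       # one simulated 4 s virtual-electrode trace

f, t, Z = stft(signal, fs=fs, nperseg=64, noverlap=48)
tf_map = np.abs(Z)                     # |STFT| magnitude as a 2D time-frequency map
print(tf_map.shape)                    # (freq bins, time frames) -> one CNN input channel
```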


Author(s):  
Abdul Kholik ◽  
Agus Harjoko ◽  
Wahyono Wahyono

Vehicle density is a problem that occurs in almost every city, and its main impact is congestion. Classifying vehicle density levels on particular roads is required because there are at least seven density-level conditions. Monitoring by the police, the Department of Transportation, and road operators currently relies on video-based surveillance such as CCTV, which is still observed manually. Deep learning is an artificial neural network-based machine learning approach that has been actively developed and researched recently because it has delivered good results on various soft-computing problems. This research uses a convolutional neural network architecture and varies its supporting parameters to calibrate for maximum accuracy. After the parameters were tuned, the classification model was evaluated using K-fold cross-validation, a confusion matrix, and testing on held-out data. K-fold cross-validation with K (fold) = 5 yielded an average accuracy of 92.83%; when tested on 100 held-out samples, the model classified 81 of them correctly.
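
The evaluation protocol described above (K-fold cross-validation plus a confusion matrix) can be sketched as below; model_fn stands for any classifier exposing fit/predict, such as a wrapped CNN, and is not the authors' code.

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold
from sklearn.metrics import accuracy_score, confusion_matrix

def evaluate(model_fn, X, y, n_splits=5):
    """Return mean fold accuracy and an aggregate confusion matrix."""
    accs, y_true, y_pred = [], [], []
    for tr, te in StratifiedKFold(n_splits=n_splits, shuffle=True).split(X, y):
        model = model_fn()                 # fresh model for every fold
        model.fit(X[tr], y[tr])
        pred = model.predict(X[te])
        accs.append(accuracy_score(y[te], pred))
        y_true.extend(y[te])
        y_pred.extend(pred)
    return np.mean(accs), confusion_matrix(y_true, y_pred)
```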


This study aims to find the optimal learning-algorithm parameters, model and connection, weight initialization, and normalization method for a fused Convolutional Neural Network (CNN) for facial expression recognition. The best model and parameters are identified using a ten-fold cross-validation method. By determining these ideal elements, a superior accuracy can potentially be achieved. The CNN was applied to a group of seven emotions expressed in facial images, namely happy, sad, angry, surprise, disgust, fear, and neutral. The four-layer CNN configuration was trained on the JAFFE dataset and yielded an overall accuracy of 83.72%. The results demonstrate that the fused CNN with the stated aims can achieve higher accuracy with a smaller network than related models.
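
A hedged sketch of how such a ten-fold search might be arranged in Keras, comparing weight-initialization schemes by mean validation accuracy; the image size, layer widths, and candidate list are assumptions, not the study's configuration.

```python
import numpy as np
from sklearn.model_selection import KFold
from tensorflow.keras import layers, models

def build_cnn(init):
    m = models.Sequential([
        layers.Conv2D(32, 3, activation='relu', kernel_initializer=init,
                      input_shape=(48, 48, 1)),       # assumed grayscale input size
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation='relu', kernel_initializer=init),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(128, activation='relu', kernel_initializer=init),
        layers.Dense(7, activation='softmax'),         # seven expression classes
    ])
    m.compile('adam', 'sparse_categorical_crossentropy', metrics=['accuracy'])
    return m

def cv_accuracy(init, X, y, folds=10):
    scores = []
    for tr, te in KFold(n_splits=folds, shuffle=True).split(X):
        model = build_cnn(init)
        model.fit(X[tr], y[tr], epochs=5, verbose=0)
        scores.append(model.evaluate(X[te], y[te], verbose=0)[1])
    return np.mean(scores)

# best_init = max(['glorot_uniform', 'he_normal'],
#                 key=lambda i: cv_accuracy(i, X_train, y_train))
```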


Techno Com ◽  
2021 ◽  
Vol 20 (1) ◽  
pp. 166-174
Author(s):  
Mohammad Farid Naufal ◽  
Solichul Huda ◽  
Aryo Budilaksono ◽  
Wisnu Aria Yustisia ◽  
Astri Agustina Arius ◽  
...  

The rock-paper-scissors game is popular all over the world. It is usually played when people gather, either to draw lots or simply to decide who wins and who loses. However, technological progress now allows people to gather virtually. To enable the game to be played virtually, this study builds an image classification model that distinguishes hand gestures representing rock, paper, and scissors. The performance of the classification method is the key concern in this case. One popular image classification method is the Convolutional Neural Network (CNN), a type of neural network commonly used for image classification and inspired by the human neural system. The algorithm uses three stages: a convolutional layer, a pooling layer, and a fully connected layer. In this study, a 5-fold cross-validation test of the CNN classification of hand images representing rock, paper, and scissors achieved an average accuracy of 97.66%.
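
The three stages mentioned above (convolution, pooling, fully connected) can be sketched in Keras as follows; the input size and layer widths are assumptions rather than the study's exact settings.

```python
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Conv2D(16, (3, 3), activation='relu',
                  input_shape=(150, 150, 3)),     # convolutional layer
    layers.MaxPooling2D((2, 2)),                  # pooling layer
    layers.Flatten(),
    layers.Dense(64, activation='relu'),          # fully connected layer
    layers.Dense(3, activation='softmax'),        # rock, paper, scissors
])
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
model.summary()
```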


2020 ◽  
Vol 4 (1) ◽  
pp. 45-51
Author(s):  
Ari Peryanto ◽  
Anton Yudhana ◽  
Rusydi Umar

Image classification is a fairly easy task for humans, but for machines it is very complex and has long been a major open problem in computer vision. Many algorithms are used for image classification; one of them is the Convolutional Neural Network (CNN), a deep learning algorithm that developed out of the Multi-Layer Perceptron (MLP). This method achieves the most significant results in image recognition because it imitates the image recognition system of the human visual cortex and can therefore process image information effectively. In this research, the method is implemented using the Keras library with the Python programming language. With K = 5 cross-validation, the highest fold accuracy was 80.36%, the highest average accuracy was 76.49%, and the overall system accuracy was 72.02%. The lowest accuracy, 66.07%, was obtained in the fourth and fifth test folds. The system was also able to make predictions, with a highest average prediction score of 60.31% and a highest prediction value of 65.47%.
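
Because the CNN is described as a development of the MLP, a side-by-side sketch in Keras may help; both models are purely illustrative and use assumed input and class sizes.

```python
from tensorflow.keras import layers, models

# Plain MLP: the image is flattened immediately, so spatial structure is discarded.
mlp = models.Sequential([
    layers.Flatten(input_shape=(64, 64, 3)),
    layers.Dense(128, activation='relu'),
    layers.Dense(10, activation='softmax'),
])

# CNN: convolution and pooling learn local spatial filters before the same kind
# of dense layers, which is what distinguishes it from the MLP above.
cnn = models.Sequential([
    layers.Conv2D(32, (3, 3), activation='relu', input_shape=(64, 64, 3)),
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),
    layers.Dense(128, activation='relu'),
    layers.Dense(10, activation='softmax'),
])
```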


Techno Com ◽  
2020 ◽  
Vol 19 (4) ◽  
pp. 459-467
Author(s):  
Rahmat Widadi ◽  
Bongga Arif Widodo ◽  
Dodi Zulherman

The use of a Brain-Computer Interface (BCI) system as a link between the human mind and external devices depends heavily on the accurate classification and identification of EEG signals, particularly motor imagery movements. The success of deep learning, for example the Convolutional Neural Network (CNN), in classification tasks across many fields creates an opportunity to apply it to motor imagery classification. This paper introduces a CNN implementation for classifying motor imagery EEG (MI-EEG) signals of finger movements. The classification system consists of two parts, a convolution layer and a multilayer perceptron, implemented in Python 3.7 with the TensorFlow 2.0 (Keras) library. The design was tested on five subjects from the 5F MI-EEG dataset with a sampling frequency of 200 Hz. Testing involved K-fold cross-validation and confusion matrix analysis. Based on the test results, increasing the kernel size increased the average accuracy of the system. The best accuracy, 51.711%, was obtained with the design using 50 kernels. The proposed design outperforms the results of the study used as its main reference.
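
In the same TensorFlow 2.0 (Keras) setting, the convolution-plus-multilayer-perceptron design could be sketched as below; the channel count and window length are assumptions, while the 50 kernels follow the best design reported above.

```python
from tensorflow.keras import layers, models

n_channels, n_samples = 22, 200            # assumed montage; 1 s window at 200 Hz
model = models.Sequential([
    layers.Conv1D(50, kernel_size=10, activation='relu',
                  input_shape=(n_samples, n_channels)),   # 50 kernels, as in the best design
    layers.MaxPooling1D(2),
    layers.Flatten(),
    layers.Dense(100, activation='relu'),                 # multilayer perceptron part
    layers.Dense(5, activation='softmax'),                # five finger-movement classes
])
model.compile('adam', 'sparse_categorical_crossentropy', metrics=['accuracy'])
model.summary()
```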


2021 ◽  
pp. 1-10
Author(s):  
Chien-Cheng Lee ◽  
Zhongjian Gao ◽  
Xiu-Chi Huang

This paper proposes a Wi-Fi-based indoor human detection system using a deep convolutional neural network. The system detects different human states in various situations, including different environments and propagation paths. The main improvement of the proposed system is that no overhead cameras and no body-mounted sensors are required. The system captures useful amplitude information from the channel state information and converts it into an image-like two-dimensional matrix. Next, the two-dimensional matrix is used as input to a deep convolutional neural network (CNN) to distinguish human states. In this work, a deep residual network (ResNet) architecture is used to perform human state classification with hierarchical topological feature extraction. Several combinations of datasets for different environments and propagation paths are used in this study. ResNet's powerful inference simplifies feature extraction and improves the accuracy of human state classification. The experimental results show that the fine-tuned ResNet-18 model performs well in indoor human detection, covering the cases of no person present, a person standing still, and a person moving. Compared with traditional machine learning using handcrafted features, this method is simple and effective.
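
A sketch of fine-tuning ResNet-18 on the image-like CSI matrices, using torchvision as an assumed framework; the input resolution and single-channel adaptation are illustrative choices, not the paper's exact setup.

```python
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)   # pretrained backbone
model.conv1 = nn.Conv2d(1, 64, kernel_size=7, stride=2,
                        padding=3, bias=False)        # accept a 1-channel CSI matrix
model.fc = nn.Linear(model.fc.in_features, 3)         # not present / still / moving

csi_batch = torch.randn(8, 1, 224, 224)               # amplitude matrices resized to 224x224
logits = model(csi_batch)
print(logits.shape)                                   # torch.Size([8, 3])
```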

