Arrangements of Resting State Electroencephalography as the Input to Convolutional Neural Network for Biometric Identification

2019 ◽  
Vol 2019 ◽  
pp. 1-10 ◽  
Author(s):  
Chi Qin Lai ◽  
Haidi Ibrahim ◽  
Mohd Zaid Abdullah ◽  
Jafri Malin Abdullah ◽  
Shahrel Azmin Suandi ◽  
...  

Biometrics is an important field that enables identification of an individual for access to their sensitive information and assets. In recent years, electroencephalography (EEG)-based biometrics have been widely explored by researchers because EEG can distinguish between two individuals. Literature reviews have shown that the convolutional neural network (CNN) is one classification approach that can avoid the complex stages of preprocessing, feature extraction, and feature selection. Therefore, the CNN is suggested as an efficient classifier for biometric identification. Conventionally, the input to a CNN can be in image or matrix form. The objective of this paper is to explore arrangements of EEG as CNN input and to investigate which arrangement is most suitable for EEG-based identification. The EEG datasets used in this paper are resting state eyes open (REO) and resting state eyes closed (REC) EEG. Six types of data arrangement are compared: a matrix of amplitude versus time, a matrix of energy versus time, a matrix of amplitude versus time for rearranged channels, an image of amplitude versus time, an image of energy versus time, and an image of amplitude versus time for rearranged channels. The matrix of amplitude versus time for rearranged channels, using the combination of REC and REO, performed best for biometric identification, achieving a validation accuracy of 83.21% and a test accuracy of 79.08%.
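As an illustration of the matrix-style arrangement described above, the following minimal sketch (in PyTorch) feeds a channels-by-time amplitude matrix to a small CNN for subject identification; the channel count, segment length, and layer sizes are illustrative assumptions, not the configuration used in the paper.

```python
# Minimal sketch: resting-state EEG arranged as an amplitude-versus-time matrix
# (channels x samples) and classified by a small CNN. Shapes are assumptions.
import torch
import torch.nn as nn

n_channels, n_samples, n_subjects = 64, 256, 109  # assumed dimensions

class EEGMatrixCNN(nn.Module):
    def __init__(self, n_subjects):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * (n_channels // 4) * (n_samples // 4), n_subjects)

    def forward(self, x):              # x: (batch, 1, channels, time)
        x = self.features(x)
        return self.classifier(x.flatten(1))

segment = torch.randn(1, 1, n_channels, n_samples)  # one EEG segment as a matrix
logits = EEGMatrixCNN(n_subjects)(segment)           # per-subject identification scores
```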

2020 ◽  
Vol 10 (2) ◽  
pp. 84 ◽  
Author(s):  
Atif Mehmood ◽  
Muazzam Maqsood ◽  
Muzaffar Bashir ◽  
Yang Shuyuan

Alzheimer’s disease (AD) may permanently damage memory cells, resulting in dementia. Diagnosing Alzheimer’s disease at an early stage is a challenging task for researchers. For this, machine learning and deep convolutional neural network (CNN) based approaches are readily available to solve various problems related to brain image data analysis. In clinical research, magnetic resonance imaging (MRI) is used to diagnose AD. Accurate classification of dementia stages requires highly discriminative features obtained from MRI images. Recently, advanced deep CNN-based models have demonstrated their accuracy. However, because only a small number of image samples are available in the datasets, over-fitting hinders the performance of deep learning approaches. In this research, we developed a Siamese convolutional neural network (SCNN) model inspired by VGG-16 (also called Oxford Net) to classify dementia stages. In our approach, we extend the insufficient and imbalanced data by using augmentation approaches. Experiments were performed on the publicly available Open Access Series of Imaging Studies (OASIS) dataset; using the proposed approach, an excellent test accuracy of 99.05% is achieved for the classification of dementia stages. We compared our model with state-of-the-art models and found that the proposed model outperformed them in terms of performance, efficiency, and accuracy.
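The following minimal sketch shows one way a Siamese arrangement with a shared VGG-16 backbone could be set up in PyTorch; the embedding size, distance head, and input shapes are assumptions for illustration rather than the exact SCNN of the study.

```python
# Minimal sketch of a Siamese network with a shared VGG-16 backbone.
# Embedding size and the distance head are illustrative assumptions.
import torch
import torch.nn as nn
from torchvision import models

class SiameseVGG(nn.Module):
    def __init__(self, embed_dim=128):
        super().__init__()
        backbone = models.vgg16()                         # one backbone, shared by both branches
        backbone.classifier[6] = nn.Linear(4096, embed_dim)
        self.backbone = backbone

    def embed(self, x):
        return self.backbone(x)

    def forward(self, x1, x2):
        # Distance between embeddings; small distance suggests the same dementia stage
        return torch.pairwise_distance(self.embed(x1), self.embed(x2))

a = torch.randn(2, 3, 224, 224)   # pair of (augmented) MRI slices, assumed size
b = torch.randn(2, 3, 224, 224)
dist = SiameseVGG()(a, b)          # train with e.g. a contrastive loss
```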


IBRO Reports ◽  
2019 ◽  
Vol 6 ◽  
pp. S425
Author(s):  
Shin-Young Kang ◽  
Youngwoon Choi ◽  
Seung-Ho Paik ◽  
V. Zephaniah Phillips ◽  
Beop-Min Kim

2020 ◽  
Vol 10 (3) ◽  
pp. 681-687
Author(s):  
Danyang Ma ◽  
Genke Yang ◽  
Zeya Li ◽  
Haichun Liu ◽  
Changchun Pan ◽  
...  

Schizophrenia is a severe mental disorder that can result in hallucinations, delusions, and extremely disordered thinking and behavior. While electroencephalography (EEG) has been used as an auxiliary diagnostic tool in several recent studies, all EEG channels are typically treated homogeneously without addressing the dominance of certain channels. The main purpose of this study is to obtain a weight value for each channel as a quantitative representation of the influence of each scalp area on the classification of schizophrenia phases, and then to apply these weight values to improve classification accuracy. We propose a new convolutional neural network (CNN) structure based on AlexNet that derives the weight values as a weight layer and classifies the samples more accurately. Our results show that the modified CNN structure achieves better performance in terms of time consumption and classification accuracy than the original classifier. In addition, visualization of the weight layer in our model indicates possible correlations between scalp areas and schizophrenia conditions, which may benefit future pathological studies.
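A minimal sketch of the channel-weighting idea follows: a learnable per-channel weight layer placed in front of a CNN, so that the trained weights quantify each scalp area's influence. The channel count and the small 1D backbone shown here are assumptions, not the authors' AlexNet-based network.

```python
# Minimal sketch: a trainable per-channel weight layer in front of a CNN.
# The backbone below is a stand-in, not the paper's modified AlexNet.
import torch
import torch.nn as nn

class ChannelWeightLayer(nn.Module):
    def __init__(self, n_channels):
        super().__init__()
        # One trainable scalar per EEG channel (scalp electrode)
        self.weights = nn.Parameter(torch.ones(n_channels))

    def forward(self, x):              # x: (batch, channels, time)
        return x * self.weights.view(1, -1, 1)

n_channels = 64                        # assumed electrode count
model = nn.Sequential(
    ChannelWeightLayer(n_channels),
    nn.Conv1d(n_channels, 96, kernel_size=11, stride=4), nn.ReLU(),
    nn.AdaptiveAvgPool1d(1), nn.Flatten(),
    nn.Linear(96, 2),                  # schizophrenia vs. control
)
scores = model(torch.randn(8, n_channels, 1024))
# After training, model[0].weights can be visualized over the scalp layout.
```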


Diagnostics ◽  
2021 ◽  
Vol 11 (11) ◽  
pp. 1976
Author(s):  
Jingyu Kim ◽  
Su Young Jeong ◽  
Byung-Chul Kim ◽  
Byung-Hyun Byun ◽  
Ilhan Lim ◽  
...  

We compared the accuracy of prediction of the response to neoadjuvant chemotherapy (NAC) in osteosarcoma patients between machine learning approaches using whole-tumor fluorine-18 fluorodeoxyglucose (18F-FDG) uptake heterogeneity features and a convolutional neural network applied to the intratumor image region. In 105 patients with osteosarcoma, 18F-FDG positron emission tomography/computed tomography (PET/CT) images were acquired before (baseline PET0) and after NAC (PET1). Patients were divided into responders and non-responders to neoadjuvant chemotherapy. Quantitative 18F-FDG heterogeneity features were calculated using LIFEx version 4.0. Receiver operating characteristic (ROC) curve analysis of 18F-FDG uptake heterogeneity features was used to predict the response to NAC. Machine learning algorithms and 2-dimensional convolutional neural network (2D CNN) deep learning networks were evaluated for predicting NAC response from the baseline PET0 images of the 105 patients. Machine learning was performed using the entire tumor image. The accuracy of the 2D CNN prediction model was evaluated using all tumor slices, the center 20 slices, the center 10 slices, and the center slice. A total of 80 patients were used for k-fold validation with five groups of 16 patients, and the CNN test accuracy was estimated using the remaining 25 patients. The areas under the ROC curves (AUCs) for baseline PET maximum standardized uptake value (SUVmax), total lesion glycolysis (TLG), metabolic tumor volume (MTV), and gray level size zone matrix (GLSZM) were 0.532, 0.507, 0.510, and 0.626, respectively. The texture-feature test accuracies of machine learning with random forest and support vector machine were 0.55 and 0.54, respectively. The k-fold validation accuracy and validation accuracy were 0.968 ± 0.01 and 0.610 ± 0.04, respectively. The test accuracies for all tumor slices, the center 20 slices, the center 10 slices, and the center slice were 0.625, 0.616, 0.628, and 0.760, respectively. The prediction model for NAC response based on baseline PET0 texture features and machine learning yielded a poor outcome, but the 2D CNN network using 18F-FDG baseline PET0 images could predict the treatment response before chemotherapy in osteosarcoma. Additionally, the 2D CNN prediction model using a tumor center slice of 18F-FDG PET images before NAC can help decide whether to perform NAC to treat osteosarcoma patients.
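The center-slice idea can be sketched as follows: take the middle axial slice of the baseline PET0 tumor volume and classify response with a small 2D CNN. The array shapes and the network below are illustrative assumptions, not the study's model.

```python
# Minimal sketch: center-slice selection from a PET tumour volume followed by
# a small 2D CNN for responder vs. non-responder classification (assumed shapes).
import numpy as np
import torch
import torch.nn as nn

def center_slices(volume, k=1):
    """Return the k slices around the axial center of a (slices, H, W) volume."""
    mid = volume.shape[0] // 2
    lo = max(mid - k // 2, 0)
    return volume[lo:lo + k]

pet_volume = np.random.rand(40, 64, 64).astype("float32")   # assumed PET0 tumour crop
x = torch.from_numpy(center_slices(pet_volume, k=1))[None]  # (1, 1, 64, 64)

cnn = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
    nn.Flatten(), nn.Linear(32, 2),    # responder vs. non-responder
)
logits = cnn(x)
```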


Fruit grading is a process that affects quality control in fruit-processing industries and their ability to meet the efficiency demands of production and society. However, these industries have suffered from a lack of standards in quality control, long grading times, and low product output because of the use of manual methods. To meet the increasing demand for quality fruit products, fruit-processing industries must consider automating their fruit grading process. Several algorithms have been proposed over the years for this purpose, but they relied on color and shape and were unable to handle large datasets, which resulted in low recognition accuracy. To mitigate these flaws, we develop an automated system for grading and classification of apples using a Convolutional Neural Network (CNN), a technique used in image recognition and classification. Two models were developed from the CNN using ResNet50 as the convolutional base, a process called transfer learning. The first model, the apple checker model (ACM), performs recognition of the image with two output classes (apple and non-apple), while the apple grader model (AGM) classifies an apple image into four output classes (spoiled, grade A, grade B, and grade C). A comparative evaluation of both models was conducted, and experimental results show that the ACM achieved a test accuracy of 100% while the AGM obtained a recognition rate of 99.89%. The developed system may be employed in food-processing industries and related real-life applications.
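A minimal sketch of the two-model transfer-learning setup is given below, assuming a frozen ResNet50 convolutional base with a new classification head for each model (a binary apple checker and a four-class grader); the layer choices and class indices are assumptions for illustration.

```python
# Minimal sketch: two transfer-learning models sharing the same recipe,
# a frozen ResNet50 base plus a new trainable classification head.
import torch
import torch.nn as nn
from torchvision import models

def build_head(n_classes):
    base = models.resnet50()                    # convolutional base (load pretrained weights in practice)
    for p in base.parameters():
        p.requires_grad = False                 # freeze the base
    base.fc = nn.Linear(base.fc.in_features, n_classes)  # new trainable head
    return base

apple_checker = build_head(2)   # ACM: apple vs. non-apple
apple_grader = build_head(4)    # AGM: spoiled, grade A, grade B, grade C

img = torch.randn(1, 3, 224, 224)
if apple_checker(img).argmax(1).item() == 0:    # assume index 0 means "apple"
    grade = apple_grader(img).argmax(1).item()
```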


Entropy ◽  
2020 ◽  
Vol 22 (10) ◽  
pp. 1186
Author(s):  
Ranjana Koshy ◽  
Ausif Mahmood

Face liveness detection is a critical preprocessing step in face recognition for avoiding face spoofing attacks, where an impostor can impersonate a valid user for authentication. While considerable research has recently been done on improving the accuracy of face liveness detection, the best current approaches use a two-step process of first applying non-linear anisotropic diffusion to the incoming image and then using a deep network for the final liveness decision. Such an approach is not viable for real-time face liveness detection. We develop two end-to-end real-time solutions in which non-linear anisotropic diffusion based on an additive operator splitting scheme is first applied to an incoming static image, which enhances the edges and surface texture and preserves the boundary locations in the real image. The diffused image is then forwarded to a pre-trained Specialized Convolutional Neural Network (SCNN) and the Inception network version 4, which identify the complex and deep features for face liveness classification. We evaluate the performance of our integrated approach using the SCNN and Inception v4 on the Replay-Attack and Replay-Mobile datasets. The entire architecture is created in such a manner that, once trained, face liveness detection can be accomplished in real time. We achieve promising results of 96.03% and 96.21% face liveness detection accuracy with the SCNN, and 94.77% and 95.53% accuracy with Inception v4, on the Replay-Attack and Replay-Mobile datasets, respectively. We also develop a novel deep architecture for face liveness detection on video frames that uses diffusion of the images followed by a deep Convolutional Neural Network (CNN) and a Long Short-Term Memory (LSTM) network to classify the video sequence as real or fake. Even though the use of a CNN followed by an LSTM is not new, combining it with diffusion (which has proven to be the best approach for single-image liveness detection) is novel. Performance evaluation of our architecture on the Replay-Attack dataset gave 98.71% test accuracy and a 2.77% Half Total Error Rate (HTER), and on the Replay-Mobile dataset it gave 95.41% accuracy and a 5.28% HTER.
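A minimal sketch of the CNN-followed-by-LSTM arrangement for video liveness is shown below, assuming diffusion has already been applied to each frame; the small per-frame encoder and hidden sizes are assumptions, not the paper's architecture.

```python
# Minimal sketch: per-frame CNN features aggregated by an LSTM to classify a
# (diffused) video clip as real or spoofed. Encoder and sizes are assumptions.
import torch
import torch.nn as nn

class CNNLSTMLiveness(nn.Module):
    def __init__(self, hidden=128):
        super().__init__()
        self.encoder = nn.Sequential(            # per-frame feature extractor
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(4),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
        )
        self.lstm = nn.LSTM(32, hidden, batch_first=True)
        self.fc = nn.Linear(hidden, 2)           # real vs. spoofed sequence

    def forward(self, frames):                   # frames: (batch, T, 3, H, W)
        b, t = frames.shape[:2]
        feats = self.encoder(frames.flatten(0, 1)).view(b, t, -1)
        _, (h, _) = self.lstm(feats)
        return self.fc(h[-1])

clip = torch.randn(2, 8, 3, 64, 64)              # 8 diffused frames per clip (assumed)
logits = CNNLSTMLiveness()(clip)
```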


Sensors ◽  
2020 ◽  
Vol 20 (17) ◽  
pp. 4996 ◽  
Author(s):  
Haneul Jeon ◽  
Sang Lae Kim ◽  
Soyeon Kim ◽  
Donghun Lee

Classification of the foot–ground contact phases, as well as the swing phase, is essential in biomechanics domains where lower-limb motion analysis is required; this analysis is used for lower-limb rehabilitation, walking gait analysis and improvement, and exoskeleton motion capture. In this study, sliding-window label overlapping of time-series wearable motion data during training dataset acquisition is proposed to accurately detect the foot–ground contact phases, which are composed of three sub-phases, as well as the swing phase, at a frequency of 100 Hz with a convolutional neural network (CNN) architecture. We not only developed a real-time CNN model that reaches a test accuracy of 99.8% or higher, but also confirmed that its validation accuracy was close to 85%.
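A minimal sketch of sliding-window segmentation with overlapping labels for time-series wearable data follows; the window length, stride, and majority-label rule are assumptions used for illustration, not the paper's exact settings.

```python
# Minimal sketch: cut a labeled sensor stream into overlapping windows and
# assign each window one phase label by majority vote (illustrative settings).
import numpy as np

def sliding_windows(signal, labels, win=100, stride=10):
    """Cut a (T, n_sensors) stream into overlapping windows with one label each."""
    xs, ys = [], []
    for start in range(0, len(signal) - win + 1, stride):
        seg = signal[start:start + win]
        seg_labels = labels[start:start + win]
        ys.append(np.bincount(seg_labels).argmax())  # majority phase in the window
        xs.append(seg)
    return np.stack(xs), np.array(ys)

t, n_sensors = 1000, 6                      # assumed IMU stream length and channels
signal = np.random.randn(t, n_sensors)
labels = np.random.randint(0, 4, size=t)    # 3 contact sub-phases + swing phase
X, y = sliding_windows(signal, labels)      # X: (windows, 100, 6), ready for a CNN
```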


2018 ◽  
Vol 7 (4.11) ◽  
pp. 202 ◽  
Author(s):  
Mohd Shahrum Md Guntor ◽  
Rohilak Sahak ◽  
Azlee Zabidi ◽  
Nooritawati Md Tahir ◽  
Ihsan Mohd Yassin ◽  
...  

Biometric identification systems have recently made exponential advancements in terms of complexity and recognition accuracy for security purposes and a variety of other applications. In this paper, a Convolutional Neural Network (CNN) based gait recognition system using Microsoft Kinect skeletal joint data points is proposed for human identification. A total of 23 subjects were used for the experiments. The subjects were positioned 45 degrees (oblique view) from the Kinect. A CNN based on a modified AlexNet structure was used to fit the different input data size. The results indicate that the training and testing accuracies were 100% and 69.6%, respectively.
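As an illustration, skeletal joint streams can be arranged as a coordinates-by-joints-by-frames tensor and fed to a CNN; the joint count, frame count, and the reduced convolutional stack below are assumptions, not the modified AlexNet of the paper.

```python
# Minimal sketch: Kinect skeletal joint sequences arranged as a tensor
# (xyz channels x joints x frames) and classified by a small CNN.
import torch
import torch.nn as nn

n_joints, n_frames, n_subjects = 25, 60, 23     # assumed joint/frame counts; 23 subjects

model = nn.Sequential(
    nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(),   # x, y, z as input channels
    nn.MaxPool2d(2),
    nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(64, n_subjects),
)
gait = torch.randn(4, 3, n_joints, n_frames)    # (batch, xyz, joints, frames)
scores = model(gait)                             # per-subject identification scores
```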


2020 ◽  
Vol 21 (16) ◽  
pp. 5710
Author(s):  
Xiao Wang ◽  
Yinping Jin ◽  
Qiuwen Zhang

Mitochondrial proteins are physiologically active in different compartments, and their abnormal localization triggers the pathogenesis of human mitochondrial pathologies. Correctly identifying submitochondrial locations can provide information for understanding disease pathogenesis and for drug design. A mitochondrion has four submitochondrial compartments, the matrix, the outer membrane, the inner membrane, and the intermembrane space, but various existing studies have ignored the intermembrane space. The majority of researchers have used traditional machine learning methods for predicting mitochondrial protein localization. Those predictors required expert-level knowledge of biology to be encoded as features rather than allowing the underlying predictor to extract features through a data-driven procedure. Moreover, few researchers have considered the imbalance in the datasets. In this paper, we propose a novel end-to-end predictor employing deep neural networks, DeepPred-SubMito, for protein submitochondrial location prediction. First, we utilize random over-sampling to decrease the influence of the unbalanced datasets. Next, we train a multi-channel bilayer convolutional neural network over multiple subsequences to learn high-level features. Third, the prediction result is output through the fully connected layer. The performance of the predictor is measured by 10-fold cross-validation and 5-fold cross-validation on the SM424-18 dataset and the SubMitoPred dataset, respectively. Experimental results show that the predictor outperforms state-of-the-art predictors. In addition, prediction results on the M983 dataset also confirmed its effectiveness in predicting submitochondrial locations.
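A minimal sketch of the two ingredients named above follows: random over-sampling of minority classes, then a 1D convolutional network over encoded protein subsequences. The one-hot encoding, sequence length, and layer sizes are assumptions for illustration, not the DeepPred-SubMito configuration.

```python
# Minimal sketch: random over-sampling of minority classes followed by a small
# 1D CNN over one-hot-encoded protein sequences (all shapes are assumptions).
import numpy as np
import torch
import torch.nn as nn

def random_oversample(X, y, seed=0):
    """Duplicate minority-class samples until all classes are equally frequent."""
    rng = np.random.default_rng(seed)
    classes, counts = np.unique(y, return_counts=True)
    target = counts.max()
    idx = []
    for c, n in zip(classes, counts):
        members = np.flatnonzero(y == c)
        idx.extend(members)
        idx.extend(rng.choice(members, size=target - n, replace=True))
    idx = np.array(idx)
    return X[idx], y[idx]

X = np.random.rand(300, 20, 400).astype("float32")   # (proteins, one-hot amino acids, length)
y = np.random.randint(0, 4, size=300)                # four submitochondrial compartments
Xb, yb = random_oversample(X, y)

cnn = nn.Sequential(
    nn.Conv1d(20, 64, kernel_size=7, padding=3), nn.ReLU(),
    nn.Conv1d(64, 64, kernel_size=7, padding=3), nn.ReLU(),   # bilayer convolution
    nn.AdaptiveMaxPool1d(1), nn.Flatten(),
    nn.Linear(64, 4),                                          # fully connected output
)
logits = cnn(torch.from_numpy(Xb[:8]))
```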

