WTRPNet: An Explainable Graph Feature Convolutional Neural Network for Epileptic EEG Classification

Author(s):  
Qi Xin ◽  
Shaohao Hu ◽  
Shuaiqi Liu ◽  
Ling Zhao ◽  
Shuihua Wang

As one of the important tools for epilepsy diagnosis, the electroencephalogram (EEG) is noninvasive and causes no traumatic injury to patients, and it carries a wealth of easily acquired physiological and pathological information. The automatic classification of epileptic EEG is important both for diagnosis and for assessing therapeutic efficacy in epilepsy. In this article, an explainable graph feature convolutional neural network named WTRPNet is proposed for epileptic EEG classification. Because WTRPNet is constructed from a recurrence plot in the wavelet domain, it can fully capture the graph features of the EEG signal, which are extracted by an explainable graph feature extraction layer called the WTRP block. The proposed method shows superior performance over state-of-the-art methods. Experimental results show that our algorithm achieves an accuracy of 99.67% in the classification of focal and nonfocal epileptic EEG, which demonstrates its effectiveness for the classification and detection of epileptic EEG.
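
The abstract does not give the details of the WTRP block, but a minimal sketch of a wavelet-domain recurrence plot, the kind of image-like representation it is described as building, might look like the following. The wavelet family, decomposition level, and threshold are illustrative assumptions, not the authors' settings.

```python
import numpy as np
import pywt


def wavelet_recurrence_plot(eeg_segment, wavelet="db4", level=4, eps=None):
    # Decompose the 1-D EEG segment and keep the approximation coefficients.
    coeffs = pywt.wavedec(eeg_segment, wavelet, level=level)
    a = coeffs[0]
    # Pairwise distances between wavelet coefficients.
    d = np.abs(a[:, None] - a[None, :])
    # Threshold at a fraction of the maximum distance (assumed heuristic).
    if eps is None:
        eps = 0.1 * d.max()
    return (d < eps).astype(np.float32)  # binary recurrence matrix


rp = wavelet_recurrence_plot(np.random.randn(1024))
print(rp.shape)  # a square matrix that a CNN can consume as an image
```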

2021 ◽  
pp. 1-10
Author(s):  
Gayatri Pattnaik ◽  
Vimal K. Shrivastava ◽  
K. Parvathi

Pests are a major threat to the economic growth of a country. Applying pesticides is the easiest way to control pest infestation; however, excessive use of pesticides is hazardous to the environment. Recent advances in deep learning have paved the way for early detection and improved classification of pests in tomato plants, which will benefit farmers. This paper presents a comprehensive analysis of 11 state-of-the-art deep convolutional neural network (CNN) models under three configurations: transfer learning, fine-tuning, and scratch learning. Training in transfer learning and fine-tuning starts from pre-trained weights, whereas random weights are used in scratch learning. In addition, data augmentation has been explored to improve performance. Our dataset consists of 859 tomato pest images from 10 categories. The results demonstrate that the highest classification accuracy of 94.87% is achieved in the transfer learning approach by the DenseNet201 model with data augmentation.
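
A minimal sketch of the best-performing setup reported above (DenseNet201 initialized from ImageNet weights, frozen backbone, simple data augmentation, 10-class output) is shown below; the classifier head, augmentation choices, and hyperparameters are illustrative assumptions rather than the paper's exact configuration.

```python
import tensorflow as tf

# Pre-trained DenseNet201 backbone; transfer learning keeps its weights fixed.
base = tf.keras.applications.DenseNet201(
    include_top=False, weights="imagenet", input_shape=(224, 224, 3), pooling="avg")
base.trainable = False

model = tf.keras.Sequential([
    tf.keras.Input(shape=(224, 224, 3)),
    tf.keras.layers.RandomFlip("horizontal"),   # simple data augmentation
    tf.keras.layers.RandomRotation(0.1),
    base,
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(10, activation="softmax"),  # 10 pest categories
])
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
```

For the fine-tuning configuration one would instead unfreeze some or all of the backbone layers, and for scratch learning pass `weights=None` so training starts from random weights.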


2020 ◽  
Vol 10 (2) ◽  
pp. 84 ◽  
Author(s):  
Atif Mehmood ◽  
Muazzam Maqsood ◽  
Muzaffar Bashir ◽  
Yang Shuyuan

Alzheimer’s disease (AD) can permanently damage memory cells, resulting in dementia. Diagnosing Alzheimer’s disease at an early stage remains a challenging task for researchers. Machine learning and deep convolutional neural network (CNN) based approaches are readily available to solve various problems related to brain image data analysis. In clinical research, magnetic resonance imaging (MRI) is used to diagnose AD. For accurate classification of dementia stages, we need highly discriminative features obtained from MRI images. Recently, advanced deep CNN-based models have demonstrated high accuracy. However, due to the limited number of image samples available in the datasets, over-fitting hinders the performance of deep learning approaches. In this research, we developed a Siamese convolutional neural network (SCNN) model inspired by VGG-16 (also called OxfordNet) to classify dementia stages. In our approach, we extend the insufficient and imbalanced data by using augmentation approaches. Experiments are performed on the publicly available Open Access Series of Imaging Studies (OASIS) dataset; using the proposed approach, a test accuracy of 99.05% is achieved for the classification of dementia stages. We compared our model with state-of-the-art models and found that it outperforms them in terms of efficiency and accuracy.
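
As a rough illustration of the Siamese arrangement described above, the sketch below shares a VGG-16 backbone between two branches and returns a distance between their embeddings; the embedding size, distance measure, and head design are assumptions for illustration, not the authors' exact SCNN.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision import models


class SiameseVGG(nn.Module):
    def __init__(self, embedding_dim=128):
        super().__init__()
        vgg = models.vgg16(weights=None)  # pre-trained weights could be used instead
        # Shared feature extractor followed by a small embedding head.
        self.backbone = nn.Sequential(
            vgg.features, nn.Flatten(), nn.Linear(512 * 7 * 7, embedding_dim))

    def forward(self, x1, x2):
        # Both branches use the same weights; similar inputs map close together.
        e1, e2 = self.backbone(x1), self.backbone(x2)
        return F.pairwise_distance(e1, e2)


net = SiameseVGG()
dist = net(torch.randn(2, 3, 224, 224), torch.randn(2, 3, 224, 224))
print(dist.shape)  # torch.Size([2])
```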


2020 ◽  
Vol 17 (6) ◽  
pp. 172988142096696
Author(s):  
Jie Niu ◽  
Kun Qian

In this work, we propose a robust place recognition method for natural environments based on salient landmark screening and convolutional neural network (CNN) features. First, the salient objects in the image are segmented as candidate landmarks. Then, a category screening network is designed to remove specific object types that are not suitable for environmental modeling. Finally, a three-layer CNN is used to extract highly representative features of the salient landmarks. For similarity measurement, a Siamese network is used to compute the similarity between images. Experiments were conducted on three challenging benchmark place recognition datasets, and superior performance was achieved compared to other state-of-the-art methods, including FABMAP, SeqSLAM, SeqCNNSLAM, and PlaceCNN. Our method obtains the best results on the precision–recall curves, and the average precision reaches 78.43%, the highest among the compared methods. This demonstrates that CNN features extracted from the screened salient landmarks are robust against strong viewpoint and condition variations.
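
The evaluation above relies on precision–recall curves and average precision; a minimal scikit-learn sketch of that metric on made-up similarity scores follows (the labels and scores are illustrative assumptions, not the paper's data).

```python
import numpy as np
from sklearn.metrics import average_precision_score, precision_recall_curve

y_true = np.array([1, 0, 1, 1, 0, 1])               # 1 = same place, 0 = different place
scores = np.array([0.9, 0.4, 0.8, 0.6, 0.3, 0.7])   # Siamese-style similarity scores

precision, recall, _ = precision_recall_curve(y_true, scores)
print("average precision:", average_precision_score(y_true, scores))
```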


2020 ◽  
Vol 10 (5) ◽  
pp. 1040-1048 ◽  
Author(s):  
Xianwei Jiang ◽  
Liang Chang ◽  
Yu-Dong Zhang

More than 35 million patients suffer from Alzheimer’s disease (AD), and this number is growing, placing a heavy burden on countries around the world. Early detection is beneficial, and deep learning can aid AD identification effectively and achieve good results. A novel eight-layer convolutional neural network with batch normalization and dropout was proposed for the classification of Alzheimer’s disease. After data augmentation, the training dataset contained 7399 AD and 7399 healthy control (HC) samples. Our eight-layer CNN-BN-DO-DA method yielded a sensitivity of 97.77%, a specificity of 97.76%, a precision of 97.79%, an accuracy of 97.76%, an F1-score of 97.76%, and an MCC of 95.56% on the test set, the best performance among seven state-of-the-art approaches. The results strongly demonstrate that this method can effectively assist the clinical diagnosis of Alzheimer’s disease.
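
A minimal PyTorch sketch of a small CNN combining convolution, batch normalization, and dropout, in the spirit of the network described above, is given below; the abstract does not specify filter counts, kernel sizes, or the exact layer arrangement, so those values are illustrative assumptions.

```python
import torch
import torch.nn as nn


def conv_block(c_in, c_out):
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, kernel_size=3, padding=1),
        nn.BatchNorm2d(c_out),        # batch normalization after each convolution
        nn.ReLU(inplace=True),
        nn.MaxPool2d(2),
    )


model = nn.Sequential(
    conv_block(1, 16), conv_block(16, 32), conv_block(32, 64),
    nn.Flatten(),
    nn.Dropout(0.5),                  # dropout before the classifier
    nn.LazyLinear(2),                 # AD vs. healthy control
)
logits = model(torch.randn(4, 1, 128, 128))
print(logits.shape)  # torch.Size([4, 2])
```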


2020 ◽  
Vol 10 (2) ◽  
pp. 469 ◽  
Author(s):  
Athanasios Anagnostis ◽  
Gavriela Asiminari ◽  
Elpiniki Papageorgiou ◽  
Dionysis Bochtis

Anthracnose is a fungal disease that infects a large number of trees worldwide, severely damages the canopy, and spreads easily to neighboring trees, potentially destroying whole crops. Even though it can be treated relatively easily with good sanitation, proper pruning, and copper spraying, the main issue is early detection to prevent spreading. Machine learning algorithms can offer the tools for on-site classification of healthy and affected leaves as an initial step towards managing such diseases. The purpose of this study was to build a robust convolutional neural network (CNN) model able to classify leaf images according to whether they are infected by anthracnose, and therefore determine whether a tree is infected. A set of images was used in both grayscale and RGB modes, a fast Fourier transform was implemented for feature extraction, and a CNN architecture was selected based on its performance. Finally, the best-performing method was compared with state-of-the-art convolutional neural network architectures.
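
A brief sketch of FFT-based feature extraction from a grayscale leaf image, as a preprocessing step of the kind described above, is shown below; the magnitude-spectrum and log-scaling choices are assumptions for illustration.

```python
import numpy as np


def fft_features(gray_image):
    # 2-D FFT, shift the zero frequency to the center, keep the log-magnitude.
    spectrum = np.fft.fftshift(np.fft.fft2(gray_image))
    return np.log1p(np.abs(spectrum)).astype(np.float32)


features = fft_features(np.random.rand(128, 128))
print(features.shape)  # (128, 128); could be fed to the CNN as an input channel
```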


2021 ◽  
Vol 11 (1) ◽  
pp. 25-32
Author(s):  
Qi Xin ◽  
Shaohai Hu ◽  
Shuaiqi Liu ◽  
Xiaole Ma ◽  
Hui Lv ◽  
...  

Clinical electroencephalogram (EEG) data is of great significance for automated detection, recognition, and diagnosis, reducing valuable diagnosis time. To classify epilepsy, we constructed a convolutional support vector machine (CSVM) by integrating the advantages of convolutional neural networks (CNNs) and support vector machines (SVMs). To distinguish focal from non-focal epileptic EEG signals, we first reduced the dimensionality of the EEG signals using principal component analysis (PCA) and then classified them with the CSVM. The accuracy, sensitivity, and specificity of our method reach 99.56%, 99.72%, and 99.52%, respectively, which is competitive with widely accepted algorithms. The proposed automatic end-to-end epileptic EEG classification algorithm provides a useful reference for clinical epilepsy diagnosis.
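
A hedged sketch of the PCA and SVM stages of the pipeline above, on synthetic EEG-like vectors, follows; the convolutional stage that makes the model a CSVM is omitted, and the component count, kernel, and labels are illustrative assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

X = np.random.randn(200, 1024)          # 200 synthetic EEG segments, 1024 samples each
y = np.random.randint(0, 2, size=200)   # 0 = non-focal, 1 = focal (synthetic labels)

# Dimensionality reduction with PCA followed by an SVM classifier.
clf = make_pipeline(PCA(n_components=32), SVC(kernel="rbf"))
clf.fit(X, y)
print("training accuracy:", clf.score(X, y))
```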


2020 ◽  
Vol 6 (11) ◽  
pp. 127
Author(s):  
Ibrahem Kandel ◽  
Mauro Castelli ◽  
Aleš Popovič

The classification of musculoskeletal images can be very challenging, especially in the emergency room, where decisions must be made rapidly. The computer vision domain has gained increasing attention in recent years due to its achievements in image classification. The convolutional neural network (CNN) is one of the latest computer vision algorithms that has achieved state-of-the-art results. A CNN requires an enormous number of images to be adequately trained, and these are often scarce in the medical field. Transfer learning is a technique used to train CNNs with fewer images. In this paper, we study whether transfer learning or training from scratch is the more appropriate method to classify musculoskeletal images. We applied six state-of-the-art architectures and compared their performance under transfer learning and when trained from scratch. Our results show that transfer learning significantly increased model performance and, additionally, made the models less prone to overfitting.
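 
A minimal sketch contrasting the two training regimes compared above, initialization from pre-trained ImageNet weights (transfer learning) versus random initialization (training from scratch), is given below; the ResNet-50 backbone and the two-class head are illustrative assumptions, not the paper's specific architectures or labels.

```python
import torch.nn as nn
from torchvision import models


def build(pretrained: bool) -> nn.Module:
    weights = models.ResNet50_Weights.IMAGENET1K_V1 if pretrained else None
    net = models.resnet50(weights=weights)
    net.fc = nn.Linear(net.fc.in_features, 2)   # assumed binary study label
    return net


transfer_model = build(pretrained=True)    # starts from pre-trained weights
scratch_model = build(pretrained=False)    # starts from random weights
```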


IEEE Access ◽  
2020 ◽  
Vol 8 ◽  
pp. 66941-66950 ◽  
Author(s):  
Ji-Hoon Jeong ◽  
Byeong-Hoo Lee ◽  
Dae-Hyeok Lee ◽  
Yong-Deok Yun ◽  
Seong-Whan Lee

2021 ◽  
Author(s):  
Richardson Santiago Teles Menezes ◽  
Angelo Marcelino Cordeiro ◽  
Rafael Magalhães ◽  
Helton Maia

In this paper, state-of-the-art Convolutional Neural Network (CNN) architectures are explained and compared for the authorship classification of famous paintings. The chosen CNN architectures were VGG-16, VGG-19, Residual Neural Networks (ResNet), and Xception. The dataset used is available on Kaggle under the title “Best Artworks of All Time”. Weighted classes were created for each artist with more than 200 paintings in the dataset to represent and classify each artist’s style. The experiments resulted in an accuracy of up to 95% with an average F1-score of 0.87 for the Xception architecture, and 92% accuracy with an average F1-score of 0.83 for the ResNet in its 50-layer configuration, while neither VGG architecture produced satisfactory results for the same number of epochs, achieving at most 60% accuracy.
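
A small sketch of the class-weighting step mentioned above, using scikit-learn's balanced weights over a toy label list, is shown below; the artist names, counts, and handling of the 200-painting cut-off are illustrative assumptions.

```python
import numpy as np
from sklearn.utils.class_weight import compute_class_weight

# Toy label list standing in for artists with more than 200 paintings.
labels = np.array(["van Gogh"] * 500 + ["Degas"] * 300 + ["Picasso"] * 250)
classes = np.unique(labels)
weights = compute_class_weight(class_weight="balanced", classes=classes, y=labels)
print(dict(zip(classes, weights)))   # rarer artists receive larger weights
```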


2020 ◽  
Vol 2020 (4) ◽  
pp. 4-14
Author(s):  
Vladimir Budak ◽  
Ekaterina Ilyina

The article proposes a classification of lenses with different symmetric beam angles and offers a scale as a spotlight palette. A collection of spotlight images was created and classified according to the proposed scale. An analysis of 788 existing lenses and reflectors with different LEDs and COBs was carried out, and the dependence of axial light intensity on beam angle was obtained. Transfer training of a new deep convolutional neural network (CNN) based on the pre-trained GoogLeNet was performed using this collection. Grad-CAM analysis showed that the trained network correctly identifies object features. This work allows us to classify arbitrary spotlights with an accuracy of about 80%. Thus, a lighting designer can determine the class of a spotlight and the corresponding type of lens with its technical parameters using this new CNN-based model.
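
A hedged sketch of transfer training on a pre-trained GoogLeNet, in the spirit of the setup described above, is given below; the number of beam-angle classes and the frozen-backbone choice are illustrative assumptions.

```python
import torch.nn as nn
from torchvision import models

# Load GoogLeNet with ImageNet weights and keep the pre-trained features fixed.
net = models.googlenet(weights=models.GoogLeNet_Weights.IMAGENET1K_V1)
for p in net.parameters():
    p.requires_grad = False

# Replace the classifier with a head for the beam-angle classes (count assumed).
net.fc = nn.Linear(net.fc.in_features, 6)
```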

