Deep Convolutional Neural Network Model for Automated Diagnosis of Schizophrenia Using EEG Signals

2019 ◽  
Vol 9 (14) ◽  
pp. 2870 ◽  
Author(s):  
Shu Lih Oh ◽  
Jahmunah Vicnesh ◽  
Edward J Ciaccio ◽  
Rajamanickam Yuvaraj ◽  
U Rajendra Acharya

A computerized detection system for the diagnosis of schizophrenia (SZ) using a convolutional neural network is described in this study. Schizophrenia is a brain disorder characterized by behavioral symptoms such as hallucinations and disorganized speech. Electroencephalogram (EEG) signals reflect brain activity and are widely used to study brain disorders. We collected EEG signals from 14 healthy subjects and 14 SZ patients and developed an eleven-layered convolutional neural network (CNN) model to analyze the signals. Conventional machine learning techniques are often laborious and subject to intra-observer variability, so deep learning algorithms, which can automatically extract significant features and classify them, are employed in this study. Features are extracted automatically at the convolution stage, the most significant features are retained at the max-pooling stage, and the fully connected layer is used to classify the signals. The proposed model achieved classification accuracies of 98.07% and 81.26% for non-subject-based and subject-based testing, respectively. The developed model can likely aid clinicians as a diagnostic tool to detect early stages of SZ.
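The convolution, max-pooling, and classification stages described above can be sketched in miniature. The following is a toy numpy illustration of one convolution-ReLU-pooling step on a 1-D signal, not the paper's eleven-layer model; the kernel is an arbitrary illustrative choice:

```python
import numpy as np

def conv1d(x, kernel):
    """Valid 1-D cross-correlation, as computed by a CNN convolution layer."""
    k = len(kernel)
    return np.array([np.dot(x[i:i + k], kernel) for i in range(len(x) - k + 1)])

def max_pool(x, size=2):
    """Non-overlapping max-pooling: keep only the strongest activation per window."""
    n = len(x) // size
    return x[:n * size].reshape(n, size).max(axis=1)

x = np.array([0., 1., 0., -1., 0., 2., 0., -2.])   # toy "EEG" segment
edge = np.array([1., -1.])                          # illustrative difference kernel
features = max_pool(np.maximum(conv1d(x, edge), 0)) # conv -> ReLU -> max-pool
```

In a full CNN these pooled activations would pass through further conv/pool layers and finally a fully connected layer that outputs the class scores.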

Author(s):  
Kuldeep Singh ◽  
Sukhjeet Singh ◽  
Jyoteesh Malhotra

Schizophrenia is a severe mental disorder that affects millions of people globally by disturbing their thinking, feelings, and behaviour. In the age of the Internet of Things, assisted by cloud computing and machine learning techniques, computer-aided diagnosis of schizophrenia is needed to offer its patients the opportunity for a better quality of life. In this context, the present paper proposes a spectral-features-based convolutional neural network (CNN) model for accurate, real-time identification of schizophrenic patients using spectral analysis of multichannel EEG signals. This model processes acquired EEG signals with filtering, segmentation, and conversion into the frequency domain. The frequency-domain segments are then divided into six distinct spectral bands: delta, theta-1, theta-2, alpha, beta, and gamma. Spectral features, including mean spectral amplitude, spectral power, and the Hjorth descriptors (Activity, Mobility, and Complexity), are extracted from each band. These features are independently fed to the proposed spectral-features-based CNN and to a long short-term memory (LSTM) model for classification. This work also uses raw time-domain and frequency-domain EEG segments for classification, with temporal CNN and spectral CNN models of the same architecture, respectively. The overall analysis of the simulation results shows that the proposed spectral-features-based CNN model is an efficient technique for accurate and prompt identification of schizophrenic patients among healthy individuals, with average classification accuracies of 94.08% and 98.56% on two different datasets and a short classification time.
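The Hjorth descriptors and band-wise spectral features named above are standard quantities and can be computed directly. A minimal numpy sketch follows; the band edges (8-12 Hz for alpha) and sampling rate are illustrative, not taken from the paper:

```python
import numpy as np

def hjorth_descriptors(x):
    """Hjorth Activity, Mobility, and Complexity of a 1-D signal."""
    dx = np.diff(x)                                   # first derivative
    ddx = np.diff(dx)                                 # second derivative
    activity = np.var(x)
    mobility = np.sqrt(np.var(dx) / np.var(x))
    complexity = np.sqrt(np.var(ddx) / np.var(dx)) / mobility
    return activity, mobility, complexity

def band_features(x, fs, lo, hi):
    """Mean spectral amplitude and mean spectral power within [lo, hi) Hz."""
    amp = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    mask = (freqs >= lo) & (freqs < hi)
    return amp[mask].mean(), np.mean(amp[mask] ** 2)

# Example: 1 s of a pure 10 Hz oscillation (alpha band) sampled at 250 Hz
fs = 250
t = np.arange(fs) / fs
x = np.sin(2 * np.pi * 10 * t)
act, mob, comp = hjorth_descriptors(x)
mean_amp, power = band_features(x, fs, 8, 12)
```

For a pure sinusoid, Activity equals its variance (0.5 here) and Complexity is close to 1, since the derivative of a sinusoid has the same frequency content.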


Author(s):  
Gyanendra K. Verma ◽  
Pragya Gupta

Monitoring wild animals has become easier thanks to camera-trap networks, a technique that explores wildlife using cameras automatically triggered by the presence of an animal and that yields large volumes of multimedia data. Wild animal detection has been a dynamic research field for several decades. In this paper, we propose a wild animal detection system to monitor wildlife and detect animals in highly cluttered natural images. The data acquired from a camera-trap network comprise highly cluttered scenes, which pose a challenge for animal detection and bring about low recognition rates and high false discovery rates. To deal with this issue, we use a camera-trap database that provides candidate regions obtained with a multilevel graph cut in the spatiotemporal domain. These regions drive a verification stage that recognizes whether an animal is present in a scene. Features are extracted from the cluttered images using a deep convolutional neural network (CNN). We implemented the system using two prominent CNN models, VGGNet and ResNet, on a standard camera-trap database. Finally, the CNN features are fed to several state-of-the-art machine learning classifiers. Our results demonstrate that the proposed system is superior to existing systems reported in the literature.


2019 ◽  
Vol 24 (3) ◽  
pp. 220-228
Author(s):  
Gusti Alfahmi Anwar ◽  
Desti Riminarsih

Panthera is a genus of the cat family with four popular species: the tiger, jaguar, leopard, and lion. The lion is golden in colour and has no coat pattern; the tiger has long dark stripes; the jaguar has a larger body than the leopard and wider rosettes; while the leopard has a slightly slimmer body than the jaguar and smaller rosettes. In this study, the Panthera genus (tiger, jaguar, leopard, and lion) was classified using a Convolutional Neural Network. The Convolutional Neural Network model used has 1 input layer, 5 convolution layers, and 2 fully connected layers. The dataset consists of images of tigers, jaguars, leopards, and lions: 3840 training images, 960 validation images, and 800 test images. The trained model reached 92.31% training accuracy and 81.88% validation accuracy; evaluation on the test dataset yielded 68%. Per-class accuracy, taken from the F1-scores on the test set, was 78% for the tiger, 70% for the jaguar, 37% for the leopard, and 74% for the lion. The leopard obtained the lowest accuracy of the four animals, but still better than the results of previous research.


Genes ◽  
2021 ◽  
Vol 12 (8) ◽  
pp. 1155
Author(s):  
Naeem Islam ◽  
Jaebyung Park

RNA modification is vital to various cellular and biological processes. Among the existing RNA modifications, N6-methyladenosine (m6A) is considered the most important modification owing to its involvement in many biological processes. The prediction of m6A sites is crucial because it can provide a better understanding of their functional mechanisms. In this regard, although experimental methods are useful, they are time consuming. Previously, researchers have attempted to predict m6A sites using computational methods to overcome the limitations of experimental methods. Some of these approaches are based on classical machine-learning techniques that rely on handcrafted features and require domain knowledge, whereas other methods are based on deep learning. However, both methods lack robustness and yield low accuracy. Hence, we develop a branch-based convolutional neural network and a novel RNA sequence representation. The proposed network automatically extracts features from each branch of the designated inputs. Subsequently, these features are concatenated in the feature space to predict the m6A sites. Finally, we conduct experiments using four different species. The proposed approach outperforms existing state-of-the-art methods, achieving accuracies of 94.91%, 94.28%, 88.46%, and 94.8% for the H. sapiens, M. musculus, S. cerevisiae, and A. thaliana datasets, respectively.
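A CNN branch for sequence data needs a numeric input representation. The paper proposes its own novel representation, which the abstract does not specify; the sketch below shows only the classic one-hot baseline for RNA, as a stand-in illustration of mapping nucleotides to a matrix a convolutional branch can consume:

```python
import numpy as np

# Classic one-hot encoding of RNA; NOT the paper's novel representation.
NUC = {"A": 0, "C": 1, "G": 2, "U": 3}

def one_hot(seq):
    """Encode an RNA string as a (length, 4) one-hot matrix."""
    mat = np.zeros((len(seq), 4), dtype=np.float32)
    for i, base in enumerate(seq.upper()):
        mat[i, NUC[base]] = 1.0
    return mat

x = one_hot("GGACU")   # a short m6A-consensus-like motif
```

Each branch of a branch-based network would receive such a matrix (possibly under a different encoding), extract features independently, and the feature maps would then be concatenated before the final classification layers.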


2021 ◽  
pp. 1-10
Author(s):  
Chien-Cheng Lee ◽  
Zhongjian Gao ◽  
Xiu-Chi Huang

This paper proposes a Wi-Fi-based indoor human detection system using a deep convolutional neural network. The system detects different human states in various situations, including different environments and propagation paths. The main improvement of the proposed system is that it requires no overhead cameras and no mounted sensors. The system captures useful amplitude information from the channel state information and converts this information into an image-like two-dimensional matrix. Next, the two-dimensional matrix is used as input to a deep convolutional neural network (CNN) to distinguish human states. In this work, a deep residual network (ResNet) architecture performs human state classification with hierarchical topological feature extraction. Several combinations of datasets for different environments and propagation paths are used in this study. ResNet's powerful inference simplifies feature extraction and improves the accuracy of human state classification. The experimental results show that the fine-tuned ResNet-18 model performs well in indoor human detection, covering the states of no person present, a person still, and a person moving. Compared with traditional machine learning using handcrafted features, this method is simple and effective.


Entropy ◽  
2021 ◽  
Vol 23 (1) ◽  
pp. 119
Author(s):  
Tao Wang ◽  
Changhua Lu ◽  
Yining Sun ◽  
Mei Yang ◽  
Chun Liu ◽  
...  

Early detection of arrhythmia and effective treatment can prevent deaths caused by cardiovascular disease (CVD). In clinical practice, the diagnosis is made by checking the electrocardiogram (ECG) beat by beat, which is usually time-consuming and laborious. In this paper, we propose an automatic ECG classification method based on the Continuous Wavelet Transform (CWT) and a Convolutional Neural Network (CNN). The CWT decomposes ECG signals into different time-frequency components, and the CNN extracts features from the 2D scalogram composed of these components. Because the interval between surrounding R peaks (the RR interval) is also useful for diagnosing arrhythmia, four RR-interval features are extracted and combined with the CNN features as input to a fully connected layer for ECG classification. Tested on the MIT-BIH arrhythmia database, our method achieves an overall positive predictive value, sensitivity, F1-score, and accuracy of 70.75%, 67.47%, 68.76%, and 98.74%, respectively. Compared with existing methods, the overall F1-score of our method is higher by 4.75~16.85%. Because our method is simple and highly accurate, it can potentially be used as a clinical auxiliary diagnostic tool.
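The abstract does not name the four RR-interval features; a commonly used set in MIT-BIH beat classification is the previous RR, next RR, local mean RR, and global mean RR. The sketch below computes that assumed set; the paper's exact choice may differ:

```python
import numpy as np

def rr_features(r_peaks, i, fs=360.0, local_n=10):
    """Four common RR-interval features for beat i (in seconds):
    previous RR, next RR, local mean RR, global mean RR.
    fs = 360 Hz matches the MIT-BIH sampling rate."""
    rr = np.diff(r_peaks) / fs                     # all RR intervals, seconds
    pre_rr = rr[i - 1]                             # interval ending at beat i
    post_rr = rr[i]                                # interval starting at beat i
    local_rr = rr[max(0, i - local_n):i].mean()    # recent-history average
    global_rr = rr.mean()                          # whole-record average
    return pre_rr, post_rr, local_rr, global_rr

# Synthetic R-peak positions at a steady 72 bpm
peaks = np.arange(20) * 300                        # 300 samples = 0.833 s/beat
feats = rr_features(peaks, i=5)
```

These scalar features would be concatenated with the flattened CNN features from the scalogram before the fully connected classification layer.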


Sensors ◽  
2019 ◽  
Vol 19 (1) ◽  
pp. 210 ◽  
Author(s):  
Zied Tayeb ◽  
Juri Fedjaev ◽  
Nejla Ghaboosi ◽  
Christoph Richter ◽  
Lukas Everding ◽  
...  

Non-invasive, electroencephalography (EEG)-based brain-computer interfaces (BCIs) for motor imagery translate the subject's motor intention into control signals by classifying the EEG patterns caused by different imagination tasks, e.g., hand movements. This type of BCI has been widely studied and used as an alternative mode of communication and environmental control for disabled patients, such as those suffering from a brainstem stroke or a spinal cord injury (SCI). Notwithstanding the success of traditional machine learning methods in classifying EEG signals, these methods still rely on hand-crafted features. Extracting such features is difficult due to the high non-stationarity of EEG signals, which is a major cause of the stagnating progress in classification performance. Remarkable advances in deep learning methods allow end-to-end learning without any feature engineering, which could benefit BCI motor imagery applications. We developed three deep learning models: (1) a long short-term memory network (LSTM); (2) a spectrogram-based convolutional neural network (CNN); and (3) a recurrent convolutional neural network (RCNN), for decoding motor imagery movements directly from raw EEG signals without any manual feature engineering. Results were evaluated on our own publicly available EEG dataset collected from 20 subjects and on the existing 2b EEG dataset from the "BCI Competition IV". Overall, better classification performance was achieved with the deep learning models than with state-of-the-art machine learning techniques, which could chart a route ahead for developing new robust techniques for EEG signal decoding. We underpin this point by demonstrating the successful real-time control of a robotic arm using our CNN-based BCI.
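A spectrogram-based CNN first converts each raw EEG channel into a time-frequency image via a short-time Fourier transform. A minimal numpy sketch of that preprocessing step follows; the window size, hop, and sampling rate are illustrative, not the paper's settings:

```python
import numpy as np

def spectrogram(x, win=64, hop=32):
    """Magnitude spectrogram via a Hann-windowed short-time FFT.
    Returns an array of shape (freq_bins, time_frames)."""
    w = np.hanning(win)
    frames = [x[s:s + win] * w for s in range(0, len(x) - win + 1, hop)]
    return np.abs(np.fft.rfft(np.array(frames), axis=1)).T

# 2 s of a synthetic 12 Hz oscillation (mu-band range) at 128 Hz sampling
fs = 128
t = np.arange(2 * fs) / fs
x = np.sin(2 * np.pi * 12 * t)
S = spectrogram(x)   # frequency resolution fs/win = 2 Hz per bin
```

The energy concentrates in the bin covering 12 Hz (bin 6 at 2 Hz resolution); stacking such spectrograms across EEG channels yields the multi-channel image a 2-D CNN consumes.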


Author(s):  
Xu Chen ◽  
Shibo Wang ◽  
Houguang Liu ◽  
Jianhua Yang ◽  
Songyong Liu ◽  
...  

Abstract Many data-driven coal gangue recognition (CGR) methods based on the vibration or sound of collapsed coal and gangue have been proposed to achieve automatic CGR, which is important for realizing intelligent top-coal caving. However, the strong background noise and complex environment in underground coal mines render this task challenging in practical applications. Inspired by the fact that workers distinguish coal and gangue from underground noise by listening to the hydraulic support sound, we propose an auditory model based CGR method that simulates human auditory recognition by combining an auditory spectrogram with a convolutional neural network (CNN). First, we adjust the characteristic frequency (CF) distribution of the auditory peripheral model (APM) based on the spectral characteristics of collapsed sound signals from coal and gangue and then process the sound signals using the adjusted APM to obtain inferior colliculus auditory signals with multiple CFs. Subsequently, the auditory signals of all CFs are converted into gray images separately and then concatenated into a multichannel auditory spectrum along the channel dimension. Finally, we input the multichannel auditory spectrum as a feature map to the two-dimensional CNN, whose convolutional layers are used to automatically extract features, and the fully connected layer and softmax layer are used to flatten features and predict the recognition result, respectively. The CNN is optimized for the CGR based on a comparison study of four typical types of CNN structures with different network training hyperparameters. The experimental results show that this method affords an accurate CGR with a recognition accuracy of 99.5%. Moreover, this method offers excellent noise immunity compared with typically used CGR methods under various noisy conditions.
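The step of converting per-CF auditory signals into gray images and concatenating them along the channel dimension can be sketched directly. The array shapes and min-max gray scaling below are illustrative assumptions; the abstract does not specify them:

```python
import numpy as np

def to_gray(signal_2d):
    """Scale one CF's time-frequency response to an 8-bit gray image."""
    s = np.asarray(signal_2d, dtype=np.float64)
    s = (s - s.min()) / (s.max() - s.min() + 1e-12)
    return (s * 255).astype(np.uint8)

def multichannel_spectrum(cf_signals):
    """Concatenate per-CF gray images along the channel dimension,
    producing an (H, W, num_CFs) feature map for a 2-D CNN."""
    return np.stack([to_gray(s) for s in cf_signals], axis=-1)

# Four illustrative characteristic-frequency responses of shape (16, 32)
rng = np.random.default_rng(1)
fmap = multichannel_spectrum([rng.normal(size=(16, 32)) for _ in range(4)])
```

The resulting multichannel feature map plays the role of an image with one channel per characteristic frequency, which the 2-D CNN's convolutional layers then process jointly.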

