Automatic Segmentation and Classification of COVID-19 CT Image Using Deep Learning and Multi-Scale Recurrent Neural Network Based Classifier

2021 ◽  
Vol 11 (10) ◽  
pp. 2618-2625
Author(s):  
R. T. Subhalakshmi ◽  
S. Appavu Alias Balamurugan ◽  
S. Sasikala

In recent times, the COVID-19 epidemic has spread rapidly while only a limited number of rapid testing kits is available. Consequently, it is essential to develop automated techniques that detect COVID-19 from radiological images. The most common symptoms of COVID-19 are sore throat, fever, and dry cough, and symptoms can progress to a severe form of pneumonia with serious complications. As medical imaging is currently not recommended in Canada for primary COVID-19 diagnosis, computer-aided diagnosis systems could aid in the early detection of COVID-19 abnormalities, help monitor disease progression, and potentially reduce mortality rates. In this work, a deep learning based design for feature extraction and classification is employed for automatic COVID-19 diagnosis from computed tomography (CT) images. The proposed model operates in three main stages: pre-processing, feature extraction, and classification. The design incorporates the fusion of deep features extracted with GoogLeNet models. Finally, a multi-scale recurrent neural network (RNN) based classifier is applied to identify and classify the test CT images into distinct class labels. The experimental validation of the proposed model uses the open-source COVID-CT dataset, which comprises a total of 760 CT images. The experimental results showed superior performance, with maximum sensitivity, specificity, and accuracy.
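The abstract does not give the exact network configuration, so the following is only a minimal sketch of the idea it describes: a GoogLeNet-style backbone producing deep features that are reshaped into a sequence and passed to recurrent layers for binary CT classification. Keras' InceptionV3 stands in for the GoogLeNet backbone, and the GRU widths are assumptions.

```python
# Hypothetical sketch: GoogLeNet-style deep features feeding an RNN classifier
# for COVID / non-COVID CT classification. InceptionV3 stands in for GoogLeNet;
# layer sizes are guesses, not the authors' configuration.
import tensorflow as tf
from tensorflow.keras import layers, Model

def build_ct_classifier(input_shape=(224, 224, 3), num_classes=2):
    backbone = tf.keras.applications.InceptionV3(
        include_top=False, weights=None, input_shape=input_shape)
    inputs = layers.Input(shape=input_shape)
    feats = backbone(inputs)                             # (H', W', C) feature map
    seq = layers.Reshape((-1, feats.shape[-1]))(feats)   # spatial grid as a sequence
    # Two stacked bidirectional GRUs approximate the "multi-scale RNN" head.
    x = layers.Bidirectional(layers.GRU(64, return_sequences=True))(seq)
    x = layers.Bidirectional(layers.GRU(32))(x)
    outputs = layers.Dense(num_classes, activation="softmax")(x)
    return Model(inputs, outputs)

model = build_ct_classifier()
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```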

2021 ◽  
pp. 1063293X2110214
Author(s):  
RT Subhalakshmi ◽  
S Appavu alias Balamurugan ◽  
S Sasikala

Recently, the COVID-19 pandemic has spread drastically, while only a limited quantity of rapid testing kits is available. Therefore, automated COVID-19 diagnosis models are essential to identify the presence of the disease from radiological images. Earlier studies have focused on developing Artificial Intelligence (AI) techniques for COVID-19 diagnosis using X-ray images. This paper aims to develop a Deep Learning Based MultiModal Fusion technique, called DLMMF, for COVID-19 diagnosis and classification from Computed Tomography (CT) images. The proposed DLMMF model operates on three main processes, namely Wiener Filtering (WF) based pre-processing, feature extraction, and classification. The model incorporates the fusion of deep features extracted by VGG16 and Inception v4 models. Finally, a Gaussian Naïve Bayes (GNB) based classifier is applied to identify and classify the test CT images into distinct class labels. The experimental validation of the DLMMF model uses the open-source COVID-CT dataset, which comprises a total of 760 CT images. The experimental results showed superior performance, with a maximum sensitivity of 96.53%, specificity of 95.81%, accuracy of 96.81%, and F-score of 96.73%.
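A hedged sketch of the DLMMF-style pipeline follows, under two stated substitutions: scipy's wiener() is used for the Wiener-filter pre-processing step, and Keras' InceptionV3 stands in for Inception v4 (which keras.applications does not ship). Image sizes and the toy data are illustrative only.

```python
# Minimal DLMMF-style sketch: Wiener filtering, fused VGG16 + Inception deep
# features, and a Gaussian Naive Bayes classifier. Not the authors' exact code.
import numpy as np
from scipy.signal import wiener
from sklearn.naive_bayes import GaussianNB
import tensorflow as tf

def preprocess(ct_slice):
    """Wiener-filter a grayscale CT slice and stack it to 3 channels."""
    filtered = np.clip(wiener(ct_slice, mysize=5), 0, 1)
    return np.repeat(filtered[..., None], 3, axis=-1)

vgg = tf.keras.applications.VGG16(include_top=False, weights=None,
                                  pooling="avg", input_shape=(224, 224, 3))
inc = tf.keras.applications.InceptionV3(include_top=False, weights=None,
                                        pooling="avg", input_shape=(224, 224, 3))

def fused_features(batch):
    """Concatenate pooled deep features from both backbones (512 + 2048 dims)."""
    return np.concatenate([vgg.predict(batch, verbose=0),
                           inc.predict(batch, verbose=0)], axis=1)

# Toy usage with random arrays standing in for COVID-CT images.
X = np.stack([preprocess(np.random.rand(224, 224)) for _ in range(8)])
y = np.array([0, 1, 0, 1, 0, 1, 0, 1])
clf = GaussianNB().fit(fused_features(X), y)
print(clf.predict(fused_features(X[:2])))
```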


Author(s):  
Surenthiran Krishnan ◽  
Pritheega Magalingam ◽  
Roslina Ibrahim

This paper proposes a new hybrid deep learning model for heart disease prediction using a recurrent neural network (RNN) that combines multiple gated recurrent units (GRU), long short-term memory (LSTM) units, and the Adam optimizer. The proposed model achieved an outstanding accuracy of 98.6876%, the highest among existing RNN models. The model was developed in Python 3.7 by integrating the RNN with multiple GRUs running on Keras with TensorFlow as the backend for the deep learning process, supported by various Python libraries. Recent existing models using RNNs have reached an accuracy of 98.23%, and a deep neural network (DNN) has reached 98.5%. The common drawbacks of existing models are low accuracy due to the complex build-up of the neural network, a high number of redundant neurons in the neural network model, and the imbalanced Cleveland dataset. Experiments were conducted with various customized models, and the results showed that the proposed model using an RNN and multiple GRUs with the synthetic minority oversampling technique (SMOTE) reached the best performance. This is the highest accuracy reported for an RNN on the Cleveland dataset and is promising for early heart disease prediction in patients.
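A minimal sketch of the kind of GRU + LSTM recurrent model with SMOTE oversampling described above, assuming the 13 Cleveland attributes are treated as a length-13 sequence so recurrent layers can be applied to tabular data. Layer counts, widths, and the toy data are assumptions, not the published configuration.

```python
# Hedged sketch: stacked GRUs + an LSTM with the Adam optimizer, trained on
# SMOTE-balanced tabular data standing in for the Cleveland heart dataset.
import numpy as np
from imblearn.over_sampling import SMOTE
from tensorflow.keras import layers, models, optimizers

def build_model(n_features=13):
    model = models.Sequential([
        layers.Input(shape=(n_features, 1)),
        layers.GRU(32, return_sequences=True),
        layers.GRU(16, return_sequences=True),    # multiple stacked GRUs
        layers.LSTM(16),                          # LSTM before the dense head
        layers.Dense(1, activation="sigmoid"),
    ])
    model.compile(optimizer=optimizers.Adam(learning_rate=1e-3),
                  loss="binary_crossentropy", metrics=["accuracy"])
    return model

# Toy imbalanced data (13 attributes, binary label) rebalanced with SMOTE.
X = np.random.rand(300, 13)
y = (np.random.rand(300) > 0.7).astype(int)
X_res, y_res = SMOTE(random_state=42).fit_resample(X, y)
model = build_model()
model.fit(X_res[..., None], y_res, epochs=2, batch_size=32, verbose=0)
```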


2020 ◽  
Vol 10 (18) ◽  
pp. 6381 ◽  
Author(s):  
Pongsathon Janyoi ◽  
Pusadee Seresangtakul

The modeling of the fundamental frequency (F0) in speech synthesis is a critical factor affecting the intelligibility and naturalness of synthesized speech. In this paper, we focus on improving F0 modeling for Isarn speech synthesis. We propose an F0 model based on a recurrent neural network (RNN). Sampled F0 values at the syllable level of continuous Isarn speech are combined with their dynamic features to represent the supra-segmental properties of the F0 contour. Different architectures of deep RNNs and different combinations of linguistic features are analyzed to obtain the conditions for the best performance. To assess the proposed method, we compared it with several RNN-based baselines. The results of objective and subjective tests indicate that the proposed model significantly outperforms both the baseline RNN model that predicts F0 values at the frame level and the baseline RNN model that represents the F0 contours of syllables using the discrete cosine transform.
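For illustration only, here is a small sketch of a deep bidirectional RNN that maps per-syllable linguistic feature vectors to sampled F0 values plus their dynamic (delta) features, in the spirit of the model above. The feature dimensions (40 linguistic features, 10 F0 samples per syllable) and layer widths are assumptions.

```python
# Hypothetical syllable-level F0 model: linguistic features in, sampled F0
# contour + delta features out, trained with a mean-squared-error loss.
import numpy as np
from tensorflow.keras import layers, models

N_LING, N_F0 = 40, 10            # assumed linguistic features / F0 samples per syllable
OUT_DIM = N_F0 * 2               # static F0 samples + delta (dynamic) features

model = models.Sequential([
    layers.Input(shape=(None, N_LING)),              # variable-length syllable sequence
    layers.Bidirectional(layers.LSTM(128, return_sequences=True)),
    layers.Bidirectional(layers.LSTM(128, return_sequences=True)),
    layers.TimeDistributed(layers.Dense(OUT_DIM)),   # F0 contour per syllable
])
model.compile(optimizer="adam", loss="mse")

# Toy batch: 4 utterances of 20 syllables each.
X = np.random.rand(4, 20, N_LING)
Y = np.random.rand(4, 20, OUT_DIM)
model.fit(X, Y, epochs=1, verbose=0)
```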


2021 ◽  
Author(s):  
Soojin Lee ◽  
Ramy Hussein ◽  
Rabab Ward ◽  
Jane Z Wang ◽  
Martin J. McKeown

Background: Parkinson's disease (PD) is expected to become more common, particularly with an aging population. Diagnosis and monitoring of the disease typically rely on the laborious examination of physical symptoms by medical experts, which is necessarily limited and may not detect the prodromal stages of the disease. New Method: We propose a lightweight (20K parameters) deep learning model to discriminate between resting-state EEG recorded from people with PD and healthy controls. The proposed CRNN model consists of convolutional neural networks (CNN) and a recurrent neural network (RNN) with gated recurrent units (GRUs). The 1D CNN layers are designed to extract spatiotemporal features across EEG channels, which are subsequently supplied to the GRUs to discover temporal features pertinent to the classification. Results: The CRNN model achieved 99.2% accuracy, 98.9% precision, and 99.4% recall in classifying PD from healthy controls (HC). Interrogating the model, we further demonstrate that it is sensitive to dopaminergic medication effects and predominantly uses phase information of the EEG signals. Comparison with Existing Methods: The CRNN model achieves superior performance compared to baseline machine learning methods and other recently proposed deep learning models. Conclusion: The approach proposed in this study adequately extracts the spatial and temporal features of multi-channel EEG signals that enable accurate differentiation between PD and HC. It has excellent potential for use as an oscillatory biomarker to assist in the diagnosis and monitoring of people with PD. Future studies to further improve and validate the model's performance in clinical practice are warranted.
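A minimal, hypothetical CRNN sketch in the spirit of the model above: 1D convolutions over the multi-channel EEG time series followed by GRU layers and a sigmoid output. The channel count, window length, filter counts, and kernel sizes are guesses, not the authors' exact 20K-parameter configuration.

```python
# Hedged CRNN sketch for binary PD vs. healthy-control EEG classification.
import numpy as np
from tensorflow.keras import layers, models

N_CHANNELS, N_SAMPLES = 32, 1000      # assumed EEG montage and window length

model = models.Sequential([
    layers.Input(shape=(N_SAMPLES, N_CHANNELS)),
    layers.Conv1D(16, kernel_size=7, strides=2, activation="relu"),
    layers.Conv1D(16, kernel_size=7, strides=2, activation="relu"),
    layers.MaxPooling1D(pool_size=2),
    layers.GRU(32, return_sequences=True),
    layers.GRU(16),
    layers.Dense(1, activation="sigmoid"),    # PD vs. healthy control
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
```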


Sensors ◽  
2021 ◽  
Vol 21 (11) ◽  
pp. 3608
Author(s):  
Chiao-Sheng Wang ◽  
I-Hsi Kao ◽  
Jau-Woei Perng

Early fault diagnosis of motors is important, and many researchers have applied deep learning to motor diagnosis. This paper proposes a one-dimensional convolutional neural network for the diagnosis of permanent magnet synchronous motors. The one-dimensional convolutional neural network model is weakly supervised and consists of multiple convolutional feature-extraction modules. Through the analysis of the torque and current signals of the motors, the motors can be diagnosed under a wide range of speeds, variable loads, and eccentricity effects. The advantage of the proposed method is that the feature-extraction modules can extract multiscale features under complex conditions. The number of training parameters was reduced to mitigate overfitting. Furthermore, a class feature map is proposed to automatically determine the frequency components that contribute to the classification using the weak learning method. The experimental results reveal that the proposed model can effectively diagnose three different motor states: the healthy state, the demagnetization fault state, and the bearing fault state. In addition, the model can detect eccentricity effects. By combining the current and torque features, the classification accuracy of the proposed model reaches 98.85%, which is higher than that of classical machine-learning methods such as the k-nearest neighbor and support vector machine.
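A hedged sketch of a multi-scale 1D CNN for motor-signal classification, echoing the feature-extraction modules described above: parallel convolutions with different kernel sizes are concatenated so each module sees the signal at several scales. The two-channel input (current and torque), the kernel sizes, and the signal length are assumptions.

```python
# Illustrative multi-scale 1D CNN for three motor states
# (healthy / demagnetization fault / bearing fault).
from tensorflow.keras import layers, models

def multiscale_block(x, filters=16):
    """Parallel 1D convolutions with different kernel sizes, concatenated."""
    branches = [layers.Conv1D(filters, k, padding="same", activation="relu")(x)
                for k in (3, 7, 15)]
    return layers.MaxPooling1D(pool_size=2)(layers.concatenate(branches))

inputs = layers.Input(shape=(2048, 2))        # 2048 samples x (current, torque)
x = multiscale_block(inputs)
x = multiscale_block(x)
x = layers.GlobalAveragePooling1D()(x)        # keeps the parameter count small
outputs = layers.Dense(3, activation="softmax")(x)

model = models.Model(inputs, outputs)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```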


Author(s):  
Murali Kanthi ◽  
Thogarcheti Hitendra Sarma ◽  
Chigarapalle Shoba Bindu

Deep learning methods are state-of-the-art approaches for pixel-based hyperspectral image (HSI) classification. High classification accuracy has been achieved by extracting deep features from both the spatial and spectral channels. However, the efficiency of such spatial-spectral approaches depends on the spatial dimension of each patch, and there is no theoretically valid approach to finding the optimal spatial dimension. It is more appropriate to extract spatial features by considering varying neighborhood scales in the spatial dimensions. In this regard, this article proposes a deep convolutional neural network (CNN) model in which three different multi-scale spatial-spectral patches are used to extract features in both the spatial and spectral channels. To extract these potential features, the proposed deep learning architecture takes three patches of various scales in the spatial dimension. 3D convolution is performed on each selected patch, and the process runs over the entire image. The proposed model is named the multi-scale three-dimensional convolutional neural network (MS-3DCNN). The efficiency of the proposed model is verified through experimental studies on three publicly available benchmark datasets: Pavia University, Indian Pines, and Salinas. The results empirically show that the classification accuracy of the proposed model is improved compared with the remaining state-of-the-art methods.
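The following is an illustrative MS-3DCNN-style sketch under stated assumptions: three spatial-spectral patches at different neighborhood scales, each passed through 3D convolutions, then fused for per-pixel classification. The patch sizes (5, 7, 9), the band count (200), and the class count (16, as in Indian Pines) are chosen for the demo and are not the authors' exact settings.

```python
# Hedged multi-branch 3D CNN sketch for hyperspectral pixel classification.
from tensorflow.keras import layers, models

N_BANDS, N_CLASSES = 200, 16

def branch(patch_size):
    """One branch: a spatial-spectral patch processed by stacked 3D convolutions."""
    inp = layers.Input(shape=(patch_size, patch_size, N_BANDS, 1))
    x = layers.Conv3D(8, (3, 3, 7), activation="relu")(inp)
    x = layers.Conv3D(16, (3, 3, 5), activation="relu")(x)
    x = layers.GlobalAveragePooling3D()(x)
    return inp, x

inputs, feats = zip(*(branch(s) for s in (5, 7, 9)))   # three spatial scales
x = layers.concatenate(list(feats))
x = layers.Dense(128, activation="relu")(x)
outputs = layers.Dense(N_CLASSES, activation="softmax")(x)

model = models.Model(list(inputs), outputs)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```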


2020 ◽  
Author(s):  
Anusha Ampavathi ◽  
Vijaya Saradhi T

Big data and its approaches are generally helpful to the healthcare and biomedical sectors for predicting disease. For minor symptoms, it is difficult to consult a doctor in the hospital at any time, so big data provides essential information about diseases on the basis of a patient's symptoms. For many medical organizations, disease prediction is important for making the best feasible health care decisions. Conversely, the conventional medical care model offers structured input that requires more accurate and consistent prediction. This paper develops multi-disease prediction using an improvised deep learning concept. Different datasets pertaining to diabetes, hepatitis, lung cancer, liver tumor, heart disease, Parkinson's disease, and Alzheimer's disease are gathered from the benchmark UCI repository for conducting the experiments. The proposed model involves three phases: (a) data normalization, (b) weighted normalized feature extraction, and (c) prediction. Initially, the dataset is normalized in order to bring the attribute ranges to a common scale. Further, weighted feature extraction is performed, in which a weight function is multiplied with each attribute value to produce a larger-scale deviation. The weight function is optimized using a combination of two meta-heuristic algorithms, termed the Jaya Algorithm-based Multi-Verse Optimization (JA-MVO) algorithm. The optimally extracted features are subjected to hybrid deep learning algorithms, namely a Deep Belief Network (DBN) and a Recurrent Neural Network (RNN). As a modification to the hybrid deep learning architecture, the weights of both the DBN and the RNN are optimized using the same hybrid optimization algorithm. Finally, a comparative evaluation of the proposed prediction model against existing models certifies its effectiveness through various performance measures.
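To make the normalization and weighted-feature step concrete, here is a small sketch under loud assumptions: the JA-MVO meta-heuristic and the DBN/RNN predictors are out of scope, so a plain random search and a logistic-regression scorer stand in for them purely to show where the optimized attribute weights plug in.

```python
# Sketch of min-max normalization and weighted feature extraction; the random
# search below is a placeholder for JA-MVO, and logistic regression is a
# placeholder for the hybrid DBN/RNN predictor.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def min_max_normalize(X):
    """Scale every attribute into [0, 1]."""
    lo, hi = X.min(axis=0), X.max(axis=0)
    return (X - lo) / np.where(hi > lo, hi - lo, 1.0)

def weighted_features(X, w):
    """Multiply each normalized attribute by its learned weight."""
    return X * w

# Toy data standing in for one of the UCI disease datasets.
rng = np.random.default_rng(0)
X, y = rng.random((200, 10)), rng.integers(0, 2, 200)
Xn = min_max_normalize(X)

best_w, best_score = None, -np.inf
for _ in range(50):                      # random search in place of JA-MVO
    w = rng.random(Xn.shape[1])
    score = cross_val_score(LogisticRegression(max_iter=500),
                            weighted_features(Xn, w), y, cv=3).mean()
    if score > best_score:
        best_w, best_score = w, score
print("best weights:", np.round(best_w, 2))
```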


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Jared Hamwood ◽  
Beat Schmutz ◽  
Michael J. Collins ◽  
Mark C. Allenby ◽  
David Alonso-Caneiro

This paper proposes a fully automatic method to segment the inner boundary of the bony orbit in two different image modalities: magnetic resonance imaging (MRI) and computed tomography (CT). The method, based on a deep learning architecture, uses two fully convolutional neural networks in series followed by a graph-search method to generate a boundary for the orbit. When compared to human performance for segmentation of both CT and MRI data, the proposed method achieves high Dice coefficients on both orbit and background, with scores of 0.813 and 0.975 in CT images and 0.930 and 0.995 in MRI images, showing a high degree of agreement with a manual segmentation by a human expert. Given the volumetric characteristics of these imaging modalities and the complexity and time-consuming nature of the segmentation of the orbital region in the human skull, it is often impractical to manually segment these images. Thus, the proposed method provides a valid clinical and research tool that performs similarly to the human observer.
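Since the evaluation above is reported as Dice coefficients, a small reference implementation of that metric for binary segmentation masks follows (a generic formulation, not the authors' evaluation code).

```python
# Dice coefficient for two binary masks: 2|A ∩ B| / (|A| + |B|).
import numpy as np

def dice_coefficient(pred, truth, eps=1e-7):
    pred, truth = pred.astype(bool), truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    return (2.0 * intersection + eps) / (pred.sum() + truth.sum() + eps)

# Toy example: two overlapping square "orbit" masks.
a = np.zeros((64, 64), dtype=bool); a[10:40, 10:40] = True
b = np.zeros((64, 64), dtype=bool); b[15:45, 15:45] = True
print(round(dice_coefficient(a, b), 3))   # ~0.694 for these masks
```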


Sensors ◽  
2021 ◽  
Vol 21 (15) ◽  
pp. 4953
Author(s):  
Sara Al-Emadi ◽  
Abdulla Al-Ali ◽  
Abdulaziz Al-Ali

Drones are becoming increasingly popular, not only for recreational purposes but also in day-to-day applications in engineering, medicine, logistics, security, and other fields. Alongside their useful applications, alarming concerns regarding physical infrastructure security, safety, and privacy have arisen due to their potential use in malicious activities. To address this problem, we propose a novel solution that automates drone detection and identification using a drone's acoustic features with different deep learning algorithms. However, the lack of acoustic drone datasets hinders the ability to implement an effective solution. In this paper, we aim to fill this gap by introducing a hybrid drone acoustic dataset composed of recorded drone audio clips and artificially generated drone audio samples produced with a state-of-the-art deep learning technique known as the Generative Adversarial Network (GAN). Furthermore, we examine the effectiveness of using drone audio with different deep learning algorithms, namely the Convolutional Neural Network, the Recurrent Neural Network, and the Convolutional Recurrent Neural Network, for drone detection and identification. Moreover, we investigate the impact of our proposed hybrid dataset on drone detection. Our findings demonstrate the advantage of using deep learning techniques for drone detection and identification while confirming our hypothesis on the benefits of using Generative Adversarial Networks to generate realistic drone audio clips with the aim of enhancing the detection of new and unfamiliar drones.
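A hedged sketch of the detection side only: MFCC features extracted from an audio clip feeding a small CNN classifier (drone vs. no drone). librosa is assumed for feature extraction, the architecture and feature dimensions are illustrative, and the GAN-based augmentation is not shown.

```python
# Illustrative acoustic drone detector: MFCC features + a compact 2D CNN.
import numpy as np
import librosa
from tensorflow.keras import layers, models

def mfcc_features(waveform, sr=22050, n_mfcc=40):
    """Return an (n_mfcc, frames, 1) MFCC 'image' for one audio clip."""
    mfcc = librosa.feature.mfcc(y=waveform, sr=sr, n_mfcc=n_mfcc)
    return mfcc[..., None]

model = models.Sequential([
    layers.Input(shape=(40, None, 1)),          # variable number of frames
    layers.Conv2D(16, (3, 3), activation="relu"),
    layers.Conv2D(32, (3, 3), activation="relu"),
    layers.GlobalAveragePooling2D(),
    layers.Dense(1, activation="sigmoid"),      # drone present / absent
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Toy usage with one second of random noise standing in for a recording.
x = mfcc_features(np.random.randn(22050).astype(np.float32))
print(model.predict(x[None, ...], verbose=0).shape)
```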


Electronics ◽  
2021 ◽  
Vol 10 (1) ◽  
pp. 81
Author(s):  
Jianbin Xiong ◽  
Dezheng Yu ◽  
Shuangyin Liu ◽  
Lei Shu ◽  
Xiaochan Wang ◽  
...  

Plant phenotypic image recognition (PPIR) is an important branch of smart agriculture. In recent years, deep learning has achieved significant breakthroughs in image recognition. Consequently, PPIR technology that is based on deep learning is becoming increasingly popular. First, this paper introduces the development and application of PPIR technology, followed by its classification and analysis. Second, it presents the theory of four types of deep learning methods and their applications in PPIR. These methods include the convolutional neural network, deep belief network, recurrent neural network, and stacked autoencoder, and they are applied to identify plant species, diagnose plant diseases, etc. Finally, the difficulties and challenges of deep learning in PPIR are discussed.

