Automated Detection and Classification of COVID-19 from Chest X-ray Images Using Deep Learning

2020 ◽  
Vol 17 (12) ◽  
pp. 5457-5463
Author(s):  
K. Shankar ◽  
Eswaran Perumal

In recent times, COVID-19 has emerged as a major threat to healthcare professionals, governments, and research communities across the world, from diagnosis to medication. Several research works have been carried out to obtain possible solutions for controlling the epidemic proficiently. Effective diagnosis of COVID-19 has been carried out using computed tomography (CT) scans and X-rays to examine the lung images. However, this requires several radiologists and considerable time to examine every report, which is a tedious task. Therefore, this paper presents an automated deep learning (DL) based COVID-19 detection and classification model. The presented model performs preprocessing, feature extraction, and classification. In the first stage, a median filtering (MF) technique is applied to preprocess the input image. Next, a convolutional neural network (CNN) based VGGNet-19 model is applied as a feature extractor. Finally, an artificial neural network (ANN) is employed as a classification model to identify and classify the presence of COVID-19. An extensive set of simulation analyses was carried out to verify the performance of the applied model, and the experimental outcomes showed its improvement in terms of different measures.
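The preprocessing stage described above can be illustrated with a minimal sketch of median filtering, the standard denoising step the abstract names (this is an illustrative NumPy implementation, not the authors' code; the kernel size is an assumption):

```python
import numpy as np

def median_filter(img, k=3):
    """Replace each pixel with the median of its k x k neighbourhood,
    suppressing impulse noise before feature extraction."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.empty_like(img)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.median(padded[i:i + k, j:j + k])
    return out

# A single salt-noise pixel is suppressed by its neighbourhood median.
noisy = np.zeros((5, 5))
noisy[2, 2] = 255.0
denoised = median_filter(noisy)
```

In a full pipeline, the filtered image would then be resized and passed to the VGGNet-19 feature extractor.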

Author(s):  
V. N. Manjunath Aradhya ◽  
Mufti Mahmud ◽  
D. S. Guru ◽  
Basant Agarwal ◽  
M. Shamim Kaiser

Abstract: Coronavirus disease (COVID-19) had infected more than 28.3 million people around the globe and killed 913K people worldwide as of 11 September 2020. To combat the spread of COVID-19 in this pandemic, effective testing methodologies and immediate medical treatment are much required. Chest X-rays are a widely available modality for immediate diagnosis of COVID-19; hence, automated detection of COVID-19 from chest X-ray images using machine learning approaches is in great demand. A model for detecting COVID-19 from chest X-ray images is proposed in this paper. A novel concept of cluster-based one-shot learning is introduced in this work; it has the advantage of learning from a few samples, as opposed to the many samples required by deep learning architectures. The proposed model is a multi-class classification model, classifying images into four classes, viz., bacterial pneumonia, viral pneumonia, normal, and COVID-19. The model is based on a decision-level ensemble of Generalized Regression Neural Network (GRNN) and Probabilistic Neural Network (PNN) classifiers. Its effectiveness has been demonstrated through extensive experimentation on a publicly available dataset consisting of 306 images. The proposed cluster-based one-shot learning was found to be more effective with the GRNN and PNN ensembled model in distinguishing COVID-19 images from those of the other three classes, and the model was experimentally observed to outperform contemporary deep learning architectures. The concept of cluster-based one-shot learning is the first of its kind in the literature and is expected to open up several new dimensions in machine learning that require further research for various applications.
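One half of the ensemble above, the PNN, is a Parzen-window classifier: each class scores a query by the average Gaussian kernel response over that class's training samples. A minimal sketch (the smoothing parameter and toy data are assumptions, not the paper's settings):

```python
import numpy as np

def pnn_predict(X_train, y_train, x, sigma=1.0):
    """Probabilistic Neural Network: score each class by the mean
    Gaussian kernel response of x over that class's samples, then
    return the highest-scoring class."""
    scores = {}
    for c in np.unique(y_train):
        Xc = X_train[y_train == c]
        d2 = np.sum((Xc - x) ** 2, axis=1)          # squared distances
        scores[c] = np.mean(np.exp(-d2 / (2 * sigma ** 2)))
    return max(scores, key=scores.get)

# Two well-separated toy classes stand in for image feature vectors.
X = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0]])
y = np.array([0, 0, 1, 1])
pred = pnn_predict(X, y, np.array([4.9, 5.1]))
```

Because class scores need only a handful of stored exemplars, this kind of classifier pairs naturally with the few-sample (one-shot) regime the abstract describes.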


2021 ◽  
pp. 1-28
Author(s):  
Aakanksha Sharaff ◽  
Ramya Allenki ◽  
Rakhi Seth

Sentiment analysis works on the principle of categorizing and identifying text-based content; the process of classifying documents into one of a set of predefined classes is commonly known as text classification. Hackers deploy a strategy of sending malicious content as an advertisement link and attacking the user's system to gain information. To protect the system from this type of phishing attack, one needs to classify the spam data. This chapter discusses and compares various classification models used for phishing SMS detection through sentiment analysis. SMS data collected from Kaggle is classified as ham or spam; implementing deep learning techniques such as a Convolutional Neural Network (CNN), a CNN with 7 layers, and a CNN with 11 layers generates different results. For evaluating these results, different machine learning techniques are used as baseline algorithms: Naive Bayes, Decision Trees, Support Vector Machine (SVM), and Artificial Neural Network (ANN). After evaluation, CNN showed the highest accuracy of 99.47% as a classification model.
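One of the baselines named above, Naive Bayes, is compact enough to sketch end to end. The following is an illustrative multinomial Naive Bayes ham/spam classifier with Laplace smoothing on toy messages (not the chapter's implementation or dataset):

```python
import math
from collections import Counter

def train_nb(messages, labels):
    """Count per-class word frequencies for multinomial Naive Bayes."""
    word_counts = {"ham": Counter(), "spam": Counter()}
    class_counts = Counter(labels)
    for text, label in zip(messages, labels):
        word_counts[label].update(text.lower().split())
    vocab = set(w for c in word_counts.values() for w in c)
    return word_counts, class_counts, vocab

def predict_nb(model, text):
    """Pick the class with the highest smoothed log-likelihood."""
    word_counts, class_counts, vocab = model
    total = sum(class_counts.values())
    best, best_lp = None, -math.inf
    for label in class_counts:
        lp = math.log(class_counts[label] / total)       # class prior
        n = sum(word_counts[label].values())
        for w in text.lower().split():
            # Laplace (add-one) smoothing over the vocabulary
            lp += math.log((word_counts[label][w] + 1) / (n + len(vocab)))
        if lp > best_lp:
            best, best_lp = label, lp
    return best

msgs = ["win a free prize now", "free cash prize",
        "see you at lunch", "lunch meeting today"]
labs = ["spam", "spam", "ham", "ham"]
model = train_nb(msgs, labs)
```

A CNN replaces these hand-counted word statistics with learned convolutional features over token embeddings, which is where the reported accuracy gains come from.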


2020 ◽  
Vol 34 (03) ◽  
pp. 2594-2601
Author(s):  
Arjun Akula ◽  
Shuai Wang ◽  
Song-Chun Zhu

We present CoCoX (short for Conceptual and Counterfactual Explanations), a model for explaining decisions made by a deep convolutional neural network (CNN). In Cognitive Psychology, the factors (or semantic-level features) that humans zoom in on when they imagine an alternative to a model prediction are often referred to as fault-lines. Motivated by this, our CoCoX model explains decisions made by a CNN using fault-lines. Specifically, given an input image I for which a CNN classification model M predicts class cpred, our fault-line based explanation identifies the minimal semantic-level features (e.g., stripes on zebra, pointed ears of dog), referred to as explainable concepts, that need to be added to or deleted from I in order to alter the classification category of I by M to another specified class calt. We argue that, due to the conceptual and counterfactual nature of fault-lines, our CoCoX explanations are practical and more natural for both expert and non-expert users to understand the internal workings of complex deep learning models. Extensive quantitative and qualitative experiments verify our hypotheses, showing that CoCoX significantly outperforms the state-of-the-art explainable AI models. Our implementation is available at https://github.com/arjunakula/CoCoX


2020 ◽  
Vol 10 (1) ◽  
Author(s):  
Young-Gon Kim ◽  
Sungchul Kim ◽  
Cristina Eunbee Cho ◽  
In Hye Song ◽  
Hee Jin Lee ◽  
...  

Abstract: Fast and accurate confirmation of metastasis on frozen tissue sections from intraoperative sentinel lymph node biopsy is an essential tool for critical surgical decisions. However, accurate diagnosis by pathologists is difficult within the time limitations, and training a robust, accurate deep learning model is also difficult owing to the limited number of frozen datasets with high-quality labels. To overcome these issues, we validated the effectiveness of transfer learning from CAMELYON16 to improve performance of a convolutional neural network (CNN)-based classification model on our frozen dataset (N = 297) from Asan Medical Center (AMC). Among the 297 whole slide images (WSIs), 157 and 40 WSIs were used to train deep learning models with different dataset ratios of 2, 4, 8, 20, 40, and 100%. The remaining 100 WSIs were used to validate model performance in terms of patch- and slide-level classification. An additional 228 WSIs from Seoul National University Bundang Hospital (SNUBH) were used as an external validation. Three initial weights, i.e., scratch-based (random initialization), ImageNet-based, and CAMELYON16-based models, were used to assess their effectiveness in external validation. In the patch-level classification results on the AMC dataset, CAMELYON16-based models trained with a small dataset (up to 40%, i.e., 62 WSIs) showed a significantly higher area under the curve (AUC) of 0.929 than the scratch- and ImageNet-based models at 0.897 and 0.919, respectively, while CAMELYON16-based and ImageNet-based models trained with 100% of the training dataset showed comparable AUCs of 0.944 and 0.943, respectively. For the external validation, CAMELYON16-based models showed higher AUCs than the scratch- and ImageNet-based models. The feasibility of transfer learning to enhance model performance was thus validated for frozen-section datasets with limited numbers of slides.
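The abstract distinguishes patch-level from slide-level classification. A common way to obtain a slide-level call from patch-level probabilities is to take the maximum patch score; the sketch below illustrates that aggregation rule only (the rule and threshold are assumptions, not necessarily the paper's exact procedure):

```python
import numpy as np

def slide_prediction(patch_probs, threshold=0.5):
    """Aggregate per-patch tumour probabilities into one slide-level
    decision by taking the maximum patch score: a single strongly
    positive patch flags the whole slide."""
    slide_score = float(np.max(patch_probs))
    return slide_score, slide_score >= threshold

# Four patch probabilities from one hypothetical whole slide image.
score, is_positive = slide_prediction(np.array([0.02, 0.10, 0.91, 0.05]))
```

Max-pooling across patches is deliberately sensitive: it trades some false positives for high recall, which matches the intraoperative goal of not missing a metastasis.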


2021 ◽  
Vol 11 (15) ◽  
pp. 6976
Author(s):  
Miroslav Jaščur ◽  
Marek Bundzel ◽  
Marek Malík ◽  
Anton Dzian ◽  
Norbert Ferenčík ◽  
...  

Certain post-thoracic surgery complications are monitored in a standard manner using methods that employ ionising radiation. A need to automatise the diagnostic procedure has now arisen following the clinical trial of a novel lung ultrasound examination procedure that can replace X-rays. Deep learning was used as a powerful tool for lung ultrasound analysis. We present a novel deep-learning method, automated M-mode classification, to detect the absence of lung sliding motion in lung ultrasound. Automated M-mode classification leverages semantic segmentation to select 2D slices across the temporal dimension of the video recording. These 2D slices are the input for a convolutional neural network, and the output of the neural network indicates the presence or absence of lung sliding in the given time slot. We aggregate the partial predictions over the entire video recording to determine whether the subject has developed post-surgery complications. With a 64-frame version of this architecture, we detected lung sliding on average with a balanced accuracy of 89%, sensitivity of 82%, and specificity of 92%. Automated M-mode classification is suitable for lung sliding detection from clinical lung ultrasound videos. Furthermore, in lung ultrasound videos, we recommend using time windows between 0.53 and 2.13 s for the classification of lung sliding motion followed by aggregation.
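The two mechanical steps in the method above, slicing an M-mode image out of the video volume and aggregating per-window predictions, can be sketched directly (array shapes and the majority-vote aggregation rule are illustrative assumptions):

```python
import numpy as np

def mmode_slice(video, col):
    """Extract an M-mode image: track one image column across the
    temporal dimension of a (frames, height, width) recording."""
    return video[:, :, col]          # shape: (frames, height)

def aggregate(window_preds):
    """Majority vote over per-window 'lung sliding present' labels
    to reach a verdict for the whole recording."""
    return int(np.mean(window_preds) >= 0.5)

video = np.random.rand(64, 32, 48)   # 64 frames of 32x48 pixels
slice_img = mmode_slice(video, col=10)
verdict = aggregate([1, 1, 0, 1])
```

Each such 2D slice is what the CNN classifies for a given time slot; the vote then turns per-slot outputs into the per-recording decision the paper reports.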


Lung cancer is a serious illness that leads to an increased mortality rate globally. Identifying lung cancer at an early stage is the most probable way of improving patients' survival rate. Generally, a Computed Tomography (CT) scan is applied to find the location of the tumor and determine the stage of the cancer. Existing works have presented diagnosis and classification models for CT lung images; this paper designs an effective diagnosis and classification model for CT lung images. The presented model involves different stages, namely pre-processing, segmentation, feature extraction, and classification. The initial stage includes an adaptive histogram based equalization (AHE) model for image enhancement and a bilateral filtering (BF) model for noise removal. The pre-processed images are fed into the second stage, a watershed segmentation model, to effectively segment the images. Then, a deep learning based Xception model is applied for prominent feature extraction, and classification takes place using a logistic regression (LR) classifier. A comprehensive simulation is carried out on a benchmark dataset to verify the effective classification of the lung CT images, and the outcome implied the outstanding performance of the presented model on the applied test images.
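The enhancement stage above is histogram equalization. As a simplified stand-in for the adaptive variant (AHE) the paper uses, here is a global histogram equalization sketch: it stretches the cumulative distribution of intensities to occupy the full range (illustrative code, not the authors' implementation):

```python
import numpy as np

def hist_equalize(img, levels=256):
    """Global histogram equalization: map each grey level through the
    normalised cumulative histogram so intensities spread over [0, 255]."""
    hist = np.bincount(img.ravel(), minlength=levels)
    cdf = np.cumsum(hist).astype(float)
    cdf = (cdf - cdf.min()) / (cdf.max() - cdf.min())    # normalise to [0, 1]
    lut = np.round(cdf * (levels - 1)).astype(np.uint8)  # lookup table
    return lut[img]

# A tiny low-contrast image; the brightest pixel is pushed to 255.
img = np.array([[50, 50], [60, 200]], dtype=np.uint8)
out = hist_equalize(img)
```

AHE applies the same idea per local tile rather than globally, which boosts contrast in small lung structures without being dominated by the image-wide histogram.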


10.2196/23230 ◽  
2021 ◽  
Vol 9 (8) ◽  
pp. e23230
Author(s):  
Pei-Fu Chen ◽  
Ssu-Ming Wang ◽  
Wei-Chih Liao ◽  
Lu-Cheng Kuo ◽  
Kuan-Chih Chen ◽  
...  

Background: The International Classification of Diseases (ICD) code is widely used as a reference in medical systems and for billing purposes. However, classifying diseases into ICD codes still mainly relies on humans reading a large amount of written material as the basis for coding, which is both laborious and time-consuming. Since the conversion from ICD-9 to ICD-10, the coding task has become much more complicated, and deep learning- and natural language processing-related approaches have been studied to assist disease coders. Objective: This paper aims at constructing a deep learning model for ICD-10 coding that automatically determines the corresponding diagnosis and procedure codes based solely on free-text medical notes, in order to improve accuracy and reduce human effort. Methods: We used diagnosis records of the National Taiwan University Hospital as resources and applied natural language processing techniques, including global vectors, word to vectors, embeddings from language models, bidirectional encoder representations from transformers, and single head attention recurrent neural network, on a deep neural network architecture to implement ICD-10 auto-coding. In addition, we introduced an attention mechanism into the classification model to extract the keywords from diagnoses and visualize the coding reference for training freshmen in ICD-10. Sixty discharge notes were randomly selected to examine the change in F1-score and coding time by coders before and after using our model. Results: In experiments on the medical dataset of National Taiwan University Hospital, our prediction results revealed F1-scores of 0.715 and 0.618 for the ICD-10 Clinical Modification code and Procedure Coding System code, respectively, with a bidirectional encoder representations from transformers embedding approach in the gated recurrent unit classification model. The well-trained models were deployed on an ICD-10 web service for coding and for training ICD-10 users. With this service, coders' F1-scores significantly increased from a median of 0.832 to 0.922 (P<.05), but their coding time was not reduced. Conclusions: The proposed model significantly improved the F1-score but did not decrease the time consumed in coding by disease coders.
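Because each note can carry several ICD-10 codes, the F1-scores above are computed over predicted code sets. A minimal micro-averaged F1 sketch over multi-label predictions (the example codes are hypothetical; the paper's exact evaluation script may differ):

```python
def micro_f1(gold, pred):
    """Micro-averaged F1 over per-document code sets: pool true/false
    positives and false negatives across all documents, then combine."""
    tp = sum(len(g & p) for g, p in zip(gold, pred))   # codes matched
    fp = sum(len(p - g) for g, p in zip(gold, pred))   # codes over-predicted
    fn = sum(len(g - p) for g, p in zip(gold, pred))   # codes missed
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    if precision + recall == 0.0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Two documents: one missed code, one spurious code.
gold = [{"A41.9", "J18.9"}, {"I10"}]
pred = [{"A41.9"}, {"I10", "E11.9"}]
f1 = micro_f1(gold, pred)
```

Micro-averaging weights frequent codes more heavily than macro-averaging, which suits ICD coding where a few codes dominate the distribution.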


Author(s):  
Chao Du ◽  
Chang Liu ◽  
P. Balamurugan ◽  
P. Selvaraj

Artificial intelligence (AI) in healthcare has recently shown promise through deep neural networks, and it has increasingly entered clinical trials with positive outcomes. Deep learning is the process of using algorithms to train a neural network model on huge quantities of data to learn how to execute a given task and then make an accurate classification or prediction. Beyond physical health monitoring, such deep learning models can be used for the mental health evaluation of individuals. This study therefore designs a deep learning-based mental health monitoring scheme (DL-MHMS) for college students. The model uses an efficient convolutional neural network (CNN) to classify mental health status as positive, negative, or normal using EEG signals collected from college students. Compared to existing models, the simulation analysis achieves the highest classification accuracy and F1-score of 97.54% and 98.35%, respectively, a lower sleeping disorder rate of 21.19%, a low depression level of 18.11%, a reduced suicide-attention level of 28.14%, an increased personality development ratio of 97.52%, and an enhanced self-esteem ratio of 98.42%.


2021 ◽  
Author(s):  
Mohammed Ayub ◽  
SanLinn Kaka

Abstract: Manual first-break picking from a large volume of seismic data is extremely tedious and costly. Deploying machine learning models makes the process fast and cost-effective. However, these models require highly representative and effective features for accurate automatic picking. Therefore, a First-Break (FB) picking classification model that uses an effective minimum number of features and promises performance efficiency is proposed. Variants of Recurrent Neural Networks (RNNs) such as Long Short-Term Memory (LSTM) and Gated Recurrent Unit (GRU) can retain contextual information from long-past time steps. We exploit this advantage for FB picking, as seismic traces are amplitude values of vibration along the time axis, and we use the behavioral fluctuation of amplitude as input features for LSTM and GRU. The models are trained on noisy data and tested for generalization on original traces not seen during the training and validation process. To analyze real-time suitability, performance is benchmarked using accuracy, F1-measure, and three other established metrics. We trained two RNN models and two deep neural network models for FB classification using only amplitude values as features. Both LSTM and GRU achieve accuracy and F1-measure scores of 94.20%. With the same features, a Convolutional Neural Network (CNN) has an accuracy of 93.58% and an F1-score of 93.63%, and a Deep Neural Network (DNN) model scores 92.83% and 92.59% for accuracy and F1-measure, respectively. The experimental results show the significantly superior performance of LSTM and GRU over CNN and DNN when using the same features. To test the robustness of the LSTM and GRU models, their performance was also compared with a DNN model trained using nine features derived from the seismic traces, and the performance superiority of the RNN models was again observed. Therefore, it is safe to conclude that the RNN models (LSTM and GRU) are capable of classifying FB events efficiently even with a minimum number of features that are not computationally expensive. The novelty of our work is automatic FB classification with RNN models that incorporate contextual behavioral information without the need for sophisticated feature extraction or engineering techniques, which in turn reduces cost and makes the classification model more robust and faster.
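Feeding raw amplitude values to an LSTM/GRU, as described above, typically means slicing each trace into overlapping windows that become the network's input sequences. A minimal sketch of that preprocessing step (window length and stride are illustrative choices, not the authors' settings):

```python
import numpy as np

def make_windows(trace, window, step=1):
    """Slice a 1-D seismic trace of amplitude samples into overlapping
    fixed-length windows; each row is one sequence an RNN would consume."""
    starts = range(0, len(trace) - window + 1, step)
    return np.stack([trace[s:s + window] for s in starts])

trace = np.arange(10, dtype=float)      # stand-in for amplitude samples
X = make_windows(trace, window=4, step=2)
```

Each window would then be labelled by whether it contains the first-break arrival, turning picking into the sequence-classification problem the abstract describes.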


Author(s):  
Victoria Wu

Introduction: Scoliosis, an excessive curvature of the spine, affects approximately 1 in 1,000 individuals. As a result, mandatory scoliosis screening procedures were formerly implemented. Screening programs are no longer widely used, as the harms often outweigh the benefits: they cause many adolescents to undergo frequent diagnostic X-ray procedures. This makes spinal ultrasound an ideal substitute for scoliosis screening, as it does not expose patients to those levels of radiation. Spinal curvature can be accurately computed from the locations of the spinal transverse processes, by measuring the vertebral angle from a reference line [1]. However, ultrasound images are less clear than X-ray images, making it difficult to identify the transverse processes. To overcome this, we employ deep learning using a convolutional neural network, a powerful tool for computer vision and image classification [2]. Method: A total of 2,752 ultrasound images were recorded from a spine phantom to train a convolutional neural network; a subsequent recording of 747 images was used for testing. All ultrasound images from the scans were segmented manually using the 3D Slicer (www.slicer.org) software. Next, the dataset was fed through a convolutional neural network, a modified version of GoogLeNet (Inception v1) with 2 linearly stacked inception modules. This network was chosen because it provided a balance between accurate performance and time-efficient computation. Results: Deep learning classification using the Inception model achieved an accuracy of 84% for the phantom scan. Conclusion: The classification model performs with considerable accuracy. Better accuracy needs to be achieved, possibly with more available data and improvements in the classification model. Acknowledgements: G. Fichtinger is supported as a Canada Research Chair in Computer-Integrated Surgery.
This work was funded, in part, by NIH/NIBIB and NIH/NIGMS (via grant 1R01EB021396-01A1 - Slicer+PLUS: Point-of-Care Ultrasound) and by CANARIE's Research Software Program.
Figure 1: Ultrasound scan containing a transverse process (left), and ultrasound scan containing no transverse process (right).
Figure 2: Accuracy of classification for training (red) and validation (blue).
References:
[1] Ungi T, King F, Kempston M, Keri Z, Lasso A, Mousavi P, Rudan J, Borschneck DP, Fichtinger G. Spinal Curvature Measurement by Tracked Ultrasound Snapshots. Ultrasound in Medicine and Biology, 40(2):447-54, Feb 2014.
[2] Krizhevsky A, Sutskever I, Hinton GE. (2012). ImageNet Classification with Deep Convolutional Neural Networks. Advances in Neural Information Processing Systems 25:1097-1105.

