Deep learning based fusion model for COVID-19 diagnosis and classification using computed tomography images

2021 ◽  
pp. 1063293X2110214
Author(s):  
RT Subhalakshmi ◽  
S Appavu alias Balamurugan ◽  
S Sasikala

Recently, the COVID-19 pandemic has grown drastically, while only a limited quantity of rapid testing kits is available. Therefore, automated COVID-19 diagnosis models are essential to identify the presence of the disease from radiological images. Earlier studies have focused on developing Artificial Intelligence (AI) techniques for COVID-19 diagnosis using X-ray images. This paper aims to develop a Deep Learning Based MultiModal Fusion technique, called DLMMF, for COVID-19 diagnosis and classification from Computed Tomography (CT) images. The proposed DLMMF model operates through three main processes, namely Wiener Filtering (WF)-based pre-processing, feature extraction, and classification. The model fuses deep features extracted by the VGG16 and Inception v4 models. Finally, a Gaussian Naïve Bayes (GNB) classifier is applied to identify and classify the test CT images into distinct class labels. The experimental validation of the DLMMF model uses the open-source COVID-CT dataset, which comprises a total of 760 CT images. The experimental outcomes demonstrated superior performance, with a maximum sensitivity of 96.53%, specificity of 95.81%, accuracy of 96.81% and F-score of 96.73%.
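The final stage described above, fusing two deep feature vectors and classifying them with Gaussian Naïve Bayes, can be sketched as follows. This is an illustrative toy on random "features", not the authors' implementation; the function and variable names are hypothetical.

```python
import numpy as np

def fuse_features(f_vgg, f_inc):
    """Fuse two deep feature vectors by concatenation (one common fusion scheme)."""
    return np.concatenate([f_vgg, f_inc], axis=-1)

class GaussianNB:
    """Minimal Gaussian Naive Bayes: per-class feature means/variances + log-likelihood scoring."""
    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.theta_ = np.array([X[y == c].mean(axis=0) for c in self.classes_])
        self.var_ = np.array([X[y == c].var(axis=0) + 1e-9 for c in self.classes_])
        self.prior_ = np.array([np.mean(y == c) for c in self.classes_])
        return self

    def predict(self, X):
        # log p(c|x) is proportional to log p(c) + sum_d log N(x_d; mu_cd, var_cd)
        ll = -0.5 * (np.log(2 * np.pi * self.var_[None]) +
                     (X[:, None, :] - self.theta_[None]) ** 2 / self.var_[None]).sum(-1)
        return self.classes_[np.argmax(np.log(self.prior_)[None] + ll, axis=1)]

# Toy demo with random stand-ins for the two networks' feature vectors
rng = np.random.default_rng(0)
X0 = fuse_features(rng.normal(0, 1, (50, 8)), rng.normal(0, 1, (50, 8)))
X1 = fuse_features(rng.normal(3, 1, (50, 8)), rng.normal(3, 1, (50, 8)))
X = np.vstack([X0, X1]); y = np.array([0] * 50 + [1] * 50)
clf = GaussianNB().fit(X, y)
acc = (clf.predict(X) == y).mean()
```

Concatenation is the simplest fusion choice; weighted or learned fusion would slot into `fuse_features` without changing the classifier.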

2021 ◽  
Vol 11 (10) ◽  
pp. 2618-2625
Author(s):  
R. T. Subhalakshmi ◽  
S. Appavu Alias Balamurugan ◽  
S. Sasikala

In recent times, the COVID-19 epidemic has grown at an extreme rate, while only an inadequate number of rapid testing kits is available. Consequently, it is essential to develop automated COVID-19 detection techniques that recognize the presence of the disease from radiological images. The most common symptoms of COVID-19 are sore throat, fever, and dry cough. Symptoms can progress to a severe form of pneumonia with serious complications. As medical imaging is not currently recommended in Canada for primary COVID-19 diagnosis, computer-aided diagnosis systems might aid in the early detection of COVID-19 abnormalities, help monitor disease progression, and potentially reduce mortality rates. In this approach, a deep learning design for feature extraction and classification is employed for automatic COVID-19 diagnosis from computed tomography (CT) images. The proposed model operates through three main processes: pre-processing, feature extraction, and classification. The design fuses deep features extracted by GoogLeNet models. Finally, a multi-scale Recurrent Neural Network (RNN) classifier is applied to identify and classify the test CT images into distinct class labels. The experimental validation of the proposed model uses the open-source COVID-CT dataset, which comprises a total of 760 CT images. The experimental outcomes demonstrated superior performance with maximum sensitivity, specificity, and accuracy.
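The sensitivity, specificity, and accuracy reported by abstracts like this one all derive from the binary confusion matrix. A small helper makes the definitions concrete (illustrative only; labels and names are hypothetical, with 1 meaning COVID-positive):

```python
import numpy as np

def diagnostic_metrics(y_true, y_pred):
    """Sensitivity, specificity, and accuracy from binary labels (1 = positive)."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = np.sum((y_true == 1) & (y_pred == 1))
    tn = np.sum((y_true == 0) & (y_pred == 0))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    return {
        "sensitivity": tp / (tp + fn),      # true-positive rate
        "specificity": tn / (tn + fp),      # true-negative rate
        "accuracy": (tp + tn) / len(y_true),
    }

# Toy example: 3 positives and 3 negatives, one miss in each class
m = diagnostic_metrics([1, 1, 1, 0, 0, 0], [1, 1, 0, 0, 0, 1])
```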


Author(s):  
S. Vishwa Kiran ◽  
Inderjeet Kaur ◽  
K. Thangaraj ◽  
V. Saveetha ◽  
R. Kingsy Grace ◽  
...  

In recent times, the healthcare industry has been generating a significant amount of data in distinct formats, such as electronic health records (EHR), clinical trials, genetic data, payments, scientific articles, wearables, and care management databases. Data science is useful for analysis (pattern recognition, hypothesis testing, risk valuation) and prediction. The primary usage of data science in the healthcare domain is in medical imaging. At the same time, lung cancer diagnosis has become a hot research topic, as automated disease detection poses numerous benefits. Although numerous approaches exist in the literature for lung cancer diagnosis, designing a novel model that automatically identifies lung cancer is a challenging task. In this view, this paper designs an automated machine learning (ML) with data science-enabled lung cancer diagnosis and classification (MLDS-LCDC) model using computed tomography (CT) images. The presented model initially employs a Gaussian filtering (GF)-based pre-processing technique on the CT images collected from the lung cancer database. The pre-processed images are then fed into the normalized cuts (Ncuts) technique, which locates the nodule in each image. Moreover, the oriented FAST and rotated BRIEF (ORB) technique is applied as a feature extractor. At last, a sunflower optimization-based wavelet neural network (SFO-WNN) model is employed for the classification of lung cancer. To examine the diagnostic outcome of the MLDS-LCDC model, a set of experiments was carried out and the results were investigated from different aspects. The resultant values demonstrated the effectiveness of the MLDS-LCDC model over other state-of-the-art methods, with a maximum sensitivity of 97.01%, specificity of 98.64%, and accuracy of 98.11%.
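The Gaussian filtering (GF) pre-processing step mentioned above can be sketched with a separable 2D Gaussian blur. This is a minimal numpy sketch of the generic operation, not the paper's code; padding mode and kernel radius are assumptions.

```python
import numpy as np

def gaussian_kernel1d(sigma, radius):
    """Normalized 1D Gaussian kernel of half-width `radius`."""
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2 * sigma**2))
    return k / k.sum()

def gaussian_filter2d(img, sigma=1.0):
    """Separable Gaussian smoothing with edge padding, as a CT pre-processing step."""
    radius = int(3 * sigma)
    k = gaussian_kernel1d(sigma, radius)
    pad = np.pad(img, radius, mode="edge").astype(float)
    # filter rows first, then columns (separability of the Gaussian)
    rows = np.apply_along_axis(lambda r: np.convolve(r, k, mode="valid"), 1, pad)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode="valid"), 0, rows)

# Toy noisy "CT slice": smoothing preserves shape and reduces pixel noise
noisy = np.random.default_rng(1).normal(100.0, 10.0, (64, 64))
smooth = gaussian_filter2d(noisy, sigma=1.5)
```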


Cancers ◽  
2021 ◽  
Vol 14 (1) ◽  
pp. 40
Author(s):  
Gyu Sang Yoo ◽  
Huan Minh Luu ◽  
Heejung Kim ◽  
Won Park ◽  
Hongryull Pyo ◽  
...  

We aimed to evaluate and compare the quality of synthetic computed tomography (sCT) images generated by various deep-learning methods in volumetric modulated arc therapy (VMAT) planning for prostate cancer. Simulation computed tomography (CT) and T2-weighted simulation magnetic resonance images from 113 patients were used for sCT generation by three deep-learning approaches: generative adversarial network (GAN), cycle-consistent GAN (CycGAN), and reference-guided CycGAN (RgGAN), a new model which further adjusts the sCTs generated by CycGAN using available paired images. VMAT plans on the original simulation CT images were recalculated on the sCTs and the dosimetric differences were evaluated. For soft tissue, a significant difference in the mean Hounsfield units (HUs) between the original CT images and the sCTs was observed only for GAN (p = 0.03). The mean relative dose differences for planning target volumes or organs at risk were within 2% among the sCTs from the three deep-learning approaches. The differences from the original CT in the dosimetric parameters D98% and D95% were lowest for the sCT from RgGAN. In conclusion, HU conservation for soft tissue was poorest for GAN. There was a trend that the sCT generated by RgGAN showed the best dosimetric conservation of D98% and D95% among the three methodologies.
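The two comparisons in the abstract, mean HU difference within a tissue mask and relative dose difference of a planning parameter, are straightforward to state in code. A minimal sketch on synthetic arrays; the values and names are hypothetical, not the study's data.

```python
import numpy as np

def mean_hu_difference(ct_ref, ct_syn, mask):
    """Mean Hounsfield-unit difference between reference CT and synthetic CT inside a tissue mask."""
    return float(ct_syn[mask].mean() - ct_ref[mask].mean())

def relative_dose_difference(d_ref, d_syn):
    """Relative difference (%) of a dosimetric parameter, e.g. D98% recalculated on an sCT."""
    return 100.0 * (d_syn - d_ref) / d_ref

rng = np.random.default_rng(2)
ct_ref = rng.normal(40.0, 5.0, (32, 32))          # soft-tissue HU around +40 (toy values)
ct_syn = ct_ref + rng.normal(2.0, 1.0, (32, 32))  # sCT with a small HU bias
mask = np.ones((32, 32), dtype=bool)
dhu = mean_hu_difference(ct_ref, ct_syn, mask)
ddose = relative_dose_difference(70.0, 69.3)      # Gy, hypothetical D98% values
```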


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Frank Li ◽  
Jiwoong Choi ◽  
Chunrui Zou ◽  
John D. Newell ◽  
Alejandro P. Comellas ◽  
...  

Chronic obstructive pulmonary disease (COPD) is a heterogeneous disease, and the traditional variables extracted from computed tomography (CT) images may not be sufficient to describe all the topological features of lung tissues in COPD patients. We employed an unsupervised three-dimensional (3D) convolutional autoencoder (CAE)-feature constructor (FC) deep learning network to learn from CT data and derive tissue pattern-clusters jointly. We then applied exploratory factor analysis (EFA) to discover the unobserved latent traits (factors) among pattern-clusters. CT images at total lung capacity (TLC) and residual volume (RV) of 541 former smokers and 59 healthy non-smokers from the cohort of the SubPopulations and Intermediate Outcome Measures in the COPD Study (SPIROMICS) were analyzed. TLC and RV images were registered to calculate the Jacobian (determinant) values for all the voxels in the TLC images. 3D regions of interest (ROIs) with two data channels of CT intensity and Jacobian value were randomly extracted from training images and were fed to the 3D CAE-FC model. In total, 80 pattern-clusters and 7 factors were identified. Factor scores computed for individual subjects were able to predict spirometry-measured pulmonary functions. Two factors, which correlated with various emphysema subtypes, parametric response mapping (PRM) metrics, airway variants, and airway tree to lung volume ratio, were discriminants of patients across all severity stages. Our findings suggest the potential of developing factor-based surrogate markers for new COPD phenotypes.
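The per-voxel Jacobian determinant used above measures local volume change of the TLC-to-RV deformation. A 2D finite-difference sketch of the generic computation (illustrative; the study works in 3D and uses registration software, not this code):

```python
import numpy as np

def jacobian_determinant_2d(disp):
    """Per-pixel Jacobian determinant of a 2D displacement field disp[..., (u_y, u_x)].
    The transform is phi(x) = x + u(x), so J = I + grad(u) and det J = local area change."""
    dy_dy, dy_dx = np.gradient(disp[..., 0])  # derivatives of u_y along y, x
    dx_dy, dx_dx = np.gradient(disp[..., 1])  # derivatives of u_x along y, x
    return (1.0 + dy_dy) * (1.0 + dx_dx) - dy_dx * dx_dy

# Identity transform: determinant 1 everywhere (no local volume change)
ident = np.zeros((16, 16, 2))
jac_ident = jacobian_determinant_2d(ident)

# Uniform 10% expansion in each axis: determinant 1.1 * 1.1 = 1.21 everywhere
coords = np.stack(np.meshgrid(np.arange(16.0), np.arange(16.0), indexing="ij"), axis=-1)
jac_expand = jacobian_determinant_2d(0.1 * coords)
```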


Author(s):  
K. Shankar ◽  
Eswaran Perumal

The COVID-19 pandemic is increasing at an exponential rate, with restricted availability of rapid test kits. So, the design and implementation of COVID-19 testing kits remain an open research problem. Several findings attained using radio-imaging approaches suggest that the images contain important data related to coronaviruses. The application of recently developed artificial intelligence (AI) techniques, integrated with radiological imaging, is helpful in the precise diagnosis and classification of the disease. In this view, the current research paper presents a novel fusion model of hand-crafted and deep learning features, called the FM-HCF-DLF model, for diagnosis and classification of COVID-19. The proposed FM-HCF-DLF model comprises three major processes, namely Gaussian filtering-based preprocessing, FM for feature extraction, and classification. The FM model fuses hand-crafted features obtained via local binary patterns (LBP) with deep learning (DL) features extracted by the convolutional neural network (CNN)-based Inception v3 technique. To further improve the performance of the Inception v3 model, a learning rate scheduler using the Adam optimizer is applied. At last, a multilayer perceptron (MLP) is employed to carry out the classification process. The proposed FM-HCF-DLF model was experimentally validated using a chest X-ray dataset. The experimental outcomes inferred that the proposed model yielded superior performance with maximum sensitivity of 93.61%, specificity of 94.56%, precision of 94.85%, accuracy of 94.08%, F-score of 93.2% and kappa value of 93.5%.
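The hand-crafted branch above uses local binary patterns (LBP). The basic 3x3 LBP operator is small enough to sketch directly in numpy (an illustrative re-implementation, not the paper's code; neighbor ordering is one common convention among several):

```python
import numpy as np

def lbp_8neighbors(img):
    """Basic 3x3 local binary pattern: each interior pixel becomes an 8-bit code
    where bit k is set if the k-th neighbor is >= the center pixel."""
    img = np.asarray(img, dtype=float)
    c = img[1:-1, 1:-1]
    # neighbor offsets, clockwise from the top-left corner
    offs = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(c, dtype=int)
    for k, (dy, dx) in enumerate(offs):
        nb = img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
        code |= (nb >= c).astype(int) << k
    return code

# Single 3x3 patch, center = 5: neighbors 6, 9, 8, 7 are >= 5,
# setting bits 3, 4, 5, 6 -> code 8 + 16 + 32 + 64 = 120
patch = np.array([[1, 2, 3],
                  [4, 5, 6],
                  [7, 8, 9]])
code = lbp_8neighbors(patch)
```

A histogram of these codes over an image (or image blocks) is the feature vector that would be fused with the deep features.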


2021 ◽  
Vol 17 (4) ◽  
pp. 1-16
Author(s):  
Xiaowei Xu ◽  
Jiawei Zhang ◽  
Jinglan Liu ◽  
Yukun Ding ◽  
Tianchen Wang ◽  
...  

As one of the most commonly ordered imaging tests, the computed tomography (CT) scan comes with inevitable radiation exposure that increases cancer risk to patients. However, CT image quality is directly related to radiation dose, and thus it is desirable to obtain high-quality CT images with as little dose as possible. CT image denoising tries to obtain high-dose-like high-quality CT images (domain Y) from low-dose low-quality CT images (domain X), which can be treated as an image-to-image translation task where the goal is to learn the transform between a source domain X (noisy images) and a target domain Y (clean images). Recently, the cycle-consistent adversarial denoising network (CCADN) has achieved state-of-the-art results by enforcing a cycle-consistent loss without the need for paired training data, since paired data is hard to collect due to patients' interests and cardiac motion. However, out of concern for patients' privacy and data security, protocols typically require clinics to perform medical image processing tasks, including CT image denoising, locally (i.e., edge denoising). Therefore, the network models need to achieve high performance under various computation resource constraints, including memory and performance. Our detailed analysis of CCADN raises a number of interesting questions that point to potential ways to further improve its performance using the same or even fewer computation resources. For example, if the noise is large, leading to a significant difference between domain X and domain Y, can we bridge X and Y with an intermediate domain Z such that both the denoising process between X and Z and that between Z and Y are easier to learn? As such intermediate domains lead to multiple cycles, how do we best enforce cycle-consistency?
Driven by these questions, we propose a multi-cycle-consistent adversarial network (MCCAN) that builds intermediate domains and enforces both local and global cycle-consistency for edge denoising of CT images. The global cycle-consistency couples all generators together to model the whole denoising process, whereas the local cycle-consistency imposes effective supervision on the process between adjacent domains. Experiments show that both local and global cycle-consistency are important for the success of MCCAN, which outperforms CCADN in terms of denoising quality with slightly less computation resource consumption.
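The local versus global cycle-consistency idea can be sketched with generic generator functions between the three domains. A toy numpy sketch with L1 losses; the generator names and the use of simple bias-shifting "generators" are hypothetical stand-ins for the paper's networks.

```python
import numpy as np

def l1(a, b):
    """Mean absolute error, the usual cycle-consistency penalty."""
    return np.abs(a - b).mean()

def multi_cycle_losses(x, g_xz, g_zy, g_yz, g_zx):
    """Cycle-consistency terms when denoising X -> Z -> Y through an intermediate domain Z.
    Local cycles supervise adjacent domain pairs; the global cycle couples all generators."""
    z = g_xz(x)
    y = g_zy(z)
    local_xz = l1(g_zx(z), x)        # X -> Z -> X
    local_zy = l1(g_yz(y), z)        # Z -> Y -> Z
    global_x = l1(g_zx(g_yz(y)), x)  # X -> Z -> Y -> Z -> X
    return local_xz + local_zy + global_x

# Toy generators: each "denoising" step subtracts a constant bias; its inverse adds it back,
# so all cycles close exactly and the total loss is zero.
x = np.random.default_rng(3).normal(0.0, 1.0, (8, 8)) + 2.0
loss = multi_cycle_losses(
    x,
    g_xz=lambda a: a - 1.0, g_zy=lambda a: a - 1.0,
    g_yz=lambda a: a + 1.0, g_zx=lambda a: a + 1.0,
)
```

In training, these terms would be weighted and added to the adversarial losses of each generator/discriminator pair.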


Author(s):  
José Denes Lima Araújo ◽  
Luana Batista da Cruz ◽  
João Otávio Bandeira Diniz ◽  
Jonnison Lima Ferreira ◽  
Aristófanes Corrêa Silva ◽  
...  

Lung cancer is a serious illness that increases mortality rates globally. Identifying lung cancer at an early stage is the most likely way to improve patients' survival rates. Generally, a Computed Tomography (CT) scan is applied to find the location of the tumor and determine the stage of the cancer. Several existing works have presented diagnosis and classification models for CT lung images. This paper designs an effective diagnosis and classification model for CT lung images. The presented model involves different stages, namely pre-processing, segmentation, feature extraction, and classification. The initial stage includes an adaptive histogram equalization (AHE) model for image enhancement and a bilateral filtering (BF) model for noise removal. The pre-processed images are fed into the second stage, a watershed segmentation model, to segment the images effectively. Then, a deep learning-based Xception model is applied for prominent feature extraction, and the classification takes place by the use of a logistic regression (LR) classifier. A comprehensive simulation is carried out to ensure the effective classification of the lung CT images using a benchmark dataset. The outcome implied the outstanding performance of the presented model on the applied test images.
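The enhancement stage above is built on histogram equalization, which AHE applies per local region. A sketch of the plain (global) version of that underlying operation (illustrative numpy re-derivation; the adaptive, tiled variant the paper uses adds per-region histograms and interpolation on top of this):

```python
import numpy as np

def histogram_equalization(img, levels=256):
    """Remap gray levels so the cumulative histogram becomes approximately uniform."""
    img = np.asarray(img, dtype=np.int64)
    hist = np.bincount(img.ravel(), minlength=levels)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]
    # classic equalization mapping, rescaled back to [0, levels - 1]
    lut = np.round((cdf - cdf_min) / (img.size - cdf_min) * (levels - 1)).astype(np.int64)
    return lut[img]

# A low-contrast image confined to [100, 120) gets stretched toward the full range
rng = np.random.default_rng(4)
low = rng.integers(100, 120, size=(32, 32))
eq = histogram_equalization(low)
```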


2021 ◽  
Author(s):  
Hoon Ko ◽  
Jimi Huh ◽  
Kyung Won Kim ◽  
Heewon Chung ◽  
Yousun Ko ◽  
...  

BACKGROUND Detection and quantification of intraabdominal free fluid (i.e., ascites) on computed tomography (CT) are essential for identifying emergent or urgent conditions in patients. In an emergency department, automatic detection and quantification of ascites would be beneficial. OBJECTIVE We aimed to develop an artificial intelligence (AI) algorithm for the simultaneous automatic detection and quantification of ascites using a single deep learning model (DLM). METHODS 2D deep learning models (DLMs) based on a deep residual U-Net, U-Net, bi-directional U-Net, and recurrent residual U-Net were developed to segment areas of ascites on abdominopelvic CT. Based on the segmentation results, the DLMs detected ascites by classifying CT images into ascites images and non-ascites images. The AI algorithms were trained using 6,337 CT images from 160 subjects (80 with ascites and 80 without ascites) and tested using 1,635 CT images from 40 subjects (20 with ascites and 20 without ascites). The performance of the AI algorithms was evaluated for diagnostic accuracy of ascites detection and for segmentation accuracy of ascites areas. Of these DLMs, we proposed the AI algorithm with the best performance. RESULTS The segmentation accuracy was highest for the deep residual U-Net, with a mean intersection over union (mIoU) value of 0.87, followed by the U-Net, bi-directional U-Net, and recurrent residual U-Net (mIoU values 0.80, 0.77, and 0.67, respectively). The detection accuracy was highest for the deep residual U-Net (0.96), followed by the U-Net, bi-directional U-Net, and recurrent residual U-Net (0.90, 0.88, and 0.82, respectively). The deep residual U-Net also achieved high sensitivity (0.96) and high specificity (0.96). CONCLUSIONS We propose the deep residual U-Net-based AI algorithm for automatic detection and quantification of ascites on abdominopelvic CT scans, which provides excellent performance.
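The mIoU scores that rank the four segmentation models are averages of per-image intersection over union. A minimal numpy sketch of the underlying metric on binary masks (illustrative; the convention for two empty masks is an assumption):

```python
import numpy as np

def iou(pred, target):
    """Intersection over union for a pair of binary segmentation masks."""
    pred, target = np.asarray(pred, bool), np.asarray(target, bool)
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return inter / union if union else 1.0  # both masks empty: define IoU = 1

# Two 4x4 squares shifted by one pixel: overlap 9 pixels, union 16 + 16 - 9 = 23
p = np.zeros((8, 8), bool); p[2:6, 2:6] = True   # predicted ascites region
t = np.zeros((8, 8), bool); t[3:7, 3:7] = True   # ground-truth region
score = iou(p, t)
```

mIoU is then the mean of this score over the evaluation set (here, over test CT images).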

