Chest Radiographic and Chest CT Images of Aspiration Pneumonia: Are the Image Features of Aspiration Pneumonia Different from Those of Non-aspiration CAP or HAP?

Author(s): Kosaku Komiya, Jun-Ichi Kadota

2020
Author(s): Jinseok Lee

BACKGROUND
The coronavirus disease (COVID-19) has spread explosively worldwide since the beginning of 2020. According to a multinational consensus statement from the Fleischner Society, computed tomography (CT) can serve as a relevant screening tool owing to its high sensitivity for detecting early pneumonic changes. However, physicians are extremely busy fighting COVID-19 in this worldwide crisis. It is therefore crucial to accelerate the development of artificial intelligence (AI) diagnostic tools to support physicians.

OBJECTIVE
We aimed to rapidly develop an AI technique for diagnosing COVID-19 pneumonia on CT and differentiating it from non-COVID pneumonia and non-pneumonia diseases.

METHODS
A simple 2D deep learning framework, named the fast-track COVID-19 classification network (FCONet), was developed to diagnose COVID-19 pneumonia from a single chest CT image. FCONet was built by transfer learning, using one of four state-of-the-art pre-trained deep learning models (VGG16, ResNet50, InceptionV3, or Xception) as a backbone. For training and testing, we collected 3,993 chest CT images of patients with COVID-19 pneumonia, other pneumonia, and non-pneumonia diseases from Wonkwang University Hospital, Chonnam National University Hospital, and the Italian Society of Medical and Interventional Radiology public database. These CT images were split into training and testing sets at a ratio of 8:2. On the test set, the diagnostic performance for COVID-19 pneumonia was compared among the four pre-trained FCONet models. In addition, we evaluated the FCONet models on an external test set of low-quality chest CT images of COVID-19 pneumonia embedded in recently published papers.

RESULTS
Of the four pre-trained FCONet models, ResNet50 showed excellent diagnostic performance (sensitivity 99.58%, specificity 100%, accuracy 99.87%) and outperformed the other three pre-trained models on the testing set. On the external test set of low-quality CT images, the detection accuracy of the ResNet50 model was again the highest (96.97%), followed by Xception, InceptionV3, and VGG16 (90.71%, 89.38%, and 87.12%, respectively).

CONCLUSIONS
FCONet, a simple 2D deep learning framework operating on a single chest CT image, provides excellent diagnostic performance in detecting COVID-19 pneumonia. Based on our testing set, the ResNet50-based FCONet might be the best model, as it outperformed the FCONet variants based on VGG16, Xception, and InceptionV3.
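The sensitivity, specificity, and accuracy figures reported above follow the standard confusion-matrix definitions. As an illustration (not the authors' code), a minimal sketch of how such metrics are computed from raw counts:

```python
def diagnostic_metrics(tp, fn, tn, fp):
    """Sensitivity, specificity, and accuracy from confusion-matrix counts.

    tp/fn: COVID-19 pneumonia cases correctly / incorrectly classified;
    tn/fp: non-COVID cases correctly / incorrectly classified.
    """
    sensitivity = tp / (tp + fn)          # true positive rate
    specificity = tn / (tn + fp)          # true negative rate
    accuracy = (tp + tn) / (tp + fn + tn + fp)
    return sensitivity, specificity, accuracy
```

For example, 95 detected out of 100 positive cases with no false alarms on 100 negative cases gives a sensitivity of 0.95, a specificity of 1.0, and an accuracy of 0.975.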


2021
Vol 11 (1)
Author(s): Lorena Escudero Sanchez, Leonardo Rundo, Andrew B. Gill, Matthew Hoare, Eva Mendes Serrao, ...

Abstract
Radiomic image features are becoming a promising non-invasive method for obtaining quantitative measurements for tumour classification and therapy response assessment in oncological research. However, despite their increasingly established application, there is a need for standardisation criteria and further validation of feature robustness with respect to image acquisition parameters. In this paper, the robustness of radiomic features extracted from computed tomography (CT) images is evaluated for liver tumour and muscle, comparing feature values in images reconstructed with two different slice thicknesses, 2.0 mm and 5.0 mm. Novel approaches are presented to address the intrinsic dependencies of texture radiomic features, choosing the optimal number of grey levels and correcting for the dependency on volume. With the optimal values and corrections, feature values are compared across thicknesses to identify reproducible features. Normalisation using muscle regions is also described as an alternative approach. With either method, a large fraction of features (75–90%) was found to be highly robust (< 25% difference). The analyses were performed on a homogeneous CT dataset of 43 patients with hepatocellular carcinoma, and consistent results were obtained for both tumour and muscle tissue. Finally, recommended guidelines are included for radiomic studies using variable slice thickness.
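The "< 25% difference" robustness criterion can be sketched as a simple per-feature comparison across the two reconstructions. The abstract does not give the exact normalisation used for the percentage difference; the sketch below assumes the common form normalised by the mean of the two measurements:

```python
import numpy as np

def robust_features(values_2mm, values_5mm, threshold=0.25):
    """Flag radiomic features whose relative difference between the
    2.0 mm and 5.0 mm slice-thickness reconstructions is below `threshold`.

    The mean-normalised relative difference is an illustrative assumption,
    not necessarily the paper's exact definition.
    """
    v2 = np.asarray(values_2mm, dtype=float)
    v5 = np.asarray(values_5mm, dtype=float)
    # relative difference with respect to the mean of the two measurements
    rel_diff = np.abs(v2 - v5) / ((np.abs(v2) + np.abs(v5)) / 2)
    return rel_diff < threshold
```

A feature measuring 100 at 2.0 mm and 110 at 5.0 mm (≈9.5% difference) would count as robust; one measuring 10 versus 20 (≈67%) would not.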


2021
Vol 2021
pp. 1-9
Author(s): Yan Cui, Yang Sun, Meng Xia, Dan Yao, Jun Lei

This research aimed to study CT image features based on the filtered backprojection reconstruction algorithm and to evaluate the effect of ropivacaine combined with dexamethasone and dexmedetomidine in assisted thoracoscopic lobectomy, providing a reference for clinical diagnosis. A total of 110 patients undergoing thoracoscopic lobectomy were selected as the study subjects. Anesthesia induction and nerve block were performed with ropivacaine combined with dexamethasone and dexmedetomidine before surgery, and a chest CT scan was performed. A backprojection image reconstruction algorithm was constructed and applied to the patients' CT images. The results showed that with an overlapping step size of 16 and a block size of 32 × 32, the running time of the algorithm was shortest. The resolution and sharpness of the reconstructed images were better than those of the Fourier transform analytical method and the iterative reconstruction algorithm. The detection rates for lung nodules smaller than 6 mm and for nodules of 6–30 mm (92.35% and 95.44%) were significantly higher than those of the Fourier transform analytical method (90.98% and 87.53%) and the iterative reconstruction algorithm (88.32% and 90.87%) (P < 0.05). After anesthesia induction and lobectomy with ropivacaine combined with dexamethasone and dexmedetomidine, the visual analogue scale (VAS) score decreased with postoperative time, falling to a low level (1.76 ± 0.54) after five days. In summary, ropivacaine combined with dexamethasone and dexmedetomidine provided better sedation and analgesia in patients undergoing thoracoscopic lobectomy, and CT images reconstructed with the backprojection algorithm achieved high recognition accuracy for lung lesions.
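The "overlapping step size of 16 and block size of 32 × 32" describes a block-wise processing scheme. As a sketch of that partitioning only (not the authors' reconstruction code), overlapping patches can be generated like this:

```python
import numpy as np

def overlapping_blocks(image, block=32, step=16):
    """Partition a 2-D image into overlapping block x block patches with the
    given step (stride), as in block-wise reconstruction processing.
    Trailing regions that do not fit a full block are skipped here for
    simplicity; a real implementation would pad or handle edges."""
    h, w = image.shape
    patches = []
    for r in range(0, h - block + 1, step):
        for c in range(0, w - block + 1, step):
            patches.append(image[r:r + block, c:c + block])
    return patches
```

With a 32 × 32 block and step 16, adjacent patches overlap by half, so each interior pixel is covered by several blocks, which is what makes the step/block trade-off affect running time.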


2018
Vol 8 (3)
pp. 485-493
Author(s): Shouren Lan, Xin Liu, Lisheng Wang, Chaoyi Cui

2021
Vol 11
Author(s): He Sui, Ruhang Ma, Lin Liu, Yaozong Gao, Wenhai Zhang, ...

Objective
To develop a deep learning-based model using esophageal thickness to detect esophageal cancer from unenhanced chest CT images.

Methods
We retrospectively identified 141 patients with esophageal cancer and 273 patients negative for esophageal cancer (at the time of imaging) for model training. Unenhanced chest CT images were collected and used to build a convolutional neural network (CNN) model for diagnosing esophageal cancer. The CNN is a VB-Net segmentation network that segments the esophagus, automatically quantifies the thickness of the esophageal wall, and detects the positions of esophageal lesions. To validate this model, a second dataset of 52 false-negative cases and 48 normal cases was collected. The average performance of three radiologists, and of the same radiologists aided by the model, was compared.

Results
The sensitivity and specificity of the esophageal cancer detection model were 88.8% and 90.9%, respectively, on the validation dataset. On the 52 missed esophageal cancer cases and the 48 normal cases, the sensitivity, specificity, and accuracy of the deep learning model were 69%, 61%, and 65%, respectively. The independent results of the three radiologists were a sensitivity of 25%, 31%, and 27%; a specificity of 78%, 75%, and 75%; and an accuracy of 53%, 54%, and 53%. With the aid of the model, the radiologists' results improved to a sensitivity of 77%, 81%, and 75%; a specificity of 75%, 74%, and 74%; and an accuracy of 76%, 77%, and 75%, respectively.

Conclusions
The deep learning-based model can effectively detect esophageal cancer on unenhanced chest CT scans, improving the incidental detection of esophageal cancer.
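Once the segmentation network has quantified per-slice wall thickness, lesion localisation reduces to flagging abnormally thick slices. The paper does not state its decision rule; the sketch below assumes a simple illustrative thickness threshold:

```python
def flag_lesion_slices(wall_thickness_mm, threshold_mm=5.0):
    """Given per-slice esophageal wall thickness (e.g., quantified from a
    segmentation mask), return the indices of slices whose thickness
    exceeds the threshold. The 5.0 mm cut-off is an illustrative
    assumption, not a value taken from the paper."""
    return [i for i, t in enumerate(wall_thickness_mm) if t > threshold_mm]
```

Contiguous runs of flagged slices would then indicate the candidate lesion position along the esophagus.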

