COVID-FACT: A Fully-Automated Capsule Network-Based Framework for Identification of COVID-19 Cases from Chest CT Scans

2021 ◽  
Vol 4 ◽  
Author(s):  
Shahin Heidarian ◽  
Parnian Afshar ◽  
Nastaran Enshaei ◽  
Farnoosh Naderkhani ◽  
Moezedin Javad Rafiee ◽  
...  

The newly discovered Coronavirus Disease 2019 (COVID-19) has been spreading globally, causing hundreds of thousands of deaths around the world since its first emergence in late 2019. The rapid outbreak of this disease has overwhelmed health care infrastructures and raised the need to allocate medical equipment and resources more efficiently. Early diagnosis enables the rapid separation of COVID-19 from non-COVID cases, helping health care authorities optimize resource allocation plans and contain the disease early. In this regard, a growing number of studies are investigating the capability of deep learning for early diagnosis of COVID-19. Computed tomography (CT) scans have shown distinctive features and higher sensitivity compared to other diagnostic tests, in particular the current gold standard, i.e., the Reverse Transcription Polymerase Chain Reaction (RT-PCR) test. Current deep learning-based algorithms are mainly developed on top of Convolutional Neural Networks (CNNs) to identify COVID-19 pneumonia cases. CNNs, however, require extensive data augmentation and large datasets to identify detailed spatial relations between image instances. Furthermore, existing algorithms utilizing CT scans either extend slice-level predictions to patient-level ones using a simple thresholding mechanism or rely on a sophisticated infection segmentation to identify the disease. In this paper, we propose a two-stage fully automated CT-based framework for identification of COVID-19 positive cases, referred to as "COVID-FACT". COVID-FACT utilizes Capsule Networks as its main building blocks and is therefore capable of capturing spatial information. In particular, to make COVID-FACT independent of sophisticated segmentations of the area of infection, slices demonstrating infection are detected in the first stage, and the second stage classifies patients into COVID and non-COVID cases. COVID-FACT detects slices with infection and identifies positive COVID-19 cases using an in-house CT scan dataset containing COVID-19, community-acquired pneumonia, and normal cases. Based on our experiments, COVID-FACT achieves an accuracy of 90.82%, a sensitivity of 94.55%, a specificity of 86.04%, and an Area Under the Curve (AUC) of 0.98, while depending on far less supervision and annotation than its counterparts.
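
As a rough illustration of the routing-by-agreement mechanism that Capsule Networks such as COVID-FACT build on (this is not the authors' implementation; the capsule counts, dimensions, and the `squash`/`dynamic_routing` helpers are illustrative assumptions), a minimal sketch in PyTorch might look like:

```python
# Minimal sketch of dynamic routing between capsules (Sabour et al., 2017),
# the mechanism Capsule Networks build on; dimensions here are illustrative.
import torch

def squash(s, dim=-1, eps=1e-8):
    """Non-linearity that shrinks short vectors toward 0 and long ones toward 1."""
    norm2 = (s ** 2).sum(dim=dim, keepdim=True)
    return (norm2 / (1.0 + norm2)) * s / torch.sqrt(norm2 + eps)

def dynamic_routing(u_hat, num_iters=3):
    """u_hat: predictions from lower capsules, shape (batch, n_in, n_out, d_out)."""
    b = torch.zeros(u_hat.shape[:3], device=u_hat.device)  # routing logits
    for _ in range(num_iters):
        c = torch.softmax(b, dim=2)                        # coupling coefficients
        s = (c.unsqueeze(-1) * u_hat).sum(dim=1)           # weighted sum -> (batch, n_out, d_out)
        v = squash(s)                                      # output capsule vectors
        b = b + (u_hat * v.unsqueeze(1)).sum(dim=-1)       # agreement update
    return v

# toy example: 8 input capsules routing to 2 output capsules of dimension 16
u_hat = torch.randn(4, 8, 2, 16)
print(dynamic_routing(u_hat).shape)  # torch.Size([4, 2, 16])
```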

2020 ◽  
Author(s):  
Xiao-Yong Zhang ◽  
Ziqi Yu ◽  
Xiaoyang Han ◽  
Botao Zhao ◽  
Yaoyao Zhuo ◽  
...  

Abstract Currently, reliable, robust, and ready-to-use CT-based tools for predicting COVID-19 progression are still lacking. To address this problem, we present DABC-Net, a novel deep learning (DL) tool that combines a 2D U-net for intra-slice spatial information processing with a recurrent LSTM network that leverages inter-slice context, for automatic volumetric segmentation of the lung and pneumonia lesions. We evaluate DABC-Net on more than 10,000 radiologist-labeled CT slices from four different cohorts. Compared to state-of-the-art segmentation tools, DABC-Net is much faster, more robust, and able to estimate segmentation uncertainty. Using only the first two CT scans acquired within 3 days of admission, out of 656 longitudinal CT scans, the AUC of our DABC-Net for disease progression prediction reaches 93%. We release our tool as a GUI for patient-specific prediction of pneumonia progression, providing clinicians with additional assistance to triage patients in the early days after diagnosis and to optimize the assignment of limited medical resources, which is of particular importance in the current critical COVID-19 pandemic.
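
A minimal sketch of the intra-slice/inter-slice split the abstract describes, i.e., a 2D encoder per slice followed by a recurrent network over the slice axis, follows. It is not the published DABC-Net (which performs volumetric segmentation rather than the volume-level output used here); the encoder, layer sizes, and names are placeholder assumptions:

```python
# Sketch of a 2D per-slice encoder + LSTM for inter-slice context.
import torch
import torch.nn as nn

class SliceEncoder(nn.Module):
    """Tiny stand-in for a 2D U-net encoder producing one feature vector per slice."""
    def __init__(self, feat_dim=64):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, feat_dim, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
    def forward(self, x):               # x: (batch*slices, 1, H, W)
        return self.conv(x).flatten(1)  # (batch*slices, feat_dim)

class VolumeModel(nn.Module):
    def __init__(self, feat_dim=64, hidden=32):
        super().__init__()
        self.encoder = SliceEncoder(feat_dim)
        self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, 1)
    def forward(self, volume):                    # volume: (batch, slices, 1, H, W)
        b, s = volume.shape[:2]
        f = self.encoder(volume.flatten(0, 1)).view(b, s, -1)
        ctx, _ = self.lstm(f)                     # inter-slice context
        return self.head(ctx.mean(dim=1))         # one logit per volume

logits = VolumeModel()(torch.randn(2, 12, 1, 64, 64))
print(logits.shape)  # torch.Size([2, 1])
```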


PLoS ONE ◽  
2020 ◽  
Vol 15 (11) ◽  
pp. e0242759
Author(s):  
Se Bum Jang ◽  
Suk Hee Lee ◽  
Dong Eun Lee ◽  
Sin-Youl Park ◽  
Jong Kun Kim ◽  
...  

The recent medical applications of deep-learning (DL) algorithms have demonstrated their clinical efficacy in improving the speed and accuracy of image interpretation. If a DL algorithm achieves a performance equivalent to that of physicians in chest radiography (CR) diagnosis of Coronavirus disease 2019 (COVID-19) pneumonia, automatic interpretation of CR with DL algorithms can significantly reduce the burden on clinicians and radiologists during sudden surges of suspected COVID-19 patients. The aim of this study was to evaluate the efficacy of a DL algorithm for detecting COVID-19 pneumonia on CR compared with formal radiology reports. This retrospective study included adult patients diagnosed as COVID-19 positive by reverse transcription polymerase chain reaction among all patients admitted to five emergency departments and one community treatment center in Korea from February 18, 2020 to May 1, 2020. The CR images were evaluated with a publicly available DL algorithm. For the reference standard, CR images without accompanying chest computed tomography (CT) scans were classified as positive for COVID-19 pneumonia when the radiologist identified ground-glass opacity, consolidation, or other infiltration on retrospective review. Patients with evidence of pneumonia on chest CT scans were also classified as COVID-19 pneumonia positive. The overall sensitivity and specificity of the DL algorithm for detecting COVID-19 pneumonia on CR were 95.6% and 88.7%, respectively. The area under the curve value of the DL algorithm for the detection of COVID-19 pneumonia was 0.921. The DL algorithm demonstrated a satisfactory diagnostic performance comparable with that of formal radiology reports in the CR-based diagnosis of pneumonia in COVID-19 patients. The DL algorithm may offer fast and reliable examinations that can facilitate patient screening and isolation decisions, reducing the medical staff workload during COVID-19 pandemic situations.
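
For readers who want to reproduce this style of evaluation, a short sketch of computing sensitivity, specificity, and AUC from per-image scores with scikit-learn follows; the labels, scores, and the 0.5 operating threshold are made-up placeholders, not the study's data:

```python
# Sketch of how the reported metrics can be computed from per-image scores.
import numpy as np
from sklearn.metrics import confusion_matrix, roc_auc_score

y_true = np.array([1, 1, 1, 0, 0, 1, 0, 0, 1, 0])    # 1 = COVID-19 pneumonia on CR
y_score = np.array([0.91, 0.80, 0.65, 0.30, 0.45, 0.72, 0.15, 0.55, 0.88, 0.20])
y_pred = (y_score >= 0.5).astype(int)                 # operating threshold

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
print(f"sensitivity = {tp / (tp + fn):.3f}")          # true-positive rate
print(f"specificity = {tn / (tn + fp):.3f}")          # true-negative rate
print(f"AUC = {roc_auc_score(y_true, y_score):.3f}")
```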


Author(s):  
Rohit Ghosh ◽  
Omar Smadi

Pavement distresses lead to pavement deterioration and failure. Accurate identification and classification of distresses helps agencies evaluate the condition of their pavement infrastructure and assists in decision-making on pavement maintenance and rehabilitation. The state of the art is automated pavement distress detection using vision-based methods. This study implements two deep learning techniques, Faster Region-based Convolutional Neural Networks (R-CNN) and You Only Look Once (YOLO) v3, for automated distress detection and classification of high-resolution (1,800 × 1,200) three-dimensional (3D) asphalt and concrete pavement images. The training and validation dataset contained 625 images with distresses manually annotated with bounding boxes representing the location and types of distresses, and 798 no-distress images. Data augmentation was performed to enable a more balanced representation of class labels and prevent overfitting. YOLO and Faster R-CNN achieved 89.8% and 89.6% accuracy, respectively. Precision-recall curves were used to determine the average precision (AP), which is the area under the precision-recall curve. The AP values for YOLO and Faster R-CNN were 90.2% and 89.2%, respectively, indicating strong performance for both models. Receiver operating characteristic (ROC) curves were also developed to determine the area under the curve; the resulting values of 0.96 for YOLO and 0.95 for Faster R-CNN also indicate robust performance. Finally, the models were evaluated by developing confusion matrices comparing our proposed model with manual quality assurance and quality control (QA/QC) results performed on automated pavement data. A very high level of agreement with manual QA/QC, namely 97.6% for YOLO and 96.9% for Faster R-CNN, suggests that the proposed methodology has potential as a replacement for manual QA/QC.
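
As a hedged sketch of this detection setup (generic torchvision usage, not the authors' trained model; the class count and input tensor are illustrative assumptions), a Faster R-CNN can be retargeted and run on a pavement image like this:

```python
# Retarget a torchvision Faster R-CNN to a custom set of distress classes.
import torch
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

num_classes = 5  # e.g. background + 4 hypothetical distress types
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)
model.eval()

image = torch.rand(3, 1200, 1800)            # stand-in for a 1,800 x 1,200 pavement image
with torch.no_grad():
    pred = model([image])[0]                 # dict with boxes, labels, scores
print(pred["boxes"].shape, pred["scores"][:5])
```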


2021 ◽  
Vol 11 (10) ◽  
pp. 1008
Author(s):  
Muhammad Owais ◽  
Na Rae Baek ◽  
Kang Ryoung Park

Background: Early and accurate detection of COVID-19-related findings (such as well-aerated regions, ground-glass opacity, crazy paving and linear opacities, and consolidation in lung computed tomography (CT) scans) is crucial for preventive measures and treatment. However, the visual assessment of lung CT scans is a time-consuming process, particularly in the case of trivial lesions, and requires medical specialists. Method: A recent breakthrough in deep learning methods has boosted the diagnostic capability of computer-aided diagnosis (CAD) systems and further aided health professionals in making effective diagnostic decisions. In this study, we propose a domain-adaptive CAD framework, namely the dilated aggregation-based lightweight network (DAL-Net), for effective recognition of trivial COVID-19 lesions in CT scans. Our network design achieves a fast execution speed (inference time is 43 ms on a single image) with optimal memory consumption (almost 9 MB). To evaluate the performance of the proposed and state-of-the-art models, we considered two publicly accessible datasets, namely COVID-19-CT-Seg (comprising a total of 3,520 images of 20 different patients) and MosMed (including a total of 2,049 images of 50 different patients). Results: Our method exhibits an average area under the curve (AUC) of up to 98.84%, 98.47%, and 95.51% for COVID-19-CT-Seg, MosMed, and the cross-dataset setting, respectively, and outperforms various state-of-the-art methods. Conclusions: These results demonstrate that deep learning-based models are an effective tool for building a robust CAD solution based on CT data in response to the present COVID-19 disaster.
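
The dilated-aggregation idea named by DAL-Net can be illustrated with a small PyTorch block that fuses parallel dilated convolutions to enlarge the receptive field cheaply; this is only a sketch of the general technique, not the published network, and the channel counts and dilation rates are assumptions:

```python
# Parallel dilated convolutions fused by a 1x1 convolution: a cheap way to
# widen the receptive field without deepening the network.
import torch
import torch.nn as nn

class DilatedAggregation(nn.Module):
    def __init__(self, ch=32, rates=(1, 2, 4)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Conv2d(ch, ch, 3, padding=r, dilation=r) for r in rates
        )
        self.fuse = nn.Conv2d(ch * len(rates), ch, 1)  # 1x1 fusion
    def forward(self, x):
        return self.fuse(torch.cat([b(x) for b in self.branches], dim=1))

out = DilatedAggregation()(torch.randn(1, 32, 128, 128))
print(out.shape)  # torch.Size([1, 32, 128, 128]) -- spatial size preserved
```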


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Munetoshi Hinata ◽  
Tetsuo Ushiku

Abstract Immune checkpoint inhibitor (ICI) therapy is widely used but effective only in a subset of gastric cancers. Epstein–Barr virus (EBV)-positive and microsatellite instability (MSI)/mismatch repair deficient (dMMR) tumors have been reported to be highly responsive to ICIs. However, detecting these subtypes requires costly techniques, such as immunohistochemistry and molecular testing. In the present study, we constructed a histology-based deep learning model aimed at screening this immunotherapy-sensitive subgroup efficiently. We processed whole slide images of 408 cases of gastric adenocarcinoma, including 108 EBV, 58 MSI/dMMR, and 242 other subtypes. Many images generated by data augmentation of the learning set were used to train convolutional neural networks to establish an automatic detection platform for the EBV and MSI/dMMR subtypes, and the test sets of images were used to verify the learning outcome. Our model detected the subgroup (EBV + MSI/dMMR tumors) with high accuracy in test cases, with an area under the curve of 0.947 (0.901–0.992). This result was slightly better than when EBV and MSI/dMMR tumors were detected separately. In an external validation cohort of 244 gastric cancers from The Cancer Genome Atlas database, our model showed a favorable result for detecting the "EBV + MSI/dMMR" subgroup, with an AUC of 0.870 (0.809–0.931). In addition, a visualization of the trained neural network highlighted intraepithelial lymphocytosis as the basis for prediction, suggesting that this feature is a discriminative characteristic shared by EBV and MSI/dMMR tumors. Histology-based deep learning models are expected to be used for detecting EBV and MSI/dMMR gastric cancers as economical and less time-consuming alternatives, which may help to effectively stratify patients who respond to ICIs.
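
The abstract reports AUCs with confidence intervals, e.g., 0.947 (0.901–0.992). One common way to obtain such intervals is case-level bootstrap resampling; the paper does not state its CI method, so the sketch below is only an assumption, with placeholder labels and scores:

```python
# Bootstrap 95% confidence interval for an AUC from case-level scores.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=200)            # placeholder labels
y_score = y_true * 0.5 + rng.random(200) * 0.7   # placeholder scores

aucs = []
for _ in range(2000):
    idx = rng.integers(0, len(y_true), len(y_true))  # resample with replacement
    if len(np.unique(y_true[idx])) < 2:
        continue                                      # AUC needs both classes present
    aucs.append(roc_auc_score(y_true[idx], y_score[idx]))

lo, hi = np.percentile(aucs, [2.5, 97.5])
print(f"AUC = {roc_auc_score(y_true, y_score):.3f} ({lo:.3f}-{hi:.3f})")
```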


Author(s):  
Tomoki Uemura ◽  
Janne J. Näppi ◽  
Yasuji Ryu ◽  
Chinatsu Watari ◽  
Tohru Kamiya ◽  
...  

Abstract Purpose Deep learning can be used to improve the performance of computer-aided detection (CADe) in various medical imaging tasks. However, in computed tomographic (CT) colonography, the performance is limited by the relatively small size and limited variety of the available training datasets. Our purpose in this study was to develop and evaluate a flow-based generative model for performing 3D data augmentation of colorectal polyps for effective training of deep learning in CADe for CT colonography. Methods We developed a 3D convolutional neural network (3D CNN) based on a flow-based generative model (3D Glow) for generating synthetic volumes of interest (VOIs) that have characteristics similar to those of the VOIs in its training dataset. The 3D Glow was trained to generate synthetic VOIs of polyps by use of our clinical CT colonography case collection. The evaluation was performed by use of a human observer study with three observers and by use of a CADe-based polyp classification study with a 3D DenseNet. Results The area-under-the-curve values of the receiver operating characteristic analysis of the three observers were not statistically significantly different in distinguishing between real polyps and synthetic polyps. When trained with data augmentation by 3D Glow, the 3D DenseNet yielded a statistically significantly higher polyp classification performance than when it was trained with alternative augmentation methods. Conclusion The 3D Glow-generated synthetic polyps are visually indistinguishable from real colorectal polyps. Their application to data augmentation can substantially improve the performance of 3D CNNs in CADe for CT colonography. Thus, 3D Glow is a promising method for improving the performance of deep learning in CADe for CT colonography.
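
To give a concrete sense of the mechanism behind flow-based generators such as Glow, below is a minimal affine coupling layer, the invertible building block these models stack; it is far from the 3D Glow used in the study, and all sizes and the conditioner network are illustrative assumptions:

```python
# Affine coupling: transform half the variables conditioned on the other half,
# so the mapping is exactly invertible with a cheap log-determinant.
import torch
import torch.nn as nn

class AffineCoupling(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.half = dim // 2
        self.net = nn.Sequential(                 # predicts scale and shift
            nn.Linear(self.half, 64), nn.ReLU(),
            nn.Linear(64, 2 * (dim - self.half)),
        )
    def forward(self, x):
        x1, x2 = x[:, :self.half], x[:, self.half:]
        log_s, t = self.net(x1).chunk(2, dim=1)
        log_s = torch.tanh(log_s)                 # keep the scale well-behaved
        y2 = x2 * torch.exp(log_s) + t            # invertible transform of x2
        log_det = log_s.sum(dim=1)                # exact log-det of the Jacobian
        return torch.cat([x1, y2], dim=1), log_det
    def inverse(self, y):
        y1, y2 = y[:, :self.half], y[:, self.half:]
        log_s, t = self.net(y1).chunk(2, dim=1)
        log_s = torch.tanh(log_s)
        x2 = (y2 - t) * torch.exp(-log_s)
        return torch.cat([y1, x2], dim=1)

layer = AffineCoupling(8)
x = torch.randn(4, 8)
y, log_det = layer(x)
print(torch.allclose(layer.inverse(y), x, atol=1e-5))  # True: exactly invertible
```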


Author(s):  
Bolun Lin ◽  
Mosha Cheng ◽  
Shuze Wang ◽  
Fulong Li ◽  
Qing Zhou

Objectives: This study aimed to develop models that can automatically detect anterior disc displacement (ADD) of the temporomandibular joint (TMJ) on magnetic resonance images (MRI) before orthodontic treatment, to reduce the risk of serious complications developing after treatment. Methods: We used 9,009 sagittal MRI scans of the TMJ as input and constructed three sets of deep learning models to detect ADD automatically. The deep learning models were developed using a convolutional neural network (CNN) based on the ResNet architecture and the "ImageNet" database. Five-fold cross-validation, oversampling, and data augmentation techniques were applied to reduce the risk of overfitting. The accuracy and area under the curve (AUC) of the three models were compared. Results: The performance of the maximum open mouth position model was excellent, with an accuracy and AUC of 0.970 (±0.007) and 0.990 (±0.005), respectively. For the closed mouth position models, the accuracy and AUC of diagnostic criterion one were 0.863 (±0.008) and 0.922 (±0.009), respectively, significantly higher than those of diagnostic criterion two, with an accuracy of 0.839 (±0.013) (p = 0.009) and an AUC of 0.885 (±0.018) (p = 0.003). The classification activation heat map also improved our understanding of the models and visually displayed the areas that play a key role in the model recognition process. Conclusion: Our CNN models achieved high accuracy and AUC in detecting ADD and can therefore potentially be used by clinicians to assess ADD before orthodontic treatment and hence improve treatment outcomes.
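
A hedged sketch of the general recipe described here, a ResNet initialized from ImageNet weights with minority-class oversampling, follows; it is not the authors' code, and the dataset, labels, and hyperparameters are placeholders:

```python
# ImageNet-pretrained ResNet retargeted to a binary ADD label, trained with
# oversampling of the minority class via a weighted sampler.
import torch
import torch.nn as nn
import torchvision
from torch.utils.data import DataLoader, TensorDataset, WeightedRandomSampler

model = torchvision.models.resnet18(weights="IMAGENET1K_V1")
model.fc = nn.Linear(model.fc.in_features, 2)        # ADD vs. normal

# placeholder data: 100 MRI slices replicated to 3 channels, imbalanced labels
images = torch.randn(100, 3, 224, 224)
labels = torch.cat([torch.zeros(80), torch.ones(20)]).long()

# oversample the minority class so each batch is roughly balanced
class_count = torch.bincount(labels).float()
weights = (1.0 / class_count)[labels]
sampler = WeightedRandomSampler(weights, num_samples=len(labels), replacement=True)
loader = DataLoader(TensorDataset(images, labels), batch_size=16, sampler=sampler)

opt = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()
for x, y in loader:                                  # one illustrative epoch
    opt.zero_grad()
    loss_fn(model(x), y).backward()
    opt.step()
```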


Author(s):  
Felix Erne ◽  
Daniel Dehncke ◽  
Steven C. Herath ◽  
Fabian Springer ◽  
Nico Pfeifer ◽  
...  

Abstract Background Fracture detection by artificial intelligence, and especially by Deep Convolutional Neural Networks (DCNN), is a topic of growing interest in current orthopaedic and radiological research. As training a DCNN usually requires a large amount of data, mostly frequent fracture types and conventional X-rays are used. Less common fractures, such as acetabular fractures (AF), are therefore underrepresented in the literature. The aim of this pilot study was to establish a DCNN for the detection of AF using computed tomography (CT) scans. Methods Patients with an acetabular fracture were identified from the monocentric consecutive pelvic injury registry at the BG Trauma Center XXX from 01/2003 to 12/2019. All patients with unilateral AF and CT scans available in DICOM format were included for further processing. All datasets were automatically anonymised and digitally post-processed. The relevant regions of interest were extracted, and data augmentation (DA) was implemented to artificially increase the number of training samples. A DCNN based on Med3D was used for autonomous fracture detection, using global average pooling (GAP) to reduce overfitting. Results From a total of 2,340 patients with a pelvic fracture, 654 patients suffered from an AF. After screening and post-processing of the datasets, a total of 159 datasets were enrolled for training of the algorithm. A random assignment into training datasets (80%) and test datasets (20%) was performed. The techniques of bone area extraction, DA, and GAP increased the accuracy of fracture detection from 58.8% (native DCNN) up to 82.8% despite the low number of datasets. Conclusion The accuracy of fracture detection of our trained DCNN is comparable to published values despite the low number of training datasets. The techniques of bone extraction, DA, and GAP are useful for increasing the detection rates of rare fractures by a DCNN. Based on the DCNN used in combination with the techniques described in this pilot study, the possibility of automatic fracture classification of AF is under investigation in a multicentre study.
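
Global average pooling reduces overfitting because it collapses each feature map to a single value, leaving no large volume-sized fully connected layer to memorize a small training set. A toy 3D CNN head illustrating this (the backbone is a stand-in, not Med3D) might look like:

```python
# GAP head for a 3D CNN: the classifier sees 32 numbers per volume instead of
# a flattened feature volume, drastically shrinking the parameter count.
import torch
import torch.nn as nn

class Tiny3DNet(nn.Module):
    def __init__(self, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
            nn.Conv3d(16, 32, 3, padding=1), nn.ReLU(),
        )
        self.gap = nn.AdaptiveAvgPool3d(1)     # (B, 32, D, H, W) -> (B, 32, 1, 1, 1)
        self.fc = nn.Linear(32, n_classes)     # only 32 weights per class
    def forward(self, x):
        return self.fc(self.gap(self.features(x)).flatten(1))

print(Tiny3DNet()(torch.randn(2, 1, 32, 64, 64)).shape)  # torch.Size([2, 2])
```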


2020 ◽  
Author(s):  
Xin He ◽  
Shihao Wang ◽  
Shaohuai Shi ◽  
Xiaowen Chu ◽  
Jiangping Tang ◽  
...  

Abstract The COVID-19 pandemic has spread all over the world for months. As its transmissibility and high pathogenicity seriously threaten people's lives, accurate and fast detection of COVID-19 infection is crucial. Although many recent studies have shown that deep learning based solutions can help detect COVID-19 from chest CT scans, a consistent and systematic comparison and evaluation of these techniques is still lacking. In this paper, we first build a clean and segmented CT dataset called Clean-CC-CCII by fixing the errors and removing some noise in the large CT scan dataset CC-CCII, which has three classes: novel coronavirus pneumonia (NCP), common pneumonia (CP), and normal controls (Normal). After cleaning, our dataset consists of a total of 340,190 slices from 3,993 scans of 2,698 patients. We then benchmark and compare the performance of a series of state-of-the-art (SOTA) 3D and 2D convolutional neural networks (CNNs). The results show that 3D CNNs outperform 2D CNNs in general. With extensive hyperparameter tuning, we find that the 3D CNN model DenseNet3D121 achieves the highest accuracy of 88.63% (F1-score 88.14%, AUC 0.940), and another 3D CNN model, ResNet3D34, achieves the best AUC of 0.959 (accuracy 87.83%, F1-score 86.04%). We further demonstrate that the mixup data augmentation technique can substantially improve model performance. Finally, we design an automated deep learning methodology that generates a lightweight deep learning model, MNas3DNet41, which achieves an accuracy of 87.14%, an F1-score of 87.25%, and an AUC of 0.957, on par with the best models built by AI experts. Automated deep learning design is a promising methodology that can help health-care professionals develop effective deep learning models using their private datasets. Our Clean-CC-CCII dataset and source code are available at: https://github.com/arthursdays/HKBU_HPML_COVID-19.
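
For reference, mixup (the augmentation the authors found helpful) trains on convex combinations of example pairs and their labels. A minimal sketch follows; the alpha value and batch contents are illustrative, not the paper's settings:

```python
# Minimal mixup (Zhang et al., 2018): blend each example with a randomly
# permuted partner and combine the two losses with the same weight.
import torch

def mixup_batch(x, y, alpha=0.4):
    """Returns mixed inputs and the pair of targets with the mixing weight."""
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    perm = torch.randperm(x.size(0))
    x_mixed = lam * x + (1.0 - lam) * x[perm]
    return x_mixed, y, y[perm], lam

# training step then uses: loss = lam * ce(logits, y_a) + (1 - lam) * ce(logits, y_b)
x = torch.randn(8, 1, 32, 64, 64)              # a batch of CT sub-volumes
y = torch.randint(0, 3, (8,))                  # NCP / CP / Normal labels
x_mixed, y_a, y_b, lam = mixup_batch(x, y)
print(x_mixed.shape, lam)
```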


2020 ◽  
Vol 10 (3) ◽  
pp. 1154 ◽  
Author(s):  
Jean Léger ◽  
Eliott Brion ◽  
Paul Desbordes ◽  
Christophe De Vleeschouwer ◽  
John A. Lee ◽  
...  

For prostate cancer patients, large organ deformations occurring between radiotherapy treatment sessions create uncertainty about the doses delivered to the tumor and surrounding healthy organs. Segmenting those regions on cone beam CT (CBCT) scans acquired on treatment day would reduce such uncertainties. In this work, a 3D U-net deep-learning architecture was trained to segment bladder, rectum, and prostate on CBCT scans. Due to the scarcity of contoured CBCT scans, the training set was augmented with CT scans already contoured in the current clinical workflow. Our network was then tested on 63 CBCT scans. The Dice similarity coefficient (DSC) increased significantly with the number of CBCT and CT scans in the training set, reaching 0.874 ± 0.096, 0.814 ± 0.055, and 0.758 ± 0.101 for bladder, rectum, and prostate, respectively. This was about 10% better than conventional approaches based on deformable image registration between planning CT and treatment CBCT scans, except for the prostate. Interestingly, adding 74 CT scans to the CBCT training set made it possible to maintain high DSCs while halving the number of CBCT scans. Hence, our work showed that although CBCT scans include artifacts, cross-domain augmentation of the training set was effective and could rely on the large datasets available for planning CT scans.
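
For completeness, the Dice similarity coefficient used to score the segmentations is straightforward to compute from binary masks; the sketch below uses placeholder volumes, not the study's contours:

```python
# Dice similarity coefficient: DSC = 2|A ∩ B| / (|A| + |B|) for binary masks.
import torch

def dice(pred, target, eps=1e-6):
    pred, target = pred.float().flatten(), target.float().flatten()
    inter = (pred * target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

pred = torch.randint(0, 2, (64, 128, 128))     # e.g. a predicted bladder mask
target = torch.randint(0, 2, (64, 128, 128))   # the manual contour
print(f"DSC = {dice(pred, target):.3f}")
```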

