End-to-end COVID-19 screening with 3D deep learning on chest computed tomography

Author(s):  
Kun Yang ◽  
Xinfeng Liu ◽  
Yingli Yang ◽  
Xiangjun Liao ◽  
Rongpin Wang ◽  
...  

Abstract The outbreak of an acute respiratory syndrome (novel coronavirus pneumonia, NCP) caused by the SARS-CoV-2 virus has progressed to a pandemic and become a major threat to public health worldwide[i],[ii]. COVID-19 screening with computed tomography (CT) enables quick diagnosis and identification of high-risk NCP patients[iii]. Automated screening using CT volumes is a challenging task owing to inter-grader variability and high false-positive and false-negative rates. We propose a three-dimensional (3D) deep learning convolutional neural network (CNN) that uses a patient’s CT volume to predict the risk of COVID-19, trained end-to-end directly from CT volumes, using only images and disease labels as inputs. Our model achieves state-of-the-art performance (95.78% overall accuracy, 99.4% area under the curve) on a dataset of 1,684 COVID-19 patients, nearly twice as large as previous datasets[3], and performs similarly on an independent clinical validation set of 121 cases. We tested its performance against six radiologists on CT volumes of clinically confirmed patients; our model outperformed all six radiologists, with absolute reductions of 7% in false positives and 35.9% in false negatives, demonstrating that artificial intelligence (AI) can optimize the COVID-19 screening process via computer assistance and automation with a level of competence comparable to radiologists. While the vast majority of patients remain unscreened, we show the potential for AI to increase the accuracy and consistency of COVID-19 screening with CT.
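As a back-of-the-envelope illustration (not the authors' code), the area under the ROC curve reported above can be computed from per-patient risk scores using the Mann-Whitney formulation: the AUC equals the probability that a randomly chosen positive case scores higher than a randomly chosen negative one. The scores and labels below are invented for illustration.

```python
def roc_auc(scores, labels):
    """Area under the ROC curve via the Mann-Whitney U statistic.

    Equivalent to the probability that a randomly chosen positive case
    receives a higher score than a randomly chosen negative case
    (ties count as half a win)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy example: model risk scores for 4 positive and 4 negative cases.
scores = [0.95, 0.90, 0.80, 0.40, 0.55, 0.30, 0.20, 0.10]
labels = [1,    1,    1,    1,    0,    0,    0,    0]
print(roc_auc(scores, labels))  # 0.9375
```

The rank-based form avoids choosing explicit thresholds and matches what trapezoidal integration of the ROC curve would give.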

2019 ◽  
Vol 25 (6) ◽  
pp. 954-961 ◽  
Author(s):  
Diego Ardila ◽  
Atilla P. Kiraly ◽  
Sujeeth Bharadwaj ◽  
Bokyung Choi ◽  
Joshua J. Reicher ◽  
...  

2019 ◽  
Vol 25 (8) ◽  
pp. 1319-1319 ◽  
Author(s):  
Diego Ardila ◽  
Atilla P. Kiraly ◽  
Sujeeth Bharadwaj ◽  
Bokyung Choi ◽  
Joshua J. Reicher ◽  
...  

2021 ◽  
Vol 5 (1) ◽  
Author(s):  
Kwang-Hyun Uhm ◽  
Seung-Won Jung ◽  
Moon Hyung Choi ◽  
Hong-Kyu Shin ◽  
Jae-Ik Yoo ◽  
...  

Abstract In 2020, an estimated 73,750 kidney cancer cases were diagnosed and 14,830 people died from the disease in the United States. Preoperative multi-phase abdominal computed tomography (CT) is often used to detect lesions and classify histologic subtypes of renal tumors in order to avoid unnecessary biopsy or surgery. However, inter-observer variability arises from subtle differences in the imaging features of tumor subtypes, which makes treatment decisions challenging. While deep learning has recently been applied to the automated diagnosis of renal tumors, classification across a wide range of subtype classes has not yet been sufficiently studied. In this paper, we propose an end-to-end deep learning model for the differential diagnosis of five major histologic subtypes of renal tumors, including both benign and malignant tumors, on multi-phase CT. Our model is a unified framework that simultaneously identifies lesions and classifies subtypes without manual intervention. We trained and tested the model using CT data from 308 patients who underwent nephrectomy for renal tumors. The model achieved an area under the curve (AUC) of 0.889 and outperformed radiologists for most subtypes. We further validated the model on an independent dataset of 184 patients from The Cancer Imaging Archive (TCIA). The AUC for this dataset was 0.855, and the model performed comparably to the radiologists. These results indicate that our model can achieve diagnostic performance similar to or better than that of radiologists in differentiating a wide range of renal tumors on multi-phase CT.
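For a multi-class problem like the five-subtype diagnosis above, a single AUC is typically obtained by averaging one-vs-rest AUCs over the classes. A minimal sketch of that macro-averaging scheme; the class names and probability rows below are illustrative toy data, not the paper's:

```python
def binary_auc(scores, positives):
    """AUC via the Mann-Whitney statistic for one class vs. the rest."""
    pos = [s for s, p in zip(scores, positives) if p]
    neg = [s for s, p in zip(scores, positives) if not p]
    wins = sum(1.0 if a > b else 0.5 if a == b else 0.0
               for a in pos for b in neg)
    return wins / (len(pos) * len(neg))

def macro_ovr_auc(probs, labels, classes):
    """One-vs-rest AUC per class, then the unweighted (macro) mean."""
    per_class = {}
    for k, c in enumerate(classes):
        per_class[c] = binary_auc([row[k] for row in probs],
                                  [y == c for y in labels])
    return per_class, sum(per_class.values()) / len(per_class)

# Illustrative 3-class toy data (one softmax output row per case).
classes = ["ccRCC", "pRCC", "chRCC"]
probs = [[0.7, 0.2, 0.1],
         [0.6, 0.3, 0.1],
         [0.2, 0.7, 0.1],
         [0.1, 0.3, 0.6],
         [0.2, 0.2, 0.6],
         [0.5, 0.3, 0.2]]
labels = ["ccRCC", "ccRCC", "pRCC", "chRCC", "chRCC", "pRCC"]
per_class, macro = macro_ovr_auc(probs, labels, classes)
print(per_class, round(macro, 4))
```

Macro averaging weights every subtype equally, which matters when rare subtypes would otherwise be drowned out by the common ones.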


2021 ◽  
Author(s):  
Sang-Heon Lim ◽  
Young Jae Kim ◽  
Yeon-Ho Park ◽  
Doojin Kim ◽  
Kwang Gi Kim ◽  
...  

Abstract Pancreas segmentation is necessary for observing lesions, analyzing anatomical structures, and predicting patient prognosis. Various studies have therefore designed convolutional-neural-network-based segmentation models for the pancreas. However, the deep learning approach is limited by a lack of data, and studies conducted on large computed tomography datasets are scarce. This study therefore performs deep-learning-based semantic segmentation on 1,006 participants and evaluates the automatic segmentation performance of the pancreas with four individual three-dimensional segmentation networks. We performed internal validation with the 1,006 patients and external validation using The Cancer Imaging Archive (TCIA) pancreas dataset. For the best-performing of the four deep learning networks, internal validation yielded mean precision, recall, and Dice similarity coefficients of 0.869, 0.842, and 0.842, respectively. On the external dataset, that network achieved mean precision, recall, and Dice similarity coefficients of 0.779, 0.749, and 0.735, respectively. We expect that generalized deep-learning-based systems can assist clinical decisions by providing accurate pancreas segmentation and quantitative information about the pancreas on abdominal computed tomography.
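The precision, recall, and Dice figures above all reduce to overlap counts between the predicted and ground-truth masks. A minimal sketch on toy binary masks (the voxel labels below are invented for illustration):

```python
def dice(pred, truth):
    """Dice similarity coefficient between two binary masks,
    given as flat lists of 0/1 voxel labels."""
    inter = sum(p * t for p, t in zip(pred, truth))
    total = sum(pred) + sum(truth)
    return 2.0 * inter / total if total else 1.0

def precision(pred, truth):
    """Fraction of predicted voxels that are truly pancreas."""
    inter = sum(p * t for p, t in zip(pred, truth))
    return inter / sum(pred)

def recall(pred, truth):
    """Fraction of true pancreas voxels that were predicted."""
    inter = sum(p * t for p, t in zip(pred, truth))
    return inter / sum(truth)

pred  = [0, 1, 1, 1, 0, 0, 1, 0]   # predicted pancreas voxels
truth = [0, 1, 1, 0, 0, 1, 1, 0]   # ground-truth annotation
print(dice(pred, truth), precision(pred, truth), recall(pred, truth))
# 0.75 0.75 0.75
```

Dice is the harmonic mean of precision and recall, which is why the three numbers track each other closely in segmentation papers.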


2020 ◽  
Vol 10 (1) ◽  
Author(s):  
Pierre Ambrosini ◽  
Eva Hollemans ◽  
Charlotte F. Kweldam ◽  
Geert J. L. H. van Leenders ◽  
Sjoerd Stallinga ◽  
...  

Abstract Cribriform growth patterns in prostate carcinoma are associated with poor prognosis. We aimed to introduce a deep learning method to detect such patterns automatically. To this end, a convolutional neural network was trained to detect cribriform growth patterns on 128 prostate needle biopsies. Ensemble learning that takes other tumor growth patterns into account during training was used to cope with heterogeneous and limited tumor tissue occurrences. ROC and FROC analyses were applied to assess network performance in detecting biopsies harboring cribriform growth patterns. The ROC analysis yielded a mean area under the curve of up to 0.81. FROC analysis demonstrated a sensitivity of 0.9 for regions larger than 0.0150 mm² with, on average, 7.5 false positives. To benchmark method performance against intra-observer annotation variability, false positive and false negative detections were re-evaluated by the pathologists. Pathologists considered 9% of the false positive regions as cribriform and 11% as possibly cribriform; 44% of the false negative regions were not annotated as cribriform. As a final experiment, the network was also applied to a dataset of 60 biopsy regions annotated by 23 pathologists. At the cut-off reaching the highest sensitivity, all images annotated as cribriform by at least 7 of the 23 pathologists were detected as cribriform by the network, while 9 of the 60 images were detected as cribriform although no pathologist labelled them as such. In conclusion, the proposed deep learning method has high sensitivity for detecting cribriform growth patterns at the expense of a limited number of false positives. It can detect cribriform regions that are labelled as such by at least a minority of pathologists. Therefore, it could assist clinical decision making by suggesting suspicious regions.
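An FROC curve plots lesion-level sensitivity against the average number of false positives per image as the detection threshold varies. A minimal sketch of computing one operating point on that curve; the candidate detections below are invented for illustration, not the paper's data:

```python
def froc_point(detections, n_lesions, n_images, threshold):
    """One FROC operating point: lesion sensitivity versus the average
    number of false positives per image at a given score threshold.

    detections: list of (score, is_true_lesion) candidate regions,
    where is_true_lesion marks candidates matched to a ground-truth region.
    """
    kept = [hit for score, hit in detections if score >= threshold]
    tp = sum(kept)                    # True counts as 1
    fp = len(kept) - tp
    return tp / n_lesions, fp / n_images

# Toy candidates pooled over 2 images containing 4 annotated lesions.
dets = [(0.9, True), (0.8, False), (0.7, True), (0.6, True),
        (0.5, False), (0.4, False), (0.2, True)]
sens, fp_per_image = froc_point(dets, n_lesions=4, n_images=2, threshold=0.5)
print(sens, fp_per_image)  # 0.75 1.0
```

Sweeping the threshold over all candidate scores traces out the full curve; sensitivity at a fixed false-positive budget (here, e.g., 0.75 at 1 FP/image) is the usual summary.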


Information ◽  
2020 ◽  
Vol 11 (9) ◽  
pp. 419 ◽  
Author(s):  
Irfan Ullah Khan ◽  
Nida Aslam

The emergence and outbreak of the novel coronavirus (COVID-19) had a devastating effect on global health, the economy, and individuals’ daily lives. Timely diagnosis of COVID-19 is a crucial task, as it reduces the risk of pandemic spread, and early treatment saves patients’ lives. Due to the time-consuming and complex nature of the gold-standard RT-PCR test used for the diagnosis of COVID-19, and its high false-negative rate, the need for an additional diagnostic method has increased. Studies have demonstrated the significance of X-ray images for the diagnosis of COVID-19. Applying deep-learning techniques to X-ray images can automate the diagnostic process and serve as an assistive tool for radiologists. In this study, we used four deep-learning models (DenseNet121, ResNet50, VGG16, and VGG19) with transfer learning to classify X-ray images as COVID-19 or normal. In the proposed study, VGG16 and VGG19 outperformed the other two deep-learning models. The study achieved an overall classification accuracy of 99.3%.


Author(s):  
A. Nurunnabi ◽  
F. N. Teferle ◽  
J. Li ◽  
R. C. Lindenbergh ◽  
A. Hunegnaw

Abstract. Ground surface extraction is one of the classic tasks in airborne laser scanning (ALS) point cloud processing, used for three-dimensional (3D) city modelling, infrastructure health monitoring, and disaster management. Many methods have been developed over the last three decades. Recently, deep learning (DL) has become the dominant technique for 3D point cloud classification. DL methods used for classification can be categorized into end-to-end and non-end-to-end approaches. One of the main challenges of supervised DL approaches is obtaining a sufficient amount of training data; the main advantage of a supervised non-end-to-end approach is that it requires less training data. This paper introduces a novel local-feature-based non-end-to-end DL algorithm that generates a binary classifier for ground point filtering. It studies feature relevance and investigates three models that use different combinations of features. The method is free from the limitations imposed by the irregular data structure and varying density of point clouds, which are the biggest challenges in applying convolutional neural networks, and it does not require transforming the data into regular 3D voxel grids or any rasterization. The performance of the new method has been demonstrated on two ALS datasets covering urban environments. The method successfully labels ground and non-ground points in the presence of steep slopes and height discontinuities in the terrain. Experiments in this paper show that the algorithm achieves around 97% in both F1-score and model accuracy for ground point labelling.
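A non-end-to-end pipeline of this kind computes hand-crafted local features per point and then trains a conventional binary classifier on them. As a stand-in for the paper's classifier (whose exact form is not specified here), a plain logistic regression over two hypothetical per-point features, height above the local minimum and planarity, illustrates the idea; all feature values below are invented:

```python
import math

def train_logistic(X, y, lr=0.5, epochs=500):
    """Stochastic-gradient-descent logistic regression: a minimal
    stand-in for a binary ground/non-ground point classifier
    trained on local geometric features."""
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = sum(wj * xj for wj, xj in zip(w, xi)) + b
            p = 1.0 / (1.0 + math.exp(-z))   # sigmoid
            g = p - yi                        # gradient of log loss w.r.t. z
            w = [wj - lr * g * xj for wj, xj in zip(w, xi)]
            b -= lr * g
    return w, b

def predict(w, b, xi):
    z = sum(wj * xj for wj, xj in zip(w, xi)) + b
    return 1 if z >= 0 else 0

# Hypothetical features per point: (height above local minimum, planarity).
# Ground points tend to be low and planar; non-ground points the opposite.
X = [(0.1, 0.9), (0.2, 0.8), (0.0, 0.95),   # ground
     (1.5, 0.2), (2.0, 0.1), (1.2, 0.3)]    # non-ground
y = [1, 1, 1, 0, 0, 0]
w, b = train_logistic(X, y)
print([predict(w, b, xi) for xi in X])  # [1, 1, 1, 0, 0, 0]
```

Because the classifier consumes per-point feature vectors rather than rasterized grids, it sidesteps the irregular-structure problem the abstract describes.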


2020 ◽  
Vol 2020 ◽  
pp. 1-10 ◽  
Author(s):  
Ilker Ozsahin ◽  
Boran Sekeroglu ◽  
Musa Sani Musa ◽  
Mubarak Taiwo Mustapha ◽  
Dilber Uzun Ozsahin

The COVID-19 diagnostic approach is mainly divided into two broad categories: a laboratory-based approach and a chest radiography approach. The last few months have witnessed a rapid increase in the number of studies using artificial intelligence (AI) techniques to diagnose COVID-19 from chest computed tomography (CT). In this study, we review AI-based diagnosis of COVID-19 using chest CT. We searched arXiv, medRxiv, and Google Scholar using the terms “deep learning”, “neural networks”, “COVID-19”, and “chest CT”. At the time of writing (August 24, 2020), there were nearly 100 such studies, of which 30 were selected for this review. We categorized the studies by classification task: COVID-19/normal, COVID-19/non-COVID-19, COVID-19/non-COVID-19 pneumonia, and severity. The reported sensitivity, specificity, precision, accuracy, area under the curve, and F1 score were as high as 100%, 100%, 99.62%, 99.87%, 100%, and 99.5%, respectively. However, the presented results should be compared with care owing to the different degrees of difficulty of the different classification tasks.
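The metrics compared across these studies all derive from the binary confusion matrix. A minimal sketch of the definitions (the counts below are invented, not taken from any reviewed study):

```python
def screening_metrics(tp, fp, tn, fn):
    """Common screening metrics from a binary confusion matrix."""
    sens = tp / (tp + fn)                    # sensitivity / recall
    spec = tn / (tn + fp)                    # specificity
    prec = tp / (tp + fp)                    # precision
    acc = (tp + tn) / (tp + fp + tn + fn)    # overall accuracy
    f1 = 2 * prec * sens / (prec + sens)     # harmonic mean of prec/recall
    return {"sensitivity": sens, "specificity": spec,
            "precision": prec, "accuracy": acc, "f1": f1}

# Invented counts: 100 COVID-19 and 100 non-COVID-19 cases.
m = screening_metrics(tp=90, fp=5, tn=95, fn=10)
print({k: round(v, 4) for k, v in m.items()})
```

Keeping the four counts alongside the derived percentages makes results from tasks of different difficulty easier to compare, which is exactly the caveat the review raises.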

