Deep learning-based fully automated Z-axis coverage range definition from scout scans to eliminate overscanning in chest CT imaging

2021, Vol 12 (1)
Author(s): Yazdan Salimi, Isaac Shiri, Azadeh Akhavanallaf, Zahra Mansouri, Abdollah Saberi Manesh, ...

Abstract
Background: Despite the prevalence of chest CT in the clinic, concerns remain about unoptimized protocols delivering high radiation doses to patients. This study aimed to assess the additional radiation dose associated with overscanning in chest CT and to develop an automated deep learning-assisted scan range selection technique to reduce the radiation dose to patients.
Results: A significant overscanning range (31 ± 24 mm) was observed in the clinical setting for over 95% of the cases. The average Dice coefficient for lung segmentation was 0.96 and 0.97 for anterior–posterior (AP) and lateral projections, respectively. Taking the exact lung coverage as the ground truth and the AP and lateral projections as input, the DL-based approach resulted in errors of 0.08 ± 1.46 mm and −1.5 ± 4.1 mm in the superior and inferior directions, respectively. In contrast, the error on external scout views was −0.7 ± 4.08 mm and 0.01 ± 14.97 mm in the superior and inferior directions, respectively. The effective dose (ED) reduction achieved by automated scan range selection was 21% in the test group. The evaluation of a large multi-centric chest CT dataset revealed an unnecessary ED of more than 2 mSv per scan and a 67% increase in the thyroid absorbed dose.
Conclusion: The proposed DL-based solution outperformed previous automatic methods with acceptable accuracy, even in complicated and challenging cases. The generalizability of the model was demonstrated by fine-tuning it on AP scout views and achieving acceptable results. The method can reduce the unoptimized dose to patients by excluding unnecessary organs from the field of view.
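As an illustration of the evaluation the abstract describes, the sketch below (not the authors' code; mask shapes, row spacing, and function names are assumptions) computes a Dice coefficient between lung masks and derives superior/inferior scan-boundary errors in millimeters from the rows a mask occupies on a scout projection.

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice overlap between two binary masks."""
    intersection = np.logical_and(pred, truth).sum()
    return 2.0 * intersection / (pred.sum() + truth.sum())

def scan_range_mm(mask: np.ndarray, mm_per_row: float) -> tuple:
    """(superior, inferior) Z boundaries in mm from the rows a lung mask occupies."""
    rows = np.where(mask.any(axis=1))[0]
    return rows.min() * mm_per_row, rows.max() * mm_per_row

# Toy masks standing in for predicted and ground-truth lung segmentations.
pred = np.zeros((512, 512), dtype=bool)
pred[100:300, 150:350] = True
gt = np.zeros((512, 512), dtype=bool)
gt[102:296, 150:350] = True

sup_p, inf_p = scan_range_mm(pred, mm_per_row=1.0)
sup_g, inf_g = scan_range_mm(gt, mm_per_row=1.0)
print("Dice:", dice_coefficient(pred, gt))
print("superior error (mm):", sup_p - sup_g, "inferior error (mm):", inf_p - inf_g)
```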

Stroke, 2021, Vol 52 (Suppl_1)
Author(s): Yannan Yu, Soren Christensen, Yuan Xie, Enhao Gong, Maarten G Lansberg, ...

Objective: Ischemic core prediction from CT perfusion (CTP) remains inaccurate compared with the gold standard, diffusion-weighted imaging (DWI). We evaluated whether a deep learning model trained to predict the DWI lesion from MR perfusion (MRP) could facilitate ischemic core prediction on CTP.
Methods: Using the multi-center CRISP cohort of acute ischemic stroke patients with CTP before thrombectomy, we included patients with major reperfusion (TICI score ≥ 2b), adequate image quality, and follow-up MRI at 3-7 days. Perfusion parameters, including Tmax, mean transit time, cerebral blood flow (CBF), and cerebral blood volume, were reconstructed by RAPID software. Core lab experts outlined the stroke lesion on the follow-up MRI. A model previously trained on a separate group of patients, with MRP parameters as input and the RAPID ischemic core on DWI as ground truth, was used as a starting point. We fine-tuned this model using CTP parameters as input and the follow-up MRI as ground truth. Another model was trained from scratch with only CTP data. 5-fold cross-validation was used. The performance of the models was compared with the ischemic core (rCBF ≤ 30%) from RAPID software for identifying the presence of a large infarct (volume > 70 or > 100 ml).
Results: 94 patients in the CRISP trial met the inclusion criteria (mean age 67 ± 15 years, 52% male, median baseline NIHSS 18, median 90-day mRS 2). Without fine-tuning, the MRI model had an agreement of 73% for infarcts > 70 ml and 69% for > 100 ml; the MRI model fine-tuned on CT improved the agreement to 77% and 73%; the CT model trained from scratch had agreements of 73% and 71%. All of the deep learning models outperformed the rCBF segmentation from RAPID, which had agreements of 51% and 64% (see Table and Figure).
Conclusions: It is feasible to apply an MRP-based deep learning model to CT. Fine-tuning with CTP data further improves the predictions. All deep learning models predict the stroke lesion after major recanalization better than thresholding approaches based on rCBF.
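The rCBF ≤ 30% baseline and the large-infarct agreement metric are simple enough to sketch. The snippet below is an assumed reconstruction, not the study's pipeline; function names, array shapes, and the voxel volume are illustrative.

```python
import numpy as np

def core_volume_ml(cbf: np.ndarray, contralateral_mean_cbf: float,
                   voxel_ml: float) -> float:
    """Ischemic-core volume from relative CBF thresholding (rCBF <= 30%)."""
    rcbf = cbf / contralateral_mean_cbf          # CBF relative to normal tissue
    return float((rcbf <= 0.30).sum()) * voxel_ml

def large_infarct_agreement(pred_vols, true_vols, cutoff_ml=70.0) -> float:
    """Fraction of cases where two volume estimates agree on volume > cutoff."""
    pred = np.asarray(pred_vols) > cutoff_ml
    true = np.asarray(true_vols) > cutoff_ml
    return float((pred == true).mean())

# Example: agreement at the 70 ml cutoff between model and follow-up volumes.
print(large_infarct_agreement([55.0, 120.0, 80.0], [60.0, 95.0, 65.0]))  # 0.666...
```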


Diagnostics, 2021, Vol 11 (11), pp. 2025
Author(s): Jasjit S. Suri, Sushant Agarwal, Pranav Elavarthi, Rajesh Pathak, Vedmanvitha Ketireddy, ...

Background: For COVID-19 lung severity assessment, segmentation of the lungs on computed tomography (CT) is the first crucial step. Current deep learning (DL)-based artificial intelligence (AI) models have a bias in the training stage of segmentation because only one set of ground truth (GT) annotations is evaluated. We propose a robust and stable inter-variability analysis of CT lung segmentation in COVID-19 to avoid the effect of this bias.
Methodology: The proposed inter-variability study uses two GT tracers for lung segmentation on chest CT. Three AI models, PSP Net, VGG-SegNet, and ResNet-SegNet, were trained using the GT annotations. We hypothesized that if the AI models are trained on GT tracings from multiple experience levels, and if the AI performance on the test data between these AI models is within a 5% range, such an AI model can be considered robust and unbiased. The K5 protocol (training to testing: 80%:20%) was adopted. Ten metrics were used for performance evaluation.
Results: The database consisted of 5000 chest CT images from 72 COVID-19-infected patients. By computing the correlation coefficients (CC) between the outputs of the two AI models trained on the two GT tracers, computing the differences in their CCs, and repeating the process for all three AI models, we obtained differences of 0%, 0.51%, and 2.04% (all < 5%), thereby validating the hypothesis. The performance was comparable, with the following order: ResNet-SegNet > PSP Net > VGG-SegNet.
Conclusions: The AI models were clinically robust and stable during the inter-variability analysis of CT lung segmentation in COVID-19 patients.
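A minimal sketch of the hypothesis test as stated: correlate each model's output with its GT tracer and check that the CC difference stays under the 5% tolerance. Function names are assumptions; the paper's exact CC formulation may differ.

```python
import numpy as np

def cc(output: np.ndarray, ground_truth: np.ndarray) -> float:
    """Pearson correlation coefficient between flattened segmentations."""
    return float(np.corrcoef(output.ravel(), ground_truth.ravel())[0, 1])

def within_tolerance(out_a, gt_a, out_b, gt_b, tol=0.05) -> bool:
    """True if the tracer-A and tracer-B models differ by less than 5% in CC."""
    return abs(cc(out_a, gt_a) - cc(out_b, gt_b)) < tol
```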


Author(s): T. Wu, B. Vallet, M. Pierrot-Deseilligny, E. Rupnik

Abstract. Stereo dense matching is a fundamental task for 3D scene reconstruction. Recently, deep learning-based methods have proven effective on some benchmark datasets, for example Middlebury and KITTI stereo. However, it is not easy to find a training dataset for aerial photogrammetry, as generating ground truth data for real scenes is a challenging task. In the photogrammetry community, many evaluation methods use digital surface models (DSM) to generate the ground truth disparity for the stereo pairs, but in this case interpolation may introduce errors into the estimated disparity. In this paper, we publish a stereo dense matching dataset based on the ISPRS Vaihingen dataset and use it to evaluate several traditional and deep learning-based methods. The evaluation shows that learning-based methods outperform traditional methods significantly when fine-tuning is done on a similar landscape. The benchmark also investigates the impact of the base-to-height ratio on the performance of the evaluated methods. The dataset can be found at https://github.com/whuwuteng/benchmark_ISPRS2021.
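For readers unfamiliar with DSM-derived ground truth, the sketch below shows one standard way to turn surface heights into per-pixel disparity for a rectified pair via the pinhole relation d = f·B/Z. All numeric values and names are illustrative assumptions, not the benchmark's parameters.

```python
import numpy as np

def dsm_to_disparity(dsm: np.ndarray, flying_height_m: float,
                     focal_px: float, baseline_m: float) -> np.ndarray:
    """Per-pixel disparity (px) from surface heights via d = f * B / Z."""
    depth = flying_height_m - dsm            # camera-to-surface distance (m)
    return focal_px * baseline_m / depth

# Illustrative numbers only: a flat surface at 270 m under a 1200 m flight line.
dsm = np.full((100, 100), 270.0)
disp = dsm_to_disparity(dsm, flying_height_m=1200.0, focal_px=12000.0, baseline_m=300.0)
# Interpolating the DSM at occlusions and steep facades is exactly where the
# abstract notes that errors can enter the estimated disparity.
```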


2021, Vol 27 (1)
Author(s): Paulo Drews-Jr, Isadora de Souza, Igor P. Maurell, Eglen V. Protas, Silvia S. C. Botelho

Abstract
Image segmentation is an important step in many computer vision and image processing algorithms. It is often adopted in tasks such as object detection, classification, and tracking. The segmentation of underwater images is a challenging problem because the water and the particles suspended in it scatter and absorb light rays, effects that make the application of traditional segmentation methods cumbersome. Moreover, applying the state-of-the-art segmentation methods, which are based on deep learning, to this problem requires an underwater image segmentation dataset. In this paper, we therefore develop a dataset of real underwater images, along with other combinations using simulated data, to allow the training of two of the best deep learning segmentation architectures, aiming to deal with the segmentation of underwater images in the wild. In addition to models trained on these datasets, fine-tuning and image restoration strategies are also explored. For a more meaningful evaluation, all models are compared on the testing set of real underwater images. We show that the methods obtain impressive results against manually segmented ground truth, mainly when trained with our real dataset, even using a relatively small number of labeled underwater training images.
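A minimal sketch of the fine-tuning strategy the abstract explores: start from a segmentation network pre-trained on generic imagery and continue training on underwater images. The stand-in batch, the two-class head, and the choice of torchvision's DeepLabV3 are assumptions, not the paper's exact architectures or data.

```python
import torch
import torch.nn as nn
from torchvision.models.segmentation import deeplabv3_resnet50

model = deeplabv3_resnet50(weights="DEFAULT")           # generic pre-training
model.classifier[4] = nn.Conv2d(256, 2, kernel_size=1)  # 2 classes: object / water

# Stand-in batch in place of a real underwater DataLoader.
train_loader = [(torch.randn(2, 3, 256, 256), torch.randint(0, 2, (2, 256, 256)))]

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

model.train()
for images, masks in train_loader:
    optimizer.zero_grad()
    logits = model(images)["out"]      # (N, 2, H, W), upsampled to input size
    loss = loss_fn(logits, masks)      # masks: (N, H, W) integer class ids
    loss.backward()
    optimizer.step()
```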


Stroke, 2020, Vol 51 (Suppl_1)
Author(s): Yannan Yu, Yuan Xie, Enhao Gong, Thoralf Thamm, Jiahong Ouyang, ...

Objective: We investigated whether deep learning models are able to define the penumbra and ischemic core by comparing models from two training strategies (with and without pre-training) against clinical thresholding criteria (the MRI parameters time-to-peak of the residue function [Tmax] and apparent diffusion coefficient [ADC]).
Methods: We selected patients from two multicenter stroke trials with baseline perfusion-weighted imaging (PWI) and diffusion-weighted imaging (DWI) and 3-7 day T2-FLAIR. Based on the reperfusion rate calculated from baseline and 24 hr PWI, patients were grouped into unknown (no 24 hr PWI scan), minimal (≤ 20%), partial (20%-80%), and major (≥ 80%) reperfusion. An attention-gated U-net structure was selected for training, with eight image channels from baseline PWI/DWI as inputs and the infarct lesion manually segmented on T2-FLAIR as ground truth. Two training strategies were used: (1) training two models separately on minimal and major reperfusion patients; (2) pre-training a model using patients with partial and unknown reperfusion, then fine-tuning two models on minimal and major reperfusion patients, respectively. Prediction was evaluated by the Dice similarity coefficient (DSC) and the lesion volume error at an optimal threshold. In minimal and major reperfusion patients, the deep learning models and Tmax and ADC thresholding were compared using the paired-sample Wilcoxon test.
Results: 182 patients were included (85 males, age 65 ± 16 yrs, baseline NIHSS 15, IQR 10-19), with a breakdown of minimal/major/partial/unknown reperfusion status of 32/65/43/42 patients, respectively. The pre-training approach performed the best among all approaches (Table 1, Figure 1).
Conclusion: Deep learning models to predict the penumbra and ischemic core are best trained by general pre-training on a wide range of stroke cases followed by fine-tuning on the extreme cases. This method outperforms conventional thresholding approaches inspired by DWI-PWI mismatch.
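Training strategy (2) can be summarized in a few lines: pre-train once on the partial/unknown groups, then fine-tune two copies on the extremes. The sketch below uses stand-in modules and data; the actual attention-gated U-net, loaders, and hyperparameters are not reproduced here.

```python
import copy
import torch

# Stand-ins for the attention-gated U-net and the three reperfusion-group
# loaders; shapes follow the abstract (8 input channels, binary infarct mask).
base_model = torch.nn.Conv2d(8, 1, kernel_size=3, padding=1)
batch = (torch.randn(2, 8, 64, 64), (torch.rand(2, 1, 64, 64) > 0.5).float())
partial_unknown_loader = minimal_loader = major_loader = [batch]

def run_epochs(model, loader, epochs, lr):
    """Train `model` on `loader` for a number of epochs; returns the model."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = torch.nn.BCEWithLogitsLoss()
    model.train()
    for _ in range(epochs):
        for x, y in loader:
            opt.zero_grad()
            loss = loss_fn(model(x), y)
            loss.backward()
            opt.step()
    return model

# Strategy (2): general pre-training, then one fine-tuned copy per extreme group.
base = run_epochs(base_model, partial_unknown_loader, epochs=5, lr=1e-3)
minimal_model = run_epochs(copy.deepcopy(base), minimal_loader, epochs=3, lr=1e-4)
major_model = run_epochs(copy.deepcopy(base), major_loader, epochs=3, lr=1e-4)
```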


2020
Author(s): Jinseok Lee

BACKGROUND: The coronavirus disease (COVID-19) has spread explosively worldwide since the beginning of 2020. According to a multinational consensus statement from the Fleischner Society, computed tomography (CT) can be used as a relevant screening tool owing to its higher sensitivity for detecting early pneumonic changes. However, physicians are extremely busy fighting COVID-19 in this era of worldwide crisis. Thus, it is crucial to accelerate the development of an artificial intelligence (AI) diagnostic tool to support physicians.
OBJECTIVE: We aimed to rapidly develop an AI technique to diagnose COVID-19 pneumonia and differentiate it from non-COVID pneumonia and non-pneumonia diseases on CT.
METHODS: A simple 2D deep learning framework, named the fast-track COVID-19 classification network (FCONet), was developed to diagnose COVID-19 pneumonia based on a single chest CT image. FCONet was developed by transfer learning, using one of four state-of-the-art pre-trained deep learning models (VGG16, ResNet50, InceptionV3, or Xception) as a backbone. For training and testing of FCONet, we collected 3,993 chest CT images of patients with COVID-19 pneumonia, other pneumonia, and non-pneumonia diseases from Wonkwang University Hospital, Chonnam National University Hospital, and the Italian Society of Medical and Interventional Radiology public database. These CT images were split into a training and a testing set at a ratio of 8:2. On the test dataset, the diagnostic performance for COVID-19 pneumonia was compared among the four pre-trained FCONet models. In addition, we tested the FCONet models on an external testing dataset extracted from low-quality chest CT images of COVID-19 pneumonia embedded in recently published papers.
RESULTS: Of the four pre-trained models of FCONet, ResNet50 showed excellent diagnostic performance (sensitivity 99.58%, specificity 100%, and accuracy 99.87%) and outperformed the other three pre-trained models on the testing dataset. On the additional external test dataset of low-quality CT images, the detection accuracy of the ResNet50 model was the highest (96.97%), followed by Xception, InceptionV3, and VGG16 (90.71%, 89.38%, and 87.12%, respectively).
CONCLUSIONS: FCONet, a simple 2D deep learning framework based on a single chest CT image, provides excellent diagnostic performance in detecting COVID-19 pneumonia. Based on our testing dataset, the ResNet50-based FCONet might be the best model, as it outperformed the FCONet models based on VGG16, Xception, and InceptionV3.
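The transfer-learning setup described (a pre-trained backbone plus a small classification head for three classes) maps directly onto the Keras applications API. The sketch below is an assumed configuration, not FCONet's published one: head sizes, freezing policy, and hyperparameters are illustrative.

```python
from tensorflow.keras import layers, models
from tensorflow.keras.applications import ResNet50

backbone = ResNet50(weights="imagenet", include_top=False,
                    input_shape=(224, 224, 3))
backbone.trainable = False                      # freeze pre-trained features

model = models.Sequential([
    backbone,
    layers.GlobalAveragePooling2D(),
    layers.Dense(128, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(3, activation="softmax"),      # COVID-19 / other pneumonia / non-pneumonia
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_images, train_labels, epochs=20, validation_split=0.2)
```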


2021, Vol 11 (1)
Author(s): Christian Crouzet, Gwangjin Jeong, Rachel H. Chae, Krystal T. LoPresti, Cody E. Dunn, ...

Abstract
Cerebral microhemorrhages (CMHs) are associated with cerebrovascular disease, cognitive impairment, and normal aging. One method to study CMHs is to analyze histological sections (5–40 μm) stained with Prussian blue. Currently, users manually and subjectively identify and quantify Prussian blue-stained regions of interest, a process that is prone to inter-individual variability and can lead to significant delays in data analysis. To improve this labor-intensive process, we developed and compared three digital pathology approaches to identify and quantify CMHs from Prussian blue-stained brain sections: (1) ratiometric analysis of RGB pixel values, (2) phasor analysis of RGB images, and (3) deep learning using a mask region-based convolutional neural network. We applied these approaches to a preclinical mouse model of inflammation-induced CMHs. One hundred CMHs were imaged using a 20× objective and an RGB color camera. To establish the ground truth, four users independently annotated Prussian blue-labeled CMHs. Compared to the ground truth, the deep learning and ratiometric approaches performed better than the phasor analysis approach. The deep learning approach had the highest precision of the three methods. The ratiometric approach was the most versatile and maintained accuracy, albeit with less precision. Our data suggest that implementing these methods to analyze CMH images can drastically increase processing speed while maintaining precision and accuracy.
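Approach (1), ratiometric analysis of RGB pixel values, can be sketched as a blue-over-red ratio threshold, since Prussian blue staining elevates the blue channel. The threshold and pixel scale below are illustrative assumptions, not the paper's calibrated values.

```python
import numpy as np

def prussian_blue_mask(rgb: np.ndarray, ratio_thresh: float = 1.4) -> np.ndarray:
    """Binary mask of pixels whose blue/red ratio exceeds a threshold."""
    rgb = rgb.astype(np.float32) + 1e-6      # avoid division by zero
    return (rgb[..., 2] / rgb[..., 0]) > ratio_thresh

def stained_area_um2(mask: np.ndarray, um_per_px: float) -> float:
    """Stained area in square micrometers from a pixel mask."""
    return float(mask.sum()) * um_per_px ** 2
```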

