COVLIAS 1.0: Lung Segmentation in COVID-19 Computed Tomography Scans Using Hybrid Deep Learning Artificial Intelligence Models

Diagnostics ◽  
2021 ◽  
Vol 11 (8) ◽  
pp. 1405
Author(s):  
Jasjit S. Suri ◽  
Sushant Agarwal ◽  
Rajesh Pathak ◽  
Vedmanvitha Ketireddy ◽  
Marta Columbu ◽  
...  

Background: COVID-19 lung segmentation using Computed Tomography (CT) scans is important for the diagnosis of lung severity. Automated lung segmentation is challenging due to (a) CT radiation dosage and (b) ground-glass opacities caused by COVID-19. The lung segmentation methodologies proposed in 2020 were semi-automated or automated but not reliable, accurate, and user-friendly. The proposed study presents a COVID Lung Image Analysis System (COVLIAS 1.0, AtheroPoint™, Roseville, CA, USA) consisting of hybrid deep learning (HDL) models for lung segmentation. Methodology: COVLIAS 1.0 consists of three methods based on solo deep learning (SDL) or hybrid deep learning (HDL). SegNet is proposed in the SDL category, while VGG-SegNet and ResNet-SegNet are designed under the HDL paradigm. The three proposed AI approaches were benchmarked against the National Institutes of Health (NIH)-based conventional segmentation model using fuzzy connectedness. A cross-validation protocol with a 40:60 ratio between training and testing was designed, with 10% validation data. The ground truth (GT) was manually traced by radiologist-trained personnel. For performance evaluation, nine different criteria were selected to evaluate the SDL or HDL lung segmentation regions and the lungs' long axis against the GT. Results: Using a database of 5000 chest CT images (from 72 patients), COVLIAS 1.0 yielded AUCs of ~0.96, ~0.97, ~0.98, and ~0.96 (p-value < 0.001) within a 5% range of the GT area for SegNet, VGG-SegNet, ResNet-SegNet, and NIH, respectively. The mean Figure of Merit across the four models (left and right lung) was above 94%. Benchmarked against the NIH segmentation method, the proposed models demonstrated improvements of 58% and 44% for ResNet-SegNet, and 52% and 36% for VGG-SegNet, for lung area and lung long axis, respectively. The PE statistics performance was in the following order: ResNet-SegNet > VGG-SegNet > NIH > SegNet. The HDL models run in <1 s per image on test data. Conclusions: The COVLIAS 1.0 system can be applied in real time in radiology-based clinical settings.
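The abstract reports an area-based Figure of Merit above 94% for agreement with the ground-truth lung area. As a minimal sketch of how such an area-agreement score can be computed from binary masks (the paper's exact formula is not given here; the `figure_of_merit` helper and the 100 × (1 − relative area error) definition are illustrative assumptions):

```python
import numpy as np

def figure_of_merit(pred_mask: np.ndarray, gt_mask: np.ndarray) -> float:
    """Area-based Figure of Merit in percent: 100 * (1 - |A_pred - A_gt| / A_gt).
    One common area-error definition; the paper's exact criterion may differ."""
    a_pred = float(pred_mask.sum())
    a_gt = float(gt_mask.sum())
    return 100.0 * (1.0 - abs(a_pred - a_gt) / a_gt)

# Toy masks: GT has 100 foreground pixels, prediction has 96.
gt = np.zeros((20, 20), dtype=bool)
gt[:10, :10] = True            # 100 pixels
pred = gt.copy()
pred[0, :4] = False            # remove 4 pixels -> area 96
print(figure_of_merit(pred, gt))   # -> 96.0
```

A score above 94%, as reported for all four models, corresponds to a relative area error under 6%.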

Sensors ◽  
2020 ◽  
Vol 20 (13) ◽  
pp. 3679
Author(s):  
Lisardo Prieto González ◽  
Susana Sanz Sánchez ◽  
Javier Garcia-Guzman ◽  
María Jesús L. Boada ◽  
Beatriz L. Boada

Presently, autonomous vehicles are on the rise and are expected to be on the roads in the coming years. It therefore becomes necessary to have adequate knowledge about their states to design controllers capable of providing adequate performance in all driving scenarios. Sideslip and roll angles are critical parameters in vehicular lateral stability. The latter has a high impact on vehicles with an elevated center of gravity, such as trucks, buses, and industrial vehicles, as they are prone to rollover. Due to the high cost of the current sensors used to measure these angles directly, much of the research is focused on estimating them. One of the difficulties is that vehicles are strongly non-linear systems that require specific methods able to tackle this feature. The evolution of Artificial Intelligence models, such as the complex Artificial Neural Network architectures that compose the Deep Learning paradigm, has been shown to provide excellent performance for complex and non-linear control problems. In this paper, the authors propose an inexpensive but powerful Deep Learning-based model to estimate the roll and sideslip angles simultaneously in mass-production vehicles. The model uses input signals that can be obtained directly from onboard vehicle sensors, such as the longitudinal and lateral accelerations, the steering angle, and the roll and yaw rates. The model was trained using hundreds of thousands of data samples provided by Trucksim® and validated using data captured from real driving maneuvers with a calibrated ground-truth device, the VBOX3i dual-antenna GPS from Racelogic®. Both the Trucksim® software and the VBOX measuring equipment are recognized and widely used in the automotive sector, providing robust data for the research shown in this article.
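As an illustration of the estimator described above, the sketch below runs one forward pass of a small multilayer perceptron mapping the five onboard signals to roll and sideslip estimates. The layer sizes, activation, and random weights are illustrative assumptions standing in for a trained network; the paper's actual Deep Learning architecture is more complex:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical input layout (the paper uses these onboard signals):
# [longitudinal accel, lateral accel, steering angle, roll rate, yaw rate]
N_IN, N_HID, N_OUT = 5, 16, 2   # outputs: [roll angle, sideslip angle]

# Illustrative random weights standing in for trained parameters.
W1 = rng.normal(scale=0.1, size=(N_HID, N_IN))
b1 = np.zeros(N_HID)
W2 = rng.normal(scale=0.1, size=(N_OUT, N_HID))
b2 = np.zeros(N_OUT)

def estimate_angles(signals: np.ndarray) -> np.ndarray:
    """One forward pass of a minimal MLP: tanh hidden layer, linear output."""
    h = np.tanh(W1 @ signals + b1)
    return W2 @ h + b2

sample = np.array([0.2, 1.5, 0.05, 0.01, 0.12])  # made-up sensor reading
roll_est, sideslip_est = estimate_angles(sample)
```

In practice such a network would be trained on simulator data and validated against the GPS ground-truth device, as the abstract describes.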


Diagnostics ◽  
2021 ◽  
Vol 11 (11) ◽  
pp. 2025
Author(s):  
Jasjit S. Suri ◽  
Sushant Agarwal ◽  
Pranav Elavarthi ◽  
Rajesh Pathak ◽  
Vedmanvitha Ketireddy ◽  
...  

Background: For COVID-19 lung severity assessment, segmentation of the lungs on computed tomography (CT) is the first crucial step. Current deep learning (DL)-based Artificial Intelligence (AI) models have a bias in the training stage of segmentation because only one set of ground truth (GT) annotations is evaluated. We propose a robust and stable inter-variability analysis of CT lung segmentation in COVID-19 to avoid the effect of this bias. Methodology: The proposed inter-variability study consists of two GT tracers for lung segmentation on chest CT. Three AI models, PSP Net, VGG-SegNet, and ResNet-SegNet, were trained using the GT annotations. We hypothesized that if AI models are trained on GT tracings from multiple experience levels, and if the AI performance on the test data between these AI models is within a 5% range, such AI models can be considered robust and unbiased. The K5 protocol (training to testing: 80%:20%) was adopted. Ten kinds of metrics were used for performance evaluation. Results: The database consisted of 5000 chest CT images from 72 COVID-19-infected patients. By computing the coefficient of correlation (CC) between the outputs of the two AI models trained on the two GT tracers, computing the difference in their CCs, and repeating the process for all three AI models, we show differences of 0%, 0.51%, and 2.04% (all < 5%), thereby validating the hypothesis. The performance was comparable; however, it had the following order: ResNet-SegNet > PSP Net > VGG-SegNet. Conclusions: The AI models were clinically robust and stable during the inter-variability analysis of CT lung segmentation in COVID-19 patients.
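The robustness test above hinges on the difference between correlation coefficients staying under 5%. A minimal sketch of that check (the toy lung-area series and the `cc_percent_difference` helper are illustrative assumptions, not the paper's data):

```python
import numpy as np

def cc_percent_difference(out_a, out_b, gt):
    """Correlation of each model's output with the reference series, and the
    absolute difference between the two CCs expressed in percent."""
    cc_a = np.corrcoef(out_a, gt)[0, 1]
    cc_b = np.corrcoef(out_b, gt)[0, 1]
    return cc_a, cc_b, abs(cc_a - cc_b) * 100.0

# Toy per-slice lung areas: two models (trained on different GT tracers)
# tracking the same reference series.
ref = np.array([100., 120., 140., 160., 180.])
model_a = ref + np.array([1., -2., 0.5, 1.5, -1.])
model_b = ref + np.array([-1., 1., -0.5, 2., 0.])
cc_a, cc_b, diff = cc_percent_difference(model_a, model_b, ref)
print(diff < 5.0)   # within the paper's 5% robustness threshold -> True
```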


Diagnostics ◽  
2021 ◽  
Vol 11 (12) ◽  
pp. 2367
Author(s):  
Jasjit S. Suri ◽  
Sushant Agarwal ◽  
Alessandro Carriero ◽  
Alessio Paschè ◽  
Pietro S. C. Danna ◽  
...  

(1) Background: COVID-19 computed tomography (CT) lung segmentation is critical for COVID lung severity diagnosis. Approaches proposed during 2020–2021 were semiautomated or automated but not accurate, user-friendly, and benchmarked against an industry standard. The proposed study compared the COVID Lung Image Analysis System, COVLIAS 1.0 (GBTI, Inc., and AtheroPoint™, Roseville, CA, USA, referred to as COVLIAS), against MedSeg, a web-based Artificial Intelligence (AI) segmentation tool, where COVLIAS uses hybrid deep learning (HDL) models for CT lung segmentation. (2) Materials and Methods: The study used 5000 Italian COVID-19-positive CT lung images collected from 72 patients (experimental data) confirmed by the reverse transcription-polymerase chain reaction (RT-PCR) test. Two hybrid AI models from the COVLIAS system, namely VGG-SegNet (HDL 1) and ResNet-SegNet (HDL 2), were used to segment the CT lungs. Both COVLIAS and MedSeg were compared against two manual delineations (MD 1 and MD 2) using (i) Bland–Altman plots, (ii) correlation coefficient (CC) plots, (iii) the receiver operating characteristic curve, (iv) the Figure of Merit, and (v) visual overlays. A cohort of 500 Croatian COVID-19-positive CT lung images (validation data) was also used. A previously trained COVLIAS model was directly applied to the validation data (as part of Unseen-AI) to segment the CT lungs and compare them against MedSeg. (3) Results: For the experimental data, the four CCs between COVLIAS (HDL 1) vs. MD 1, COVLIAS (HDL 1) vs. MD 2, COVLIAS (HDL 2) vs. MD 1, and COVLIAS (HDL 2) vs. MD 2 were all 0.96, for a mean of 0.96. The CCs between MedSeg vs. MD 1 and MedSeg vs. MD 2 were both 0.98, for a mean of 0.98. On the validation data, the CCs between COVLIAS (HDL 1) vs. MedSeg and COVLIAS (HDL 2) vs. MedSeg were 0.98 and 0.99, respectively. For the experimental data, the mean values for COVLIAS and MedSeg differed by <2.5%, meeting the standard of equivalence. The average running times for COVLIAS and MedSeg on a single lung CT slice were ~4 s and ~10 s, respectively. (4) Conclusions: The performances of COVLIAS and MedSeg were similar; however, COVLIAS showed improved computing time over MedSeg.
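Bland–Altman analysis, listed under (i) above, summarizes the agreement between two measurement methods via the bias and 95% limits of agreement of their differences. A minimal sketch (the per-slice areas and the `bland_altman` helper are illustrative assumptions, not the paper's values):

```python
import numpy as np

def bland_altman(a: np.ndarray, b: np.ndarray):
    """Bland–Altman summary: mean difference (bias) and 95% limits of
    agreement (bias ± 1.96 * SD of the paired differences)."""
    diff = a - b
    bias = diff.mean()
    sd = diff.std(ddof=1)
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# Toy per-slice lung areas (cm^2) from two segmentation tools.
tool_a = np.array([152.0, 160.5, 171.2, 148.9, 166.3])
tool_b = np.array([150.8, 162.0, 170.1, 150.2, 165.0])
bias, lo, hi = bland_altman(tool_a, tool_b)
```

A bias near zero with narrow limits of agreement indicates the two tools are interchangeable, which is the sense of the <2.5% equivalence claim above.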


Medicina ◽  
2021 ◽  
Vol 57 (11) ◽  
pp. 1148
Author(s):  
Marie Takahashi ◽  
Tomoyuki Fujioka ◽  
Toshihiro Horii ◽  
Koichiro Kimura ◽  
Mizuki Kimura ◽  
...  

Background and Objectives: This study aimed to investigate whether predictive indicators for the deterioration of respiratory status can be derived from deep learning analysis of the initial chest computed tomography (CT) scans of patients with coronavirus disease 2019 (COVID-19). Materials and Methods: Out of 117 CT scans of 75 patients with COVID-19 admitted to our hospital between April and June 2020, we retrospectively analyzed 79 CT scans that had a definite time of onset and were performed prior to any medication intervention. Patients were grouped according to the presence or absence of increased oxygen demand after the CT scan. Quantitative volume data of lung opacity were measured automatically using a deep learning-based image analysis system. The sensitivity, specificity, and area under the receiver operating characteristic curve (AUC) of the opacity volume data were calculated to evaluate the accuracy of the system in predicting the deterioration of respiratory status. Results: All 79 CT scans were included (median age, 62 years; interquartile range, 46–77 years); 56 patients (70.9%) were male. The volume of opacity was significantly higher in the increased oxygen demand group than in the non-increased oxygen demand group (585.3 vs. 132.8 mL, p < 0.001). The sensitivity, specificity, and AUC were 76.5%, 68.2%, and 0.737, respectively, in the prediction of increased oxygen demand. Conclusion: Deep learning-based quantitative analysis of the affected lung volume in the initial CT scans of patients with COVID-19 can predict the deterioration of respiratory status, helping to improve treatment and resource management.
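The reported sensitivity, specificity, and AUC can all be derived from the opacity volumes and the oxygen-demand labels. A minimal sketch with made-up values (the 200 mL threshold, toy volumes, and helper names are illustrative assumptions; the rank-based AUC used here is equivalent to the normalized Mann–Whitney statistic):

```python
import numpy as np

def sens_spec(volumes, labels, threshold):
    """Sensitivity/specificity of 'opacity volume >= threshold' as a
    predictor of increased oxygen demand (label 1)."""
    pred = volumes >= threshold
    tp = np.sum(pred & (labels == 1)); fn = np.sum(~pred & (labels == 1))
    tn = np.sum(~pred & (labels == 0)); fp = np.sum(pred & (labels == 0))
    return float(tp / (tp + fn)), float(tn / (tn + fp))

def auc(volumes, labels):
    """Rank-based AUC: probability that a positive case has a larger
    volume than a negative case (ties counted as half)."""
    pos = volumes[labels == 1]; neg = volumes[labels == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return float(wins / (len(pos) * len(neg)))

# Toy opacity volumes (mL); 1 = increased oxygen demand after the scan.
vol = np.array([600., 550., 140., 120., 300., 90.])
lab = np.array([1, 1, 0, 0, 1, 0])
sens, spec = sens_spec(vol, lab, 200.0)
```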


2020 ◽  
Author(s):  
Qingli Dou ◽  
Jiangping Liu ◽  
Wenwu Zhang ◽  
Yanan Gu ◽  
Wan-Ting Hsu ◽  
...  

Background: The characteristic chest computed tomography (CT) manifestation of the 2019 novel coronavirus (COVID-19) was added as a diagnostic criterion in the Chinese national COVID-19 management guideline. Whether the characteristic findings of chest CT could differentiate confirmed COVID-19 cases from nucleic acid test (NAT)-negative patients has not been rigorously evaluated. Purpose: We aimed to test whether the chest CT manifestation of COVID-19 can be differentiated by a radiologist or a computer-based CT image analysis system. Methods: We conducted a retrospective case-control study that included 52 laboratory-confirmed COVID-19 patients and 80 non-COVID-19 viral pneumonia patients between 20 December 2019 and 10 February 2020. The chest CT images were evaluated by radiologists in a double-blind fashion. A computer-based image analysis system (uAI system, Lianying Inc., Shanghai, China) detected the lesions in 18 lung segments defined by the Boyden classification system and calculated the infected volume in each segment. The number and volume of lesions detected by the radiologists and the computer system were compared with the chi-square test or Mann–Whitney U test, as appropriate. Results: The main CT manifestations of COVID-19 were multi-lobar/segmental peripheral ground-glass opacities and patchy airspace infiltrates. The case and control groups were similar in demographics, comorbidity, and clinical manifestations. There was no significant difference in the eight radiologist-identified CT image features between the two groups of patients. There was also no difference in the absolute and relative volume of infected regions in each lung segment. Conclusions: We documented the non-differentiating nature of initial chest CT images between COVID-19 and other viral pneumonia with suspected symptoms. Our results do not support CT findings replacing microbiological diagnosis as a critical criterion for COVID-19 diagnosis. Our findings may prompt re-evaluation of isolated patients without laboratory confirmation.
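The volume comparisons above use the Mann–Whitney U test, appropriate for non-normally distributed lesion volumes. A minimal sketch of computing the U statistic from midranks (the toy per-segment volume fractions are illustrative assumptions; a normal approximation or table lookup would then convert U to a p-value):

```python
import numpy as np

def mann_whitney_u(x, y):
    """U statistic for sample x vs. y, computed from midranks of the
    pooled data (midranks handle ties)."""
    pooled = np.concatenate([x, y])
    order = pooled.argsort()
    ranks = np.empty(len(pooled))
    ranks[order] = np.arange(1, len(pooled) + 1)
    for v in np.unique(pooled):               # average ranks over ties
        ranks[pooled == v] = ranks[pooled == v].mean()
    r_x = ranks[: len(x)].sum()               # rank sum of sample x
    return r_x - len(x) * (len(x) + 1) / 2

# Toy infected-volume fractions for one lung segment
# (COVID-19 cases vs. other viral pneumonia controls).
covid = np.array([0.12, 0.30, 0.25, 0.18])
other = np.array([0.15, 0.22, 0.28, 0.20])
u = mann_whitney_u(covid, other)   # max possible here is 4 * 4 = 16
```

A U near half its maximum (here 8 of 16) indicates no separation between the groups, consistent with the non-differentiating result reported above.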


2020 ◽  
Vol 2020 ◽  
pp. 1-9 ◽  
Author(s):  
Patricio Astudillo ◽  
Peter Mortier ◽  
Johan Bosmans ◽  
Ole De Backer ◽  
Peter de Jaegere ◽  
...  

Anatomic landmark detection is crucial during preoperative planning of transcatheter aortic valve implantation (TAVI) to select the proper device size and assess the risk of complications. Detection is currently a time-consuming manual process influenced by image quality and subject to operator variability. In this work, we propose a novel automatic method to detect the relevant aortic landmarks from MDCT images using deep learning techniques. We trained three convolutional neural networks (CNNs) with 344 multidetector computed tomography (MDCT) acquisitions to detect five anatomical landmarks relevant for TAVI planning: the three basal attachment points of the aortic valve leaflets and the left and right coronary ostia. The detection strategy used these three CNN models to analyse a single MDCT image and yield three segmentation volumes as output. These segmentation volumes were averaged into one final segmentation volume, and the final predicted landmarks were obtained in a postprocessing step. Finally, we constructed the aortic annular plane, defined by the three predicted hinge points, and measured the distances from this plane to the predicted coronary ostia (i.e., the coronary heights). The methodology was validated on 100 patients. The automatic landmark detection found all the landmarks and showed high accuracy, as the median distance between the ground truth and the predictions was lower than the interobserver variation (1.5 mm [1.1–2.1] vs. 2.0 mm [1.3–2.8], with a paired difference of −0.5 ± 1.3 mm and a p value < 0.001). Furthermore, a high correlation was observed between the predicted and manually measured coronary heights (R² = 0.8 for both). The image analysis time per patient was below one second. The proposed method is accurate, fast, and reproducible. Embedding this deep learning-based tool in the preoperative planning routine may have an impact in TAVI environments by reducing time and cost and improving accuracy.
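The coronary-height measurement described above reduces to the distance from each predicted ostium to the plane through the three hinge points. A minimal sketch of that geometry (the coordinates are made up and the `coronary_height` helper is an assumption for illustration):

```python
import numpy as np

def coronary_height(hinge_pts: np.ndarray, ostium: np.ndarray) -> float:
    """Distance from a coronary ostium to the annular plane defined by the
    three aortic-leaflet hinge points (mm, given mm coordinates)."""
    p0, p1, p2 = hinge_pts
    normal = np.cross(p1 - p0, p2 - p0)     # plane normal from two edges
    normal = normal / np.linalg.norm(normal)
    return abs(np.dot(ostium - p0, normal)) # point-to-plane distance

# Toy coordinates (mm): hinge points in the z = 0 plane, ostium 12 mm above.
hinges = np.array([[0.0, 0.0, 0.0], [20.0, 0.0, 0.0], [10.0, 17.0, 0.0]])
left_ostium = np.array([8.0, 6.0, 12.0])
print(coronary_height(hinges, left_ostium))  # -> 12.0
```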


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Yu-Cheng Yeh ◽  
Chi-Hung Weng ◽  
Yu-Jui Huang ◽  
Chen-Ju Fu ◽  
Tsung-Ting Tsai ◽  
...  

Abstract Human spinal balance assessment relies considerably on sagittal radiographic parameter measurement. Deep learning could be applied for automatic landmark detection and alignment analysis, with mild to moderate standard errors and favourable correlations with manual measurement. In this study, based on 2210 annotated images of various spinal disease aetiologies, we developed deep learning models capable of automatically locating 45 anatomic landmarks and subsequently generating 18 radiographic parameters on a whole-spine lateral radiograph. In the assessment of model performance, the localisation accuracy and learning speed were the highest for landmarks in the cervical area, followed by those in the lumbosacral, thoracic, and femoral areas. All the predicted radiographic parameters were significantly correlated with ground truth values (all p < 0.001). The human and artificial intelligence comparison revealed that the deep learning model was capable of matching the reliability of doctors for 15/18 of the parameters. The proposed automatic alignment analysis system was able to localise spinal anatomic landmarks with high accuracy and to generate various radiographic parameters with favourable correlations with manual measurements.
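Radiographic parameters such as Cobb-style angles can be derived from pairs of predicted endplate landmarks. A minimal sketch (the landmark coordinates and the specific parameter are illustrative assumptions; the study generates 18 such parameters from 45 landmarks):

```python
import numpy as np

def angle_between(v1: np.ndarray, v2: np.ndarray) -> float:
    """Angle in degrees between two endplate direction vectors."""
    cosang = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return float(np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0))))

# Toy endplate landmarks on a lateral radiograph (x, y in pixels):
# each endplate is defined by its anterior and posterior corner points.
upper_endplate = np.array([[100.0, 200.0], [160.0, 190.0]])
lower_endplate = np.array([[105.0, 320.0], [165.0, 340.0]])
v_upper = upper_endplate[1] - upper_endplate[0]
v_lower = lower_endplate[1] - lower_endplate[0]
cobb = angle_between(v_upper, v_lower)   # Cobb-style angle between endplates
```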


2020 ◽  
Vol 2 (2) ◽  
Author(s):  
Mangor Pedersen ◽  
Karin Verspoor ◽  
Mark Jenkinson ◽  
Meng Law ◽  
David F Abbott ◽  
...  

Abstract Artificial intelligence is one of the most exciting methodological shifts of our era. It holds the potential to transform healthcare as we know it into a system where humans and machines work together to provide better treatment for our patients. It is now clear that cutting-edge artificial intelligence models in conjunction with high-quality clinical data will lead to improved prognostic and diagnostic models in neurological disease, facilitating expert-level clinical decision tools across healthcare settings. Despite the clinical promise of artificial intelligence, machine and deep-learning algorithms are not a one-size-fits-all solution for all types of clinical data and questions. In this article, we provide an overview of the core concepts of artificial intelligence, particularly contemporary deep-learning methods, to give clinicians and neuroscience researchers an appreciation of how artificial intelligence can be harnessed to support clinical decisions. We clarify and emphasize the data quality and the human expertise needed to build robust clinical artificial intelligence models in neurology. As artificial intelligence is a rapidly evolving field, we take the opportunity to reiterate important ethical principles to guide the field of medicine as it moves into an artificial intelligence-enhanced future.


2020 ◽  
Author(s):  
Deniz Alis ◽  
Mert Yergin ◽  
Ceren Alis ◽  
Cagdas Topel ◽  
Ozan Asmakutlu ◽  
...  

Abstract There is little evidence on the applicability of deep learning (DL) to the segmentation of acute ischemic lesions on diffusion-weighted imaging (DWI) across magnetic resonance imaging (MRI) scanners of different manufacturers. We retrospectively included DWI data of patients with acute ischemic lesions from six centers. Datasets A (n = 2986) and B (n = 3951) included data from Siemens and GE MRI scanners, respectively. The datasets were split into training (80%), validation (10%), and internal test (10%) sets, and six neuroradiologists created the ground-truth masks. Models A and B were the proposed neural networks trained on datasets A and B, and they were also fine-tuned across the datasets using their validation data. Another radiologist performed the segmentation on the test sets for comparison. The median Dice scores of models A and B were 0.858 and 0.857 on the internal tests, which were non-inferior to the radiologist's performance, but both models demonstrated lower performance than the radiologist on the external tests. Fine-tuned models A and B achieved median Dice scores of 0.832 and 0.846, which were non-inferior to the radiologist's performance on the external tests. The present work shows that the inter-vendor operability of deep learning for the segmentation of ischemic lesions on DWI might be enhanced via transfer learning, thereby improving its clinical applicability and generalizability.
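The Dice score used above measures the overlap between a predicted mask and the ground truth. A minimal sketch of its standard definition on binary masks (the toy 2×2 masks are illustrative):

```python
import numpy as np

def dice(pred: np.ndarray, gt: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks:
    2 * |A ∩ B| / (|A| + |B|)."""
    pred = pred.astype(bool)
    gt = gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    denom = pred.sum() + gt.sum()
    return 2.0 * inter / denom if denom else 1.0

# Toy masks: prediction overlaps 3 of the 4 ground-truth voxels.
gt = np.array([[1, 1], [1, 1]])
pred = np.array([[1, 1], [1, 0]])
print(dice(pred, gt))   # -> 0.857...  (2 * 3 / (3 + 4))
```

A score of 1.0 means perfect overlap; the reported medians around 0.83–0.86 indicate substantial but imperfect agreement with the expert masks.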

