Deep Learning–based Automatic Lung Segmentation on Multiresolution CT from Healthy and Fibrotic Lungs in Mice

Author(s):  
Francesco Sforazzini ◽  
Patrick Salome ◽  
Mahmoud Moustafa ◽  
Cheng Zhou ◽  
Christian Schwager ◽  
...  
2021 ◽  
Vol 21 (1) ◽  
Author(s):  
Andreas M. Weng ◽  
Julius F. Heidenreich ◽  
Corona Metz ◽  
Simon Veldhoen ◽  
Thorsten A. Bley ◽  
...  

Abstract
Background: Functional lung MRI techniques are usually associated with time-consuming post-processing, in which manual lung segmentation is the most cumbersome step. The aim of this study was to investigate whether deep learning-based segmentation of lung images acquired with a fast UTE sequence using a stack-of-spirals trajectory provides sufficient accuracy for the calculation of functional parameters.
Methods: Lung images were acquired in 20 patients with cystic fibrosis (CF) and 33 healthy volunteers using a fast UTE sequence with a stack-of-spirals trajectory and a minimum echo time of 0.05 ms. A convolutional neural network was trained for semantic lung segmentation on 17,713 2D coronal slices, each paired with a label obtained from manual segmentation. The network was then applied to 4920 independent 2D test images, and the results were compared with manual segmentation using the Sørensen–Dice similarity coefficient (DSC) and the Hausdorff distance (HD). Lung volumes and fractional ventilation values calculated from both segmentations were compared using Pearson's correlation coefficient and Bland–Altman analysis. To investigate generalizability to patients outside the CF collective, in particular those exhibiting larger consolidations inside the lung, the network was additionally applied to UTE images from four patients with pneumonia and one patient with lung cancer.
Results: The overall DSC for lung tissue was 0.967 ± 0.076 (mean ± standard deviation) and the HD was 4.1 ± 4.4 mm. Lung volumes derived from manual and deep learning-based segmentations, as well as fractional ventilation values, showed high overall correlation (Pearson's correlation coefficient = 0.99 and 1.00). For the additional cohort with unseen pathologies/consolidations, the mean DSC was 0.930 ± 0.083, the HD was 12.9 ± 16.2 mm, and the mean difference in lung volume was 0.032 ± 0.048 L.
Conclusions: Deep learning-based image segmentation of stack-of-spirals lung MRI allows accurate estimation of lung volumes and fractional ventilation values and promises to replace the time-consuming step of manual image segmentation in the future.
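As a concrete illustration of the two overlap metrics used above, the following is a minimal sketch (not the authors' code) of how the Sørensen–Dice coefficient and a Hausdorff distance could be computed between a predicted and a manual binary lung mask; the function names, pixel-spacing handling, and toy data are illustrative assumptions, and the Hausdorff distance is taken over all foreground pixels rather than extracted boundaries for simplicity.

```python
# Minimal sketch (not the authors' code): DSC and Hausdorff distance
# between two binary 2D lung masks, as used to compare automatic and
# manual segmentations. Names and toy data are illustrative assumptions.
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def dice_coefficient(pred: np.ndarray, ref: np.ndarray) -> float:
    """DSC = 2|A ∩ B| / (|A| + |B|) for boolean masks."""
    pred, ref = pred.astype(bool), ref.astype(bool)
    denom = pred.sum() + ref.sum()
    return 1.0 if denom == 0 else 2.0 * np.logical_and(pred, ref).sum() / denom

def hausdorff_distance(pred: np.ndarray, ref: np.ndarray, spacing_mm: float = 1.0) -> float:
    """Symmetric Hausdorff distance between the foreground point sets, in mm.
    (Computed on all foreground pixels; boundary extraction is more standard.)"""
    a = np.argwhere(pred.astype(bool))
    b = np.argwhere(ref.astype(bool))
    hd = max(directed_hausdorff(a, b)[0], directed_hausdorff(b, a)[0])
    return hd * spacing_mm

# Toy example: two overlapping squares in a 64 x 64 slice.
ref = np.zeros((64, 64), dtype=bool); ref[16:48, 16:48] = True
pred = np.zeros((64, 64), dtype=bool); pred[18:50, 18:50] = True
print(dice_coefficient(pred, ref), hausdorff_distance(pred, ref))
```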


Diagnostics ◽  
2021 ◽  
Vol 11 (11) ◽  
pp. 2025
Author(s):  
Jasjit S. Suri ◽  
Sushant Agarwal ◽  
Pranav Elavarthi ◽  
Rajesh Pathak ◽  
Vedmanvitha Ketireddy ◽  
...  

Background: For assessing COVID-19 lung severity, segmentation of the lungs on computed tomography (CT) is the first crucial step. Current deep learning (DL)-based Artificial Intelligence (AI) models have a bias in the training stage of segmentation because only one set of ground truth (GT) annotations is evaluated. We propose a robust and stable inter-variability analysis of CT lung segmentation in COVID-19 to avoid the effect of this bias.
Methodology: The proposed inter-variability study uses two GT tracers for lung segmentation on chest CT. Three AI models, PSP Net, VGG-SegNet, and ResNet-SegNet, were trained using the GT annotations. We hypothesized that if AI models are trained on GT tracings from multiple experience levels, and if the performance of these models on the test data lies within a 5% range, such an AI model can be considered robust and unbiased. The K5 protocol (training to testing: 80%:20%) was adopted, and ten metrics were used for performance evaluation.
Results: The database consisted of 5000 chest CT images from 72 COVID-19-infected patients. By computing the coefficient of correlation (CC) between the outputs of the two AI models trained on the two GT tracers, taking the difference between their CCs, and repeating the process for all three AI models, we obtained differences of 0%, 0.51%, and 2.04% (all < 5%), thereby validating the hypothesis. Performance was comparable across models, in the order ResNet-SegNet > PSP Net > VGG-SegNet.
Conclusions: The AI models were clinically robust and stable under the inter-variability analysis of CT lung segmentation in COVID-19 patients.
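A hedged sketch of one way the inter-variability check described above could be organised (not the authors' implementation): for a single architecture, the correlation of predicted per-image lung areas against each tracer's ground truth is computed, and the relative difference between the two correlations is compared with the 5% tolerance. The area-based comparison, the helper names, and the exact reading of the protocol are assumptions.

```python
# Illustrative sketch of an inter-variability check for one architecture;
# the per-image lung-area comparison and threshold handling are assumptions.
import numpy as np

def lung_areas(masks):
    """Per-image foreground pixel counts for a list of binary masks."""
    return np.array([m.astype(bool).sum() for m in masks], dtype=float)

def pearson_cc(x, y):
    return float(np.corrcoef(x, y)[0, 1])

def intervariability(pred_t1, pred_t2, gt_t1, gt_t2, tolerance_pct=5.0):
    """pred_t*: test-set masks from the model trained on tracer * GT;
    gt_t*: the corresponding tracer's ground-truth masks."""
    cc1 = pearson_cc(lung_areas(pred_t1), lung_areas(gt_t1))
    cc2 = pearson_cc(lung_areas(pred_t2), lung_areas(gt_t2))
    diff_pct = abs(cc1 - cc2) / max(cc1, cc2) * 100.0
    return cc1, cc2, diff_pct, diff_pct < tolerance_pct
```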


Author(s):  
Yuki Kitahara ◽  
Rie Tanaka ◽  
Holger Roth ◽  
Hirohisa Oda ◽  
Kensaku Mori ◽  
...  

Sensors ◽  
2020 ◽  
Vol 20 (23) ◽  
pp. 6711
Author(s):  
Luís Fabrício de Freitas Souza ◽  
Iágson Carlos Lima Silva ◽  
Adriell Gomes Marques ◽  
Francisco Hércules dos S. Silva ◽  
Virgínia Xavier Nunes ◽  
...  

Several pathologies have a direct impact on society, causing public health problems. Pulmonary diseases such as chronic obstructive pulmonary disease (COPD) are already the third leading cause of death in the world, while tuberculosis ranks ninth, with 1.7 million deaths and over 10.4 million new occurrences. The detection of lung regions in images is a classic medical challenge. Studies show that computational methods contribute significantly to the medical diagnosis of lung pathologies from Computerized Tomography (CT), as well as through Internet of Things (IoT) methods in the health-of-things context. The present work proposes a new IoT-based model for the classification and segmentation of pulmonary CT images, applying transfer learning to deep learning methods combined with Parzen's probability density. The proposed model uses an Application Programming Interface (API) based on the Internet of Medical Things to classify lung images, and the approach was very effective, with classification accuracy above 98% on pulmonary images. The model then proceeds to the lung segmentation stage, using a Mask R-CNN network to create a pulmonary map and fine-tuning to find the pulmonary borders on the CT image. The proposed method performed better than other works in the literature, reaching high segmentation metric values, such as an accuracy of 98.34% and a segmentation time of 5.43 s, and surpassing other transfer learning models. Our methodology also stands out because it is fully automatic. The proposed approach simplifies the segmentation process through transfer learning and introduces a faster, more effective, better-performing method for lung segmentation, making the model fully automatic and robust.
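The segmentation stage described above relies on fine-tuning a Mask R-CNN. The following is a minimal sketch using torchvision (not the paper's IoT pipeline, classification stage, or Parzen-density step) of how the pretrained prediction heads could be replaced for a two-class (background/lung) problem before fine-tuning; the hidden-layer size and the weights flag are assumptions about a recent torchvision version.

```python
# Minimal sketch: adapting a COCO-pretrained Mask R-CNN for lung segmentation
# (background + lung). Not the authors' code; torchvision >= 0.13 assumed.
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor
from torchvision.models.detection.mask_rcnn import MaskRCNNPredictor

num_classes = 2  # background + lung

# Start from pretrained weights (transfer learning).
model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT")

# Replace the box classification head for the new number of classes.
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)

# Replace the mask prediction head as well.
in_channels = model.roi_heads.mask_predictor.conv5_mask.in_channels
model.roi_heads.mask_predictor = MaskRCNNPredictor(in_channels, 256, num_classes)

# Fine-tuning would then proceed with a standard detection training loop
# over CT slices paired with lung masks and bounding boxes.
```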


2021 ◽  
Author(s):  
Young-Gon Kim ◽  
Kyungsang Kim ◽  
Dufan Wu ◽  
Hui Ren ◽  
Won Young Tak ◽  
...  

Abstract: Imaging plays an important role in assessing the severity of COVID-19 pneumonia. Recent COVID-19 research indicates that, in many cases, disease progression propagates from the bottom of the lungs to the top. However, semantic interpretation of chest radiography (CXR) findings does not provide a quantitative description of radiographic opacities, and existing AI-assisted CXR image analysis frameworks do not quantify severity regionally. To address this issue, we propose a deep learning-based four-region lung segmentation method to assist accurate quantification of COVID-19 pneumonia. Specifically, a segmentation model that separates the left and right lungs is applied first, and a carina and left hilum detection network is then used to separate the upper and lower lungs. To improve segmentation performance on COVID-19 images, an ensemble strategy of five models is used. For each region, we evaluated the clinical relevance of the proposed method against the Radiographic Assessment of Lung Edema (RALE) score. The proposed ensemble strategy achieved a Dice score of 0.900, outperforming the conventional methods. Mean intensities of the four segmented regions correlated positively with the extent and density scores of pulmonary opacities in the RALE framework. The proposed method can therefore accurately segment the four lung regions and quantify regional pulmonary opacities in COVID-19 pneumonia patients.
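As a hedged sketch of the regional quantification step described above (not the authors' implementation): once a left/right lung label map and a landmark row, e.g. at the carina/left-hilum level, are available, each lung can be split into upper and lower fields and the mean intensity per region reported. The label convention, the single split row, and the function names are illustrative assumptions.

```python
# Illustrative sketch: split a left/right lung label map into four regions
# at a landmark row and report the mean CXR intensity per region.
import numpy as np

def four_region_means(image, lung_labels, split_row):
    """image: 2D CXR array; lung_labels: 0=background, 1=right lung, 2=left lung;
    split_row: row index separating upper from lower lung fields."""
    rows = np.arange(image.shape[0])[:, None]  # broadcast over columns
    regions = {
        "right_upper": (lung_labels == 1) & (rows < split_row),
        "right_lower": (lung_labels == 1) & (rows >= split_row),
        "left_upper":  (lung_labels == 2) & (rows < split_row),
        "left_lower":  (lung_labels == 2) & (rows >= split_row),
    }
    return {name: float(image[mask].mean()) if mask.any() else float("nan")
            for name, mask in regions.items()}
```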

