Deep learning-based segmentation of the lung in MR-images acquired by a stack-of-spirals trajectory at ultra-short echo-times

2021 ◽  
Vol 21 (1) ◽  
Author(s):  
Andreas M. Weng ◽  
Julius F. Heidenreich ◽  
Corona Metz ◽  
Simon Veldhoen ◽  
Thorsten A. Bley ◽  
...  

Abstract Background Functional lung MRI techniques are usually associated with time-consuming post-processing, in which manual lung segmentation is the most cumbersome part. The aim of this study was to investigate whether deep learning-based segmentation of lung images acquired with a fast UTE sequence exploiting the stack-of-spirals trajectory can provide sufficient accuracy for the calculation of functional parameters. Methods In this study, lung images were acquired in 20 patients suffering from cystic fibrosis (CF) and 33 healthy volunteers with a fast UTE sequence using a stack-of-spirals trajectory and a minimum echo-time of 0.05 ms. A convolutional neural network was then trained for semantic lung segmentation using 17,713 2D coronal slices, each paired with a label obtained from manual segmentation. Subsequently, the network was applied to 4920 independent 2D test images and the results were compared to manual segmentation using the Sørensen–Dice similarity coefficient (DSC) and the Hausdorff distance (HD). Lung volumes and fractional ventilation values calculated from both segmentations were compared using Pearson's correlation coefficient and Bland–Altman analysis. To investigate generalizability to patients outside the CF collective, in particular those exhibiting larger consolidations inside the lung, the network was additionally applied to UTE images from four patients with pneumonia and one with lung cancer. Results The overall DSC for lung tissue was 0.967 ± 0.076 (mean ± standard deviation) and the HD was 4.1 ± 4.4 mm. Lung volumes derived from manual and deep learning-based segmentations, as well as values for fractional ventilation, exhibited a high overall correlation (Pearson's correlation coefficient = 0.99 and 1.00). For the additional cohort with unseen pathologies/consolidations, the mean DSC was 0.930 ± 0.083, the HD was 12.9 ± 16.2 mm, and the mean difference in lung volume was 0.032 ± 0.048 L. Conclusions Deep learning-based image segmentation in stack-of-spirals based lung MRI allows for accurate estimation of lung volumes and fractional ventilation values and promises to replace the time-consuming step of manual image segmentation in the future.
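
As a rough illustration of the two overlap metrics reported above, the sketch below computes the Sørensen–Dice coefficient and a symmetric Hausdorff distance for a pair of 2D binary lung masks with NumPy and SciPy. The elliptical toy masks and the pixel spacing are invented placeholders, not data or code from the study.

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def dice_coefficient(pred, ref):
    """Soerensen-Dice coefficient for two boolean masks."""
    pred, ref = pred.astype(bool), ref.astype(bool)
    intersection = np.logical_and(pred, ref).sum()
    return 2.0 * intersection / (pred.sum() + ref.sum())

def hausdorff_mm(pred, ref, spacing_mm=(1.0, 1.0)):
    """Symmetric Hausdorff distance between the two foreground point sets, in mm."""
    pred_pts = np.argwhere(pred) * np.asarray(spacing_mm)
    ref_pts = np.argwhere(ref) * np.asarray(spacing_mm)
    return max(directed_hausdorff(pred_pts, ref_pts)[0],
               directed_hausdorff(ref_pts, pred_pts)[0])

# Toy example: two slightly shifted elliptical "lungs" on a 2D slice
yy, xx = np.mgrid[0:128, 0:128]
manual = ((yy - 64) ** 2 / 900 + (xx - 64) ** 2 / 400) < 1
network = ((yy - 66) ** 2 / 900 + (xx - 64) ** 2 / 400) < 1
print(dice_coefficient(network, manual), hausdorff_mm(network, manual))
```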

2021 ◽  
Vol 11 (4) ◽  
pp. 1965
Author(s):  
Raul-Ronald Galea ◽  
Laura Diosan ◽  
Anca Andreica ◽  
Loredana Popa ◽  
Simona Manole ◽  
...  

Despite the promising results obtained by deep learning methods in the field of medical image segmentation, a lack of sufficient data always hinders performance to a certain degree. In this work, we explore the feasibility of applying deep learning methods to a pilot dataset. We present a simple and practical approach that performs segmentation in a 2D, slice-by-slice manner based on region of interest (ROI) localization, applying an optimized training regime to improve segmentation performance from regions of interest. We start from two popular segmentation networks: U-Net, the preferred model for medical segmentation, and DeepLabV3+, a general-purpose model. Furthermore, we show that ensembling these two fundamentally different architectures brings consistent benefits by testing our approach on two different datasets, the publicly available ACDC challenge and the imATFIB dataset from our in-house clinical study. Results on the imATFIB dataset show that the proposed approach performs well with the provided training volumes, achieving an average whole-heart Dice Similarity Coefficient of 89.89% on the validation set. Moreover, our algorithm achieved a mean Dice value of 91.87% on the ACDC validation set, comparable to the second-best-performing approach in the challenge. Our approach could serve as a building block of a computer-aided diagnostic system in a clinical setting.
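
The ensembling step described above can be pictured with a small sketch: two per-pixel probability maps, one per architecture, are averaged inside a pre-localised ROI and thresholded into a mask. The `predict_unet` and `predict_deeplab` callables and the ROI coordinates are placeholders standing in for the trained U-Net and DeepLabV3+ models, not the authors' implementation.

```python
import numpy as np

def ensemble_slice(prob_unet, prob_deeplab, threshold=0.5):
    """Average the per-pixel foreground probabilities of two models
    and threshold the result into a binary mask."""
    fused = 0.5 * (prob_unet + prob_deeplab)
    return fused >= threshold

def segment_volume(volume, predict_unet, predict_deeplab, roi):
    """Slice-by-slice 2D segmentation restricted to a pre-localised ROI.

    `predict_unet` / `predict_deeplab` must map a 2D crop to a per-pixel
    probability map of the same shape; `roi` is a (y0, y1, x0, x1) box
    from the localisation step.
    """
    y0, y1, x0, x1 = roi
    mask = np.zeros(volume.shape, dtype=bool)
    for z in range(volume.shape[0]):
        crop = volume[z, y0:y1, x0:x1]
        mask[z, y0:y1, x0:x1] = ensemble_slice(predict_unet(crop),
                                               predict_deeplab(crop))
    return mask

# Toy demo with constant-probability stand-ins for the two trained networks
vol = np.zeros((4, 64, 64))
demo_mask = segment_volume(vol,
                           predict_unet=lambda crop: np.full(crop.shape, 0.6),
                           predict_deeplab=lambda crop: np.full(crop.shape, 0.3),
                           roi=(16, 48, 16, 48))
print(demo_mask.sum())
```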


2020 ◽  
Vol 6 (11) ◽  
pp. 125 ◽  
Author(s):  
Albert Comelli ◽  
Claudia Coronnello ◽  
Navdeep Dahiya ◽  
Viviana Benfante ◽  
Stefano Palmucci ◽  
...  

Background: The aim of this work is to identify an automatic, accurate, and fast deep learning segmentation approach, applied to the parenchyma, using a very small dataset of high-resolution computed tomography images of patients with idiopathic pulmonary fibrosis. In this way, we aim to enhance the methodology used by healthcare operators in radiomics studies, where operator-independent segmentation methods are required to correctly identify the target and, consequently, the texture-based prediction model. Methods: Two deep learning models were investigated: (i) U-Net, already used in many biomedical image segmentation tasks, and (ii) E-Net, used for image segmentation tasks in self-driving cars, where hardware availability is limited and accurate segmentation is critical for user safety. Our small image dataset comprised 42 studies of patients with idiopathic pulmonary fibrosis, of which only 32 were used for the training phase. We compared the performance of the two models in terms of the similarity of their segmentation outcomes to the gold standard and in terms of their resource requirements. Results: E-Net can be used to obtain accurate (Dice similarity coefficient = 95.90%), fast (20.32 s), and clinically acceptable segmentation of the lung region. Conclusions: We demonstrated that deep learning models can be efficiently applied to rapidly segment and quantify the parenchyma of patients with pulmonary fibrosis, without any radiologist supervision, in order to produce user-independent results.
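
A minimal sketch of how the two kinds of resource requirements mentioned above (model size and inference time) might be measured in PyTorch. The tiny convolutional stand-in and the synthetic volume are assumptions; the actual U-Net and E-Net implementations from the study would simply be dropped in place of `toy_model`.

```python
import time
import torch

def count_parameters(model: torch.nn.Module) -> int:
    """Total number of trainable parameters."""
    return sum(p.numel() for p in model.parameters() if p.requires_grad)

@torch.no_grad()
def time_inference(model: torch.nn.Module, volume: torch.Tensor, repeats: int = 3) -> float:
    """Average wall-clock time (seconds) to segment one stack of 2D slices."""
    model.eval()
    start = time.perf_counter()
    for _ in range(repeats):
        for img_slice in volume:              # iterate over the slice dimension
            model(img_slice.unsqueeze(0))     # add a batch dimension
    return (time.perf_counter() - start) / repeats

# Throwaway stand-in network and volume, only to make the sketch runnable
toy_model = torch.nn.Sequential(torch.nn.Conv2d(1, 8, 3, padding=1),
                                torch.nn.ReLU(),
                                torch.nn.Conv2d(8, 1, 1))
toy_volume = torch.rand(16, 1, 128, 128)      # 16 slices, 1 channel, 128x128
print(count_parameters(toy_model), time_inference(toy_model, toy_volume))
```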


2021 ◽  
Vol 11 ◽  
Author(s):  
Xi Liu ◽  
Kai-Wen Li ◽  
Ruijie Yang ◽  
Li-Sheng Geng

Lung cancer is the leading cause of cancer-related mortality for both males and females. Radiation therapy (RT) is one of the primary treatment modalities for lung cancer. While delivering the prescribed dose to tumor targets, it is essential to spare the nearby tissues, the so-called organs-at-risk (OARs). Optimal RT planning benefits from accurate segmentation of the gross tumor volume and the surrounding OARs. Manual segmentation is a time-consuming and tedious task for radiation oncologists, so it is crucial to develop automatic image segmentation to relieve them of this contouring work. Currently, the atlas-based automatic segmentation technique is commonly used in clinical routine. However, this technique depends heavily on the similarity between the atlas and the image to be segmented. With the significant advances made in computer vision, deep learning, as a branch of artificial intelligence, has attracted increasing attention in medical image automatic segmentation. In this article, we reviewed deep learning-based automatic segmentation techniques related to lung cancer and compared them with the atlas-based automatic segmentation technique. At present, auto-segmentation of OARs with relatively large volumes, such as the lung and heart, outperforms that of small-volume organs such as the esophagus. The average Dice similarity coefficients (DSCs) of the lung, heart, and liver are above 0.9, and the best DSC of the spinal cord reaches 0.9. However, the DSC of the esophagus ranges between 0.71 and 0.87, with uneven performance. For the gross tumor volume, the average DSC is below 0.8. Although deep learning-based automatic segmentation techniques show clear advantages over manual segmentation in many respects, various issues still need to be solved. We discussed potential issues in deep learning-based automatic segmentation, including low contrast, dataset size, consensus guidelines, and network design. Clinical limitations and future research directions of deep learning-based automatic segmentation were discussed as well.
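
For contrast with the deep learning methods reviewed above, the sketch below shows the core of a heavily simplified, rigid-only atlas-based segmentation: register the atlas image to the target scan, then propagate the atlas labels with nearest-neighbour resampling. It uses SimpleITK; the file paths are placeholders, and a clinical atlas pipeline would normally add deformable registration and multi-atlas label fusion.

```python
import SimpleITK as sitk

def atlas_segment(target_path, atlas_image_path, atlas_labels_path):
    """Propagate atlas OAR labels onto a target CT via rigid-only registration."""
    fixed = sitk.ReadImage(target_path, sitk.sitkFloat32)
    moving = sitk.ReadImage(atlas_image_path, sitk.sitkFloat32)
    labels = sitk.ReadImage(atlas_labels_path)

    initial = sitk.CenteredTransformInitializer(
        fixed, moving, sitk.Euler3DTransform(),
        sitk.CenteredTransformInitializerFilter.GEOMETRY)

    reg = sitk.ImageRegistrationMethod()
    reg.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)
    reg.SetMetricSamplingStrategy(reg.RANDOM)
    reg.SetMetricSamplingPercentage(0.1)
    reg.SetInterpolator(sitk.sitkLinear)
    reg.SetOptimizerAsGradientDescent(learningRate=1.0, numberOfIterations=200)
    reg.SetOptimizerScalesFromPhysicalShift()
    reg.SetInitialTransform(initial, inPlace=False)
    transform = reg.Execute(fixed, moving)

    # Nearest-neighbour resampling keeps the propagated labels integer-valued.
    return sitk.Resample(labels, fixed, transform, sitk.sitkNearestNeighbor, 0,
                         labels.GetPixelID())

# Usage with real (placeholder-named) images:
# seg = atlas_segment("target_ct.nii.gz", "atlas_ct.nii.gz", "atlas_oar_labels.nii.gz")
```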


2021 ◽  
Vol 10 (19) ◽  
pp. 4431
Author(s):  
Chung-Yi Yang ◽  
Yi-Ju Pan ◽  
Yen Chou ◽  
Chia-Jung Yang ◽  
Ching-Chung Kao ◽  
...  

Background: The performance of chest radiography-based age and sex prediction has not been well validated. We used a deep learning model to predict the age and sex of healthy adults based on chest radiographs (CXRs). Methods: In this retrospective study, 66,643 CXRs of 47,060 healthy adults were used for model training and testing. In total, 47,060 individuals (mean age ± standard deviation, 38.7 ± 11.9 years; 22,144 males) were included. With chronological age as the reference, the mean absolute error (MAE), root mean square error (RMSE), and Pearson's correlation coefficient were used to assess model performance. Summarized class activation maps were used to highlight the activated anatomical regions. The area under the curve (AUC) was used to examine the validity of the sex prediction. Results: When model predictions were compared with the chronological ages, the MAE was 2.1 years, the RMSE was 2.8 years, and Pearson's correlation coefficient was 0.97 (p < 0.001). The cervical and thoracic spine, first ribs, aortic arch, heart, rib cage, and the soft tissue of the thorax and flank appeared to be the most important activated regions in the age prediction model. The sex prediction model demonstrated an AUC of > 0.99. Conclusion: Deep learning can accurately estimate age and sex based on CXRs.
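
A minimal sketch of how the reported evaluation metrics (MAE, RMSE, and Pearson's r for age; AUC for sex) can be computed with NumPy, SciPy, and scikit-learn. The synthetic predictions below are placeholders, not outputs of the study's model.

```python
import numpy as np
from scipy.stats import pearsonr
from sklearn.metrics import mean_absolute_error, mean_squared_error, roc_auc_score

rng = np.random.default_rng(0)

# Placeholder predictions; in the study these would come from the CXR model.
true_age = rng.uniform(20, 65, size=500)
pred_age = true_age + rng.normal(0, 2.8, size=500)                  # regression output
true_sex = rng.integers(0, 2, size=500)                             # 0 = female, 1 = male
pred_sex_prob = np.clip(true_sex + rng.normal(0, 0.2, 500), 0, 1)   # classifier score

mae = mean_absolute_error(true_age, pred_age)
rmse = np.sqrt(mean_squared_error(true_age, pred_age))
r, p_value = pearsonr(true_age, pred_age)
auc = roc_auc_score(true_sex, pred_sex_prob)
print(f"MAE={mae:.1f} y  RMSE={rmse:.1f} y  r={r:.2f} (p={p_value:.1g})  AUC={auc:.3f}")
```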


Forests ◽  
2020 ◽  
Vol 11 (12) ◽  
pp. 1366
Author(s):  
Kexin Li ◽  
Xinwang Huang ◽  
Jingzhe Zhang ◽  
Zhihu Sun ◽  
Jianping Huang ◽  
...  

Research Highlights: This paper proposes a new method for hemispherical forest canopy image segmentation. The method is based on a deep learning methodology and provides a robust, fully automatic technique for the segmentation of forest canopy hemispherical photography (CHP) and gap fraction (GF) calculation. Background and Objectives: CHP is widely used to estimate structural forest variables. The GF is the most important parameter for calculating the leaf area index (LAI), and its calculation requires a binary segmentation of the CHP. Materials and Methods: Our method consists of three modules, namely northing correction, valid region extraction, and hemispherical image segmentation. The core procedure is hemispherical canopy image segmentation based on the U-Net convolutional neural network. Our method is compared with traditional threshold methods (e.g., the Otsu and Ridler methods), a fuzzy clustering method (FCM), commercial professional software (WinSCANOPY), and the Habitat-Net network method. Results: The experimental results show that the method presented here achieves a Dice similarity coefficient (DSC) of 89.20% and an accuracy of 98.73%. Conclusions: The method presented here outperforms the Habitat-Net and WinSCANOPY methods, along with the FCM, and it is significantly better than the Otsu and Ridler threshold methods. The method takes the original hemispherical canopy image as input, automatically executes the three modules in sequence, and outputs the binary segmentation map, making it a pipelined, end-to-end method.
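
Once the binary segmentation is available, the gap fraction is simply the share of sky ("gap") pixels inside the valid fisheye footprint. The sketch below computes it and inverts it into an effective LAI using a textbook single-angle Beer–Lambert approximation; the circular footprint, the synthetic canopy mask, and the extinction coefficient are placeholders, and the paper's own GF-to-LAI procedure may differ.

```python
import numpy as np

def gap_fraction(sky_mask, valid_mask):
    """Fraction of sky (gap) pixels inside the valid hemispherical image area.

    sky_mask   : boolean array, True where the segmentation labels sky/gaps.
    valid_mask : boolean array, True inside the circular fisheye footprint.
    """
    return np.logical_and(sky_mask, valid_mask).sum() / valid_mask.sum()

def effective_lai(gf, extinction_coefficient=0.5):
    """Beer-Lambert style inversion of the gap fraction into an effective LAI."""
    return -np.log(gf) / extinction_coefficient

# Toy circular footprint with a synthetic binary canopy map
h = w = 512
yy, xx = np.mgrid[0:h, 0:w]
valid = (yy - h / 2) ** 2 + (xx - w / 2) ** 2 < (h / 2) ** 2
sky = (xx % 7 == 0) & valid            # stand-in for the U-Net's binary output
gf = gap_fraction(sky, valid)
print(gf, effective_lai(gf))
```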


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Kevin H. Leung ◽  
Steven P. Rowe ◽  
Martin G. Pomper ◽  
Yong Du

Abstract Background Diagnosis of Parkinson's disease (PD) is informed by the presence of progressive motor and non-motor symptoms and by imaging of the dopamine transporter with [123I]ioflupane (DaTscan). Deep learning and ensemble methods have recently shown promise in medical image analysis. Therefore, this study aimed to develop a three-stage, deep learning, ensemble approach for prognosis in patients with PD. Methods Retrospective data of 198 patients with PD were retrieved from the Parkinson's Progression Markers Initiative database and randomly partitioned into training, validation, and test sets with 118, 40, and 40 patients, respectively. The first and second stages of the approach extracted features from DaTscan images and from clinical measures of motor symptoms, respectively. The third stage trained an ensemble of deep neural networks on different subsets of the extracted features to predict patient outcome 4 years after the initial baseline screening. The approach was evaluated by assessing the mean absolute percentage error (MAPE), mean absolute error (MAE), Pearson's correlation coefficient, and bias between the predicted and observed motor outcome scores. The approach was compared to individual networks given different data subsets as inputs. Results The ensemble approach yielded a MAPE of 18.36%, an MAE of 4.70, and a Pearson's correlation coefficient of 0.84, and showed no significant bias, indicating accurate outcome prediction. The approach outperformed individual networks that were not given DaTscan imaging or clinical measures of motor symptoms as inputs. Conclusion The approach showed promise for longitudinal prognostication in PD and demonstrated the synergy of imaging and non-imaging information for the prediction task.
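
The third-stage idea, training several networks on different subsets of the extracted features and averaging their predictions, can be sketched with scikit-learn as follows. The feature matrix, the imaging/clinical column split, the network sizes, and the synthetic outcome are all placeholders, not the study's data or architecture.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(42)

# Placeholder features: columns 0-9 stand in for DaTscan-derived features,
# columns 10-14 for clinical motor measures. None of this is study data.
X = rng.normal(size=(118, 15))
y = 20.0 + X[:, :5].sum(axis=1) + rng.normal(0, 0.5, size=118)     # synthetic outcome score

feature_subsets = [slice(0, 10), slice(10, 15), slice(0, 15)]       # imaging / clinical / both
ensemble = []
for subset in feature_subsets:
    net = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0)
    net.fit(X[:, subset], y)
    ensemble.append((subset, net))

# Evaluate the averaged ensemble prediction with MAPE on a synthetic test set
X_test = rng.normal(size=(40, 15))
y_test = 20.0 + X_test[:, :5].sum(axis=1) + rng.normal(0, 0.5, size=40)
pred = np.mean([net.predict(X_test[:, subset]) for subset, net in ensemble], axis=0)
mape = 100.0 * np.mean(np.abs(pred - y_test) / np.abs(y_test))
print(f"ensemble MAPE: {mape:.2f}%")
```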


2021 ◽  
Vol 2021 ◽  
pp. 1-9
Author(s):  
Xu Fu ◽  
Huaiqin Liu ◽  
Xiaowang Bi ◽  
Xiao Gong

This study focused on the application of deep learning algorithms to the segmentation of CT images in order to diagnose chronic kidney disease accurately and quantitatively. First, a residual dual-attention module (RDA module) was used for automatic segmentation of renal cysts in CT images. 79 patients with renal cysts were selected as research subjects, of whom 27 were assigned to the test group and 52 to the training group. The segmentation results of the test group were evaluated in terms of the Dice similarity coefficient (DSC), precision, and recall. The experimental results showed that the loss function of the RDA-UNET model decayed rapidly and converged, and that the model's segmentation results closely matched the manual labels, indicating that the model segmented images with high accuracy and delineated the kidney contour accurately. The RDA-UNET model achieved 96.25% DSC, 96.34% precision, and 96.88% recall for the left kidney and 94.22% DSC, 95.34% precision, and 94.61% recall for the right kidney, outperforming the comparison algorithms on every evaluation index. In conclusion, the RDA-UNET model can effectively improve the accuracy of CT image segmentation and is worth promoting for the quantitative assessment of chronic kidney disease on CT imaging.
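
For reference, the pixel-wise definitions of the reported metrics (DSC, precision, recall), evaluated per label on a multi-class kidney mask, might look like the sketch below; the toy masks and label values are placeholders, not the RDA-UNET pipeline.

```python
import numpy as np

def per_label_scores(pred, ref, label):
    """Pixel-wise DSC, precision and recall for one label of a multi-class mask."""
    p, r = (pred == label), (ref == label)
    tp = np.logical_and(p, r).sum()
    fp = np.logical_and(p, ~r).sum()
    fn = np.logical_and(~p, r).sum()
    dsc = 2 * tp / (2 * tp + fp + fn) if tp + fp + fn else 1.0
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return dsc, precision, recall

# Toy 2D example: 0 = background, 1 = left kidney, 2 = right kidney
ref = np.zeros((64, 64), dtype=int)
ref[10:30, 5:20] = 1
ref[10:30, 44:59] = 2
pred = np.roll(ref, shift=1, axis=0)          # stand-in network output, slightly shifted
for label, name in [(1, "left kidney"), (2, "right kidney")]:
    print(name, per_label_scores(pred, ref, label))
```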


2021 ◽  
Author(s):  
Richard C. Davis ◽  
Xiang Li ◽  
Yuemei Xu ◽  
Zehan Wang ◽  
Nao Souma ◽  
...  

Purpose: Recent advances in computational image analysis offer the opportunity to develop automatic quantification of histologic parameters as aids for practicing pathologists. This work aims to develop deep learning (DL) models to quantify non-sclerotic and sclerotic glomeruli on frozen sections from donor kidney biopsies. Approach: A total of 258 whole slide images (WSI) from cadaveric donor kidney biopsies performed at our institution (n=123) and at external institutions (n=135) were used in this study. WSIs from our institution were divided at the patient level into training and validation datasets (ratio 0.8:0.2), and external WSIs were used as an independent testing dataset. Non-sclerotic (n=22767) and sclerotic (n=1366) glomeruli were manually annotated by study pathologists on all WSIs. A 9-layer convolutional neural network based on the common U-Net architecture was developed and tested for the segmentation of non-sclerotic and sclerotic glomeruli. DL-derived segmentation, manual segmentation, and the reported glomerular count (standard of care) were compared. Results: The average Dice Similarity Coefficients on the testing dataset were 0.90 and 0.83, and the F1, recall, and precision scores were 0.93, 0.96, and 0.90, and 0.87, 0.93, and 0.81, for non-sclerotic and sclerotic glomeruli, respectively. Glomerular counts derived from DL and manual segmentation were comparable with each other, but statistically different from the reported glomerular count. Conclusions: DL segmentation is a feasible and robust approach for automatic quantification of glomeruli. This work represents a first step toward new protocols for the evaluation of donor kidney biopsies.
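
Deriving a glomerular count from a segmentation mask typically reduces to counting connected components. The sketch below shows one plausible way to do this with SciPy; the minimum-size filter and the toy mask are assumptions, and in the study counts would be taken per class (non-sclerotic vs. sclerotic) from the U-Net output.

```python
import numpy as np
from scipy import ndimage

def count_objects(mask, min_pixels=50):
    """Count connected components in a binary mask, ignoring tiny fragments."""
    labelled, n = ndimage.label(mask)
    sizes = ndimage.sum(mask, labelled, index=range(1, n + 1))
    return int(np.sum(np.asarray(sizes) >= min_pixels))

# Toy mask with three blobs, one of them too small to count
mask = np.zeros((200, 200), dtype=bool)
mask[20:60, 20:60] = True
mask[100:150, 120:170] = True
mask[180:183, 180:183] = True        # fragment below min_pixels
print(count_objects(mask))           # -> 2
```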


2021 ◽  
Vol 15 ◽  
Author(s):  
Kaisar Kushibar ◽  
Mostafa Salem ◽  
Sergi Valverde ◽  
Àlex Rovira ◽  
Joaquim Salvi ◽  
...  

Segmentation of brain images from Magnetic Resonance Imaging (MRI) is an indispensable step in clinical practice. Morphological changes of sub-cortical brain structures and quantification of brain lesions are considered biomarkers of neurological and neurodegenerative disorders and are used for diagnosis, treatment planning, and monitoring disease progression. In recent years, deep learning methods have shown outstanding performance in medical image segmentation. However, these methods suffer from a generalisability problem due to inter-centre and inter-scanner variability of MRI images. The main objective of this study is to develop an automated deep learning segmentation approach that is accurate and robust to variability in scanners and acquisition protocols. In this paper, we propose a transductive transfer learning approach for domain adaptation to reduce the domain-shift effect in brain MRI segmentation. The transductive scenario assumes that there are sets of images from two different domains: (1) source images, with manually annotated labels; and (2) target images, without expert annotations. The network is then jointly optimised by integrating both source and target images into the transductive training process, to segment the regions of interest and to minimise the domain-shift effect. We propose using a histogram loss at the feature level to carry out the latter optimisation. To demonstrate the benefit of the proposed approach, the method was tested on two different brain MRI segmentation problems using multi-centre and multi-scanner databases: (1) sub-cortical brain structure segmentation; and (2) white matter hyperintensities segmentation. The experiments showed that the segmentation performance of a pre-trained model could be improved significantly, by up to 10%. For the first segmentation problem, the average Dice Similarity Coefficient (DSC) improved from 0.680 to a maximum of 0.799, and for the second problem the average DSC improved from 0.504 to 0.602. Moreover, the results after domain adaptation were on par with or better than the commonly used traditional unsupervised segmentation methods (FIRST and LST), while also achieving faster execution times. Taking this into account, this work presents one more step toward the practical implementation of deep learning algorithms in the clinical routine.
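
One plausible form of a feature-level histogram loss is a differentiable, softly binned histogram of the activations from each domain, with a distance penalty between the two. The PyTorch sketch below illustrates the idea; the binning range, kernel bandwidth, and L1 distance are assumptions and may differ from the loss actually used in the paper.

```python
import torch

def soft_histogram(features, bins=32, vmin=-3.0, vmax=3.0, bandwidth=0.1):
    """Differentiable histogram of flattened feature activations.

    Each activation is softly assigned to every bin with a Gaussian kernel,
    so gradients can flow back into the network during training.
    """
    centers = torch.linspace(vmin, vmax, bins, device=features.device)
    diff = features.reshape(-1, 1) - centers.reshape(1, -1)
    weights = torch.exp(-0.5 * (diff / bandwidth) ** 2)
    hist = weights.sum(dim=0)
    return hist / (hist.sum() + 1e-8)

def histogram_loss(source_features, target_features):
    """L1 distance between the soft feature histograms of the two domains."""
    return torch.abs(soft_histogram(source_features)
                     - soft_histogram(target_features)).sum()

# Toy check: similar distributions give a small loss, shifted ones a larger one.
src = torch.randn(4, 64, 16, 16)             # e.g. a batch of encoder feature maps
tgt_same, tgt_shifted = torch.randn_like(src), torch.randn_like(src) + 1.0
print(histogram_loss(src, tgt_same).item(), histogram_loss(src, tgt_shifted).item())
```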


PLoS ONE ◽  
2021 ◽  
Vol 16 (9) ◽  
pp. e0257371
Author(s):  
Daisuke Nishiyama ◽  
Hiroshi Iwasaki ◽  
Takaya Taniguchi ◽  
Daisuke Fukui ◽  
Manabu Yamanaka ◽  
...  

Accurate gluteus medius (GMd) volume evaluation may aid in the analysis of muscular atrophy states and help improve understanding of patient recovery via rehabilitation. However, segmentation of muscle regions in GMd images for cubic muscle volume assessment is time-consuming and labor-intensive. This study automated GMd-region segmentation from computed tomography (CT) images of patients diagnosed with hip osteoarthritis using deep learning and evaluated the segmentation accuracy. To this end, 5250 augmented pairs of training data were obtained from five participants, and a conditional generative adversarial network was used to learn the relationships between the image pairs. Using the reserved test datasets, the results of automatic segmentation with the trained deep learning model were compared to those of manual segmentation in terms of the Dice similarity coefficient (DSC), volume similarity (VS), and shape similarity (MS). The average DSC values for automatic and manual segmentation were 0.748 and 0.812, respectively, with a significant difference (p < 0.0001); the average VS values were 0.247 and 0.203, respectively, with no significant difference (p = 0.069); and the average MS values were 1.394 and 1.156, respectively, with no significant difference (p = 0.308). The GMd volumes obtained by automatic and manual segmentation were 246.2 cm3 and 282.9 cm3, respectively. The noninferiority of the DSC obtained by automatic segmentation was verified against that obtained by manual segmentation. Accordingly, the proposed GAN-based automatic GMd-segmentation technique is confirmed to be noninferior to manual segmentation. Therefore, the findings of this research confirm that the proposed method not only reduces time and effort but also facilitates accurate assessment of cubic muscle volume.
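
For orientation, the sketch below shows how a binary mask is converted into a volume in cm3 from the voxel spacing, together with one common definition of volume similarity, 1 - |Va - Vb| / (Va + Vb); the paper may use a different VS convention, and the toy masks and spacing here are placeholders rather than study data.

```python
import numpy as np

def mask_volume_cm3(mask, spacing_mm):
    """Volume of a binary mask given voxel spacing in millimetres."""
    voxel_mm3 = float(np.prod(spacing_mm))
    return mask.sum() * voxel_mm3 / 1000.0          # mm^3 -> cm^3

def volume_similarity(vol_a, vol_b):
    """One common VS definition: 1 - |Va - Vb| / (Va + Vb)."""
    return 1.0 - abs(vol_a - vol_b) / (vol_a + vol_b)

# Toy masks standing in for automatic and manual GMd segmentations
auto = np.zeros((40, 64, 64), dtype=bool)
manual = np.zeros_like(auto)
auto[5:35, 10:50, 10:50] = True
manual[5:36, 10:52, 10:50] = True
spacing = (2.0, 1.0, 1.0)                           # slice thickness, row, column (mm)
v_auto, v_manual = mask_volume_cm3(auto, spacing), mask_volume_cm3(manual, spacing)
print(v_auto, v_manual, volume_similarity(v_auto, v_manual))
```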

