Convolutional neural networks for PET functional volume fully automatic segmentation: development and validation in a multi-center setting

Author(s):  
Andrei Iantsen ◽  
Marta Ferreira ◽  
Francois Lucia ◽  
Vincent Jaouen ◽  
Caroline Reinhold ◽  
...  

Abstract Purpose In this work, we addressed the fully automatic determination of tumor functional uptake from positron emission tomography (PET) images without relying on other image modalities or additional prior constraints, in the context of multicenter images with heterogeneous characteristics. Methods In cervical cancer, an additional challenge is the location of the tumor uptake near, or even in contact with, the bladder. PET datasets of 232 patients from five institutions were used. To avoid unreliable manual delineations, the ground truth was generated with a semi-automated approach: a volume containing the tumor and excluding the bladder was first manually determined, then the well-validated Fuzzy Locally Adaptive Bayesian (FLAB) algorithm was applied to generate the ground truth. Our model, built on the U-Net architecture, incorporates residual blocks with concurrent spatial squeeze-and-excitation modules, as well as learnable non-linear downsampling and upsampling blocks. Experiments relied on cross-validation (four institutions for training and validation, and the fifth for testing). Results The model achieved a good Dice similarity coefficient (DSC) with little variability across institutions (0.80 ± 0.03), with higher recall (0.90 ± 0.05) than precision (0.75 ± 0.05), and improved on the standard U-Net (DSC 0.77 ± 0.05, recall 0.87 ± 0.02, precision 0.74 ± 0.08). Both vastly outperformed a fixed threshold at 40% of SUVmax (DSC 0.33 ± 0.15, recall 0.52 ± 0.17, precision 0.30 ± 0.16). In all cases, the model could determine the tumor uptake without including the bladder. Neither shape priors nor anatomical information was required to achieve efficient training. Conclusion The proposed method could facilitate the deployment of a fully automated radiomics pipeline in such a challenging multicenter context.
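The evaluation metrics reported above (DSC, recall, precision) and the fixed-threshold baseline can be sketched in a few lines of NumPy. This is a minimal illustration, not the authors' code; the function names are our own, and the masks are assumed to be binary voxel arrays.

```python
import numpy as np

def dice(pred, gt):
    """Dice similarity coefficient (DSC) between two binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    denom = pred.sum() + gt.sum()
    return 2.0 * np.logical_and(pred, gt).sum() / denom if denom else 1.0

def recall(pred, gt):
    """Fraction of ground-truth voxels recovered by the prediction."""
    gt = gt.astype(bool)
    return np.logical_and(pred.astype(bool), gt).sum() / gt.sum()

def precision(pred, gt):
    """Fraction of predicted voxels that belong to the ground truth."""
    pred = pred.astype(bool)
    return np.logical_and(pred, gt.astype(bool)).sum() / pred.sum()

def threshold_40_suvmax(suv):
    """Fixed-threshold baseline: keep voxels at or above 40% of SUVmax."""
    return suv >= 0.4 * suv.max()
```

Because the threshold is tied to the per-image SUVmax, it is very sensitive to uptake heterogeneity, which is consistent with the poor baseline results reported here.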

2019 ◽  
Vol 2019 ◽  
pp. 1-7 ◽  
Author(s):  
Chen Huang ◽  
Junru Tian ◽  
Chenglang Yuan ◽  
Ping Zeng ◽  
Xueping He ◽  
...  

Objective. Deep vein thrombosis (DVT) is a disease caused by abnormal blood clots in deep veins. Accurate segmentation of DVT is important to facilitate diagnosis and treatment. In the current study, we proposed a fully automatic method for DVT delineation based on deep learning (DL) and contrast-enhanced magnetic resonance imaging (CE-MRI). Methods. Fifty-eight patients (25 males; aged 28–96 years) with newly diagnosed lower extremity DVT were recruited. CE-MRI was acquired on a 1.5 T system. The ground truth (GT) of DVT lesions was manually contoured. A DL network with an encoder-decoder architecture was designed for DVT segmentation. An 8-fold cross-validation strategy was applied for training and testing. The Dice similarity coefficient (DSC) was adopted to evaluate the network's performance. Results. Our CNN model took about 1.5 s to segment a single MRI slice. The mean DSC over the 58 patients was 0.74 ± 0.17 and the median DSC was 0.79. Compared with other DL models, our CNN model achieved better performance in DVT segmentation (0.74 ± 0.17 versus 0.66 ± 0.15, 0.55 ± 0.20, and 0.57 ± 0.22). Conclusion. Our proposed DL method was effective and fast for fully automatic segmentation of lower extremity DVT.
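The 8-fold cross-validation strategy used for training and testing can be sketched as a simple index-splitting routine. This is an illustrative sketch under our own assumptions (shuffled patient indices, near-equal fold sizes), not the authors' pipeline.

```python
import numpy as np

def kfold_indices(n_samples, n_folds=8, seed=0):
    """Yield (train_idx, test_idx) pairs for k-fold cross-validation.

    Indices are shuffled once, split into n_folds near-equal folds,
    and each fold serves as the test set exactly once.
    """
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n_samples)
    folds = np.array_split(idx, n_folds)
    for k in range(n_folds):
        test = folds[k]
        train = np.concatenate([folds[j] for j in range(n_folds) if j != k])
        yield train, test
```

For 58 patients and 8 folds this yields test folds of 7 or 8 patients each, so every patient is evaluated exactly once while never appearing in their own training set.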


2021 ◽  
Author(s):  
Gauthier Dot ◽  
Thomas Schouman ◽  
Guillaume Dubois ◽  
Philippe Rouch ◽  
Laurent Gajny

Objectives To evaluate the performance of the nnU-Net open-source deep learning framework for automatic multi-task segmentation of craniomaxillofacial (CMF) structures in CT scans obtained for computer-assisted orthognathic surgery. Methods Four hundred and fifty-three consecutive patients who had undergone high-definition CT scans before orthognathic surgery were randomly distributed between a training/validation cohort (n = 300) and a testing cohort (n = 153). The ground truth segmentations were generated by two operators following an industry-certified procedure for use in computer-assisted surgical planning and personalized implant manufacturing. Model performance was assessed by comparing model predictions with ground truth segmentations. Examination of 45 CT scans by an industry expert provided additional evaluation. The model's generalizability was tested on a publicly available dataset of 10 CT scans with ground truth segmentations of the mandible. Results In the test cohort, the mean volumetric Dice similarity coefficient (vDSC) and surface Dice similarity coefficient at 1 mm (sDSC) were 0.96 and 0.97 for the upper skull, 0.94 and 0.98 for the mandible, 0.95 and 0.99 for the upper teeth, 0.94 and 0.99 for the lower teeth, and 0.82 and 0.98 for the mandibular canal. Industry expert segmentation approval rates were 93% for the mandible, 89% for the mandibular canal, 82% for the upper skull, 69% for the upper teeth and 58% for the lower teeth. Conclusion While additional efforts are required for the segmentation of dental apices, our results demonstrated the model's reliability in terms of fully automatic segmentation of preoperative orthognathic CT scans.
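Unlike the volumetric DSC, the surface DSC at 1 mm counts the fraction of boundary points of each mask that lie within the tolerance of the other mask's boundary. A minimal sketch of one common formulation, using distance transforms (our own function names; boundaries extracted by a single binary erosion, which assumes solid masks):

```python
import numpy as np
from scipy import ndimage

def surface_dice(a, b, spacing=1.0, tol=1.0):
    """Surface Dice at tolerance `tol` (same units as `spacing`).

    A boundary voxel "matches" if it lies within `tol` of the other
    mask's boundary; the score is the matched fraction of all boundary
    voxels from both masks.
    """
    a, b = a.astype(bool), b.astype(bool)
    surf_a = a & ~ndimage.binary_erosion(a)
    surf_b = b & ~ndimage.binary_erosion(b)
    # Distance from every voxel to the nearest boundary voxel of each mask.
    dist_to_a = ndimage.distance_transform_edt(~surf_a, sampling=spacing)
    dist_to_b = ndimage.distance_transform_edt(~surf_b, sampling=spacing)
    match_a = (dist_to_b[surf_a] <= tol).sum()
    match_b = (dist_to_a[surf_b] <= tol).sum()
    denom = surf_a.sum() + surf_b.sum()
    return (match_a + match_b) / denom if denom else 1.0
```

This explains why the mandibular canal can score a low vDSC (0.82) but a high sDSC (0.98): for thin tubular structures, small boundary offsets destroy volumetric overlap while the surfaces remain within tolerance.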


2021 ◽  
Vol 22 (Supplement_2) ◽  
Author(s):  
S Alabed ◽  
K Karunasaagarar ◽  
F Alandejani ◽  
P Garg ◽  
J Uthoff ◽  
...  

Abstract Funding Acknowledgements Type of funding sources: Foundation. Main funding source(s): Wellcome Trust (UK), NIHR (UK) Introduction Cardiac magnetic resonance (CMR) measurements have significant diagnostic and prognostic value. Accurate and repeatable measurements are essential to assess disease severity, evaluate therapy response and monitor disease progression. Deep learning approaches have shown promise for automatic left ventricular (LV) segmentation on CMR; however, fully automatic right ventricular (RV) segmentation remains challenging. We aimed to develop a biventricular automatic contouring model and evaluate its interstudy repeatability in a prospectively recruited cohort. Methods A deep learning CMR contouring model was developed in a retrospective multi-vendor (Siemens and General Electric), multi-pathology cohort of patients, predominantly with heart failure, pulmonary hypertension and lung diseases (n = 400, ASPIRE registry). Biventricular segmentations were made on all CMR studies across cardiac phases. To test the accuracy of the automatic segmentation, 30 ASPIRE CMRs were segmented independently by two CMR experts. Each segmentation was compared to the automatic contouring, with agreement assessed using the Dice similarity coefficient (DSC). A prospective validation cohort of 46 subjects (10 healthy volunteers and 36 patients with pulmonary hypertension) was recruited to assess interstudy agreement of automatic and manual CMR assessments. Two CMR studies were performed during separate sessions on the same day. Interstudy repeatability was assessed using the intraclass correlation coefficient (ICC) and Bland-Altman plots. Results DSC showed high agreement between automatic and expert CMR readers (figure 1), with minimal bias towards either CMR expert. Scan-scan repeatability was higher for all automatic RV measurements (ICC 0.89 to 0.98) than for manual RV measurements (ICC 0.78 to 0.98). LV automatic and manual measurements were similarly repeatable (figure 2). Bland-Altman plots showed strong agreement with small mean differences between the scan-scan measurements (figure 2). Conclusion Fully automatic biventricular short-axis segmentations are comparable with expert manual segmentations and have shown excellent interstudy repeatability.
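The Bland-Altman analysis used for the scan-scan comparison reduces to computing the mean difference (bias) and the 95% limits of agreement, bias ± 1.96 SD of the differences. A minimal sketch (our own function name; the volume values below are made up for illustration):

```python
import numpy as np

def bland_altman(scan1, scan2):
    """Bias and 95% limits of agreement between paired repeated measurements."""
    scan1 = np.asarray(scan1, float)
    scan2 = np.asarray(scan2, float)
    diff = scan1 - scan2
    bias = diff.mean()
    sd = diff.std(ddof=1)  # sample standard deviation of the differences
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)
```

A small bias with narrow limits of agreement, as reported here, indicates that the two same-day scans yield nearly interchangeable measurements.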


2020 ◽  
Vol 17 (169) ◽  
pp. 20200267
Author(s):  
Arghavan Arafati ◽  
Daisuke Morisawa ◽  
Michael R. Avendi ◽  
M. Reza Amini ◽  
Ramin A. Assadi ◽  
...  

A major issue in translating artificial intelligence platforms for automatic segmentation of echocardiograms to the clinic is their generalizability. The present study introduces and verifies a novel, generalizable and efficient fully automatic multi-label segmentation method for four-chamber view echocardiograms based on deep fully convolutional networks (FCNs) and adversarial training. For the first time, we used generative adversarial networks for pixel classification training, a machine learning method not previously applied to cardiac imaging, to overcome the generalization problem. The method's performance was validated against manual segmentations as the ground truth. Furthermore, to verify our method's generalizability in comparison with other existing techniques, we compared our method's performance with a state-of-the-art method on our dataset, in addition to an independent dataset of 450 patients from the CAMUS (cardiac acquisitions for multi-structure ultrasound segmentation) challenge. On our test dataset, automatic segmentation of all four chambers achieved a Dice metric of 92.1%, 86.3%, 89.6% and 91.4% for the LV, RV, LA and RA, respectively. Correlations between automatic and manual LV volumes were 0.94 and 0.93 for end-diastolic volume and end-systolic volume, respectively. Excellent agreement with the chambers' reference contours and significant improvement over previous FCN-based methods suggest that generative adversarial networks for pixel classification training can effectively yield generalizable fully automatic FCN-based networks for four-chamber segmentation of echocardiograms, even with a limited amount of training data.


2020 ◽  
Vol 62 (12) ◽  
pp. 1637-1648
Author(s):  
Karin Gau ◽  
Charlotte S. M. Schmidt ◽  
Horst Urbach ◽  
Josef Zentner ◽  
Andreas Schulze-Bonhage ◽  
...  

Abstract Purpose Precise segmentation of brain lesions is essential for neurological research. Specifically, resection volume estimates can aid in the assessment of residual postoperative tissue, e.g. following surgery for glioma. Furthermore, behavioral lesion-symptom mapping in epilepsy relies on accurate delineation of surgical lesions. We sought to determine whether semi- and fully automatic segmentation methods can be applied to resected brain areas and which approach provides the most accurate and cost-efficient results. Methods We compared a semi-automatic (ITK-SNAP) with a fully automatic (lesion_GNB) method for segmentation of resected brain areas in terms of accuracy, with manual segmentation serving as the reference. Additionally, we evaluated the processing times of all three methods. We used T1-weighted MRI data of epilepsy patients (n = 27; 11 male; mean age 39 years, range 16–69) who underwent temporal lobe resections (17 left). Results The semi-automatic approach yielded superior accuracy (p < 0.001), with a median Dice similarity coefficient (mDSC) of 0.78 and a median average Hausdorff distance (maHD) of 0.44, compared with the fully automatic approach (mDSC 0.58, maHD 1.32). There was no significant difference between the median percent volume differences of the two approaches (p > 0.05). Manual segmentation required more human input (30.41 min/subject) and therefore incurred significantly higher costs than the semi-automatic (3.27 min/subject) or fully automatic approach (labor and cost approaching zero). Conclusion Semi-automatic segmentation offers the most accurate results in resected brain areas with a moderate amount of human input, thus representing a viable alternative to manual segmentation, especially for studies with large patient cohorts.
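The average Hausdorff distance (aHD) used above is, in one common formulation, the mean of the two directed average nearest-neighbour distances between the boundary point sets; conventions vary across packages (some take the maximum of the two directed averages), so this is an illustrative sketch with our own function name:

```python
import numpy as np

def avg_hausdorff(points_a, points_b):
    """Average Hausdorff distance between two point sets.

    Computed here as the mean of the two directed average distances:
    for each point, find its nearest neighbour in the other set, average
    those distances in each direction, then average the two directions.
    """
    a = np.asarray(points_a, float)
    b = np.asarray(points_b, float)
    # Pairwise Euclidean distances, shape (len(a), len(b)).
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    return 0.5 * (d.min(axis=1).mean() + d.min(axis=0).mean())
```

Averaging makes the metric far less sensitive to single outlier points than the classical (maximum) Hausdorff distance, which is why it is preferred for irregular resection cavities.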


2013 ◽  
Vol 2013 ◽  
pp. 1-13 ◽  
Author(s):  
Daniel Markel ◽  
Curtis Caldwell ◽  
Hamideh Alasti ◽  
Hany Soliman ◽  
Yee Ung ◽  
...  

Target definition is the largest source of geometric uncertainty in radiation therapy. This is partly due to a lack of contrast between tumor and healthy soft tissue for computed tomography (CT) and due to blurriness, lower spatial resolution, and the lack of a truly quantitative unit for positron emission tomography (PET). First-, second-, and higher-order statistics, Tamura features, and structural features were characterized for PET and CT images of lung carcinoma and organs of the thorax. A combined decision tree (DT) with K-nearest neighbours (KNN) classifiers as nodes, each containing combinations of 3 features, was trained and used for segmentation of the gross tumor volume. This approach was validated on 31 patients from two separate institutions and scanners. The results were compared with thresholding approaches, the fuzzy clustering method, the 3-level fuzzy locally adaptive Bayesian algorithm, the multivalued level set algorithm, and a single KNN using Hounsfield units and standard uptake value. The DTKNN classifier had the highest sensitivity (73.9%), the second-highest average Dice coefficient (0.607), and a specificity of 99.2% for classifying voxels, using a probabilistic ground truth provided by simultaneous truth and performance level estimation from contours drawn by 3 trained physicians.
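The KNN node classifiers at the heart of this approach label each voxel by majority vote among its nearest neighbours in a small feature space (here, combinations of 3 texture features). A minimal, generic KNN sketch (our own function name; not the authors' decision-tree wrapper):

```python
import numpy as np

def knn_predict(train_feats, train_labels, query_feats, k=3):
    """Classify each query feature vector by majority vote of its k
    nearest training vectors under Euclidean distance."""
    train = np.asarray(train_feats, float)
    query = np.asarray(query_feats, float)
    labels = np.asarray(train_labels)
    # Pairwise distances, shape (n_query, n_train).
    d = np.linalg.norm(query[:, None, :] - train[None, :, :], axis=-1)
    nearest = np.argsort(d, axis=1)[:, :k]
    preds = []
    for row in labels[nearest]:
        vals, counts = np.unique(row, return_counts=True)
        preds.append(vals[counts.argmax()])
    return np.array(preds)
```

In the paper's design, the decision tree routes each voxel to a node, and that node's KNN (trained on its own 3-feature combination) casts the tumor/non-tumor vote.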


Author(s):  
Alexandr N. Korabelnikov ◽  
Alexandr V. Kolsanov ◽  
Sergey S. Chaplygin ◽  
Pavel M. Zelter ◽  
...  

Anatomical structure segmentation on computed tomography (CT) is the key stage in medical visualization and computer-aided diagnosis. Tumors are one type of internal structure for which automatic segmentation still lacks a solution of fully satisfactory quality. The reasons are the high variance of tumor density and the inability to use a priori anatomical information about shape. In this paper we propose an automatic method for liver tumor segmentation based on convolutional neural networks (CNNs). Training and validation were performed on a set of CT scans with ground-truth liver and tumor segmentations. The average volumetric overlap error (VOE) under cross-validation is 17.3%. Pre- and post-processing algorithms that increase the accuracy and performance of the segmentation procedure were also considered. In particular, a six-fold speed-up of the segmentation procedure was achieved with a negligible decrease in quality.
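The VOE reported above is the complement of the Jaccard index: the fraction of the union of the two masks that is not shared. A minimal sketch (our own function name; masks assumed binary):

```python
import numpy as np

def voe(pred, gt):
    """Volumetric overlap error in percent: 100 * (1 - |A ∩ B| / |A ∪ B|)."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    union = np.logical_or(pred, gt).sum()
    inter = np.logical_and(pred, gt).sum()
    return 100.0 * (1.0 - inter / union) if union else 0.0
```

A VOE of 0% means perfect overlap; the 17.3% reported here means roughly 83% of the combined voxel set is shared between prediction and ground truth.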


2021 ◽  
Vol 10 (15) ◽  
pp. 3347
Author(s):  
Fabien Lareyre ◽  
Cédric Adam ◽  
Marion Carrier ◽  
Juliette Raffort

Background: Computed tomography angiography (CTA) is one of the most commonly used imaging techniques for the management of vascular diseases. Here, we aimed to develop a hybrid method combining a feature-based expert system with a supervised deep learning (DL) algorithm to enable fully automatic segmentation of the abdominal vascular tree. Methods: We proposed an algorithm based on the hybridization of a data-driven convolutional neural network and a knowledge-based model dedicated to vascular system segmentation. Using two distinct CTA patient datasets to evaluate independence from the training dataset, the accuracy of the hybrid method for lumen and thrombus segmentation was evaluated against the feature-based expert system alone and against the ground truth provided by a human expert. Results: The hybrid approach demonstrated better accuracy for lumen segmentation than the expert system alone (volume similarity: 0.8128 vs. 0.7912, p = 0.0006; Dice similarity coefficient: 0.8266 vs. 0.7942, p < 0.0001). The accuracy for thrombus segmentation was also enhanced using the hybrid approach (volume similarity: 0.9404 vs. 0.9185, p = 0.0027; Dice similarity coefficient: 0.8918 vs. 0.8654, p < 0.0001). Conclusions: By enabling robust and fully automatic segmentation, the method could be used to develop real-time decision support to help in the management of vascular diseases.
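The volume similarity metric reported alongside the Dice coefficient compares only the segmented volumes, ignoring where they overlap. A minimal sketch of one standard definition (our own function name):

```python
import numpy as np

def volume_similarity(pred, gt):
    """Volume similarity: 1 - |V_pred - V_gt| / (V_pred + V_gt).

    Note: this compares volumes only, so two equal-sized masks score 1.0
    even with zero spatial overlap; it complements rather than replaces
    overlap metrics such as Dice.
    """
    va = int(pred.astype(bool).sum())
    vb = int(gt.astype(bool).sum())
    total = va + vb
    return 1.0 - abs(va - vb) / total if total else 1.0
```

Reporting both metrics, as done above, separates volumetric bias (over- or under-segmentation) from spatial misplacement.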


2021 ◽  
Vol 51 (2) ◽  
pp. E14
Author(s):  
Tim Fick ◽  
Jesse A. M. van Doormaal ◽  
Lazar Tosic ◽  
Renate J. van Zoest ◽  
Jene W. Meulstee ◽  
...  

OBJECTIVE For currently available augmented reality workflows, 3D models need to be created with manual or semiautomatic segmentation, which is a time-consuming process. The authors created an automatic segmentation algorithm that generates 3D models of skin, brain, ventricles, and contrast-enhancing tumor from a single T1-weighted MR sequence and embedded this model into an automatic workflow for 3D evaluation of anatomical structures with augmented reality in a cloud environment. In this study, the authors validated the accuracy and efficiency of this automatic segmentation algorithm for brain tumors and compared it with a manually segmented ground truth set. METHODS Fifty contrast-enhanced T1-weighted sequences of patients with contrast-enhancing lesions measuring at least 5 cm3 were included. All slices of the ground truth set were manually segmented. The same scans were subsequently run in the cloud environment for automatic segmentation. Segmentation times were recorded. The accuracy of the algorithm was compared with that of manual segmentation and evaluated in terms of the Sørensen-Dice similarity coefficient (DSC), average symmetric surface distance (ASSD), and 95th percentile of the Hausdorff distance (HD95). RESULTS The mean ± SD computation time of the automatic segmentation algorithm was 753 ± 128 seconds. The mean ± SD DSC was 0.868 ± 0.07, ASSD was 1.31 ± 0.63 mm, and HD95 was 4.80 ± 3.18 mm. Meningiomas (mean DSC 0.89, median 0.92) showed a greater DSC than metastases (mean 0.84, median 0.85). Automatic segmentation was more accurate for supratentorial metastases (DSC mean 0.86, median 0.87; HD95 mean 3.62 mm, median 3.11 mm) than for infratentorial metastases (DSC mean 0.82, median 0.81; HD95 mean 5.26 mm, median 4.72 mm). CONCLUSIONS The automatic cloud-based segmentation algorithm is reliable, accurate, and fast enough to aid neurosurgeons in everyday clinical practice by providing 3D augmented reality visualization of contrast-enhancing intracranial lesions measuring at least 5 cm3. The next steps involve incorporation of other sequences and improving accuracy with 3D fine-tuning in order to expand the scope of the augmented reality workflow.
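HD95 replaces the maximum in the classical Hausdorff distance with a 95th percentile, making the metric robust to a few stray surface points. Implementations differ (some take the maximum of the two directed 95th percentiles); the sketch below pools nearest-neighbour distances from both directions, and the function name is our own:

```python
import numpy as np

def hd95(points_a, points_b):
    """95th percentile Hausdorff distance between two point sets:
    the 95th percentile of all nearest-neighbour distances pooled
    from both directions."""
    a = np.asarray(points_a, float)
    b = np.asarray(points_b, float)
    # Pairwise Euclidean distances, shape (len(a), len(b)).
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    nn = np.concatenate([d.min(axis=1), d.min(axis=0)])
    return np.percentile(nn, 95)
```

The 5% trimming is what allows a segmentation with an excellent DSC to still report a few-millimetre HD95 when small boundary regions disagree.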


2021 ◽  
Vol 10 ◽  
Author(s):  
Juebin Jin ◽  
Haiyan Zhu ◽  
Jindi Zhang ◽  
Yao Ai ◽  
Ji Zhang ◽  
...  

Few studies have reported the reproducibility and stability of radiomics features obtained from automatic segmentation of ultrasound (US) images in oncology. The purpose of this study is to assess the accuracy of automatic segmentation algorithms based on multiple U-net models and their effects on radiomics features from US images of patients with ovarian cancer. A total of 469 US images from 127 patients were collected and randomly divided into three groups for building the automatic segmentation models: training sets (353 images), validation sets (23 images), and test sets (93 images). Manual segmentation of the target volumes was delineated as the ground truth. Automatic segmentations were conducted with U-net, U-net++, U-net with Resnet as the backbone (U-net with Resnet), and CE-Net. Python 3.7.0 and the package Pyradiomics 2.2.0 were used to extract radiomic features from the segmented target volumes. The accuracy of the automatic segmentations was evaluated by the Jaccard similarity coefficient (JSC), Dice similarity coefficient (DSC), and average surface distance (ASD). The reliability of the radiomics features was evaluated by Pearson correlation and intraclass correlation coefficients (ICC). CE-Net and U-net with Resnet outperformed U-net and U-net++ in accuracy, achieving a DSC, JSC, and ASD of 0.87, 0.79, and 8.54, and 0.86, 0.78, and 10.00, respectively. A total of 97 features were extracted from the delineated target volumes. The average Pearson correlation was 0.86 (95% CI, 0.83–0.89), 0.87 (95% CI, 0.84–0.90), 0.88 (95% CI, 0.86–0.91), and 0.90 (95% CI, 0.88–0.92) for U-net++, U-net, U-net with Resnet, and CE-Net, respectively. The average ICC was 0.84 (95% CI, 0.81–0.87), 0.85 (95% CI, 0.82–0.88), 0.88 (95% CI, 0.85–0.90), and 0.89 (95% CI, 0.86–0.91) for U-net++, U-net, U-net with Resnet, and CE-Net, respectively. CE-Net based segmentation achieved the best radiomics reliability. In conclusion, U-net based automatic segmentation was accurate enough to delineate the target volumes on US images of patients with ovarian cancer. Radiomics features extracted from automatically segmented targets showed good reproducibility and reliability for further radiomics investigations.

