An artificial intelligence framework for automatic segmentation and volumetry of vestibular schwannomas from contrast-enhanced T1-weighted and high-resolution T2-weighted MRI

2019 ◽  
pp. 1-9 ◽  
Author(s):  
Jonathan Shapey ◽  
Guotai Wang ◽  
Reuben Dorent ◽  
Alexis Dimitriadis ◽  
Wenqi Li ◽  
...  

OBJECTIVE Automatic segmentation of vestibular schwannomas (VSs) from MRI could significantly improve clinical workflow and assist in patient management. Accurate tumor segmentation and volumetric measurements provide the best indicators to detect subtle VS growth, but current techniques are labor intensive and dedicated software is not readily available within the clinical setting. The authors aim to develop a novel artificial intelligence (AI) framework to be embedded in the clinical routine for automatic delineation and volumetry of VS. METHODS Imaging data (contrast-enhanced T1-weighted [ceT1] and high-resolution T2-weighted [hrT2] MR images) from all patients meeting the study's inclusion/exclusion criteria who had a single sporadic VS treated with Gamma Knife stereotactic radiosurgery were used to create a model. The authors developed a novel AI framework based on a 2.5D convolutional neural network (CNN) to exploit the different in-plane and through-plane resolutions encountered in standard clinical imaging protocols. They used a computational attention module to enable the CNN to focus on the small VS target and proposed a supervision on the attention map for more accurate segmentation. The manually segmented target tumor volume (also tested for interobserver variability) was used as the ground truth for training and evaluation of the CNN. The authors quantitatively measured the Dice score, average symmetric surface distance (ASSD), and relative volume error (RVE) of the automatic segmentation results in comparison to manual segmentations to assess the model's accuracy. RESULTS Imaging data from all eligible patients (n = 243) were randomly split into 3 nonoverlapping groups for training (n = 177), hyperparameter tuning (n = 20), and testing (n = 46). Dice, ASSD, and RVE scores were measured on the testing set for the respective input data types as follows: ceT1 93.43%, 0.203 mm, 6.96%; hrT2 88.25%, 0.416 mm, 9.77%; combined ceT1/hrT2 93.68%, 0.199 mm, 7.03%.
Given a margin of 5% for the Dice score, the automated method achieved statistically equivalent performance to an annotator using ceT1 images alone (p = 4e−13) and combined ceT1/hrT2 images (p = 7e−18) as inputs. CONCLUSIONS The authors developed a robust AI framework for automatically delineating and calculating VS tumor volume and achieved excellent results, equivalent to those of an independent human annotator. This promising AI technology has the potential to improve the management of patients with VS and potentially other brain tumors.
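The Dice score and RVE reported above have simple set-based definitions; a minimal Python sketch (illustrative only, not the authors' implementation), treating each segmentation as a set of voxel coordinates:

```python
def dice_score(pred, truth):
    """Dice overlap between two sets of voxel coordinates (0..1)."""
    inter = len(pred & truth)
    return 2.0 * inter / (len(pred) + len(truth))

def relative_volume_error(pred, truth):
    """Relative volume error (RVE) as a fraction of the reference volume."""
    return abs(len(pred) - len(truth)) / len(truth)

# Toy 2D example: two equal-sized square "tumors", one shifted by a voxel
truth = {(x, y) for x in range(10) for y in range(10)}     # 100 voxels
pred = {(x, y) for x in range(1, 11) for y in range(10)}   # 100 voxels, shifted
print(dice_score(pred, truth))             # 0.9
print(relative_volume_error(pred, truth))  # 0.0
```

Note that the shifted mask illustrates why overlap metrics matter: the RVE here is zero even though the boundaries disagree, which is what the ASSD and Dice measures capture.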

2021 ◽  
Vol 3 (Supplement_1) ◽  
pp. i1-i1
Author(s):  
Gilbert Hangel ◽  
Cornelius Cadrien ◽  
Philipp Lazen ◽  
Sukrit Sharma ◽  
Julia Furtner ◽  
...  

Abstract OBJECTIVES Neurosurgical resection in gliomas depends on the precise preoperative definition of the tumor and its margins to realize a safe maximum resection that translates into a better patient outcome. New metabolic imaging techniques could improve this delineation as well as designate targets for biopsies. We validated the performance of our fast high-resolution whole-brain 3D-magnetic resonance spectroscopic imaging (MRSI) method at 7T in high-grade gliomas (HGGs) as a first step in this regard. METHODS We measured 23 patients with HGGs at 7T with MRSI covering the whole cerebrum with 3.4mm isotropic resolution in 15 min. Quantification used a basis set of 17 neurochemical components. They were evaluated for their reliability/quality and compared to neuroradiologically segmented tumor regions-of-interest (necrosis, contrast-enhanced, non-contrast-enhanced+edema, peritumoral) and histopathology (e.g., grade, IDH-status). RESULTS We found 18/23 measurements to be usable and ten neurochemicals quantified with acceptable quality. The most common denominators were increases of glutamine, glycine, and total choline as well as decreases of N-acetyl-aspartate and total creatine over most tumor regions. Other metabolites like taurine and serine showed mixed behavior. We further found that heterogeneity in the metabolic images often continued into the peritumoral region. While 2-hydroxy-glutarate could not be satisfyingly quantified, we found a tendency for a decrease of glutamate in IDH1-mutant HGGs. DISCUSSION Our findings corresponded well to clinical tumor segmentation but were more heterogeneous and often extended into the peritumoral region. Our results corresponded to previous knowledge, but at a resolution not previously feasible. Apart from glycine/glutamine and their role in glioma progression, more research on the connection of glutamate and others to specific mutations is necessary.
The addition of low-grade gliomas and statistical ROI analysis in a larger cohort will be the next important steps to define the benefits of our 7T MRSI approach for the definition of spatial metabolic tumor profiles.


2020 ◽  
Author(s):  
Elisabeth Pfaehler ◽  
Liesbet Mesotten ◽  
Gem Kramer ◽  
Michiel Thomeer ◽  
Karolien Vanhove ◽  
...  

Abstract Background: Positron Emission Tomography (PET) is routinely used for cancer staging and treatment follow up. Metabolic active tumor volume (MATV) as well as total MATV (TMATV - including primary tumor, lymph nodes and metastasis) and/or total lesion glycolysis (TLG) derived from PET images have been identified as prognostic factor or for the evaluation of treatment efficacy in cancer patients. To this end, a segmentation approach with high precision and repeatability is important. However, the implementation of a repeatable and accurate segmentation algorithm remains an ongoing challenge. Methods: In this study, we compare two semi-automatic artificial intelligence (AI) based segmentation methods with conventional semi-automatic segmentation approaches in terms of repeatability. One segmentation approach is based on a textural feature (TF) segmentation approach designed for accurate and repeatable segmentation of primary tumors and metastasis. Moreover, a Convolutional Neural Network (CNN) is trained. The algorithms are trained, validated and tested using a lung cancer PET dataset. The segmentation accuracy of both segmentation approaches is compared using the Jaccard Coefficient (JC). Additionally, the approaches are externally tested on a fully independent test-retest dataset. The repeatability of the methods is compared with those of two majority vote (MV2, MV3) approaches, 41%SUVMAX, and a SUV>4 segmentation (SUV4). Repeatability is assessed with test-retest coefficients (TRT%) and intraclass correlation coefficient (ICC). 
An ICC > 0.9 was regarded as representing excellent repeatability. Results: The accuracy of the segmentations with the reference segmentation was good (JC median TF: 0.7, CNN: 0.73). Both segmentation approaches outperformed most other conventional segmentation methods in terms of test-retest coefficient (TRT% mean: TF: 13.0%, CNN: 13.9%, MV2: 14.1%, MV3: 28.1%, 41%SUVMAX: 28.1%, SUV4: 18.1%) and ICC (TF: 0.98, MV2: 0.97, CNN: 0.99, MV3: 0.73, SUV4: 0.81, and 41%SUVMAX: 0.68). Conclusion: The semi-automatic AI-based segmentation approaches used in this study provided better repeatability than conventional segmentation approaches. Moreover, both algorithms lead to accurate segmentations for both primary tumors and metastases and are therefore good candidates for PET tumor segmentation.
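The accuracy and repeatability metrics used here have simple closed forms; a minimal Python sketch assuming the usual definitions of the Jaccard coefficient and the test-retest coefficient (absolute difference over the mean, in percent), not the study's own code:

```python
def jaccard(a, b):
    """Jaccard coefficient between two sets of voxel coordinates (0..1)."""
    return len(a & b) / len(a | b)

def trt_percent(v1, v2):
    """Test-retest variability: absolute volume difference over the mean, in percent."""
    return abs(v1 - v2) / ((v1 + v2) / 2.0) * 100.0

# Toy masks: two 16-voxel squares, one shifted by a voxel
a = {(x, y) for x in range(4) for y in range(4)}
b = {(x, y) for x in range(1, 5) for y in range(4)}
print(jaccard(a, b))               # intersection 12, union 20 -> 0.6
print(trt_percent(100.0, 110.0))   # ~9.52% volume difference between scans
```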


2008 ◽  
Vol 01 (01) ◽  
pp. 17-28 ◽  
Author(s):  
MARCO RUGGERI ◽  
HASSAN WEHBE ◽  
GAVRIIL TSECHPENAKIS ◽  
SHULIANG JIAO ◽  
MARIA ELENA JOCKOVICH ◽  
...  

An ultra-high-resolution spectral-domain optical coherence tomography (SD-OCT) system, together with an advanced animal restraint and positioning system, was built for noninvasive, non-contact, in vivo three-dimensional imaging of rodent models of ocular diseases. The animal positioning system allowed the operator to rapidly locate and switch the areas of interest on the retina. This function, together with the precise spatial registration provided by the generated OCT fundus image, allowed the system to locate and compare the same lesion (retinal tumor in the current study) at different time points throughout the entire course of disease progression. An algorithm for fully automatic segmentation of the tumor boundaries and calculation of tumor volume was developed. The system and algorithm were successfully applied to monitoring retinal tumor growth quantitatively over time in the LHBETATAG mouse model of retinoblastoma.



2017 ◽  
Vol 127 (4) ◽  
pp. 798-806 ◽  
Author(s):  
Raphael Meier ◽  
Nicole Porz ◽  
Urspeter Knecht ◽  
Tina Loosli ◽  
Philippe Schucht ◽  
...  

OBJECTIVE In the treatment of glioblastoma, residual tumor burden is the only prognostic factor that can be actively influenced by therapy. Therefore, an accurate, reproducible, and objective measurement of residual tumor burden is necessary. This study aimed to evaluate the use of a fully automatic segmentation method—brain tumor image analysis (BraTumIA)—for estimating the extent of resection (EOR) and residual tumor volume (RTV) of contrast-enhancing tumor after surgery. METHODS The imaging data of 19 patients who underwent primary resection of histologically confirmed supratentorial glioblastoma were retrospectively reviewed. Contrast-enhancing tumors apparent on structural preoperative and immediate postoperative MR imaging in this patient cohort were segmented by 4 different raters and the automatic BraTumIA segmentation software. The manual and automatic results were quantitatively compared. RESULTS First, the interrater variabilities in the estimates of EOR and RTV were assessed for all human raters. Interrater agreement in terms of the coefficient of concordance (W) was higher for RTV (W = 0.812; p < 0.001) than for EOR (W = 0.775; p < 0.001). Second, the volumetric estimates of BraTumIA for all 19 patients were compared with the estimates of the human raters, which showed that for both EOR (W = 0.713; p < 0.001) and RTV (W = 0.693; p < 0.001) the estimates of BraTumIA were generally located close to or between the estimates of the human raters. No statistically significant differences were detected between the manual and automatic estimates. BraTumIA showed a tendency to overestimate contrast-enhancing tumors, leading to moderate agreement with expert raters with respect to the literature-based, survival-relevant threshold values for EOR. CONCLUSIONS BraTumIA can generate volumetric estimates of EOR and RTV, in a fully automatic fashion, which are comparable to the estimates of human experts.
However, automated analysis showed a tendency to overestimate the volume of a contrast-enhancing tumor, whereas manual analysis is prone to subjectivity, thereby causing considerable interrater variability.
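The two quantities being estimated follow directly from the pre- and postoperative volumes; a minimal sketch using the standard definition of EOR (the volumes below are hypothetical, not values from the study):

```python
def extent_of_resection(preop_vol, postop_vol):
    """EOR in percent: fraction of the preoperative contrast-enhancing tumor removed."""
    return (preop_vol - postop_vol) / preop_vol * 100.0

# The residual tumor volume (RTV) is simply the postoperative volume itself.
preop, postop = 30.0, 1.5   # cm^3, hypothetical values
print(extent_of_resection(preop, postop))  # 95.0
```

A systematic overestimation of the contrast-enhancing volume, as noted for BraTumIA, shifts both terms of the ratio, which is why agreement near survival-relevant EOR thresholds is the clinically important comparison.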


Computer vision techniques and computer-aided tools are evolving areas of research for automatic segmentation of brain tumors. Some of these techniques have shown good results, but no single technique has prevailed, as these approaches are often not used in practice in hospitals. Research on medical healthcare systems [1] is currently an emerging area, with a main focus on designing efficient segmentation approaches that use artificial intelligence (AI) techniques for accurate region delineation and fast segmentation. Many clustering and traditional segmentation approaches are available for medical images, but most of them depend on the data type. In this paper, we present a brief review of clustering-based medical image segmentation and the challenges faced by researchers [2]. Owing to the high success rate of AI and deep learning (DL) algorithms, a considerable amount of brain tumor segmentation work has aimed to solve the existing challenges. In this survey, various types of brain tumor segmentation and detection systems are analyzed with respect to how they localize the tumor and the issues researchers have faced. In addition, challenging factors are analyzed for various segmentation algorithms such as DL, k-means clustering, optimization-based, and traditional approaches.
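As an illustration of the clustering family of methods surveyed above, here is a toy 1D k-means (Lloyd's algorithm) over voxel intensities; it is a generic sketch, not any specific published segmentation algorithm:

```python
def kmeans_1d(values, k, iters=20):
    """Cluster scalar voxel intensities into k groups; returns (centers, labels)."""
    vals = sorted(values)
    # Spread the initial centers evenly across the sorted intensity range
    centers = [vals[i * (len(vals) - 1) // (k - 1)] for i in range(k)]
    labels = [0] * len(values)
    for _ in range(iters):
        # Assign every intensity to its nearest center
        labels = [min(range(k), key=lambda j: abs(v - centers[j])) for v in values]
        # Move each center to the mean of its assigned members
        for j in range(k):
            members = [v for v, lab in zip(values, labels) if lab == j]
            if members:
                centers[j] = sum(members) / len(members)
    return centers, labels

# Toy voxel intensities with three obvious "tissue" clusters
intensities = [10, 12, 11, 50, 52, 49, 90, 91]
centers, labels = kmeans_1d(intensities, k=3)
print([round(c, 1) for c in centers])  # [11.0, 50.3, 90.5]
print(labels)                          # [0, 0, 0, 1, 1, 1, 2, 2]
```

In a real pipeline the labels would be mapped back to voxel positions to form candidate tumor regions; the data dependence criticized above shows up in the choice of k and the initialization.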


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Elisabeth Pfaehler ◽  
Liesbet Mesotten ◽  
Gem Kramer ◽  
Michiel Thomeer ◽  
Karolien Vanhove ◽  
...  

Abstract Background Positron emission tomography (PET) is routinely used for cancer staging and treatment follow-up. Metabolic active tumor volume (MATV) as well as total MATV (TMATV—including primary tumor, lymph nodes and metastasis) and/or total lesion glycolysis derived from PET images have been identified as prognostic factor or for the evaluation of treatment efficacy in cancer patients. To this end, a segmentation approach with high precision and repeatability is important. However, the implementation of a repeatable and accurate segmentation algorithm remains an ongoing challenge. Methods In this study, we compare two semi-automatic artificial intelligence (AI)-based segmentation methods with conventional semi-automatic segmentation approaches in terms of repeatability. One segmentation approach is based on a textural feature (TF) segmentation approach designed for accurate and repeatable segmentation of primary tumors and metastasis. Moreover, a convolutional neural network (CNN) is trained. The algorithms are trained, validated and tested using a lung cancer PET dataset. The segmentation accuracy of both segmentation approaches is compared using the Jaccard coefficient (JC). Additionally, the approaches are externally tested on a fully independent test–retest dataset. The repeatability of the methods is compared with those of two majority vote (MV2, MV3) approaches, 41%SUVMAX, and a SUV > 4 segmentation (SUV4). Repeatability is assessed with test–retest coefficients (TRT%) and intraclass correlation coefficient (ICC). An ICC > 0.9 was regarded as representing excellent repeatability. Results The accuracy of the segmentations with the reference segmentation was good (JC median TF: 0.7, CNN: 0.73). 
Both segmentation approaches outperformed most other conventional segmentation methods in terms of test–retest coefficient (TRT% mean: TF: 13.0%, CNN: 13.9%, MV2: 14.1%, MV3: 28.1%, 41%SUVMAX: 28.1%, SUV4: 18.1%) and ICC (TF: 0.98, MV2: 0.97, CNN: 0.99, MV3: 0.73, SUV4: 0.81, and 41%SUVMAX: 0.68). Conclusion The semi-automatic AI-based segmentation approaches used in this study provided better repeatability than conventional segmentation approaches. Moreover, both algorithms lead to accurate segmentations for both primary tumors as well as metastasis and are therefore good candidates for PET tumor segmentation.


Author(s):  
Chung-Ching Lin ◽  
Franco Stellari ◽  
Lynne Gignac ◽  
Peilin Song ◽  
John Bruley

Abstract Transmission Electron Microscopy (TEM) and scanning TEM (STEM) are widely used to acquire ultra-high-resolution images in different research areas. For some applications, a single TEM/STEM image does not provide enough information for analysis. One example in VLSI circuit failure analysis is the tracking of long interconnections. The capability of creating a large map of high-resolution images may enable significant progress in such tasks. However, stitching TEM/STEM images in semiconductor applications is difficult, and existing tools are unable to provide usable stitching results for analysis. In this paper, a novel fully automated method for stitching TEM/STEM image mosaics is proposed. The proposed method allows one to reach a globally optimal configuration of each image tile so that both missing and false-positive correspondences can be tolerated. The experimental results presented in this paper show that the proposed method is robust and performs well in very challenging situations.
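The idea of a globally optimal tile configuration can be illustrated with a least-squares adjustment of tile positions from redundant pairwise offset measurements; this is a generic 1D sketch of that technique family, not the paper's actual method:

```python
def solve_positions(n, measurements, iters=500, lr=0.5):
    """Least-squares tile positions from pairwise offsets (i, j, d), d ~ p[j] - p[i].
    Tile 0 is fixed at 0 to remove the global translation ambiguity."""
    p = [0.0] * n
    for _ in range(iters):
        grad = [0.0] * n
        for i, j, d in measurements:
            r = p[j] - p[i] - d   # residual of this pairwise link
            grad[j] += r
            grad[i] -= r
        for k in range(1, n):     # gradient step; p[0] stays anchored
            p[k] -= lr * grad[k]
    return p

# Three tiles in a row; the redundant 0->2 link pulls against noise in the chain
meas = [(0, 1, 100.0), (1, 2, 100.0), (0, 2, 198.0)]
print([round(x, 1) for x in solve_positions(3, meas)])  # [0.0, 99.3, 198.7]
```

Because every link only contributes a quadratic residual, a grossly wrong (false-positive) correspondence can be detected as a large residual and down-weighted or dropped, which is the tolerance property the abstract refers to.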


2021 ◽  
Vol 11 (4) ◽  
pp. 1880
Author(s):  
Roberta Fusco ◽  
Adele Piccirillo ◽  
Mario Sansone ◽  
Vincenza Granata ◽  
Paolo Vallone ◽  
...  

Purpose: The aim of the study was to estimate the diagnostic accuracy of textural, morphological and dynamic features, extracted from dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) images, by carrying out univariate and multivariate statistical analyses including artificial intelligence approaches. Methods: In total, 85 patients with known breast lesions were enrolled in this retrospective study according to regulations issued by the local Institutional Review Board. All patients underwent DCE-MRI examination. The reference standard was pathology from a surgical specimen for malignant lesions and pathology from a surgical specimen or fine needle aspiration cytology, core or Tru-Cut needle biopsy for benign lesions. In total, 91 samples from 85 patients were analyzed. Furthermore, 48 textural metrics, 15 morphological and 81 dynamic parameters were extracted by manually segmenting regions of interest. Statistical analyses including univariate and multivariate approaches were performed: the non-parametric Wilcoxon–Mann–Whitney test; receiver operating characteristic (ROC) analysis; and linear discriminant analysis (LDA), decision tree (DT), k-nearest neighbors (KNN), and support vector machine (SVM) classifiers were utilized. A balancing approach and feature selection methods were used. Results: The univariate analysis showed low accuracy and area under the curve (AUC) for all considered features. Instead, in the multivariate textural analysis, the best performance (accuracy (ACC) = 0.78; AUC = 0.78) was reached with all 48 metrics and an LDA trained with balanced data. The best performance (ACC = 0.75; AUC = 0.80) using morphological features was reached with an SVM trained with 10-fold cross-validation (CV) and balanced data (with the adaptive synthetic (ADASYN) function) and a subset of five robust morphological features (circularity, rectangularity, sphericity, gleaning and surface).
The best performance (ACC = 0.82; AUC = 0.83) using dynamic features was reached with a trained SVM and balanced data (with ADASYN function). Conclusion: Multivariate analyses using pattern recognition approaches, including all morphological, textural and dynamic features, optimized by adaptive synthetic sampling and feature selection operations obtained the best results and showed the best performance in the discrimination of benign and malignant lesions.
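The balanced-training idea in the pipeline above can be sketched with plain random oversampling and a k-NN classifier; this pure-Python toy stands in for the paper's ADASYN + SVM setup (the feature values below are invented for illustration):

```python
import random

def oversample(X, y, seed=0):
    """Duplicate minority-class samples at random until the classes are balanced."""
    rng = random.Random(seed)
    by_class = {}
    for xi, yi in zip(X, y):
        by_class.setdefault(yi, []).append(xi)
    target = max(len(v) for v in by_class.values())
    Xb, yb = [], []
    for cls, samples in by_class.items():
        Xb += samples + [rng.choice(samples) for _ in range(target - len(samples))]
        yb += [cls] * target
    return Xb, yb

def knn_predict(X_train, y_train, x, k=3):
    """Majority vote among the k nearest training samples (squared Euclidean)."""
    dist = lambda a, b: sum((u - v) ** 2 for u, v in zip(a, b))
    nearest = sorted(zip(X_train, y_train), key=lambda p: dist(p[0], x))[:k]
    votes = [lbl for _, lbl in nearest]
    return max(set(votes), key=votes.count)

# Imbalanced toy data: 4 "benign" (0) vs 2 "malignant" (1) feature vectors
X = [(0.1, 0.2), (0.2, 0.1), (0.15, 0.3), (0.3, 0.2), (0.9, 0.8), (0.8, 0.9)]
y = [0, 0, 0, 0, 1, 1]
Xb, yb = oversample(X, y)
print(knn_predict(Xb, yb, (0.85, 0.85)))  # 1
```

ADASYN differs from this sketch in that it synthesizes new minority samples by interpolating between neighbors rather than duplicating existing ones, which reduces overfitting to the duplicated points.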

