brain extraction
Recently Published Documents


TOTAL DOCUMENTS: 183 (last five years: 62)

H-INDEX: 24 (last five years: 5)

2021 ◽  
Vol 15 ◽  
Author(s):  
Li-Ming Hsu ◽  
Shuai Wang ◽  
Lindsay Walton ◽  
Tzu-Wen Winnie Wang ◽  
Sung-Ho Lee ◽  
...  

Brain extraction is a critical pre-processing step in brain magnetic resonance imaging (MRI) analytical pipelines. In rodents, this is often achieved by manually editing brain masks slice-by-slice, a time-consuming task whose workload increases with higher-spatial-resolution datasets. We recently demonstrated successful automatic brain extraction via a deep-learning-based framework, U-Net, using 2D convolutions. However, such an approach cannot make use of the rich 3D spatial-context information in volumetric MRI data. In this study, we advanced our previously proposed U-Net architecture by replacing all 2D operations with their 3D counterparts, creating a 3D U-Net framework. We trained and validated our model using a recently released CAMRI rat brain database acquired at isotropic spatial resolution, including T2-weighted turbo-spin-echo structural MRI and T2*-weighted echo-planar-imaging functional MRI. The performance of our 3D U-Net model was compared with existing rodent brain extraction tools, including Rapid Automatic Tissue Segmentation (RATS), Pulse-Coupled Neural Network (PCNN), SHape descriptor selected External Regions after Morphologically filtering (SHERM), and our previously proposed 2D U-Net model. 3D U-Net demonstrated superior performance in Dice, Jaccard, center-of-mass distance, Hausdorff distance, and sensitivity. Additionally, we demonstrated the reliability of 3D U-Net under various noise levels, evaluated the optimal training sample sizes, and disseminated all source code publicly, in the hope that this approach will benefit the rodent MRI research community.

Significant Methodological Contribution: We proposed a deep-learning-based framework to automatically identify rodent brain boundaries in MRI. With a fully 3D convolutional network model, 3D U-Net, our proposed method demonstrated improved performance compared to current automatic brain extraction methods, as shown in several quantitative metrics (Dice, Jaccard, PPV, SEN, and Hausdorff). We trust that this tool will avoid human bias and streamline pre-processing during analysis of 3D high-resolution rodent brain MRI data. The software developed herein has been disseminated freely to the community.
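To make the 2D-to-3D swap concrete, below is a minimal PyTorch sketch of the idea: every 2D operation in a U-Net encoder block has a direct 3D counterpart. This is illustrative only; the channel sizes and layer names are assumptions, not the authors' publicly released code.

```python
# Minimal sketch of replacing 2D U-Net operations with 3D counterparts.
# Illustrative only; channel sizes are assumptions, not the CAMRI model.
import torch
import torch.nn as nn

class DoubleConv3D(nn.Module):
    """Two 3x3x3 convolutions, each followed by batch norm and ReLU —
    the 3D analogue of the classic 2D U-Net convolution block."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv3d(in_ch, out_ch, kernel_size=3, padding=1),  # was nn.Conv2d
            nn.BatchNorm3d(out_ch),                              # was nn.BatchNorm2d
            nn.ReLU(inplace=True),
            nn.Conv3d(out_ch, out_ch, kernel_size=3, padding=1),
            nn.BatchNorm3d(out_ch),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.block(x)

# Downsampling likewise moves from MaxPool2d to MaxPool3d, and inputs gain
# a depth axis: (batch, channels, depth, height, width).
encoder = nn.Sequential(DoubleConv3D(1, 16), nn.MaxPool3d(2), DoubleConv3D(16, 32))
volume = torch.randn(1, 1, 32, 64, 64)  # one single-channel MRI volume
features = encoder(volume)
print(features.shape)  # torch.Size([1, 32, 16, 32, 32])
```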


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Inyoung Bae ◽  
Jong-Hee Chae ◽  
Yeji Han

Abstract It is challenging to extract the brain region from T2-weighted magnetic resonance infant brain images because conventional brain segmentation algorithms are generally optimized for adult brain images, which differ from infant brain images in spatial resolution, dynamics of imaging intensity, and brain size and shape. In this study, we propose a brain extraction algorithm for infant T2-weighted images. The proposed method utilizes histogram partitioning to separate the brain region from the background. Fuzzy c-means thresholding is then performed to obtain a rough brain mask for each image slice, followed by refinement steps. For slices that contain eye regions, an additional eye removal algorithm is proposed to eliminate the eyes from the brain mask. Using the proposed method, accurate masks for infant T2-weighted brain images can be generated. For validation, we applied the proposed algorithm and conventional methods to infant T2-weighted images (0–24 months of age) acquired with 2D and 3D sequences on a 3T MRI scanner. The Dice coefficients and Precision scores, calculated as quantitative measures, were highest for the proposed method: for images acquired with a 2D imaging sequence, the average Dice coefficients were 0.9650 ± 0.006 for the proposed method, 0.9262 ± 0.006 for iBEAT, and 0.9490 ± 0.006 for BET. For data acquired with a 3D imaging sequence, the average Dice coefficients were 0.9746 ± 0.008 for the proposed method, 0.9448 ± 0.004 for iBEAT, and 0.9622 ± 0.01 for BET. The average Precision was 0.9638 ± 0.009 and 0.9565 ± 0.016 for the proposed method, 0.8981 ± 0.01 and 0.8968 ± 0.008 for iBEAT, and 0.9346 ± 0.014 and 0.9282 ± 0.019 for BET for images acquired with 2D and 3D imaging sequences, respectively, demonstrating that the proposed method can be used efficiently for brain extraction from T2-weighted infant images.
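A minimal sketch of the per-slice rough-masking step described above: two-cluster fuzzy c-means on voxel intensities, keeping voxels whose "brain" membership exceeds a cutoff. This is a generic textbook FCM, not the authors' implementation; the fuzzifier m = 2 and the 0.5 membership cutoff are assumptions.

```python
# Generic two-cluster fuzzy c-means thresholding of one image slice.
# Sketch only: m=2 and the 0.5 cutoff are assumed, not the paper's values.
import numpy as np

def fcm_threshold(slice_2d, m=2.0, n_iter=50):
    x = slice_2d.ravel().astype(float)
    centers = np.array([x.min(), x.max()])        # init: background vs brain
    for _ in range(n_iter):
        d = np.abs(x[:, None] - centers[None, :]) + 1e-12
        u = 1.0 / (d ** (2 / (m - 1)))            # inverse-distance memberships
        u /= u.sum(axis=1, keepdims=True)
        um = u ** m
        centers = (um * x[:, None]).sum(axis=0) / um.sum(axis=0)
    brain = np.argmax(centers)                    # brighter cluster = brain
    return (u[:, brain] > 0.5).reshape(slice_2d.shape)

rng = np.random.default_rng(0)
slice_2d = rng.normal(50, 10, (64, 64))
slice_2d[16:48, 16:48] += 150                     # bright "brain" region
print(fcm_threshold(slice_2d).mean())             # fraction flagged as brain (~0.25)
```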


2021 ◽  
Vol 23 (Supplement_6) ◽  
pp. vi135-vi136
Author(s):  
Ujjwal Baid ◽  
Sarthak Pati ◽  
Siddhesh Thakur ◽  
Brandon Edwards ◽  
Micah Sheller ◽  
...  

Abstract PURPOSE The robustness and generalizability of artificial intelligence (AI) methods depend on training data size and diversity, which are currently limited in multi-institutional healthcare collaborations by data ownership and legal concerns. To address these, we introduce the Federated Tumor Segmentation (FeTS) Initiative, an international consortium using federated learning (FL) for data-private multi-institutional collaborations, in which AI models leverage data at participating institutions without data ever being shared between them. The initial FeTS use-case focused on detecting brain tumor boundaries in MRI. METHODS The FeTS tool incorporates: 1) MRI pre-processing, including image registration and brain extraction; 2) automatic delineation of tumor sub-regions by label fusion of pretrained top-performing BraTS methods; 3) tools for manual delineation refinements; and 4) model training. Fifty-five international institutions identified local retrospective cohorts of glioblastoma patients. Ground truth was generated using the first three FeTS functionality modes mentioned above. Finally, the FL training mode comprises: i) an AI model trained on local data; ii) local model updates shared with an aggregator, which iii) combines updates from all collaborators to generate a consensus model; and iv) circulation of the consensus model back to all collaborators for iterative performance improvements. RESULTS The first FeTS consensus model, built across 23 institutions with data from 2,200 patients, showed an average improvement of 11.1% in performance on each collaborator's validation data, compared to a model trained on the publicly available BraTS data (n=231). CONCLUSION Our findings indicate that increasing the training data alone, without any algorithmic development, improves AI performance, suggesting that model performance would improve further when trained with all 55 collaborating institutions. FL enables training AI models with knowledge from the data of geographically distinct collaborators without ever sharing any data, thereby overcoming hurdles related to legal, ownership, and technical concerns of data sharing.
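The four-step FL training mode above hinges on the aggregation step: only model updates travel to the aggregator, never data. Below is a minimal sketch of one common aggregation rule, federated averaging (local updates weighted by local sample count); whether FeTS uses exactly this weighting is an assumption here.

```python
# Sketch of federated averaging: the aggregator combines per-site weights
# into a consensus model. Generic FedAvg, not necessarily FeTS's exact rule.
import numpy as np

def aggregate(site_weights, site_sizes):
    """Consensus = average of local weight updates, weighted by sample count."""
    total = sum(site_sizes)
    consensus = {}
    for name in site_weights[0]:
        consensus[name] = sum(
            w[name] * (n / total) for w, n in zip(site_weights, site_sizes)
        )
    return consensus

# Two collaborators share only their (toy) weight tensors, never their data.
site_a = {"conv1": np.full((3, 3), 1.0)}
site_b = {"conv1": np.full((3, 3), 3.0)}
print(aggregate([site_a, site_b], site_sizes=[100, 300])["conv1"][0, 0])  # 2.5
```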


2021 ◽  
Author(s):  
Guohui Ruan ◽  
Jiaming Liu ◽  
Ziqi An ◽  
Kaiibin Wu ◽  
Chuanjun Tong ◽  
...  

Skull stripping is an initial and critical step in the mouse fMRI analysis pipeline. Manual labeling of the brain suffers from intra- and inter-rater variability and is highly time-consuming, so an automatic and efficient skull-stripping method is in high demand for mouse fMRI studies. In this study, we investigated a 3D U-Net-based method for automatic brain extraction in mouse fMRI studies. Two U-Net models were trained separately on T2-weighted anatomical images and T2*-weighted functional images. The trained models were tested on both internal and external datasets. The 3D U-Net models yielded higher accuracy in brain extraction from both T2-weighted images (Dice > 0.984, Jaccard index > 0.968, and Hausdorff distance < 7.7) and T2*-weighted images (Dice > 0.964, Jaccard index > 0.931, and Hausdorff distance < 3.3) than the two widely used mouse skull-stripping methods (RATS and SHERM). The resting-state fMRI results obtained with automatic segmentation by the 3D U-Net models are identical to those obtained by manual segmentation for both seed-based and group independent component analyses. These results demonstrate that the 3D U-Net-based method can replace manual brain extraction in mouse fMRI analysis.
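A minimal sketch of how masks are typically scored on the three metrics reported above (Dice, Jaccard index, Hausdorff distance), using SciPy's directed_hausdorff on voxel coordinates. This is generic evaluation code, not the study's pipeline.

```python
# Generic overlap and surface-distance metrics for binary brain masks.
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def dice(a, b):
    return 2 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def jaccard(a, b):
    return np.logical_and(a, b).sum() / np.logical_or(a, b).sum()

def hausdorff(a, b):
    pa, pb = np.argwhere(a), np.argwhere(b)   # voxel coordinates of each mask
    return max(directed_hausdorff(pa, pb)[0], directed_hausdorff(pb, pa)[0])

pred = np.zeros((16, 32, 32), bool); pred[4:12, 8:24, 8:24] = True
truth = np.zeros_like(pred);         truth[4:12, 9:25, 8:24] = True
print(f"Dice={dice(pred, truth):.3f}, Jaccard={jaccard(pred, truth):.3f}, "
      f"HD={hausdorff(pred, truth):.1f} voxels")
```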


2021 ◽  
Author(s):  
Li-Ming Hsu ◽  
Shuai Wang ◽  
Lindsay Walton ◽  
Tzu-Wen Winnie Wang ◽  
Sung-Ho Lee ◽  
...  

Abstract Brain extraction is a critical pre-processing step in brain magnetic resonance imaging (MRI) analytical pipelines. In rodents, this is often achieved by manually editing brain masks slice-by-slice, a time-consuming task whose workload increases with higher-spatial-resolution datasets. We recently demonstrated successful automatic brain extraction via a deep-learning-based framework, U-Net, using 2D convolutions. However, such an approach cannot make use of the rich 3D spatial-context information in volumetric MRI data. In this study, we advanced our previously proposed U-Net architecture by replacing all 2D operations with their 3D counterparts, creating a 3D U-Net framework. We trained and validated our model using a recently released CAMRI rat brain database acquired at isotropic spatial resolution, including T2-weighted turbo-spin-echo structural MRI and T2*-weighted echo-planar-imaging functional MRI. The performance of our 3D U-Net model was compared with existing rodent brain extraction tools, including Rapid Automatic Tissue Segmentation (RATS), Pulse-Coupled Neural Network (PCNN), SHape descriptor selected External Regions after Morphologically filtering (SHERM), and our previously proposed 2D U-Net model. 3D U-Net demonstrated superior performance in Dice, Jaccard, Hausdorff distance, and sensitivity. Additionally, we demonstrated the reliability of 3D U-Net under various noise levels, evaluated the optimal training sample sizes, and disseminated all source code publicly, in the hope that this approach will benefit the rodent MRI research community.

Significant methodological contribution: We proposed a deep-learning-based framework to automatically identify rodent brain boundaries in MRI. With a fully 3D convolutional network model, 3D U-Net, our proposed method demonstrated improved performance compared to current automatic brain extraction methods, as shown in several quantitative metrics (Dice, Jaccard, PPV, SEN, and Hausdorff). We trust that this tool will avoid human bias and streamline pre-processing during analysis of 3D high-resolution rodent brain MRI data. The software developed herein has been disseminated freely to the community.
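The noise-robustness evaluation mentioned above implies corrupting volumes at controlled noise levels. Below is a minimal sketch of Rician noise, the standard noise model for magnitude MRI; the SNR definition used here is an assumption, not necessarily the paper's protocol.

```python
# Sketch of adding Rician noise to a magnitude MRI volume at a target SNR.
# The SNR definition (mean signal / sigma) is an assumption for illustration.
import numpy as np

def add_rician_noise(volume, snr, rng=None):
    rng = rng or np.random.default_rng()
    sigma = volume.mean() / snr                   # noise level from target SNR
    real = volume + rng.normal(0, sigma, volume.shape)
    imag = rng.normal(0, sigma, volume.shape)
    return np.sqrt(real**2 + imag**2)             # magnitude image is Rician

clean = np.full((8, 16, 16), 100.0)
noisy = add_rician_noise(clean, snr=5, rng=np.random.default_rng(0))
print(round(noisy.std(), 1))                      # dispersion grows as SNR drops
```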


2021 ◽  
Author(s):  
Seth B. Boren ◽  
Sean I. Savitz ◽  
Timothy M. Elimore ◽  
Christin Silos ◽  
Sarah George ◽  
...  

Abstract The primary aim of this research was to compare the impact of ischemic and hemorrhagic stroke on brain connectivity and recovery using resting-state functional magnetic resonance imaging (rsfMRI). We serially imaged 20 stroke patients, ten with ischemic stroke (IS) and ten with intracerebral hemorrhage (ICH), at 1, 3, and 12 months after ictus. Data from ten healthy volunteers were obtained from a publicly available imaging dataset. All functional and structural images underwent standard processing for brain extraction, realignment, serial registration, unwrapping, and de-noising using SPM12. A seed-based group analysis in the CONN software was used to evaluate Default Mode Network (DMN) and Sensorimotor Network (SMN) connections by applying bivariate correlation and hemodynamic response function (hrf) weighting. In comparison to healthy controls (HC), both IS and ICH groups exhibited disrupted interactions (decreased connectivity) between these two networks at 1M; interactions then increased by 12M in each group. Temporally, each group exhibited a minimal increase in connectivity at 3M compared to 12M. Overall, the ICH patients exhibited a greater magnitude of connectivity disruption than IS patients, despite a significant intra-subject reduction in hematoma volume. We did not observe any significant correlation between change in connectivity and recovery as measured on the National Institutes of Health Stroke Scale (NIHSS) at any time point. These findings demonstrate that the largest changes in functional connectivity occur earlier (3M) rather than later (12M), reveal subtle differences between IS and ICH during recovery, and should be explored further in larger samples.
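A minimal sketch of the seed-based idea underlying the CONN analysis: correlate the seed region's mean time course with every voxel's time course. CONN's actual pipeline (hrf weighting, denoising, group statistics) is far richer; this is illustrative only.

```python
# Generic seed-based connectivity map: voxelwise Pearson correlation with
# a seed region's mean BOLD time course. Not CONN's implementation.
import numpy as np

def seed_correlation(bold, seed_mask):
    """bold: (x, y, z, t) array; seed_mask: boolean (x, y, z)."""
    seed_ts = bold[seed_mask].mean(axis=0)               # mean seed time course
    v = bold.reshape(-1, bold.shape[-1])
    v = v - v.mean(axis=1, keepdims=True)
    s = seed_ts - seed_ts.mean()
    r = (v @ s) / (np.linalg.norm(v, axis=1) * np.linalg.norm(s) + 1e-12)
    return r.reshape(bold.shape[:-1])                    # voxelwise Pearson r

rng = np.random.default_rng(1)
bold = rng.normal(size=(8, 8, 8, 120))                   # toy 4D fMRI data
seed = np.zeros((8, 8, 8), bool); seed[2:4, 2:4, 2:4] = True
rmap = seed_correlation(bold, seed)
print(rmap[seed].mean())  # seed voxels correlate with their own mean course
```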


2021 ◽  
Author(s):  
Herng‐Hua Chang ◽  
Shin‐Joe Yeh ◽  
Ming‐Chang Chiang ◽  
Sung‐Tsang Hsieh

2021 ◽  
pp. 549-558
Author(s):  
Sidney Pontes-Filho ◽  
Annelene Gulden Dahl ◽  
Stefano Nichele ◽  
Gustavo Borges Moreno e Mello

Author(s):  
Sergi Valverde ◽  
Llucia Coll ◽  
Liliana Valencia ◽  
Albert Clèrigues ◽  
Arnau Oliver ◽  
...  

2021 ◽  
Vol 39 (15_suppl) ◽  
pp. 2041-2041
Author(s):  
Francesco Sforazzini ◽  
Patrick Salome ◽  
Andreas Kudak ◽  
Matthias Ulrich ◽  
Laila König ◽  
...  

Background: The efficacy of PD-L1 immune checkpoint inhibitor therapy in glioblastoma multiforme (GBM) is limited, and the prognostic value of PD-L1 in GBM is an active field of research. We therefore sought to identify an MRI-based surrogate for PD-L1 expression in GBM. Methods: Post-contrast T1-weighted images (T1ce) acquired immediately before surgery from 121 subjects with primary GBM (RTK I, RTK II, and mesenchymal subtypes, as determined from Illumina Human Methylation array data) were analyzed. Following standard pre-processing (bias field correction, brain extraction), 1150 radiomics features were calculated from gross tumor volumes (GTV). The cohort was then divided into training/validation sets (70%/30%). Cross-validation and model selection were applied to identify features associated with PD-L1-M expression (estimated from methylation data and highly correlated with an RNA-sequencing-based PD-L1 measure, as recently reported). These features were used to identify two groups of tumors differing in PD-L1-M expression (PD-L1-R high and low), for which a logistic regression model was trained. Overall survival was compared between the PD-L1-R high and low groups. Results: The PD-L1-R high and low groups showed significant differences in PD-L1-M values (training: p=0.002, validation: p=0.04, full cohort: p<0.001). The same model was used to split tumors into two groups using features from non-T1ce sequences; all of the tested MR modalities showed at least a trend toward differing PD-L1-M values between the two groups (T2w: p=0.037, T1w: p=0.089, FLAIR: p=0.091). Further investigation of the whole cohort (121 subjects) showed that the PD-L1-R low group was enriched for the RTK II subtype (48%), while the PD-L1-R high group was enriched for the mesenchymal subtype (48%); refer to Table 1 for more information. In addition, evaluation of survival data showed a difference in overall survival (likelihood ratio test p=0.048, OR 0.52, 95% CI [0.27; 0.98]), with the PD-L1-R high group having a better prognosis. Conclusions: We presented a radiomics model, PD-L1-R, that allowed identification of GBM tumors with low and high PD-L1-M expression from post-contrast T1-weighted images. Future work should validate these findings in independent cohorts.
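A minimal sketch of the modeling recipe described in the Methods: cross-validated feature selection feeding a logistic regression on a radiomics feature matrix. The synthetic data, labels, and the choice of k are placeholders, not the study's model.

```python
# Sketch of radiomics classification: scale -> select features -> logistic
# regression, with cross-validation. Synthetic stand-in data throughout.
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score, train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)
X = rng.normal(size=(121, 1150))     # 121 subjects x 1150 radiomics features
y = rng.integers(0, 2, 121)          # stand-in PD-L1 high/low labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
model = make_pipeline(StandardScaler(),
                      SelectKBest(f_classif, k=20),   # keep top-k features
                      LogisticRegression(max_iter=1000))
print("CV accuracy:", cross_val_score(model, X_tr, y_tr, cv=5).mean())
model.fit(X_tr, y_tr)
print("Held-out accuracy:", model.score(X_te, y_te))
```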

