Accuracy and practical aspects of semi- and fully automatic segmentation methods for resected brain areas

2020 · Vol 62 (12) · pp. 1637–1648
Author(s): Karin Gau, Charlotte S. M. Schmidt, Horst Urbach, Josef Zentner, Andreas Schulze-Bonhage, ...

Abstract. Purpose: Precise segmentation of brain lesions is essential for neurological research. Specifically, resection volume estimates can aid in the assessment of residual postoperative tissue, e.g. following surgery for glioma. Furthermore, behavioral lesion-symptom mapping in epilepsy relies on accurate delineation of surgical lesions. We sought to determine whether semi- and fully automatic segmentation methods can be applied to resected brain areas and which approach provides the most accurate and cost-efficient results. Methods: We compared a semi-automatic (ITK-SNAP) and a fully automatic (lesion_GNB) method for segmentation of resected brain areas, using manual segmentation as the accuracy reference. Additionally, we evaluated the processing times of all three methods. We used T1-weighted MRI data from epilepsy patients (n = 27; 11 male; mean age 39 years, range 16–69) who underwent temporal lobe resections (17 left). Results: The semi-automatic approach yielded superior accuracy (p < 0.001), with a median Dice similarity coefficient (mDSC) of 0.78 and a median average Hausdorff distance (maHD) of 0.44, compared with the fully automatic approach (mDSC 0.58, maHD 1.32). There was no significant difference between the median percent volume differences of the two approaches (p > 0.05). Manual segmentation required more human input (30.41 min/subject) and therefore incurred significantly higher costs than the semi-automatic (3.27 min/subject) or fully automatic approach (labor and cost approaching zero). Conclusion: Semi-automatic segmentation offers the most accurate results in resected brain areas with a moderate amount of human input, and thus represents a viable alternative to manual segmentation, especially for studies with large patient cohorts.
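To make the reported metrics concrete, the sketch below computes a Dice similarity coefficient and an average Hausdorff distance between two binary resection masks with NumPy/SciPy. It is an illustrative implementation of common definitions (the average Hausdorff distance is taken here as the mean of the two directed mean nearest-neighbour distances), not the evaluation code used in the study, and the voxel spacing is an assumed parameter.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def dice(a, b):
    """Dice similarity coefficient between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    inter = np.logical_and(a, b).sum()
    denom = a.sum() + b.sum()
    return 2.0 * inter / denom if denom else 1.0

def average_hausdorff(a, b, spacing=(1.0, 1.0, 1.0)):
    """Average Hausdorff distance between two non-empty binary masks,
    here the mean of the two directed mean nearest-neighbour distances
    (definitions vary; some take the maximum of the two)."""
    a, b = a.astype(bool), b.astype(bool)
    # distance of every voxel to the nearest foreground voxel of the other mask
    dist_to_b = distance_transform_edt(~b, sampling=spacing)
    dist_to_a = distance_transform_edt(~a, sampling=spacing)
    d_ab = dist_to_b[a].mean()   # mean distance from A's voxels to B
    d_ba = dist_to_a[b].mean()   # mean distance from B's voxels to A
    return (d_ab + d_ba) / 2.0
```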

PLoS ONE · 2021 · Vol 16 (9) · pp. e0257371
Author(s): Daisuke Nishiyama, Hiroshi Iwasaki, Takaya Taniguchi, Daisuke Fukui, Manabu Yamanaka, ...

Accurate gluteus medius (GMd) volume evaluation may aid in the analysis of muscular atrophy states and help gain an improved understanding of patient recovery via rehabilitation. However, the segmentation of muscle regions in GMd images for cubic muscle volume assessment is time-consuming and labor-intensive. This study automated GMd-region segmentation from the computed tomography (CT) images of patients diagnosed with hip osteoarthritis using deep learning and evaluated the segmentation accuracy. To this end, 5250 augmented pairs of training data were obtained from five participants, and a conditional generative adversarial network was used to identify the relationships between the image pairs. Using the held-out test datasets, the results of automatic segmentation with the trained deep learning model were compared to those of manual segmentation in terms of the Dice similarity coefficient (DSC), volume similarity (VS), and shape similarity (MS). The average DSC values for automatic and manual segmentation were 0.748 and 0.812, respectively, with a significant difference (p < 0.0001); the average VS values were 0.247 and 0.203, respectively, with no significant difference (p = 0.069); and the average MS values were 1.394 and 1.156, respectively, with no significant difference (p = 0.308). The GMd volumes obtained by automatic and manual segmentation were 246.2 cm3 and 282.9 cm3, respectively. The noninferiority of the DSC obtained by automatic segmentation was verified against that obtained by manual segmentation. Accordingly, the proposed GAN-based automatic GMd-segmentation technique is confirmed to be noninferior to manual segmentation. Therefore, the findings of this research confirm that the proposed method not only reduces time and effort but also facilitates accurate assessment of the cubic muscle volume.
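As a hedged aside on the volume figures above, the snippet below shows how a cubic muscle volume in cm³ and a volume similarity score could be derived from binary GMd masks. The VS formula shown (1 − |V_a − V_b| / (V_a + V_b), after Taha & Hanbury) is only one common definition and may not match the metric used in the paper; the voxel spacing is likewise an assumed input.

```python
import numpy as np

def volume_cm3(mask, spacing_mm=(1.0, 1.0, 1.0)):
    """Cubic muscle volume in cm^3 from a binary CT mask and voxel spacing in mm."""
    voxel_mm3 = float(np.prod(spacing_mm))
    return mask.astype(bool).sum() * voxel_mm3 / 1000.0

def volume_similarity(a, b):
    """Volume similarity, VS = 1 - |V_a - V_b| / (V_a + V_b) (one common definition)."""
    va, vb = a.astype(bool).sum(), b.astype(bool).sum()
    return 1.0 - abs(va - vb) / (va + vb) if (va + vb) else 1.0
```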


2019 · Vol 2019 · pp. 1–7
Author(s): Chen Huang, Junru Tian, Chenglang Yuan, Ping Zeng, Xueping He, ...

Objective. Deep vein thrombosis (DVT) is a disease caused by abnormal blood clots in deep veins. Accurate segmentation of DVT is important to facilitate diagnosis and treatment. In the current study, we proposed a fully automatic method of DVT delineation based on deep learning (DL) and contrast-enhanced magnetic resonance imaging (CE-MRI). Methods. Fifty-eight patients (25 males; aged 28–96 years) with newly diagnosed lower extremity DVT were recruited. CE-MRI was acquired on a 1.5 T system. The ground truth (GT) of the DVT lesions was manually contoured. A DL network with an encoder-decoder architecture was designed for DVT segmentation, and an 8-fold cross-validation strategy was applied for training and testing. The Dice similarity coefficient (DSC) was adopted to evaluate the network's performance. Results. Our CNN model took about 1.5 s to segment a single MRI slice. The mean DSC over the 58 patients was 0.74 ± 0.17 and the median DSC was 0.79. Compared with other DL models, our CNN model achieved better performance in DVT segmentation (0.74 ± 0.17 versus 0.66 ± 0.15, 0.55 ± 0.20, and 0.57 ± 0.22). Conclusion. Our proposed DL method was effective and fast for fully automatic segmentation of lower extremity DVT.
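The 8-fold cross-validation described above can be organized at the patient level so that no patient contributes slices to both training and test folds; the sketch below illustrates such a split with scikit-learn. The split granularity and the training/evaluation steps (marked as hypothetical) are assumptions for illustration, not details taken from the study.

```python
import numpy as np
from sklearn.model_selection import KFold

# Patient-level 8-fold split: each patient's slices stay within a single fold,
# so per-patient DSC on the test folds is not inflated by slice leakage.
patient_ids = np.arange(58)                          # 58 patients, as in the study
kf = KFold(n_splits=8, shuffle=True, random_state=0)

for fold, (train_idx, test_idx) in enumerate(kf.split(patient_ids), start=1):
    train_patients, test_patients = patient_ids[train_idx], patient_ids[test_idx]
    print(f"fold {fold}: {len(train_patients)} train / {len(test_patients)} test patients")
    # Hypothetical next steps: train the encoder-decoder on slices from
    # train_patients, then compute one DSC per patient in test_patients.
```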


2011 · Vol 314-316 · pp. 1571–1575
Author(s): Hui Kang Yang, Dean Zhao, Jin Liang Guo, Jun Zhang, Bo Chen

Manual segmentation of multi-beam waterjet cutting process drawings currently suffers from problems such as low efficiency and incomplete segmentation. To solve these problems, this paper designs a set of criteria for the multi-beam cutting process and proposes a partition-classification-trim method. Using the secondary development tools embedded in AutoCAD, three user-defined commands are programmed; these commands partition the drawing by region and classify and trim the entities in the cutting process drawing. By entering the three simple commands at the command line, the cutting process drawing can be segmented automatically. Finally, the feasibility of the method is demonstrated by a segmentation experiment. It effectively solves the problems of manual segmentation and greatly improves work efficiency.


2021 · Vol 22 (Supplement_2)
Author(s): S Alabed, K Karunasaagarar, F Alandejani, P Garg, J Uthoff, ...

Abstract. Funding Acknowledgements: Type of funding sources: Foundation. Main funding source(s): Wellcome Trust (UK), NIHR (UK). Introduction: Cardiac magnetic resonance (CMR) measurements have significant diagnostic and prognostic value. Accurate and repeatable measurements are essential to assess disease severity, evaluate therapy response and monitor disease progression. Deep learning approaches have shown promise for automatic left ventricular (LV) segmentation on CMR; however, fully automatic right ventricular (RV) segmentation remains challenging. We aimed to develop a biventricular automatic contouring model and evaluate its interstudy repeatability in a prospectively recruited cohort. Methods: A deep learning CMR contouring model was developed in a retrospective multi-vendor (Siemens and General Electric), multi-pathology cohort of patients, predominantly with heart failure, pulmonary hypertension and lung diseases (n = 400, ASPIRE registry). Biventricular segmentations were made on all CMR studies across cardiac phases. To test the accuracy of the automatic segmentation, 30 ASPIRE CMRs were segmented independently by two CMR experts. Each expert segmentation was compared to the automatic contouring, with agreement assessed using the Dice similarity coefficient (DSC). A prospective validation cohort of 46 subjects (10 healthy volunteers and 36 patients with pulmonary hypertension) was recruited to assess interstudy agreement of automatic and manual CMR assessments. Two CMR studies were performed during separate sessions on the same day. Interstudy repeatability was assessed using the intraclass correlation coefficient (ICC) and Bland-Altman plots. Results: DSC showed high agreement (figure 1) between automatic and expert CMR readers, with minimal bias towards either CMR expert. Scan-scan repeatability was higher for all automatic RV measurements (ICC 0.89 to 0.98) than for manual RV measurements (ICC 0.78 to 0.98). LV automatic and manual measurements were similarly repeatable (figure 2). Bland-Altman plots showed strong agreement, with small mean differences between the scan-scan measurements (figure 2). Conclusion: Fully automatic biventricular short-axis segmentations are comparable with expert manual segmentations and show excellent interstudy repeatability.
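For the interstudy (scan-scan) agreement reported above, Bland-Altman analysis reduces each paired measurement to a bias and 95% limits of agreement; a minimal sketch follows. The example values are invented for illustration and are not data from the study.

```python
import numpy as np

def bland_altman(scan1, scan2):
    """Bland-Altman bias and 95% limits of agreement for paired scan-scan measurements."""
    scan1, scan2 = np.asarray(scan1, float), np.asarray(scan2, float)
    diff = scan1 - scan2
    bias = diff.mean()
    sd = diff.std(ddof=1)
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# e.g. RV end-diastolic volume (mL) from two same-day CMR studies (invented numbers)
rv_edv_scan1 = [152, 180, 95, 210, 130]
rv_edv_scan2 = [148, 185, 97, 205, 133]
print(bland_altman(rv_edv_scan1, rv_edv_scan2))
```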


2021
Author(s): Pankaj Eknath Kasar, Shivajirao M. Jadhav, Vineet Kansal

Abstract. Tumor detection is a major challenge in quantitative brain tumor evaluation. In recent years, owing to its non-invasive nature and strong soft-tissue contrast, magnetic resonance imaging (MRI) has gained great interest. MRI is a commonly used imaging modality for locating brain tumors, and it produces an immense amount of data. Tumor heterogeneity and isointense and hypointense tumor properties make manual segmentation hard to complete in a reasonable time, restricting the use of reliable quantitative measures in clinical practice. In clinical practice, manual segmentation is also time consuming, and its performance depends heavily on the operator's experience. Accurate and automated tumor segmentation techniques are therefore needed; however, the severe spatial and structural heterogeneity of brain tumors makes automatic segmentation a difficult job. This paper proposes fully automatic segmentation of brain tumors using encoder-decoder based convolutional neural networks. The paper focuses on two well-known semantic segmentation deep neural networks, UNET and SEGNET, for segmenting tumors from brain MRI images. The networks are trained and tested on a freely accessible standard dataset, with the Dice similarity coefficient (DSC) as the metric for the whole predicted image, i.e., including tumor and background. UNET's average DSC on the test dataset is 0.76, whereas SEGNET achieves an average DSC of 0.67. The evaluation shows that UNET performs better than SEGNET.
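As a rough illustration of the encoder-decoder idea behind networks such as UNET and SEGNET (not the architectures evaluated in the paper), the following PyTorch sketch defines a single-level U-Net-style network with one skip connection; the channel counts and input size are arbitrary assumptions.

```python
import torch
import torch.nn as nn

class MiniUNet(nn.Module):
    """Minimal U-Net-style encoder-decoder (one down/up level, one skip connection)
    mapping a single-channel MRI slice to per-pixel tumor logits."""
    def __init__(self, ch=16):
        super().__init__()
        self.enc = nn.Sequential(nn.Conv2d(1, ch, 3, padding=1), nn.ReLU(),
                                 nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU())
        self.down = nn.MaxPool2d(2)
        self.bottom = nn.Sequential(nn.Conv2d(ch, ch * 2, 3, padding=1), nn.ReLU())
        self.up = nn.ConvTranspose2d(ch * 2, ch, 2, stride=2)
        self.dec = nn.Sequential(nn.Conv2d(ch * 2, ch, 3, padding=1), nn.ReLU(),
                                 nn.Conv2d(ch, 1, 1))          # 1x1 conv -> logits

    def forward(self, x):
        e = self.enc(x)                              # encoder features (kept for the skip)
        b = self.bottom(self.down(e))                # bottleneck at half resolution
        u = self.up(b)                               # upsample back to input resolution
        return self.dec(torch.cat([u, e], dim=1))    # concatenate skip, then decode

# Example: MiniUNet()(torch.randn(1, 1, 128, 128)).shape -> torch.Size([1, 1, 128, 128])
```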


Author(s): Jialiang Jiang, Yong Luo, Feng Wang, Yuchuan Fu, Hang Yu, ...

Purpose: To evaluate the accuracy and dosimetric effects of auto-segmentation of the CTV for GO in CT images based on an FCN. Methods: An FCN-8s network architecture for auto-segmentation was built based on Caffe. CT images of 121 patients with GO who had received radiotherapy at the West China Hospital of Sichuan University were randomly selected for training and testing. Two methods were used to segment the CTV of GO: treating the two-part CTV as a whole anatomical region, or considering the two parts of the CTV as two independent regions. The Dice similarity coefficient (DSC) and Hausdorff distance (HD) were used as evaluation criteria. The auto-segmented contours were imported into the original treatment plan to analyze the dosimetric characteristics. Results: The similarity comparison between manual and auto-segmented contours showed an average DSC value of up to 0.83. The maximum HD values for segmenting the two parts of the CTV separately were slightly smaller than for treating the CTV as one label (8.23 ± 2.80 vs. 9.03 ± 2.78). The dosimetric comparison between manual and auto-segmented contours showed a significant difference (p < 0.05), with a dose deficit for the auto-segmented CTV. Conclusion: Based on a deep learning architecture, the automatic segmentation model can perform the auto-contouring task well for small target areas. Treating separate parts of one target as different anatomical regions can help improve the auto-contouring quality. The dosimetric evaluation can provide different perspectives for further exploration of automatic contouring tools.
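The maximum Hausdorff distance used alongside the DSC above can be computed from two contour point clouds with SciPy; this is a generic sketch assuming contours exported as (N, 3) arrays of physical coordinates in mm, not the evaluation code from the study.

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def hausdorff_mm(points_a, points_b):
    """Symmetric (maximum) Hausdorff distance between two contour point clouds,
    each an (N, 3) array of physical coordinates in mm."""
    d_ab = directed_hausdorff(points_a, points_b)[0]
    d_ba = directed_hausdorff(points_b, points_a)[0]
    return max(d_ab, d_ba)
```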


2021 · Vol 11
Author(s): Xi Liu, Kai-Wen Li, Ruijie Yang, Li-Sheng Geng

Lung cancer is the leading cause of cancer-related mortality in both males and females. Radiation therapy (RT) is one of the primary treatment modalities for lung cancer. While delivering the prescribed dose to tumor targets, it is essential to spare the tissues near the targets, the so-called organs-at-risk (OARs). Optimal RT planning benefits from accurate segmentation of the gross tumor volume and surrounding OARs. Manual segmentation is a time-consuming and tedious task for radiation oncologists, so it is crucial to develop automatic image segmentation to relieve them of the tedious contouring work. Currently, the atlas-based automatic segmentation technique is commonly used in clinical routine. However, this technique depends heavily on the similarity between the atlas and the image being segmented. With significant advances in computer vision, deep learning, as a branch of artificial intelligence, is attracting increasing attention in medical image automatic segmentation. In this article, we reviewed deep learning based automatic segmentation techniques related to lung cancer and compared them with the atlas-based technique. At present, auto-segmentation of OARs with relatively large volumes, such as the lungs and heart, outperforms that of small organs such as the esophagus. The average Dice similarity coefficients (DSC) of the lungs, heart and liver are over 0.9, and the best DSC of the spinal cord reaches 0.9. However, the DSC of the esophagus ranges between 0.71 and 0.87, with inconsistent performance. For the gross tumor volume, the average DSC is below 0.8. Although deep learning based automatic segmentation shows significant superiority over manual segmentation in many aspects, various issues still need to be solved. We discussed potential issues in deep learning based automatic segmentation, including low contrast, dataset size, consensus guidelines, and network design. Clinical limitations and future research directions of deep learning based automatic segmentation were discussed as well.


2020
Author(s): Weiwei Ruan, Xun Sun, Xuehan Hu, Fang Liu, Fan Hu, ...

Abstract. Background: Quantitative analysis of brain positron-emission tomography (PET) depends on structural segmentation, which can be time-consuming and operator-dependent when performed manually. Previous automatic segmentation approaches usually registered subjects' images onto an atlas template (defined here as RSIAT) for group analysis, which changes the individuals' images and probably affects regional PET segmentation. In contrast, the atlas template can be registered to the subjects' images (RATSI), which creates an individualized atlas template and may be more accurate for PET segmentation. We segmented two representative brain areas in twenty Parkinson disease (PD) and eight multiple system atrophy (MSA) patients examined on a hybrid positron-emission tomography/magnetic resonance (PET/MR) system. Segmentation accuracy was evaluated using the Dice coefficient (DC) and Hausdorff distance (HD), and the standardized uptake value (SUV) measurements of the two automatic segmentation methods were compared, using manual segmentation as a reference. Results: The DC of RATSI increased and the HD decreased significantly (P < 0.05) compared with RSIAT in PD, while one-way analysis of variance (ANOVA) found no significant differences in SUVmean and SUVmax among the two automatic methods and manual segmentation. Further, RATSI was used to compare regional differences in cerebral metabolism between the PD and MSA patients. The SUVmean in the segmented cerebellar gray matter was significantly lower in the MSA group than in the PD group (P < 0.05), which is consistent with previous reports. Conclusion: RATSI was more accurate for automatic segmentation of the caudate nucleus and putamen and can be used for regional PET analysis in hybrid PET/MR.
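Given a segmented region from the individualized atlas, the SUVmean and SUVmax comparisons above reduce to masking the PET volume; the minimal sketch below assumes the PET data have already been converted to SUV units and resampled onto the same grid as the mask.

```python
import numpy as np

def region_suv(suv_volume, mask):
    """SUVmean and SUVmax of a PET SUV volume inside a binary region mask
    (e.g. a caudate nucleus or putamen segmentation)."""
    values = suv_volume[mask.astype(bool)]
    return values.mean(), values.max()
```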


2020 · pp. 028418512096391
Author(s): Jiachen Du, Peipeng Liang, Hongjian He, Qiqi Tong, Ting Gong, ...

Background: Multisite studies can considerably increase the pool of normally aging individuals and individuals with neurodegenerative disorders and thereby expedite the associated research. Understanding the reproducibility of the parameters of related brain structures, including the hippocampus, amygdala, and entorhinal cortex, in multisite studies is crucial for determining the impact of healthy aging or neurodegenerative diseases. Purpose: To estimate the reproducibility of these structures with automatic (FreeSurfer) and manual segmentation methods in a well-controlled multisite dataset. Material and Methods: Three traveling individuals were scanned at 10 sites equipped with the same hardware (3T Siemens Prisma), using the same scan protocol (two inversion-contrast magnetization-prepared rapid gradient echo sequences) and operators. Validity coefficients (intraclass correlation coefficient [ICC]) and spatial overlap measures (Dice similarity coefficient [DSC]) were used to estimate the reproducibility of the multisite data. Results: ICC and DSC values varied substantially among structures and segmentation methods, and the values for manual tracing were relatively higher than those for the automated method. With manual tracing, ICC and DSC values of the structural parameters were greater than 0.80 and 0.60 across sites. Low reproducibility was observed for the amygdala parameters with the automatic segmentation method (ICC = 0.349–0.529, DSC = 0.380–0.873). However, ICC and DSC scores of the hippocampus were higher than 0.60 and 0.65 with both segmentation methods. Conclusion: This study suggests that a well-controlled multisite study can provide a reliable MRI dataset. Manual tracing is recommended for volume assessment of low-reproducibility structures that require high levels of precision in multisite studies.
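For reference, one common way to compute an ICC for such data is the two-way random-effects, absolute-agreement, single-measurement form ICC(2,1) from the ANOVA mean squares; the paper does not state which ICC form was used, so the sketch below is an assumption, organized as a subjects x sites matrix.

```python
import numpy as np

def icc_2_1(data):
    """ICC(2,1): two-way random effects, absolute agreement, single measurement.
    `data` is an (n subjects x k sites) array of one structural parameter."""
    data = np.asarray(data, float)
    n, k = data.shape
    grand = data.mean()
    ssr = k * ((data.mean(axis=1) - grand) ** 2).sum()   # between-subject sum of squares
    ssc = n * ((data.mean(axis=0) - grand) ** 2).sum()   # between-site sum of squares
    sse = ((data - grand) ** 2).sum() - ssr - ssc        # residual sum of squares
    msr, msc = ssr / (n - 1), ssc / (k - 1)
    mse = sse / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)
```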


2020 · Vol 10 (10) · pp. 3360
Author(s): Mizuho Nishio, Shunjiro Noguchi, Koji Fujimoto

Combinations of data augmentation methods and deep learning architectures for automatic pancreas segmentation on CT images are proposed and evaluated. Images from a public CT dataset for pancreas segmentation were used to evaluate the models. A baseline U-net and a deep U-net were chosen as the deep learning models for pancreas segmentation. The data augmentation methods included conventional methods, mixup, and random image cropping and patching (RICAP). Ten combinations of the deep learning models and the data augmentation methods were evaluated. Four-fold cross-validation was performed to train and evaluate these models with the data augmentation methods. The Dice similarity coefficient (DSC) was calculated between the automatic segmentation results and the manually annotated labels, and the results were also visually assessed by two radiologists. The performance of the deep U-net was better than that of the baseline U-net, with mean DSCs of 0.703–0.789 and 0.686–0.748, respectively. For both the baseline U-net and the deep U-net, the methods with data augmentation performed better than those without, and mixup and RICAP were more useful than the conventional methods. The best mean DSC was obtained using the combination of the deep U-net, mixup, and RICAP, and the two radiologists scored the results from this model as good or perfect in 76 and 74 of the 82 cases, respectively.
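Of the augmentation methods named above, mixup is the simplest to sketch: two image-label pairs are combined with a Beta-distributed weight, producing soft labels for training. The snippet below is a generic illustration; alpha = 0.2 is a typical default rather than a value reported in the study, and RICAP (which crops and patches four images) is omitted.

```python
import numpy as np

def mixup_pair(img1, mask1, img2, mask2, alpha=0.2, rng=np.random):
    """Mixup for segmentation: convexly combine two images and their binary
    (or one-hot) label maps with a Beta(alpha, alpha)-distributed weight."""
    lam = rng.beta(alpha, alpha)
    img = lam * img1 + (1.0 - lam) * img2
    mask = lam * mask1 + (1.0 - lam) * mask2   # soft labels; train with a soft loss
    return img, mask
```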

