shape priors
Recently Published Documents

Total documents: 269 (five years: 41)
H-index: 25 (five years: 3)

2021
Author(s): Jingwen Wang, Martin Runz, Lourdes Agapito

2021, Vol 3
Author(s): Pulkit Khandelwal, D. Louis Collins, Kaleem Siddiqi

The surgical treatment of injuries to the spine often requires the placement of pedicle screws. To prevent damage to nearby blood vessels and nerves, the individual vertebrae and their surrounding tissue must be precisely localized. To aid surgical planning in this context, we present a clinically applicable geometric-flow-based method to segment the human spinal column from computed tomography (CT) scans. We first apply anisotropic diffusion and flux computation to mitigate the effects of region inhomogeneities and partial volume effects at vertebral boundaries in such data. The first pipeline of our segmentation approach uses a region-based geometric flow, requires only a single manually identified seed point to initiate, and runs efficiently on a multi-core central processing unit (CPU). A shape-prior formulation is employed in a separate second pipeline to segment individual vertebrae, using both region- and boundary-based terms to augment the initial segmentation. We validate our method on four different clinical databases, each of which has a distinct intensity distribution. Our approach obviates the need for manual segmentation, significantly reduces inter- and intra-observer differences, runs in times compatible with a clinical workflow, achieves Dice scores comparable to the state of the art, and yields precise vertebral surfaces that are well within the 2 mm mark acceptable for surgical interventions.
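As a rough illustration of this kind of pipeline (not the authors' implementation), the sketch below combines Perona-Malik anisotropic diffusion with a Chan-Vese-style region-based level-set evolution started from a single seed point on a 2D CT slice; the function names, parameter values, and seed coordinates are illustrative assumptions.

```python
import numpy as np
from scipy import ndimage as ndi

def anisotropic_diffusion(img, n_iter=20, kappa=30.0, dt=0.15):
    """Perona-Malik diffusion: smooth homogeneous regions while preserving edges."""
    u = img.astype(float).copy()
    for _ in range(n_iter):
        # differences to the four axis-aligned neighbours
        diffs = [np.roll(u, s, axis) - u for axis in (0, 1) for s in (-1, 1)]
        # edge-stopping function g(d) = exp(-(d / kappa)^2) damps diffusion across edges
        u += dt * sum(np.exp(-(d / kappa) ** 2) * d for d in diffs)
    return u

def region_based_flow(img, seed, n_iter=200, dt=0.5, mu=1.0):
    """Chan-Vese-like evolution of a level set initialised at a single seed point."""
    mask = np.zeros(img.shape, bool)
    mask[seed] = True
    mask = ndi.binary_dilation(mask, iterations=3)         # small blob around the seed
    phi = np.where(mask, 1.0, -1.0)                        # positive inside, negative outside
    for _ in range(n_iter):
        inside, outside = phi > 0, phi <= 0
        c1 = img[inside].mean() if inside.any() else 0.0   # mean intensity inside
        c2 = img[outside].mean() if outside.any() else 0.0 # mean intensity outside
        # region force pulls the front toward the better-matching mean;
        # the Laplacian term acts as a crude curvature/smoothness regulariser
        force = (img - c2) ** 2 - (img - c1) ** 2 + mu * ndi.laplace(phi)
        phi += dt * force / (np.abs(force).max() + 1e-8)
    return phi > 0

# Example usage on a 2D CT slice (hypothetical seed coordinates):
# vertebra_mask = region_based_flow(anisotropic_diffusion(ct_slice), seed=(120, 200))
```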


Author(s): Haigen Hu, Aizhu Liu, Qianwei Zhou, Qiu Guan, Xiaoxin Li, ...

Author(s): Andrei Iantsen, Marta Ferreira, Francois Lucia, Vincent Jaouen, Caroline Reinhold, ...

Abstract

Purpose: In this work, we addressed the fully automatic determination of tumor functional uptake from positron emission tomography (PET) images, without relying on other image modalities or additional prior constraints, in the context of multicenter images with heterogeneous characteristics.

Methods: In cervical cancer, an additional challenge is that the tumor uptake is often located close to, or even merged with, the bladder. PET datasets of 232 patients from five institutions were exploited. To avoid unreliable manual delineations, the ground truth was generated with a semi-automated approach: a volume containing the tumor and excluding the bladder was first manually determined, then a well-validated, semi-automated approach relying on the Fuzzy Locally Adaptive Bayesian (FLAB) algorithm was applied to generate the ground truth. Our model, built on the U-Net architecture, incorporates residual blocks with concurrent spatial squeeze-and-excitation modules, as well as learnable non-linear downsampling and upsampling blocks. Experiments relied on cross-validation (four institutions for training and validation, and the fifth for testing).

Results: The model achieved a good Dice similarity coefficient (DSC) with little variability across institutions (0.80 ± 0.03), with higher recall (0.90 ± 0.05) than precision (0.75 ± 0.05), and improved results over the standard U-Net (DSC 0.77 ± 0.05, recall 0.87 ± 0.02, precision 0.74 ± 0.08). Both vastly outperformed a fixed threshold at 40% of SUVmax (DSC 0.33 ± 0.15, recall 0.52 ± 0.17, precision 0.30 ± 0.16). In all cases, the model could determine the tumor uptake without including the bladder. Neither shape priors nor anatomical information was required to achieve efficient training.

Conclusion: The proposed method could facilitate the deployment of a fully automated radiomics pipeline in such a challenging multicenter context.
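For illustration only, the following PyTorch sketch shows a residual block with a concurrent spatial and channel squeeze-and-excitation (scSE) module, the kind of building block this abstract describes for its U-Net variant; it is not the authors' released code, and the channel counts, normalisation, and activation choices are assumptions.

```python
import torch
import torch.nn as nn

class SCSE(nn.Module):
    """Concurrent spatial and channel squeeze-and-excitation recalibration."""
    def __init__(self, channels, reduction=2):
        super().__init__()
        # channel SE: global average pool -> bottleneck -> per-channel gate
        self.cse = nn.Sequential(
            nn.AdaptiveAvgPool3d(1),
            nn.Conv3d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv3d(channels // reduction, channels, kernel_size=1),
            nn.Sigmoid(),
        )
        # spatial SE: 1x1x1 convolution -> per-voxel gate
        self.sse = nn.Sequential(nn.Conv3d(channels, 1, kernel_size=1), nn.Sigmoid())

    def forward(self, x):
        return x * self.cse(x) + x * self.sse(x)

class ResidualSCSEBlock(nn.Module):
    """Residual convolution block followed by scSE, usable inside a 3D U-Net."""
    def __init__(self, channels):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv3d(channels, channels, kernel_size=3, padding=1),
            nn.InstanceNorm3d(channels),
            nn.LeakyReLU(inplace=True),
            nn.Conv3d(channels, channels, kernel_size=3, padding=1),
            nn.InstanceNorm3d(channels),
        )
        self.scse = SCSE(channels)
        self.act = nn.LeakyReLU(inplace=True)

    def forward(self, x):
        return self.act(self.scse(self.body(x) + x))

# Smoke test on a small 3D PET patch: output shape matches the input.
# y = ResidualSCSEBlock(32)(torch.randn(1, 32, 16, 64, 64))
```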


2021, Vol 40 (1), pp. 53-63
Author(s): Xin Sun, Dong Li, Wei Wang, Hongxun Yao, Dongliang Xu, ...

We present a novel graph cut method for iterated segmentation of objects with a specific shape bias (SBGC). In contrast with conventional graph cut models, which emphasize regional appearance only, the proposed SBGC takes the shape preference of the object of interest into account to drive the segmentation. SBGC can therefore converge more accurately to the object of interest, even in complicated conditions where appearance cues alone are inadequate for object/background discrimination. In particular, we first evaluate a candidate segmentation by simultaneously considering its global shape and local edge consistency with the object shape priors. These two cues are then formulated into a graph cut framework to seek the optimal segmentation that maximizes both the global and local measurements. By iterating the optimization, SBGC jointly estimates the optimal segmentation and the most likely object shape encoded by the shape priors, and eventually converges to the candidate result with maximum consistency between these two estimates. Finally, we take elliptical objects posing various segmentation challenges as examples for evaluation. Competitive results compared with state-of-the-art methods validate the effectiveness of the technique.
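A minimal sketch of the general idea, not the paper's SBGC formulation: one graph-cut step whose unary costs mix per-pixel appearance log-likelihoods with a penalty derived from a current shape estimate. It assumes the PyMaxflow library; log_fg, log_bg, shape_sdf, lam, and beta are hypothetical inputs and weights.

```python
import numpy as np
import maxflow  # PyMaxflow

def shape_biased_cut(log_fg, log_bg, shape_sdf, lam=1.0, beta=5.0):
    """One graph-cut step with appearance plus shape-prior unary terms.

    log_fg / log_bg: per-pixel appearance log-probabilities for object / background
                     (assumed non-positive, so the derived costs stay non-negative).
    shape_sdf:       signed distance to the current shape estimate (negative inside).
    """
    g = maxflow.Graph[float]()
    nodes = g.add_grid_nodes(log_fg.shape)
    g.add_grid_edges(nodes, beta)                 # 4-connected smoothness term
    # Unary costs: labelling a pixel "object" is penalised outside the prior shape,
    # labelling it "background" is penalised inside the prior shape.
    cost_fg = -log_fg + lam * np.maximum(shape_sdf, 0.0)
    cost_bg = -log_bg + lam * np.maximum(-shape_sdf, 0.0)
    # Source capacities are paid by pixels cut to the sink (object) side, and vice versa.
    g.add_grid_tedges(nodes, cost_fg, cost_bg)
    g.maxflow()
    return g.get_grid_segments(nodes)             # True marks object pixels

# In an iterated scheme, one would refit the shape model (e.g. the best-fitting
# ellipse of the current mask), recompute shape_sdf, and cut again until stable.
```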


2020, Vol 2020 (48), pp. 86-91
Author(s): T.S. Mandziy

An approach to an efficient level-set model with shape priors for image segmentation is considered. The use of an edge-based level-set model in combination with shape priors based on principal component analysis (PCA) for image segmentation is investigated. Shape priors are considered as a tool to cope with the proper segmentation of overlapping or partially visible objects in the input image. It is argued that in some cases sequential optimization of different groups of parameters can be advantageous compared to simultaneous optimization of all parameters. The approach was applied to the segmentation of fractographic images obtained with a scanning electron microscope (SEM). Experimental results for image segmentation using the level-set model with shape priors are presented.
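As a sketch of a PCA-based shape prior of the kind described above (assuming pre-aligned training masks; not the paper's implementation), training shapes can be embedded as signed distance functions, a low-dimensional PCA basis learned from them, and an evolving level set projected onto that basis to bias the segmentation toward plausible shapes.

```python
import numpy as np
from scipy import ndimage as ndi

def signed_distance(mask):
    """Signed distance function: negative inside the object, positive outside."""
    inside = ndi.distance_transform_edt(mask)
    outside = ndi.distance_transform_edt(~mask)
    return outside - inside

def build_pca_shape_model(masks, n_modes=5):
    """Learn the mean shape and principal modes from aligned binary training masks."""
    X = np.stack([signed_distance(m).ravel() for m in masks])  # (n_shapes, n_pixels)
    mean = X.mean(axis=0)
    U, S, Vt = np.linalg.svd(X - mean, full_matrices=False)
    modes = Vt[:n_modes]                                       # principal shape variations
    return mean, modes

def project_to_prior(phi, mean, modes):
    """Project a level-set function onto the learned shape subspace."""
    coeffs = modes @ (phi.ravel() - mean)
    return (mean + modes.T @ coeffs).reshape(phi.shape)
```

During segmentation, nudging the evolving level set toward its subspace reconstruction penalises shapes the training set does not support, which is what makes overlapping or partially visible objects recoverable.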

