Bone Segmentation
Recently Published Documents

Total documents: 164 (five years: 14)
H-index: 16 (five years: 0)



2021 · Vol 9 (11) · pp. 232596712110469
Author(s): Guodong Zeng, Celia Degonda, Adam Boschung, Florian Schmaranzer, Nicolas Gerber, et al.

Background: Dynamic 3-dimensional (3D) simulation of hip impingement enables better understanding of complex hip deformities in young adult patients with femoroacetabular impingement (FAI). Deep learning algorithms may improve magnetic resonance imaging (MRI) segmentation. Purpose: (1) To evaluate the accuracy of 3D models created using convolutional neural networks (CNNs) for fully automatic MRI bone segmentation of the hip joint, (2) to correlate hip range of motion (ROM) between manual and automatic segmentation, and (3) to compare the location of hip impingement in 3D models created using automatic bone segmentation in patients with FAI. Study Design: Cohort study (diagnosis); Level of evidence, 3. Methods: The authors retrospectively reviewed 31 hip MRI scans from 26 symptomatic patients (mean age, 27 years) with hip pain due to FAI. All patients had matched computed tomography (CT) and MRI scans of the pelvis and the knee. CT- and MRI-based osseous 3D models of the hip joint of the same patients were compared (MRI: T1 volumetric interpolated breath-hold examination high-resolution sequence; 0.8 mm³ isovoxels). CNNs were used to develop fully automatic bone segmentation of the hip joint, and the 3D models created using this method were compared with manually segmented CT- and MRI-based 3D models. Impingement-free ROM and the location of hip impingement were calculated using previously validated collision detection software. Results: The difference between the CT- and MRI-based 3D models was <1 mm, as was the difference between fully automatic and manual segmentation of MRI-based 3D models. The correlation between automatic and manual MRI-based 3D models was excellent and significant for impingement-free ROM (r = 0.995; P < .001), flexion (r = 0.953; P < .001), and internal rotation at 90° of flexion (r = 0.982; P < .001). The correlation for impingement-free flexion between automatic MRI-based and CT-based 3D models was 0.953 (P < .001). The location of impingement did not differ significantly between manual and automatic segmentation of MRI-based 3D models, and the location of extra-articular hip impingement did not differ between CT- and MRI-based 3D models. Conclusion: CNNs can potentially be used in clinical practice to provide rapid and accurate 3D MRI hip joint models for young patients. The created models can be used to simulate impingement during diagnosis of intra- and extra-articular hip impingement, enabling radiation-free and patient-specific surgical planning for hip arthroscopy and open hip preservation surgery.
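As a minimal illustration of how the sub-millimetre agreement reported above can be quantified, the sketch below computes a symmetric mean surface distance between two binary 3D bone masks. The metric choice and all function names are illustrative assumptions; the study's actual comparison pipeline is not specified in the abstract beyond the collision detection software.

```python
import numpy as np
from scipy.ndimage import binary_erosion, distance_transform_edt

def surface_voxels(mask: np.ndarray) -> np.ndarray:
    """Boolean map of the boundary voxels of a binary mask."""
    return mask & ~binary_erosion(mask)

def mean_surface_distance(a: np.ndarray, b: np.ndarray,
                          spacing=(0.8, 0.8, 0.8)) -> float:
    """Symmetric mean surface distance (mm) between two binary masks.

    `spacing` is the voxel size in mm; 0.8 mm isovoxels match the MRI
    protocol described in the abstract.
    """
    sa, sb = surface_voxels(a), surface_voxels(b)
    # Distance from each surface voxel of one mask to the nearest
    # surface voxel of the other mask, in millimetres.
    d_ab = distance_transform_edt(~sb, sampling=spacing)[sa]
    d_ba = distance_transform_edt(~sa, sampling=spacing)[sb]
    return float((d_ab.sum() + d_ba.sum()) / (d_ab.size + d_ba.size))

# Synthetic check: two spherical "bone" masks whose radii differ by one voxel.
z, y, x = np.ogrid[:64, :64, :64]
r2 = (z - 32) ** 2 + (y - 32) ** 2 + (x - 32) ** 2
manual = r2 <= 20 ** 2
automatic = r2 <= 21 ** 2
print(f"mean surface distance: {mean_surface_distance(manual, automatic):.2f} mm")
```

On real data, the same function would be applied to the manual and CNN-predicted masks after resampling both to a common voxel grid.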



2021
Author(s): Imane Zaimi, Nabila Zrira, Ibtissam Benmiloud, Imad Marzak, Kawtar Megdiche, et al.


2021
Author(s): Yaopeng Peng, Hao Zheng, Fahim Zaman, Lichun Zhang, Xiaodong Wu, et al.

Knee cartilage and bone segmentation is critical for physicians to analyze and diagnose articular damage and knee osteoarthritis (OA). Deep learning (DL) methods for medical image segmentation have largely outperformed traditional methods, but they often need large amounts of annotated data for model training, which is very costly and time-consuming for medical experts, especially for 3D images. In this paper, we report a new knee cartilage and bone segmentation framework, KCB-Net, for 3D MR images based on sparse annotation. KCB-Net selects a small subset of slices from a 3D image for annotation and seeks to bridge the performance gap between sparse and full annotation. Specifically, it first identifies a subset of the most effective and representative slices with an unsupervised scheme; it then trains an ensemble model using the annotated slices; next, it self-trains the model on 3D images with pseudo-labels generated by the ensemble and refined by a bi-directional hierarchical earth mover’s distance (bi-HEMD) algorithm; finally, it fine-tunes the segmentation results using a primal-dual interior point method (IPM). Experiments on two 3D MR knee joint datasets (the Iowa dataset and the iMorphics dataset) show that our new framework outperforms state-of-the-art methods under full annotation, and yields high-quality results even for annotation ratios as low as 5%.
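The abstract describes the unsupervised slice-selection step only at a high level. The sketch below is one plausible instantiation, assumed for illustration rather than taken from KCB-Net: cluster slices by appearance and annotate the slice nearest each cluster centre, using the 5% annotation ratio quoted above.

```python
import numpy as np
from sklearn.cluster import KMeans

def select_slices_for_annotation(volume: np.ndarray, ratio: float = 0.05,
                                 seed: int = 0) -> list[int]:
    """Pick a small, diverse subset of slices to annotate.

    Clusters slices by appearance and returns the slice closest to each
    cluster centre, so the annotated subset covers the volume's variability.
    `volume` has shape (slices, height, width).
    """
    n_slices = volume.shape[0]
    k = max(1, round(ratio * n_slices))
    # Crude per-slice features: downsampled, standardized intensities.
    feats = volume[:, ::8, ::8].reshape(n_slices, -1).astype(np.float32)
    feats = (feats - feats.mean(axis=0)) / (feats.std(axis=0) + 1e-8)
    km = KMeans(n_clusters=k, n_init=10, random_state=seed).fit(feats)
    picks = []
    for c in range(k):
        dist = np.linalg.norm(feats - km.cluster_centers_[c], axis=1)
        picks.append(int(dist.argmin()))
    return sorted(set(picks))

# Example: a 160-slice knee MR volume at a 5% ratio yields 8 slices.
volume = np.random.rand(160, 384, 384).astype(np.float32)
print(select_slices_for_annotation(volume))
```

The ensemble training, pseudo-labeling, bi-HEMD refinement, and IPM fine-tuning stages would then build on the model trained from these annotated slices.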




2021 · Vol 2024 (1) · pp. 012033
Author(s): Songze Zhang, Benxiang Jiang, Hongjian Shi


Author(s): Amir Faisal, Azira Khalil, Hum Yan Chai, Khin Wee Lai


2021 · Vol 12 (1)
Author(s): Xiang Liu, Chao Han, He Wang, Jingyun Wu, Yingpu Cui, et al.

Background: Accurate segmentation of the pelvic bones is an initial step toward accurate detection and localisation of pelvic bone metastases. This study presents a deep learning-based approach for automated segmentation of normal pelvic bony structures in multiparametric magnetic resonance imaging (mpMRI) using a 3D convolutional neural network (CNN). Methods: This retrospective study included 264 pelvic mpMRI scans obtained between 2018 and 2019. Manual annotations of the pelvic bony structures (lumbar vertebrae, sacrococcyx, ilium, acetabulum, femoral head, femoral neck, ischium, and pubis) on diffusion-weighted imaging (DWI) and apparent diffusion coefficient (ADC) images were used to create the reference standard. A 3D U-Net CNN was employed for automatic pelvic bone segmentation. An additional 60 mpMRI scans from 2020 were used to evaluate the model externally. Results: The CNN achieved a high average Dice similarity coefficient (DSC) in both the testing set (0.80 for DWI images and 0.85 for ADC images) and the external validation set (0.79 for DWI images and 0.84 for ADC images). Pelvic bone volumes measured with manual and CNN-predicted segmentations were highly correlated (R² of 0.84–0.97) and in close agreement (mean bias of 2.6–4.5 cm³). A SCORE system was designed for qualitative evaluation of the model; both the testing and external validation sets achieved high scores, with good concordance between the two readers (ICC = 0.904; 95% confidence interval: 0.871–0.929). Conclusions: A deep learning-based method can achieve automated pelvic bone segmentation on DWI and ADC images with suitable quantitative and qualitative performance.
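As a simple illustration of the quantitative metrics reported above, the sketch below computes the Dice similarity coefficient and the volume bias (in cm³) between a reference and a predicted binary mask. The masks and voxel spacing are synthetic placeholders, not the study's data.

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, ref: np.ndarray) -> float:
    """Dice similarity coefficient (DSC) between two binary masks."""
    intersection = np.logical_and(pred, ref).sum()
    denom = pred.sum() + ref.sum()
    return 2.0 * intersection / denom if denom else 1.0

def volume_cm3(mask: np.ndarray, spacing_mm=(1.0, 1.0, 1.0)) -> float:
    """Segmented volume in cm^3, given the voxel spacing in mm."""
    return float(mask.sum()) * float(np.prod(spacing_mm)) / 1000.0

# Synthetic reference and prediction, shifted by one voxel along one axis.
ref = np.zeros((64, 64, 64), dtype=bool)
ref[20:44, 20:44, 20:44] = True
pred = np.zeros_like(ref)
pred[21:45, 20:44, 20:44] = True

spacing = (2.0, 2.0, 2.0)  # illustrative voxel size in mm
print(f"DSC: {dice_coefficient(pred, ref):.3f}")
print(f"volume bias: {volume_cm3(pred, spacing) - volume_cm3(ref, spacing):.2f} cm^3")
```

Per-structure DSC values (e.g., for the ilium or femoral head) would be obtained by applying the same function to each labelled class separately.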



Author(s):  
Ahmed S. Maklad ◽  
Hassan Hashem ◽  
Mikio Matsuhiro ◽  
Hidenobu Suzuki ◽  
Noboru Niki

