KCB-Net: A 3D Knee Cartilage and Bone Segmentation Network via Sparse Annotation

Author(s):  
Yaopeng Peng ◽  
Hao Zheng ◽  
Fahim Zaman ◽  
Lichun Zhang ◽  
Xiaodong Wu ◽  
...  

<div>Knee cartilage and bone segmentation is critical for physicians to analyze and diagnose articular damage and knee osteoarthritis (OA). Deep learning (DL) methods for medical image segmentation have largely outperformed traditional methods, but they often need large amounts of annotated data for model training, which is very costly and time-consuming for medical experts, especially for 3D images. In this paper, we report a new knee cartilage and bone segmentation framework, KCB-Net, for 3D MR images based on sparse annotation. KCB-Net selects a small subset of slices from 3D images for annotation and seeks to bridge the performance gap between sparse annotation and full annotation. Specifically, it first identifies a subset of the most effective and representative slices with an unsupervised scheme; it then trains an ensemble model using the annotated slices; next, it self-trains the model using 3D images containing pseudo-labels generated by the ensemble method and improved by a bi-directional hierarchical earth mover’s distance (bi-HEMD) algorithm; finally, it fine-tunes the segmentation results using the primal-dual Interior Point Method (IPM). Experiments on two 3D MR knee joint datasets (the Iowa dataset and the iMorphics dataset) show that our new framework outperforms state-of-the-art methods using full annotation, and yields high-quality results even for annotation ratios as low as 5%.<br></div>
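The unsupervised slice-selection step can be pictured as clustering cheap per-slice descriptors and annotating the slice nearest each cluster center. The sketch below is illustrative only: the intensity-histogram features and plain k-means are assumptions for exposition, not the scheme actually used by KCB-Net.

```python
def slice_histogram(slice_2d, bins=16, lo=0.0, hi=1.0):
    """Normalized intensity histogram, used as a cheap per-slice descriptor."""
    counts = [0] * bins
    n = 0
    for row in slice_2d:
        for v in row:
            idx = min(bins - 1, int((v - lo) / (hi - lo) * bins))
            counts[idx] += 1
            n += 1
    return [c / n for c in counts]

def kmeans(feats, k, iters=20):
    """Plain k-means; deterministic init from the first k distinct features."""
    centers = []
    for f in feats:
        if f not in centers:
            centers.append(list(f))
        if len(centers) == k:
            break
    k = len(centers)  # may shrink if there are fewer distinct features
    assign = [0] * len(feats)
    for _ in range(iters):
        for i, f in enumerate(feats):
            assign[i] = min(range(k),
                            key=lambda c: sum((a - b) ** 2
                                              for a, b in zip(f, centers[c])))
        for c in range(k):
            members = [feats[i] for i in range(len(feats)) if assign[i] == c]
            if members:
                centers[c] = [sum(col) / len(members) for col in zip(*members)]
    return centers, assign

def select_slices(volume, ratio=0.05):
    """Pick ~ratio of the slices for annotation: the one nearest each center."""
    feats = [slice_histogram(s) for s in volume]
    k = max(1, round(len(volume) * ratio))
    centers, assign = kmeans(feats, k)
    picked = []
    for c in range(len(centers)):
        members = [i for i in range(len(feats)) if assign[i] == c]
        if members:
            picked.append(min(members,
                              key=lambda i: sum((a - b) ** 2
                                                for a, b in zip(feats[i], centers[c]))))
    return sorted(picked)
```

With a 5% annotation ratio, a 40-slice volume yields two representative slices, one per cluster of visually similar slices.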

2021 ◽  


1982 ◽  
Vol 164 (2) ◽  
pp. 157-172 ◽  
Author(s):  
Ida J. Stockman ◽  
Fay Boyd Vaughn-Cooke

A review of the literature on the language of working-class Black children revealed that only a small subset of this research has focused on the acquisition and development of linguistic knowledge. Consequently, major gaps exist in what is known about the linguistic abilities of these children. In an attempt to narrow these gaps, a team of researchers has initiated a large-scale longitudinal and cross-sectional study of the acquisition of language by working-class Black children at the Center for Applied Linguistics in Washington, D.C. This paper presents a detailed description of the Center study. It also critically evaluates existing research on the linguistic abilities of working-class Black children and its impact on language acquisition studies focusing on this population. The evaluation reveals that a new framework for analyzing the language of working-class Black children should be adopted. The theory and methodology of the new framework are described.


2011 ◽  
Vol 115 (12) ◽  
pp. 1710-1720 ◽  
Author(s):  
Soochahn Lee ◽  
Sang Hyun Park ◽  
Hackjoon Shim ◽  
Il Dong Yun ◽  
Sang Uk Lee

2020 ◽  
Vol 34 (04) ◽  
pp. 6925-6932 ◽  
Author(s):  
Hao Zheng ◽  
Yizhe Zhang ◽  
Lin Yang ◽  
Chaoli Wang ◽  
Danny Z. Chen

Image segmentation is critical to many medical applications. While deep learning (DL) methods continue to improve performance on many medical image segmentation tasks, data annotation is a major bottleneck for DL-based segmentation because (1) DL models tend to need a large amount of labeled data to train, and (2) labeling 3D medical images voxel-wise is highly time-consuming and labor-intensive. Significantly reducing annotation effort while attaining good performance of DL segmentation models remains a major challenge. In our preliminary experiments, we observe that, with partially labeled datasets, there is indeed a large performance gap with respect to using fully annotated training datasets. In this paper, we propose a new DL framework for reducing annotation effort and bridging the gap between full annotation and sparse annotation in 3D medical image segmentation. We achieve this by (i) selecting representative slices in 3D images that minimize data redundancy and save annotation effort, and (ii) self-training with pseudo-labels automatically generated from the base-models trained using the selected annotated slices. Extensive experiments on two public datasets (the HVSMR 2016 Challenge dataset and the mouse piriform cortex dataset) show that our framework yields segmentation results competitive with state-of-the-art DL methods while using less than ∼20% of the annotated data.
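The self-training step described above can be sketched with a toy stand-in model: train on the annotated data, pseudo-label the unlabeled data, keep only the confident pseudo-labels, and retrain on the enlarged set. The nearest-centroid classifier and the margin-based confidence below are illustrative assumptions, not the paper's DL base-models or its pseudo-label refinement.

```python
def fit_centroids(xs, ys):
    """One centroid (mean feature) per class."""
    groups = {}
    for x, y in zip(xs, ys):
        groups.setdefault(y, []).append(x)
    return {y: sum(v) / len(v) for y, v in groups.items()}

def predict(centroids, x):
    """Return (label, confidence); confidence is the gap between the
    distances to the two nearest class centroids."""
    ranked = sorted((abs(x - c), y) for y, c in centroids.items())
    label = ranked[0][1]
    conf = ranked[1][0] - ranked[0][0] if len(ranked) > 1 else float("inf")
    return label, conf

def self_train(lab_x, lab_y, unlab_x, rounds=3, min_conf=0.5):
    """Grow the labeled set with confident pseudo-labels, then refit."""
    xs, ys = list(lab_x), list(lab_y)
    for _ in range(rounds):
        centroids = fit_centroids(xs, ys)
        confident_x, confident_y, remaining = [], [], []
        for x in unlab_x:
            y, conf = predict(centroids, x)
            if conf >= min_conf:
                confident_x.append(x)
                confident_y.append(y)
            else:
                remaining.append(x)
        if not confident_x:          # nothing new to learn from; stop early
            break
        xs += confident_x
        ys += confident_y
        unlab_x = remaining
    return fit_centroids(xs, ys)
```

Note the design choice: ambiguous samples (small margin between the two nearest centroids) are withheld from retraining rather than risk reinforcing an error, which is the usual guard in self-training loops.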


2016 ◽  
Vol 125 (1) ◽  
pp. 46-52 ◽  
Author(s):  
Binsheng You ◽  
Yanhao Cheng ◽  
Jian Zhang ◽  
Qimin Song ◽  
Chao Dai ◽  
...  

OBJECT The goal of this study was to investigate the significance of contrast-enhanced T1-weighted (T1W) MRI-based 3D reconstruction of dural tail sign (DTS) in meningioma resection. METHODS Between May 2013 and August 2014, 18 cases of convexity and parasagittal meningiomas showing DTS on contrast-enhanced T1W MRI were selected. Contrast-enhanced T1W MRI-based 3D reconstruction of DTS was conducted before surgical treatment. The vertical and anteroposterior diameters of DTS on the contrast-enhanced T1W MR images and 3D reconstruction images were measured and compared. Surgical incisions were designed by referring to the 3D reconstruction and MR images, and then the efficiency of the 2 methods was evaluated with assistance of neuronavigation. RESULTS Three-dimensional reconstruction of DTS can reveal its overall picture. In most cases, the DTS around the tumor is uneven, whereas the DTS around the dural vessels presents longer extensions. There was no significant difference (p > 0.05) between the vertical and anteroposterior diameters of DTS measured on the contrast-enhanced T1W MR and 3D reconstruction images. The 3D images of DTS were more intuitive, and the overall picture of DTS could be revealed in 1 image, which made it easier to design the incision than by using the MR images. Meanwhile, assessment showed that the incisions designed using 3D images were more accurate than those designed using MR images (ridit analysis by SAS, F = 7.95; p = 0.008). Pathological examination showed that 34 dural specimens (except 2 specimens from 1 tumor) displayed tumor invasion. The distance of tumor cell invasion was 1.0–21.6 mm (5.4 ± 4.41 mm [mean ± SD]). Tumor cell invasion was not observed at the dural resection margin in all 36 specimens. CONCLUSIONS Contrast-enhanced T1W MRI-based 3D reconstruction can intuitively and accurately reveal the size and shape of DTS, and thus provides guidance for designing meningioma incisions.


2021 ◽  
Vol 1 (2) ◽  
Author(s):  
Bharath Balaji R ◽  
Pradepp K V

The segmentation, identification, and extraction of infected tumour regions from magnetic resonance (MR) images is an important problem; performed manually by radiologists or clinical experts, it is a time-consuming and labor-intensive operation whose accuracy depends entirely on their experience. Computer-assisted technologies that circumvent these limitations are therefore increasingly important. In this study, we investigated Berkeley wavelet transformation (BWT) based brain tumour segmentation to improve performance and reduce the complexity of the medical image segmentation process. Furthermore, relevant features are extracted from each segmented tissue to improve the accuracy and quality rate of the support vector machine (SVM) based classifier. The experimental results of the proposed technique were examined and validated on magnetic resonance brain images in terms of accuracy, sensitivity, specificity, and the Dice similarity index coefficient. The proposed technique for discriminating normal and diseased tissues in brain MR images achieved 96.51% accuracy, 94.2% specificity, and 97.72% sensitivity. Testing yielded an average Dice similarity index coefficient of 0.82, indicating that the automatically extracted tumour region coincided with the tumour region manually delineated by radiologists. The simulation results demonstrate the relevance of the quality parameters and accuracy compared with state-of-the-art approaches. The main objective is to develop a smartphone app for identifying brain tumours.
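The feature-extraction stage of such a pipeline can be sketched as follows, with a one-level 2-D Haar transform standing in for the Berkeley wavelet transform (an assumption for illustration only); the per-subband statistics it produces are the kind of feature vector an SVM classifier would then consume.

```python
def haar_1d(row):
    """One-level 1-D Haar step: pairwise averages, then pairwise differences."""
    avg = [(row[i] + row[i + 1]) / 2 for i in range(0, len(row), 2)]
    diff = [(row[i] - row[i + 1]) / 2 for i in range(0, len(row), 2)]
    return avg + diff

def haar_2d(img):
    """One-level 2-D Haar transform (rows, then columns); even dims assumed."""
    rows = [haar_1d(r) for r in img]
    cols = [haar_1d(list(c)) for c in zip(*rows)]
    return [list(r) for r in zip(*cols)]

def subband_features(img):
    """Mean absolute energy of the LL, LH, HL, and HH quadrants."""
    t = haar_2d(img)
    h, w = len(t) // 2, len(t[0]) // 2
    def energy(r0, c0):
        vals = [abs(t[r][c]) for r in range(r0, r0 + h)
                             for c in range(c0, c0 + w)]
        return sum(vals) / len(vals)
    return {"LL": energy(0, 0), "LH": energy(0, w),
            "HL": energy(h, 0), "HH": energy(h, w)}
```

On a perfectly uniform region all detail subbands (LH, HL, HH) vanish and only the LL approximation carries energy, which is why these statistics separate smooth tissue from textured tumour regions.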


2017 ◽  
Vol 88 ◽  
pp. 110-125 ◽  
Author(s):  
Akash Gandhamal ◽  
Sanjay Talbar ◽  
Suhas Gajre ◽  
Ruslan Razak ◽  
Ahmad Fadzil M. Hani ◽  
...  
