An Annotation Sparsification Strategy for 3D Medical Image Segmentation via Representative Selection and Self-Training

2020 · Vol 34 (04) · pp. 6925-6932
Author(s): Hao Zheng, Yizhe Zhang, Lin Yang, Chaoli Wang, Danny Z. Chen

Image segmentation is critical to many medical applications. While deep learning (DL) methods continue to improve performance on many medical image segmentation tasks, data annotation is a major bottleneck for DL-based segmentation because (1) DL models tend to need a large amount of labeled data for training, and (2) labeling 3D medical images voxel by voxel is highly time-consuming and labor-intensive. Significantly reducing annotation effort while attaining good performance of DL segmentation models remains a major challenge. In our preliminary experiments, we observe that models trained on partially labeled datasets indeed show a large performance gap with respect to models trained on fully annotated datasets. In this paper, we propose a new DL framework for reducing annotation effort and bridging the gap between full annotation and sparse annotation in 3D medical image segmentation. We achieve this by (i) selecting representative slices in 3D images that minimize data redundancy and save annotation effort, and (ii) self-training with pseudo-labels automatically generated from base models trained on the selected annotated slices. Extensive experiments on two public datasets (the HVSMR 2016 Challenge dataset and the mouse piriform cortex dataset) show that our framework yields segmentation results competitive with state-of-the-art DL methods while using less than ∼20% of the annotated data.
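
Concretely, the self-training step can be sketched as a two-round procedure: a base model is first trained on the selected annotated slices, and its thresholded predictions then serve as pseudo-labels for the remaining slices before retraining on both. This is a minimal sketch under stated assumptions (a slice-wise 2D segmenter, binary labels, a fixed 0.5 threshold); the helper names are illustrative and not the authors' implementation.

```python
import torch

def train_one_epoch(model, loader, optimizer, loss_fn):
    # One pass over a loader of (image, label) slice pairs.
    model.train()
    for images, labels in loader:
        optimizer.zero_grad()
        loss_fn(model(images), labels).backward()
        optimizer.step()

@torch.no_grad()
def pseudo_label(model, images, threshold=0.5):
    # Hard pseudo-labels from the base model's sigmoid probabilities.
    model.eval()
    return (torch.sigmoid(model(images)) > threshold).float()
```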

2020 · Vol 10 (18) · pp. 6439
Author(s): Chen Li, Wei Chen, Yusong Tan

Organ lesions have a high mortality rate and pose a serious threat to people's lives. Accurate organ segmentation helps doctors make diagnoses, so there is a demand for advanced segmentation models for medical images. However, most segmentation models are migrated directly from natural image segmentation models and usually ignore the importance of the boundary. To address this difficulty, in this paper we provide a unique, rendering-based perspective to explore accurate medical image segmentation. We adapt a subdivision-based point-sampling method to obtain high-quality boundaries. In addition, we integrate the attention mechanism and a nested U-Net architecture into the proposed network, Render U-Net. Render U-Net was evaluated on three public datasets, including LiTS, CHAOS, and DSB, and obtained the best performance on five medical image segmentation tasks.
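
A rough illustration of the boundary-oriented point sampling, assuming a single-channel logit map: the most uncertain locations (foreground probability closest to 0.5, typically along boundaries) are selected for refinement. The function below is a sketch in the spirit of rendering-based refinement, not the Render U-Net code.

```python
import torch

def sample_uncertain_points(mask_logits, num_points):
    """Return (x, y) pixel coordinates whose foreground probability is closest to 0.5."""
    b, _, h, w = mask_logits.shape                        # assumes (B, 1, H, W) logits
    probs = torch.sigmoid(mask_logits).view(b, -1)        # (B, H*W)
    uncertainty = -(probs - 0.5).abs()                    # higher = more uncertain
    idx = uncertainty.topk(num_points, dim=1).indices     # (B, num_points)
    ys, xs = idx // w, idx % w
    return torch.stack([xs, ys], dim=-1)                  # (B, num_points, 2)
```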


2021 · Vol 1 (1) · pp. 50-52
Author(s): Bo Dong, Wenhai Wang, Jinpeng Li

We present our solutions to the MedAI challenge for all three tasks: the polyp segmentation task, the instrument segmentation task, and the transparency task. We use the same framework to process the two segmentation tasks of polyps and instruments. The key improvement over last year is new state-of-the-art vision architectures, especially transformers, which significantly outperform ConvNets on medical image segmentation tasks. Our solution consists of multiple segmentation models, each using a transformer as the backbone network. After submission, we obtained the best IoU score of 0.915 on the instrument segmentation task and 0.836 on the polyp segmentation task. The complete solutions are provided at https://github.com/dongbo811/MedAI-2021.
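
As a rough illustration of how several transformer-backboned segmenters might be combined, the sketch below simply averages their sigmoid outputs and thresholds the result; the actual MedAI-2021 solution may combine models differently.

```python
import torch

@torch.no_grad()
def ensemble_predict(models, image, threshold=0.5):
    # Average the sigmoid outputs of several segmentation models, then binarize.
    probs = torch.stack([torch.sigmoid(m(image)) for m in models]).mean(dim=0)
    return (probs > threshold).float()
```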


2019 · Vol 31 (6) · pp. 1007
Author(s): Haiou Wang, Hui Liu, Qiang Guo, Kai Deng, Caiming Zhang

Electronics · 2021 · Vol 10 (3) · pp. 348
Author(s): Choongsang Cho, Young Han Lee, Jongyoul Park, Sangkeun Lee

Semantic image segmentation has a wide range of applications. In medical image segmentation, accuracy is even more important than in other areas because the results provide information directly applicable to disease diagnosis, surgical planning, and history monitoring. The state-of-the-art models in medical image segmentation are variants of the encoder-decoder architecture called U-Net. To effectively reflect the spatial features of the feature maps in an encoder-decoder architecture, we propose a spatially adaptive weighting scheme for medical image segmentation. Specifically, a spatial feature is estimated from the feature maps, and the learned weighting parameters are obtained from the computed map, since segmentation results are predicted from the feature map through a convolutional layer. In the proposed networks, the convolutional block for extracting the feature map is replaced with widely used convolutional frameworks: VGG, ResNet, and Bottleneck ResNet structures. In addition, a bilinear up-sampling method replaces the up-convolutional layer to increase the resolution of the feature map. For the performance evaluation of the proposed architecture, we used three data sets covering different medical imaging modalities. Experimental results show that the network with the proposed self-spatially adaptive weighting block based on the ResNet framework gave the highest IoU and DICE scores on the three tasks compared to other methods. In particular, the segmentation network combining the proposed self-spatially adaptive block and the ResNet framework recorded the largest improvements of 3.01% and 2.89% in IoU and DICE scores, respectively, on the Nerve data set. Therefore, we believe that the proposed scheme can be a useful tool for image segmentation tasks based on the encoder-decoder architecture.
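
A minimal sketch of such a spatially adaptive weighting block, assuming a 1x1 convolution that estimates a per-pixel weight map from the feature map and rescales the features before decoding; the layer choices are illustrative, not the paper's exact design.

```python
import torch.nn as nn

class SpatialWeightingBlock(nn.Module):
    """Estimate a per-pixel weight map from the feature map and rescale the features."""
    def __init__(self, channels):
        super().__init__()
        self.weight_head = nn.Sequential(
            nn.Conv2d(channels, 1, kernel_size=1),
            nn.Sigmoid(),                  # weight in (0, 1) at every spatial location
        )

    def forward(self, feat):
        w = self.weight_head(feat)         # (B, 1, H, W) spatial weight map
        return feat * w                    # spatially reweighted features

# Bilinear up-sampling in place of an up-convolutional layer, as described above:
upsample = nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False)
```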


2021
Author(s): Dachuan Shi, Ruiyang Liu, Linmi Tao, Zuoxiang He, Li Huo

2021 · pp. 1-19
Author(s): Maria Tamoor, Irfan Younas

Medical image segmentation is a key step in assisting the diagnosis of several diseases, and the accuracy of a segmentation method is important for the further treatment of different diseases. Different medical imaging modalities present different challenges, such as intensity inhomogeneity, noise, low contrast, and ill-defined boundaries, which make automated segmentation a difficult task. To handle these issues, we propose a new fully automated method for medical image segmentation, which utilizes the advantages of thresholding and an active contour model. In this study, a Harris Hawks optimizer is applied to determine the optimal thresholding value, which is used to obtain the initial contour for segmentation. The obtained contour is further refined by using a spatially varying Gaussian kernel in the active contour model. The proposed method is then validated using a standard skin dataset (ISBI 2016), which consists of variable-sized lesions and different challenging artifacts, and a standard cardiac magnetic resonance dataset (ACDC, MICCAI 2017) with a wide spectrum of normal hearts, congenital heart diseases, and cardiac dysfunction. Experimental results show that the proposed method can effectively segment the region of interest and produces superior segmentation results for the skin dataset (overall Dice score 0.90) and the cardiac dataset (overall Dice score 0.93), as compared to other state-of-the-art algorithms.
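
The thresholding step can be sketched as a population-based search over candidate thresholds scored by Otsu's between-class variance; the update rule below is a crude stand-in for the Harris Hawks optimizer rather than a faithful reproduction, and the fitness function is an assumption. The resulting threshold would seed the initial contour for the active contour model.

```python
import numpy as np

def between_class_variance(image, t):
    # Otsu-style fitness: larger means better foreground/background separation.
    fg, bg = image[image > t], image[image <= t]
    if fg.size == 0 or bg.size == 0:
        return 0.0
    w_fg, w_bg = fg.size / image.size, bg.size / image.size
    return w_fg * w_bg * (fg.mean() - bg.mean()) ** 2

def search_threshold(image, pop_size=20, iters=30, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = float(image.min()), float(image.max())
    pop = rng.uniform(lo, hi, pop_size)        # candidate thresholds
    for _ in range(iters):
        best = max(pop, key=lambda t: between_class_variance(image, t))
        # Perturb candidates around the current best: a crude exploration/
        # exploitation step standing in for the Harris Hawks update rules.
        pop = np.clip(best + rng.normal(0.0, 0.05 * (hi - lo), pop_size), lo, hi)
    return max(pop, key=lambda t: between_class_variance(image, t))
```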


Author(s): Zhenzhen Yang, Pengfei Xu, Yongpeng Yang, Bing-Kun Bao

The U-Net has become the most popular structure in medical image segmentation in recent years. Although its performance for medical image segmentation is outstanding, many experiments demonstrate that the classical U-Net architecture seems insufficient when the size of segmentation targets varies and an imbalance arises between target and background in different forms of segmentation. To improve the U-Net architecture, we develop a new architecture named the densely connected U-Net (DenseUNet) network in this article. The proposed DenseUNet network adopts a dense block to improve feature extraction capability and employs a multi-feature fuse block that fuses feature maps of different levels to increase the accuracy of feature extraction. In addition, in view of the advantages of the cross-entropy and Dice loss functions, a new loss function for the DenseUNet network is proposed to deal with the imbalance between target and background. Finally, we test the proposed DenseUNet network and compare it with the multi-resolution U-Net (MultiResUNet) and the classic U-Net networks on three different datasets. The experimental results show that the DenseUNet network performs significantly better than the MultiResUNet and classic U-Net networks.
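
A combined cross-entropy plus Dice loss of the kind described can be sketched as follows for binary segmentation; the weighting factor alpha and the smoothing term are illustrative choices, not the paper's exact formulation.

```python
import torch
import torch.nn as nn

class BCEDiceLoss(nn.Module):
    def __init__(self, alpha=0.5, smooth=1.0):
        super().__init__()
        self.alpha, self.smooth = alpha, smooth
        self.bce = nn.BCEWithLogitsLoss()

    def forward(self, logits, targets):
        probs = torch.sigmoid(logits)
        intersection = (probs * targets).sum()
        dice = (2.0 * intersection + self.smooth) / (probs.sum() + targets.sum() + self.smooth)
        # Weighted sum of cross-entropy and (1 - Dice): the Dice term counteracts
        # the target/background imbalance that plain cross-entropy handles poorly.
        return self.alpha * self.bce(logits, targets) + (1.0 - self.alpha) * (1.0 - dice)
```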

