More Birds in the Hand - Medical Image Segmentation using a Multi-Model Ensemble Framework

2021 ◽  
Vol 1 (1) ◽  
pp. 23-25
Author(s):  
Yung-Han Chen ◽  
Pei-Hsuan Kuo ◽  
Yi-Zeng Fang ◽  
Wei-Lin Wang

In this paper, we introduce a multi-model ensemble framework for medical image segmentation. We first collect a set of state-of-the-art models in this field and further improve them through a series of architecture refinements and a set of targeted training techniques. We then integrate these fine-tuned models into a more powerful ensemble framework. Preliminary experimental results show that the proposed multi-model ensemble framework performs well on the given polyp and instrument datasets.
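As a conceptual sketch of the fusion step only (not the authors' implementation; the function name, the weighting scheme, and the 0.5 threshold are assumptions), an ensemble of segmentation models can be combined by averaging their per-pixel foreground probabilities:

```python
import numpy as np

def ensemble_segment(prob_maps, weights=None):
    """Fuse per-pixel foreground probabilities from several models.

    prob_maps: list of H x W arrays in [0, 1], one per fine-tuned model.
    weights:   optional per-model weights (e.g. validation scores).
    Returns a binary mask from the (weighted) average probability.
    """
    stack = np.stack(prob_maps, axis=0)           # (M, H, W)
    if weights is None:
        weights = np.ones(len(prob_maps))
    weights = np.asarray(weights, dtype=float)
    weights /= weights.sum()                      # normalize to sum to 1
    fused = np.tensordot(weights, stack, axes=1)  # weighted mean, (H, W)
    return (fused >= 0.5).astype(np.uint8)

# Two models disagree on the top-right pixel; averaging resolves it.
a = np.array([[0.9, 0.2], [0.8, 0.1]])
b = np.array([[0.7, 0.6], [0.9, 0.3]])
mask = ensemble_segment([a, b])  # -> [[1, 0], [1, 0]]
```

Weighting by each model's validation score, rather than a plain mean, is one common way to let stronger ensemble members dominate.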

Author(s):  
Cheng Chen ◽  
Qi Dou ◽  
Hao Chen ◽  
Jing Qin ◽  
Pheng-Ann Heng

This paper presents a novel unsupervised domain adaptation framework, called Synergistic Image and Feature Adaptation (SIFA), to effectively tackle the problem of domain shift. Domain adaptation has become an important and active topic in recent deep learning research, aiming to recover the performance degradation that occurs when neural networks are applied to new test domains. Our proposed SIFA is an elegant learning paradigm that presents a synergistic fusion of adaptations from both the image and feature perspectives. In particular, we simultaneously transform the appearance of images across domains and enhance the domain invariance of the extracted features for the segmentation task. The feature encoder layers are shared by both perspectives to exploit their mutual benefits during the end-to-end learning procedure. Without using any annotation from the target domain, the learning of our unified model is guided by adversarial losses, with multiple discriminators employed from various aspects. We have extensively validated our method on a challenging application: cross-modality medical image segmentation of cardiac structures. Experimental results demonstrate that our SIFA model recovers the degraded performance from 17.2% to 73.0% and outperforms state-of-the-art methods by a significant margin.
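To illustrate the adversarial-loss idea in the abstract (a generic least-squares formulation often used in such frameworks; the function names and label convention here are assumptions, not SIFA's exact objectives), a discriminator is trained to separate source from target, while the shared encoder is trained to fool it:

```python
import numpy as np

def lsgan_discriminator_loss(d_src, d_tgt):
    """Least-squares discriminator loss: push source outputs toward 1
    and target outputs toward 0."""
    return np.mean((d_src - 1.0) ** 2) + np.mean(d_tgt ** 2)

def lsgan_generator_loss(d_tgt):
    """Adversarial loss for the shared encoder: make target features
    indistinguishable from source (discriminator output -> 1)."""
    return np.mean((d_tgt - 1.0) ** 2)

# A perfectly confused discriminator outputs 0.5 everywhere,
# giving a discriminator loss of 0.25 + 0.25 = 0.5.
d_src = np.full(4, 0.5)
d_tgt = np.full(4, 0.5)
d_loss = lsgan_discriminator_loss(d_src, d_tgt)
g_loss = lsgan_generator_loss(d_tgt)
```

In training, these two losses are minimized in alternation; without target annotations, the generator term is what drives the domain-invariance of the features.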


2021 ◽  
Vol 7 (2) ◽  
pp. 35
Author(s):  
Boris Shirokikh ◽  
Alexey Shevtsov ◽  
Alexandra Dalechina ◽  
Egor Krivov ◽  
Valery Kostjuchenko ◽  
...  

The prevailing approach for three-dimensional (3D) medical image segmentation is to use convolutional networks. Recently, deep learning methods have achieved human-level performance in several important applied problems, such as volumetry for lung-cancer diagnosis or delineation for radiation therapy planning. However, state-of-the-art architectures, such as U-Net and DeepMedic, are computationally heavy and require workstations accelerated with graphics processing units for fast inference. In contrast, scarce research has been conducted on enabling fast central processing unit computations for such networks. Our paper fills this gap. We propose a new segmentation method that processes a 3D study with a human-like technique: we first analyze the image at a small scale to identify areas of interest and then process only the relevant feature-map patches. Our method not only reduces the inference time from 10 min to 15 s but also preserves state-of-the-art segmentation quality, as we illustrate in a set of experiments with two large datasets.
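The coarse-to-fine idea can be sketched as follows (a minimal illustration, not the authors' implementation; the strided downscaling, the threshold, and the patch size are assumptions): a cheap pass on a downscaled volume selects candidate regions, and the expensive model runs only on those patches.

```python
import numpy as np

def coarse_to_fine(volume, coarse_model, fine_model, scale=4, thr=0.5):
    """Run a cheap model on a downscaled volume, then refine only the
    coarse-positive patches with the expensive model; everything else
    stays background (0), saving CPU time on empty regions."""
    coarse = coarse_model(volume[::scale, ::scale, ::scale])  # low-res scores
    out = np.zeros_like(volume, dtype=np.uint8)
    for idx in np.argwhere(coarse > thr):                     # areas of interest
        z, y, x = idx * scale
        patch = volume[z:z + scale, y:y + scale, x:x + scale]
        out[z:z + scale, y:y + scale, x:x + scale] = fine_model(patch)
    return out

# Toy stand-ins: both "models" just threshold intensity.
vol = np.zeros((8, 8, 8))
vol[:4, :4, :4] = 1.0
seg = coarse_to_fine(vol, lambda v: v, lambda p: (p > 0.5).astype(np.uint8))
```

The speedup comes from the fact that in typical studies most of the volume is background, so the fine model touches only a small fraction of the patches.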


2021 ◽  
Vol 1 (1) ◽  
pp. 50-52
Author(s):  
Bo Dong ◽  
Wenhai Wang ◽  
Jinpeng Li

We present our solutions to all three MedAI tasks: the polyp segmentation task, the instrument segmentation task, and the transparency task. We use the same framework to process the two segmentation tasks of polyps and instruments. The key improvement over last year is the use of new state-of-the-art vision architectures, especially transformers, which significantly outperform ConvNets on medical image segmentation tasks. Our solution consists of multiple segmentation models, and each model uses a transformer as the backbone network. After submission, we obtain the best IoU score of 0.915 on the instrument segmentation task and 0.836 on the polyp segmentation task. We also provide our complete solutions at https://github.com/dongbo811/MedAI-2021.
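For reference, the IoU metric reported above is computed per binary mask as intersection over union (a standard definition, sketched here with an assumed epsilon to guard against empty masks; this is not the challenge's official scoring code):

```python
import numpy as np

def iou(pred, target, eps=1e-7):
    """Intersection over Union for binary segmentation masks."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return inter / (union + eps)

# 1 overlapping pixel out of 2 in the union -> IoU = 0.5.
p = np.array([[1, 1], [0, 0]])
t = np.array([[1, 0], [0, 0]])
score = iou(p, t)
```

A score of 0.915 therefore means that, averaged over the test set, predicted and reference instrument masks overlap on over 90% of their union.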


Author(s):  
Juanjuan He ◽  
Song Xiang ◽  
Ziqi Zhu

In the standard U-net, researchers use only long skip connections to pass features from the encoding path to the decoding path in order to recover the spatial information lost during downsampling. However, this can result in vanishing gradients and limits the depth of the network. To address this issue, we propose a novel deep fully residual convolutional neural network that combines the U-net with the ResNet for medical image segmentation. By applying short skip connections, this new extension of U-net decreases the number of parameters compared to the standard U-net, even though the depth of the network is increased. We evaluate the performance of the proposed model and other state-of-the-art models on the Electron Microscopy (EM) images dataset and the Computed Tomography (CT) images dataset. The results show that our model achieves competitive accuracy on the EM benchmark without any further post-processing. Moreover, segmentation performance on CT images of the lungs is improved compared to the standard U-net.
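The distinction between the two kinds of skip connections can be sketched in a few lines (a conceptual illustration with toy tensors, not the paper's network; the channels-first layout and function names are assumptions):

```python
import numpy as np

def short_skip(x, f):
    """Residual (short) skip: add the block's input to its output.
    Gradients flow through the identity path, easing the training
    of very deep networks."""
    return f(x) + x

def long_skip(encoder_feat, decoder_feat):
    """U-net (long) skip: concatenate encoder features into the decoder
    along the channel axis to restore spatial detail lost during
    downsampling."""
    return np.concatenate([encoder_feat, decoder_feat], axis=0)

x = np.ones((2, 4, 4))                # toy (C, H, W) feature map
y = short_skip(x, lambda t: 0.5 * t)  # output = f(x) + x, same shape
z = long_skip(x, x)                   # channel count doubles: (4, 4, 4)
```

Note the design trade-off the abstract describes: short skips add no parameters (plain addition), while long skips double the channel count of the following convolution, which is one reason the residual variant can be deeper yet smaller.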


2019 ◽  
Vol 31 (6) ◽  
pp. 1007 ◽  
Author(s):  
Haiou Wang ◽  
Hui Liu ◽  
Qiang Guo ◽  
Kai Deng ◽  
Caiming Zhang

Electronics ◽  
2021 ◽  
Vol 10 (3) ◽  
pp. 348
Author(s):  
Choongsang Cho ◽  
Young Han Lee ◽  
Jongyoul Park ◽  
Sangkeun Lee

Semantic image segmentation has a wide range of applications. In medical image segmentation, accuracy is even more critical than in other areas because the results directly inform disease diagnosis, surgical planning, and history monitoring. The state-of-the-art models in medical image segmentation are variants of the encoder-decoder architecture known as U-Net. To effectively reflect the spatial features of the feature maps in an encoder-decoder architecture, we propose a spatially adaptive weighting scheme for medical image segmentation. Specifically, a spatial feature is estimated from the feature maps, and the learned weighting parameters are obtained from the computed map, since segmentation results are predicted from the feature map through a convolutional layer. In the proposed networks, the convolutional block for extracting the feature map is replaced with widely used convolutional frameworks: VGG, ResNet, and Bottleneck ResNet structures. In addition, a bilinear up-sampling method replaces the up-convolutional layer to increase the resolution of the feature map. For the performance evaluation of the proposed architecture, we used three data sets covering different medical imaging modalities. Experimental results show that the network with the proposed spatially adaptive weighting block based on the ResNet framework gave the highest IoU and DICE scores on the three tasks compared to other methods. In particular, the segmentation network combining the proposed spatially adaptive block and the ResNet framework recorded the largest improvements, 3.01% in IoU and 2.89% in DICE score, on the Nerve data set. Therefore, we believe that the proposed scheme can be a useful tool for image segmentation tasks based on the encoder-decoder architecture.
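A minimal sketch of spatially adaptive weighting, under stated assumptions (the channel-mean summary, the scalar parameters w and b, and the sigmoid gating are hypothetical simplifications of the learned block, not the paper's exact design): a single spatial map is estimated from the feature tensor and used to reweight every channel before prediction.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def spatial_weighting(feat, w, b):
    """Estimate one spatial weight map from a (C, H, W) feature tensor
    (channel mean -> learned affine map -> sigmoid) and multiply every
    channel by it, emphasizing informative locations."""
    spatial = feat.mean(axis=0)        # (H, W) spatial summary
    weight = sigmoid(w * spatial + b)  # learned parameters w, b
    return feat * weight[None, :, :]   # broadcast over channels

feat = np.ones((3, 2, 2))
out = spatial_weighting(feat, w=0.0, b=0.0)  # sigmoid(0) = 0.5 everywhere
```

With trained parameters, locations whose features look informative receive weights near 1 and background locations are suppressed toward 0, which is the intuition behind the reported IoU and DICE gains.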

