A Self-Spatial Adaptive Weighting Based U-Net for Image Segmentation

Electronics ◽  
2021 ◽  
Vol 10 (3) ◽  
pp. 348
Author(s):  
Choongsang Cho ◽  
Young Han Lee ◽  
Jongyoul Park ◽  
Sangkeun Lee

Semantic image segmentation has a wide range of applications. In medical image segmentation, accuracy is even more important than in other areas because the results feed directly into disease diagnosis, surgical planning, and history monitoring. The state-of-the-art models in medical image segmentation are variants of the encoder-decoder architecture known as U-Net. To effectively reflect spatial features in the feature maps of an encoder-decoder architecture, we propose a spatially adaptive weighting scheme for medical image segmentation. Specifically, a spatial feature is estimated from the feature maps, and learned weighting parameters are obtained from the computed map, since segmentation results are predicted from the feature map through a convolutional layer. In the proposed networks, the convolutional block for extracting the feature map is replaced with widely used convolutional frameworks: VGG, ResNet, and Bottleneck ResNet structures. In addition, a bilinear up-sampling method replaces the up-convolutional layer to increase the resolution of the feature map. For performance evaluation of the proposed architecture, we used three data sets covering different medical imaging modalities. Experimental results show that the network with the proposed self-spatial adaptive weighting block based on the ResNet framework gave the highest IoU and DICE scores on the three tasks compared to other methods. In particular, the segmentation network combining the proposed self-spatially adaptive block and the ResNet framework recorded the largest improvements, 3.01% in IoU and 2.89% in DICE, on the Nerve data set. Therefore, we believe that the proposed scheme can be a useful tool for image segmentation tasks based on the encoder-decoder architecture.
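The spatially adaptive weighting idea can be illustrated with a minimal sketch. The abstract does not specify how the spatial map is estimated, so the channel-squeeze-plus-sigmoid below is an assumption: a per-pixel weight map is computed from the feature maps and multiplied back onto every channel.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def spatial_adaptive_weighting(features, w, b):
    """Hypothetical sketch: features has shape (C, H, W).
    A 1x1 'convolution' (here a weighted channel sum) squeezes the
    channels into one spatial map, which then re-weights every channel."""
    # squeeze channels -> (H, W) spatial map
    spatial = np.tensordot(w, features, axes=([0], [0])) + b
    weights = sigmoid(spatial)              # per-pixel weights in (0, 1)
    return features * weights[None, :, :]   # broadcast over channels

# toy usage with random features
feat = np.random.randn(8, 4, 4)
out = spatial_adaptive_weighting(feat, w=np.ones(8) / 8, b=0.0)
```

Because the sigmoid keeps each weight strictly between 0 and 1, the block attenuates, never amplifies, the feature responses in this sketch.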

Author(s):  
Hongfeng You ◽  
Long Yu ◽  
Shengwei Tian ◽  
Weiwei Cai

Abstract To obtain more semantic information from small samples for medical image segmentation, this paper proposes a simple and efficient dual-rotation network (DR-Net) that strengthens the quality of both local and global feature maps. The key steps of the DR-Net algorithm are as follows (as shown in Fig. 1). First, the number of channels in each layer is divided into four equal portions. Then, different rotation strategies are used to obtain a rotated feature map in multiple directions for each subimage. Next, multiscale convolution and dilated convolution are used to learn the local and global features of the feature maps. Finally, a residual strategy and an integration strategy are used to fuse the generated feature maps. Experimental results demonstrate that the DR-Net method obtains higher segmentation accuracy on both the CHAOS and BraTS data sets compared to state-of-the-art methods.
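The split-and-rotate step described above can be sketched as follows. The abstract only says the channels are divided into four equal portions and rotated in multiple directions; the choice of 0°/90°/180°/270° quarter-turns below is an assumption.

```python
import numpy as np

def dual_rotation_split(features):
    """Sketch of DR-Net's first steps: split the channel dimension into
    four equal portions and rotate each portion's spatial plane by
    0, 90, 180, and 270 degrees (assumed rotation strategy)."""
    c, h, w = features.shape
    assert c % 4 == 0 and h == w, "needs 4-divisible channels, square maps"
    groups = np.split(features, 4, axis=0)
    # np.rot90 rotates k quarter-turns in the (H, W) plane
    return [np.rot90(g, k=k, axes=(1, 2)) for k, g in enumerate(groups)]

feat = np.arange(4 * 2 * 2).reshape(4, 2, 2).astype(float)
rotated = dual_rotation_split(feat)   # four differently rotated portions
```

Each rotated portion would then be passed through its own convolution branch before the fusion step.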


Author(s):  
Zhenzhen Yang ◽  
Pengfei Xu ◽  
Yongpeng Yang ◽  
Bing-Kun Bao

The U-Net has become the most popular structure in medical image segmentation in recent years. Although its performance is outstanding, a large number of experiments demonstrate that the classical U-Net architecture seems insufficient when the size of the segmentation targets changes and when imbalance arises between target and background in different forms of segmentation. To improve the U-Net architecture, we develop a new architecture named the densely connected U-Net (DenseUNet) in this article. The proposed DenseUNet adopts a dense block to improve feature extraction capability and employs a multi-feature fuse block, fusing feature maps of different levels, to increase the accuracy of feature extraction. In addition, in view of the advantages of the cross-entropy and dice loss functions, a new loss function for the DenseUNet network is proposed to deal with the imbalance between target and background. Finally, we test the proposed DenseUNet network and compare it with the multi-resolutional U-Net (MultiResUNet) and the classic U-Net on three different datasets. The experimental results show that the DenseUNet network performs significantly better than the MultiResUNet and the classic U-Net.
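A loss combining cross entropy and dice is a common way to handle target/background imbalance; the abstract does not give the DenseUNet paper's exact formulation, so the convex mix below is a generic sketch, not the authors' loss.

```python
import numpy as np

def dice_loss(pred, target, eps=1e-6):
    """Soft Dice loss on probabilities in [0, 1]: insensitive to the
    amount of background, so it counters class imbalance."""
    inter = np.sum(pred * target)
    return 1.0 - (2.0 * inter + eps) / (np.sum(pred) + np.sum(target) + eps)

def bce_loss(pred, target, eps=1e-7):
    """Binary cross entropy, clipped for numerical stability."""
    p = np.clip(pred, eps, 1.0 - eps)
    return float(np.mean(-(target * np.log(p) + (1 - target) * np.log(1 - p))))

def combined_loss(pred, target, alpha=0.5):
    """Assumed combination: a convex mix of BCE and Dice."""
    return alpha * bce_loss(pred, target) + (1 - alpha) * dice_loss(pred, target)

target = np.array([[1.0, 0.0], [1.0, 0.0]])
good = combined_loss(np.array([[0.9, 0.1], [0.9, 0.1]]), target)
bad = combined_loss(np.array([[0.1, 0.9], [0.1, 0.9]]), target)
```

A prediction close to the target mask yields a much lower combined loss than an inverted one, which is what drives training.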


2021 ◽  
Vol 21 (1) ◽  
Author(s):  
Dominik Müller ◽  
Frank Kramer

Abstract Background The increased availability and usage of modern medical imaging has induced a strong need for automatic medical image segmentation. Still, current image segmentation platforms do not provide the required functionality for straightforward setup of medical image segmentation pipelines. Already implemented pipelines are commonly standalone software optimized for a specific public data set. Therefore, this paper introduces the open-source Python library MIScnn. Implementation The aim of MIScnn is to provide an intuitive API allowing fast building of medical image segmentation pipelines, including data I/O, preprocessing, data augmentation, patch-wise analysis, metrics, a library of state-of-the-art deep learning models, and model utilization such as training, prediction, and fully automatic evaluation (e.g. cross-validation). Similarly, high configurability and multiple open interfaces allow full pipeline customization. Results Running a cross-validation with MIScnn on the Kidney Tumor Segmentation Challenge 2019 data set (multi-class semantic segmentation with 300 CT scans) resulted in a powerful predictor based on the standard 3D U-Net model. Conclusions With this experiment, we could show that the MIScnn framework enables researchers to rapidly set up a complete medical image segmentation pipeline using just a few lines of code. The source code for MIScnn is available in the Git repository: https://github.com/frankkramer-lab/MIScnn.


2021 ◽  
Vol 13 (3) ◽  
pp. 1224
Author(s):  
Xiangbin Liu ◽  
Liping Song ◽  
Shuai Liu ◽  
Yudong Zhang

As an emerging biomedical image processing technology, medical image segmentation has made great contributions to sustainable medical care and has become an important research direction in the field of computer vision. With the rapid development of deep learning, medical image processing based on deep convolutional neural networks has become a research hotspot. This paper focuses on medical image segmentation based on deep learning. First, the basic ideas and characteristics of deep-learning-based medical image segmentation are introduced. By explaining its research status and summarizing the three main methods of medical image segmentation and their respective limitations, future development directions are outlined. Based on a discussion of different pathological tissues and organs, their specific characteristics and classic segmentation algorithms are summarized. Despite the great achievements of recent years, medical image segmentation based on deep learning still faces difficulties: segmentation accuracy is not high, the number of medical images in data sets is small, and resolution is low, so inaccurate segmentation results cannot meet actual clinical requirements. Aiming at these problems, a comprehensive review of current deep-learning-based medical image segmentation methods is provided to help researchers solve existing problems.


Author(s):  
Afifa Khaled ◽  
Jian-Jun Han

Image segmentation is a challenging problem in medical applications. The use of medical imaging has become an integral part of research, as it allows us to see inside the human body without surgical intervention. Many researchers have studied brain segmentation, typically using a one-stage method to segment the brain tissues. In this paper, we propose a multi-stage generative adversarial network (GAN) to solve the problem of information loss in the one-stage approach, using a coarse-to-fine strategy to improve brain segmentation. In the first stage, our model generates a coarse outline for (i) background and (ii) brain tissues. Then, in the second stage, the model generates an outline for (i) white matter (WM), (ii) gray matter (GM), and (iii) cerebrospinal fluid (CSF). A good result is achieved by fusing the coarse outline and the refined outline. We conclude that our model is more efficient and accurate in practice for both infant and adult brain segmentation. Moreover, we observe that the multi-stage model is faster than prior models. More specifically, a main goal of the multi-stage model is to evaluate performance in a few-shot learning setting where only a few labeled data are available. For medical images, the proposed model can work in a wide range of segmentation tasks where convolutional neural networks and one-stage methods have failed.
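The fusion of the two stages can be sketched as follows. The abstract does not state the fusion rule, so the sketch below assumes the simplest one: the second-stage tissue labels (WM/GM/CSF) are trusted only where the first stage marked brain, and everything else is forced to background.

```python
import numpy as np

BACKGROUND, WM, GM, CSF = 0, 1, 2, 3

def fuse_coarse_fine(coarse_brain, fine_labels):
    """Assumed fusion rule: keep stage-2 tissue labels inside the
    stage-1 brain mask, background elsewhere."""
    return np.where(coarse_brain.astype(bool), fine_labels, BACKGROUND)

coarse = np.array([[1, 1, 0],
                   [1, 1, 0]])      # stage 1: 1 = brain, 0 = background
fine = np.array([[WM, GM, GM],
                 [CSF, WM, WM]])    # stage 2: tissue labels everywhere
fused = fuse_coarse_fine(coarse, fine)
```

The coarse mask thus suppresses any tissue labels the refinement stage hallucinated outside the brain.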



2021 ◽  
Vol 12 (1) ◽  
Author(s):  
Shanshan Wang ◽  
Cheng Li ◽  
Rongpin Wang ◽  
Zaiyi Liu ◽  
Meiyun Wang ◽  
...  

Abstract Automatic medical image segmentation plays a critical role in scientific research and medical care. Existing high-performance deep learning methods typically rely on large training datasets with high-quality manual annotations, which are difficult to obtain in many clinical applications. Here, we introduce Annotation-effIcient Deep lEarning (AIDE), an open-source framework to handle imperfect training datasets. Methodological analyses and empirical evaluations are conducted, and we demonstrate that AIDE surpasses conventional fully-supervised models by presenting better performance on open datasets with scarce or noisy annotations. We further test AIDE in a real-life case study for breast tumor segmentation. Three datasets containing 11,852 breast images from three medical centers are employed, and AIDE, utilizing 10% of the training annotations, consistently produces segmentation maps comparable to those generated by fully-supervised counterparts or provided by independent radiologists. The 10-fold enhanced efficiency in utilizing expert labels has the potential to promote a wide range of biomedical applications.


2022 ◽  
Author(s):  
Erqiang Deng ◽  
Zhiguang Qin ◽  
Dajiang Chen ◽  
Zhen Qin ◽  
Yi Ding ◽  
...  

Abstract Deep learning has been widely used in medical image segmentation, although accuracy is affected by the problems of small sample spaces, data imbalance, and cross-device differences. Aiming at such issues, an enhancement GAN network is proposed that uses the domain transfer of a generative adversarial network to enhance the original medical images. Specifically, while retaining the transferability of the original GAN network, a new optimizer is added to generate a sample space with a continuous distribution, which can be used as the target domain of the original image transfer. The optimizer back-propagates the labels of the supervised data set through the segmentation network and maps the discrete distribution of the labels to a continuous image distribution, which has high similarity to the original images but improves segmentation efficiency. On this basis, the optimized distribution is taken as the target domain, and the generator and discriminator of the GAN network are trained so that the generator can transfer the original image distribution to the target distribution. Extensive experiments are conducted on MRI, CT, and ultrasound data sets. The experimental results show that the proposed method generalizes well in medical image segmentation, even when the data set has a limited sample space and a certain degree of data imbalance.


2020 ◽  
pp. 74-81
Author(s):  
Anandakumar Haldorai ◽  
Shrinand Anandakumar

Medical image segmentation is considered the most critical element in the analysis and processing of real-life images in the clinical sector. Segmentation results affect the subsequent procedures of image evaluation, object illustration and description, feature measurement, and higher-level tasks such as object categorization. Medical image segmentation is thus essential for aiding the visualization, delineation, and depiction of regions of interest in any particular image. Manual segmentation of an image is not only a challenging and time-consuming task, but its accuracy is also hard to assure given the wide range of image modalities and the quantity of unannotated images that must be examined. It therefore becomes fundamental to evaluate present approaches to image segmentation based on computing algorithms that require user interaction for evaluating medical images. In the process of image segmentation, an anatomic classification needs to be extracted or defined so that it can be projected effectively and independently. This contribution therefore focuses on image segmentation to extract details for decision-making in the clinical sector. The paper presents general and comparative techniques categorized into three groups: pixel-centred, edge-centred, and region-centred techniques. The paper also highlights the strengths and weaknesses of these techniques with reference to their appropriateness for a wide range of applications in medical image segmentation.
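Of the three families the survey names, the region-centred one is the easiest to illustrate. Below is a minimal region-growing sketch (a standard region-centred algorithm, not one taken from the paper): starting from a seed pixel, 4-connected neighbours are added while their intensity stays within a tolerance of the seed intensity.

```python
from collections import deque
import numpy as np

def region_grow(image, seed, tol):
    """Minimal region-centred segmentation: grow a region from `seed`,
    adding 4-connected neighbours whose intensity is within `tol`
    of the seed intensity."""
    h, w = image.shape
    seed_val = float(image[seed])
    mask = np.zeros((h, w), dtype=bool)
    mask[seed] = True
    queue = deque([seed])
    while queue:
        y, x = queue.popleft()
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if (0 <= ny < h and 0 <= nx < w and not mask[ny, nx]
                    and abs(float(image[ny, nx]) - seed_val) <= tol):
                mask[ny, nx] = True
                queue.append((ny, nx))
    return mask

img = np.array([[10, 11, 50],
                [12, 10, 52],
                [49, 51, 50]])
mask = region_grow(img, seed=(0, 0), tol=5)
```

The grown mask covers only the low-intensity block connected to the seed; pixel-centred methods (e.g. thresholding) would instead classify every pixel independently, and edge-centred methods would trace the intensity discontinuity between the two blocks.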

