2019 Kidney Tumor Segmentation Challenge: Medical Image Segmentation with Two-Stage Process

2019 ◽  
Author(s):  
Tung-I Chen ◽  
Min-Sheng Wu ◽  
Yu-Cheng Chang ◽  
Jhih-Yuan Lin
2021 ◽  
Vol 2021 ◽  
pp. 1-10
Author(s):  
Zhuqing Yang

Medical image segmentation (IS) is a research field in image processing. Deep learning methods are used to automatically segment organs, tissues, or tumor regions in medical images, which can assist doctors in diagnosing diseases. Since most IS models based on convolutional neural networks (CNNs) are two-dimensional, they are not well suited to three-dimensional medical imaging; conversely, three-dimensional segmentation models suffer from complex network structures and heavy computation. Therefore, this study introduces the self-excited compressed dilated convolution (SECDC) module into the 3D U-Net network and proposes an improved 3D U-Net model. In the SECDC module, 1 × 1 × 1 convolutions reduce the computational cost of the model, and combining normal convolution with dilated convolution at a dilation rate of 2 extracts multi-view features of the image. At the same time, the 3D squeeze-and-excitation (3D-SE) module automatically learns the importance of each channel. Experimental results on the BraTS2019 dataset show that the model reaches a Dice coefficient of 0.87 for the whole tumor, 0.84 for the tumor core, and 0.80 for the hardest-to-segment enhancing tumor. These indicators show that the improved 3D U-Net model greatly reduces the amount of computation while achieving better segmentation results, and that the model is more robust. This model can meet the clinical needs of brain tumor segmentation.
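
The channel counts and squeeze ratio below are made-up illustrations, not the paper's configuration; they give a back-of-the-envelope sketch of why the two design choices in the SECDC module are cheap: a 1 × 1 × 1 bottleneck shrinks channels before the expensive 3 × 3 × 3 convolution, and a dilation rate of 2 widens the receptive field at no parameter cost.

```python
def conv3d_params(c_in, c_out, k):
    """Weight count of a 3D convolution with a cubic k x k x k kernel (bias ignored)."""
    return c_in * c_out * k ** 3

def plain_block(c):
    """One 3x3x3 convolution at full channel width."""
    return conv3d_params(c, c, 3)

def bottleneck_block(c, r=4):
    """Squeeze channels to c//r with a 1x1x1 conv, convolve, expand back."""
    squeezed = c // r
    return (conv3d_params(c, squeezed, 1)          # 1x1x1 squeeze
            + conv3d_params(squeezed, squeezed, 3) # cheap 3x3x3 at reduced width
            + conv3d_params(squeezed, c, 1))       # 1x1x1 expand

def receptive_field(k, dilation):
    """Effective kernel extent of a dilated convolution along one axis."""
    return dilation * (k - 1) + 1

print(plain_block(64))       # 110592 weights
print(bottleneck_block(64))  # 8960 weights, over 12x fewer
print(receptive_field(3, 1), receptive_field(3, 2))  # 3 vs 5 voxels
```

The same effect motivates bottleneck blocks in many 3D networks: the 3 × 3 × 3 kernel dominates the cost, so shrinking its channel width dominates the savings.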


Author(s):  
Shaohua Li ◽  
Xiuchao Sui ◽  
Xiangde Luo ◽  
Xinxing Xu ◽  
Yong Liu ◽  
...  

Medical image segmentation is important for computer-aided diagnosis. Good segmentation demands that the model see the big picture and the fine details simultaneously, i.e., learn image features that incorporate large context while keeping high spatial resolution. To approach this goal, the most widely used methods, U-Net and its variants, extract and fuse multi-scale features. However, the fused features still have small "effective receptive fields" with a focus on local image cues, limiting their performance. In this work, we propose Segtran, an alternative segmentation framework based on transformers, which have unlimited "effective receptive fields" even at high feature resolutions. The core of Segtran is a novel Squeeze-and-Expansion transformer: a squeezed attention block regularizes the self-attention of transformers, and an expansion block learns diversified representations. Additionally, we propose a new positional encoding scheme for transformers that imposes a continuity inductive bias for images. Experiments were performed on 2D and 3D medical image segmentation tasks: optic disc/cup segmentation in fundus images (REFUGE'20 challenge), polyp segmentation in colonoscopy images, and brain tumor segmentation in MRI scans (BraTS'19 challenge). Compared with representative existing methods, Segtran consistently achieved the highest segmentation accuracy and exhibited good cross-domain generalization capabilities.
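
The abstract does not spell out Segtran's encoding formula, so the function below is only a hedged illustration of the general idea: a continuous sinusoidal encoding of normalized pixel coordinates varies smoothly with position, so nearby pixels receive nearby codes, which is the kind of continuity inductive bias described.

```python
import math

def continuous_pos_encoding(x, y, dim=8):
    """Illustrative continuous positional code for a pixel at normalized
    coordinates (x, y) in [0, 1]. Nearby pixels get nearby codes because
    sin/cos are smooth in their arguments. Not Segtran's actual scheme."""
    code = []
    for i in range(dim // 4):
        freq = 2.0 ** i * math.pi
        code += [math.sin(freq * x), math.cos(freq * x),
                 math.sin(freq * y), math.cos(freq * y)]
    return code

a = continuous_pos_encoding(0.50, 0.50)
b = continuous_pos_encoding(0.51, 0.50)   # a neighboring pixel
c = continuous_pos_encoding(0.90, 0.10)   # a distant pixel
dist = lambda u, v: sum((ui - vi) ** 2 for ui, vi in zip(u, v)) ** 0.5
print(dist(a, b) < dist(a, c))            # True: neighbors are closer in code space
```

A learned, discrete positional embedding table carries no such bias; encoding position as a smooth function of coordinates is one simple way to inject it.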


2021 ◽  
Vol 21 (1) ◽  
Author(s):  
Dominik Müller ◽  
Frank Kramer

Background: The increased availability and use of modern medical imaging has created a strong need for automatic medical image segmentation. Still, current image segmentation platforms do not provide the functionality required for straightforward setup of medical image segmentation pipelines, and already-implemented pipelines are commonly standalone software optimized on a specific public data set. Therefore, this paper introduces the open-source Python library MIScnn. Implementation: The aim of MIScnn is to provide an intuitive API allowing fast building of medical image segmentation pipelines, including data I/O, preprocessing, data augmentation, patch-wise analysis, metrics, a library of state-of-the-art deep learning models, and model utilization such as training, prediction, and fully automatic evaluation (e.g. cross-validation). High configurability and multiple open interfaces likewise allow full pipeline customization. Results: Running a cross-validation with MIScnn on the Kidney Tumor Segmentation Challenge 2019 data set (multi-class semantic segmentation with 300 CT scans) resulted in a powerful predictor based on the standard 3D U-Net model. Conclusions: With this experiment, we could show that the MIScnn framework enables researchers to rapidly set up a complete medical image segmentation pipeline using just a few lines of code. The source code for MIScnn is available in the Git repository: https://github.com/frankkramer-lab/MIScnn.
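
MIScnn's own API is not reproduced here; the snippet below is a minimal stdlib sketch of the k-fold cross-validation the framework automates, with made-up sample IDs standing in for KiTS19 CT scans.

```python
def kfold_splits(sample_ids, k=3):
    """Partition sample IDs into k folds; each fold serves once as the
    validation set while the remaining folds form the training set."""
    folds = [sample_ids[i::k] for i in range(k)]
    for i, val in enumerate(folds):
        train = [s for j, fold in enumerate(folds) if j != i for s in fold]
        yield train, val

scans = [f"case_{n:05d}" for n in range(6)]   # hypothetical stand-ins for CT scans
for train, val in kfold_splits(scans, k=3):
    print(len(train), len(val))               # 4 2 on every fold
```

In a real pipeline each `train`/`val` split would drive one training run, and the per-fold metrics would be averaged into the final evaluation.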


2021 ◽  
Author(s):  
Abdulla Al Suman ◽  
Shubham Sarda ◽  
Md. Asikuzzaman ◽  
Alexandra Louise Webb ◽  
M. Perriman Diana ◽  
...  

2014 ◽  
Vol 6 (1) ◽  
pp. 7-13
Author(s):  
Khoirul Umam ◽  
Fidi Wincoko Putro ◽  
Gulpi Qorik Oktagalu Pratamasunu

Segmentation of medical images demands good quality because it affects the interpretation and diagnosis of medical experts. In medical image segmentation, a merging phase is used to increase the quality of the segmentation result. However, the stopping criterion of the merging phase has been determined manually by medical experts, which makes the segmentation result subjective. To increase the objectivity of the segmentation result, a method that automates the merging phase of medical image segmentation is required. Therefore, we propose a novel medical image segmentation method that combines a two-stage SOM and the T-cluster method. Experiments were performed on dental panoramic images as medical image samples and evaluated using a segmentation quality formula. They show that the proposed method can segment dental panoramic images automatically and objectively, with a best average segmentation quality value of 4.40. Index Terms—dental panoramic image, image segmentation, medical image, Self-Organizing Map, T-cluster
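
The paper's two-stage SOM is not reproduced here; as a hedged toy sketch of the core SOM mechanism it builds on, one training step pulls the best matching unit (BMU) and its neighbors toward the input. The 1-D map and learning rate below are illustrative assumptions.

```python
def som_step(weights, x, lr=0.5, radius=1):
    """One SOM update on a 1-D map; `weights` is a list of weight vectors."""
    # find the best matching unit: the unit whose weights are closest to x
    bmu = min(range(len(weights)),
              key=lambda i: sum((w - xi) ** 2 for w, xi in zip(weights[i], x)))
    # pull the BMU and units within `radius` of it toward the input
    for i in range(max(0, bmu - radius), min(len(weights), bmu + radius + 1)):
        weights[i] = [w + lr * (xi - w) for w, xi in zip(weights[i], x)]
    return bmu

units = [[0.0], [0.5], [1.0]]   # three map units over 1-D pixel intensities
bmu = som_step(units, [0.9])
print(bmu, units)               # BMU is unit 2; units 1 and 2 move toward 0.9
```

Iterating such steps over pixel features clusters the image into regions, which a subsequent merging phase (here, T-cluster) then consolidates.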


2021 ◽  
Vol 12 (1) ◽  
Author(s):  
Shanshan Wang ◽  
Cheng Li ◽  
Rongpin Wang ◽  
Zaiyi Liu ◽  
Meiyun Wang ◽  
...  

Automatic medical image segmentation plays a critical role in scientific research and medical care. Existing high-performance deep learning methods typically rely on large training datasets with high-quality manual annotations, which are difficult to obtain in many clinical applications. Here, we introduce Annotation-effIcient Deep lEarning (AIDE), an open-source framework to handle imperfect training datasets. Methodological analyses and empirical evaluations are conducted, and we demonstrate that AIDE surpasses conventional fully-supervised models by presenting better performance on open datasets possessing scarce or noisy annotations. We further test AIDE in a real-life case study for breast tumor segmentation. Three datasets containing 11,852 breast images from three medical centers are employed, and AIDE, utilizing 10% training annotations, consistently produces segmentation maps comparable to those generated by fully-supervised counterparts or provided by independent radiologists. The 10-fold enhanced efficiency in utilizing expert labels has the potential to promote a wide range of biomedical applications.
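
Comparability of segmentation maps in studies like this one is typically quantified with an overlap measure such as the Dice coefficient; the toy masks below are made up for illustration, not AIDE's data.

```python
def dice(pred, truth):
    """Dice similarity between two binary masks given as flat 0/1 lists:
    2 * |intersection| / (|pred| + |truth|), with 1.0 for two empty masks."""
    inter = sum(p & t for p, t in zip(pred, truth))
    total = sum(pred) + sum(truth)
    return 2 * inter / total if total else 1.0

truth = [0, 1, 1, 1, 0, 0]
pred  = [0, 1, 1, 0, 0, 0]   # misses one foreground pixel
print(dice(pred, truth))      # 2*2 / (2+3) = 0.8
```

A score of 1.0 means perfect overlap; reporting per-structure Dice against expert masks is the standard way to claim that a weakly-supervised model matches its fully-supervised counterpart.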


Author(s):  
Danbing Zou ◽  
Qikui Zhu ◽  
Pingkun Yan

Domain adaptation aims to alleviate the problem of retraining a pre-trained model when applying it to a different domain, which otherwise requires a large amount of additional training data from the target domain. This objective is usually achieved by establishing connections between source domain labels and target domain data. However, this imbalanced, one-way source-to-target pass may not eliminate the domain gap, which limits the performance of the pre-trained model. In this paper, we propose an innovative Dual-Scheme Fusion Network (DSFN) for unsupervised domain adaptation. By building both source-to-target and target-to-source connections, this balanced joint information flow helps reduce the domain gap and further improve network performance. The mechanism is also applied at the inference stage, where both the original target image and the generated source images are segmented with the proposed joint network, and the results are fused to obtain a more robust segmentation. Extensive experiments on unsupervised cross-modality medical image segmentation are conducted on two tasks, brain tumor segmentation and cardiac structure segmentation. The experimental results show that our method achieves significant performance improvement over other state-of-the-art domain adaptation methods.
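
The fusion step at inference can be sketched in a hedged way: segment both the original target image and its source-style translation, then combine the two per-pixel class-probability maps before taking the argmax. Averaging is an illustrative assumption here, and the numbers are toy values, not DSFN's outputs.

```python
def fuse(probs_target, probs_translated):
    """Average per-pixel class probabilities from the two inference paths
    and return the fused per-pixel label map."""
    fused = []
    for p, q in zip(probs_target, probs_translated):
        avg = [(pi + qi) / 2 for pi, qi in zip(p, q)]
        fused.append(max(range(len(avg)), key=avg.__getitem__))
    return fused

# two pixels, two classes (background, tumor); the paths disagree on pixel 0
target_path     = [[0.6, 0.4], [0.2, 0.8]]
translated_path = [[0.3, 0.7], [0.1, 0.9]]
print(fuse(target_path, translated_path))   # [1, 1]
```

The point of fusing is that errors specific to one path (e.g. artifacts introduced by the image translation) tend to be voted down by the other.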


2020 ◽  
Vol 110 ◽  
pp. 101980
Author(s):  
Mohamed T. Bennai ◽  
Zahia Guessoum ◽  
Smaine Mazouzi ◽  
Stéphane Cormier ◽  
Mohamed Mezghiche

2021 ◽  
Vol 11 (12) ◽  
pp. 3191-3198
Author(s):  
P. Ravikumaran ◽  
K. Vimala Devi ◽  
K. Valarmathi

Automatic medical image segmentation has become increasingly important as contemporary medical imaging has become more widely available and used. Existing image segmentation solutions, however, lack the functionality needed for simple medical image segmentation pipeline design, and pipelines that have already been deployed are frequently standalone software optimized for a certain public data collection. As a result, the open-source Python module deep convolutional neural network-restricted Boltzmann machine (deep CNN-RBM) was introduced in this research work. The goal of deep CNN-RBM is to offer an easy-to-use API that allows rapid creation of medical image segmentation pipelines including data augmentation, metrics, data I/O, pre-processing, patch-wise analysis, a library of pre-built deep neural networks, and fully automated assessment. Similarly, comprehensive pipeline customization is possible thanks to strong configurability and many open interfaces. Cross-validation with deep CNN-RBM on the Kidney Tumor Segmentation Challenge 2019 (KiTS19) dataset produced a strong predictor relative to the standard 3D U-Net model. To that end, deep CNN-RBM, an expressive deep learning medical image segmentation architecture, is introduced: the CNN sub-model captures frame-level spatial features automatically, while the RBM sub-model fuses spatial data over time to learn higher-level semantics for kidney tumor prediction. The network is initialized with RBM pre-training on the collected data and then fine-tuned with backpropagation to make it more discriminative. According to the simulation outcome, the proposed deep CNN-RBM produced good classification results on the kidney tumor segmentation dataset.
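
The paper's code is not available here; as a hedged toy sketch of the RBM half of such a model, the step below infers hidden-unit probabilities from a binary visible vector and samples them, which is the building block of RBM pre-training. The tiny weight matrix and biases are made-up values.

```python
import math, random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def hidden_probs(visible, weights, hidden_bias):
    """P(h_j = 1 | v) = sigmoid(b_j + sum_i v_i * W[i][j]) for each hidden unit."""
    return [sigmoid(hidden_bias[j] + sum(v * weights[i][j]
                                         for i, v in enumerate(visible)))
            for j in range(len(hidden_bias))]

def sample(probs, rng):
    """Draw a binary state for each unit from its activation probability."""
    return [1 if rng.random() < p else 0 for p in probs]

v = [1, 0, 1]                               # a tiny binary visible vector
W = [[0.5, -0.2], [0.1, 0.3], [-0.4, 0.6]]  # 3 visible x 2 hidden weights
b = [0.0, 0.0]
p = hidden_probs(v, W, b)
print(p, sample(p, random.Random(0)))
```

Stacking such layers and running alternating visible/hidden sampling (contrastive divergence) yields the generative pre-training that the backpropagation fine-tuning then sharpens.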


2019 ◽  
Author(s):  
Ali Hatamizadeh ◽  
Demetri Terzopoulos ◽  
Andriy Myronenko

Fully convolutional neural networks (CNNs) have proven effective at representing and classifying textural information, transforming image intensities into output class masks that achieve semantic image segmentation. In medical image analysis, however, expert manual segmentation often relies on the boundaries of the anatomical structures of interest. We propose boundary-aware CNNs for medical image segmentation. Our networks are designed to account for organ boundary information, both through a dedicated network edge branch and through edge-aware loss terms, and they are trainable end-to-end. We validate their effectiveness on the task of brain tumor segmentation using the BraTS 2018 dataset. Our experiments reveal that our approach yields more accurate segmentation results, which makes it promising for wider application to medical image segmentation.
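
The training signal described can be sketched as a region segmentation loss plus a weighted edge-branch loss. Binary cross-entropy and the weight `lam` are illustrative assumptions here, not the paper's exact formulation, and the values are toy numbers.

```python
import math

def bce(pred, target, eps=1e-7):
    """Mean binary cross-entropy between predicted probabilities and 0/1 labels."""
    return -sum(t * math.log(p + eps) + (1 - t) * math.log(1 - p + eps)
                for p, t in zip(pred, target)) / len(pred)

def boundary_aware_loss(seg_pred, seg_gt, edge_pred, edge_gt, lam=0.5):
    """Total loss = region segmentation loss + lam * edge-branch loss, so the
    network is penalized both for wrong regions and for wrong boundaries."""
    return bce(seg_pred, seg_gt) + lam * bce(edge_pred, edge_gt)

seg_pred, seg_gt   = [0.9, 0.2, 0.8], [1, 0, 1]   # per-pixel tumor probabilities
edge_pred, edge_gt = [0.7, 0.6, 0.1], [1, 0, 0]   # per-pixel boundary probabilities
print(boundary_aware_loss(seg_pred, seg_gt, edge_pred, edge_gt))
```

Because both terms backpropagate through shared encoder features, the edge branch pushes those features to stay sensitive to anatomical boundaries even when the region loss alone would tolerate blurry edges.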

