SD-UNet: Stripping down U-Net for Segmentation of Biomedical Images on Platforms with Low Computational Budgets

Diagnostics ◽  
2020 ◽  
Vol 10 (2) ◽  
pp. 110 ◽  
Author(s):  
Pius Kwao Gadosey ◽  
Yujian Li ◽  
Enock Adjei Agyekum ◽  
Ting Zhang ◽  
Zhaoying Liu ◽  
...  

During image segmentation tasks in computer vision, achieving high accuracy while requiring fewer computations and faster inference is a big challenge. This is especially important in medical imaging tasks, where one metric is usually compromised for the other. To address this problem, this paper presents an extremely fast, small and computationally effective deep neural network called Stripped-Down UNet (SD-UNet), designed for the segmentation of biomedical data on devices with limited computational resources. By making use of depthwise separable convolutions throughout the entire network, we design a lightweight deep convolutional neural network architecture inspired by the widely adopted U-Net model. To recover from the expected performance degradation in the process, we introduce a weight standardization algorithm combined with the group normalization method. We demonstrate that SD-UNet has three major advantages: (i) smaller model size (23x smaller than U-Net); (ii) 8x fewer parameters; and (iii) faster inference time, with a computational complexity lower than 8M floating point operations (FLOPs). Experiments on the benchmark dataset of the International Symposium on Biomedical Imaging (ISBI) challenge for segmentation of neuronal structures in electron microscopic (EM) stacks and the Medical Segmentation Decathlon (MSD) challenge brain tumor segmentation (BraTS) dataset show that the proposed model achieves comparable, and sometimes better, results than the current state-of-the-art.
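
Below is a minimal PyTorch sketch (not the authors' code) of the kind of building block the abstract describes: a depthwise separable convolution whose weights are standardized, followed by group normalization. The channel counts, group count and ReLU placement are illustrative assumptions.

```python
# Hedged sketch of a weight-standardized depthwise separable block with GroupNorm.
import torch
import torch.nn as nn
import torch.nn.functional as F

class WSConv2d(nn.Conv2d):
    """Conv2d with weight standardization: each filter is zero-mean, unit-variance."""
    def forward(self, x):
        w = self.weight
        mean = w.mean(dim=(1, 2, 3), keepdim=True)
        std = w.std(dim=(1, 2, 3), keepdim=True) + 1e-5
        return F.conv2d(x, (w - mean) / std, self.bias, self.stride,
                        self.padding, self.dilation, self.groups)

class DepthwiseSeparableBlock(nn.Module):
    """Depthwise conv + pointwise conv, each followed by GroupNorm and ReLU."""
    def __init__(self, in_ch, out_ch, groups=8):
        super().__init__()
        self.depthwise = WSConv2d(in_ch, in_ch, kernel_size=3, padding=1, groups=in_ch)
        self.pointwise = WSConv2d(in_ch, out_ch, kernel_size=1)
        self.gn1 = nn.GroupNorm(num_groups=min(groups, in_ch), num_channels=in_ch)
        self.gn2 = nn.GroupNorm(num_groups=min(groups, out_ch), num_channels=out_ch)

    def forward(self, x):
        x = F.relu(self.gn1(self.depthwise(x)))
        return F.relu(self.gn2(self.pointwise(x)))

# Example: one encoder block applied to a 256x256 feature map.
block = DepthwiseSeparableBlock(in_ch=16, out_ch=32)
y = block(torch.randn(1, 16, 256, 256))  # -> (1, 32, 256, 256)
```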

2021 ◽  
Vol 7 (12) ◽  
pp. 269
Author(s):  
Ilyasse Aboussaleh ◽  
Jamal Riffi ◽  
Adnane Mohamed Mahraz ◽  
Hamid Tairi

Brain tumors are considered one of the most serious causes of death in the world. Thus, it is very important to detect them as early as possible. Many approaches have been proposed to predict and segment the tumor. However, they suffer from different problems, such as the necessity of specialist intervention, long run-times and the choice of an appropriate feature extractor. To address these issues, we propose an approach based on a convolutional neural network architecture that simultaneously predicts and segments a cerebral tumor. The proposal is divided into two phases. First, to avoid the use of labeled images, which implies subjective intervention by a specialist, we used a simple binary annotation reflecting whether a tumor is present or not. Second, the prepared image data were fed into our deep learning model, which produced the final classification; if the classification indicated the existence of a tumor, the brain tumor was segmented based on the feature representations generated by the convolutional neural network architecture. The proposed method was trained on the BraTS 2017 dataset with different types of gliomas. The achieved results show the performance of the proposed approach in terms of accuracy, precision, recall and Dice similarity coefficient. Our model showed an accuracy of 91% in tumor classification and a Dice similarity coefficient of 82.35% in tumor segmentation.
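
A hypothetical PyTorch sketch of the two-phase idea follows: a small binary classifier decides whether a slice contains a tumor and, if so, a coarse mask is derived from the network's own feature maps. The layer sizes, thresholds and feature-map upsampling step are assumptions for illustration, not the authors' exact design.

```python
# Hedged two-phase sketch: classify first, then segment from feature representations.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TumorClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Linear(32, 1)  # binary output: tumor present or not

    def forward(self, x):
        fmap = self.features(x)                   # (B, 32, H/4, W/4)
        logit = self.head(fmap.mean(dim=(2, 3)))  # global average pooling
        return logit, fmap

def classify_then_segment(model, image, threshold=0.5):
    """Phase 1: classify. Phase 2: if positive, threshold the averaged feature map."""
    logit, fmap = model(image)
    if torch.sigmoid(logit).item() < threshold:
        return None                               # no tumor predicted
    heat = fmap.mean(dim=1, keepdim=True)         # collapse channels to a heat map
    heat = F.interpolate(heat, size=image.shape[-2:], mode="bilinear",
                         align_corners=False)
    heat = (heat - heat.min()) / (heat.max() - heat.min() + 1e-8)
    return (heat > 0.5).float()                   # coarse binary tumor mask

model = TumorClassifier()
mask = classify_then_segment(model, torch.randn(1, 1, 240, 240))
```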


Symmetry ◽  
2021 ◽  
Vol 13 (2) ◽  
pp. 320
Author(s):  
Yue Zhao ◽  
Xiaoqiang Ren ◽  
Kun Hou ◽  
Wentao Li

Automated brain tumor segmentation based on 3D magnetic resonance imaging (MRI) is critical to disease diagnosis. However, achieving robust and accurate automatic extraction of brain tumors is a big challenge because of the inherent heterogeneity of the tumor structure. In this paper, we present an efficient semantic segmentation network, the 3D recurrent multi-fiber network (RMFNet), which is based on an encoder–decoder architecture to segment the brain tumor accurately. The 3D RMFNet comprises two main components: a 3D recurrent unit and a 3D multi-fiber unit. First, we propose recurrent units that combine recurrent connections with convolutional layers, which enhances the model’s ability to integrate contextual information. Then, a 3D multi-fiber unit is added to the overall network to reduce the high computational cost incurred by using a 3D network architecture to capture local features. The 3D RMFNet thus combines the advantages of both the 3D recurrent unit and the 3D multi-fiber unit. Extensive experiments on the Brain Tumor Segmentation (BraTS) 2018 challenge dataset show that our RMFNet remarkably outperforms state-of-the-art methods, achieving average Dice scores of 89.62%, 83.65% and 78.72% for the whole tumor, tumor core and enhancing tumor, respectively. The experimental results prove our architecture to be an efficient tool for accurate brain tumor segmentation.
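
The sketch below illustrates, under assumptions, what a 3D multi-fiber unit of the kind mentioned above can look like in PyTorch: channels are split into independent "fibers" via grouped 3D convolutions, with a lightweight 1x1x1 "multiplexer" exchanging information between fibers. The channel counts and number of fibers are illustrative, and the recurrent unit is omitted.

```python
# Hedged sketch of a 3D multi-fiber unit built from grouped 3D convolutions.
import torch
import torch.nn as nn

class MultiFiberUnit3D(nn.Module):
    def __init__(self, channels, fibers=8):
        super().__init__()
        # Multiplexer: cheap cross-fiber information exchange via 1x1x1 convs.
        self.multiplexer = nn.Sequential(
            nn.Conv3d(channels, channels // 4, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv3d(channels // 4, channels, kernel_size=1),
        )
        # Grouped 3x3x3 convolutions: each fiber processes its own channel slice.
        self.fibers = nn.Sequential(
            nn.Conv3d(channels, channels, kernel_size=3, padding=1, groups=fibers),
            nn.BatchNorm3d(channels),
            nn.ReLU(inplace=True),
            nn.Conv3d(channels, channels, kernel_size=3, padding=1, groups=fibers),
            nn.BatchNorm3d(channels),
        )

    def forward(self, x):
        x = x + self.multiplexer(x)             # exchange information across fibers
        return torch.relu(x + self.fibers(x))   # residual grouped convolutions

unit = MultiFiberUnit3D(channels=32)
out = unit(torch.randn(1, 32, 16, 64, 64))      # (batch, C, D, H, W) MRI sub-volume
```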


This paper presents brain tumor detection and segmentation using image processing techniques. Convolutional neural networks can be applied to medical research for brain tumor analysis. The tumor in the MRI scans is segmented using the K-means clustering algorithm, which is applied to every scan, and the result is then fed to a convolutional neural network for training and testing. In our CNN, we use ReLU and sigmoid activation functions to determine the end result. Training is done using only CPU power; no GPU is used. The research is done in two phases: image processing and applying the neural network.
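
A minimal sketch of the image-processing phase described above, assuming scikit-learn's KMeans and a simple "brightest cluster is the tumor" heuristic (neither of which is stated in the text):

```python
# Hedged sketch: K-means clustering of MRI pixel intensities to isolate a candidate
# tumor region, whose mask would then be passed to the CNN for training/testing.
import numpy as np
from sklearn.cluster import KMeans

def kmeans_segment(scan: np.ndarray, n_clusters: int = 4) -> np.ndarray:
    """Cluster pixel intensities and return a binary mask of the brightest cluster."""
    pixels = scan.reshape(-1, 1).astype(np.float64)
    labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(pixels)
    labels = labels.reshape(scan.shape)
    # Assumption: the cluster with the highest mean intensity corresponds to the tumor.
    brightest = max(range(n_clusters), key=lambda k: scan[labels == k].mean())
    return (labels == brightest).astype(np.uint8)

# Example on a synthetic 240x240 slice.
mask = kmeans_segment(np.random.rand(240, 240))
```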


Author(s):  
Nishanth Krishnaraj ◽  
A. Mary Mekala ◽  
Bhaskar M. ◽  
Ruban Nersisson ◽  
Alex Noel Joseph Raj

Early prediction of cancer type has become very crucial. Breast cancer is common among women and can be life threatening. Several imaging techniques have been suggested for timely detection and treatment of breast cancer, and much research has been done to detect breast cancer accurately. Automated whole breast ultrasound (AWBUS) is a new breast imaging technology that can render the entire breast anatomy in 3-D volume. The tissue layers in the breast are segmented, and the type of lesion in the breast tissue can be identified, which is essential for cancer detection. In this chapter, a U-Net convolutional neural network architecture is used to segment breast tissue from AWBUS images into the different layers, that is, the epidermis, subcutaneous, and muscular layers. The architecture was trained and tested with the AWBUS dataset images. The performance of the proposed scheme was evaluated using the accuracy, loss and F1 score of the neural network, calculated for each layer of the breast tissue.
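
As a minimal sketch of the per-layer evaluation mentioned above, the following computes an F1 score for each predicted tissue class from integer-labelled segmentation maps; the class indices and the background label are assumptions for illustration.

```python
# Hedged sketch: per-class F1 (equivalently Dice) scores for a segmentation map.
import numpy as np

def per_class_f1(pred: np.ndarray, target: np.ndarray, num_classes: int = 4) -> dict:
    """F1 score per class for integer-labelled segmentation maps of equal shape."""
    scores = {}
    for c in range(num_classes):
        tp = np.sum((pred == c) & (target == c))
        fp = np.sum((pred == c) & (target != c))
        fn = np.sum((pred != c) & (target == c))
        scores[c] = 2 * tp / (2 * tp + fp + fn + 1e-8)
    return scores

# Example with random maps: class 0 = background, 1-3 = the three tissue layers.
pred = np.random.randint(0, 4, size=(256, 256))
gt = np.random.randint(0, 4, size=(256, 256))
print(per_class_f1(pred, gt))
```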

