Semantic Segmentation from Sparse Labeling Using Multi-Level Superpixels

Author(s):  
Inigo Alonso ◽  
Ana C. Murillo
Author(s):  
Yizhen Chen ◽  
Haifeng Hu

Most existing segmentation networks are built upon a “U-shaped” encoder–decoder structure, where the multi-level features extracted by the encoder are gradually aggregated by the decoder. Although this structure has proven effective in improving segmentation performance, it has two main drawbacks. On the one hand, the introduction of low-level features brings a significant increase in computation without an obvious performance gain. On the other hand, general feature-aggregation strategies such as addition and concatenation fuse features without considering the usefulness of each feature vector, which mixes useful information with massive noise. In this article, we abandon the traditional “U-shaped” architecture and propose Y-Net, a dual-branch joint network for accurate semantic segmentation. Specifically, it aggregates only the low-resolution, high-level features and utilizes the global context guidance generated by the first branch to refine the second branch. The dual branches are effectively connected through a Semantic Enhancing Module, which can be regarded as the combination of spatial attention and channel attention. We also design a novel Channel-Selective Decoder (CSD) to adaptively integrate features from different receptive fields by assigning specific channelwise weights, where the weights are input-dependent. Our Y-Net is capable of breaking through the limits of single-branch networks and attaining higher performance with less computational cost than the “U-shaped” structure. The proposed CSD can better integrate useful information and suppress interfering noise. Comprehensive experiments are carried out on three public datasets to evaluate the effectiveness of our method. Our Y-Net achieves state-of-the-art performance on the PASCAL VOC 2012, PASCAL Person-Part, and ADE20K datasets without pre-training on extra datasets.
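The input-dependent channelwise gating described for the CSD can be sketched as an SK-style selection between two feature branches. The shapes, the projection matrices `w_a`/`w_b`, and the fusion rule below are illustrative assumptions, not the paper's exact module:

```python
import numpy as np

rng = np.random.default_rng(0)

def channel_selective_fuse(feat_a, feat_b, w_a, w_b):
    # Global average pooling of the summed features -> input-dependent descriptor
    desc = (feat_a + feat_b).mean(axis=(1, 2))            # (C,)
    # Project to per-branch logits, softmax across the two branches per channel
    logits = np.stack([w_a @ desc, w_b @ desc])           # (2, C)
    weights = np.exp(logits) / np.exp(logits).sum(axis=0)
    # Channelwise weighted sum of the two feature maps
    return weights[0][:, None, None] * feat_a + weights[1][:, None, None] * feat_b

C, H, W = 4, 8, 8
feat_a = rng.standard_normal((C, H, W))   # e.g. small receptive field
feat_b = rng.standard_normal((C, H, W))   # e.g. large receptive field
w_a = rng.standard_normal((C, C))         # hypothetical learned projections
w_b = rng.standard_normal((C, C))
fused = channel_selective_fuse(feat_a, feat_b, w_a, w_b)
```

Because the gating weights are computed from the features themselves, different inputs produce different channel mixtures, which is the "input-dependent" property the abstract describes.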


Author(s):  
Poonam Fauzdar ◽  
Sarvesh Kumar

In this paper we apply an approach for segmenting brain tumour regions in computed tomography images by proposing a multi-level fuzzy technique with quantization and a minimum Euclidean distance criterion applied to the morphologically separated skull part. Since the edges are identified with closed contours and further refined by the minimum Euclidean distance criterion, the numerous results analyzed are very promising, and the algorithm offers advantages such as lower cost, global analysis of the image, reduced time, and higher specificity and positive predictive value.
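The quantization-plus-minimum-distance step can be illustrated as assigning each pixel intensity to its nearest quantization level. The centroid values below are hypothetical, and the fuzzy membership and morphological stages are omitted:

```python
import numpy as np

def quantize_min_distance(image, centroids):
    # Distance of every pixel intensity to each quantization level
    dists = np.abs(image[..., None] - np.asarray(centroids, dtype=float))  # (H, W, K)
    # Assign each pixel to the level at minimum Euclidean distance
    return dists.argmin(axis=-1)

img = np.array([[10.0, 200.0],
                [90.0, 250.0]])
labels = quantize_min_distance(img, centroids=[0, 100, 255])
# labels -> [[0, 2], [1, 2]]
```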


Author(s):  
Íñigo Alonso ◽  
Ana Cristina Murillo Arnal

This work proposes and validates a simple but effective approach to train dense semantic segmentation models from sparsely labeled data. Data collection and labeling are the most costly tasks in semantic segmentation. Our approach needs only a few labeled pixels per image, reducing the human interaction required.
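Training from a few labeled pixels per image amounts to masking the loss to the annotated locations. This is a generic sketch of that idea; the `ignore_index` convention and array shapes are assumptions, not the authors' implementation:

```python
import numpy as np

def sparse_pixel_loss(logits, labels, ignore_index=-1):
    # Keep only the sparsely labeled pixels; ignore_index marks unlabeled ones
    mask = labels != ignore_index
    z = logits[mask]                                  # (N, C) labeled pixels only
    z = z - z.max(axis=1, keepdims=True)              # numerical stability
    log_p = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    # Mean negative log-likelihood over the labeled pixels
    return -log_p[np.arange(z.shape[0]), labels[mask]].mean()

logits = np.array([[[2.0, 0.0], [0.0, 2.0]]])   # (H=1, W=2, C=2)
labels = np.array([[0, -1]])                    # only the first pixel is labeled
loss = sparse_pixel_loss(logits, labels)
```

Unlabeled pixels contribute no gradient, so annotation cost drops while the network still learns a dense prediction.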


2021 ◽  
pp. 108384
Author(s):  
Jiaxing Huang ◽  
Dayan Guan ◽  
Aoran Xiao ◽  
Shijian Lu

Author(s):  
Tong Shen ◽  
Guosheng Lin ◽  
Chunhua Shen ◽  
Ian Reid

Semantic image segmentation is a fundamental task in image understanding. Per-pixel semantic labelling of an image benefits greatly from the ability to consider region consistency both locally and globally. However, many Fully Convolutional Network-based methods do not impose such consistency, which may give rise to noisy and implausible predictions. We address this issue by proposing a dense multi-label network module that is able to encourage region consistency at different levels. This simple but effective module can be easily integrated into any semantic segmentation system. With comprehensive experiments, we show that the dense multi-label module can successfully remove implausible labels and clear the confusion, boosting the performance of semantic segmentation systems.
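One simple way to read the consistency idea is to filter per-pixel predictions with image-level multi-label scores, suppressing classes judged absent from the image. The thresholding rule below is an illustrative assumption, not the paper's module:

```python
import numpy as np

def filter_implausible(pixel_probs, image_label_probs, thresh=0.5):
    # Keep only classes the image-level multi-label head deems present
    present = (image_label_probs >= thresh).astype(float)   # (C,)
    filtered = pixel_probs * present[:, None, None]
    # Renormalize per pixel so the surviving classes form a distribution
    return filtered / filtered.sum(axis=0, keepdims=True)

pixel_probs = np.array([[[0.5]], [[0.3]], [[0.2]]])  # (C=3, H=1, W=1)
image_probs = np.array([0.9, 0.1, 0.8])              # class 1 judged absent
out = filter_implausible(pixel_probs, image_probs)
# out[:, 0, 0] -> [5/7, 0, 2/7]
```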


2020 ◽  
Vol 393 ◽  
pp. 54-65 ◽  
Author(s):  
Boxiang Zhang ◽  
Wenhui Li ◽  
Yuming Hui ◽  
Jiayun Liu ◽  
Yuanyuan Guan
