Benchmarking the Robustness of Semantic Segmentation Models with Respect to Common Corruptions

Author(s):  
Christoph Kamann ◽  
Carsten Rother

Abstract: When designing a semantic segmentation model for a real-world application, such as autonomous driving, it is crucial to understand the robustness of the network with respect to a wide range of image corruptions. While there are recent robustness studies for full-image classification, we are the first to present an exhaustive study for semantic segmentation, based on many established neural network architectures. We utilize almost 400,000 images generated from the Cityscapes dataset, PASCAL VOC 2012, and ADE20K. Based on the benchmark study, we gain several new insights. Firstly, many networks perform well with respect to real-world image corruptions, such as a realistic PSF blur. Secondly, some architecture properties significantly affect robustness, such as a Dense Prediction Cell, which is designed to maximize performance on clean data only. Thirdly, the generalization capability of semantic segmentation models depends strongly on the type of image corruption: models generalize well to image noise and image blur, but not to digitally corrupted data or weather corruptions.
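A minimal sketch of the evaluation protocol this abstract describes: corrupt a clean validation image, run the same segmentation model on both versions, and compare the per-image mIoU. The model interface, corruption choice, and severity scale below are illustrative assumptions, not the authors' exact pipeline.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def corrupt(image: np.ndarray, severity: int = 3) -> np.ndarray:
    """Apply a simple blur corruption; the paper additionally uses noise,
    digital, and weather corruptions at several severity levels."""
    return gaussian_filter(image, sigma=[severity, severity, 0])

def miou(pred: np.ndarray, gt: np.ndarray, num_classes: int) -> float:
    """Mean intersection-over-union over the classes present in the image."""
    ious = []
    for c in range(num_classes):
        inter = np.logical_and(pred == c, gt == c).sum()
        union = np.logical_or(pred == c, gt == c).sum()
        if union > 0:
            ious.append(inter / union)
    return float(np.mean(ious))

def robustness_gap(model, image, gt, num_classes):
    """Drop in mIoU between clean and corrupted input for one image."""
    clean_pred = model(image)               # hypothetical: returns an HxW label map
    corrupted_pred = model(corrupt(image))  # same model, corrupted input
    return miou(clean_pred, gt, num_classes) - miou(corrupted_pred, gt, num_classes)
```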

Technologies ◽  
2020 ◽  
Vol 8 (2) ◽  
pp. 35
Author(s):  
Marco Toldo ◽  
Andrea Maracani ◽  
Umberto Michieli ◽  
Pietro Zanuttigh

The aim of this paper is to give an overview of the recent advancements in Unsupervised Domain Adaptation (UDA) of deep networks for semantic segmentation. This task is attracting wide interest because semantic segmentation models require a huge amount of labeled data, and the lack of data fitting specific requirements is the main limitation in the deployment of these techniques. The field has been explored only recently and has rapidly grown, with a large number of ad-hoc approaches. This motivates us to build a comprehensive overview of the proposed methodologies and to provide a clear categorization. In this paper, we start by introducing the problem, its formulation, and the various scenarios that can be considered. Then, we introduce the different levels at which adaptation strategies may be applied: at the input (image) level, at the level of the internal feature representations, and at the output level. Furthermore, we present a detailed overview of the literature in the field, dividing previous methods into the following (non-mutually-exclusive) categories: adversarial learning, generative-based approaches, analysis of classifier discrepancies, self-teaching, entropy minimization, curriculum learning, and multi-task learning. Novel research directions are also briefly introduced to give a hint of interesting open problems in the field. Finally, a comparison of the performance of the various methods in the widely used autonomous driving scenario is presented.
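As one concrete example of the output-level adaptation strategies surveyed here, the sketch below shows entropy minimization: predictions on unlabeled target-domain images are pushed towards confident, low-entropy segmentation maps. Tensor shapes, the loss weight, and function names are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def entropy_minimization_loss(target_logits: torch.Tensor) -> torch.Tensor:
    """target_logits: (B, C, H, W) raw scores on unlabeled target images."""
    probs = F.softmax(target_logits, dim=1)
    log_probs = F.log_softmax(target_logits, dim=1)
    pixel_entropy = -(probs * log_probs).sum(dim=1)  # (B, H, W) per-pixel entropy
    return pixel_entropy.mean()

# Typical use during adaptation (sketch): supervised loss on the labeled source
# batch plus a weighted entropy term on the unlabeled target batch, e.g.
#   loss = F.cross_entropy(model(source_imgs), source_labels) \
#          + 0.01 * entropy_minimization_loss(model(target_imgs))
```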


2001 ◽  
Vol 7 (S2) ◽  
pp. 522-523
Author(s):  
W. Probst ◽  
G. Benner ◽  
B. Kabius ◽  
G. Lang ◽  
S. Hiller ◽  
...  

Transmission electron microscopes have been built along with, and guided by, technological opportunities for the last five decades. Even though there are some “workhorse” types of microscopes, these instruments are still built more or less from the technological viewpoint and less from the viewpoint of ease of use across a wide range of applications. On the other hand, leading-edge applications are the drivers for the development and use of leading-edge technology. The result is then a “race horse” which is of very limited benefit in the “real world”. During the last decade, computers have been integrated to build microscope systems. In most cases, however, computers still have to deal with obsolete electron-optical ray path designs and thus have to be used more to overcome the problems of imperfect optics and badly designed ray paths than to provide optimized “real world” capabilities.


2021 ◽  
Vol 25 (4) ◽  
pp. 993-1012
Author(s):  
Ting Da ◽  
Liang Yang

Instance segmentation has a wide range of applications, including video surveillance, autonomous driving, and behavior analysis. Nevertheless, as a type of pixel-level segmentation, its prediction performance in practice is substantially affected by low-resolution (LR) images resulting from the limitations of image acquisition equipment and poor acquisition conditions. Moreover, because their immense computational costs prevent existing segmentation models from being deployed on embedded devices, the development of a lightweight segmentation model has become an urgent necessity. However, it is challenging to achieve sound results with high efficiency and portability. From another perspective, to improve the understanding of detailed objects, an architecture is needed that promotes an advanced interpretation of the segmentation, that is, a refined mask with texture. Our main contribution, called TextureMask, combines a MobileNet-FPN backbone for Mask R-CNN, segmentation with cropping, and a gradient sensitivity map, which are then merged into a unified map to refine and enrich the mask with texture information. Furthermore, preprocessing and post-processing algorithms are incorporated. Experiments demonstrate that our technique exhibits good pixel-level segmentation performance in terms of both accuracy and computational efficiency for a given LR input, and it can easily be deployed on embedded platforms.
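The abstract does not detail how the gradient sensitivity map is computed, so the following is only a plausible reading under stated assumptions: compute an image-gradient magnitude map inside the cropped instance region and blend it with the coarse mask into a unified map before re-thresholding. All names and the blending weight are illustrative, not the authors' method.

```python
import numpy as np
from scipy import ndimage

def gradient_sensitivity_map(crop_gray: np.ndarray) -> np.ndarray:
    """Normalized gradient-magnitude map of a grayscale instance crop."""
    gx = ndimage.sobel(crop_gray, axis=1)
    gy = ndimage.sobel(crop_gray, axis=0)
    mag = np.hypot(gx, gy)
    return mag / (mag.max() + 1e-8)

def refine_mask(coarse_mask: np.ndarray, crop_gray: np.ndarray, alpha: float = 0.5) -> np.ndarray:
    """Merge the coarse binary mask with the gradient map into a unified map
    and re-threshold, sharpening boundaries around textured regions."""
    unified = alpha * coarse_mask.astype(float) + (1 - alpha) * gradient_sensitivity_map(crop_gray)
    return unified > 0.5
```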


Author(s):  
Xin Guo ◽  
Boyuan Pan ◽  
Deng Cai ◽  
Xiaofei He

Low-rank matrix factorization (LRMF) has attracted much attention due to its wide range of applications in computer vision, such as image inpainting and video denoising. Most existing methods assume that the loss between an observed measurement matrix and its bilinear factorization follows a symmetric distribution, such as the Gaussian or Gamma families. However, in real-world situations this assumption is often too idealized, because pictures taken under various illuminations and angles may suffer from multi-peaked, asymmetric, and irregular noise. To address these problems, this paper assumes that the loss follows a mixture of asymmetric Laplace distributions and proposes the robust Asymmetric Laplace Adaptive Matrix Factorization (ALAMF) model within a Bayesian matrix factorization framework. The Laplace assumption makes our model more robust, and the asymmetry makes it more flexible and adaptable to real-world noise. A variational method is then devised for model inference. We compare ALAMF with other state-of-the-art matrix factorization methods on both synthetic data sets and real-world applications. The experimental results demonstrate the effectiveness of our proposed approach.
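A minimal sketch of the core idea: factorize X ≈ U Vᵀ under an asymmetric Laplace penalty instead of a Gaussian one, so positive and negative residuals are weighted differently. For clarity this sketch uses a single asymmetric Laplace component with fixed scale and asymmetry, fitted by plain gradient descent, rather than the paper's variational mixture inference; all parameter values are assumptions.

```python
import torch

def asym_laplace_nll(residual: torch.Tensor, lam: float = 1.0, kappa: float = 2.0) -> torch.Tensor:
    """Negative log-likelihood (up to a constant) under an asymmetric Laplace:
    positive and negative residuals are penalized with different slopes."""
    pos = torch.clamp(residual, min=0.0)
    neg = torch.clamp(-residual, min=0.0)
    return lam * (kappa * pos + neg / kappa).sum()

def factorize(X: torch.Tensor, rank: int = 5, steps: int = 2000, lr: float = 1e-2):
    """Gradient-descent bilinear factorization X ≈ U @ V.T with the asymmetric loss."""
    m, n = X.shape
    U = torch.randn(m, rank, requires_grad=True)
    V = torch.randn(n, rank, requires_grad=True)
    opt = torch.optim.Adam([U, V], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = asym_laplace_nll(X - U @ V.T)
        loss.backward()
        opt.step()
    return U.detach(), V.detach()
```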


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Aryan Mobiny ◽  
Pengyu Yuan ◽  
Supratik K. Moulik ◽  
Naveen Garg ◽  
Carol C. Wu ◽  
...  

Abstract: Deep neural networks (DNNs) have achieved state-of-the-art performance in many important domains, including medical diagnosis, security, and autonomous driving. In domains where safety is highly critical, an erroneous decision can result in serious consequences. While perfect prediction accuracy is not always achievable, recent work on Bayesian deep networks shows that it is possible to know when DNNs are more likely to make mistakes. Knowing what DNNs do not know is desirable for increasing the safety of deep learning technology in sensitive applications; Bayesian neural networks attempt to address this challenge. Traditional approaches are computationally intractable and do not scale well to large, complex neural network architectures. In this paper, we develop a theoretical framework to approximate Bayesian inference for DNNs by imposing a Bernoulli distribution on the model weights. This method, called Monte Carlo DropConnect (MC-DropConnect), gives us a tool to represent model uncertainty with little change to the overall model structure or computational cost. We extensively validate the proposed algorithm on multiple network architectures and datasets for classification and semantic segmentation tasks. We also propose new metrics to quantify uncertainty estimates, which enables an objective comparison between MC-DropConnect and prior approaches. Our empirical results demonstrate that the proposed framework yields significant improvements in both prediction accuracy and uncertainty estimation quality compared to the state of the art.
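A minimal sketch of MC-DropConnect inference as described in the abstract: Bernoulli masks are applied to the weights (rather than the activations) and kept active at test time, so T stochastic forward passes approximate the predictive posterior, with predictive entropy as an uncertainty measure. Layer sizes, the drop rate, and the number of samples are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DropConnectLinear(nn.Module):
    """Linear layer whose weights are masked by a fresh Bernoulli sample on every call."""
    def __init__(self, in_features: int, out_features: int, p: float = 0.5):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_features, in_features) * 0.01)
        self.bias = nn.Parameter(torch.zeros(out_features))
        self.p = p

    def forward(self, x):
        # Sample a Bernoulli mask over the weights (kept active at test time).
        mask = torch.bernoulli(torch.full_like(self.weight, 1 - self.p))
        return F.linear(x, self.weight * mask / (1 - self.p), self.bias)

def mc_dropconnect_predict(model: nn.Module, x: torch.Tensor, T: int = 30):
    """Average T stochastic passes; predictive entropy serves as the uncertainty map."""
    probs = torch.stack([F.softmax(model(x), dim=-1) for _ in range(T)])
    mean = probs.mean(dim=0)
    entropy = -(mean * (mean + 1e-12).log()).sum(dim=-1)
    return mean, entropy
```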


2012 ◽  
Vol 106 (3) ◽  
pp. 206-211 ◽  
Author(s):  
Laurie H. Rubel ◽  
Michael Driskill ◽  
Lawrence M. Lesser

Redistricting can provide a real-world application for use in a wide range of mathematics classrooms.


2022 ◽  
Vol 27 (3) ◽  
pp. 1-24
Author(s):  
Lang Feng ◽  
Jiayi Huang ◽  
Jeff Huang ◽  
Jiang Hu

Data-Flow Integrity (DFI) is a well-known approach for effectively detecting a wide range of software attacks. However, its real-world application has been quite limited so far because of the prohibitive performance overhead it incurs. Moreover, the overhead is enormously difficult to overcome without substantially lowering the DFI criterion. In this work, an analysis is performed to understand the main factors contributing to the overhead. Accordingly, a hardware-assisted parallel approach is proposed to tackle the overhead challenge. Simulations on the SPEC CPU 2006 benchmark show that the proposed approach can completely enforce the DFI defined in the original seminal work while reducing the performance overhead by 4× on average.
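For readers unfamiliar with the DFI criterion being enforced, the sketch below illustrates the check in conceptual form: every memory read is allowed only if the last instruction that wrote the address belongs to a statically computed set of legal writers (reaching definitions). This is purely an illustration of what the enforcement must verify, not the paper's hardware-assisted parallel implementation; all names are assumptions.

```python
last_writer: dict = {}  # runtime definitions table: address -> id of last write site

def checked_write(addr: int, value, write_site: int, memory: dict) -> None:
    """Perform a write and record which instruction (site) defined this address."""
    memory[addr] = value
    last_writer[addr] = write_site

def checked_read(addr: int, read_site: int, allowed_writers: dict, memory: dict):
    """allowed_writers[read_site] is the statically computed set of legal writer ids."""
    if last_writer.get(addr) not in allowed_writers[read_site]:
        raise RuntimeError(f"DFI violation at read site {read_site}")
    return memory[addr]
```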


Author(s):  
Hermann Blum ◽  
Paul-Edouard Sarlin ◽  
Juan Nieto ◽  
Roland Siegwart ◽  
Cesar Cadena

Abstract: Deep learning has enabled impressive progress in the accuracy of semantic segmentation. Yet the ability to estimate uncertainty and detect failure is key for safety-critical applications like autonomous driving. Existing uncertainty estimates have mostly been evaluated on simple tasks, and it is unclear whether these methods generalize to more complex scenarios. We present Fishyscapes, the first public benchmark for anomaly detection in a real-world task of semantic segmentation for urban driving. It evaluates pixel-wise uncertainty estimates for the detection of anomalous objects. We adapt state-of-the-art methods to recent semantic segmentation models and compare uncertainty estimation approaches based on softmax confidence, Bayesian learning, density estimation, and image resynthesis, as well as supervised anomaly detection methods. Our results show that anomaly detection is far from solved even for ordinary situations, while our benchmark allows measuring advancements beyond the state of the art. Results, data, and submission information can be found at https://fishyscapes.com/.
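The simplest baseline family evaluated by such a benchmark is a pixel-wise anomaly score derived from softmax confidence; a minimal sketch is below. The model interface and tensor shapes are illustrative assumptions, and the benchmark would score the resulting maps against ground-truth anomaly masks (e.g., with average precision).

```python
import torch
import torch.nn.functional as F

def softmax_anomaly_score(logits: torch.Tensor) -> torch.Tensor:
    """logits: (B, C, H, W) from a semantic segmentation model.
    Returns a (B, H, W) map where high values mark likely anomalous pixels."""
    probs = F.softmax(logits, dim=1)
    max_prob, _ = probs.max(dim=1)
    return 1.0 - max_prob
```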


2012 ◽  
Author(s):  
Kelly Dyjak Leblanc ◽  
Caitlin Femac ◽  
Craig N. Shealy ◽  
Renee Staton ◽  
Lee G. Sternberger

2002 ◽  
Author(s):  
Janel H. Rogers ◽  
Heather M. Oonk ◽ 
Ronald A. Moore ◽ 
M. G. Averett ◽  
Jeffrey G. Morrison
