Predictive Uncertainty Estimation Using Deep Learning for Soft Robot Multimodal Sensing

2021 · Vol 6 (2) · pp. 951-957
Author(s): Ze Yang Ding, Junn Yong Loo, Vishnu Monn Baskaran, Surya Girinatha Nurzaman, Chee Pin Tan
IEEE Access · 2020 · Vol 8 · pp. 179028-179038
Author(s): Isibor Kennedy Ihianle, Augustine O. Nwajana, Solomon Henry Ebenuwa, Richard I. Otuka, Kayode Owa, ...

2019 · Vol 5 (1) · pp. 223-226
Author(s): Max-Heinrich Laves, Sontje Ihler, Tobias Ortmaier, Lüder A. Kahrs

Abstract: In this work, we discuss epistemic uncertainty estimation obtained by Bayesian inference in diagnostic classifiers and show that prediction uncertainty correlates highly with goodness of prediction. We train the ResNet-18 image classifier on a dataset of 84,484 optical coherence tomography scans showing four different retinal conditions. Dropout is added before every building block of ResNet, creating an approximation to a Bayesian classifier. Monte Carlo sampling with dropout is applied at test time for uncertainty estimation: multiple forward passes are performed to obtain a distribution over the class labels, and the variance and the entropy of this distribution are used as uncertainty metrics. Our results show a strong correlation of ρ = 0.99 between prediction uncertainty and prediction error. The mean uncertainty of incorrectly diagnosed cases was significantly higher than that of correctly diagnosed cases. Modeling the prediction uncertainty in computer-aided diagnosis with deep learning yields more reliable results and is therefore expected to increase patient safety. This will help transfer such systems into clinical routine and increase the acceptance of machine learning in diagnosis among physicians and patients.
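The abstract's two uncertainty metrics — the variance and the predictive entropy of the class distribution obtained from repeated stochastic forward passes — can be sketched in a few lines of NumPy. The Dirichlet toy samples below are an illustrative assumption standing in for real Monte Carlo dropout outputs, not the paper's actual pipeline:

```python
import numpy as np

def mc_uncertainty(probs):
    """Predictive mean, per-class variance, and predictive entropy
    from Monte Carlo dropout samples.

    probs: array of shape (T, C) -- softmax outputs from T
    stochastic forward passes over C classes.
    """
    mean = probs.mean(axis=0)                        # predictive distribution
    var = probs.var(axis=0)                          # per-class variance
    entropy = -np.sum(mean * np.log(mean + 1e-12))   # predictive entropy
    return mean, var, entropy

# Toy MC samples: 100 passes over 4 classes (e.g. retinal conditions).
rng = np.random.default_rng(0)
confident = rng.dirichlet([50, 1, 1, 1], size=100)   # peaked -> low entropy
uncertain = rng.dirichlet([2, 2, 2, 2], size=100)    # flat -> high entropy
m_c, v_c, h_conf = mc_uncertainty(confident)
m_u, v_u, h_unc = mc_uncertainty(uncertain)
```

With a real model, `probs` would come from keeping dropout active at test time and stacking the softmax outputs of T forward passes; the reported correlation between uncertainty and error is then computed over these per-sample entropies.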


Author(s): Zhuobin Zheng, Chun Yuan, Xinrui Zhu, Zhihui Lin, Yangyang Cheng, ...

Learning related tasks in various domains and transferring the exploited knowledge to new situations is a significant challenge in Reinforcement Learning (RL). However, most RL algorithms are data inefficient and fail to generalize in complex environments, limiting their adaptability and applicability in multi-task scenarios. In this paper, we propose Self-Supervised Mixture-of-Experts (SUM), an effective algorithm driven by predictive uncertainty estimation for multi-task RL. SUM utilizes a multi-head agent with shared parameters as experts to learn a series of related tasks simultaneously by Deep Deterministic Policy Gradient (DDPG). Each expert is extended with predictive uncertainty estimation on known and unknown states to strengthen its Q-value evaluation against overfitting and to improve overall generalization. This enables the agent to capture and diffuse common knowledge across tasks, improving sample efficiency in each task and the effectiveness of expert scheduling across multiple tasks. Instead of the task-specific design of common MoEs, a self-supervised gating network is adopted to determine the expert best suited to handle each interaction from unseen environments; it is calibrated entirely by the uncertainty feedback from the experts, without explicit supervision. To alleviate imbalanced expert utilization, a known weakness of MoE training, optimization is accomplished via decayed-masked experience replay, which encourages both diversification and specialization of experts during different periods. We demonstrate that our approach learns faster and achieves better performance through efficient transfer and robust generalization, outperforming several related methods on extended OpenAI Gym MuJoCo multi-task environments.
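As a minimal illustration of uncertainty-driven expert scheduling, the sketch below routes a state to the expert head whose stochastic Q-value samples are least dispersed. The `select_expert` helper and the synthetic Q-samples are hypothetical stand-ins for illustration, not the SUM gating network or its DDPG experts:

```python
import numpy as np

def select_expert(q_samples):
    """Return the index of the expert with the lowest predictive
    uncertainty, measured as the standard deviation of its
    stochastic Q-value samples on the current state.

    q_samples: array of shape (E, T) -- T samples per expert.
    """
    uncertainty = q_samples.std(axis=1)   # per-expert predictive std
    return int(np.argmin(uncertainty)), uncertainty

rng = np.random.default_rng(1)
# Hypothetical Q-value samples: expert 1 is confident on this state,
# experts 0 and 2 are not.
q = np.stack([rng.normal(0.0, 1.0, 50),
              rng.normal(0.5, 0.1, 50),
              rng.normal(0.2, 0.8, 50)])
idx, unc = select_expert(q)   # expert 1 handles this interaction
```

In the paper the routing decision is made by a learned gating network calibrated from this kind of uncertainty feedback rather than by a hard argmin, but the signal being exploited — low dispersion means a well-known state for that expert — is the same.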


Author(s):  
Julissa Villanueva Llerena

Tractable Deep Probabilistic Models (TPMs) are generative models based on arithmetic circuits that allow for exact marginal inference in linear time. These models have obtained promising results in several machine learning tasks. Like many other models, TPMs can produce over-confident incorrect inferences, especially on regions with small statistical support. In this work, we will develop efficient estimators of the predictive uncertainty that are robust to data scarcity and outliers. We investigate two approaches. The first approach measures the variability of the output to perturbations of the model weights. The second approach captures the variability of the prediction to changes in the model architecture. We will evaluate the approaches on challenging tasks such as image completion and multilabel classification.
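The first approach — measuring output variability under perturbations of the model weights — can be sketched generically. The logistic toy model and all names below (`perturbation_uncertainty`, `predict`, `scale`) are assumptions for illustration and stand in for a real TPM's inference routine:

```python
import numpy as np

def perturbation_uncertainty(predict, weights, x, n=100, scale=0.01, seed=0):
    """Estimate predictive uncertainty as the standard deviation of the
    model output under small Gaussian perturbations of its weights."""
    rng = np.random.default_rng(seed)
    outputs = [predict(weights + rng.normal(0.0, scale, weights.shape), x)
               for _ in range(n)]
    return float(np.std(outputs))

# Toy stand-in for a model's inference routine: a logistic score.
def predict(w, x):
    return 1.0 / (1.0 + np.exp(-w @ x))

w = np.array([2.0, -1.0])
x_boundary = np.array([0.1, 0.1])    # score near 0: output is sensitive
x_saturated = np.array([5.0, -5.0])  # saturated score: output barely moves
u_boundary = perturbation_uncertainty(predict, w, x_boundary)
u_saturated = perturbation_uncertainty(predict, w, x_saturated)
```

The second approach, varying the model architecture instead of the weights, would follow the same pattern with `predict` swapped for a family of structurally perturbed models.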


2021 · pp. 108498
Author(s): Chen Wang, Xiang Wang, Jiawei Zhang, Liang Zhang, Xiao Bai, ...

2019 · Vol 24 (6) · pp. 4307-4322
Author(s): Sergio Hernández, Diego Vergara, Matías Valdenegro-Toro, Felipe Jorquera

2020 · Vol 5 (2) · pp. 3153-3160
Author(s): Antonio Loquercio, Mattia Segu, Davide Scaramuzza

2020 · Vol 60 (6) · pp. 2697-2717
Author(s): Gabriele Scalia, Colin A. Grambow, Barbara Pernici, Yi-Pei Li, William H. Green
