A survey on deep learning-based Monte Carlo denoising

2021 ◽  
Vol 7 (2) ◽  
pp. 169-185
Author(s):  
Yuchi Huo ◽  
Sung-eui Yoon

Abstract: Monte Carlo (MC) integration is used ubiquitously in realistic image synthesis because of its flexibility and generality. However, the integration has to balance estimator bias and variance, which causes visually distracting noise at low sample counts. Existing solutions fall into two categories: in-process sampling schemes and post-processing reconstruction schemes. This report summarizes recent trends in post-processing reconstruction. Recent years have seen increasing attention and significant progress in denoising MC rendering with deep learning, by training neural networks to reconstruct denoised rendering results from sparse MC samples. Many of these techniques show promising results in real-world applications, and this report aims to provide an assessment of these approaches for practitioners and researchers.
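To make the noise-versus-sample-count trade-off concrete, here is a minimal sketch (not from the survey) of a one-dimensional MC estimator, using a toy integrand as a stand-in for a pixel's radiance integral:

```python
import random

def mc_estimate(f, n_samples, rng):
    """Unbiased Monte Carlo estimate of the integral of f over [0, 1]."""
    return sum(f(rng.random()) for _ in range(n_samples)) / n_samples

def empirical_variance(f, n_samples, n_trials=2000, seed=0):
    """Variance of the estimator across repeated renders of one 'pixel'."""
    rng = random.Random(seed)
    estimates = [mc_estimate(f, n_samples, rng) for _ in range(n_trials)]
    mean = sum(estimates) / n_trials
    return sum((e - mean) ** 2 for e in estimates) / n_trials

f = lambda x: x * x  # toy stand-in for a pixel's radiance integrand
var_4 = empirical_variance(f, 4)    # 4 samples per pixel: noisy
var_64 = empirical_variance(f, 64)  # 64 samples per pixel: much cleaner
```

The estimator variance shrinks roughly as 1/N, which is exactly why low sample counts leave visible noise and why post-processing denoisers try to reconstruct the clean image from sparse samples instead of paying for more of them.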

2014 ◽  
Vol 998-999 ◽  
pp. 806-813
Author(s):  
Jian Wang ◽  
Qing Xu

Realistic image synthesis is an important part of computer graphics. Monte Carlo light-simulation methods, such as Monte Carlo path tracing, can handle complex lighting computations for complex scenes. Unfortunately, if too few samples are taken per pixel, the generated images contain a great deal of random noise. Adaptive sampling is an attractive way to reduce image noise. This paper proposes a new GH-distance based adaptive sampling algorithm. Experimental results show that the method outperforms similar existing approaches.


2019 ◽  
Vol 5 (1) ◽  
pp. 223-226
Author(s):  
Max-Heinrich Laves ◽  
Sontje Ihler ◽  
Tobias Ortmaier ◽  
Lüder A. Kahrs

Abstract: In this work, we discuss epistemic uncertainty estimation obtained by Bayesian inference in diagnostic classifiers and show that prediction uncertainty correlates highly with goodness of prediction. We train the ResNet-18 image classifier on a dataset of 84,484 optical coherence tomography scans showing four different retinal conditions. Dropout is added before every building block of ResNet, creating an approximation to a Bayesian classifier. Monte Carlo sampling with dropout is applied at test time for uncertainty estimation: multiple forward passes are performed to obtain a distribution over the class labels, and the variance and entropy of that distribution are used as uncertainty metrics. Our results show a strong correlation of ρ = 0.99 between prediction uncertainty and prediction error. Mean uncertainty of incorrectly diagnosed cases was significantly higher than that of correctly diagnosed cases. Modeling prediction uncertainty in computer-aided diagnosis with deep learning yields more reliable results and is therefore expected to increase patient safety. This will help transfer such systems into clinical routine and increase the acceptance of machine learning in diagnosis from the standpoint of physicians and patients.
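The uncertainty metrics described here can be sketched independently of any particular network. The code below (assumed shapes, not the authors' implementation) averages T stochastic forward passes and computes predictive entropy and per-class variance; two toy stochastic classifiers stand in for a dropout-enabled model:

```python
import math
import random

def mc_dropout_uncertainty(forward_pass, x, t=20):
    """forward_pass(x) returns softmax probabilities with dropout active;
    call it T times and derive uncertainty from the resulting distribution."""
    probs = [forward_pass(x) for _ in range(t)]
    n_classes = len(probs[0])
    mean = [sum(p[c] for p in probs) / t for c in range(n_classes)]
    entropy = -sum(m * math.log(m) for m in mean if m > 0)
    variance = [sum((p[c] - mean[c]) ** 2 for p in probs) / t
                for c in range(n_classes)]
    return mean, entropy, variance

# Toy stochastic classifiers: one confident, one uncertain.
rng = random.Random(0)
def confident(_):
    p = min(0.999, max(0.95, rng.gauss(0.98, 0.01)))
    return [p, 1 - p]
def uncertain(_):
    p = min(0.9, max(0.1, rng.gauss(0.5, 0.15)))
    return [p, 1 - p]

_, h_conf, _ = mc_dropout_uncertainty(confident, None)
_, h_unc, _ = mc_dropout_uncertainty(uncertain, None)
# Predictive entropy is higher for the uncertain classifier, mirroring the
# reported correlation between uncertainty and prediction error.
```

In practice the forward pass would be a dropout-enabled network evaluated in training mode at test time; everything downstream of the T probability vectors is the same.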


2009 ◽  
Author(s):  
Changbo Wang ◽  
Zhuopeng Zhang ◽  
Hongyan Quan ◽  
Zhangye Wang ◽  
Lin Wei

2021 ◽  
Vol 54 (6) ◽  
pp. 1-35
Author(s):  
Ninareh Mehrabi ◽  
Fred Morstatter ◽  
Nripsuta Saxena ◽  
Kristina Lerman ◽  
Aram Galstyan

With the widespread use of artificial intelligence (AI) systems and applications in our everyday lives, accounting for fairness has gained significant importance in the design and engineering of such systems. AI systems can be used in many sensitive environments to make important and life-changing decisions; thus, it is crucial to ensure that these decisions do not reflect discriminatory behavior toward certain groups or populations. More recently, work in traditional machine learning and deep learning has begun to address such challenges in different subdomains. With the commercialization of these systems, researchers are becoming more aware of the biases that these applications can contain and are attempting to address them. In this survey, we investigate different real-world applications that have exhibited biases in various ways, and we list different sources of bias that can affect AI applications. We then create a taxonomy of the fairness definitions that machine learning researchers have proposed to avoid existing bias in AI systems. In addition, we examine different domains and subdomains in AI, showing what researchers have observed with regard to unfair outcomes in state-of-the-art methods and how they have tried to address them. Many future directions and solutions remain for mitigating the problem of bias in AI systems. We hope that this survey will motivate researchers to tackle these issues in the near future by building on existing work in their respective fields.
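As a concrete instance of one fairness definition commonly covered in such taxonomies, the sketch below (hypothetical data, not from the survey) checks demographic parity: a classifier's positive-prediction rate should not depend on group membership.

```python
def positive_rate(predictions, groups, group):
    """Fraction of positive predictions (1s) within one group."""
    selected = [p for p, g in zip(predictions, groups) if g == group]
    return sum(selected) / len(selected)

# Hypothetical binary predictions and group labels.
preds  = [1, 0, 1, 1, 0, 1, 0, 0]
groups = ['a', 'a', 'a', 'a', 'b', 'b', 'b', 'b']

# Demographic parity gap: difference in positive rates between groups.
gap = abs(positive_rate(preds, groups, 'a') - positive_rate(preds, groups, 'b'))
# Here group 'a' receives positives at 0.75 vs 0.25 for 'b', a gap of 0.5,
# which violates demographic parity.
```

Other definitions in the taxonomy (equalized odds, calibration, individual fairness) condition on outcomes or features as well, and the survey's point is that they can conflict with one another.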


1997 ◽  
Vol 23 (7) ◽  
pp. 845-859 ◽  
Author(s):  
Alan Heirich ◽  
James Arvo

Author(s):  
Mateo Villa ◽  
Julien Bert ◽  
Antoine Valeri ◽  
Ulrike Schick ◽  
Dimitris Visvikis
