Bias Variance
Recently Published Documents

TOTAL DOCUMENTS: 190 (FIVE YEARS: 57)
H-INDEX: 21 (FIVE YEARS: 2)

2022 ◽ Vol 59 (1) ◽ pp. 102747
Author(s): Peng Zhang ◽ Hui Gao ◽ Zeting Hu ◽ Meng Yang ◽ Dawei Song ◽ ...

Cell Reports ◽ 2021 ◽ Vol 37 (13) ◽ pp. 110185
Author(s): Dongjae Kim ◽ Jaeseung Jeong ◽ Sang Wan Lee

Author(s): Leah F. South ◽ Marina Riabiz ◽ Onur Teymur ◽ Chris J. Oates

Markov chain Monte Carlo is the engine of modern Bayesian statistics, being used to approximate the posterior and derived quantities of interest. Despite this, the issue of how the output from a Markov chain is postprocessed and reported is often overlooked. Convergence diagnostics can be used to control bias via burn-in removal, but these do not account for (common) situations where a limited computational budget engenders a bias-variance trade-off. The aim of this article is to review state-of-the-art techniques for postprocessing Markov chain output. Our review covers methods based on discrepancy minimization, which directly address the bias-variance trade-off, as well as general-purpose control variate methods for approximating expected quantities of interest. Expected final online publication date for the Annual Review of Statistics and Its Application, Volume 9 is March 2022. Please see http://www.annualreviews.org/page/journal/pubdates for revised estimates.
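
As a rough illustration of the two postprocessing steps this abstract mentions (burn-in removal to control bias, control variates to reduce variance), the sketch below runs a toy random-walk Metropolis chain targeting a standard normal and estimates E[X^2] with the control variate g(X) = X. It is a minimal NumPy illustration under these toy assumptions, not the discrepancy-minimization or control-variate methodology developed by the authors.

```python
import numpy as np

rng = np.random.default_rng(0)

def rw_metropolis(n_iter, step=1.0):
    """Random-walk Metropolis chain targeting a standard normal (toy example)."""
    x, chain = 0.0, np.empty(n_iter)
    for t in range(n_iter):
        prop = x + step * rng.normal()
        # Accept with probability min(1, pi(prop) / pi(x)) for pi = N(0, 1).
        if np.log(rng.uniform()) < 0.5 * (x**2 - prop**2):
            x = prop
        chain[t] = x
    return chain

chain = rw_metropolis(20_000)

# 1) Bias control: discard a burn-in prefix (here an arbitrary 10% of the chain).
samples = chain[len(chain) // 10:]

# 2) Variance control: estimate E[X^2] with the control variate g(X) = X,
#    whose expectation under the target is known to be zero.
f = samples**2
g = samples
beta = np.cov(f, g)[0, 1] / np.var(g)   # estimated optimal coefficient
print("plain estimate:          ", f.mean())
print("control-variate estimate:", (f - beta * g).mean())
```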


2021
Author(s): Xinjie Lan ◽ Bin Zhu ◽ Charles Boncelet ◽ Kenneth Barner

2021
Author(s): Georgii Novikov ◽ Maxim Panov ◽ Ivan Oseledets

Author(s): Xiao Zhang ◽ Haoyi Xiong ◽ Dongrui Wu

Over-parameterized deep neural networks (DNNs) with enough capacity to memorize random noise can still achieve excellent generalization performance, challenging the bias-variance trade-off of classical learning theory. Recent studies have claimed that DNNs first learn simple patterns and then memorize noise; other works have shown that DNNs exhibit a spectral bias, learning target functions from low to high frequencies during training. However, we show that this monotonicity of the learning bias does not always hold: under the experimental setup of deep double descent, the high-frequency components of DNNs diminish in the late stage of training, leading to the second descent of the test error. Moreover, we find that the spectrum of a DNN can be used to indicate the second descent of the test error, even though it is computed from the training set only.
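
To make the spectral quantity concrete, here is a minimal sketch of how one might track the fraction of a model's spectral energy above a cutoff frequency during training. The 1-D regression setting and the high_frequency_ratio helper are illustrative assumptions, not the measurement procedure used in the paper.

```python
import numpy as np

def high_frequency_ratio(predict, x_grid, cutoff=10):
    """Fraction of the spectral energy of `predict` over `x_grid` that lies
    above `cutoff` cycles -- a crude proxy for the 'high-frequency components'
    whose late-training decay the abstract describes."""
    y = predict(x_grid)
    spectrum = np.abs(np.fft.rfft(y - y.mean()))
    return spectrum[cutoff:].sum() / spectrum.sum()

# Hypothetical usage: evaluate the ratio on (training-set) inputs after each
# epoch and watch whether it rises and later falls alongside the second descent.
x_grid = np.linspace(0.0, 1.0, 1024)
toy_model = lambda x: np.sin(2 * np.pi * x) + 0.05 * np.sin(60 * np.pi * x)
print(high_frequency_ratio(toy_model, x_grid))
```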


2021 ◽ Vol 9
Author(s): Alexandre Dunant

This paper presents a generalization of the bias-variance tradeoff applied to the recent trend toward natural multi-hazard risk assessment. The bias-variance dilemma, a well-known concept in machine learning theory, is presented in the context of natural hazard modeling. It is then argued that this statistical concept provides an analytical framework for directing efforts toward systemic risk assessment using multi-hazard catastrophe modeling and for informing future mitigation practices.
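
For readers unfamiliar with the dilemma this abstract invokes, the following Monte Carlo sketch shows how squared bias and variance of a fitted model trade off as model complexity grows. It is a generic NumPy illustration with a toy sine curve standing in for a hazard-response relation, not anything taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Monte Carlo illustration of the decomposition
#   E[(f_hat(x0) - f(x0))^2] = bias^2 + variance (+ irreducible noise).
f = lambda x: np.sin(2 * np.pi * x)        # toy "true" curve (hypothetical stand-in)
x0, sigma, n, trials = 0.3, 0.3, 30, 2000  # query point, noise level, sample size, repeats

for degree in (1, 3, 9):                   # increasing model complexity
    preds = []
    for _ in range(trials):
        x = rng.uniform(0.0, 1.0, n)
        y = f(x) + sigma * rng.normal(size=n)
        coef = np.polyfit(x, y, degree)    # refit on each resampled dataset
        preds.append(np.polyval(coef, x0))
    preds = np.asarray(preds)
    bias2 = (preds.mean() - f(x0)) ** 2
    variance = preds.var()
    print(f"degree {degree}: bias^2 = {bias2:.4f}, variance = {variance:.4f}")
```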

