Cross-validation of bias-corrected climate simulations is misleading
Abstract. We demonstrate both analytically and with a modelling example that cross-validation of free-running, bias-corrected climate change simulations against observations is misleading. The underlying reasoning is as follows: a cross-validation can in principle have two outcomes. A negative outcome (in the sense of not rejecting a null hypothesis) occurs if the residual bias in the validation period after bias correction vanishes; a positive outcome occurs if the residual bias in the validation period after bias correction is large. It can be shown analytically that the residual bias depends solely on the difference between the simulated and observed changes between the calibration and validation periods. These changes, however, depend mainly on the realisations of internal variability in the observations and in the climate model. As a consequence, the outcome of a cross-validation is likewise dominated by internal variability and does not allow for any conclusion about the sensibility of a bias correction. In particular, a sensible bias correction may be rejected (false positive), and a non-sensible bias correction may be accepted (false negative). We therefore propose to avoid cross-validation when evaluating bias-corrected, free-running climate change simulations against observations. Instead, one should evaluate temporal, spatial and process-based aspects.
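The analytical claim in the abstract can be illustrated with a minimal numerical sketch. Under the assumption of a simple additive (mean) bias correction and internal variability idealised as Gaussian noise (all variable names and parameter values here are hypothetical, not taken from the paper), the residual bias in the validation period reduces exactly to the difference between the simulated and observed mean changes, even for a model with no systematic bias:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 30 years per period, a forced warming of 0.5 between
# periods, and internal variability as independent Gaussian noise. The
# free-running model has no systematic bias but a different realisation
# of internal variability than the observations.
n = 30
obs_cal = rng.normal(0.0, 1.0, n)  # observations, calibration period
obs_val = rng.normal(0.5, 1.0, n)  # observations, validation period
mod_cal = rng.normal(0.0, 1.0, n)  # model, calibration period
mod_val = rng.normal(0.5, 1.0, n)  # model, validation period

# Additive bias correction calibrated on the calibration period
correction = obs_cal.mean() - mod_cal.mean()

# Residual bias of the corrected model in the validation period
residual_bias = (mod_val + correction).mean() - obs_val.mean()

# Analytical identity: residual bias = simulated change - observed change
delta_mod = mod_val.mean() - mod_cal.mean()
delta_obs = obs_val.mean() - obs_cal.mean()
assert np.isclose(residual_bias, delta_mod - delta_obs)
```

Since both changes fluctuate from realisation to realisation of the noise, the residual bias (and hence the cross-validation verdict) is driven by internal variability rather than by any deficiency of the correction itself.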