Climate field reconstruction uncertainty arising from multivariate and nonlinear properties of predictors

2014 ◽  
Vol 41 (24) ◽  
pp. 9127-9134 ◽  
Author(s):  
M. N. Evans ◽  
J. E. Smerdon ◽  
A. Kaplan ◽  
S. E. Tolwinski-Ward ◽  
J. F. González-Rouco

2014 ◽  
Vol 10 (1) ◽  
pp. 1-19 ◽  
Author(s):  
J. Wang ◽  
J. Emile-Geay ◽  
D. Guillot ◽  
J. E. Smerdon ◽  
B. Rajaratnam

Abstract. Pseudoproxy experiments (PPEs) have become an important framework for evaluating paleoclimate reconstruction methods. Most existing PPE studies assume constant proxy availability through time and uniform proxy quality across the pseudoproxy network. Real multiproxy networks are, however, marked by pronounced disparities in proxy quality, and a steep decline in proxy availability back in time, either of which may have large effects on reconstruction skill. A suite of PPEs constructed from a millennium-length general circulation model (GCM) simulation is thus designed to mimic these various real-world characteristics. The new pseudoproxy network is used to evaluate four climate field reconstruction (CFR) techniques: truncated total least squares embedded within the regularized EM (expectation-maximization) algorithm (RegEM-TTLS), the Mann et al. (2009) implementation of RegEM-TTLS (M09), canonical correlation analysis (CCA), and Gaussian graphical models embedded within RegEM (GraphEM). Each method's risk properties are also assessed via a 100-member noise ensemble. Contrary to expectation, it is found that reconstruction skill does not vary monotonically with proxy availability, but is also a function of the type and amplitude of climate variability (forced events vs. internal variability). The use of realistic spatiotemporal pseudoproxy characteristics also exposes large inter-method differences. Despite comparable fidelity in reconstructing the global mean temperature, spatial skill varies considerably between CFR techniques. Both GraphEM and CCA efficiently exploit teleconnections, and produce consistent reconstructions across the ensemble. RegEM-TTLS and M09 appear advantageous for reconstructions on highly noisy data, but are subject to larger stochastic variations across different realizations of pseudoproxy noise. Results collectively highlight the importance of designing realistic pseudoproxy networks and implementing multiple noise realizations of PPEs. The results also underscore the difficulty in finding the proper bias-variance tradeoff for jointly optimizing the spatial skill of CFRs and the fidelity of the global mean reconstructions.
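As a rough illustration of how pseudoproxy networks of this kind are typically constructed, the sketch below perturbs model temperatures at proxy sites with Gaussian white noise at prescribed signal-to-noise ratios (SNRs) and censors each record before its nominal start year to mimic the decline in proxy availability back in time. The network size, SNR range, start-year distribution, time axis, and the function name make_pseudoproxies are illustrative assumptions, not the settings used in the paper, and the toy array standing in for GCM output is random data.

```python
import numpy as np

rng = np.random.default_rng(42)

def make_pseudoproxies(temp, snr, start_year, years):
    """Turn model temperatures at proxy sites into noisy, temporally
    censored pseudoproxies.

    temp       : (n_years, n_sites) simulated temperatures at proxy sites
    snr        : (n_sites,) target signal-to-noise ratio per site
    start_year : (n_sites,) first year with data at each site
    years      : (n_years,) calendar years of the simulation
    """
    signal_std = temp.std(axis=0)                 # per-site signal amplitude
    noise_std = signal_std / snr                  # white-noise level giving the target SNR
    noise = rng.normal(0.0, noise_std, size=temp.shape)
    pseudo = temp + noise
    # Mimic the decline in proxy availability back in time.
    missing = years[:, None] < start_year[None, :]
    pseudo[missing] = np.nan
    return pseudo

# Illustrative network: 100 sites, AD 850-1995, SNR between 0.25 and 1.0,
# with most records starting only in the last few centuries.
years = np.arange(850, 1996)
temp = (rng.standard_normal((years.size, 100)).cumsum(axis=0) * 0.01
        + rng.standard_normal((years.size, 100)))   # toy stand-in for GCM output
snr = rng.uniform(0.25, 1.0, size=100)
start_year = rng.choice([850, 1200, 1500, 1750, 1850], size=100,
                        p=[0.1, 0.1, 0.2, 0.3, 0.3])
pseudoproxies = make_pseudoproxies(temp, snr, start_year, years)
```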


2007 ◽  
Vol 112 (D12) ◽  
Author(s):  
Michael E. Mann ◽  
Scott Rutherford ◽  
Eugene Wahl ◽  
Caspar Ammann

2003 ◽  
Vol 16 (3) ◽  
pp. 462-479 ◽  
Author(s):  
S. Rutherford ◽  
M. E. Mann ◽  
T. L. Delworth ◽  
R. J. Stouffer

2013 ◽  
Vol 9 (3) ◽  
pp. 3015-3060 ◽  
Author(s):  
J. Wang ◽  
J. Emile-Geay ◽  
D. Guillot ◽  
J. E. Smerdon ◽  
B. Rajaratnam

Abstract. Pseudoproxy experiments (PPEs) have become an essential framework for evaluating paleoclimate reconstruction methods. Most existing PPE studies assume constant proxy availability through time and uniform proxy quality across the pseudoproxy network. Real multi-proxy networks are, however, marked by pronounced disparities in proxy quality, and a steep decline in proxy availability back in time, either of which may have large effects on reconstruction skill. Additionally, an investigation of a real-world global multi-proxy network suggests that proxies are not exclusively indicators of local climate; rather, many are indicative of large-scale teleconnections. A suite of PPEs constructed from a millennium-length general circulation model simulation is thus designed to mimic these various real-world characteristics. The new pseudoproxy network is used to evaluate four climate field reconstruction (CFR) techniques: truncated total least squares embedded within the regularized EM algorithm (RegEM-TTLS), the Mann et al. (2009) implementation of RegEM-TTLS (M09), canonical correlation analysis (CCA), and Gaussian graphical models embedded within RegEM (GraphEM). Each method's risk properties are also assessed via a 100-member noise ensemble. Contrary to expectation, it is found that reconstruction skill does not vary monotonically with proxy availability, but rather is a function of the type of climate variability (forced events vs. internal variability). The use of realistic spatiotemporal pseudoproxy characteristics also exposes large inter-method differences. Despite comparable fidelity in reconstructing the global mean temperature, spatial skill varies considerably between CFR techniques. Both GraphEM and CCA efficiently exploit teleconnections, and produce consistent reconstructions across the ensemble. RegEM-TTLS and M09 appear advantageous for reconstructions on highly noisy data, but are subject to larger stochastic variations across different realizations of pseudoproxy noise. Results collectively highlight the importance of designing realistic pseudoproxy networks and implementing multiple noise realizations of PPEs. The results also underscore the difficulty in finding the proper bias-variance tradeoff for jointly optimizing the spatial skill of CFRs and the fidelity of the global mean reconstructions.
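Of the four CFR techniques compared above, CCA is the most compact to write down. Below is a minimal sketch of a CCA reconstruction step, assuming complete (already infilled) proxy and temperature matrices over the calibration window; the function name cca_cfr and the truncation parameters d_t, d_p, and d_cca are placeholders, and this is not the implementation evaluated in the paper.

```python
import numpy as np

def cca_cfr(temp_cal, proxy_cal, proxy_all, d_t=10, d_p=10, d_cca=5):
    """Canonical correlation analysis (CCA) climate field reconstruction.

    temp_cal  : (n_cal, n_grid)  instrumental-era temperature field
    proxy_cal : (n_cal, n_prox)  proxy matrix over the calibration period
    proxy_all : (n_all, n_prox)  full proxy matrix to be mapped to temperature
    d_t, d_p  : number of retained temperature / proxy EOFs (truncation)
    d_cca     : number of retained canonical modes
    """
    # Center both data sets with calibration-period means.
    t_mean, p_mean = temp_cal.mean(0), proxy_cal.mean(0)
    T = temp_cal - t_mean
    P = proxy_cal - p_mean

    # Truncated EOF (PCA) decompositions regularize the problem.
    Ut, st, Vt = np.linalg.svd(T, full_matrices=False)
    Up, sp, Vp = np.linalg.svd(P, full_matrices=False)
    Ut, st, Vt = Ut[:, :d_t], st[:d_t], Vt[:d_t]
    Up, sp, Vp = Up[:, :d_p], sp[:d_p], Vp[:d_p]

    # CCA between the two sets of whitened PC time series: the singular
    # values of Ut'Up are the canonical correlations.
    Uc, sc, Vc = np.linalg.svd(Ut.T @ Up, full_matrices=False)
    Uc, sc, Vc = Uc[:, :d_cca], sc[:d_cca], Vc[:d_cca]

    # Linear operator: centered proxies -> proxy PC scores -> canonical
    # modes (scaled by their correlations) -> temperature PC scores ->
    # centered temperature field.
    B = (Vp.T / sp) @ Vc.T @ np.diag(sc) @ Uc.T @ (st[:, None] * Vt)

    return (proxy_all - p_mean) @ B + t_mean
```

In practice the proxies would first be standardized, the truncation parameters chosen by cross-validation within the calibration interval, and missing values handled by the infilling scheme of the particular method; all of that is omitted here for brevity.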


2018 ◽  
Author(s):  
Tine Nilsen ◽  
Johannes P. Werner ◽  
Dmitry V. Divine

Abstract. The pseudoproxy experiments in this study use the Bayesian hierarchical model BARCAST (Bayesian Algorithm for Reconstructing Climate Anomalies in Space and Time), a climate field reconstruction (CFR) technique, together with idealized input data. Ensembles of targets are generated from fields of long-range memory stochastic processes using a novel approach. The range of experiment setups includes input data with different levels of persistence and proxy noise, but without any form of external forcing. The input data are thereby a simplistic alternative to standard target data extracted from general circulation model (GCM) simulations. Ensemble-based temperature reconstructions are generated, representing the European landmass over a millennial time period. Hypothesis testing in the spectral domain is then used to investigate whether the field and spatial mean reconstructions are consistent with either the fractional Gaussian noise (fGn) null hypothesis used to generate the target data, or the autoregressive model of order one (AR(1)) null hypothesis, which is the temperature model assumed by this reconstruction technique. The study reveals that the resulting field and spatial mean reconstructions are consistent with the fGn hypothesis for most of the parameter configurations. There are local differences in reconstructed scaling characteristics between individual grid cells, and generally better agreement with the fGn model for the spatial mean reconstruction than at individual locations. The discrepancy from an fGn is most evident in the high-frequency part of the reconstructed signal, while the long-range memory is better preserved at frequencies corresponding to decadal time scales and longer. Selected experiment setups were found to give reconstructions consistent with the AR(1) model. Reconstruction skill is measured on an ensemble-member basis using selected validation metrics. Despite the mismatch between the BARCAST temporal covariance model and the model of the target, the ensemble mean is in general found to be consistent with the target data, whereas the estimated confidence intervals are more strongly affected by this discrepancy. Our results show that using target data with a spatiotemporal covariance structure different from the one BARCAST assumes can bias both the CFR and its associated confidence intervals.
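The spectral-domain consistency check described above can be illustrated by estimating a scaling exponent β from a log-log fit to the periodogram of a reconstructed series: an fGn with Hurst exponent H has S(f) ∝ f^(−β) with β = 2H − 1 across all frequencies, whereas an AR(1) spectrum flattens below its characteristic frequency. The sketch below is only this β estimate applied to a toy AR(1) series, not the ensemble-based test used in the study; the function name spectral_beta, the frequency bands, and the AR(1) coefficient are illustrative assumptions.

```python
import numpy as np

def spectral_beta(x, dt=1.0, fmin=None, fmax=None):
    """Estimate the spectral scaling exponent beta from a log-log fit
    to the raw periodogram, assuming S(f) ~ f**(-beta) over [fmin, fmax]."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    freqs = np.fft.rfftfreq(x.size, d=dt)[1:]           # drop the zero frequency
    power = np.abs(np.fft.rfft(x))[1:] ** 2 / x.size    # raw periodogram
    if fmin is not None or fmax is not None:
        keep = (freqs >= (fmin or freqs[0])) & (freqs <= (fmax or freqs[-1]))
        freqs, power = freqs[keep], power[keep]
    slope, _ = np.polyfit(np.log(freqs), np.log(power), 1)
    return -slope                                        # beta > 0 for persistent series

# A persistent fGn should give roughly the same beta in every frequency band,
# whereas an AR(1) process flattens (beta -> 0) at low frequencies and steepens
# above its characteristic frequency.
rng = np.random.default_rng(0)
ar1 = np.empty(4096)
ar1[0] = rng.standard_normal()
for t in range(1, ar1.size):
    ar1[t] = 0.7 * ar1[t - 1] + rng.standard_normal()

print("AR(1), f in [0.01, 0.1]:", spectral_beta(ar1, fmin=0.01, fmax=0.1))
print("AR(1), all frequencies :", spectral_beta(ar1))
```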

