Evaluating climate field reconstruction techniques using improved emulations of real-world conditions

2013 ◽  
Vol 9 (3) ◽  
pp. 3015-3060 ◽  
Author(s):  
J. Wang ◽  
J. Emile-Geay ◽  
D. Guillot ◽  
J. E. Smerdon ◽  
B. Rajaratnam

Abstract. Pseudoproxy experiments (PPEs) have become an essential framework for evaluating paleoclimate reconstruction methods. Most existing PPE studies assume constant proxy availability through time and uniform proxy quality across the pseudoproxy network. Real multi-proxy networks are, however, marked by pronounced disparities in proxy quality and a steep decline in proxy availability back in time, either of which may have large effects on reconstruction skill. Additionally, an investigation of a real-world global multi-proxy network suggests that proxies are not exclusively indicators of local climate; rather, many are indicative of large-scale teleconnections. A suite of PPEs constructed from a millennium-length general circulation model simulation is thus designed to mimic these various real-world characteristics. The new pseudoproxy network is used to evaluate four climate field reconstruction (CFR) techniques: truncated total least squares embedded within the regularized EM algorithm (RegEM-TTLS), the Mann et al. (2009) implementation of RegEM-TTLS (M09), canonical correlation analysis (CCA), and Gaussian graphical models embedded within RegEM (GraphEM). Each method's risk properties are also assessed via a 100-member noise ensemble. Contrary to expectation, it is found that reconstruction skill does not vary monotonically with proxy availability, but rather is a function of the type of climate variability (forced events vs. internal variability). The use of realistic spatiotemporal pseudoproxy characteristics also exposes large inter-method differences. Despite comparable fidelity in reconstructing the global mean temperature, spatial skill varies considerably between CFR techniques. Both GraphEM and CCA efficiently exploit teleconnections and produce consistent reconstructions across the ensemble.
RegEM-TTLS and M09 appear advantageous for reconstructions on highly noisy data, but are subject to larger stochastic variations across different realizations of pseudoproxy noise. Results collectively highlight the importance of designing realistic pseudoproxy networks and implementing multiple noise realizations of PPEs. The results also underscore the difficulty in finding the proper bias-variance tradeoff for jointly optimizing the spatial skill of CFRs and the fidelity of the global mean reconstructions.
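The core pseudoproxy step, degrading model temperatures with noise at a prescribed signal-to-noise ratio, can be sketched as follows (a minimal illustration; the SNR value, network size, and synthetic "model field" are stand-ins rather than the paper's actual setup):

```python
import numpy as np

def make_pseudoproxies(true_field, snr=0.5, seed=0):
    """Degrade model temperature series into pseudoproxies by adding
    white noise scaled to a target signal-to-noise ratio (ratio of
    standard deviations, computed per site).

    true_field : (n_years, n_sites) array of model temperatures.
    Returns an array of the same shape with noise added at each site.
    """
    rng = np.random.default_rng(seed)
    signal_std = true_field.std(axis=0)            # per-site signal amplitude
    noise = rng.standard_normal(true_field.shape) * (signal_std / snr)
    return true_field + noise

# Illustrative use: a 1000-year, 100-site stand-in for a model field.
rng = np.random.default_rng(1)
field = rng.standard_normal((1000, 100)).cumsum(axis=0) * 0.01
proxies = make_pseudoproxies(field, snr=0.5)
```

Repeating this with different seeds gives the kind of multi-member noise ensemble the abstract describes, over which a method's stochastic variation can be assessed.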

2014 ◽  
Vol 10 (1) ◽  
pp. 1-19 ◽  
Author(s):  
J. Wang ◽  
J. Emile-Geay ◽  
D. Guillot ◽  
J. E. Smerdon ◽  
B. Rajaratnam

Abstract. Pseudoproxy experiments (PPEs) have become an important framework for evaluating paleoclimate reconstruction methods. Most existing PPE studies assume constant proxy availability through time and uniform proxy quality across the pseudoproxy network. Real multiproxy networks are, however, marked by pronounced disparities in proxy quality and a steep decline in proxy availability back in time, either of which may have large effects on reconstruction skill. A suite of PPEs constructed from a millennium-length general circulation model (GCM) simulation is thus designed to mimic these various real-world characteristics. The new pseudoproxy network is used to evaluate four climate field reconstruction (CFR) techniques: truncated total least squares embedded within the regularized EM (expectation-maximization) algorithm (RegEM-TTLS), the Mann et al. (2009) implementation of RegEM-TTLS (M09), canonical correlation analysis (CCA), and Gaussian graphical models embedded within RegEM (GraphEM). Each method's risk properties are also assessed via a 100-member noise ensemble. Contrary to expectation, it is found that reconstruction skill does not vary monotonically with proxy availability, but is also a function of the type and amplitude of climate variability (forced events vs. internal variability). The use of realistic spatiotemporal pseudoproxy characteristics also exposes large inter-method differences. Despite comparable fidelity in reconstructing the global mean temperature, spatial skill varies considerably between CFR techniques. Both GraphEM and CCA efficiently exploit teleconnections and produce consistent reconstructions across the ensemble. RegEM-TTLS and M09 appear advantageous for reconstructions on highly noisy data, but are subject to larger stochastic variations across different realizations of pseudoproxy noise.
Results collectively highlight the importance of designing realistic pseudoproxy networks and implementing multiple noise realizations of PPEs. The results also underscore the difficulty in finding the proper bias-variance tradeoff for jointly optimizing the spatial skill of CFRs and the fidelity of the global mean reconstructions.


2019 ◽  
Vol 8 (1) ◽  
Author(s):  
Khairunnisa Khairunnisa ◽  
Rizka Pitri ◽  
Victor P Butar-Butar ◽  
Agus M Soleh

This research used CFSRv2 data as the output of a general circulation model. CFSRv2 involves many highly correlated variables, so this research applied principal component regression (PCR) and partial least squares (PLS) to address the multicollinearity in the CFSRv2 data. The aim was to determine the better of the PCR and PLS models for estimating rainfall at the Bandung geophysical station, Bogor climatology station, Citeko meteorological station, and Jatiwangi meteorological station, by comparing RMSEP and correlation values. Domain sizes of 3×3, 4×4, 5×5, 6×6, 7×7, 8×8, 9×9, and 11×11 grid cells were used, located between 4° S–9° S and 105° E–110° E with a grid size of 0.5°×0.5°. The PLS model was the better model for statistical downscaling in this research, as it obtained lower RMSEP values and higher correlation values than the PCR model. The best domain and RMSEP value for the Bandung geophysical station, Bogor climatology station, Citeko meteorological station, and Jatiwangi meteorological station are 9×9 with 100.06, 6×6 with 194.3, 8×8 with 117.6, and 6×6 with 108.2, respectively.


2016 ◽  
Vol 12 (5) ◽  
pp. 1181-1198 ◽  
Author(s):  
Daniel J. Lunt ◽  
Alex Farnsworth ◽  
Claire Loptson ◽  
Gavin L. Foster ◽  
Paul Markwick ◽  
...  

Abstract. During the period from approximately 150 to 35 million years ago, the Cretaceous–Paleocene–Eocene (CPE), the Earth was in a “greenhouse” state with little or no ice at either pole. It was also a period of considerable global change, from the warmest periods of the mid-Cretaceous, to the threshold of icehouse conditions at the end of the Eocene. However, the relative contribution of palaeogeographic change, solar change, and carbon cycle change to these climatic variations is unknown. Here, making use of recent advances in computing power, and a set of unique palaeogeographic maps, we carry out an ensemble of 19 General Circulation Model simulations covering this period, one simulation per stratigraphic stage. By maintaining atmospheric CO2 concentration constant across the simulations, we are able to identify the contribution from palaeogeographic and solar forcing to global change across the CPE, and explore the underlying mechanisms. We find that global mean surface temperature is remarkably constant across the simulations, resulting from a cancellation of opposing trends from solar and palaeogeographic change. However, there are significant modelled variations on a regional scale. The stratigraphic stage–stage transitions which exhibit greatest climatic change are associated with transitions in the mode of ocean circulation, themselves often associated with changes in ocean gateways, and amplified by feedbacks related to emissivity and planetary albedo. We also find some control on global mean temperature from continental area and global mean orography. Our results have important implications for the interpretation of single-site palaeo proxy records. In particular, our results allow the non-CO2 (i.e. palaeogeographic and solar constant) components of proxy records to be removed, leaving a more global component associated with carbon cycle change. 
This “adjustment factor” is used to adjust sea surface temperatures, as the deep ocean is not fully equilibrated in the model. The adjustment factor is illustrated for seven key sites in the CPE, and applied to proxy data from Falkland Plateau, and we provide data so that similar adjustments can be made to any site and for any time period within the CPE. Ultimately, this will enable isolation of the CO2-forced climate signal to be extracted from multiple proxy records from around the globe, allowing an evaluation of the regional signals and extent of polar amplification in response to CO2 changes during the CPE. Finally, regions where the adjustment factor is constant throughout the CPE could indicate places where future proxies could be targeted in order to reconstruct the purest CO2-induced temperature change, where the complicating contributions of other processes are minimised. Therefore, combined with other considerations, this work could provide useful information for supporting targets for drilling localities and outcrop studies.


2007 ◽  
Vol 112 (D12) ◽  
Author(s):  
Michael E. Mann ◽  
Scott Rutherford ◽  
Eugene Wahl ◽  
Caspar Ammann

2018 ◽  
Vol 35 (7) ◽  
pp. 1505-1519 ◽  
Author(s):  
Yu-Chiao Liang ◽  
Matthew R. Mazloff ◽  
Isabella Rosso ◽  
Shih-Wei Fang ◽  
Jin-Yi Yu

Abstract. The ability to construct nitrate maps in the Southern Ocean (SO) from sparse observations is important for marine biogeochemistry research, as it offers a geographical estimate of biological productivity. The goal of this study is to infer the skill of constructed SO nitrate maps under varying data sampling strategies. The mapping method uses multivariate empirical orthogonal functions (MEOFs) constructed from nitrate, salinity, and potential temperature (N-S-T) fields from a biogeochemical general circulation model simulation. Synthetic N-S-T datasets are created by sampling the modeled N-S-T fields in specific regions, determined either by random selection or by selecting regions over a certain threshold of nitrate temporal variance. The first 500 MEOF modes, determined by their capability to reconstruct the original N-S-T fields, are projected onto these synthetic N-S-T data to construct time-varying nitrate maps. Normalized root-mean-square errors (NRMSEs) are calculated between the constructed nitrate maps and the original modeled fields for the different sampling strategies. Sampling according to nitrate variance is shown to yield maps with lower NRMSEs than random sampling. A k-means cluster method that considers the combined N-S-T variances to identify key regions in which to insert data is most effective in reducing the mapping errors. These findings are further quantified by a series of mapping error analyses that also address the significance of data sampling density. The results provide a sampling framework to prioritize the deployment of biogeochemical Argo floats for constructing nitrate maps.
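The MEOF mapping idea can be sketched with plain NumPy (an illustrative toy with 5 modes rather than 500, random synthetic fields, and a random sampling mask; none of this reproduces the study's model output):

```python
import numpy as np

rng = np.random.default_rng(0)
n_time, n_grid = 200, 300

# Stand-in for stacked N-S-T anomaly fields (time x 3*gridpoints),
# built from a few shared spatial modes so the three variables covary.
modes = rng.standard_normal((5, 3 * n_grid))
amps = rng.standard_normal((n_time, 5))
fields = amps @ modes + 0.05 * rng.standard_normal((n_time, 3 * n_grid))

# Multivariate EOFs: SVD of the joint anomaly matrix.
U, s, Vt = np.linalg.svd(fields, full_matrices=False)
k = 5
eofs = Vt[:k]                       # leading joint spatial modes

# "Observe" only a sparse subset of points, solve for mode amplitudes
# by least squares, and reconstruct the full field from the modes.
idx = rng.choice(3 * n_grid, size=60, replace=False)
sampled = fields[:, idx]
coef, *_ = np.linalg.lstsq(eofs[:, idx].T, sampled.T, rcond=None)
recon = coef.T @ eofs

nrmse = np.sqrt(np.mean((recon - fields) ** 2)) / fields.std()
```

Comparing `nrmse` across different choices of `idx` (random versus variance-targeted) is the essence of the sampling-strategy comparison in the study.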


2018 ◽  
Vol 116 (37) ◽  
pp. 18251-18256 ◽  
Author(s):  
F. J. Beron-Vera ◽  
A. Hadjighasem ◽  
Q. Xia ◽  
M. J. Olascoaga ◽  
G. Haller

The emergence of coherent Lagrangian swirls (CLSs) among submesoscale motions in the ocean is illustrated. This is done by applying recent nonlinear dynamics tools for Lagrangian coherence detection on a surface flow realization produced by a data-assimilative submesoscale-permitting ocean general circulation model simulation of the Gulf of Mexico. Both mesoscale and submesoscale CLSs are extracted. These extractions prove the relevance of coherent Lagrangian eddies detected in satellite-altimetry–based geostrophic flow data for the arguably more realistic ageostrophic multiscale flow.


2019 ◽  
Author(s):  
Camille Risi ◽  
Joseph Galewsky ◽  
Gilles Reverdin ◽  
Florent Brient

Abstract. Understanding what controls the water vapor isotopic composition of the sub-cloud layer (SCL) over tropical oceans (δD0) is a first step towards understanding the water vapor isotopic composition everywhere in the troposphere. We propose an analytical model to predict δD0 as a function of sea surface conditions, humidity and temperature profiles, and the altitude from which the free-tropospheric air originates (zorig). To do so, we extend previous studies by (1) prescribing the shape of δD vertical profiles and (2) linking δD0 to zorig. The model relies on the hypotheses that δD profiles are steeper than mixing lines and that no clouds are precipitating. We show that δD0 does not depend on the intensity of entrainment, dampening hope that δD0 measurements could help constrain this long-sought quantity. Based on an isotope-enabled general circulation model simulation, we show that δD0 variations are mainly controlled by mid-tropospheric depletion and rain evaporation in ascending regions, and by sea surface temperature and zorig in subsiding regions. When the air mixing into the SCL originates from lower in altitude, it is moister and thus depletes the SCL more efficiently. Could δD0 measurements, in turn, help estimate zorig and thus discriminate between different mixing processes? Estimates accurate enough to be useful would be difficult to achieve in practice, requiring daily δD profiles and δD0 measurements with an accuracy of 0.1 ‰ and 0.4 ‰ in trade-wind cumulus and stratocumulus clouds, respectively.
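The two-endmember mixing bookkeeping that underlies statements about mixing lines in (q, δD) space can be written out directly (a standard identity; the humidity and δD values below are illustrative):

```python
def mix_deltaD(q1, d1, q2, d2, f):
    """deltaD (permil) of a mixture of two air parcels.

    q1, q2 : specific humidities of the parcels (kg/kg)
    d1, d2 : their deltaD values (permil)
    f      : mass fraction of parcel 1 in the mixture

    Isotope ratios mix linearly in water mass, so the mixture's deltaD
    is the humidity-weighted mean; plotted against q, the "mixing line"
    is a hyperbola in deltaD rather than a straight line.
    """
    qm = f * q1 + (1 - f) * q2
    return (f * q1 * d1 + (1 - f) * q2 * d2) / qm

# Moist sub-cloud air mixing with drier, isotopically depleted
# free-tropospheric air (illustrative values).
d_mix = mix_deltaD(q1=15e-3, d1=-70.0, q2=5e-3, d2=-200.0, f=0.5)
```

Because the weighting is by water mass, a moister entrained parcel carries more influence, which is the arithmetic behind the statement that air mixed in from lower altitudes, being moister, depletes the SCL more efficiently.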


1998 ◽  
Vol 11 (8) ◽  
pp. 1883-1905 ◽  
Author(s):  
O. P. Sharma ◽  
H. Le Treut ◽  
G. Sèze ◽  
L. Fairhead ◽  
R. Sadourny

Abstract. The sensitivity of the interannual variations of the summer monsoons to imposed cloudiness has been studied with a general circulation model using initial conditions prepared from the European Centre for Medium-Range Weather Forecasts analyses of 1 May 1987 and 1988. The cloud optical properties in this global model are calculated from prognostically computed cloud liquid water. The model successfully simulates the contrasting behavior of these two successive monsoons. However, when the optical properties of the observed clouds are specified in the model runs, the simulations show some degradation over India and its vicinity. The main cause of this degradation is the reduced land–sea temperature contrast resulting from the radiative effects of the observed clouds imposed in such simulations. It is argued that the high concentration of condensed water in clouds over the Indian land areas serves to limit heating of the land, thereby reducing the thermal contrast and giving rise to a weak Somali jet. A countermonsoon circulation is therefore simulated in the vector difference field of 850-hPa winds from the model runs with externally specified clouds. This countermonsoon circulation is associated with an equatorial heat source that is the response of the model to the radiative effects of the imposed clouds. Indeed, at least two clear points can be made: 1) the cloud–SST patterns together affect the interannual variability; and 2) with both clouds and SST imposed, the model simulation is less sensitive to initial conditions. Additionally, the study emphasizes the importance of dynamically consistent clouds developing in response to the dynamical, thermal, and moist state of the atmosphere during model integrations.


2011 ◽  
Vol 7 (1) ◽  
pp. 249-263 ◽  
Author(s):  
A. Voigt ◽  
D. S. Abbot ◽  
R. T. Pierrehumbert ◽  
J. Marotzke

Abstract. We study the initiation of a Marinoan Snowball Earth (~635 million years before present) with the state-of-the-art atmosphere-ocean general circulation model ECHAM5/MPI-OM. This is the most sophisticated model ever applied to Snowball initiation. A comparison with a pre-industrial control climate shows that the change of surface boundary conditions from present-day to Marinoan, including a shift of continents to low latitudes, induces a global-mean cooling of 4.6 K. Two thirds of this cooling can be attributed to increased planetary albedo, the remaining one third to a weaker greenhouse effect. The Marinoan Snowball Earth bifurcation point for pre-industrial atmospheric carbon dioxide is between 95.5 and 96% of the present-day total solar irradiance (TSI), whereas a previous study with the same model found that it was between 91 and 94% for present-day surface boundary conditions. A Snowball Earth for TSI set to its Marinoan value (94% of the present-day TSI) is prevented by doubling carbon dioxide with respect to its pre-industrial level. A zero-dimensional energy balance model is used to predict the Snowball Earth bifurcation point from only the equilibrium global-mean ocean potential temperature for present-day TSI. We do not find stable states with sea-ice cover above 55%, and land conditions are such that glaciers could not grow with sea-ice cover of 55%. Therefore, none of our simulations qualifies as a "slushball" solution. While uncertainties in important processes and parameters such as clouds and sea-ice albedo suggest that the Snowball Earth bifurcation point differs between climate models, our results contradict previous findings that Snowball Earth initiation would require much stronger forcings.
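A zero-dimensional energy balance model of the kind mentioned above can be sketched as follows (all parameter values, including the albedo ramp and the constant greenhouse term, are illustrative assumptions, not those of the ECHAM5/MPI-OM study; the sweep illustrates the ice-albedo bifurcation mechanism, not the Marinoan numbers):

```python
import numpy as np

SIGMA = 5.67e-8     # Stefan-Boltzmann constant (W m^-2 K^-4)
S0 = 1361.0         # present-day total solar irradiance (W m^-2)
GREENHOUSE = 150.0  # crude constant greenhouse forcing (W m^-2), illustrative

def albedo(T):
    """Ocean albedo when warm, ice albedo when cold, linear ramp between."""
    return float(np.clip(0.3 + 0.3 * (284.0 - T) / 10.0, 0.3, 0.6))

def equilibrium_T(tsi_fraction, T0, n_iter=2000):
    """Relax temperature toward radiative balance for a given TSI fraction."""
    T = T0
    for _ in range(n_iter):
        net = (tsi_fraction * S0 / 4.0 * (1.0 - albedo(T))
               + GREENHOUSE - SIGMA * T**4)
        T += 0.01 * net   # damped fixed-point iteration
    return T

# Sweep TSI downward, continuing from the previous state (hysteresis):
# the abrupt jump to a cold, high-albedo equilibrium marks the Snowball
# bifurcation point of this toy model.
temps = []
T = 288.0
for frac in np.arange(1.00, 0.80, -0.01):
    T = equilibrium_T(frac, T0=T)
    temps.append((round(float(frac), 2), T))
```

The ice-albedo feedback makes the intermediate (partially ice-covered) branch unstable here, which is the 0-D analogue of the GCM finding that no stable state exists above a certain sea-ice cover.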

