What could we learn about climate sensitivity from variability in the surface temperature record?

2020 ◽  
Author(s):  
James Douglas Annan ◽  
Julia Catherine Hargreaves ◽  
Thorsten Mauritsen ◽  
Bjorn Stevens

Abstract. We examine what can be learnt about climate sensitivity from variability in the surface air temperature record over the instrumental period, from around 1880 to the present. While many previous studies have used the trend in the time series to constrain equilibrium climate sensitivity, it has also been argued that temporal variability may be a powerful constraint. We explore this question in the context of a simple, widely used energy balance model of the climate system. We consider two recently proposed summary measures of variability and also show how the full information content can be optimally used in this idealised scenario. We find that the constraint provided by variability is inherently skewed, and its power is inversely related to the sensitivity itself, discriminating most strongly between low sensitivity values and weakening substantially for higher values. It is only when the sensitivity is very low that the variability can provide a tight constraint. Our investigations take the form of perfect model experiments, in which we make the optimistic assumption that the model is structurally perfect and all uncertainties (including the true parameter values and the nature of internal variability noise) are correctly characterised. Therefore the results might be interpreted as a best-case scenario for what we can learn from variability, rather than a realistic estimate of it. In these experiments, we find that for a moderate sensitivity of 2.5 °C, a 150-year time series of pure internal variability will typically support an estimate with a 5–95 % range of around 5 °C (e.g. 1.9–6.8 °C). Total variability including that due to the forced response, as observed in the detrended observational record, can provide a stronger constraint with an equivalent 5–95 % posterior range of around 4 °C (e.g. 1.7–5.6 °C), even when uncertainty in aerosol forcing is considered.
Using a statistical summary of variability based on autocorrelation and the magnitude of residuals after detrending proves somewhat less powerful as a constraint than the full time series in both situations. Our results support the analysis of variability as a potentially useful tool in helping to constrain equilibrium climate sensitivity, but suggest caution in the interpretation of precise results.
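The perfect-model setup described above can be illustrated with a minimal one-box stochastic energy balance model; this is only a sketch of the general approach (not the authors' code), and all parameter values below are illustrative assumptions.

```python
import numpy as np

def simulate_ebm(lam=1.3, C=8.0, sigma=0.6, f2x=3.7, n_years=150, seed=0):
    """Annual temperature anomalies from a one-box stochastic energy balance
    model, C dT/dt = -lam*T + noise (no external forcing, i.e. pure internal
    variability).  ECS = f2x / lam.  All parameter values are illustrative."""
    rng = np.random.default_rng(seed)
    T = np.zeros(n_years)
    for t in range(1, n_years):
        # Euler step: feedback damping plus stochastic heat-flux noise
        T[t] = T[t-1] + (-lam * T[t-1] + sigma * rng.standard_normal()) / C
    return T, f2x / lam

T, ecs = simulate_ebm()
print(f"ECS = {ecs:.2f} K; variability std = {T.std():.3f} K")
```

In a perfect-model experiment, many such 150-year series would be generated for a known `lam` and then inverted to see how tightly the variability alone constrains the ECS.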

2020 ◽  
Vol 11 (3) ◽  
pp. 709-719 ◽  
Author(s):  
James D. Annan ◽  
Julia C. Hargreaves ◽  
Thorsten Mauritsen ◽  
Bjorn Stevens

Abstract. We examine what can be learnt about climate sensitivity from variability in the surface air temperature record over the instrumental period, from around 1880 to the present. While many previous studies have used trends in observational time series to constrain equilibrium climate sensitivity, it has also been argued that temporal variability may be a powerful constraint. We explore this question in the context of a simple, widely used energy balance model of the climate system. We consider two recently proposed summary measures of variability and also show how the full information content can be optimally used in this idealised scenario. We find that the constraint provided by variability is inherently skewed, and its power is inversely related to the sensitivity itself, discriminating most strongly between low sensitivity values and weakening substantially for higher values. It is only when the sensitivity is very low that the variability can provide a tight constraint. Our investigations take the form of “perfect model” experiments, in which we make the optimistic assumption that the model is structurally perfect and all uncertainties (including the true parameter values and the nature of internal variability noise) are correctly characterised. Therefore the results might be interpreted as a best-case scenario for what we can learn from variability, rather than a realistic estimate of it. In these experiments, we find that for a moderate sensitivity of 2.5 °C, a 150-year time series of pure internal variability will typically support an estimate with a 5–95 % range of around 5 °C (e.g. 1.9–6.8 °C). Total variability including that due to the forced response, as inferred from the detrended observational record, can provide a stronger constraint with an equivalent 5–95 % posterior range of around 4 °C (e.g. 1.8–6.0 °C), even when uncertainty in aerosol forcing is considered.
Using a statistical summary of variability based on autocorrelation and the magnitude of residuals after detrending proves somewhat less powerful as a constraint than the full time series in both situations. Our results support the analysis of variability as a potentially useful tool in helping to constrain equilibrium climate sensitivity but suggest caution in the interpretation of precise results.
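The summary measure mentioned above, lag-1 autocorrelation plus the magnitude of residuals after linear detrending, can be sketched as follows; the AR(1) test series and its parameters are illustrative assumptions, not data from the study.

```python
import numpy as np

def variability_summary(T):
    """Return (lag-1 autocorrelation, residual std) after linear detrending."""
    t = np.arange(len(T))
    trend = np.polyval(np.polyfit(t, T, 1), t)
    r = T - trend
    acf1 = np.corrcoef(r[:-1], r[1:])[0, 1]
    return acf1, r.std()

# Synthetic AR(1) series with known autocorrelation 0.6 plus a linear trend
rng = np.random.default_rng(1)
x = np.zeros(500)
for i in range(1, 500):
    x[i] = 0.6 * x[i-1] + rng.standard_normal()
acf1, amp = variability_summary(x + 0.01 * np.arange(500))
print(f"lag-1 acf = {acf1:.2f}, residual std = {amp:.2f}")
```

The pair (acf1, amp) is the kind of two-number statistic that the abstract finds somewhat less powerful than using the full time series.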


2020 ◽  
Author(s):  
James Annan ◽  
Julia Hargreaves ◽  
Thorsten Mauritsen ◽  
Bjorn Stevens

<p>We examine what can be learnt about climate sensitivity from variability in the surface air temperature record over the instrumental period, from around 1880 to the present. While many previous studies have used the trend in the time series to constrain equilibrium climate sensitivity, it has recently been argued that temporal variability may also be a powerful constraint. We explore this question in the context of a simple, widely used energy balance model of the climate system. We consider two recently proposed summary measures of variability and also show how the full information content can be optimally used in this idealised scenario. We find that the constraint provided by variability is inherently skewed and its power is inversely related to the sensitivity itself, discriminating most strongly between low sensitivity values and weakening substantially for higher values. As a result, it is only when the sensitivity is very low that the variability can provide a tight constraint. Our results support the analysis of variability as a potentially useful tool in helping to constrain equilibrium climate sensitivity, but suggest caution in the interpretation of precise results.</p>


2021 ◽  
Author(s):  
Karsten Haustein

<p class="p1">The roles of external (radiative) forcing factors and of internal unforced (ocean) low-frequency variations in the instrumental global temperature record are still hotly debated. More recent findings point towards a larger contribution from changes in external forcing, but the jury is still out. While the estimate of the human-induced fraction of total global warming since pre-industrial times is fairly robust and largely independent of multidecadal internal variability, this is not necessarily the case for key regional features such as Arctic amplification or enhanced warming over continental land areas. Accounting for the slow global temperature adjustment after strong volcanic eruptions, the spatially heterogeneous nature of anthropogenic aerosol forcing and known biases in the sea surface temperature record, almost all of the multidecadal fluctuations observed over at least the last 160+ years can be explained without a relevant role for internal variability. Using a two-box response model framework, I will demonstrate not only that multidecadal variability is very likely a forced response, but also that warming trends over the past 40+ years are entirely attributable to human factors. Repercussions for amplified European (or D-A-CH for that matter) warming and associated implications for extreme weather events are discussed. Further consideration is given to the communications aspect of such critical results as well as to the question of wider societal impacts.</p>
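The two-box response model framework referred to above can be sketched as a pair of exponential impulse responses convolved with the forcing; the sensitivities and timescales below are illustrative values of the kind used in such frameworks, not those from the talk.

```python
import numpy as np

def two_box_response(F, q=(0.33, 0.41), d=(4.1, 249.0)):
    """Temperature response (K) of a two-box model to annual forcing F (W m^-2).
    q: box sensitivities (K per W m^-2), d: response timescales (years);
    defaults are illustrative, not values from the presentation."""
    T = np.zeros(len(F))
    for qi, di in zip(q, d):
        comp = 0.0
        for t in range(len(F)):
            # exact discrete update of one exponential impulse-response box
            comp = comp * np.exp(-1.0 / di) + qi * (1 - np.exp(-1.0 / di)) * F[t]
            T[t] += comp
    return T

F = np.full(2000, 3.7)     # step forcing roughly like a CO2 doubling
T = two_box_response(F)
print(f"long-run warming = {T[-1]:.2f} K")  # approaches (q1 + q2) * F
```

Driving such a model with the historical forcing series, rather than a step, is what allows the forced response to be compared against observed multidecadal fluctuations.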


2020 ◽  
Author(s):  
Raphaël Hébert ◽  
Shaun Lovejoy ◽  
Bruno Tremblay

Abstract. We directly exploit the stochasticity of the internal variability and the linearity of the forced response to make global temperature projections based on historical data and a Green’s function, or climate response function (CRF). To make the problem tractable, we take advantage of the temporal scaling symmetry to define a scaling CRF characterized by the scaling exponent H, which controls the long-range memory of the climate, i.e. how fast the system tends toward a steady state, and an inner scale τ ≈ 2 years below which the higher-frequency response is smoothed out. An aerosol scaling factor and a non-linear volcanic damping exponent were introduced to account for the large uncertainty in these forcings. We estimate the model and forcing parameters by Bayesian inference, which allows us to analytically calculate the transient climate response and the equilibrium climate sensitivity as 1.7 (+0.3/−0.2) K and 2.4 (+1.3/−0.6) K respectively (likely range). Projections to 2100 according to the RCP 2.6, 4.5 and 8.5 scenarios yield warmings with respect to 1880–1910 of 1.5 (+0.4/−0.2) K, 2.3 (+0.7/−0.5) K and 4.2 (+1.3/−0.9) K. These projection estimates are lower than those based on a Coupled Model Intercomparison Project phase 5 multi-model ensemble; more importantly, their uncertainties are smaller and depend only on historical temperature and forcing series. The key uncertainty is due to aerosol forcings; we find a modern (2005) forcing value of [−1.0, −0.3] W m−2 (90 % confidence interval) with median at −0.7 W m−2. Projecting to 2100, we find that to keep the warming below 1.5 K, future emissions must undergo cuts similar to RCP 2.6, for which the probability of remaining under 1.5 K is 48 %. RCP 4.5 and RCP 8.5-like futures overshoot with very high probability.
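The scaling-CRF projection idea can be sketched as a convolution of the forcing with a power-law response kernel; the kernel form and all parameter values below are simplified assumptions for illustration, not the authors' exact formulation.

```python
import numpy as np

def scaling_crf(F, H=-0.5, tau=2.0, s=0.5):
    """Convolve annual forcing F (W m^-2) with a scaling (power-law) response
    kernel G(t) ~ s * (t + tau)^(H - 1).  H controls the long-range memory,
    tau is the inner smoothing scale; all values here are illustrative."""
    t = np.arange(len(F), dtype=float)
    G = s * (t + tau) ** (H - 1.0)
    # 'full' convolution truncated to the forcing length gives the response
    return np.convolve(F, G)[:len(F)]

T = scaling_crf(np.ones(100))   # response to a unit step forcing
print(f"T[1] = {T[1]:.3f}, T[99] = {T[99]:.3f}")
```

Historical projections are obtained by replacing the unit step with the estimated historical forcing series and fitting H, tau and the scale factor by Bayesian inference.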


2018 ◽  
Author(s):  
Andrew E. Dessler ◽  
Thorsten Mauritsen ◽  
Bjorn Stevens

Abstract. Our climate is constrained by the balance between solar energy absorbed by the Earth and terrestrial energy radiated to space. This energy balance has been widely used to infer equilibrium climate sensitivity (ECS) from observations of 20th-century warming. Such estimates yield lower values than other methods, and these have been influential in pushing down the consensus ECS range in recent assessments. Here we test the method using a 100-member ensemble of MPI-ESM1.1 climate model simulations of the period 1850–2005 with known forcing. We calculate ECS in each ensemble member using energy balance, yielding values ranging from 2.1 to 3.9 K. The spread in the ensemble is related to the central hypothesis in the energy budget framework: that global average surface temperature anomalies are indicative of anomalies in outgoing energy (either of terrestrial origin or reflected solar energy). We find that this assumption is not well supported over the historical temperature record in the model ensemble or in more recent satellite observations. We find that framing energy balance in terms of 500-hPa tropical temperature better describes the planet's energy balance.


2013 ◽  
Vol 4 (2) ◽  
pp. 785-852 ◽  
Author(s):  
R. B. Skeie ◽  
T. Berntsen ◽  
M. Aldrin ◽  
M. Holden ◽  
G. Myhre

Abstract. The equilibrium climate sensitivity (ECS) is constrained based on observed near-surface temperature change, changes in ocean heat content (OHC) and detailed radiative forcing (RF) time series from pre-industrial times to 2010 for all main anthropogenic and natural forcing mechanisms. The RF time series are linked to the observations of OHC and temperature change through an energy balance model and a stochastic model, using a Bayesian approach to estimate the ECS and other unknown parameters from the data. For the net anthropogenic RF the posterior mean in 2010 is 2.1 W m−2 with a 90% credible interval (C.I.) of 1.3 to 2.8 W m−2, excluding present-day total aerosol effects (direct + indirect) stronger than −1.7 W m−2. The posterior mean of the ECS is 1.8 °C with a 90% C.I. ranging from 0.9 to 3.2 °C, which is tighter than most previously published estimates. We find that using three OHC data sets simultaneously substantially narrows the range in ECS, while using only one set over a similar time period produces results comparable to previously published estimates, including the heavy tail in the probability density function. The use of an additional 10 years of data for global mean temperature change and ocean heat content narrows the probability density function of the ECS. In addition, when only data up to the year 2000 are used, the estimated mean ECS is 20% higher. Explicitly accounting for internal variability widens the 90% C.I. for the ECS by 60%, while the mean ECS becomes only slightly higher.
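The Bayesian linkage between forcing, ocean heat uptake and temperature can be sketched with a toy grid-based posterior for the ECS; all numbers below are illustrative assumptions, and the actual study uses a much richer energy-balance and stochastic model.

```python
import numpy as np

# Toy Bayesian inversion: posterior over ECS given an observed warming dT
# (with Gaussian error), a net forcing change dF, an ocean heat uptake dN,
# and F_2x = 3.7 W m^-2.  All numbers are illustrative assumptions.
f2x, dF, dN = 3.7, 2.1, 0.6        # W m^-2
dT_obs, sigma_T = 0.85, 0.1        # K

ecs_grid = np.linspace(0.5, 8.0, 2000)
dT_pred = ecs_grid * (dF - dN) / f2x               # energy-balance prediction
likelihood = np.exp(-0.5 * ((dT_obs - dT_pred) / sigma_T) ** 2)
posterior = likelihood / likelihood.sum()          # flat prior on ECS
mean_ecs = (ecs_grid * posterior).sum()
print(f"posterior mean ECS = {mean_ecs:.2f} K")
```

Adding independent OHC constraints amounts to multiplying further likelihood terms into `posterior`, which is why multiple data sets narrow the ECS range.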


2021 ◽  
Author(s):  
Maria Buyanova ◽  
Sergey Kravtsov ◽  
Andrey Gavrilov ◽  
Dmitry Mukhin ◽  
Evgeny Loskutov ◽  
...  

<p>An analysis of the climate system is usually complicated by its very high dimensionality and its nonlinearity, which impede spatial and time scale separation. An even more difficult problem is to obtain separate estimates of the climate system’s response to external forcing (e.g. anthropogenic emissions of greenhouse gases and aerosols) and of the contribution of the climate system’s internal variability to recent climate trends. Identification of spatiotemporal climatic patterns representing forced signals and internal variability in global climate models (GCMs) would make it possible to characterize these patterns in the observed data and to analyze dynamical relationships between these two types of climate variability.</p><p>In contrast with real climate observations, many GCMs are able to provide ensembles of many climate realizations under the same external forcing, with relatively independent initial conditions (e.g. LENS [1], MPI-GE [2], CMIP ensembles of 20th century climate). In this report, a recently developed method of empirical spatio-temporal data decomposition into linear dynamical modes (LDMs) [3], based on a Bayesian approach, is modified to address the problem of self-consistent separation of the climate system’s internal variability modes and the forced response signals in such ensembles. The LDM method provides the time series of principal components and corresponding spatial patterns; in application to an ensemble of realizations, it determines both the time series of the internal variability modes of the current realization and the time series of the forced response (defined as the signal shared by all realizations). The advantage of LDMs is their ability to take into account the time scales of the system’s evolution better than some other linear techniques, e.g. traditional empirical orthogonal function decomposition.
Furthermore, the modified ensemble LDM (E-LDM) method is designed to determine the optimal number of principal components and to distinguish their time scales for both internal variability modes and forced response signals.</p><p>The technique and results of applying LDM method to different GCM ensemble realizations will be presented and discussed. This research was supported by the Russian Science Foundation (Grant No. 18-12-00231).</p><p>[1] Kay, J. E., Deser, C., Phillips, A., Mai, A., Hannay, C., Strand, G., Arblaster, J., Bates, S., Danabasoglu, G., Edwards, J., Holland, M. Kushner, P., Lamarque, J.-F., Lawrence, D., Lindsay, K., Middleton, A., Munoz, E., Neale, R., Oleson, K., Polvani, L., and M. Vertenstein (2015), The Community Earth System Model (CESM) Large Ensemble Project: A Community Resource for Studying Climate Change in the Presence of Internal Climate Variability, Bulletin of the American Meteorological Society, doi: 10.1175/BAMS-D-13-00255.1, 96, 1333-1349 </p><p>[2] Maher, N., Milinski, S., Suarez-Gutierrez, L., Botzet, M., Dobrynin, M., Kornblueh, L., Kröger, J., Takano, Y., Ghosh, R., Hedemann, C., Li, C., Li, H., Manzini, E., Notz, N., Putrasahan, D., Boysen, L., Claussen, M., Ilyina, T., Olonscheck, D., Raddatz, T., Stevens, B. and Marotzke, J. (2019). The Max Planck Institute Grand Ensemble: Enabling the Exploration of Climate System Variability. Journal of Advances in Modeling Earth Systems, 11, 1-21. https://doi.org/10.1029/2019MS001639</p><p>[3] Gavrilov, A., Kravtsov, S., Mukhin, D. (2020). Analysis of 20th century surface air temperature using linear dynamical modes. Chaos: An Interdisciplinary Journal of Nonlinear Science, 30(12), 123110. https://doi.org/10.1063/5.0028246</p>
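The zeroth-order version of the forced/internal separation that E-LDM refines is simply the ensemble mean versus deviations from it; a minimal sketch on synthetic data (the Bayesian LDM decomposition itself is far more involved):

```python
import numpy as np

def split_forced_internal(ensemble):
    """ensemble: array of shape (n_members, n_time).
    The forced response is the signal shared by all realizations, approximated
    here by the ensemble mean; internal variability is each member's deviation
    from it.  (E-LDM refines this with a Bayesian mode decomposition.)"""
    forced = ensemble.mean(axis=0)
    internal = ensemble - forced
    return forced, internal

rng = np.random.default_rng(2)
signal = np.linspace(0.0, 1.0, 200)                 # common forced trend
members = signal + 0.3 * rng.standard_normal((50, 200))
forced, internal = split_forced_internal(members)
print(f"max forced-signal error = {np.abs(forced - signal).max():.3f}")
```

The error shrinks as 1/sqrt(n_members), which is why large ensembles such as LENS and MPI-GE make this separation feasible at all.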


2018 ◽  
Vol 18 (7) ◽  
pp. 5147-5155 ◽  
Author(s):  
Andrew E. Dessler ◽  
Thorsten Mauritsen ◽  
Bjorn Stevens

Abstract. Our climate is constrained by the balance between solar energy absorbed by the Earth and terrestrial energy radiated to space. This energy balance has been widely used to infer equilibrium climate sensitivity (ECS) from observations of 20th-century warming. Such estimates yield lower values than other methods, and these have been influential in pushing down the consensus ECS range in recent assessments. Here we test the method using a 100-member ensemble of the Max Planck Institute Earth System Model (MPI-ESM1.1) simulations of the period 1850–2005 with known forcing. We calculate ECS in each ensemble member using energy balance, yielding values ranging from 2.1 to 3.9 K. The spread in the ensemble is related to the central assumption in the energy budget framework: that global average surface temperature anomalies are indicative of anomalies in outgoing energy (either of terrestrial origin or reflected solar energy). We find that this assumption is not well supported over the historical temperature record in the model ensemble or more recent satellite observations. We find that framing energy balance in terms of 500 hPa tropical temperature better describes the planet's energy balance.
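The energy-budget method being tested reduces to a one-line formula, ECS = F_2x ΔT / (ΔF − ΔN); a minimal sketch with illustrative historical-period numbers (assumptions, not values from the paper):

```python
def energy_budget_ecs(dT, dF, dN, f2x=3.7):
    """Standard energy-budget estimate: ECS = f2x * dT / (dF - dN), where dT
    is the observed warming (K), dF the change in radiative forcing and dN
    the change in top-of-atmosphere imbalance (both W m^-2)."""
    return f2x * dT / (dF - dN)

# Illustrative historical-period numbers (assumptions for demonstration):
ecs = energy_budget_ecs(dT=0.9, dF=2.3, dN=0.6)
print(f"energy-budget ECS = {ecs:.2f} K")  # -> 1.96 K for these inputs
```

Applying this formula to each member of the 100-member ensemble, with the forcing known, is what yields the 2.1 to 3.9 K spread reported above.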


2020 ◽  
Vol 33 (10) ◽  
pp. 4121-4140 ◽  
Author(s):  
Jonah Bloch-Johnson ◽  
Maria Rugenstein ◽  
Dorian S. Abbot

Abstract. The sensitivity of the climate to CO2 forcing depends on spatially varying radiative feedbacks that act both locally and nonlocally. We assess whether a method employing multiple regression can be used to estimate local and nonlocal radiative feedbacks from internal variability. We test this method on millennial-length simulations performed with six coupled atmosphere–ocean general circulation models (AOGCMs). Given the spatial pattern of warming, the method does quite well at recreating the top-of-atmosphere flux response for most regions of Earth, except over the Southern Ocean where it consistently overestimates the change, leading to an overestimate of the sensitivity. For five of the six models, the method finds that local feedbacks are positive due to cloud processes, balanced by negative nonlocal shortwave cloud feedbacks associated with regions of tropical convection. For four of these models, the magnitudes of both are comparable to the Planck feedback, so that changes in the ratio between them could lead to large changes in climate sensitivity. The positive local feedback explains why observational studies that estimate spatial feedbacks using only local regressions predict an unstable climate. The method implies that sensitivity in these AOGCMs increases over time due to a reduction in the share of warming occurring in tropical convecting regions and the resulting weakening of associated shortwave cloud and longwave clear-sky feedbacks. Our results provide a step toward an observational estimate of time-varying climate sensitivity by demonstrating that many aspects of spatial feedbacks appear to be the same between internal variability and the forced response.
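The multiple-regression idea can be sketched on synthetic data with known per-region feedback coefficients: regress a TOA flux anomaly time series on temperature anomalies in several regions. The region count, coefficients and noise level below are illustrative assumptions.

```python
import numpy as np

# Synthetic internal variability: regional temperature anomalies T drive a
# TOA flux anomaly N through fixed per-region feedbacks (W m^-2 K^-1).
rng = np.random.default_rng(3)
n_t, n_regions = 1000, 4
true_lambda = np.array([-1.2, -0.8, 0.5, -1.5])        # per-region feedbacks
T = rng.standard_normal((n_t, n_regions))              # regional temp anomalies
N = T @ true_lambda + 0.1 * rng.standard_normal(n_t)   # TOA flux + noise

# Multiple regression recovers the feedback coefficients jointly, so that
# nonlocal influences (e.g. tropical convecting regions) are not aliased
# into local estimates as they would be with region-by-region regressions.
est_lambda, *_ = np.linalg.lstsq(T, N, rcond=None)
print(np.round(est_lambda, 2))
```

The Southern Ocean bias reported above corresponds to the case where the warming pattern under forcing leaves the range spanned by internal variability, so the regression extrapolates poorly there.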


Atmosphere ◽  
2021 ◽  
Vol 12 (2) ◽  
pp. 147
Author(s):  
Nicola Scafetta

Climate changes are due to anthropogenic factors, volcano eruptions and the natural variability of the Earth’s system. Herein the natural variability of the global surface temperature is modeled using a set of harmonics spanning from the inter-annual to the millennial scales. The model is supported by the following considerations: (1) power spectrum evaluations show 11 spectral peaks (from the sub-decadal to the multi-decadal scales) above the 99% confidence level of the known temperature uncertainty; (2) spectral coherence analysis between the independent global surface temperature periods 1861–1937 and 1937–2013 highlights at least eight common frequencies between 2- and 20-year periods; (3) paleoclimatic temperature reconstructions during the Holocene present secular to millennial oscillations. The millennial oscillation was responsible for the cooling observed from the Medieval Warm Period (900–1400) to the Little Ice Age (1400–1800) and, on average, could have caused about 50% of the warming observed since 1850. This finding implies an equilibrium climate sensitivity of 1.0–2.3 °C for CO2 doubling, likely centered around 1.5 °C. This low sensitivity to radiative forcing agrees with the conclusions of recent studies. Semi-empirical models since 1000 A.D. are developed using 13 identified harmonics (representing the natural variability of the climate system) and a climatic function derived from the Coupled Model Intercomparison Project 5 (CMIP5) model ensemble mean simulation (representing the mean greenhouse gas (GHG), aerosol and volcano temperature contributions), scaled under the assumption of an equilibrium climate sensitivity of 1.5 °C. The harmonic model is evaluated using temperature data from 1850 to 2013 to test its ability to predict the major temperature patterns observed in the record from 2014 to 2020.
On short, medium and long time scales, the semi-empirical models predict: (1) temperature maxima in 2015–2016 and 2020, which is confirmed by the 2014–2020 global temperature record; (2) a relatively steady global temperature from 2000 to 2030–2040; (3) a 2000–2100 mean projected global warming of about 1 °C. The semi-empirical model accurately reconstructs the historical surface temperature record since 1850 and hindcasts mean surface temperature proxy reconstructions since the medieval period better than the CMIP5 ensemble mean simulation, which is unable to reproduce the Medieval Warm Period.
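With the oscillation periods fixed in advance, fitting such a harmonic model reduces to linear least squares on sine/cosine pairs; a minimal sketch with two illustrative periods (the study itself uses 13 identified harmonics):

```python
import numpy as np

def fit_harmonics(t, y, periods):
    """Least-squares fit of y(t) to a mean plus a sin/cos pair at each fixed
    period; this is the linear-algebra core of such semi-empirical models."""
    cols = [np.ones_like(t)]
    for P in periods:
        cols += [np.sin(2 * np.pi * t / P), np.cos(2 * np.pi * t / P)]
    A = np.column_stack(cols)
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return A @ coef  # fitted harmonic model evaluated at t

# Noise-free test signal built from two known oscillations
t = np.arange(0.0, 160.0)                       # 160 "years"
y = 0.2 * np.sin(2 * np.pi * t / 60) + 0.1 * np.cos(2 * np.pi * t / 9.1)
fit = fit_harmonics(t, y, periods=[60, 9.1])
print(f"max fit error = {np.abs(fit - y).max():.2e}")
```

Because the periods are fixed rather than estimated from the same record, the out-of-sample 2014–2020 test described above is what actually probes the model's predictive value.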

