Twentieth-Century Surface Air Temperature over China and the Globe Simulated by Coupled Climate Models

2006 ◽  
Vol 19 (22) ◽  
pp. 5843-5858 ◽  
Author(s):  
Tianjun Zhou ◽  
Rucong Yu

Abstract This paper examines variations of the surface air temperature (SAT) over China and the globe in the twentieth century simulated by 19 coupled climate models driven by historical natural and anthropogenic forcings. Most models perform well in simulating both the global and the Northern Hemispheric mean SAT evolutions of the twentieth century. The inclusion of natural forcings improves the simulation, in particular for the first half of the century. The reproducibility of the SAT averaged over China is lower than that of the global and hemispheric averages, but it is still acceptable. The contribution of natural forcings to the SAT over China in the first half of the century is not as robust as that to the global and hemispheric averages. No model could successfully produce the reconstructed warming over China in the 1920s. The prescribed natural and anthropogenic forcings in the coupled climate models mainly produce the warming trends and the decadal- to interdecadal-scale SAT variations with poor performances at shorter time scales. The prominent warming trend in the last half of the century over China and its acceleration in recent decades are weakly simulated. There are discrepancies between the simulated and observed regional features of the SAT trend over China. Few models could produce the summertime cooling over the middle part of eastern China (27°–36°N), while two models acceptably produce the meridional gradients of the wintertime warming trends, with north China experiencing larger warming. Limitations of the current state-of-the-art coupled climate models in simulating spatial patterns of the twentieth-century SAT over China cast a shadow upon their capability toward projecting credible geographical distributions of future climate change through Intergovernmental Panel on Climate Change (IPCC) scenario simulations.

2021 ◽  
Author(s):  
Thordis Thorarinsdottir ◽  
Jana Sillmann ◽  
Marion Haugen ◽  
Nadine Gissibl ◽  
Marit Sandstad

Reliable projections of extremes in near-surface air temperature (SAT) by climate models become more and more important as global warming leads to significant increases in the hottest days and decreases in the coldest nights around the world, with considerable impacts on various sectors such as agriculture, health and tourism.

Climate model evaluation has traditionally been performed by comparing summary statistics derived from simulated model output with corresponding observed quantities using, for instance, the root-mean-square error (RMSE) or mean bias, as in the model evaluation chapter of the Fifth Assessment Report of the Intergovernmental Panel on Climate Change (IPCC AR5). Both RMSE and mean bias compare averages over time and/or space, ignoring the variability, or the uncertainty, in the underlying values. Particularly when the interest lies in climate extremes, climate models should be evaluated by comparing the probability distribution of model output to the corresponding distribution of observed data.

To address this shortcoming, we use the integrated quadratic distance (IQD) to compare distributions of simulated indices to the corresponding distributions from a data product. The IQD is the proper divergence associated with the proper continuous ranked probability score (CRPS): it fulfills essential decision-theoretic properties for ranking competing models and testing equality in performance, while also assessing the full distribution.

The IQD is applied to evaluate CMIP5 and CMIP6 simulations of monthly maximum (TXx) and minimum (TNn) near-surface air temperature over the data-dense regions Europe and North America against both observational and reanalysis datasets. There is no notable difference between the model generations CMIP5 and CMIP6 when the model simulations are compared against the observational dataset HadEX2. However, the CMIP6 models show, with a few exceptions, better agreement with the reanalysis ERA5 than the CMIP5 models. Overall, the climate models show higher skill when compared against ERA5 than against HadEX2. While the model rankings vary with region, season and index, the model evaluation is robust against changes in the grid resolution considered in the analysis.
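As a rough illustration (not the authors' code), the IQD between two samples can be sketched as the integral of the squared difference between their empirical CDFs. The sample values below are invented for the example; only the IQD definition comes from the abstract.

```python
import numpy as np

def ecdf(sample, grid):
    """Empirical CDF of `sample` evaluated at each point of `grid`."""
    s = np.sort(np.asarray(sample))
    return np.searchsorted(s, grid, side="right") / s.size

def iqd(sample_a, sample_b, n_grid=2000):
    """Integrated quadratic distance: the integral of (F_A(x) - F_B(x))**2
    over x, approximated by a Riemann sum on a common grid."""
    lo = min(np.min(sample_a), np.min(sample_b))
    hi = max(np.max(sample_a), np.max(sample_b))
    grid = np.linspace(lo, hi, n_grid)
    diff = ecdf(sample_a, grid) - ecdf(sample_b, grid)
    return float(np.sum(diff ** 2) * (grid[1] - grid[0]))

rng = np.random.default_rng(0)
model_txx = rng.normal(30.0, 2.0, 500)  # hypothetical simulated TXx values (deg C)
obs_txx = rng.normal(31.0, 2.5, 500)    # hypothetical "observed" TXx values (deg C)
print(iqd(model_txx, obs_txx))
```

Unlike RMSE or mean bias, this divergence is zero only when the two full distributions coincide, which is why it discriminates between models that agree in the mean but differ in the tails.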


2015 ◽  
Vol 28 (23) ◽  
pp. 9188-9205 ◽  
Author(s):  
Nicholas R. Cavanaugh ◽  
Samuel S. P. Shen

Abstract This paper explores the effects from averaging weather station data onto a grid on the first four statistical moments of daily minimum and maximum surface air temperature (SAT) anomalies over the entire globe. The Global Historical Climatology Network–Daily (GHCND) and the Met Office Hadley Centre GHCND (HadGHCND) datasets from 1950 to 2010 are examined. The GHCND station data exhibit large spatial patterns for each moment and statistically significant moment trends from 1950 to 2010, indicating that SAT probability density functions are non-Gaussian and have undergone characteristic changes in shape due to decadal variability and/or climate change. Comparisons with station data show that gridded averages always underestimate observed variability, particularly in the extremes, and have altered moment trends that are in some cases opposite in sign over large geographic areas. A statistical closure approach based on the quasi-normal approximation is taken to explore SAT’s higher-order moments and point correlation structure. This study focuses specifically on relating variability calculated from station data to that from gridded data through the moment equations for weighted sums of random variables. The higher-order and nonlinear spatial correlations up to the fourth order demonstrate that higher-order moments at grid scale can be determined approximately by functions of station pair correlations that tend to follow the usual Kolmogorov scaling relation. These results can aid in the development of constraints to reduce uncertainties in climate models and have implications for studies of atmospheric variability, extremes, and climate change using gridded observations.
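The moment equations for weighted sums that the paper invokes can be made concrete with a toy example: the variance of a grid-box (area-weighted) average follows exactly from the station covariance matrix, and is smaller than the mean station variance whenever stations are imperfectly correlated. All numbers below are synthetic, not GHCND data.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical daily SAT anomalies at 5 stations inside one grid box,
# correlated but not identical (correlation decays with station separation).
n_days, n_sta = 3650, 5
corr = np.array([[0.7 ** abs(i - j) for j in range(n_sta)] for i in range(n_sta)])
chol = np.linalg.cholesky(corr)
anom = rng.standard_normal((n_days, n_sta)) @ chol.T

w = np.full(n_sta, 1.0 / n_sta)   # equal area weights
grid_avg = anom @ w               # gridded (area-averaged) anomaly series

# Moment equation for a weighted sum: Var(sum_i w_i X_i) = w^T C w,
# where C is the station covariance matrix.
C = np.cov(anom, rowvar=False)
var_from_moments = w @ C @ w

print(grid_avg.var(ddof=1), var_from_moments, anom.var(axis=0, ddof=1).mean())
```

The third printed value (mean station variance) exceeds the first two, illustrating the abstract's point that gridded averages underestimate the variability observed at stations.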


2013 ◽  
Vol 2013 ◽  
pp. 1-9 ◽  
Author(s):  
Zhenchun Hao ◽  
Qin Ju ◽  
Weijuan Jiang ◽  
Changjun Zhu

The Fourth Assessment Report of the Intergovernmental Panel on Climate Change (IPCC AR4) presents twenty-two global climate models (GCMs). In this paper, we evaluate the ability of these 22 GCMs to reproduce temperature and precipitation over the Tibetan Plateau by comparing their output with ground observations for 1961–1990. The results suggest that all the GCMs underestimate surface air temperature and that most models overestimate precipitation over most of the Tibetan Plateau. Only a few models (five each for temperature and precipitation) are roughly consistent with the observed annual temperature and precipitation variations. Comparatively, GFCM21 and CGMR best reproduce the observed annual temperature and precipitation variability over the Tibetan Plateau. Although the scenarios predicted by the GCMs vary greatly, all the models consistently predict increasing trends in temperature and precipitation over most of the Tibetan Plateau in the next 90 years. The results suggest that temperature and precipitation will both increase in all three future periods under the different scenarios, with scenario A1 increasing the most and scenario A1B the least.


2017 ◽  
Vol 30 (16) ◽  
pp. 6521-6541 ◽  
Author(s):  
Sumant Nigam ◽  
Natalie P. Thomas ◽  
Alfredo Ruiz-Barradas ◽  
Scott J. Weaver

The linear trend in twentieth-century surface air temperature (SAT)—a key secular warming signal—exhibits striking seasonal variations over Northern Hemisphere continents; SAT trends are pronounced in winter and spring but notably weaker in summer and fall. The SAT trends in historical twentieth-century climate simulations informing the Intergovernmental Panel on Climate Change's Fifth Assessment Report show varied (and often unrealistic) strength and structure, and markedly weaker seasonal variation. The large intra-ensemble spread of winter SAT trends in some historical simulations was surprising, especially in the context of century-long linear trends, with implications for the detection of the secular warming signal. The striking seasonality of observed secular warming over northern continents warrants an explanation and the representation of related processes in climate models. Here, the seasonality of SAT trends over North America is shown to result from land surface–hydroclimate interactions and, to an extent, also from the secular change in low-level atmospheric circulation and related thermal advection. It is argued that the winter dormancy and summer vigor of the hydrologic cycle over middle- to high-latitude continents permit different responses to the additional incident radiative energy from increasing greenhouse gas concentrations. The seasonal cycle of climate, despite its monotony, provides an expanded phase space for the exposition of the dynamical and thermodynamical processes generating secular warming, and an exceptional cost-effective opportunity for benchmarking climate projection models.


2020 ◽  
Vol 12 (2) ◽  
pp. 218 ◽  
Author(s):  
José Antonio Sobrino ◽  
Yves Julien ◽  
Susana García-Monteiro

The Intergovernmental Panel on Climate Change's regular scientific assessments of global warming are based on measurements of air temperature from weather stations, buoys or ships. More specifically, annual mean air temperatures are estimated from the integration of these measurements into climate models, with some areas (Africa, Antarctica, the seas) clearly underrepresented. Present-day satellites allow estimation of surface temperature with full coverage of our planet at a sub-daily revisit frequency and kilometric resolution. In this work, a simple methodology is developed for estimating the surface temperature of planet Earth from the MODIS Terra and Aqua land and sea surface temperature products, as if the whole planet were reduced to a single pixel. The results, obtained through a completely independent methodology, corroborate the temperature anomalies retrieved from climate models and show a linear warming trend of 0.018 ± 0.007 °C/yr.
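A warming trend quoted in °C/yr is typically the ordinary-least-squares slope of annual means against year. The sketch below uses invented annual temperatures with a built-in 0.018 °C/yr trend (not the MODIS record) just to show the computation.

```python
import numpy as np

def linear_trend(years, temps):
    """OLS slope (degrees C per year) of annual-mean temperature vs. year."""
    slope, _intercept = np.polyfit(years, temps, 1)
    return slope

# Hypothetical annual planetary-mean temperatures: a 0.018 C/yr warming
# trend plus interannual noise (illustrative values only).
rng = np.random.default_rng(2)
years = np.arange(2000, 2020)
temps = 14.0 + 0.018 * (years - 2000) + rng.normal(0.0, 0.05, years.size)

print(round(linear_trend(years, temps), 3))
```

The fitted slope recovers the imposed trend to within the noise-induced uncertainty, which is the analogue of the ±0.007 °C/yr interval quoted in the abstract.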


2006 ◽  
Vol 19 (5) ◽  
pp. 723-740 ◽  
Author(s):  
R. J. Stouffer ◽  
A. J. Broccoli ◽  
T. L. Delworth ◽  
K. W. Dixon ◽  
R. Gudgel ◽  
...  

Abstract The climate response to idealized changes in the atmospheric CO2 concentration by the new GFDL climate model (CM2) is documented. This new model is very different from earlier GFDL models in its parameterizations of subgrid-scale physical processes, numerical algorithms, and resolution. The model was constructed to be useful for both seasonal-to-interannual predictions and climate change research. Unlike previous versions of the global coupled GFDL climate models, CM2 does not use flux adjustments to maintain a stable control climate. Results from two model versions, Climate Model versions 2.0 (CM2.0) and 2.1 (CM2.1), are presented. Two atmosphere–mixed layer ocean or slab models, Slab Model versions 2.0 (SM2.0) and 2.1 (SM2.1), are constructed corresponding to CM2.0 and CM2.1. Using the SM2 models to estimate the climate sensitivity, it is found that the equilibrium globally averaged surface air temperature increases 2.9 (SM2.0) and 3.4 K (SM2.1) for a doubling of the atmospheric CO2 concentration. When forced by a 1% per year CO2 increase, the surface air temperature difference around the time of CO2 doubling [transient climate response (TCR)] is about 1.6 K for both coupled model versions (CM2.0 and CM2.1). The simulated warming is near the median of the responses documented for the climate models used in the 2001 Intergovernmental Panel on Climate Change (IPCC) Working Group I Third Assessment Report (TAR). The thermohaline circulation (THC) weakened in response to increasing atmospheric CO2. By the time of CO2 doubling, the weakening in CM2.1 is larger than that found in CM2.0: 7 and 4 Sv (1 Sv ≡ 10⁶ m³ s⁻¹), respectively. However, the THC in the control integration of CM2.1 is stronger than in CM2.0, so that the percentage changes in the THC are more similar between the two versions.
The average THC change for the models presented in the TAR is about 3 or 4 Sv; however, the range across the model results is very large, varying from a slight increase (+2 Sv) to a large decrease (−10 Sv).
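The "time of CO2 doubling" in a 1% per year experiment is fixed by compound growth, which is worth making explicit since the TCR is defined at that point:

```python
import math

# Under a compounded 1% per year increase, CO2 after t years is
# C0 * 1.01**t; doubling occurs when 1.01**t = 2, i.e. at
# t = ln(2) / ln(1.01), about 70 years into the run.
t_double = math.log(2.0) / math.log(1.01)
print(round(t_double, 1))
```

So the TCR values of about 1.6 K quoted above are surface air temperature differences evaluated roughly 70 years after the start of the idealized forcing.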


2015 ◽  
Vol 7 (1) ◽  
pp. 103-113 ◽  
Author(s):  
Dmitry Basharin ◽  
Alexander Polonsky ◽  
Gintautas Stankūnavičius

An assessment of plausible climate change in precipitation and surface air temperature (SAT) over the European region by the end of the 21st century is provided. The assessment is based on the output of the coupled ocean–atmosphere models participating in the Coupled Model Intercomparison Project, phase 5 (CMIP5). Using a performance-based selection method, six CMIP5 general circulation models that best reproduce the historical behaviour of SAT over greater Europe were chosen for further assessment. The analysis of historical simulations within the scope of the CMIP5 project reveals that six models (namely, CNRM-CM5, HadGEM2ES, GFDL-CM3, CanESM2, MIROC5 and MPI-ESM-LR) sufficiently reproduce historical tendencies and natural variability over the region of interest. The climate change in SAT and precipitation by the end of the 21st century (2070–2099) was examined under the RCP4.5 and RCP8.5 scenarios for these selected models. Typical regional warming under the RCP4.5 (RCP8.5) scenario is assessed at 3–4.5 °C (4–8 °C) in summer and winter, while a significant reduction of precipitation (typically 20–40%) is obtained only in summer.
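One common form of performance-based selection is to rank models by RMSE against the observed historical series and keep the best few. The sketch below is a generic illustration of that idea, not the paper's actual method or data; the anomaly values and "model-X" are invented (the other two names are CMIP5 models mentioned in the abstract).

```python
import numpy as np

def rmse(sim, obs):
    """Root-mean-square error between a simulated and observed series."""
    return float(np.sqrt(np.mean((np.asarray(sim) - np.asarray(obs)) ** 2)))

def select_models(simulations, obs, n_keep):
    """Rank models by RMSE against observations; keep the best n_keep."""
    ranked = sorted(simulations, key=lambda name: rmse(simulations[name], obs))
    return ranked[:n_keep]

# Hypothetical annual-mean European SAT anomalies (deg C); toy numbers.
obs = np.array([0.1, 0.2, 0.15, 0.3, 0.4, 0.35])
simulations = {
    "CNRM-CM5": obs + np.array([0.02, -0.01, 0.03, 0.0, 0.01, -0.02]),
    "MIROC5": obs + 0.1,
    "model-X": obs + 0.5,  # hypothetical poorly performing model
}
print(select_models(simulations, obs, n_keep=2))
```

A real selection would also weigh variability and regional pattern skill, as the paper's wording ("tendencies and natural variability") suggests.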


2007 ◽  
Vol 20 (12) ◽  
pp. 2769-2790 ◽  
Author(s):  
Seung-Ki Min ◽  
Andreas Hense

Abstract A Bayesian approach is applied to the observed regional and seasonal surface air temperature (SAT) changes using single-model ensembles (SMEs) with the ECHO-G model and multimodel ensembles (MMEs) of the Intergovernmental Panel on Climate Change (IPCC) Fourth Assessment Report (AR4) simulations. Bayesian decision classifies observations into the most probable scenario out of six available scenarios: control (CTL), natural forcing (N), anthropogenic forcing (ANTHRO), greenhouse gas (G), sulfate aerosols (S), and natural plus anthropogenic forcing (ALL). Space–time vectors of the detection variable are constructed for six continental regions (North America, South America, Asia, Africa, Australia, and Europe) by combining temporal components of SATs (expressed as Legendre coefficients) from two or three subregions of each continental region. Bayesian decision results show that over most of the regions observed SATs are classified into ALL or ANTHRO scenarios for the whole twentieth century and its second half. Natural forcing and ALL scenarios are decided during the first half of the twentieth century, but only in the low-latitude region (Africa and South America), which might be related to response patterns to solar forcing. Overall seasonal decisions follow annual results, but there are notable seasonal dependences that differ between regions. A comparison of SME and MME results demonstrates that the Bayesian decisions for regional-scale SATs are largely robust to intermodel uncertainties as well as prior probability and temporal scales, as found in the global results.
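The Bayesian decision step described above amounts to classifying the observed detection vector into the scenario with the highest posterior probability. A minimal sketch, assuming Gaussian likelihoods and made-up 2D scenario means (the scenario labels come from the abstract; every number is illustrative):

```python
import numpy as np

def classify(obs_vec, scenario_means, cov, priors):
    """Return the scenario with the highest posterior probability, assuming a
    Gaussian likelihood for the detection vector under each scenario."""
    cov_inv = np.linalg.inv(cov)
    log_post = {}
    for name, mean in scenario_means.items():
        d = obs_vec - mean
        # log posterior up to a constant: log prior + log Gaussian likelihood
        log_post[name] = np.log(priors[name]) - 0.5 * d @ cov_inv @ d
    return max(log_post, key=log_post.get)

# Toy 2D detection vectors (e.g., two Legendre coefficients of regional SAT).
scenario_means = {
    "CTL": np.array([0.0, 0.0]),
    "N": np.array([0.2, 0.1]),
    "ALL": np.array([0.8, 0.5]),
}
cov = np.array([[0.1, 0.0], [0.0, 0.1]])
priors = {name: 1.0 / len(scenario_means) for name in scenario_means}

print(classify(np.array([0.75, 0.45]), scenario_means, cov, priors))
```

With equal priors, the decision reduces to picking the scenario whose mean response pattern is closest to the observations in the covariance-weighted metric, which is the sense in which observed SATs are "classified into ALL or ANTHRO scenarios."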


Water ◽  
2021 ◽  
Vol 13 (8) ◽  
pp. 1109
Author(s):  
Nobuaki Kimura ◽  
Kei Ishida ◽  
Daichi Baba

Long-term climate change may strongly affect the aquatic environment of mid-latitude water resources. In particular, temporal variations in the surface water temperature of a reservoir respond strongly to air temperature. We adopted deep neural networks (DNNs) to capture the long-term relationship between air temperature and surface water temperature, because DNNs can readily handle nonlinear data, including the uncertainties that arise in complicated climate and aquatic systems. In general, however, DNNs cannot appropriately predict unexperienced data (i.e., data outside the range of the training set), such as future water temperatures. To address this limitation, we introduce a transfer learning (TL) approach. The observed data were used to train a DNN-based model, while continuous air temperature data spanning 150 years, obtained from climate models combined with a downscaling model, were used for pre-training and to predict past and future surface water temperatures in the reservoir. The results showed that the DNN-based model with the TL approach was able to make approximate predictions reflecting the difference between past and future air temperatures. The model suggested that occurrences of the highest water temperatures will increase, and occurrences of the lowest water temperatures will decrease, in the future predictions.
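The transfer-learning idea (pre-train on long simulated series, then fine-tune on a short observed record) can be sketched without a deep network. The toy below stands in for the paper's DNN with a two-parameter linear model trained by gradient descent; all series are synthetic air-temperature anomalies, not the study's data.

```python
import numpy as np

def train(x, y, w, lr=0.2, steps=1000):
    """Gradient-descent fit of water_temp = w[0] * air_temp + w[1],
    starting from the given weights (the transfer-learning hook)."""
    w = np.array(w, dtype=float)
    for _ in range(steps):
        pred = w[0] * x + w[1]
        grad0 = 2.0 * np.mean((pred - y) * x)
        grad1 = 2.0 * np.mean(pred - y)
        w -= lr * np.array([grad0, grad1])
    return w

def mse(w, x, y):
    """Mean squared prediction error of weights w on (x, y)."""
    return float(np.mean((w[0] * x + w[1] - y) ** 2))

rng = np.random.default_rng(3)

# Pre-training: long simulated air-temperature anomaly series (deg C) with a
# synthetic water-temperature response (stand-in for 150-year model output).
air_sim = rng.uniform(-2.0, 2.0, 5000)
water_sim = 0.7 * air_sim + 0.5 + rng.normal(0.0, 0.1, air_sim.size)
w_pre = train(air_sim, water_sim, w=[0.0, 0.0])

# Transfer: fine-tune the pre-trained weights on a short observed record
# whose response differs slightly from the simulation.
air_obs = rng.uniform(-2.0, 2.0, 50)
water_obs = 0.8 * air_obs + 0.3 + rng.normal(0.0, 0.1, air_obs.size)
w_tl = train(air_obs, water_obs, w=w_pre, steps=200)

print(w_pre, w_tl)
```

Starting the fine-tuning from the pre-trained weights, rather than from scratch, is what lets the short observed record adjust a relationship already learned from the long simulated series.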

