A High-Resolution 1983–2016 Tmax Climate Data Record Based on Infrared Temperatures and Stations by the Climate Hazard Center

2019 ◽  
Vol 32 (17) ◽  
pp. 5639-5658 ◽  
Author(s):  
Chris Funk ◽  
Pete Peterson ◽  
Seth Peterson ◽  
Shraddhanand Shukla ◽  
Frank Davenport ◽  
...  

Abstract Understanding the dynamics and physics of climate extremes will be a critical challenge for twenty-first-century climate science. Increasing temperatures and saturation vapor pressures may exacerbate heat waves, droughts, and precipitation extremes. Yet our ability to monitor temperature variations is limited and declining. Between 1983 and 2016, the number of observations in the University of East Anglia Climatic Research Unit (CRU) Tmax product declined precipitously (5900 → 1000); 1000 poorly distributed measurements are insufficient to resolve regional Tmax variations. Here, we show that combining long (1983 to the near present), high-resolution (0.05°), cloud-screened archives of geostationary satellite thermal infrared (TIR) observations with a dense set of ~15 000 station observations explains 23%, 40%, 30%, 41%, and 1% more variance than the CRU globally and for South America, Africa, India, and areas north of 50°N, respectively; even greater levels of improvement are shown for the 2011–16 period (28%, 45%, 39%, 52%, and 28%, respectively). Described here for the first time, the TIR Tmax algorithm uses subdaily TIR distributions to screen out cloud-contaminated observations, providing accurate (correlation ≈ 0.8) gridded emission Tmax estimates. Blending these gridded fields with ~15 000 station observations provides a seamless, high-resolution source of accurate Tmax estimates that performs well in areas lacking dense in situ observations and even better where in situ observations are available. Cross-validation results indicate that the satellite-only, station-only, and combined products all perform accurately (R ≈ 0.8–0.9, mean absolute errors ≈ 0.8–1.0°C). Hence, the Climate Hazards Center Infrared Temperature with Stations (CHIRTSmax) dataset should provide a valuable resource for climate change studies, climate extreme analyses, and early warning applications.
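
The abstract summarizes the TIR screening and blending steps without detail. As a rough, hypothetical illustration of the idea (not the published algorithm), the sketch below rejects anomalously cold, likely cloud-contaminated observations using each pixel's subdaily TIR distribution, then blends the gridded satellite estimate with a station-based field; the percentile threshold, margin, and blending weight are all assumptions.

```python
import numpy as np

def screen_clouds(tir_subdaily, warm_pct=75.0, margin_k=10.0):
    """Hypothetical cloud screen: clouds show up as cold outliers in the
    subdaily thermal-infrared distribution of each pixel, so keep only
    observations near the warm (clear-sky) end of that distribution.

    tir_subdaily: array (n_obs_per_day, ny, nx) of brightness temperatures [K]
    """
    warm_ref = np.nanpercentile(tir_subdaily, warm_pct, axis=0)  # warm reference per pixel
    clear = tir_subdaily >= warm_ref - margin_k                  # plausible clear-sky obs
    screened = np.where(clear, tir_subdaily, np.nan)
    return np.nanmax(screened, axis=0)                           # emission Tmax proxy

def blend(sat_tmax, station_tmax, w_station):
    """Simple weighted blend of satellite and station-interpolated grids;
    w_station (0..1) would typically grow with local station density."""
    return w_station * station_tmax + (1.0 - w_station) * sat_tmax
```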


2021 ◽  
Author(s):  
Jouke de Baar ◽  
Gerard van der Schrier ◽  
Irene Garcia-Marti ◽  
Else van den Besselaar

Objective

The purpose of the European Copernicus Climate Change Service (C3S) is to support society by providing information about the past, present and future climate. For the service related to in-situ observations, one of the objectives is to provide high-resolution (0.1x0.1 and 0.25x0.25 degree) gridded wind speed fields. The gridded wind fields are based on ECA&D daily average station observations for the period 1970-2020.

Research question

We address the following research questions: [1] How efficiently can we provide the gridded wind fields as a statistically reliable ensemble, in order to represent the uncertainty of the gridding? [2] How efficiently can we exploit high-resolution geographical auxiliary variables (e.g. digital elevation model, terrain roughness) to augment the station data from a sparse network, in order to provide gridded wind fields with high-resolution local features?

Approach

In our analysis, we apply greedy forward-selection linear regression (FSLR) to include the high-resolution effects of the auxiliary variables on monthly-mean data. These data provide a ‘background’ for the daily estimates. We apply cross-validation to avoid FSLR over-fitting and use full-cycle bootstrapping to create FSLR ensemble members. Then we apply Gaussian process regression (GPR) to regress the daily anomalies. We consider the effect of the spatial distribution of station locations on the GPR gridding uncertainty.

The goal of this work is to produce several decades of daily gridded wind fields; hence, computational efficiency is of utmost importance. We alleviate the computational cost of the FSLR and GPR analyses by incorporating greedy algorithms and sparse matrix algebra.

Novelty

The gridded wind fields are calculated as a statistical ensemble of realizations. In the present analysis, the ensemble spread reflects uncertainties arising from the auxiliary variables as well as from the spatial distribution of stations.

Cross-validation is used to tune the GPR hyperparameters. Where conventional GPR hyperparameter tuning aims at an optimal prediction of the gridded mean, we instead tune the hyperparameters for optimal prediction of the gridded ensemble spread.

Building on our experience with providing similar gridded climate data sets, this set of gridded wind fields is a novel addition to the E-OBS climate data sets.
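
As a concrete (hypothetical) reading of the two-step approach, the sketch below pairs greedy forward-selection linear regression for the monthly background with Gaussian process regression for the daily anomalies, using scikit-learn; the variable names, kernel choice, and stopping rule are assumptions, not the C3S implementation.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

def forward_select(X, y, max_feats=5, cv=5):
    """Greedy forward selection: repeatedly add the auxiliary variable
    (column of X) that most improves cross-validated R^2; stop when no
    candidate improves the score (guards against over-fitting)."""
    selected, best = [], -np.inf
    while len(selected) < max_feats:
        scores = {j: cross_val_score(LinearRegression(), X[:, selected + [j]], y, cv=cv).mean()
                  for j in range(X.shape[1]) if j not in selected}
        if not scores:
            break
        j_best = max(scores, key=scores.get)
        if scores[j_best] <= best:
            break
        best = scores[j_best]
        selected.append(j_best)
    return selected

# Hypothetical usage: monthly background from auxiliary variables, then GPR
# on daily anomalies at station coordinates (X_aux, y_month, coords,
# y_daily_anom, grid_coords are placeholders for real inputs).
# feats = forward_select(X_aux, y_month)
# background = LinearRegression().fit(X_aux[:, feats], y_month)
# gpr = GaussianProcessRegressor(kernel=1.0 * RBF(length_scale=1.0) + WhiteKernel())
# gpr.fit(coords, y_daily_anom)
# mean, std = gpr.predict(grid_coords, return_std=True)  # std feeds the ensemble spread
```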


2014 ◽  
Vol 21 (4) ◽  
pp. 777-795 ◽  
Author(s):  
A. R. Ganguly ◽  
E. A. Kodra ◽  
A. Agrawal ◽  
A. Banerjee ◽  
S. Boriah ◽  
...  

Abstract. Extreme events such as heat waves, cold spells, floods, droughts, tropical cyclones, and tornadoes have potentially devastating impacts on natural and engineered systems and human communities worldwide. Stakeholder decisions about critical infrastructures, natural resources, emergency preparedness and humanitarian aid typically need to be made at local to regional scales over seasonal to decadal planning horizons. However, credible climate change attribution and reliable projections at more localized and shorter time scales remain grand challenges. Long-standing gaps include inadequate understanding of processes such as cloud physics and ocean–land–atmosphere interactions, limitations of physics-based computer models, and the importance of intrinsic climate system variability at decadal horizons. Meanwhile, the growing size and complexity of climate data from model simulations and remote sensors increases opportunities to address these scientific gaps. This perspectives article explores the possibility that physically cognizant mining of massive climate data may lead to significant advances in generating credible predictive insights about climate extremes and in turn translating them to actionable metrics and information for adaptation and policy. Specifically, we propose that data mining techniques geared towards extremes can help tackle the grand challenges in the development of interpretable climate projections, predictability, and uncertainty assessments. To be successful, scalable methods will need to handle what has been called "big data" to tease out elusive but robust statistics of extremes and change from what is ultimately small data. Physically based relationships (where available) and conceptual understanding (where appropriate) are needed to guide methods development and interpretation of results. Such approaches may be especially relevant in situations where computer models may not be able to fully encapsulate current process understanding, yet the wealth of data may offer additional insights. Large-scale interdisciplinary team efforts, involving domain experts and individual researchers who span disciplines, will be necessary to address the challenge.
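
One concrete reason extremes are "ultimately small data": decades of daily observations collapse to a handful of block maxima. A minimal, self-contained illustration (synthetic data, not taken from the article) fits a generalized extreme value (GEV) distribution to annual maxima and derives a return level:

```python
import numpy as np
from scipy.stats import genextreme

# Synthetic stand-in for a long daily record: 50 years reduce to just
# 50 annual maxima -- the "small data" from which extreme statistics come.
rng = np.random.default_rng(0)
daily = rng.gamma(shape=2.0, scale=10.0, size=(50, 365))
annual_max = daily.max(axis=1)

# Fit a Generalized Extreme Value distribution to the block maxima.
shape_c, loc, scale = genextreme.fit(annual_max)

# 100-year return level: the value exceeded with probability 1/100 each year.
rl_100 = genextreme.isf(1.0 / 100.0, shape_c, loc, scale)
print(f"Estimated 100-year return level: {rl_100:.1f}")
```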


2020 ◽  
Author(s):  
Firdos Khan ◽  
Jürgen Pilz ◽  
Shaukat Ali ◽  
Sher Muhammad

Climate change assessment plays a pivotal role in impact assessment studies for better planning and management in different areas. A three-step integrated approach is used for climate change assessment. In the first step, homogeneous climatic zones were developed by combining two statistical approaches, cluster analysis and L-moments, on the basis of the Reconnaissance Drought Index (RDI). A set of GCMs was then selected for each climate zone by incorporating Bayesian Model Averaging (BMA), using the outputs of fourteen GCMs for maximum temperature, minimum temperature and precipitation. The seven best GCMs were downscaled to higher resolution using statistical methods and considered for climate extremes assessment for each zone. The performance of the GCMs differs across climate variables, although in some cases it coincides. Climate extremes were analyzed for the baseline and the future periods F1 (2011-2040), F2 (2041-2070) and F3 (2071-2100) under the Representative Concentration Pathways (RCPs) 4.5 and 8.5. For precipitation under RCP4.5, most climate extremes show mixed (decreasing/increasing) trends. Zone-01, zone-02 and zone-03 show increasing trends, while zone-04 and zone-05 have mixed (decreasing/increasing) trends in climate extremes for all periods. For temperature, sixteen climate extreme indices were considered; among the most important are GSL, SU25, TMAXmean, TMINmean, TN10p, TN90p, TX10p, TX90p, TNn, TNx, TXn, TXx. GSL shows mixed (increasing/decreasing) trends depending on whether the zone has a cold or hot climate. Similarly, TN10p and TN90p show decreasing and increasing trends, respectively, while TX10p and TX90p have decreasing and increasing trends, respectively, under RCP4.5. TNn and TNx have mixed trends, and TXn and TXx have mostly increasing trends, except for a few time periods with decreasing or insignificant trends. Overall precipitation does not show significant changes; however, the projected intensities and frequencies change in the future and require special consideration to save infrastructure and prevent casualties and other losses. More importantly, this study will help to address different Sustainable Development Goals of the United Nations Development Programme related to climate change, hunger, environment, food security and energy.
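
The abstract names the RDI and reports trends without formulas. As a generic sketch (standard textbook definitions, not the authors' code), the snippet below computes the initial and standardized RDI and applies a basic Mann-Kendall trend test of the kind commonly used on such extreme indices; it ignores tie corrections and serial correlation.

```python
import numpy as np
from scipy import stats

def rdi_initial(precip, pet):
    """Initial RDI for one period: total precipitation over total potential
    evapotranspiration (alpha = sum P / sum PET)."""
    return np.sum(precip) / np.sum(pet)

def rdi_standardized(alphas):
    """Standardized RDI across years, assuming ln(alpha) is roughly normal."""
    y = np.log(np.asarray(alphas))
    return (y - y.mean()) / y.std(ddof=1)

def mann_kendall(x):
    """Basic Mann-Kendall trend test (no tie or autocorrelation correction):
    returns the S statistic and a two-sided p-value."""
    x = np.asarray(x)
    n = len(x)
    s = sum(np.sign(x[j] - x[i]) for i in range(n - 1) for j in range(i + 1, n))
    var_s = n * (n - 1) * (2 * n + 5) / 18.0
    z = (s - np.sign(s)) / np.sqrt(var_s) if s != 0 else 0.0
    p = 2.0 * (1.0 - stats.norm.cdf(abs(z)))
    return s, p
```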


2018 ◽  
Vol 22 (1) ◽  
pp. 241-263 ◽  
Author(s):  
Yu Zhang ◽  
Ming Pan ◽  
Justin Sheffield ◽  
Amanda L. Siemann ◽  
Colby K. Fisher ◽  
...  

Abstract. Closing the terrestrial water budget is necessary to provide consistent estimates of budget components for understanding water resources and their changes over time. Given the lack of in situ observations of budget components at anything but local scale, merging information from multiple data sources (e.g., in situ observation, satellite remote sensing, land surface models, and reanalysis) through data assimilation techniques that optimize the estimation of fluxes is a promising approach. Conditioned on the current limited data availability, a systematic method is developed to optimally combine multiple available data sources for precipitation (P), evapotranspiration (ET), runoff (R), and the total water storage change (TWSC) at 0.5° spatial resolution globally and to obtain water budget closure (i.e., to enforce P - ET - R - TWSC = 0) through a constrained Kalman filter (CKF) data assimilation technique, under the assumption that the deviation of each data source from the ensemble mean of all sources for the same budget variable serves as a proxy for its uncertainty. The resulting long-term (1984–2010), monthly, 0.5° resolution global terrestrial water cycle Climate Data Record (CDR) data set is developed under the auspices of the National Aeronautics and Space Administration (NASA) Earth System Data Records (ESDRs) program. This data set serves to bridge the gap between sparsely gauged regions and regions with sufficient in situ observations in investigating the temporal and spatial variability of terrestrial hydrology at multiple scales. The CDR created in this study is validated against in situ measurements such as river discharge from the Global Runoff Data Centre (GRDC) and the United States Geological Survey (USGS), and ET from FLUXNET. The data set is shown to be reliable and can serve the scientific community in understanding historical climate variability in water cycle fluxes and stores, benchmarking the current climate, and validating models.
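
A minimal sketch of the closure step, for a single grid cell and month: a constrained-Kalman-filter projection adjusts the merged estimates so that P - ET - R - TWSC = 0 exactly, distributing the imbalance in proportion to assumed error variances (here a diagonal covariance, with the spread of the data sources standing in for the variances, per the abstract's proxy assumption). This illustrates the technique only; the study's full assimilation system is more elaborate.

```python
import numpy as np

def close_budget(x, var):
    """Constrained-Kalman-filter step enforcing the water budget closure
    h @ x = P - ET - R - TWSC = 0 for one grid cell and month.

    x   : merged estimates [P, ET, R, TWSC]
    var : assumed error variances (e.g. spread of sources about their mean)
    """
    h = np.array([1.0, -1.0, -1.0, -1.0])   # closure constraint h @ x = 0
    S = np.diag(var)
    gain = S @ h / (h @ S @ h)              # Kalman-style gain for the constraint
    return x - gain * (h @ x)               # remove the imbalance h @ x

x = np.array([100.0, 60.0, 30.0, 5.0])      # mm/month; imbalance is +5
var = np.array([4.0, 9.0, 1.0, 4.0])        # noisier terms absorb more adjustment
x_closed = close_budget(x, var)
print(x_closed, x_closed @ np.array([1.0, -1.0, -1.0, -1.0]))  # second value is ~0
```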


2019 ◽  
Vol 11 (8) ◽  
pp. 986 ◽  
Author(s):  
Joanne Nightingale ◽  
Jonathan P.D. Mittaz ◽  
Sarah Douglas ◽  
Dick Dee ◽  
James Ryder ◽  
...  

Decision makers need accessible, robust evidence to introduce new policies to mitigate and adapt to climate change. There is an increasing amount of environmental information available to policy makers concerning observations and trends relating to the climate. However, these data are hosted across a multitude of websites, often with inconsistent metadata and sparse information relating to the quality, accuracy and validity of the data. Consequently, the task of comparing datasets to decide which is the most appropriate for a certain purpose is very complex and often infeasible. In support of the European Union’s Copernicus Climate Change Service (C3S) mission to provide authoritative information about the past, present and future climate in Europe and the rest of the world, each dataset provided through the service must undergo an evaluation of its climate relevance and scientific quality to help with data comparisons. This paper presents the framework for Evaluation and Quality Control (EQC) of climate data products derived from satellite and in situ observations, to be catalogued within the C3S Climate Data Store (CDS). The EQC framework will be implemented by C3S as part of its operational quality assurance programme. It builds on past and present international investment in Quality Assurance for Earth Observation initiatives and extensive user-requirements-gathering exercises, as well as a broad evaluation of over 250 data products and a more in-depth evaluation of 24 individual data products derived from satellite and in situ observations across the land, ocean and atmosphere Essential Climate Variable (ECV) domains. A prototype Content Management System (CMS) to facilitate the process of collating, evaluating and presenting the quality aspects and status of each data product to data users is also described. The development of the EQC framework has highlighted cross-domain as well as ECV-specific science knowledge gaps in relation to assessing the quality of climate data sets derived from satellite and in situ observations. We discuss 10 common priority science knowledge gaps that will require further research investment to ensure that all quality aspects of climate data sets can be ascertained and that users receive the range of information necessary to confidently select relevant products for their specific application.


2018 ◽  
Vol 10 (3) ◽  
pp. 504-523 ◽  
Author(s):  
Dong-Ik Kim ◽  
Dawei Han

Abstract Long-term climate data are vitally important in reliably assessing water resources and water-related hazards, but in situ observations are generally sparse in space and limited in time. Although there are several global datasets available as substitutes, there is a lack of comparative studies about their suitability in different parts of the world. In this study, to identify a reliable century-long climate dataset for South Korea, we first evaluate multi-decadal reanalyses (ERA-20CM, ERA-20C, ERA-40 and the NOAA 20th Century Reanalysis (20CR)) and gridded observations (CRUv3.23 and GPCCv7) for monthly mean precipitation and temperature. In the temporal and statistical comparisons, CRUv3.23 and GPCCv7 for precipitation and ERA-40 for temperature perform the best, and ERA-20C and 20CR also show meaningful agreement. ERA-20CM agrees only in a statistical sense, and its ensemble mean has difficulty representing the ensemble. This paper also shows that the applicability of each dataset may vary by region and that all products should be locally adjusted before being applied in climate impact assessments. These findings not only help to fill the knowledge gaps about these datasets in South Korea but also provide a useful guideline for the applicability of global datasets in different parts of the world.
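
The suggestion that all products "should be locally adjusted" typically amounts to scoring each product against local observations and then applying a simple bias correction. The sketch below (generic, hypothetical helpers, not the paper's method) computes basic agreement metrics and a monthly delta adjustment, assuming the series starts in January:

```python
import numpy as np

def agreement(obs, product):
    """Basic temporal/statistical agreement metrics for monthly series."""
    r = np.corrcoef(obs, product)[0, 1]
    bias = np.mean(product - obs)
    rmse = np.sqrt(np.mean((product - obs) ** 2))
    return r, bias, rmse

def delta_adjust(product, obs_clim, prod_clim):
    """Monthly delta correction: shift each calendar month of the product
    by the local observed-minus-product climatology (12 values each)."""
    month = np.arange(len(product)) % 12    # assumes the series starts in January
    return product + (np.asarray(obs_clim) - np.asarray(prod_clim))[month]
```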

