An empirical approach to the analysis of local and global climate and weather data and to the determination of CO2 sensitivities

Author(s):  
Wolf Timm

Abstract Several freely available global temperature data sets documenting more than 100 years of weather, e.g. from NASA and NOAA, together with local data, e.g. for Germany (DWD), were analyzed in order to derive meaningful empirical long-term trends based on suitable multi-annual averages. This is first demonstrated using global climate data with different approaches, whose results are highly consistent. Analyses of the German temperature and weather data and of climate data from other continents are carried out in a similar manner. For reliable forecasts it is important to determine the CO2 sensitivity as precisely as possible. A very simple method is to smooth the temperatures over 20-year windows. If these values are plotted at intervals of 10 years against the correspondingly averaged CO2 content, the temperature record (since 1961) is condensed to 5 data points and the quality of the linearity can be judged for the respective database. Both the NASA and the NOAA data show unusually good linearity with almost identical CO2 sensitivity (approx. 0.0105 K/ppm CO2). This indicates that the long-term trend in global temperature since around 1960 has been determined largely by greenhouse gases alone. If regional weather data are used as the basis, strict linearity with increasing CO2 content is also found in many cases. The analysis of the regional data supports the conclusion that each region on Earth has approximately its own CO2 sensitivity with specific statistical uncertainties: for mean global land it is 0.017 K, for Germany 0.022 K, and for Alaska as much as 0.028 K per ppm CO2.
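A minimal sketch of this averaging-and-regression procedure, assuming annual mean temperature anomalies and annual mean CO2 concentrations are already available as arrays; the variable names, placeholder data, and window handling are illustrative, not taken from the paper:

```python
import numpy as np

# Illustrative inputs (assumed): annual global mean temperature anomalies (K)
# and annual mean CO2 concentrations (ppm), indexed by calendar year.
years = np.arange(1951, 2021)                                            # hypothetical coverage
temp = 0.01 * (years - 1951) + np.random.normal(0.0, 0.1, years.size)    # placeholder data
co2 = 310.0 + 1.5 * (years - 1951)                                       # placeholder data

def co2_sensitivity(years, temp, co2, window=20, step=10):
    """Average temperature and CO2 over 20-year windows spaced 10 years apart,
    then fit T = a + s * CO2; the slope s is the sensitivity in K/ppm."""
    t_means, c_means = [], []
    for start in range(int(years.min()), int(years.max()) - window + 2, step):
        mask = (years >= start) & (years < start + window)
        if mask.sum() == window:                    # keep only complete windows
            t_means.append(temp[mask].mean())
            c_means.append(co2[mask].mean())
    slope, intercept = np.polyfit(c_means, t_means, 1)
    return slope, np.array(c_means), np.array(t_means)

sensitivity, c_pts, t_pts = co2_sensitivity(years, temp, co2)
print(f"CO2 sensitivity: {sensitivity:.4f} K/ppm from {len(c_pts)} averaged points")
```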

Author(s):  
G. Bracho-Mujica ◽  
P.T. Hayman ◽  
V.O. Sadras ◽  
B. Ostendorf

Abstract Process-based crop models are a robust approach to assess climate impacts on crop productivity and the long-term viability of cropping systems. However, these models require high-quality climate data that cannot always be obtained. To overcome this issue, the current research tested a simple method for scaling daily data and extrapolating long-term risk profiles of modelled crop yields. An extreme situation was tested, in which high-quality weather data were only available at one single location (reference site: Snowtown, South Australia, 33.78°S, 138.21°E), and limited weather data were available for 49 study sites within the Australian grain belt (spanning 26.67 to 38.02°S in latitude and 115.44 to 151.85°E in longitude). Daily weather data were perturbed with a delta factor calculated as the difference between averaged climate data from the reference site and the study sites. Risk profiles were built using a step-wise combination of adjustments, from the simplest (adjusted series of precipitation only) to the most detailed (adjusted series of precipitation, temperatures and solar radiation), and a variable record length (from 10 to 100 years). The simplest adjustment and shortest record length produced a bias of modelled grain yield risk profiles between −10% and 10% at 41% of the sites, which increased to 86% of the study sites with the most detailed adjustment and longest record (100 years). The results indicate that the quality of the extrapolation of risk profiles was more sensitive to the number of adjustments applied than to the record length per se.
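A minimal sketch of the delta-factor idea described above, in Python. The use of monthly climatological means, the variable names, and the clipping of negative precipitation are illustrative assumptions, not the paper's actual implementation:

```python
import pandas as pd

def delta_adjust(reference_daily: pd.DataFrame,
                 ref_monthly_clim: pd.Series,
                 site_monthly_clim: pd.Series,
                 variable: str = "rain") -> pd.Series:
    """Shift the reference site's daily series by the difference (delta factor)
    between the study site's and the reference site's long-term monthly means.
    reference_daily must have a DatetimeIndex; the climatologies are indexed 1-12."""
    delta = (site_monthly_clim - ref_monthly_clim).reindex(range(1, 13))
    month_of_day = reference_daily.index.month
    adjusted = reference_daily[variable] + delta.loc[month_of_day].to_numpy()
    if variable == "rain":
        adjusted = adjusted.clip(lower=0.0)   # precipitation cannot become negative
    return adjusted
```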


2021 ◽  
Author(s):  
Erik Engström ◽  
Cesar Azorin-Molina ◽  
Lennart Wern ◽  
Sverker Hellström ◽  
Christophe Sturm ◽  
...  

Here we present the progress of the first work package (WP1) of the project “Assessing centennial wind speed variability from a historical weather data rescue project in Sweden” (WINDGUST), funded by FORMAS – A Swedish Research Council for Sustainable Development (ref. 2019-00509) and previously introduced in EGU2019-17792-1 and EGU2020-3491. Under global climate change, one of the major uncertainties about the causes driving the climate variability of winds (i.e., the “stilling” phenomenon and the recent “recovery” since the 2010s) stems mainly from the short availability (i.e., only since the 1960s) and low quality of observed wind records, as stated in the Fifth Assessment Report (AR5) of the Intergovernmental Panel on Climate Change (IPCC).

WINDGUST is a joint initiative between the Swedish Meteorological and Hydrological Institute (SMHI) and the University of Gothenburg aimed at filling the key gap of short availability and low quality of wind datasets, and at improving the limited knowledge of the causes driving wind speed variability in a changing climate across Sweden.

During 2020, we worked in WP1 to rescue historical wind speed series available in the old weather archives at SMHI for the 1920s-1930s. In the process we followed the “Guidelines on Best Practices for Climate Data Rescue” of the World Meteorological Organization. Our protocol consisted of: (i) designing a template for digitization; (ii) digitizing papers by an imaging process based on scanning and photographs; and (iii) typing the wind speed numbers into the template. We will report the advances and current status, challenges, and experience gained during the development of WP1. By the turn of the year 2020/2021, eight out of thirteen selected stations spanning the years 1925 to 1948 had been scanned and digitized by three SMHI staff members over 1,660 man-hours.


2018 ◽  
Vol 10 (10) ◽  
pp. 1651 ◽  
Author(s):  
Bikhtiyar Ameen ◽  
Heiko Balzter ◽  
Claire Jarvis ◽  
Etienne Wey ◽  
Claire Thomas ◽  
...  

Several sectors need global horizontal irradiance (GHI) data for various purposes. However, the availability of long-term time series of high-quality in situ GHI measurements is limited. Therefore, several studies have tried to estimate GHI by re-analysing climate data or satellite images. Validation is essential for the later use of GHI data in regions with a scarcity of ground-recorded data. This study contributes to previous work validating HelioClim-3 version 5 (HC3v5) and the Copernicus Atmosphere Monitoring Service radiation service version 3 (CRSv3) hourly GHI from satellite-derived datasets (SDD), using nine ground stations in northeast Iraq that have not been used previously. The validation is carried out with station data at the pixel locations and at two other data points in the vicinity of each station, which is rarely done in the literature. The temporal and spatial trends of the ground data are well captured by the two SDDs. Correlation ranges from 0.94 to 0.97 in all-sky and clear-sky conditions in most cases, while for cloudy-sky conditions it is between 0.51–0.72, and 0.82–0.89 for the clearness index. The bias is negative in most cases, with three positive exceptions. It ranges from −7% to 4% and from −8% to 3% for all-sky and clear-sky conditions, respectively. For cloudy-sky conditions, the bias is positive and differs from one station to another, from 16% to 85%. The root mean square error (RMSE) ranges between 12–20% and 8–12% for all-sky and clear-sky conditions, respectively. In contrast, the RMSE is significantly higher in cloudy-sky conditions: above 56%. The bias and RMSE for the clearness index are nearly the same as those for GHI under all-sky conditions. The spatial variability of the hourly GHI SDD differs only by 2%, depending on the station location compared to the data points around each station. The variability of the two SDDs is quite similar to that of the ground data, based on the mean and standard deviation of hourly GHI in a month. Having station data at different timescales and the small number of stations with GHI records in the region are the main limitations of this analysis.
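A minimal sketch of the kind of validation statistics reported above (relative bias, relative RMSE, and correlation between satellite-derived and ground-measured hourly GHI). The array names and the normalisation of the percentages by the ground mean are assumptions for illustration:

```python
import numpy as np

def validation_stats(ghi_satellite: np.ndarray, ghi_ground: np.ndarray) -> dict:
    """Relative bias (%), relative RMSE (%) and Pearson correlation of
    satellite-derived hourly GHI against coincident ground measurements."""
    diff = ghi_satellite - ghi_ground
    mean_ground = ghi_ground.mean()
    return {
        "bias_percent": 100.0 * diff.mean() / mean_ground,
        "rmse_percent": 100.0 * np.sqrt(np.mean(diff ** 2)) / mean_ground,
        "correlation": np.corrcoef(ghi_satellite, ghi_ground)[0, 1],
    }
```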


2021 ◽  
Vol 8 (1) ◽  
Author(s):  
Mario Krapp ◽  
Robert M. Beyer ◽  
Stephen L. Edmundson ◽  
Paul J. Valdes ◽  
Andrea Manica

Abstract Curated global climate data have been generated from climate model outputs for the last 120,000 years, whereas reconstructions going back even further have been lacking due to the high computational cost of climate simulations. Here, we present a statistically derived global terrestrial climate dataset for every 1,000 years of the last 800,000 years. It is based on a set of linear regressions between 72 existing HadCM3 climate simulations of the last 120,000 years and external forcings consisting of CO2, orbital parameters, and land type. The estimated climatologies were interpolated to 0.5° resolution and bias-corrected using present-day climate. The data compare well with the original HadCM3 simulations and with long-term proxy records. Our dataset includes monthly temperature, precipitation, cloud cover, and 17 bioclimatic variables. In addition, we derived net primary productivity and global biome distributions using the BIOME4 vegetation model. The data are a relevant source for different research areas, such as archaeology or ecology, to study the long-term effects of glacial-interglacial climate cycles for periods beyond the last 120,000 years.
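A minimal sketch of the statistical-emulator idea (a linear regression from external forcings to simulated climate at each grid cell, then prediction for time slices without simulations). The forcing variables, placeholder data, and per-cell least-squares fit shown here are illustrative assumptions, not the authors' exact procedure:

```python
import numpy as np

# Assumed setup: 72 snapshot simulations on a coarse (nlat, nlon) grid.
n_sim, nlat, nlon = 72, 36, 72
simulated_temp = np.random.randn(n_sim, nlat, nlon)   # placeholder HadCM3-like output
forcings = np.column_stack([                          # placeholder forcings per snapshot
    np.random.uniform(180, 280, n_sim),                # CO2 (ppm)
    np.random.uniform(22.0, 24.5, n_sim),              # obliquity (deg)
    np.random.uniform(0.0, 0.06, n_sim),               # eccentricity
])

# Fit one multiple linear regression per grid cell via least squares.
X = np.column_stack([np.ones(n_sim), forcings])        # add intercept column
y = simulated_temp.reshape(n_sim, -1)                  # (n_sim, nlat*nlon)
coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)         # (4, nlat*nlon)

# Predict a climatology for a new time slice from its forcings.
new_forcing = np.array([1.0, 220.0, 23.4, 0.02])       # intercept, CO2, obliquity, ecc.
predicted = (new_forcing @ coeffs).reshape(nlat, nlon)
```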


2019 ◽  
Author(s):  
Andrea K. Steiner ◽  
Florian Ladstädter ◽  
Chi O. Ao ◽  
Hans Gleisner ◽  
Shu-Peng Ho ◽  
...  

Abstract. Atmospheric climate monitoring requires high-quality observations conforming to the criteria of the Global Climate Observing System (GCOS). Radio occultation (RO) data based on Global Positioning System (GPS) signals have been available since 2001 from several satellite missions with global coverage, high accuracy, and high vertical resolution in the troposphere and lower stratosphere. We assess the consistency and long-term stability of multi-satellite RO observations for use as climate data records. As a measure of long-term stability, we quantify the structural uncertainty of RO data products arising from different processing schemes. We analyze atmospheric variables from bending angle to temperature for four RO missions, CHAMP, Formosat-3/COSMIC, GRACE, and Metop, provided by five data centers. The comparisons are based on profile-to-profile differences aggregated to monthly means. Structural uncertainty in trends is found to be lowest from 8 km to 25 km altitude globally for all inspected RO variables and missions. For temperature, it is < 0.05 K per decade in the global mean and < 0.1 K per decade at all latitudes. Above 25 km, the uncertainty increases for CHAMP, while data from the other missions are based on advanced receivers and are usable to higher altitudes for climate trend studies: dry temperature to 35 km, refractivity to 40 km, and bending angle to 50 km. Larger differences in RO data at high altitudes and latitudes are mainly due to different implementation choices in the retrievals. The intercomparison helped to further enhance the maturity of the RO record and confirms the climate quality of multi-satellite RO observations towards establishing a GCOS climate data record.
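A minimal sketch of how structural uncertainty of trends could be quantified from multi-center records (a linear trend per processing center and the spread of those trends). The placeholder data and the use of the trend range as the uncertainty measure are illustrative assumptions, not the authors' exact metric:

```python
import numpy as np

# Placeholder: monthly temperature anomalies (K) from 5 processing centers over 15 years.
n_centers, n_months = 5, 180
years = 2002 + np.arange(n_months) / 12.0
anomalies = 0.02 * (years - 2002) + np.random.normal(0, 0.3, (n_centers, n_months))

def center_trends(monthly_anomaly, decimal_years):
    """Linear trend in K per decade for each center's monthly anomaly series."""
    trends = []
    for series in monthly_anomaly:
        slope, _ = np.polyfit(decimal_years, series, 1)   # K per year
        trends.append(10.0 * slope)
    return np.array(trends)

trends = center_trends(anomalies, years)
structural_uncertainty = trends.max() - trends.min()   # spread across processing schemes
print(f"trend spread across centers: {structural_uncertainty:.3f} K per decade")
```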


Eos ◽  
2021 ◽  
Vol 102 ◽  
Author(s):  
Sarah Derouin

Gridded climate data sets are just as effective as weather station data at assessing human mortality risk related to heat and cold, researchers suggest.


2015 ◽  
Vol 25 (07) ◽  
pp. 1550029 ◽  
Author(s):  
Enrique Castillo ◽  
Diego Peteiro-Barral ◽  
Bertha Guijarro Berdiñas ◽  
Oscar Fontenla-Romero

This paper presents a novel distributed one-class classification approach based on an extension of the ν-SVM method, thus permitting its application to big data sets. In our method we consider several one-class classifiers, each one trained on a local data partition on a processor, and the goal is to find a global model. The cornerstone of this method is the novel mathematical formulation that makes the optimization problem separable whilst avoiding some data points considered as outliers in the final solution. This is particularly important because the decision region generated by the method is unaffected by the position of the outliers and fits the shape of the data more precisely. Another interesting property is that, although built in parallel, the classifiers exchange data during learning in order to improve their individual specialization. Experimental results on different datasets demonstrate the good accuracy of the decision regions produced by the proposed method in comparison with other well-known classifiers, while saving training time thanks to its distributed nature.
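The sketch below only illustrates the general setting of per-partition one-class ν-SVMs combined into a global decision; it uses scikit-learn's OneClassSVM and a simple majority vote, which is not the authors' separable joint formulation:

```python
import numpy as np
from sklearn.svm import OneClassSVM

# Placeholder "normal" training data split across 4 processors (partitions).
rng = np.random.default_rng(0)
partitions = np.array_split(rng.normal(0, 1, size=(3000, 2)), 4)

# Train one nu-SVM one-class classifier per local partition (illustration only;
# the paper solves a separable joint optimization instead of a plain ensemble).
local_models = [OneClassSVM(kernel="rbf", nu=0.05, gamma="scale").fit(part)
                for part in partitions]

def global_predict(X, models, min_votes=None):
    """Majority vote over the local one-class decisions: +1 inlier, -1 outlier."""
    votes = np.stack([m.predict(X) for m in models])        # (n_models, n_samples)
    inlier_votes = (votes == 1).sum(axis=0)
    threshold = (len(models) // 2 + 1) if min_votes is None else min_votes
    return np.where(inlier_votes >= threshold, 1, -1)

test = np.array([[0.0, 0.0], [6.0, 6.0]])
print(global_predict(test, local_models))   # expected: inlier near the origin, outlier far away
```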


2007 ◽  
Vol 104 (18) ◽  
pp. 7461-7465 ◽  
Author(s):  
Ronald E. Thresher ◽  
J. A. Koslow ◽  
A. K. Morison ◽  
D. C. Smith

The oceanographic consequences of climate change are increasingly well documented, but the biological impacts of this change on marine species much less so, in large part because of a scarcity of long-term data sets. Using otolith analysis, we reconstructed historical changes in annual growth rates for the juveniles of eight long-lived fish species in the southwest Pacific, from as early as 1861. Six of the eight species show significant changes in growth rates during the last century, with the pattern differing systematically with depth. Increasing temperatures near the ocean surface correlate with increasing growth rates for species found at depths <250 m, whereas growth rates of deep-water (>1,000 m) species have declined substantially during the last century, which correlates with evidence of long-term cooling at these depths. The observations suggest that global climate change has enhanced some elements of productivity of the shallow-water stocks but has also reduced the productivity, and possibly the resilience, of the already slow-growing deep-water species.


2017 ◽  
Vol 17 (24) ◽  
pp. 15069-15093 ◽  
Author(s):  
Elizabeth C. Weatherhead ◽  
Jerald Harder ◽  
Eduardo A. Araujo-Pradere ◽  
Greg Bodeker ◽  
Jason M. English ◽  
...  

Abstract. Sensors on satellites provide an unprecedented understanding of the Earth's climate system by measuring incoming solar radiation and by making both passive and active observations of the entire Earth with outstanding spatial and temporal coverage. A common challenge with satellite observations is to quantify their ability to provide well-calibrated, long-term, stable records of the parameters they measure. Ground-based intercomparisons offer some insight, while reference observations and internal calibrations give further assistance for understanding long-term stability. A valuable tool for evaluating and developing long-term records from satellites is the examination of data from overlapping satellite missions. This paper addresses how the length of overlap affects the ability to identify an offset or a drift in the overlap of data between two sensors. Ozone and temperature data sets are used as examples showing that overlap data can differ by latitude and can change over time. New results are presented for the general case of sensor overlap by using Solar Radiation and Climate Experiment (SORCE) Spectral Irradiance Monitor (SIM) and Solar Stellar Irradiance Comparison Experiment (SOLSTICE) solar irradiance data as an example. To achieve a 1 % uncertainty in estimating the offset for these two instruments' measurement of the Mg II core (280 nm) requires approximately 5 months of overlap. For relative drift to be identified within 0.1 % yr−1 uncertainty (0.00008 W m−2 nm−1 yr−1), the overlap for these two satellites would need to be 2.5 years. Additional overlap of satellite measurements is needed if, as is the case for solar monitoring, unexpected jumps occur adding uncertainty to both offsets and drifts; the additional length of time needed to account for a single jump in the overlap data may be as large as 50 % of the original overlap period in order to achieve the same desired confidence in the stability of the merged data set. Results presented here are directly applicable to satellite Earth observations. Approaches for Earth observations offer additional challenges due to the complexity of the observations, but Earth observations may also benefit from ancillary observations taken from ground-based and in situ sources. Difficult choices need to be made when monitoring approaches are considered; we outline some attempts at optimizing networks based on economic principles. The careful evaluation of monitoring overlap is important to the appropriate application of observational resources and to the usefulness of current and future observations.
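A simplified sketch of the reasoning behind the required overlap lengths: the standard error of a mean offset shrinks as 1/sqrt(N), and the standard error of a fitted drift follows the least-squares slope-variance formula. The white-noise assumption and the numbers in the example are illustrative only and ignore the autocorrelation and jumps discussed in the paper:

```python
import numpy as np

def months_for_offset(sigma_monthly, target_uncertainty):
    """Months of overlap needed so that the standard error of the mean offset between
    two sensors drops to the target, assuming independent monthly differences with
    standard deviation sigma_monthly (white-noise idealisation)."""
    return int(np.ceil((sigma_monthly / target_uncertainty) ** 2))

def years_for_drift(sigma_monthly, target_slope_uncertainty_per_year):
    """Years of overlap needed so that the standard error of a fitted linear drift drops
    to the target, using var(slope) = sigma^2 / sum((t - mean(t))^2) with monthly sampling."""
    for n_months in range(6, 1200):
        t = np.arange(n_months) / 12.0                        # time in years
        se_slope = sigma_monthly / np.sqrt(np.sum((t - t.mean()) ** 2))
        if se_slope <= target_slope_uncertainty_per_year:
            return n_months / 12.0
    return None

# Illustrative numbers only (not the SORCE SIM/SOLSTICE values from the paper):
print(months_for_offset(sigma_monthly=0.022, target_uncertainty=0.01))
print(years_for_drift(sigma_monthly=0.022, target_slope_uncertainty_per_year=0.001))
```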


2020 ◽  
Vol 13 (2) ◽  
pp. 789-819 ◽  
Author(s):  
Maximilian Reuter ◽  
Michael Buchwitz ◽  
Oliver Schneising ◽  
Stefan Noël ◽  
Heinrich Bovensmann ◽  
...  

Abstract. Satellite retrievals of column-averaged dry-air mole fractions of carbon dioxide (CO2) and methane (CH4), denoted XCO2 and XCH4, respectively, have been used in recent years to obtain information on natural and anthropogenic sources and sinks and for other applications such as comparisons with climate models. Here we present new data sets based on merging several individual satellite data products in order to generate consistent long-term climate data records (CDRs) of these two Essential Climate Variables (ECVs). These ECV CDRs, which cover the time period 2003–2018, have been generated using an ensemble of data products from the satellite sensors SCIAMACHY/ENVISAT and TANSO-FTS/GOSAT and (for XCO2) for the first time also including data from the Orbiting Carbon Observatory 2 (OCO-2) satellite. Two types of products have been generated: (i) Level 2 (L2) products generated with the latest version of the ensemble median algorithm (EMMA) and (ii) Level 3 (L3) products obtained by gridding the corresponding L2 EMMA products to obtain a monthly 5°×5° data product in Obs4MIPs (Observations for Model Intercomparisons Project) format. The L2 products consist of daily NetCDF (Network Common Data Form) files, which contain in addition to the main parameters, i.e., XCO2 or XCH4, corresponding uncertainty estimates for random and potential systematic uncertainties and the averaging kernel for each single (quality-filtered) satellite observation. We describe the algorithms used to generate these data products and present quality assessment results based on comparisons with Total Carbon Column Observing Network (TCCON) ground-based retrievals. We found that the XCO2 Level 2 data set at the TCCON validation sites can be characterized by the following figures of merit (the corresponding values for the Level 3 product are listed in brackets) – single-observation random error (1σ): 1.29 ppm (monthly: 1.18 ppm); global bias: 0.20 ppm (0.18 ppm); and spatiotemporal bias or relative accuracy (1σ): 0.66 ppm (0.70 ppm). The corresponding values for the XCH4 products are single-observation random error (1σ): 17.4 ppb (monthly: 8.7 ppb); global bias: −2.0 ppb (−2.9 ppb); and spatiotemporal bias (1σ): 5.0 ppb (4.9 ppb). It has also been found that the data products exhibit very good long-term stability as no significant long-term bias trend has been identified. The new data sets have also been used to derive annual XCO2 and XCH4 growth rates, which are in reasonable to good agreement with growth rates from the National Oceanic and Atmospheric Administration (NOAA) based on marine surface observations. The presented ECV data sets are available (from early 2020 onwards) via the Climate Data Store (CDS, https://cds.climate.copernicus.eu/, last access: 10 January 2020) of the Copernicus Climate Change Service (C3S, https://climate.copernicus.eu/, last access: 10 January 2020).
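A minimal sketch of the Level 2 to Level 3 gridding step described above (averaging quality-filtered soundings of one month into a 5°×5° grid). The input arrays, the synthetic example, and the plain unweighted mean are illustrative assumptions, not the exact Obs4MIPs processing:

```python
import numpy as np

def grid_monthly_l3(lat, lon, xco2, cell=5.0):
    """Average individual XCO2 soundings from one month into a cell x cell degree grid."""
    nlat, nlon = int(180 / cell), int(360 / cell)
    lat_idx = np.clip(((lat + 90.0) / cell).astype(int), 0, nlat - 1)
    lon_idx = np.clip(((lon + 180.0) / cell).astype(int), 0, nlon - 1)
    sums = np.zeros((nlat, nlon))
    counts = np.zeros((nlat, nlon))
    np.add.at(sums, (lat_idx, lon_idx), xco2)
    np.add.at(counts, (lat_idx, lon_idx), 1)
    with np.errstate(invalid="ignore"):
        return np.where(counts > 0, sums / counts, np.nan)   # empty cells stay NaN

# Example with synthetic soundings for one month:
rng = np.random.default_rng(1)
lat = rng.uniform(-90, 90, 10000)
lon = rng.uniform(-180, 180, 10000)
xco2 = rng.normal(400.0, 1.0, 10000)
l3_field = grid_monthly_l3(lat, lon, xco2)
```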

