Methodological aspects of a pattern-scaling approach to produce global fields of monthly means of daily maximum and minimum temperature

2013 ◽  
Vol 6 (3) ◽  
pp. 4833-4882
Author(s):  
S. Kremser ◽  
G. E. Bodeker ◽  
J. Lewis

Abstract. A Climate Pattern-Scaling Model (CPSM) that simulates global patterns of climate change, for a prescribed emissions scenario, is described. A CPSM works by quantitatively establishing the statistical relationship between a climate variable at a specific location (e.g. daily maximum surface temperature, Tmax) and one or more predictor time series (e.g. global mean surface temperature, Tglobal) – referred to as the "training" of the CPSM. This training uses a regression model to derive fit-coefficients that describe the statistical relationship between the predictor time series and the target climate variable time series. Once that relationship has been determined, and given the predictor time series for any greenhouse gas (GHG) emissions scenario, the change in the climate variable of interest can be reconstructed – referred to as the "application" of the CPSM. The advantage of using a CPSM rather than a typical atmosphere-ocean global climate model (AOGCM) is that the predictor time series required by the CPSM can usually be generated quickly using a simple climate model (SCM) for any prescribed GHG emissions scenario and then applied to generate global fields of the climate variable of interest. The training can be performed either on historical measurements or on output from an AOGCM. Using model output from 21st century simulations has the advantage that the climate change signal is more pronounced than in historical data and therefore a more robust statistical relationship is obtained. The disadvantage of using AOGCM output is that the CPSM training might be compromised by any AOGCM inadequacies. For the purposes of exploring the various methodological aspects of the CPSM approach, AOGCM output was used in this study to train the CPSM. These investigations of the CPSM methodology focus on monthly mean fields of daily temperature extremes (Tmax and Tmin). 
Key conclusions are: (1) overall, the CPSM trained on simulations based on the Representative Concentration Pathway (RCP) 8.5 emissions scenario is able to reproduce AOGCM simulations of Tmax and Tmin based on predictor time series from an RCP 4.5 emissions scenario; (2) access to hemisphere average land and ocean temperatures as predictors improves the variance that can be explained, particularly over the oceans; (3) regression model fit-coefficients derived from individual simulations based on the RCP 2.6, 4.5 and 8.5 emissions scenarios agree well over most regions of the globe (the Arctic is the exception); (4) training the CPSM on concatenated time series from an ensemble of simulations does not result in fit-coefficients that explain significantly more of the variance than an approach that weights results based on single simulation fits; and (5) the inclusion of a linear time dependence in the regression model fit-coefficients improves the variance explained, primarily over the oceans.
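The training/application split described above can be sketched in a few lines. This is a hypothetical single-predictor illustration on synthetic data, not the authors' multi-predictor implementation:

```python
import numpy as np

# Hypothetical single-predictor sketch of CPSM "training" and "application".
# All series are synthetic; the real CPSM uses several predictor time series.
rng = np.random.default_rng(0)

# Training scenario: predictor (global mean surface temperature anomaly)
# and target (Tmax at one grid point) from a strong-warming run
t_global_train = np.linspace(0.0, 4.0, 100)
tmax_train = 1.5 * t_global_train + 0.2 + rng.normal(0.0, 0.05, 100)

# Training: least-squares fit of the target on [1, predictor]
X = np.column_stack([np.ones_like(t_global_train), t_global_train])
coeffs, *_ = np.linalg.lstsq(X, tmax_train, rcond=None)

# Application: reuse the fit coefficients with a predictor series from a
# different (weaker-warming) scenario, e.g. produced by a simple climate model
t_global_apply = np.linspace(0.0, 2.0, 100)
tmax_reconstructed = coeffs[0] + coeffs[1] * t_global_apply
```

The fitted scaling coefficient recovers the prescribed value to within the noise, and the application step needs only the new predictor series, which is the efficiency argument made above.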

2014 ◽  
Vol 7 (1) ◽  
pp. 249-266 ◽  
Author(s):  
S. Kremser ◽  
G. E. Bodeker ◽  
J. Lewis

Abstract. A Climate Pattern-Scaling Model (CPSM) that simulates global patterns of climate change, for a prescribed emissions scenario, is described. A CPSM works by quantitatively establishing the statistical relationship between a climate variable at a specific location (e.g. daily maximum surface temperature, Tmax) and one or more predictor time series (e.g. global mean surface temperature, Tglobal) – referred to as the "training" of the CPSM. This training uses a regression model to derive fit coefficients that describe the statistical relationship between the predictor time series and the target climate variable time series. Once that relationship has been determined, and given the predictor time series for any greenhouse gas (GHG) emissions scenario, the change in the climate variable of interest can be reconstructed – referred to as the "application" of the CPSM. The advantage of using a CPSM rather than a typical atmosphere–ocean global climate model (AOGCM) is that the predictor time series required by the CPSM can usually be generated quickly using a simple climate model (SCM) for any prescribed GHG emissions scenario and then applied to generate global fields of the climate variable of interest. The training can be performed either on historical measurements or on output from an AOGCM. Using model output from 21st century simulations has the advantage that the climate change signal is more pronounced than in historical data and therefore a more robust statistical relationship is obtained. The disadvantage of using AOGCM output is that the CPSM training might be compromised by any AOGCM inadequacies. For the purposes of exploring the various methodological aspects of the CPSM approach, AOGCM output was used in this study to train the CPSM. These investigations of the CPSM methodology focus on monthly mean fields of daily temperature extremes (Tmax and Tmin). 
The methodological aspects of the CPSM explored in this study include (1) investigation of the advantage gained in having five predictor time series over having only one predictor time series, (2) investigation of the time dependence of the fit coefficients and (3) investigation of the dependence of the fit coefficients on GHG emissions scenario. Key conclusions are (1) overall, the CPSM trained on simulations based on the Representative Concentration Pathway (RCP) 8.5 emissions scenario is able to reproduce AOGCM simulations of Tmax and Tmin based on predictor time series from an RCP 4.5 emissions scenario; (2) access to hemisphere average land and ocean temperatures as predictors improves the variance that can be explained, particularly over the oceans; (3) regression model fit coefficients derived from individual simulations based on the RCP 2.6, 4.5 and 8.5 emissions scenarios agree well over most regions of the globe (the Arctic is the exception); (4) training the CPSM on concatenated time series from an ensemble of simulations does not result in fit coefficients that explain significantly more of the variance than an approach that weights results based on single simulation fits; and (5) the inclusion of a linear time dependence in the regression model fit coefficients improves the variance explained, primarily over the oceans.
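Conclusion (5), the linear time dependence of the fit coefficients, amounts to letting the scaling factor drift with time. A minimal sketch; the synthetic data and single predictor are assumptions for illustration:

```python
import numpy as np

# Hypothetical sketch of conclusion (5): letting the fit coefficient drift
# linearly in time, Tmax ~= (a + b*t) * Tglobal + c. Synthetic data and a
# single predictor are assumptions for illustration.
rng = np.random.default_rng(1)
years = np.arange(2000, 2100)
t_norm = (years - years[0]) / (years[-1] - years[0])   # time scaled to [0, 1]
t_global = np.linspace(0.0, 4.0, years.size)

# Target whose sensitivity to the predictor increases over the century
tmax = (1.2 + 0.5 * t_norm) * t_global + rng.normal(0.0, 0.05, years.size)

# Regression with a time-dependent coefficient on the predictor
X = np.column_stack([t_global, t_norm * t_global, np.ones_like(t_global)])
(a, b, c), *_ = np.linalg.lstsq(X, tmax, rcond=None)
```

The recovered drift term `b` is what a time-independent fit would miss, which is why the time-dependent form explains more variance where sensitivities evolve, e.g. over the oceans.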


2016 ◽  
Vol 55 (3) ◽  
pp. 811-826 ◽  
Author(s):  
John R. Christy ◽  
Richard T. McNider

Abstract. Three time series of average summer [June–August (JJA)] daily maximum temperature (TMax) are developed for three interior regions of Alabama from stations with varying periods of record and unknown inhomogeneities. The time frame is 1883–2014. Inhomogeneities for each station's time series are determined from pairwise comparisons with no use of station metadata other than location. The time series for the three adjoining regions are constructed separately and are then combined as a whole assuming trends over 132 yr will have little spatial variation either intraregionally or interregionally for these spatial scales. Varying the parameters of the construction methodology creates 333 time series with a central trend value based on the largest group of stations of −0.07 °C decade⁻¹ with a best-guess estimate of measurement uncertainty from −0.12 to −0.02 °C decade⁻¹. This best-guess result is insignificantly different (0.01 °C decade⁻¹) from a similar regional calculation using NOAA's divisional dataset based on daily data from the Global Historical Climatology Network (nClimDiv) beginning in 1895. Summer TMax is a better proxy, when compared with daily minimum temperature and thus daily average temperature, for the deeper tropospheric temperature (where the enhanced greenhouse signal is maximized) as a result of afternoon convective mixing. Thus, TMax more closely represents a critical climate parameter: atmospheric heat content. Comparison between JJA TMax and deep tropospheric temperature anomalies indicates modest agreement (r² = 0.51) for interior Alabama, while agreement for the conterminous United States as given by TMax from the nClimDiv dataset is much better (r² = 0.86). Seventy-seven CMIP5 climate model runs are examined for Alabama and indicate no skill at replicating long-term temperature and precipitation changes since 1895.
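The headline trend figure is an ordinary least-squares slope expressed per decade. A sketch of that calculation on a synthetic series (the Alabama data themselves are not reproduced here):

```python
import numpy as np

# Hypothetical sketch of the headline trend calculation: an ordinary
# least-squares slope of a JJA mean TMax series, expressed per decade.
# The series below is synthetic, not the Alabama data.
rng = np.random.default_rng(2)
years = np.arange(1883, 2015)                      # 132 summers

# Synthetic series built around a -0.07 degC/decade cooling trend
tmax_jja = 33.0 - 0.007 * (years - years[0]) + rng.normal(0.0, 0.6, years.size)

slope_per_year = np.polyfit(years, tmax_jja, 1)[0]
trend_per_decade = 10.0 * slope_per_year           # degC per decade
```

Repeating this fit across the 333 construction variants is what produces the central value and the spread reported above.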


2012 ◽  
Vol 5 (2) ◽  
pp. 999-1033 ◽  
Author(s):  
G. E. Bodeker ◽  
B. Hassler ◽  
P. J. Young ◽  
R. W. Portmann

Abstract. High vertical resolution ozone measurements from eight different satellite-based instruments have been merged with data from the global ozonesonde network to calculate monthly mean ozone values in 5° latitude zones. These "Tier 0" ozone number densities and ozone mixing ratios are provided on 70 altitude levels (1 to 70 km) and on 70 pressure levels spaced ~1 km apart (878.4 hPa to 0.046 hPa). The Tier 0 data are sparse and do not cover the entire globe or altitude range. To provide a gap-free database, a least squares regression model is fitted to the Tier 0 data and then evaluated globally. The regression model fit coefficients are expanded in Legendre polynomials to account for latitudinal structure, and in Fourier series to account for seasonality. Regression model fit coefficient patterns, which are two-dimensional fields indexed by latitude and month of the year, from the N-th vertical level serve as an initial guess for the fit at the (N+1)-th vertical level. The initial guess field for the first fit level (20 km/58.2 hPa) was derived by applying the regression model to total column ozone fields. Perturbations away from the initial guess are captured through the Legendre and Fourier expansions. By applying a single fit at each level, and using the approach of allowing the regression fits to change only slightly from one level to the next, the regression is less sensitive to measurement anomalies at individual stations or to individual satellite-based instruments. Particular attention is paid to ensuring that the low ozone abundances in the polar regions are captured. By summing different combinations of contributions from different regression model basis functions, four different "Tier 1" databases have been compiled for different intended uses. 
This database is suitable for assessing ozone fields from chemistry-climate model simulations or for providing the ozone boundary conditions for global climate model simulations that do not treat stratospheric chemistry interactively.
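The basis expansion described above (Legendre polynomials in latitude, a Fourier series in month) can be sketched as follows; the truncation orders and coefficient values are illustrative assumptions, not those of the actual database:

```python
import numpy as np
from numpy.polynomial import legendre

# Hypothetical sketch of the basis expansion: a fit-coefficient pattern
# indexed by latitude and month, expanded in Legendre polynomials
# (latitudinal structure) and a Fourier series (seasonality).
# Truncation orders and coefficient values are illustrative only.
lats = np.arange(-87.5, 90.0, 5.0)            # centres of 5-degree zones
months = np.arange(1, 13)

x = np.sin(np.deg2rad(lats))                  # map latitude into [-1, 1]
phase = 2.0 * np.pi * (months - 0.5) / 12.0

# Basis: Legendre polynomials L0..L2 and the annual Fourier harmonic
leg = np.stack([legendre.legval(x, np.eye(3)[k]) for k in range(3)])   # (3, 36)
four = np.stack([np.ones_like(phase), np.cos(phase), np.sin(phase)])   # (3, 12)

# Expansion coefficients c[i, j] multiply L_i(x) * F_j(month)
coefs = np.zeros((3, 3))
coefs[0, 0] = 1.0        # latitude- and season-independent mean term
coefs[1, 2] = 0.4        # hemispherically antisymmetric seasonal term

# Evaluate the pattern everywhere: pattern[lat, month] = sum_ij c_ij L_i F_j
pattern = leg.T @ coefs @ four
```

Because the pattern is a sum over basis-function contributions, summing different subsets of the `coefs` entries is what yields the different Tier 1 products.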


2013 ◽  
Vol 5 (1) ◽  
pp. 31-43 ◽  
Author(s):  
G. E. Bodeker ◽  
B. Hassler ◽  
P. J. Young ◽  
R. W. Portmann

Abstract. High vertical resolution ozone measurements from eight different satellite-based instruments have been merged with data from the global ozonesonde network to calculate monthly mean ozone values in 5° latitude zones. These "Tier 0" ozone number densities and ozone mixing ratios are provided on 70 altitude levels (1 to 70 km) and on 70 pressure levels spaced ~1 km apart (878.4 hPa to 0.046 hPa). The Tier 0 data are sparse and do not cover the entire globe or altitude range. To provide a gap-free database, a least squares regression model is fitted to the Tier 0 data and then evaluated globally. The regression model fit coefficients are expanded in Legendre polynomials to account for latitudinal structure, and in Fourier series to account for seasonality. Regression model fit coefficient patterns, which are two-dimensional fields indexed by latitude and month of the year, from the N-th vertical level serve as an initial guess for the fit at the (N+1)-th vertical level. The initial guess field for the first fit level (20 km/58.2 hPa) was derived by applying the regression model to total column ozone fields. Perturbations away from the initial guess are captured through the Legendre and Fourier expansions. By applying a single fit at each level, and using the approach of allowing the regression fits to change only slightly from one level to the next, the regression is less sensitive to measurement anomalies at individual stations or to individual satellite-based instruments. Particular attention is paid to ensuring that the low ozone abundances in the polar regions are captured. By summing different combinations of contributions from different regression model basis functions, four different "Tier 1" databases have been compiled for different intended uses. 
This database is suitable for assessing ozone fields from chemistry-climate model simulations or for providing the ozone boundary conditions for global climate model simulations that do not treat stratospheric chemistry interactively.


2021 ◽  
Vol 11 (14) ◽  
pp. 6594
Author(s):  
Yu-Chia Hsu

The interdisciplinary nature of sports and the presence of various systemic and non-systemic factors introduce challenges in predicting sports match outcomes using a single disciplinary approach. In contrast to previous studies that use sports performance metrics and statistical models, this study is the first to apply a deep learning approach from financial time series modeling to predict sports match outcomes. The proposed approach has two main components: a convolutional neural network (CNN) classifier for implicit pattern recognition and a logistic regression model for match outcome judgment. First, the raw data used in the prediction are derived from the betting market odds and actual scores of each game, which are transformed into sports candlesticks. Second, the CNN is used to classify the candlestick time series on a graphical basis. To this end, the original 1D time series are encoded into 2D matrix images using the Gramian angular field and are then fed into the CNN classifier. In this way, the winning probability of each matchup team can be derived from historically implied behavioral patterns. Third, to further account for the differences between strong and weak teams, the winning probability produced by the CNN classifier is adjusted using the logistic regression model, which then makes the final judgment regarding the match outcome. We empirically test this approach using data from 18,944 National Football League games spanning 32 years and find that using the individual historical data of each team in the CNN classifier for pattern recognition is better than using the data of all teams. The CNN in conjunction with the logistic regression judgment model outperforms the CNN in conjunction with SVM, naïve Bayes, AdaBoost, J48, and random forest, and its accuracy surpasses that of the betting market prediction.
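The Gramian angular field step, encoding a 1D series as a 2D image, can be sketched directly; the summation-field variant and the toy odds series below are assumptions for illustration:

```python
import numpy as np

# Hypothetical sketch of the Gramian angular field (GAF) encoding used to
# turn a 1D series into a 2D image for the CNN. The summation-field variant
# and the toy odds series are illustrative assumptions.
def gramian_angular_field(series):
    s = np.asarray(series, dtype=float)
    # Rescale to [-1, 1] so each value maps to a polar angle via arccos
    s = 2.0 * (s - s.min()) / (s.max() - s.min()) - 1.0
    phi = np.arccos(np.clip(s, -1.0, 1.0))
    # Summation field: G[i, j] = cos(phi_i + phi_j), a symmetric 2D image
    return np.cos(phi[:, None] + phi[None, :])

odds = np.array([1.8, 1.9, 2.1, 2.0, 2.4, 2.2])   # toy closing-odds series
img = gramian_angular_field(odds)
```

The resulting matrix preserves temporal correlations as spatial structure, which is what lets a 2D convolutional classifier pick up patterns that a pointwise model on the raw series would miss.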


2012 ◽  
Vol 25 (23) ◽  
pp. 8238-8258 ◽  
Author(s):  
Johannes Mülmenstädt ◽  
Dan Lubin ◽  
Lynn M. Russell ◽  
Andrew M. Vogelmann

Abstract. Long time series of Arctic atmospheric measurements are assembled into meteorological categories that can serve as test cases for climate model evaluation. The meteorological categories are established by applying an objective k-means clustering algorithm to 11 years of standard surface-meteorological observations collected from 1 January 2000 to 31 December 2010 at the North Slope of Alaska (NSA) site of the U.S. Department of Energy Atmospheric Radiation Measurement Program (ARM). Four meteorological categories emerge. These meteorological categories constitute the first classification by meteorological regime of a long time series of Arctic meteorological conditions. The synoptic-scale patterns associated with each category, which include well-known synoptic features such as the Aleutian low and Beaufort Sea high, are used to explain the conditions at the NSA site. Cloud properties, which are not used as inputs to the k-means clustering, are found to differ significantly between the regimes and are also well explained by the synoptic-scale influences in each regime. Since the data available at the ARM NSA site include a wealth of cloud observations, this classification is well suited for model–observation comparison studies. Each category comprises an ensemble of test cases covering a representative range in variables describing atmospheric structure, moisture content, and cloud properties. This classification is offered as a complement to standard case-study evaluation of climate model parameterizations, in which models are compared against limited realizations of the Earth–atmosphere system (e.g., from detailed aircraft measurements).
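The k-means step can be sketched with a minimal implementation; the two-variable synthetic observations and deterministic seeding below are illustrative assumptions (the study clusters 11 years of multivariate surface observations and finds four categories):

```python
import numpy as np

# Hypothetical sketch of the k-means step: synthetic two-variable surface
# observations (temperature, wind speed) grouped into regimes. The study
# clusters 11 years of multivariate observations and finds four categories.
def kmeans(data, k, n_iter=50):
    # Deterministic seeding for reproducibility: evenly spaced observations
    centroids = data[np.linspace(0, len(data) - 1, k).astype(int)].copy()
    for _ in range(n_iter):
        # Assign each observation to the nearest centroid (squared distance)
        d2 = ((data[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=-1)
        labels = d2.argmin(axis=1)
        # Move each centroid to the mean of its assigned observations
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = data[labels == j].mean(axis=0)
    return labels, centroids

rng = np.random.default_rng(1)
# Two synthetic regimes: cold-and-windy vs. mild-and-calm observations
obs = np.vstack([
    rng.normal([-25.0, 8.0], 1.0, size=(50, 2)),
    rng.normal([-5.0, 2.0], 1.0, size=(50, 2)),
])
labels, centroids = kmeans(obs, k=2)
```

Because cluster membership is assigned objectively from the input variables, held-out quantities (here, cloud properties in the study) can then be compared across regimes without circularity.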


2010 ◽  
Vol 19 (01) ◽  
pp. 107-121 ◽  
Author(s):  
JUAN CARLOS FIGUEROA GARCÍA ◽  
DUSKO KALENATIC ◽  
CESAR AMILCAR LÓPEZ BELLO

This paper presents a proposal based on an evolutionary algorithm for imputing missing observations in time series. A genetic algorithm based on the minimization of an error function derived from the series' autocorrelation function, mean, and variance is presented. All methodological aspects of the genetic structure are presented, and an extended description of the design of the fitness function is provided. Four application examples are provided and solved using the proposed method.
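The fitness idea, scoring candidate imputations by how much they distort the series' mean, variance, and autocorrelation, can be sketched as follows; the lag set and equal weighting are illustrative assumptions:

```python
import numpy as np

# Hypothetical sketch of the fitness idea: a candidate set of imputed values
# is scored by how much the completed series distorts the mean, variance and
# autocorrelation of the observed part. Lag set and weights are illustrative.
def autocorr(x, lag):
    x = x - x.mean()
    return np.dot(x[:-lag], x[lag:]) / np.dot(x, x)

def fitness(series, missing_idx, candidate, lags=(1, 2, 3)):
    """Error to be minimised by the genetic algorithm (lower is better)."""
    observed = np.delete(series, missing_idx)
    completed = series.copy()
    completed[missing_idx] = candidate
    err = (completed.mean() - observed.mean()) ** 2
    err += (completed.var() - observed.var()) ** 2
    err += sum((autocorr(completed, L) - autocorr(observed, L)) ** 2
               for L in lags)
    return err

# Demonstration on a synthetic AR(1) series with one missing observation
rng = np.random.default_rng(3)
series = np.zeros(200)
for t in range(1, 200):
    series[t] = 0.8 * series[t - 1] + rng.normal()
missing = np.array([50])
good = fitness(series, missing, series[missing].copy())
bad = fitness(series, missing, series[missing] + 10.0)
```

A genetic algorithm would evolve a population of candidate vectors under this fitness function; candidates near the true values score lower than implausible ones, which is the selection pressure the method relies on.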

