Australasian Temperature Reconstructions Spanning the Last Millennium

2016 ◽  
Vol 29 (15) ◽  
pp. 5365-5392 ◽  
Author(s):  
Joëlle Gergis ◽  
Raphael Neukom ◽  
Ailie J. E. Gallant ◽  
David J. Karoly

Abstract Multiproxy warm season (September–February) temperature reconstructions are presented for the combined land–ocean region of Australasia (0°–50°S, 110°E–180°) covering 1000–2001. Using between 2 (R2) and 28 (R28) paleoclimate records, four 1000-member ensemble reconstructions of regional temperature are developed using four statistical methods: principal component regression (PCR), composite plus scale (CPS), Bayesian hierarchical models (LNA), and pairwise comparison (PaiCo). The reconstructions are then compared with a three-member ensemble of GISS-E2-R climate model simulations and independent paleoclimate records. Decadal fluctuations in Australasian temperatures are remarkably similar between the four reconstruction methods. There are, however, differences in the amplitude of temperature variations between the different statistical methods and proxy networks. When the R28 network is used, the warmest 30-yr periods occur after 1950 in 77% of ensemble members over all methods. However, reconstructions based on only the longest records (R2 and R3 networks) indicate that single 30- and 10-yr periods of similar or slightly higher temperatures than in the late twentieth century may have occurred during the first half of the millennium. Regardless, the most recent instrumental temperatures (1985–2014) are above the 90th percentile of all 12 reconstruction ensembles (four reconstruction methods based on three proxy networks—R28, R3, and R2). The reconstructed twentieth-century warming cannot be explained by natural variability alone using GISS-E2-R. In this climate model, anthropogenic forcing is required to produce the rate and magnitude of post-1950 warming observed in the Australasian region. These paleoclimate results are consistent with other studies that attribute the post-1950 warming in Australian temperature records to increases in atmospheric greenhouse gas concentrations.
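As an illustration of one of the reconstruction approaches named above, the following is a minimal Python sketch of a composite-plus-scale (CPS) reconstruction. The arrays proxies, target, and the boolean calibration mask calib are hypothetical, and the sketch is not the ensemble implementation used in the study.

```python
import numpy as np

def cps_reconstruction(proxies, target, calib):
    """Composite-plus-scale (CPS): a minimal sketch, not the authors' code.

    proxies : (n_years, n_proxies) array of annually resolved proxy values
    target  : (n_years,) instrumental temperature series
    calib   : (n_years,) boolean mask of calibration years (proxy/instrumental overlap)
    """
    # Standardize each proxy over the calibration window, then average into a composite.
    mu = proxies[calib].mean(axis=0)
    sd = proxies[calib].std(axis=0)
    composite = ((proxies - mu) / sd).mean(axis=1)

    # Scale the composite to the mean and variance of the instrumental target.
    recon = (composite - composite[calib].mean()) / composite[calib].std()
    recon = recon * target[calib].std() + target[calib].mean()
    return recon

# Ensemble members could be generated by perturbing the calibration window or
# resampling the proxy network, in the spirit of the 1000-member ensembles above.
```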

2019 ◽  
Vol 32 (17) ◽  
pp. 5417-5436 ◽  
Author(s):  
Benjamin I. Cook ◽  
Richard Seager ◽  
A. Park Williams ◽  
Michael J. Puma ◽  
Sonali McDermid ◽  
...  

Abstract In the mid-twentieth century (1948–57), North America experienced a severe drought forced by cold tropical Pacific sea surface temperatures (SSTs). If these SSTs recurred, it would likely cause another drought, but in a world substantially warmer than the one in which the original event took place. We use a 20-member ensemble of the GISS climate model to investigate the drought impacts of a repetition of the mid-twentieth-century SST anomalies in a significantly warmer world. Using observed SSTs and mid-twentieth-century forcings (Hist-DRGHT), the ensemble reproduces the observed precipitation deficits during the cold season (October–March) across the Southwest, southern plains, and Mexico and during the warm season (April–September) in the southern plains and the Southeast. Under analogous SST forcing and enhanced warming (Fut-DRGHT, ≈3 K above preindustrial), cold season precipitation deficits are ameliorated in the Southwest and southern plains and intensified in the Southeast, whereas during the warm season precipitation deficits are enhanced across North America. This occurs primarily from greenhouse gas–forced trends in mean precipitation, rather than changes in SST teleconnections. Cold season runoff deficits in Fut-DRGHT are significantly amplified over the Southeast, but otherwise similar to Hist-DRGHT over the Southwest and southern plains. In the warm season, however, runoff and soil moisture deficits during Fut-DRGHT are significantly amplified across the southern United States, a consequence of enhanced precipitation deficits and increased evaporative losses due to warming. Our study highlights how internal variability and greenhouse gas–forced trends in hydroclimate are likely to interact over North America, including how changes in both precipitation and evaporative demand will affect future drought.
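The comparison above rests on seasonal ensemble-mean deficits. The sketch below shows, with hypothetical inputs (hist and fut anomaly arrays and a months vector), how cold- or warm-season deficits from two ensembles might be contrasted; it is illustrative only and not the study's analysis pipeline.

```python
import numpy as np
from scipy.stats import ttest_ind

def seasonal_deficit(ensemble, months, season):
    """Mean anomaly over a season for each ensemble member.

    ensemble : (n_members, n_months) anomaly series (e.g., precipitation or runoff)
    months   : (n_months,) calendar month numbers 1-12
    season   : iterable of months, e.g. (10, 11, 12, 1, 2, 3) for October-March
    """
    mask = np.isin(months, season)
    return ensemble[:, mask].mean(axis=1)

def compare_experiments(hist, fut, months, season):
    """Difference in seasonal deficits between two experiments, with a simple t-test."""
    d_hist = seasonal_deficit(hist, months, season)
    d_fut = seasonal_deficit(fut, months, season)
    t, p = ttest_ind(d_fut, d_hist)  # small p suggests a significant amplification
    return d_fut.mean() - d_hist.mean(), p
```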


2015 ◽  
Vol 11 (4) ◽  
pp. 3853-3895 ◽  
Author(s):  
R. Batehup ◽  
S. McGregor ◽  
A. J. E. Gallant

Abstract. Reconstructions of the El Niño–Southern Oscillation (ENSO) ideally require high-quality, annually resolved and long-running paleoclimate proxy records in the eastern tropical Pacific Ocean, located in ENSO's centre of action. However, to date, the paleoclimate records that have been extracted in the region are short or temporally and spatially sporadic, limiting the information that can be provided by these reconstructions. Consequently, most ENSO reconstructions exploit the downstream influences of ENSO on remote locations, known as teleconnections, where longer records from paleoclimate proxies exist. However, using teleconnections to reconstruct ENSO relies on the assumption that the relationship between ENSO and the remote location is stationary in time. Increasing evidence from observations and climate models suggests that some teleconnections are, in fact, non-stationary, potentially threatening the validity of paleoclimate reconstructions that exploit teleconnections. This study examines the implications of non-stationary teleconnections for modern multi-proxy reconstructions of ENSO. The sensitivity of the reconstructions to non-stationary teleconnections was tested using a suite of idealized pseudoproxy experiments that employed output from a fully coupled global climate model. Reconstructions of the variance in the Niño 3.4 index, representing ENSO variability, were generated using four different methods, with surface temperature data from the GFDL CM2.1 model applied as pseudoproxies. In addition to the sensitivity of the reconstruction to the method, the experiments tested its sensitivity to the number and the location of non-stationary pseudoproxies. ENSO reconstructions in the pseudoproxy experiments were not sensitive to non-stationary teleconnections when global, uniformly spaced networks of a minimum of approximately 20 proxies were employed. Neglecting proxies from ENSO's centre of action still produced skillful reconstructions, but the chance of generating a skillful reconstruction decreased. Reconstruction methods that utilized raw time series were the most sensitive to non-stationary teleconnections, while calculating the running variance of the pseudoproxies first appeared to improve the robustness of the resulting reconstructions. The results suggest that caution should be taken when developing reconstructions using proxies from a single teleconnected region, or those that use fewer than 20 source proxies.
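A minimal Python sketch of the pre-processing step the abstract highlights: compute the running variance of hypothetical pseudoproxies and rescale their composite to the running variance of the Niño 3.4 index over a calibration mask calib. The study's actual methods and settings are not reproduced here.

```python
import numpy as np

def running_variance(x, window=30):
    """Centered running variance; a simple stand-in for the pre-processing step."""
    half = window // 2
    out = np.full(len(x), np.nan)
    for i in range(half, len(x) - half):
        out[i] = np.var(x[i - half:i + half + 1])
    return out

def reconstruct_enso_variance(pseudoproxies, nino34, calib, window=30):
    """Composite the running variance of pseudoproxies and rescale it to the
    running variance of the Nino 3.4 index over the calibration period.
    Hypothetical inputs; not the study's code.

    pseudoproxies : (n_proxies, n_years) array
    nino34        : (n_years,) index series
    calib         : (n_years,) boolean calibration mask
    """
    rv = np.array([running_variance(p, window) for p in pseudoproxies])
    composite = np.nanmean((rv - np.nanmean(rv, axis=1, keepdims=True)) /
                           np.nanstd(rv, axis=1, keepdims=True), axis=0)
    target = running_variance(nino34, window)
    ok = calib & ~np.isnan(composite) & ~np.isnan(target)
    scale = np.nanstd(target[ok]) / np.nanstd(composite[ok])
    return (composite - np.nanmean(composite[ok])) * scale + np.nanmean(target[ok])
```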


2018 ◽  
Author(s):  
Simon Michel ◽  
Didier Swingedouw ◽  
Marie Chavent ◽  
Pablo Ortega ◽  
Juliette Mignot ◽  
...  

Abstract. Modes of climate variability strongly impact our climate and thus human society. Nevertheless, their statistical properties remain poorly known due to the short time frame of instrumental measurements. Reconstructing these modes further back in time using statistical learning methods applied to proxy records is a useful way to improve our understanding of their behaviours and meteorological impacts. To do so, several statistical reconstruction methods exist, among which principal component regression is one of the most widely used. Additional predictive, and hence reconstructive, statistical methods have been developed recently, following the advent of big data. Here, we provide to the climate community a multi-statistical toolbox, based on four statistical learning methods and cross-validation algorithms, that enables systematic reconstruction of any climate mode of variability as long as proxy records exist that overlap in time with the observed variations of the considered mode. The efficiency of the methods can vary, depending on the statistical properties of the mode and the learning set, thereby allowing the sensitivity related to the reconstruction technique to be assessed. This toolbox is modular in the sense that it allows different inputs, such as the proxy database or the chosen mode of variability. As an example, the toolbox is applied here to the reconstruction of the North Atlantic Oscillation (NAO) using the PAGES 2k database. In order to identify the most reliable reconstruction among those given by the different methods, we also investigate the sensitivity of the reconstruction to other aspects of the methodological setup, such as the number and nature of the proxy records used as predictors or the target reconstruction period. The best reconstruction of the NAO that we thus obtain shows significant correlation with former reconstructions, but exhibits better validation scores.
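A generic sketch of principal component regression with cross-validated skill, in the spirit of the toolbox described above but not its code; the arrays proxies_calib, nao_calib, and proxies_past are hypothetical.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

def pcr_reconstruction(proxies_calib, nao_calib, proxies_past, n_components=10):
    """Principal component regression of an observed mode (e.g., an NAO index)
    on proxy predictors, with cross-validated skill. A generic sketch only.

    proxies_calib : (n_calib_years, n_proxies) proxies over the instrumental overlap
    nao_calib     : (n_calib_years,) observed mode index over the same years
    proxies_past  : (n_past_years, n_proxies) proxies over the reconstruction period
    """
    model = make_pipeline(PCA(n_components=n_components), LinearRegression())
    # Cross-validation over the instrumental overlap gives a skill estimate (R^2);
    # the toolbox similarly selects among methods by validation scores.
    skill = cross_val_score(model, proxies_calib, nao_calib, cv=5).mean()
    model.fit(proxies_calib, nao_calib)
    return model.predict(proxies_past), skill
```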


1992 ◽  
Vol 46 (12) ◽  
pp. 1780-1784 ◽  
Author(s):  
Helle Holst

This paper describes and compares different kinds of statistical methods proposed in the literature as suited for solving calibration problems with many variables: principal component regression, partial least squares, and ridge regression. The statistical techniques themselves do not provide robust results in the sense of calibration equations that remain valid over long periods. One way of obtaining this property is to smooth and differentiate the data. These pre-processing techniques are considered, and it is shown how they fit into the framework described.
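The combination of pre-processing with a shrinkage-based calibration can be sketched as follows in Python. Savitzky–Golay filtering stands in for the smoothing and differentiation discussed, and the window, polynomial order, and ridge penalty are illustrative assumptions, not values from the paper.

```python
import numpy as np
from scipy.signal import savgol_filter
from sklearn.linear_model import Ridge

def calibrate(spectra, concentrations, alpha=1.0, window=11, polyorder=2):
    """Ridge-regression calibration on smoothed, differentiated spectra.
    A minimal sketch of the ideas discussed; parameters are illustrative.

    spectra        : (n_samples, n_wavelengths) measured spectra
    concentrations : (n_samples,) or (n_samples, n_analytes) reference values
    """
    # Smooth and take the first derivative along the wavelength axis.
    pre = savgol_filter(spectra, window, polyorder, deriv=1, axis=1)
    model = Ridge(alpha=alpha).fit(pre, concentrations)
    return model, pre
```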


2019 ◽  
Vol 15 (2) ◽  
pp. 661-684 ◽  
Author(s):  
François Klein ◽  
Nerilie J. Abram ◽  
Mark A. J. Curran ◽  
Hugues Goosse ◽  
Sentia Goursaud ◽  
...  

Abstract. The Antarctic temperature changes over the past millennia remain more uncertain than in many other continental regions. This has several origins: (1) the number of high-resolution ice cores is small, in particular on the East Antarctic plateau and in some coastal areas in East Antarctica; (2) the short and spatially sparse instrumental records limit the calibration period for reconstructions and the assessment of the methodologies; (3) the link between isotope records from ice cores and local climate is usually complex and dependent on the spatial scales and timescales investigated. Here, we use climate model results, pseudoproxy experiments and data assimilation experiments to assess the potential for reconstructing the Antarctic temperature over the last 2 millennia based on a new database of stable oxygen isotopes in ice cores compiled in the framework of Antarctica2k (Stenni et al., 2017). The well-known covariance between δ18O and temperature is reproduced in the two isotope-enabled models used (ECHAM5/MPI-OM and ECHAM5-wiso), but is generally weak over the different Antarctic regions, limiting the skill of the reconstructions. Furthermore, the strength of the link displays large variations over the past millennium, further affecting the potential skill of temperature reconstructions based on statistical methods which rely on the assumption that the last decades are a good estimate for longer temperature reconstructions. Using a data assimilation technique allows, in theory, for changes in the δ18O–temperature link through time and space to be taken into account. Pseudoproxy experiments confirm the benefits of using data assimilation methods instead of statistical methods that provide reconstructions with unrealistic variances in some Antarctic subregions. They also confirm that the relatively weak link between both variables leads to a limited potential for reconstructing temperature based on δ18O. However, the reconstruction skill is higher and more uniform among reconstruction methods when the reconstruction target is the Antarctic as a whole rather than smaller Antarctic subregions. This consistency between the methods at the large scale is also observed when reconstructing temperature based on the real δ18O regional composites of Stenni et al. (2017). In this case, temperature reconstructions based on data assimilation confirm the long-term cooling over Antarctica during the last millennium, and the later onset of anthropogenic warming compared with the simulations without data assimilation, which is especially visible in West Antarctica. Data assimilation also allows for models and direct observations to be reconciled by reproducing the east–west contrast in the recent temperature trends. This recent warming pattern is likely mostly driven by internal variability given the large spread of individual Paleoclimate Modelling Intercomparison Project (PMIP)/Coupled Model Intercomparison Project (CMIP) model realizations in simulating it. As in the pseudoproxy framework, the reconstruction methods perform differently at the subregional scale, especially in terms of the variance of the time series produced. While the potential benefits of using a data assimilation method instead of a statistical method have been highlighted in a pseudoproxy framework, the instrumental series are too short to confirm this in a realistic setup.
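Off-line data assimilation of the kind referred to above can be reduced, in its simplest form, to weighting model states by their fit to the proxy data. The Python sketch below illustrates that idea with hypothetical inputs and a Gaussian likelihood; it is far simpler than the scheme actually used in the study.

```python
import numpy as np

def assimilate(d18o_model, temp_model, d18o_obs, obs_error=0.2):
    """Weight ensemble members (or model time slices) by the fit between their
    simulated d18O and an observed d18O composite, then return the weighted
    temperature estimate. A highly simplified, illustrative sketch only.

    d18o_model : (n_members, n_sites) simulated d18O at the proxy sites
    temp_model : (n_members,) simulated regional temperature
    d18o_obs   : (n_sites,) observed d18O composite for one time step
    obs_error  : assumed observation error (standard deviation)
    """
    # Gaussian likelihood of each member given the observations.
    misfit = np.nansum((d18o_model - d18o_obs) ** 2, axis=1)
    weights = np.exp(-0.5 * misfit / obs_error ** 2)
    weights /= weights.sum()
    return np.sum(weights * temp_model)
```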


2019 ◽  
Vol 8 (1) ◽  
Author(s):  
Khairunnisa Khairunnisa ◽  
Rizka Pitri ◽  
Victor P Butar-Butar ◽  
Agus M Soleh

This research used CFSRv2 data as the output of a general circulation model. CFSRv2 involves several highly correlated variables, so this research uses principal component regression (PCR) and partial least squares (PLS) to address the multicollinearity in the CFSRv2 data. This research aims to determine the best model between PCR and PLS for estimating rainfall at the Bandung geophysical station, Bogor climatology station, Citeko meteorological station, and Jatiwangi meteorological station by comparing RMSEP and correlation values. The domain sizes used were 3×3, 4×4, 5×5, 6×6, 7×7, 8×8, 9×9, and 11×11, located between 4° S–9° S and 105° E–110° E with a grid resolution of 0.5°×0.5°. The PLS model was better than the PCR model for statistical downscaling in this research because it obtained lower RMSEP values and higher correlation values. The best domain sizes and RMSEP values for the Bandung geophysical station, Bogor climatology station, Citeko meteorological station, and Jatiwangi meteorological station were 9×9 (100.06), 6×6 (194.3), 8×8 (117.6), and 6×6 (108.2), respectively.
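A hedged sketch of how PCR and PLS might be compared by cross-validated RMSEP and correlation for a single station, assuming hypothetical arrays grid_predictors (the predictor values from the station's grid domain) and rainfall (the observed series); it is not the study's code.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import mean_squared_error

def compare_pcr_pls(grid_predictors, rainfall, n_components=5):
    """Compare PCR and PLS for statistical downscaling by cross-validated
    RMSEP and correlation. Generic sketch with hypothetical inputs."""
    models = {
        "PCR": make_pipeline(PCA(n_components=n_components), LinearRegression()),
        "PLS": PLSRegression(n_components=n_components),
    }
    scores = {}
    for name, model in models.items():
        pred = cross_val_predict(model, grid_predictors, rainfall, cv=5).ravel()
        rmsep = np.sqrt(mean_squared_error(rainfall, pred))
        corr = np.corrcoef(rainfall, pred)[0, 1]
        scores[name] = (rmsep, corr)
    return scores  # lower RMSEP and higher correlation indicate the better model
```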


2020 ◽  
Author(s):  
Luis Anunciacao ◽  
Janet Squires, 
J. Landeira-Fernandez

One of the main activities in psychometrics is to analyze the internal structure of a test. Multivariate statistical methods, including Exploratory Factor Analysis (EFA) and Principal Component Analysis (PCA), are frequently used to do this, but the growth of Network Analysis (NA) places this method as a promising candidate. The results obtained by these methods are of considerable interest, as they not only produce evidence about whether the test measures its intended construct but also bear on the substantive theory that motivated the test's development. However, these different statistical methods arrive at different answers, providing the basis for different analytical and theoretical strategies when one needs to choose a solution. In this study, we took advantage of a large volume of published data (n = 22,331) obtained with the Ages and Stages Questionnaire: Social-Emotional (ASQ:SE) and formed a subset of 500 children to present and discuss alternative psychometric solutions to its internal structure and to its underlying theory. The analyses were based on a polychoric matrix, the number of factors to retain followed several well-known rules of thumb, and a wide range of exploratory methods was fitted to the data, including EFA, PCA, and NA. The statistical outcomes were divergent, varying from 1 to 6 domains, allowing a flexible interpretation of the results. We argue that the use of statistical methods in the absence of a well-grounded psychological theory has limited applications, despite its appeal. All data and codes are available at https://osf.io/z6gwv/.
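For illustration, the sketch below contrasts PCA and EFA solutions and derives a simple partial-correlation network from hypothetical item scores. Unlike the study, it works on Pearson rather than polychoric correlations, and the network step is only a crude analogue of the NA approach.

```python
import numpy as np
from sklearn.decomposition import PCA, FactorAnalysis

def explore_structure(item_scores, n_factors=2):
    """Contrast PCA and EFA solutions for a set of questionnaire items and
    build a partial-correlation matrix as a simple network analogue.

    item_scores : (n_respondents, n_items) numeric item responses (hypothetical)
    """
    pca = PCA(n_components=n_factors).fit(item_scores)
    efa = FactorAnalysis(n_components=n_factors).fit(item_scores)

    # Partial correlations from the inverse covariance (precision) matrix.
    prec = np.linalg.pinv(np.cov(item_scores, rowvar=False))
    d = np.sqrt(np.diag(prec))
    partial_corr = -prec / np.outer(d, d)
    np.fill_diagonal(partial_corr, 1.0)

    return pca.components_, efa.components_, partial_corr
```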


2007 ◽  
Vol 90 (2) ◽  
pp. 391-404 ◽  
Author(s):  
Fadia H Metwally ◽  
Yasser S El-Saharty ◽  
Mohamed Refaat ◽  
Sonia Z El-Khateeb

Abstract New selective, precise, and accurate methods are described for the determination of a ternary mixture containing drotaverine hydrochloride (I), caffeine (II), and paracetamol (III). The first method uses the first (D1) and third (D3) derivative spectrophotometry at 331 and 315 nm for the determination of (I) and (III), respectively, without interference from (II). The second method depends on the simultaneous use of the first derivative of the ratio spectra (DD1) with measurement at 312.4 nm for determination of (I) using the spectrum of 40 μg/mL (III) as a divisor or measurement at 286.4 and 304 nm after using the spectrum of 4 μg/mL (I) as a divisor for the determination of (II) and (III), respectively. In the third method, the predictive abilities of the classical least-squares, principal component regression, and partial least-squares were examined for the simultaneous determination of the ternary mixture. The last method depends on thin-layer chromatography-densitometry after separation of the mixture on silica gel plates using ethyl acetate–chloroform–methanol (16 + 3 + 1, v/v/v) as the mobile phase. The spots were scanned at 281, 272, and 248 nm for the determination of (I), (II), and (III), respectively. Regression analysis showed good correlation in the selected ranges with excellent percentage recoveries. The chemical variables affecting the analytical performance of the methodology were studied and optimized. The methods showed no significant interferences from excipients. Intraday and interday assay precision and accuracy values were within regulatory limits. The suggested procedures were checked using laboratory-prepared mixtures and were successfully applied for the analysis of their pharmaceutical preparations. The validity of the proposed methods was further assessed by applying a standard addition technique. The results obtained by applying the proposed methods were statistically analyzed and compared with those obtained by the manufacturer's method.
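As a sketch of the multivariate calibration step only (not the validated procedure reported above), a PLS model for a three-component mixture could be set up as follows, with hypothetical spectra and concentrations arrays.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import mean_squared_error

def pls_ternary_calibration(spectra, concentrations, n_components=3):
    """PLS calibration for a three-component mixture from UV spectra.
    Generic chemometric sketch with assumed inputs.

    spectra        : (n_samples, n_wavelengths) absorbance matrix
    concentrations : (n_samples, 3) known concentrations of the three analytes
    """
    model = PLSRegression(n_components=n_components).fit(spectra, concentrations)
    # Cross-validation gives a rough check of predictive ability.
    pred = cross_val_predict(model, spectra, concentrations, cv=5)
    rmsep = np.sqrt(mean_squared_error(concentrations, pred,
                                       multioutput="raw_values"))
    return model, rmsep  # one RMSEP per analyte
```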


2021 ◽  
pp. 1471082X2110229
Author(s):  
Mikis D. Stasinopoulos ◽  
Robert A. Rigby ◽  
Nikolaos Georgikopoulos ◽  
Fernanda De Bastiani

A solution to the problem of having to deal with a large number of interrelated explanatory variables within a generalized additive model for location, scale and shape (GAMLSS) is given here, using as an example the Greek–German government bond yield spreads from 25 April 2005 to 31 March 2010. Those were turbulent financial years, and in order to capture the spreads' behaviour, a model has to be able to deal with the complex nature of the financial indicators used to predict the spreads. Fitting a model using principal component regression of both main effects and first-order interaction terms for all the parameters of the assumed distribution of the response variable seems to produce promising results.
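A rough Python analogue of the idea is sketched below: principal components are built from main effects plus first-order interactions and then used to model both the location and the (log) scale of a Gaussian response. It is a simplified stand-in for the GAMLSS machinery, with all names and settings assumed.

```python
import numpy as np
from scipy.optimize import minimize
from sklearn.preprocessing import PolynomialFeatures, StandardScaler
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline

def pc_location_scale(X, y, n_components=5):
    """Principal components of main effects plus pairwise interactions used to
    model both the mean and the log standard deviation of a Gaussian response.
    A rough analogue of GAMLSS with principal component regression, not the
    authors' model.

    X : (n_obs, n_predictors) explanatory variables (hypothetical)
    y : (n_obs,) response, e.g. a bond yield spread series (hypothetical)
    """
    # Main effects + pairwise interactions, standardized, reduced to PCs.
    expand = make_pipeline(
        PolynomialFeatures(degree=2, interaction_only=True, include_bias=False),
        StandardScaler(),
        PCA(n_components=n_components),
    )
    Z = expand.fit_transform(X)
    Z = np.column_stack([np.ones(len(Z)), Z])  # add an intercept column

    def nll(params):
        # First half of params models the mean, second half the log scale.
        beta, gamma = params[:Z.shape[1]], params[Z.shape[1]:]
        mu = Z @ beta
        log_sigma = Z @ gamma
        return np.sum(log_sigma + 0.5 * ((y - mu) / np.exp(log_sigma)) ** 2)

    res = minimize(nll, np.zeros(2 * Z.shape[1]), method="BFGS")
    return expand, res.x  # fitted transformer and stacked (beta, gamma)
```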

