Trend detection of atmospheric time series

Elem Sci Anth, 2021, Vol 9 (1)
Author(s): Kai-Lan Chang, Martin G. Schultz, Xin Lan, Audra McClure-Begley, Irina Petropavlovskikh, et al.

This paper is aimed at atmospheric scientists without formal training in statistical theory. Its goal is to (1) provide a critical review of the rationale for trend analysis of the time series typically encountered in the field of atmospheric chemistry, (2) describe a range of trend-detection methods, and (3) demonstrate effective means of conveying the results to a general audience. Trend detection in atmospheric chemical composition data is often challenged by a variety of sources of uncertainty; such data often behave differently from other environmental phenomena such as temperature, precipitation rate, or stream flow, and may require specific methods depending on the science questions to be addressed. Some sources of uncertainty can be explicitly included in the model specification, such as autocorrelation and seasonality, but some inherent uncertainties are difficult to quantify, such as data heterogeneity and measurement uncertainty arising from the combined effect of short- and long-term natural variability, instrumental stability, and aggregation of data collected at sparse sampling frequencies. Failure to account for these uncertainties can result in inappropriate inference of the trends and their estimation errors. On the other hand, variation in extreme events may be of interest for different scientific questions, for example, the frequency of extremely high surface ozone events and their relevance to human health. In this study we aim to (1) review trend detection methods for addressing different levels of data complexity in different chemical species, (2) demonstrate that the incorporation of scientifically interpretable covariates can outperform pure numerical curve-fitting techniques in terms of uncertainty reduction and improved predictability, (3) illustrate the study of trends based on extreme quantiles that can provide insight beyond standard mean- or median-based trend estimates, and (4) present an advanced method of quantifying regional trends based on the inter-site correlations of multisite data. All demonstrations are based on time series of observed trace gases relevant to atmospheric chemistry, but the methods can be applied to other environmental data sets.
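As a toy illustration of point (3), the sketch below contrasts a mean-based (OLS) trend with a 95th-percentile (quantile regression) trend on a synthetic ozone-like series. It assumes numpy and statsmodels, and every value is made up for illustration rather than taken from the paper.

```python
# A toy contrast of a mean-based (OLS) trend with a 95th-percentile
# (quantile regression) trend on a synthetic ozone-like series; assumes
# numpy and statsmodels, with all values made up for illustration.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
years = np.arange(2000, 2020, 1 / 12)            # monthly time axis
seasonal = 10 * np.sin(2 * np.pi * years)        # seasonal cycle
noise = rng.gumbel(0, 5, years.size)             # skewed, heavy upper tail
ozone = 40 + 0.3 * (years - 2000) + seasonal + noise

X = sm.add_constant(years - 2000)

ols_slope = sm.OLS(ozone, X).fit().params[1]              # mean trend
q95_slope = sm.QuantReg(ozone, X).fit(q=0.95).params[1]   # extreme-quantile trend

print(f"OLS trend:             {ols_slope:.3f} per year")
print(f"95th-percentile trend: {q95_slope:.3f} per year")
```

With a heavy upper tail, the two slopes can differ noticeably, which is the point of studying trends in extreme quantiles alongside the mean.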

2012, Vol 25 (12), pp. 4172-4183
Author(s): Christian Franzke

Abstract This study investigates the significance of trends in four temperature time series: Central England Temperature (CET), Stockholm, Faraday-Vernadsky, and Alert. First, the robustness and accuracy of various trend detection methods are examined: ordinary least squares, robust and generalized linear model regression, Ensemble Empirical Mode Decomposition (EEMD), and wavelets. Tests with surrogate data show that these trend detection methods are robust to nonlinear trends, superposed autocorrelated fluctuations, and non-Gaussian fluctuations. An analysis of the four temperature time series reveals evidence of long-range dependence (LRD) and nonlinear warming trends. The significance of these trends is tested against climate noise. Three different methods are used to generate climate noise: (i) a short-range-dependent autoregressive process of first order [AR(1)], (ii) an LRD model, and (iii) phase scrambling. It is found that the ability to distinguish the observed warming trend from stochastic trends depends on the model representing the background climate variability. Strong evidence is found of a significant warming trend at Faraday-Vernadsky that cannot be explained by any of the three null models. The authors find moderate evidence of warming trends for the Stockholm and CET time series that are significant against AR(1) and phase scrambling but not against the LRD model. This suggests that the degree of significance of climate trends depends on the null model used to represent intrinsic climate variability. This study highlights that statistical trend tests should use more than just one simple null model of intrinsic climate variability, allowing one to better gauge the degree of confidence to place in the significance of trends.
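A significance test against an AR(1) climate-noise null can be sketched in a few lines. The hypothetical example below (numpy only, synthetic data) fits an AR(1) model to the detrended record and compares the observed OLS slope with slopes of trendless AR(1) surrogates; it is a sketch of the general approach, not the author's code.

```python
# A minimal sketch of testing an observed trend against an AR(1)
# "climate noise" null model, in the spirit of the study (synthetic
# data, numpy only; not the author's code).
import numpy as np

rng = np.random.default_rng(1)
n = 150
t = np.arange(n)

def ols_slope(y):
    return np.polyfit(t, y, 1)[0]

# Synthetic "observed" record: weak linear trend + AR(1) noise
phi = 0.6
noise = np.zeros(n)
for i in range(1, n):
    noise[i] = phi * noise[i - 1] + rng.normal()
obs = 0.01 * t + noise
obs_slope = ols_slope(obs)

# Fit the AR(1) null to the detrended record
resid = obs - np.polyval(np.polyfit(t, obs, 1), t)
phi_hat = np.corrcoef(resid[:-1], resid[1:])[0, 1]
sigma = resid.std() * np.sqrt(1 - phi_hat**2)

# Monte Carlo: slopes of trendless AR(1) surrogates
slopes = np.empty(5000)
for k in range(5000):
    s = np.zeros(n)
    for i in range(1, n):
        s[i] = phi_hat * s[i - 1] + rng.normal(0, sigma)
    slopes[k] = ols_slope(s)

p = np.mean(np.abs(slopes) >= abs(obs_slope))
print(f"two-sided p-value against the AR(1) null: {p:.3f}")
```

Swapping the surrogate generator for an LRD model or phase scrambling, as the study does, changes only the Monte Carlo step.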


Atmosphere, 2020, Vol 11 (4), pp. 347
Author(s): Hanane Bougara, Kamila Baba Hamed, Christian Borgemeister, Bernhard Tischbein, Navneet Kumar

Northwest Algeria experienced fluctuations in rainfall between the 1940s and the 1990s, shifting from positive to negative anomalies and reflecting a significant decline in rainfall during the mid-1970s. Further analysis of rainfall in this region is therefore required to improve water resource management strategies. In this study, we complement previous studies by addressing sub-basins of the Tafna basin (our study area, located in Northwest Algeria) that have not previously been examined, and by including additional statistical methods (the Kruskal–Wallis, Jonckheere–Terpstra, and Friedman tests) that have not previously been applied at the large scale (Northwest Algeria). We used several statistical tests to analyse the homogeneity, trends, and stationarity of the rainfall time series from nine stations over the period 1979–2011. The results showed an increasing trend in annual rainfall after a break detected in 2007 for the Djbel Chouachi, Ouled Mimoun, and Sidi Benkhala stations using the Hubert, Pettitt, and Buishand tests. The Lee–Heghinian test detected a break in the same year, 2007, for all stations except the Sebdou, Beni Bahdel, and Hennaya stations, which have a break date of 1980. We confirmed this increasing rainfall trend with other trend detection methods, namely the Mann–Kendall test and Sen's method, which highlighted an upward trend for all stations in the autumn season, mainly due to an increase in rainfall in September and October. On a monthly scale, the break date differs from one station to another because the time series are not homogeneous. In addition, we applied three further tests: (i) the Jonckheere–Terpstra test detected an upward trend for two stations (Khemis and Hennaya); (ii) the Friedman test indicated differences in mean rank, again for the Khemis and Hennaya stations and for the Merbeh station; (iii) according to the Kruskal–Wallis test, no difference in variance was detected among the rainfall stations. The increasing trend in rainfall may lead to a rise in stream flow and increase potential flood risks in low-lying regions of the study area.
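For reference, a minimal sketch of the Mann–Kendall test and Sen's slope estimator used above is given below; it assumes numpy and scipy, ignores tie corrections, and uses a synthetic rainfall series rather than the Tafna data.

```python
# A minimal sketch of the Mann-Kendall trend test and Sen's slope
# estimator (no tie correction); numpy and scipy assumed, synthetic data.
import numpy as np
from scipy.stats import norm

def mann_kendall(x):
    """Return the MK statistic S, the Z score, and a two-sided p-value."""
    n = x.size
    # S counts concordant minus discordant pairs over all i < j
    s = np.sum([np.sign(x[j] - x[i]) for i in range(n - 1)
                for j in range(i + 1, n)])
    var_s = n * (n - 1) * (2 * n + 5) / 18        # variance without ties
    z = (s - np.sign(s)) / np.sqrt(var_s) if s != 0 else 0.0
    return s, z, 2 * (1 - norm.cdf(abs(z)))

def sens_slope(x):
    """Median of all pairwise slopes (robust trend magnitude)."""
    n = x.size
    return np.median([(x[j] - x[i]) / (j - i) for i in range(n - 1)
                      for j in range(i + 1, n)])

rng = np.random.default_rng(2)
rainfall = 300 + 2.0 * np.arange(33) + rng.normal(0, 40, 33)  # 1979-2011
s, z, p = mann_kendall(rainfall)
print(f"S={s:.0f}, Z={z:.2f}, p={p:.3f}, Sen's slope={sens_slope(rainfall):.2f} mm/yr")
```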


2019, Vol 76 (7), pp. 2060-2069
Author(s): Sean Hardison, Charles T Perretti, Geret S DePiper, Andrew Beet

Abstract The identification of trends in ecosystem indicators has become a core component of ecosystem approaches to resource management, although the assumptions of statistical models are often not properly accounted for in the reporting process. To explore the limitations of trend analysis of short time series, we applied three common methods of trend detection (a generalized least squares model selection approach, the Mann–Kendall test, and the Mann–Kendall test with trend-free pre-whitening) to simulated time series of varying trend and autocorrelation strengths. Our results suggest that the ability to detect trends in short time series is hampered by the influence of autocorrelated residuals. While tests designed to account for autocorrelation are known to approach nominal rejection rates as series lengths increase, the results of this study indicate biased rejection rates in the presence of even weak autocorrelation for series lengths often encountered in indicators developed for ecosystem-level reporting (N = 10, 20, 30). This work has broad implications for ecosystem-level reporting, where indicator time series are often limited in length, exhibit a variety of error structures, and are typically assessed using a single statistical method applied uniformly across all time series.
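Trend-free pre-whitening can be sketched compactly. The example below (synthetic data; numpy and scipy assumed) removes a preliminary Sen's slope, strips the lag-1 autocorrelation, re-adds the trend, and applies a Kendall-type trend test; it is an illustrative sketch, not the authors' implementation.

```python
# A minimal sketch of trend-free pre-whitening (TFPW) before a
# Mann-Kendall-type test; synthetic data, numpy and scipy assumed.
# scipy's kendalltau supplies the final monotonic-trend test.
import numpy as np
from scipy.stats import kendalltau

rng = np.random.default_rng(3)
n = 20                                    # short indicator series
t = np.arange(n)
y = 0.05 * t + rng.normal(0, 1, n)        # weak trend + noise

# 1. Remove a preliminary Sen's slope estimate
b = np.median([(y[j] - y[i]) / (j - i) for i in range(n - 1)
               for j in range(i + 1, n)])
detrended = y - b * t

# 2. Remove the lag-1 autocorrelation from the detrended series
r1 = np.corrcoef(detrended[:-1], detrended[1:])[0, 1]
prewhitened = detrended[1:] - r1 * detrended[:-1]

# 3. Re-add the trend and test for a monotonic trend
blended = prewhitened + b * t[1:]
tau, p = kendalltau(t[1:], blended)
print(f"lag-1 r={r1:.2f}, tau={tau:.2f}, p={p:.3f}")
```

Repeating this over many simulated series of length 10, 20, or 30 and counting rejections is essentially the experiment the study reports.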


2021, Vol 13 (16), pp. 3069
Author(s): Yadong Liu, Junhwan Kim, David H. Fleisher, Kwang Soo Kim

Seasonal forecasts of crop yield are important components of agricultural policy decisions and farmer planning. A wide range of input data are often needed to forecast crop yield in regions where sophisticated approaches such as machine learning and process-based models are used, which requires considerable effort for data preparation in addition to identifying data sources. Here, we propose a simpler approach, the Analogy-Based Crop-yield (ABC) forecast scheme, to make timely and accurate predictions of regional crop yield from a minimal set of inputs. In the ABC method, a growing season from a prior long-term period, e.g., 10 years, is first identified as analogous to the current season using a similarity index based on time series of leaf area index (LAI) patterns. Crop yield in the given growing season is then forecast using the weighted average of yields reported in the analogous seasons for the area of interest. The ABC approach was used to predict corn and soybean yields in the Midwestern U.S. at the county level for the period 2017–2019. MOD15A2H, a satellite LAI data product, was used to compile the inputs. The mean absolute percentage error (MAPE) of the crop yield forecasts was <10% for corn and soybean in each growing season when the LAI time series from day of year 89 to 209 was used as input to the ABC approach. The prediction error of the ABC approach was comparable to that of a deep neural network model in a previous study that relied on soil and weather data as well as satellite data. These results indicate that the ABC approach allows crop yield forecasts with a lead time of at least two months before harvest. In particular, the ABC scheme would be useful for regions where crop yield forecasts are limited by the availability of reliable environmental data.
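The core of the ABC scheme (scoring analog seasons by LAI similarity, then averaging their reported yields with similarity weights) can be sketched as follows. All LAI curves and yields below are fabricated, and the inverse-RMSE weighting is an assumption for illustration, not necessarily the paper's exact index.

```python
# A minimal sketch of an analogy-based yield forecast: score past
# seasons by similarity of their LAI curves to the current season, then
# take a similarity-weighted average of their reported yields.
# Synthetic data; numpy only. Inverse-RMSE weights are an assumption.
import numpy as np

rng = np.random.default_rng(4)
n_steps = 16                                    # LAI composites, DOY 89-209
past_lai = rng.uniform(0.5, 5.0, (10, n_steps))   # 10 prior seasons
past_yield = rng.uniform(8, 12, 10)               # reported yields, t/ha
current_lai = past_lai[3] + rng.normal(0, 0.2, n_steps)  # resembles year 3

# Similarity index: inverse RMSE between LAI time series
rmse = np.sqrt(((past_lai - current_lai) ** 2).mean(axis=1))
weights = (1 / rmse) / (1 / rmse).sum()

forecast = np.dot(weights, past_yield)
print(f"analog-weighted yield forecast: {forecast:.2f} t/ha")
```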


2021, Vol 13 (15), pp. 2869
Author(s): MohammadAli Hemati, Mahdi Hasanlou, Masoud Mahdianpari, Fariba Mohammadimanesh

With uninterrupted space-based data collection since 1972, Landsat plays a key role in systematic monitoring of the Earth's surface, enabled by an extensive, free, and radiometrically consistent global archive of imagery. Governments and international organizations rely on Landsat time series for monitoring and deriving a systematic understanding of the dynamics of the Earth's surface at a spatial scale relevant to management, scientific inquiry, and policy development. In this study, we identify trends in Landsat-informed change detection studies by surveying 50 years of published applications, processing, and change detection methods. Specifically, a representative database of 490 relevant journal articles was compiled from the Web of Science and Scopus. From these articles, we review recent developments, opportunities, and trends in Landsat change detection studies. The impact of the 2008 Landsat free and open data policy is evident in the literature as a turning point in the number and nature of change detection studies. Based on the search terms used and articles included, the average number of Landsat images used per study increased from 10 before 2008 to 100,000 in 2020. The 2008 opening of the Landsat archive resulted in a marked increase in the number of images used per study, typically providing the basis for the other trends in evidence. These key trends include an increase in automated processing, the use of analysis-ready data (especially those with atmospheric correction), and the use of cloud computing platforms, all over increasingly large areas. Change detection methods have evolved from representative bi-temporal pairs to time series of images capturing dynamics and trends, capable of revealing both gradual and abrupt changes. The results also revealed a greater use of nonparametric classifiers for Landsat change detection analysis. Landsat-9, to be launched in September 2021, in combination with the continued operation of Landsat-8 and integration with Sentinel-2, enhances opportunities for improved monitoring of change over increasingly large areas with greater intra- and interannual frequency.


2012, Vol 117 (D21)
Author(s): Meiyun Lin, Arlene M. Fiore, Owen R. Cooper, Larry W. Horowitz, Andrew O. Langford, et al.

2021, Vol 3 (1)
Author(s): Hitoshi Iuchi, Michiaki Hamada

Abstract Time-course experiments using parallel sequencers have the potential to uncover gradual changes in cells over time that cannot be observed in a two-point comparison. An essential step in time-series data analysis is the identification of temporal differentially expressed genes (TEGs) under two conditions (e.g. control versus case). Model-based approaches, which are typical TEG detection methods, often set one parameter (e.g. degree or degrees of freedom) per dataset. This approach risks modeling linearly increasing genes with higher-order functions, or fitting cyclic gene expression with linear functions, leading to false positives/negatives. Here, we present a Jonckheere–Terpstra–Kendall (JTK)-based non-parametric algorithm for TEG detection. Benchmarks using simulated data show that the JTK-based approach outperforms existing methods, especially in long time-series experiments. Additionally, application of JTK to time-series RNA-seq data from seven tissue types across developmental stages in mouse and rat suggested that the wave pattern of expression, rather than differences in expression level, drives TEG identification by JTK. This suggests that JTK is a suitable algorithm when the focus is on expression patterns over time rather than expression levels, as in comparisons between different species. These results show that JTK is an excellent candidate for TEG detection.
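A Jonckheere–Terpstra-style trend statistic for one gene's time course can be sketched with a permutation p-value, as below. This is a generic illustration (numpy and scipy assumed, synthetic expression values), not the authors' JTK implementation.

```python
# A minimal sketch of a Jonckheere-Terpstra-style trend statistic with a
# permutation p-value, in the spirit of the JTK approach described above
# (a generic illustration, not the authors' implementation). The
# expression values are synthetic; numpy and scipy are assumed.
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(5)
# One gene: 6 time points x 3 replicates, gently increasing over time
groups = [rng.normal(loc, 0.5, 3) for loc in np.linspace(1.0, 2.5, 6)]
pooled = np.concatenate(groups)

def jt_stat(vals):
    """JT statistic: summed Mann-Whitney U over ordered time-point pairs."""
    gs = np.split(vals, len(groups))
    return sum(mannwhitneyu(gs[j], gs[i]).statistic
               for i in range(len(gs) - 1) for j in range(i + 1, len(gs)))

jt = jt_stat(pooled)

# Permutation null: shuffle values across time points
null = np.array([jt_stat(rng.permutation(pooled)) for _ in range(2000)])
p = np.mean(null >= jt)
print(f"JT statistic = {jt:.1f}, permutation p = {p:.3f}")
```

Because the statistic uses only ranks, it responds to the pattern of expression over time rather than its absolute level, which matches the behavior reported above.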


Hydrology, 2018, Vol 5 (4), pp. 63
Author(s): Benjamin Nelsen, D. Williams, Gustavious Williams, Candace Berrett

Complete and accurate data are necessary for analyzing and understanding trends in time-series datasets; however, many available time-series datasets, especially in the earth sciences, have gaps that affect analysis. Because most available data have missing values, researchers use various interpolation methods or ad hoc approaches to data imputation, and since analysis based on inaccurate data can lead to inaccurate conclusions, more accurate imputation methods enable more reliable analysis. We present a spatial-temporal data imputation method using Empirical Mode Decomposition (EMD) based on spatial correlations. We call this method EMD-spatial data imputation, or EMD-SDI. Though the method is applicable to other time-series data sets, here we demonstrate it using temperature data. The EMD algorithm decomposes data into periodic components called intrinsic mode functions (IMFs) and exactly reconstructs the original signal by summing these IMFs. EMD-SDI initially decomposes the data from the target station and other stations in the region into IMFs. It then evaluates each IMF from the target station in turn and selects the IMF from the other stations in the region whose periodic behavior is most strongly correlated with the target IMF. EMD-SDI then replaces a section of missing data in the target station IMF with the corresponding section from the most closely correlated regional IMF. We found that EMD-SDI selects the IMFs used for reconstruction from different stations throughout the region, not necessarily the station closest in the geographic sense. In our tests, EMD-SDI accurately filled data gaps from 3 months to 5 years in length and compared favorably with a simple temporal method. EMD-SDI leverages regional correlation and the fact that different stations can be subject to different periodic behaviors. In addition to data imputation, the EMD-SDI method provides IMFs that can be used to better understand regional correlations and processes.
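A simplified, single-gap illustration of the EMD-SDI idea is sketched below. It assumes the PyEMD package (pip install EMD-signal) for the EMD step and fabricates the target and neighbor station series; the real method's IMF matching and evaluation are more involved.

```python
# A simplified, single-gap sketch of the EMD-SDI idea: decompose the
# target and a neighbor station into IMFs, find the neighbor IMF most
# correlated with each target IMF over the observed portion, and patch
# the gap from it. Assumes the PyEMD package (pip install EMD-signal);
# all station series are synthetic.
import numpy as np
from PyEMD import EMD

rng = np.random.default_rng(6)
t = np.linspace(0, 10, 500)
regional = np.sin(2 * np.pi * t) + 0.3 * np.sin(9 * np.pi * t)
target = regional + 0.1 * rng.normal(size=t.size)
neighbor = regional + 0.1 * rng.normal(size=t.size)

gap = slice(200, 260)              # section treated as missing in the target
mask = np.ones(t.size, bool)
mask[gap] = False

imfs_t = EMD().emd(target)         # rows: IMFs (residue included as last row)
imfs_n = EMD().emd(neighbor)

filled = np.zeros(t.size)
for imf in imfs_t:
    # Neighbor IMF most correlated with this target IMF outside the gap
    corrs = np.nan_to_num([np.corrcoef(imf[mask], nf[mask])[0, 1]
                           for nf in imfs_n])
    best = imfs_n[int(np.argmax(np.abs(corrs)))]
    patched = imf.copy()
    patched[gap] = best[gap]       # splice in the correlated section
    filled += patched

print("gap RMSE:", np.sqrt(((filled[gap] - target[gap]) ** 2).mean()))
```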


2017
Author(s): Ben Newsome, Mat Evans

Abstract. Chemical rate constants determine the composition of the atmosphere and how this composition has changed over time. They are central to our understanding of climate change and air quality degradation. Atmospheric chemistry models, whether online or offline, box, regional, or global, use these rate constants. Expert panels synthesise laboratory measurements and make recommendations for the rate constants that should be used, resulting in very similar or identical rate constants being used by all models. The inherent uncertainties in these recommendations are therefore, in general, ignored. We explore the impact of these uncertainties on the composition of the troposphere using the GEOS-Chem chemistry transport model. Based on the JPL and IUPAC evaluations, we assess 50 mainly inorganic rate constants and 10 photolysis rates through simulations in which we increase the rate of each reaction to the 1σ upper value recommended by the expert panels. We assess the impact on four standard metrics: the annual mean tropospheric ozone burden, surface ozone and tropospheric OH concentrations, and tropospheric methane lifetime. Uncertainties in the rate constants for NO2 + OH + M → HNO3 + M, OH + CH4 → CH3O2 + H2O, and O3 + NO → NO2 + O2 are the three largest sources of uncertainty in these metrics. We investigate two methods of combining these uncertainties, addition in quadrature and a Monte Carlo approach, and conclude that they give similar outcomes. Combining the uncertainties across the 60 reactions gives overall uncertainties in the annual mean tropospheric ozone burden, surface ozone, tropospheric OH concentration, and tropospheric methane lifetime of 11 %, 12 %, 17 %, and 17 %, respectively. These are larger than the spread between models in recent model inter-comparisons. Remote regions such as the tropics, poles, and upper troposphere are most uncertain. This chemical uncertainty is sufficiently large to suggest that rate constant uncertainty should be considered when model results disagree with measurements. Calculations for the pre-industrial atmosphere yield a tropospheric ozone radiative forcing of 0.412 ± 0.062 W m−2. This uncertainty (15 %) is comparable to the inter-model spread in ozone radiative forcing found in previous inter-comparison studies in which the rate constants used in the models were all identical or very similar. Thus the uncertainty of tropospheric ozone radiative forcing should be expanded to include this additional source of uncertainty. These rate constant uncertainties are significant and suggest that refinement of supposedly well-known chemical rate constants should be considered alongside other improvements to enhance our understanding of atmospheric processes.
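The comparison between addition in quadrature and Monte Carlo combination can be illustrated with a toy linear response model, as below. The sensitivities are invented stand-ins, not GEOS-Chem outputs, so the two estimates should roughly agree, mirroring the paper's conclusion.

```python
# A toy sketch contrasting the two uncertainty-combination methods the
# study compares: addition in quadrature of individual 1-sigma responses
# versus Monte Carlo sampling of all rate constants at once. The "model"
# here is a stand-in linear response, not GEOS-Chem; numpy only.
import numpy as np

rng = np.random.default_rng(7)
n_react = 60
# Hypothetical sensitivities: % change in the ozone burden per 1-sigma
# increase of each rate constant (made-up numbers)
sens = rng.normal(0, 3, n_react)

# Addition in quadrature of the individual perturbation responses
quad = np.sqrt(np.sum(sens**2))

# Monte Carlo: perturb all rate constants simultaneously
samples = rng.normal(0, 1, (20000, n_react)) @ sens
mc = samples.std()

print(f"quadrature: {quad:.1f} %   Monte Carlo: {mc:.1f} %")
```

For a linear response the two methods agree by construction; divergence between them in a real model would signal nonlinear interactions among the perturbed reactions.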


2016, Vol 16 (18), pp. 11521-11534
Author(s): Luis F. Millán, Nathaniel J. Livesey, Michelle L. Santee, Jessica L. Neu, Gloria L. Manney, et al.

Abstract. This study investigates the representativeness of two types of orbital sampling applied to stratospheric temperature and trace gas fields. Model fields are sampled using real sampling patterns from the Aura Microwave Limb Sounder (MLS), the HALogen Occultation Experiment (HALOE) and the Atmospheric Chemistry Experiment Fourier Transform Spectrometer (ACE-FTS). The MLS sampling acts as a proxy for a dense uniform sampling pattern typical of limb emission sounders, while HALOE and ACE-FTS represent coarse nonuniform sampling patterns characteristic of solar occultation instruments. First, this study revisits the impact of sampling patterns in terms of the sampling bias, as previous studies have done. Then, it quantifies the impact of different sampling patterns on the estimation of trends and their associated detectability. In general, we find that coarse nonuniform sampling patterns may introduce non-negligible errors in the inferred magnitude of temperature and trace gas trends and necessitate considerably longer records for their definitive detection. Lastly, we explore the impact of these sampling patterns on tropical vertical velocities derived from stratospheric water vapor measurements. We find that coarse nonuniform sampling may lead to a biased depiction of the tropical vertical velocities and, hence, to a biased estimation of the impact of the mechanisms that modulate these velocities. These case studies suggest that dense uniform sampling such as that available from limb emission sounders provides much greater fidelity in detecting signals of stratospheric change (for example, fingerprints of greenhouse gas warming and stratospheric ozone recovery) than coarse nonuniform sampling such as that of solar occultation instruments.
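The effect of sampling density on trend detectability can be sketched by bootstrapping the standard error of a fitted trend under dense versus sparse sampling of a synthetic series, as below. The series and the sampling patterns are idealized stand-ins for limb-emission-like and occultation-like coverage (numpy only).

```python
# A minimal sketch of how sparse, nonuniform sampling inflates the
# uncertainty of a fitted trend relative to dense uniform sampling.
# The series and sampling patterns are idealized stand-ins (numpy only).
import numpy as np

rng = np.random.default_rng(8)
days = np.arange(20 * 365)                     # 20-year daily record
true_trend = 0.01 / 365                        # units per day
field = (true_trend * days
         + 0.5 * np.sin(2 * np.pi * days / 365)   # seasonal cycle
         + rng.normal(0, 0.3, days.size))

def trend_se(idx, n_boot=1000):
    """Bootstrap standard error of the OLS trend from the sampled days."""
    slopes = [np.polyfit(days[k], field[k], 1)[0]
              for k in (rng.choice(idx, idx.size) for _ in range(n_boot))]
    return np.std(slopes) * 365                # convert to per-year

dense = np.arange(days.size)                   # limb-sounder-like: daily
sparse = np.sort(rng.choice(days.size, 240, replace=False))  # occultation-like

print(f"dense  trend SE: {trend_se(dense):.4f} per year")
print(f"sparse trend SE: {trend_se(sparse):.4f} per year")
```

A larger standard error under the sparse pattern translates directly into the longer records needed for definitive trend detection that the study reports.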

