simulated time series
Recently Published Documents


TOTAL DOCUMENTS: 36 (FIVE YEARS: 8)
H-INDEX: 8 (FIVE YEARS: 1)

2021 ◽  
Vol 10 (3) ◽  
pp. 31-38
Author(s):  
Stefano Menichetti ◽  
Stefano Tessitore

This paper highlights the potential of time series decomposition, applied to transient-regime groundwater flow models, as a water balance management tool. In particular, this work presents results obtained by applying statistical analysis to observed time series and to time series derived from the groundwater flow model of the coastal plain of Cecina (Tuscany region, Italy), developed in transient regime for the period 2005-2017. The time series of rainfall, river stage and hydraulic heads were first analysed; time series decomposition was then applied to the “accumulated net storage” in order to discern and quantify two meaningful components of the groundwater budget, the regulatory reserve (Wr = 22 Mm3) and the seasonal resource (Wd = 2.5 Mm3). Comparing these values with withdrawal volumes (an average of 6.4 Mm3/y over the period 2005-2017) highlighted potentially critical balance conditions, especially in periods with repeated negative climatic trends. Operational monitoring and modelling are suggested as follow-up corrective and planning actions for the groundwater resource.
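As a rough illustration of the decomposition step described above, the sketch below splits a synthetic monthly “accumulated net storage” series into trend and seasonal components with statsmodels. The synthetic series, the additive model, and the way Wr and Wd are read off the components are illustrative assumptions, not the authors' workflow.

```python
# Minimal sketch: decompose a monthly "accumulated net storage" series into
# trend and seasonal components. The series is synthetic; settings and the
# Wr/Wd read-offs are illustrative assumptions.
import numpy as np
import pandas as pd
from statsmodels.tsa.seasonal import seasonal_decompose

rng = np.random.default_rng(0)
idx = pd.date_range("2005-01-01", "2017-12-01", freq="MS")
t = np.arange(len(idx))

# Synthetic storage (Mm^3): slow multi-year trend + annual cycle + noise
storage = 22 + 0.01 * t + 1.25 * np.sin(2 * np.pi * t / 12) + rng.normal(0, 0.3, len(idx))
series = pd.Series(storage, index=idx, name="accumulated_net_storage")

result = seasonal_decompose(series, model="additive", period=12)

# Read the long-term (trend) level as the regulatory reserve and the
# amplitude of the seasonal component as the seasonal resource.
Wr = result.trend.mean()                            # ~regulatory reserve, Mm^3
Wd = result.seasonal.max() - result.seasonal.min()  # ~seasonal resource, Mm^3
print(f"Wr ≈ {Wr:.1f} Mm³, Wd ≈ {Wd:.1f} Mm³")
```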


Author(s):  
Mia S Lundkvist ◽  
Hans-Günter Ludwig ◽  
Remo Collet ◽  
Thomas Straus

Abstract The granulation background seen in the power spectrum of a solar-like oscillator poses a serious challenge for extracting precise and detailed information about the stellar oscillations. Using a 3D hydrodynamical simulation of the Sun computed with CO5BOLD, we investigate various background models to infer, using a Bayesian methodology, which one provides the best fit to the background in the simulated power spectrum. We find that the best fit is provided by an expression including the overall power level and two characteristic frequencies, one with an exponent of 2 and one with a free exponent taking on a value around 6. We assess the impact of the 3D hydro-code on this result by repeating the analysis with a simulation from Stagger and find that the main conclusion is unchanged. The details of the resulting best fits differ slightly between the two codes, but we explain this difference by studying the effect of the spatial resolution and the duration of the simulation on the fit. Additionally, we look into the impact of adding white noise to the simulated time series as a simple way to mimic a real star. We find that, as long as the noise level is not too low, the results are consistent with the no-noise case.
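For flavour, here is a minimal sketch of this kind of background fit on a synthetic power spectrum. The parameterization (one term with exponent 2, one with a free exponent, plus a white-noise offset) follows the abstract, but a least-squares fit stands in for the paper's Bayesian analysis, and all parameter values are illustrative.

```python
# Sketch: fit a background with an overall power level, one term with a
# fixed exponent of 2 and one with a free exponent, plus white noise, to a
# synthetic spectrum. Least squares replaces the paper's Bayesian fit.
import numpy as np
from scipy.optimize import curve_fit

def background(nu, p0, b1, b2, k, w):
    """Two characteristic frequencies b1, b2; exponents 2 and k; white noise w."""
    return p0 / (1 + (nu / b1) ** 2) + p0 / (1 + (nu / b2) ** k) + w

rng = np.random.default_rng(1)
nu = np.linspace(50, 8000, 4000)                    # frequency grid (muHz)
truth = background(nu, 40.0, 300.0, 1500.0, 6.0, 0.5)
spectrum = truth * rng.exponential(1.0, nu.size)    # chi^2 (2 d.o.f.) noise

popt, _ = curve_fit(background, nu, spectrum,
                    p0=[30.0, 200.0, 1000.0, 4.0, 1.0], maxfev=20000)
print("fitted free exponent k ≈", round(popt[3], 2))
```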


2020 ◽  
Vol 4 ◽  
pp. 27
Author(s):  
Daniel M. Weinberger ◽  
Joshua L. Warren

When evaluating the effects of vaccination programs, it is common to estimate changes in rates of disease before and after vaccine introduction. There are a number of related approaches that attempt to adjust for trends unrelated to the vaccine and to detect changes that coincide with introduction. However, characteristics of the data can influence the ability to estimate such a change. These include, but are not limited to, the number of years of available data prior to vaccine introduction, the expected strength of the effect of the intervention, the strength of underlying secular trends, and the amount of unexplained variability in the data. Sources of unexplained variability include model misspecification, epidemics due to unidentified pathogens, and changes in ascertainment or coding practice, among others. In this study, we present a simple simulation framework for estimating the power to detect a decline and the precision of these estimates. We use real-world data from a pre-vaccine period to generate simulated time series in which the vaccine effect is specified a priori, and we present an interactive web-based tool that implements this approach. We also demonstrate the approach using observed data on pneumonia hospitalizations from the states of Brazil, from a period prior to the introduction of pneumococcal vaccines, to generate the simulated time series. We relate the power of the hypothesis tests to the number of cases per year and to the amount of unexplained variability in the data, and demonstrate how having fewer years of data influences the results.
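The simulation logic can be sketched as follows, under simplifying assumptions: Poisson counts with a trend and annual seasonality, a 20% decline imposed a priori at introduction, and power estimated as the fraction of refits that detect the decline. Rates, effect size, and the model form are illustrative; the paper's interactive tool implements a richer version.

```python
# Minimal power-simulation sketch: generate pre-vaccine-like Poisson counts,
# impose a known vaccine effect, refit, and count significant declines.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
months = np.arange(120)                       # 10 years of monthly data
vaccine_on = (months >= 72).astype(float)     # introduction at year 6
X = np.column_stack([np.ones_like(months, dtype=float),
                     months / 12.0,
                     np.sin(2 * np.pi * months / 12),
                     np.cos(2 * np.pi * months / 12),
                     vaccine_on])

true_effect = np.log(0.8)                     # 20% decline, set a priori
n_sim, detected = 500, 0
for _ in range(n_sim):
    mu = np.exp(X @ np.array([np.log(200), 0.01, 0.2, 0.1, true_effect]))
    y = rng.poisson(mu)
    fit = sm.GLM(y, X, family=sm.families.Poisson()).fit()
    # Count a detection: a significant decline coinciding with introduction
    if fit.params[-1] < 0 and fit.pvalues[-1] < 0.05:
        detected += 1

print(f"estimated power ≈ {detected / n_sim:.2f}")
```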


2019 ◽  
Vol 11 (23) ◽  
pp. 2779 ◽  
Author(s):  
Katie Awty-Carroll ◽  
Pete Bunting ◽  
Andy Hardy ◽  
Gemma Bell

Access to temporally dense time series such as data from the Landsat and Sentinel-2 missions has led to an increase in methods which aim to monitor land cover change on a per-acquisition rather than a yearly basis. Evaluating the accuracy and limitations of these methods can be difficult because validation data are limited and often rely on human interpretation. Simulated time series offer an objective method for evaluating and comparing change detection algorithms. A set of simulated time series was used to evaluate four change detection methods: (1) Breaks For Additive Season and Trend (BFAST); (2) BFAST Monitor; (3) Continuous Change Detection and Classification (CCDC); and (4) Exponentially Weighted Moving Average Change Detection (EWMACD). In total, 151,200 simulations were generated to represent a range of abrupt, gradual, and seasonal changes. EWMACD was found to give the best performance overall, correctly identifying the true date of change in 76.6% of cases. CCDC performed worst (51.8%). BFAST performed well overall but correctly identified less than 10% of seasonal changes (changes in amplitude, length of season, or number of seasons). All methods showed some decrease in performance with increased noise and missing data, apart from BFAST Monitor, which improved when data were removed. The following recommendations are made as a starting point for future studies: EWMACD should be used for detection of lower-magnitude changes and changes in seasonality; CCDC should be used for robust detection of complete land cover class changes; EWMACD and BFAST are suitable for noisy datasets, depending on the application; and CCDC should be used where there are high quantities of missing data. The simulated datasets have been made freely available online as a foundation for future work.
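For a flavour of the EWMA family evaluated here, the sketch below implements an EWMACD-style monitor: a harmonic model fitted to a stable training window, an exponentially weighted moving average of the standardized residuals, and a control-limit crossing to flag change. The simulated break, season length, and thresholds are illustrative assumptions, not the benchmark's configuration.

```python
# Sketch of an EWMA-based change detector in the spirit of EWMACD.
import numpy as np

rng = np.random.default_rng(3)
n, break_at = 300, 200
t = np.arange(n)
signal = 0.5 + 0.2 * np.sin(2 * np.pi * t / 46)      # ~annual cycle
signal[break_at:] -= 0.25                            # abrupt change
y = signal + rng.normal(0, 0.03, n)

# Harmonic regression fitted on a pre-change training window
train = slice(0, 100)
H = np.column_stack([np.ones(n), np.sin(2 * np.pi * t / 46), np.cos(2 * np.pi * t / 46)])
coef, *_ = np.linalg.lstsq(H[train], y[train], rcond=None)
resid = y - H @ coef
sigma = resid[train].std()

lam, L = 0.3, 3.0                                    # EWMA weight, control limit
ewma, flagged = 0.0, None
for i in range(n):
    ewma = lam * (resid[i] / sigma) + (1 - lam) * ewma
    limit = L * np.sqrt(lam / (2 - lam) * (1 - (1 - lam) ** (2 * (i + 1))))
    if flagged is None and abs(ewma) > limit:
        flagged = i
print("change flagged at index", flagged, "(true break at", break_at, ")")
```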


Forecasting ◽  
2019 ◽  
Vol 1 (1) ◽  
pp. 189-204 ◽  
Author(s):  
Mahdi Kalantari ◽  
Hossein Hassani

Singular spectrum analysis (SSA) is a non-parametric forecasting and filtering method with many applications in a variety of fields, such as signal processing, economics and time series analysis. One of the four steps of SSA, the grouping step, plays a pivotal role because the reconstruction and forecasting results are directly affected by its outputs. The grouping step of SSA is usually time-consuming, as the interpretable components are selected manually. An alternative, more efficient approach is to apply automatic grouping methods. In this paper, a new dissimilarity measure between two components of a time series, based on various matrix norms, is first proposed. Then, using the new dissimilarity matrices, the capabilities of different hierarchical clustering linkages are compared in order to identify appropriate groups in the SSA grouping step. The performance of the proposed approach is assessed using the corrected Rand index as the validation criterion, on various real-world and simulated time series.
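The grouping idea can be sketched as follows: embed the series, take the SVD, compute a pairwise dissimilarity between elementary reconstructed components, and cluster hierarchically. A w-correlation-based dissimilarity stands in here for the paper's matrix-norm measures; the window length, number of components, and linkage are illustrative choices.

```python
# Sketch of automatic grouping in SSA via hierarchical clustering of
# elementary components (w-correlation dissimilarity as a stand-in).
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import squareform

def hankelize(X):
    """Average anti-diagonals of X back into a 1-D series."""
    n = X.shape[0] + X.shape[1] - 1
    return np.array([np.mean(X[::-1, :].diagonal(i - X.shape[0] + 1)) for i in range(n)])

rng = np.random.default_rng(4)
t = np.arange(200)
y = 0.02 * t + np.sin(2 * np.pi * t / 20) + rng.normal(0, 0.1, t.size)

L_win = 50                                            # SSA window length
X = np.column_stack([y[i:i + L_win] for i in range(t.size - L_win + 1)])
U, s, Vt = np.linalg.svd(X, full_matrices=False)
comps = [hankelize(s[i] * np.outer(U[:, i], Vt[i])) for i in range(10)]

# Dissimilarity from weighted correlations between elementary components
w = np.minimum(np.minimum(np.arange(1, t.size + 1), L_win), t.size - np.arange(t.size))
def wcorr(a, b):
    return np.sum(w * a * b) / np.sqrt(np.sum(w * a * a) * np.sum(w * b * b))
D = np.array([[1 - abs(wcorr(a, b)) for b in comps] for a in comps])
np.fill_diagonal(D, 0.0)

groups = fcluster(linkage(squareform(D), method="complete"), t=3, criterion="maxclust")
print("component groups:", groups)
```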


2019 ◽  
Vol 19 (01) ◽  
pp. 2050010
Author(s):  
Mahdi Kalantari ◽  
Hossein Hassani ◽  
Emmanuel Sirimal Silva

Singular Spectrum Analysis (SSA) is an increasingly popular time series filtering and forecasting technique. Owing to its widespread applications in a variety of fields, there is a growing interest in improving its forecasting capabilities. As such, this paper considers the Recurrent forecasting approach in SSA (SSA-R) and presents a new mechanism for improving the accuracy of forecasts attainable via this method. The proposed approach, referred to as Weighted SSA-R (W:SSA-R), uses a weighting algorithm for weighting the coefficients of the Linear Recurrent Relation (LRR). The performance of forecasts from the W:SSA-R approach is compared with forecasts from the established SSA-R approach. We use real data and various simulated time series for the comparison, so as to provide the reader with more conclusive findings. Our results confirm that the W:SSA-R approach can provide comparatively more accurate forecasts and is indeed a viable solution for improving forecasts by SSA.
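For orientation, here is a sketch of the underlying recurrent forecasting step (SSA-R): the last coordinates of the leading singular vectors yield the LRR coefficients, which are then iterated to produce forecasts. The optional reweighting of those coefficients is shown only as a placeholder, since the actual weighting algorithm is specific to the W:SSA-R proposal.

```python
# Sketch of recurrent SSA forecasting with a hypothetical coefficient
# reweighting hook (the W:SSA-R weighting itself is not reproduced here).
import numpy as np

def ssa_r_forecast(y, L, r, steps, weights=None):
    """Recurrent SSA forecast; `weights` is a placeholder for W:SSA-R."""
    K = y.size - L + 1
    X = np.column_stack([y[i:i + L] for i in range(K)])
    U = np.linalg.svd(X, full_matrices=False)[0][:, :r]
    pi = U[-1, :]                      # last coordinates of leading vectors
    nu2 = np.sum(pi ** 2)
    R = (U[:-1, :] @ pi) / (1 - nu2)   # LRR coefficients (length L-1)
    if weights is not None:            # hypothetical reweighting step
        R = R * weights
    out = list(y)
    for _ in range(steps):
        out.append(float(np.dot(R, out[-(L - 1):])))  # apply the LRR
    return np.array(out[-steps:])

t = np.arange(120)
y = np.sin(2 * np.pi * t / 12) + 0.01 * t
print(ssa_r_forecast(y, L=24, r=3, steps=6))
```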


Entropy ◽  
2019 ◽  
Vol 21 (3) ◽  
pp. 306 ◽  
Author(s):  
Jose Cánovas ◽  
Antonio Guillamón ◽  
María Ruiz-Abellón

Two distances based on permutations are considered to measure the similarity of two time series according to their strength of dependency. The distance measures are used together with different linkages to obtain hierarchical clusterings of time series by dependency. We apply these distances to both simulated and real data series. For simulated time series, the distances show good clustering results for both linear and non-linear dependencies. The effects of the embedding dimension and the linkage method are also analyzed. Finally, several real data series are properly clustered using the proposed method.
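One way to sketch a permutation-based dependency measure: estimate the mutual information between co-occurring ordinal patterns of two series and convert it into a dissimilarity. The paper's two distances are defined differently; this is an illustrative stand-in for the general idea, with the embedding dimension m as a free parameter.

```python
# Sketch: dependency-based dissimilarity from co-occurring ordinal patterns.
import numpy as np
from itertools import permutations
from math import factorial, log

def ordinal_patterns(x, m=3):
    """Code each length-m window of x by its ordinal pattern."""
    codes = {p: k for k, p in enumerate(permutations(range(m)))}
    return np.array([codes[tuple(np.argsort(x[i:i + m]))]
                     for i in range(len(x) - m + 1)])

def dependency_distance(x, y, m=3):
    """1 minus normalized mutual information of co-occurring patterns."""
    px, py = ordinal_patterns(x, m), ordinal_patterns(y, m)
    npat = factorial(m)
    joint = np.zeros((npat, npat))
    for a, b in zip(px, py):
        joint[a, b] += 1
    joint /= joint.sum()
    mx, my = joint.sum(axis=1), joint.sum(axis=0)
    mi = sum(joint[i, j] * log(joint[i, j] / (mx[i] * my[j]))
             for i in range(npat) for j in range(npat) if joint[i, j] > 0)
    hmin = min(-sum(p * log(p) for p in mx if p > 0),
               -sum(p * log(p) for p in my if p > 0))
    return 1.0 - mi / hmin if hmin > 0 else 1.0

rng = np.random.default_rng(5)
x = rng.normal(size=1000)
y = 0.8 * x + 0.2 * rng.normal(size=1000)   # strongly dependent on x
z = rng.normal(size=1000)                    # independent of x
print(dependency_distance(x, y), dependency_distance(x, z))  # small vs ≈1
```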


2018 ◽  
Vol 22 (9) ◽  
pp. 4633-4648 ◽  
Author(s):  
Alessio Pugliese ◽  
Simone Persiano ◽  
Stefano Bagli ◽  
Paolo Mazzoli ◽  
Juraj Parajka ◽  
...  

Abstract. Our study develops and tests a geostatistical technique for locally enhancing macro-scale rainfall–runoff simulations on the basis of observed streamflow data that were not used in calibration. We consider Tyrol (Austria and Italy) and two different types of daily streamflow data: macro-scale rainfall–runoff simulations at 11 prediction nodes and observations at 46 gauged catchments. The technique consists of three main steps: (1) period-of-record flow–duration curves (FDCs) are geostatistically predicted at target ungauged basins, for which macro-scale model runs are available; (2) residuals between geostatistically predicted FDCs and FDCs constructed from simulated streamflow series are computed; (3) the relationship between duration and residuals is used for enhancing simulated time series at target basins. We apply the technique in cross-validation to 11 gauged catchments, for which simulated and observed streamflow series are available over the period 1980–2010. Our results show that (1) the procedure can significantly enhance macro-scale simulations (regional LNSE increases from nearly zero to ≈0.7) and (2) improvements are significant for low gauging network densities (i.e. 1 gauge per 2000 km2).
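Steps (2)-(3) amount to a duration-wise correction of the simulated series, which can be sketched as follows. The "predicted" FDC here is synthetic, whereas in the study it comes from geostatistical prediction at the target basin; in effect this is a quantile-mapping-style enhancement.

```python
# Sketch: replace each simulated flow with the predicted-FDC flow at the
# same duration (exceedance probability). Predicted FDC is a stand-in.
import numpy as np

rng = np.random.default_rng(6)
sim = rng.lognormal(mean=1.0, sigma=0.8, size=3650)   # simulated daily flows

# Stand-in for the geostatistically predicted FDC at the target basin:
# flow quantiles tabulated against duration (exceedance probability).
d_grid = np.linspace(0.001, 0.999, 500)
pred_fdc = np.quantile(rng.lognormal(1.3, 0.6, 3650), 1 - d_grid)

# Steps 2-3: assign each simulated value its empirical duration, then
# look up the predicted-FDC flow at that same duration.
duration = 1 - (sim.argsort().argsort() + 1) / (sim.size + 1)
enhanced = np.interp(duration, d_grid, pred_fdc)
print(enhanced[:5])
```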


2017 ◽  
Author(s):  
Alessio Pugliese ◽  
Simone Persiano ◽  
Stefano Bagli ◽  
Paolo Mazzoli ◽  
Juraj Parajka ◽  
...  

Abstract. Our study develops and tests a geostatistical technique for locally enhancing macro-scale rainfall-runoff simulations on the basis of observed streamflow data that were not used in calibration. We consider Tyrol (Austria and Italy) and two different types of daily streamflow data: macro-scale rainfall-runoff simulations at 11 prediction nodes and observations at 46 gauged catchments. The technique consists of three main steps: (1) period-of-record flow-duration curves (FDCs) are geostatistically predicted at target ungauged basins, for which macro-scale model runs are available; (2) residuals between geostatistically predicted FDCs and FDCs constructed from simulated streamflow series are computed; (3) the relationship between duration and residuals is used for enhancing simulated time series at target basins. We apply the technique in cross-validation to 11 gauged catchments, for which simulated and observed streamflow series are available over the period 1980–2010. Our results show that (1) the procedure can significantly enhance macro-scale simulations (regional NSE increases from nearly zero to ≈ 0.7) and (2) improvements are significant for low gauging network densities (i.e. 1 gauge per 2000 km2).

