Correlation-aided method for identification and gradation of periodicities in hydrologic time series

2021 · Vol 8 (1)
Author(s): Ping Xie, Linqian Wu, Yan-Fang Sang, Faith Ka Shun Chan, Jie Chen, ...

Abstract. Identification of periodicities in hydrological time series and evaluation of their statistical significance are not only important for water-related studies but also challenging, owing to the complex variability of hydrological processes. In this article, we develop a "Moving Correlation Coefficient Analysis" (MCCA) method for identifying periodicities in a time series. The method uses the correlation between the original time series and a candidate periodic fluctuation as its criterion, seeking the periodic fluctuation that best fits the original series and evaluating its statistical significance. We take periodic components consisting of simple sinusoidal variation as an example and conduct statistical experiments, varying the relevant parameters, to verify the applicability and reliability of the method. Three other commonly used methods, the harmonic analysis method (HAM), the power spectrum method (PSM), and the maximum entropy method (MEM), are applied for comparison. The results indicate that the efficiency of each method correlates positively with the length and amplitude of the samples, negatively with the mean value, coefficient of variation, and length of the periodicity, and is unrelated to the initial phase of the periodicity. For time series with a larger noise component, the MCCA method performs best among the four. Hydrological case studies in the Yangtze River basin further confirm that the MCCA method outperforms the other three methods in identifying periodicities in hydrologic time series.
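
The abstract does not give the algorithm's internals, but its criterion can be sketched in a few lines of Python. In the sketch below the periodic fluctuation for each candidate period is estimated by phase-averaging, which is an assumption for illustration rather than the authors' exact construction, and the Pearson correlation with the original series serves as the score; the significance test the paper emphasizes is omitted.

```python
import numpy as np

def periodic_fit_correlation(x, period):
    """Correlation between a series and its best-fitting periodic component.

    The periodic component is estimated here by phase-averaging: all samples
    sharing the same position within the period are averaged, and the result
    is tiled back over the full series length.
    """
    phases = np.arange(len(x)) % period
    component = np.array([x[phases == p].mean() for p in range(period)])
    fitted = component[phases]                 # periodic fluctuation, tiled
    return np.corrcoef(x, fitted)[0, 1]

def scan_periods(x, max_period):
    """Score every candidate period by its correlation with the series."""
    return {p: periodic_fit_correlation(x, p) for p in range(2, max_period + 1)}

# toy example: a 12-sample periodicity buried in noise
rng = np.random.default_rng(0)
t = np.arange(360)
x = 10 + 2 * np.sin(2 * np.pi * t / 12) + rng.normal(0, 1, t.size)
scores = scan_periods(x, 36)
top = sorted(scores, key=scores.get, reverse=True)[:3]
print(top)   # the true period (12) and its multiples (24, 36) dominate
```

Note that integer multiples of the true period fit the series equally well (and overfit slightly more noise), which is exactly why a significance evaluation, as in the paper, is needed to grade the candidates.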

2021 · Vol 13 (2) · pp. 542
Author(s): Tarate Suryakant Bajirao, Pravendra Kumar, Manish Kumar, Ahmed Elbeltagi, Alban Kuriqi

Estimating the sediment flow rate from a drainage area plays an essential role in better watershed planning and management. In this study, the validity of simple and wavelet-coupled Artificial Intelligence (AI) models was analyzed for daily Suspended Sediment Concentration (SSC) estimation in the highly dynamic Koyna River basin of India. Simple AI models such as the Artificial Neural Network (ANN) and the Adaptive Neuro-Fuzzy Inference System (ANFIS) were developed by supplying the original time series data as input, without pre-processing through a Wavelet (W) transform. The hybrid wavelet-coupled W-ANN and W-ANFIS models were developed by supplying the time series sub-signals decomposed using the Discrete Wavelet Transform (DWT). In total, three mother wavelets, namely Haar, Daubechies, and Coiflets, were employed to decompose the original time series into multi-frequency sub-signals at an appropriate decomposition level. Quantitative and qualitative performance evaluation criteria were used to select the best model for daily SSC estimation, and the reliability of the developed models was assessed using uncertainty analysis. The analysis revealed that pre-processing the data with a wavelet transform significantly improves a model's predictive efficiency and reliability. The Coiflet wavelet-coupled ANFIS model outperformed the other models and can be applied for daily SSC estimation in highly dynamic rivers. According to the sensitivity analysis, the previous day's SSC (St-1) is the most important input variable for daily SSC estimation in the Koyna River basin.
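
As a rough illustration of the wavelet-coupling idea, the sketch below decomposes a synthetic series with PyWavelets and feeds the sub-signals to scikit-learn's MLPRegressor as a stand-in for the study's ANN/ANFIS models; the wavelet ('coif1'), the decomposition level, the lag-one input, and the synthetic data are all assumptions for the demo, not the paper's settings.

```python
import numpy as np
import pywt                                  # PyWavelets
from sklearn.neural_network import MLPRegressor

def dwt_subsignals(x, wavelet="coif1", level=3):
    """Split a series into multi-frequency sub-signals via the DWT.

    Each sub-signal is reconstructed from one coefficient band with all
    other bands zeroed, so the sub-signals sum back to the original series.
    """
    coeffs = pywt.wavedec(x, wavelet, level=level)
    subs = []
    for i in range(len(coeffs)):
        masked = [c if j == i else np.zeros_like(c) for j, c in enumerate(coeffs)]
        subs.append(pywt.waverec(masked, wavelet)[: len(x)])
    return np.column_stack(subs)             # shape (n, level + 1)

# hypothetical setup: sub-signals of yesterday's SSC predict today's SSC
rng = np.random.default_rng(1)
ssc = np.abs(np.cumsum(rng.normal(0, 1, 500)))      # synthetic stand-in series
X = dwt_subsignals(ssc[:-1])                        # features from S(t-1)
y = ssc[1:]                                         # target S(t)
model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
model.fit(X[:400], y[:400])
print(model.score(X[400:], y[400:]))                # R^2 on held-out data
```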


Entropy · 2021 · Vol 23 (6) · pp. 659
Author(s): Jue Lu, Ze Wang

Entropy indicates the irregularity or randomness of a dynamic system. Over the decades, entropy calculated at different scales of the system, through subsampling or coarse graining, has been used as a surrogate measure of system complexity. One popular multi-scale entropy analysis is multi-scale sample entropy (MSE), which calculates entropy through the sample entropy (SampEn) formula at each time scale. SampEn is defined by the logarithmic likelihood that a small section of the data (within a window of length m) that "matches" other sections will still "match" them when the window length increases by one, where a "match" means the distance falls below a threshold of r times the standard deviation of the entire time series. A problem with the current MSE algorithm is that SampEn calculations at different scales all use the same matching threshold, defined by the original time series, even though the data's standard deviation changes with the subsampling scale. Using a fixed threshold therefore introduces a systematic bias into the results. The purpose of this paper is to present this systematic bias mathematically and to provide methods for correcting it. Our work will help the large MSE user community avoid introducing this bias into their multi-scale SampEn calculations.
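
A minimal sketch of the issue, using a simplified SampEn: with rescale_r=False the threshold is fixed from the original series (the conventional, biased choice), while rescale_r=True recomputes r from each coarse-grained series' own standard deviation, which is one way to remove the bias; the paper's own correction may differ in detail.

```python
import numpy as np

def sampen(x, m=2, r=0.2):
    """Sample entropy with Chebyshev distance; r is in absolute units."""
    n = len(x)
    def count_matches(mm):
        templates = np.array([x[i:i + mm] for i in range(n - mm + 1)])
        c = 0
        for i in range(len(templates)):
            d = np.max(np.abs(templates[i + 1:] - templates[i]), axis=1)
            c += np.sum(d < r)
        return c
    b, a = count_matches(m), count_matches(m + 1)
    return -np.log(a / b)

def mse(x, m=2, r_factor=0.15, max_scale=5, rescale_r=True):
    """Multi-scale sample entropy via coarse graining.

    rescale_r=True derives the threshold from each coarse-grained series;
    rescale_r=False reproduces the conventional fixed-threshold behaviour.
    """
    ent = []
    for s in range(1, max_scale + 1):
        n = len(x) // s
        cg = x[: n * s].reshape(n, s).mean(axis=1)   # coarse-grained series
        sd = cg.std() if rescale_r else x.std()
        ent.append(sampen(cg, m, r_factor * sd))
    return ent

rng = np.random.default_rng(2)
noise = rng.normal(0, 1, 2000)
print(mse(noise, rescale_r=False))  # entropy drops spuriously with scale
print(mse(noise, rescale_r=True))   # roughly constant, as white noise should be
```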


2006 · Vol 6 (6) · pp. 11957-11970
Author(s): C. Varotsos, M.-N. Assimakopoulos, M. Efstathiou

Abstract. The monthly mean values of the atmospheric carbon dioxide concentration derived from in-situ air samples collected at Mauna Loa Observatory, Hawaii, during 1958–2004 (the longest continuous record available in the world) are analyzed with detrended fluctuation analysis to detect scaling behavior in the time series. The main result is that the fluctuations of carbon dioxide concentrations exhibit long-range power-law correlations (long memory) at lag times ranging from four months to eleven years, corresponding to 1/f noise. This indicates that random perturbations in the carbon dioxide concentrations give rise to noise with a frequency spectrum following a power law whose exponent approaches one, which shows that the correlation times grow strongly; a correctly rescaled subset of the original time series of carbon dioxide concentrations therefore resembles the original time series. Finally, the power-law relationship derived from the real measurements of carbon dioxide concentrations could also serve as a tool to improve the confidence of atmospheric chemistry-transport and global climate models.
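
For readers unfamiliar with the technique, here is a minimal detrended fluctuation analysis: integrate the series, remove a linear trend within windows of each size, and read the scaling exponent off the log-log slope of the fluctuation function. The scales, detrending order, and white-noise input are illustrative choices, not those of the study.

```python
import numpy as np

def dfa(x, scales):
    """Detrended fluctuation analysis: returns F(n) for each window size n.

    The series is integrated, cut into non-overlapping windows of size n,
    a linear trend is removed per window, and the RMS fluctuation is kept.
    """
    y = np.cumsum(x - np.mean(x))                  # integrated (profile) series
    fluct = []
    for n in scales:
        n_win = len(y) // n
        f2 = []
        for w in range(n_win):
            seg = y[w * n:(w + 1) * n]
            t = np.arange(n)
            trend = np.polyval(np.polyfit(t, seg, 1), t)
            f2.append(np.mean((seg - trend) ** 2))
        fluct.append(np.sqrt(np.mean(f2)))
    return np.array(fluct)

rng = np.random.default_rng(3)
x = rng.normal(size=4096)                          # white noise: expect alpha ~ 0.5
scales = np.unique(np.logspace(2, 9, 15, base=2).astype(int))
F = dfa(x, scales)
alpha = np.polyfit(np.log(scales), np.log(F), 1)[0]
print(round(alpha, 2))   # scaling exponent; alpha ~ 1 corresponds to 1/f noise
```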


2012 · Vol 518-523 · pp. 4039-4042
Author(s): Zhen Min Zhou

In order to improve the precision of medium- and long-term rainfall forecasts, a rainfall estimation model was set up based on wavelet analysis and the support vector machine (WA-SVM). It decomposes the original rainfall series into different layers through wavelet analysis, forecasts each layer by means of an SVM, and finally obtains the forecast of the original time series by recomposition. The model was used to estimate the monthly rainfall sequence in the watershed. Compared with a method that uses the support vector machine (SVM) alone, the estimation accuracy improved markedly.
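
A compact sketch of the scheme described, using PyWavelets and scikit-learn's SVR; the 'db4' wavelet, lag structure, and SVR hyperparameters are assumptions. For brevity the full series is decomposed before the test period is split off, which leaks future information into the sub-series; a faithful application would decompose the training data only.

```python
import numpy as np
import pywt
from sklearn.svm import SVR

def lagged(series, n_lags=3):
    """Build (X, y) pairs where the previous n_lags values predict the next."""
    X = np.array([series[i:i + n_lags] for i in range(len(series) - n_lags)])
    return X, series[n_lags:]

def wa_svm_forecast(rain, wavelet="db4", level=2, n_lags=3, n_test=24):
    """Forecast each wavelet sub-series with its own SVR, then sum them."""
    coeffs = pywt.wavedec(rain, wavelet, level=level)
    total = np.zeros(n_test)
    for i in range(len(coeffs)):
        masked = [c if j == i else np.zeros_like(c) for j, c in enumerate(coeffs)]
        sub = pywt.waverec(masked, wavelet)[: len(rain)]   # one layer
        X, y = lagged(sub, n_lags)
        model = SVR(kernel="rbf", C=10.0).fit(X[:-n_test], y[:-n_test])
        total += model.predict(X[-n_test:])                # recompose by summation
    return total

rng = np.random.default_rng(4)
months = np.arange(240)
rain = 80 + 40 * np.sin(2 * np.pi * months / 12) + rng.normal(0, 10, 240)
pred = wa_svm_forecast(rain)
print(np.sqrt(np.mean((pred - rain[-24:]) ** 2)))          # RMSE, 24-month holdout
```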


Author(s): Hong-Guang Ma, Chun-Liang Zhang, Fu Li

In this paper, a new method of state space reconstruction is proposed for nonstationary time series. The nonstationary time series is first converted into its analytic form via the Hilbert transform, which retains both the nonstationarity and the nonlinear dynamics of the original time series. The instantaneous phase angle θ is then extracted from the series, and its first- and second-order derivatives θ̇ and θ̈ are calculated. It is mathematically proved that the vector field [θ, θ̇, θ̈] is a state space of the original time series. The proposed method does not rely on the stationarity of the time series and applies to both stationary and nonstationary series. Simulation tests were conducted on stationary and nonstationary chaotic time series, and a powerful tool, the scale-dependent Lyapunov exponent (SDLE), is introduced to identify nonstationarity and chaotic motion embedded in the time series. The effectiveness of the proposed method is validated.
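
The construction lends itself to a short sketch with SciPy: take the analytic signal via the Hilbert transform, unwrap the instantaneous phase, and differentiate numerically. The chirp input and gradient-based derivatives are illustrative choices, not the paper's numerical scheme.

```python
import numpy as np
from scipy.signal import hilbert

def phase_state_space(x, dt=1.0):
    """Reconstruct the state space [theta, theta', theta''] from a scalar series.

    The analytic signal x(t) + i*H[x](t) yields the instantaneous phase;
    unwrapping removes the 2*pi jumps, and numerical gradients give the
    first and second derivatives of the phase angle.
    """
    analytic = hilbert(x - np.mean(x))
    theta = np.unwrap(np.angle(analytic))     # instantaneous phase angle
    dtheta = np.gradient(theta, dt)
    ddtheta = np.gradient(dtheta, dt)
    return np.column_stack([theta, dtheta, ddtheta])

# toy example: a chirp, i.e. a nonstationary oscillation
t = np.linspace(0, 10, 2000)
x = np.sin(2 * np.pi * (1 + 0.3 * t) * t)
states = phase_state_space(x, dt=t[1] - t[0])
print(states.shape)                           # (2000, 3)
```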


2013 · Vol 2013 · pp. 1-7
Author(s): Jingpei Dan, Weiren Shi, Fangyan Dong, Kaoru Hirota

A time series representation, piecewise trend approximation (PTA), is proposed to improve the efficiency of time series data mining in high-dimensional large databases. PTA represents a time series in concise form while retaining its main trends: the dimensionality of the original data is reduced while the key features are maintained. Unlike representations based on the original data space, PTA transforms the data into the feature space of ratios between consecutive data points, in which the sign and magnitude of a ratio indicate the direction and degree of the local trend, respectively. In this ratio-based feature space, the series is segmented so that every two adjacent segments have different trends, and each segment is then approximated by the ratio between its first and last points. To validate PTA, it is compared with the classical time series representations PAA and APCA on two standard datasets using the commonly used K-NN classification algorithm. PTA achieves 3.55% and 2.33% higher classification accuracy, respectively, on the ControlChart dataset, and 8.94% and 7.07% higher on the Mixed-BagShapes dataset. This indicates that the proposed PTA is effective for high-dimensional time series data mining.
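
A toy sketch of the idea, under the assumption of strictly positive data so that consecutive-point ratios are well defined; the actual PTA segmentation and approximation rules are more refined than this simple trend-flip heuristic.

```python
import numpy as np

def pta(x):
    """Piecewise trend approximation (simplified sketch).

    Works in the feature space of ratios between consecutive points:
    a new segment starts whenever the local trend direction flips, and
    each segment is summarised as (end_index, last/first value ratio).
    """
    ratios = x[1:] / x[:-1]                 # assumes strictly positive data
    trend = np.sign(ratios - 1)             # +1 rising, -1 falling
    segments = []
    start = 0
    for i in range(1, len(trend)):
        if trend[i] != trend[i - 1]:        # a trend flip closes the segment
            segments.append((i, x[i] / x[start]))
            start = i
    segments.append((len(x) - 1, x[-1] / x[start]))
    return segments

x = np.array([10.0, 11, 12.5, 12, 11, 11.5, 13, 14])
print(pta(x))   # [(2, 1.25), (4, 0.88), (7, 1.27...)]: rise, fall, rise
```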


2012 · Vol 04 (04) · pp. 1250023
Author(s): Kenji Kume

Singular spectrum analysis is a nonparametric and adaptive spectral decomposition of a time series. The method consists of a singular value decomposition of the trajectory matrix constructed from the original time series, followed by reconstruction of the decomposed series. In the present paper, we show that these procedures can be viewed simply as a complete eigenfilter decomposition of the time series. The eigenfilters are constructed from the singular vectors of the trajectory matrix, and the completeness of the singular vectors ensures the completeness of the eigenfilters. This interpretation gives new insight into singular spectrum analysis.
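
The two procedures the paper reinterprets can be written compactly: build the Hankel trajectory matrix, take its SVD, and map each rank-one term back to a series by diagonal (anti-diagonal) averaging. The sketch below is standard basic SSA, not the eigenfilter formulation itself; the window length is an arbitrary demo choice.

```python
import numpy as np

def ssa(x, window):
    """Singular spectrum analysis: decompose x into additive components.

    Builds the Hankel trajectory matrix, takes its SVD, and turns each
    rank-one term back into a series by anti-diagonal averaging.
    """
    n = len(x)
    k = n - window + 1
    traj = np.column_stack([x[i:i + window] for i in range(k)])   # L x K Hankel
    u, s, vt = np.linalg.svd(traj, full_matrices=False)
    components = []
    for i in range(len(s)):
        elem = s[i] * np.outer(u[:, i], vt[i])      # rank-one elementary matrix
        comp = np.array([np.mean(elem[::-1].diagonal(j - window + 1))
                         for j in range(n)])        # anti-diagonal averaging
        components.append(comp)
    return np.array(components)

rng = np.random.default_rng(5)
t = np.arange(200)
x = 0.02 * t + np.sin(2 * np.pi * t / 20) + rng.normal(0, 0.2, 200)
comps = ssa(x, window=40)
print(np.allclose(comps.sum(axis=0), x))            # True: decomposition is complete
```

The completeness the abstract refers to is visible in the last line: the reconstructed components sum back to the original series exactly.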


2020
Author(s): Qiang Yu, Cheng Wang, Jing Xi, Ying Chen, Weifeng Li, ...

Abstract. Background: In the intensive care unit (ICU), excessive false alarms place a heavy burden on medical staff and waste medical resources. To alleviate false alarms in the ICU, we constructed classification models using convolutional neural networks, which operate directly on time series and avoid manual feature extraction. Results: Combined with a grouping strategy, we tried two basic network structures, DGCN and EDGCN. Based on EDGCN, which proved better, we then built network ensembles to raise performance further. Given the limited sample size, different data augmentation schemes were also tested. Finally, we evaluated our model in the online sandbox and obtained a score of 78.14. Conclusions: Although this performance is slightly below the best reported scores, our models are end-to-end: the original time series is mapped automatically to a binary output without manual feature extraction. In addition, our method innovatively uses grouped convolution to make full use of the information in multi-channel signals. We also discuss potential ways to raise performance further.
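
The grouped-convolution idea can be illustrated with a minimal PyTorch model. This is a generic sketch, not the DGCN/EDGCN architectures; the channel count, kernel sizes, and layer widths are invented for the example.

```python
import torch
import torch.nn as nn

class GroupedConvNet(nn.Module):
    """Minimal end-to-end alarm classifier with a grouped 1-D convolution.

    groups=n_channels makes the first layer filter each physiological
    channel (e.g. ECG, ABP, PPG) separately before features are mixed,
    which is the point of grouped convolution on multi-channel signals.
    """
    def __init__(self, n_channels=3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(n_channels, 8 * n_channels, kernel_size=7,
                      groups=n_channels, padding=3),   # per-channel filters
            nn.ReLU(),
            nn.MaxPool1d(4),
            nn.Conv1d(8 * n_channels, 32, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.classifier = nn.Linear(32, 1)             # binary: true/false alarm

    def forward(self, x):                              # x: (batch, channels, time)
        z = self.features(x).squeeze(-1)
        return torch.sigmoid(self.classifier(z))

model = GroupedConvNet()
waveforms = torch.randn(4, 3, 1000)                    # 4 alarms, 3 channels
print(model(waveforms).shape)                          # torch.Size([4, 1])
```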


2021
Author(s): David Howe

Statistical imputation is a field of study that attempts to fill missing data. It is commonly applied to population statistics whose data have no correlation with running time. A time series, by contrast, is typically analyzed using the autocorrelation function (ACF), the Fourier transform for estimating power spectral densities (PSD), the Allan deviation (ADEV), trend extensions, and essentially any analysis that depends on uniform time indexes. We explain the rationale for an imputation algorithm that fills gaps in a time series by applying a backward, inverted replica of adjacent live data. To illustrate, four intentional massive gaps that exceed 100% of the original time series are recovered. The L(f) PSD with imputation applied to the gaps is nearly indistinguishable from the original, and the confidence of ADEV with imputation falls within 90% of the original ADEV for mixtures of power-law noises. The algorithm, in Python, is included for those wishing to try it.
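
The paper ships its own Python implementation; the sketch below merely illustrates the stated idea of a backward, inverted replica, reversing the segment adjacent to the gap and reflecting it about its endpoint so the fill joins the live data continuously. Details such as the donor side and any scaling are assumptions here.

```python
import numpy as np

def fill_gap(x, start, stop):
    """Fill x[start:stop] with a backward, inverted replica of adjacent data.

    The segment just before the gap is reversed in time and reflected
    about its last value, so the filled data continues the local trend
    and preserves the noise character of the live data.
    """
    gap = stop - start
    donor = x[start - gap:start]            # live data adjacent to the gap
    replica = 2 * donor[-1] - donor[::-1]   # time-reversed, amplitude-inverted
    filled = x.copy()
    filled[start:stop] = replica
    return filled

rng = np.random.default_rng(6)
x = np.cumsum(rng.normal(0, 1, 1000))       # random-walk-like data
x_gapped = x.copy()
x_gapped[400:500] = np.nan                  # a massive contiguous gap
x_imputed = fill_gap(x_gapped, 400, 500)
print(np.isnan(x_imputed).any())            # False: the gap is filled
```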

