Precipitation Forecast Based on WA-SVM Theory

2012 ◽  
Vol 518-523 ◽  
pp. 4039-4042
Author(s):  
Zhen Min Zhou

In order to improve the precision of medium- and long-term rainfall forecasts, a rainfall estimation model was set up based on wavelet analysis and support vector machines (WA-SVM). The model decomposes the original rainfall series into different layers through wavelet analysis, forecasts each layer by means of an SVM, and finally obtains the forecast of the original time series by recombining the layer forecasts. The model was used to estimate the monthly rainfall sequence in the watershed. Compared with a model that uses only a support vector machine (SVM), the estimation accuracy is clearly improved.
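
As a rough illustration of the decompose–forecast–recombine scheme described above, the following sketch uses PyWavelets and scikit-learn; the db4 wavelet, decomposition level, lag length, SVR settings, and synthetic rainfall record are illustrative assumptions, not the choices of the original study.

```python
import numpy as np
import pywt
from sklearn.svm import SVR

def wa_svm_forecast(series, wavelet="db4", level=3, lags=12):
    """Decompose the series into additive layers, forecast each layer one step
    ahead with an SVM regressor, then sum the layer forecasts."""
    x = np.asarray(series, dtype=float)
    coeffs = pywt.wavedec(x, wavelet, level=level)
    # Reconstruct one additive layer per coefficient band (approximation + details).
    layers = []
    for i in range(len(coeffs)):
        keep = [np.zeros_like(c) for c in coeffs]
        keep[i] = coeffs[i]
        layers.append(pywt.waverec(keep, wavelet)[: len(x)])
    forecast = 0.0
    for layer in layers:
        # Lagged windows of this layer as features, next value as target.
        X = np.array([layer[i:i + lags] for i in range(len(layer) - lags)])
        y = layer[lags:]
        model = SVR(kernel="rbf", C=10.0, epsilon=0.01).fit(X, y)
        # One-step-ahead forecast of this layer from its most recent window.
        forecast += model.predict(layer[-lags:].reshape(1, -1))[0]
    return forecast

# Stand-in monthly rainfall record (mm), purely for demonstration.
rng = np.random.default_rng(0)
rainfall = np.abs(rng.normal(80, 30, 240))
print(wa_svm_forecast(rainfall))
```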

2000 ◽  
Vol 278 (6) ◽  
pp. R1446-R1452 ◽  
Author(s):  
Xiaobin Zhang ◽  
Eugene N. Bruce

The correlation structure of breath-to-breath fluctuations of end-expiratory lung volume (EEV) was studied in anesthetized rats with intact airways subjected to positive and negative transrespiratory pressure (PTRP and NTRP, respectively). The Hurst exponent, H, was estimated from the EEV fluctuations using modified dispersional analysis. We found that H for EEV was 0.5362 ± 0.0763 with PTRP and 0.6403 ± 0.0561 with NTRP (mean ± SD). Both values of H were significantly different from those obtained after random shuffling of the original time series. Also, H with NTRP was significantly greater than that with PTRP (P = 0.029). We conclude that in rats breathing through the upper airway, a positive long-term correlation is present in EEV that differs between PTRP and NTRP.
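
A minimal sketch of dispersional analysis for estimating H, assuming the standard SD(m) ∝ m^(H-1) scaling of aggregated bin means; the bin sizes, the synthetic stand-in series, and the shuffle comparison are illustrative and do not reproduce the authors' modified procedure.

```python
import numpy as np

def hurst_dispersional(x, bin_sizes=(2, 4, 8, 16, 32)):
    """Estimate H from the scaling SD(m) ~ m^(H-1) of the means of bins of size m."""
    x = np.asarray(x, dtype=float)
    log_m, log_sd = [], []
    for m in bin_sizes:
        n_bins = len(x) // m
        if n_bins < 2:
            continue
        # Mean of each non-overlapping bin of length m.
        bin_means = x[: n_bins * m].reshape(n_bins, m).mean(axis=1)
        log_m.append(np.log(m))
        log_sd.append(np.log(bin_means.std(ddof=1)))
    slope = np.polyfit(log_m, log_sd, 1)[0]
    return slope + 1.0            # SD(m) ∝ m^(H-1)  =>  H = slope + 1

# Stand-in for a breath-to-breath EEV fluctuation series (moving-average noise,
# i.e., positively correlated), compared against a randomly shuffled surrogate.
rng = np.random.default_rng(0)
eev = np.convolve(rng.standard_normal(620), np.ones(20) / 20, mode="valid")
print(hurst_dispersional(eev))                   # correlated series: H above 0.5
print(hurst_dispersional(rng.permutation(eev)))  # shuffled surrogate: H near 0.5
```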


2014 ◽  
Vol 11 (91) ◽  
pp. 20130585 ◽  
Author(s):  
Bernard Cazelles ◽  
Kévin Cazelles ◽  
Mario Chavez

Wavelet analysis is now frequently used to extract information from ecological and epidemiological time series. Statistical hypothesis tests are conducted on the associated wavelet quantities to assess the likelihood that they are due to a random process. Such random processes represent null models and are generally based on synthetic data that share some statistical characteristics with the original time series, which allows null statistics to be compared with those obtained from the original series. When creating synthetic datasets, different resampling techniques result in different characteristics being shared by the synthetic time series, so it becomes crucial to consider the impact of the resampling method on the results. We have addressed this point by comparing seven different statistical testing methods applied to different real and simulated data. Our results show that the statistical assessment of periodic patterns is strongly affected by the choice of resampling method: two different resampling techniques can lead to two different conclusions about the same time series. Moreover, our results clearly show the inadequacy of resampling series generated by white noise and red noise, which are nevertheless the methods currently used in the vast majority of wavelet applications. Our results highlight that the characteristics of a time series, namely its Fourier spectrum and autocorrelation, are important to consider when choosing the resampling technique, and they suggest that data-driven resampling methods such as the hidden Markov model algorithm and the ‘beta-surrogate’ method should be used.
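
The following sketch illustrates, under stated assumptions, how two of the resampling null models discussed above (shuffled "white noise" surrogates versus AR(1) "red noise" surrogates) can yield different significance levels for the same wavelet statistic; the peak-power test statistic and all parameters are illustrative and do not reproduce the seven methods compared in the paper.

```python
import numpy as np
import pywt

def peak_wavelet_power(x, scales=np.arange(2, 64)):
    """Maximum continuous-wavelet-transform power of the centered series."""
    coef, _ = pywt.cwt(x - x.mean(), scales, "morl")
    return np.abs(coef).max()

def white_surrogate(x, rng):
    # Random shuffle: destroys all temporal structure ("white noise" null).
    return rng.permutation(x)

def red_surrogate(x, rng):
    # AR(1) surrogate matching the lag-1 autocorrelation and variance of x ("red noise" null).
    phi = np.corrcoef(x[:-1], x[1:])[0, 1]
    noise = rng.standard_normal(len(x)) * np.std(x) * np.sqrt(1 - phi**2)
    s = np.zeros(len(x))
    for t in range(1, len(x)):
        s[t] = phi * s[t - 1] + noise[t]
    return s

def surrogate_pvalue(x, make_surrogate, n=200, seed=0):
    """Fraction of surrogates whose peak wavelet power exceeds the observed one."""
    rng = np.random.default_rng(seed)
    obs = peak_wavelet_power(x)
    null = [peak_wavelet_power(make_surrogate(x, rng)) for _ in range(n)]
    return np.mean([s >= obs for s in null])

# Example: the two null models can give different p-values for the same series.
rng = np.random.default_rng(1)
series = np.sin(2 * np.pi * np.arange(256) / 32) + rng.standard_normal(256)
print(surrogate_pvalue(series, white_surrogate), surrogate_pvalue(series, red_surrogate))
```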


Author(s):  
ANOUAR BEN MABROUK ◽  
HEDI KORTAS ◽  
ZOUHAIER DHIFAOUI

In this paper, a hybrid scheme for time series prediction is developed based on wavelet decomposition combined with Bayesian Least Squares Support Vector Machine regression. As a filtering step, the original time series is decomposed on a scale-by-scale basis using the Maximal Overlap Discrete Wavelet Transform, yielding a set of new time series with simpler temporal dynamics. Next, a Bayesian Least Squares Support Vector Machine predictor is fitted at each scale. The individual scale predictions are subsequently recombined to yield an overall forecast. The relevance of the suggested procedure is shown on the NINO3 SST anomaly index via a comparison with existing methods for modeling and prediction.
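
A minimal sketch of the scale-by-scale prediction idea, assuming PyWavelets' undecimated multiresolution analysis as a stand-in for the Maximal Overlap Discrete Wavelet Transform and scikit-learn's kernel ridge regression as a stand-in for the Bayesian Least Squares Support Vector Machine; the wavelet, level, regularization settings, and synthetic index are illustrative only.

```python
import numpy as np
import pywt
from sklearn.kernel_ridge import KernelRidge

def scale_by_scale_forecast(series, wavelet="db4", level=3, lags=6):
    """Decompose into undecimated scale components, predict each scale, recombine."""
    x = np.asarray(series, dtype=float)
    x = x[: len(x) - len(x) % 2**level]   # the undecimated transform needs a length divisible by 2**level
    # Additive multiresolution components; every component keeps the full time axis.
    components = pywt.mra(x, wavelet, level=level, transform="swt")
    forecast = 0.0
    for comp in components:
        X = np.array([comp[i:i + lags] for i in range(len(comp) - lags)])
        y = comp[lags:]
        model = KernelRidge(kernel="rbf", alpha=0.1, gamma=0.5).fit(X, y)
        forecast += model.predict(comp[-lags:].reshape(1, -1))[0]
    return forecast

# Stand-in for an SST anomaly index (seasonal cycle plus noise), for demonstration only.
rng = np.random.default_rng(4)
sst_anomaly = np.sin(2 * np.pi * np.arange(512) / 60) + 0.3 * rng.standard_normal(512)
print(scale_by_scale_forecast(sst_anomaly))
```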


2014 ◽  
Vol 1044-1045 ◽  
pp. 955-958
Author(s):  
Xing Jin

A common debris-detection approach to monitoring engine condition is to establish the relationship between the wear state and the elements found in the debris. To accomplish this, ARIMA and auto-regressive (AR) models are set up by analyzing the original time series built from the debris data. The trend of engine wear over an overhaul cycle can then be identified, and the best model can be selected according to how well it fits the real debris data. In this way the original time series is characterized and a time series model for monitoring engine condition can be set up. This paper thus provides a reference for monitoring engine condition and other related fields.
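
A minimal sketch of fitting and comparing AR and ARIMA models to a debris time series with statsmodels; the synthetic element-concentration data, the model orders, and the AIC-based selection are illustrative assumptions rather than the paper's procedure.

```python
import numpy as np
from statsmodels.tsa.ar_model import AutoReg
from statsmodels.tsa.arima.model import ARIMA

# Stand-in for the concentration of a wear element (e.g., Fe) in successive
# oil-debris samples over an overhaul cycle: a slowly rising, noisy trend.
rng = np.random.default_rng(1)
debris = 20.0 + np.cumsum(rng.normal(0.5, 1.0, 120))

# Fit a pure auto-regressive model and an ARIMA model to the same series.
ar_fit = AutoReg(debris, lags=3).fit()
arima_fit = ARIMA(debris, order=(1, 1, 1)).fit()

# Keep the model with the lower AIC and project the wear trend ahead.
best = ar_fit if ar_fit.aic < arima_fit.aic else arima_fit
print(best.forecast(steps=5))   # five-step-ahead wear projection
```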


2021 ◽  
Vol 13 (2) ◽  
pp. 542
Author(s):  
Tarate Suryakant Bajirao ◽  
Pravendra Kumar ◽  
Manish Kumar ◽  
Ahmed Elbeltagi ◽  
Alban Kuriqi

Estimating the sediment flow rate from a drainage area plays an essential role in better watershed planning and management. In this study, the validity of simple and wavelet-coupled Artificial Intelligence (AI) models was analyzed for daily Suspended Sediment Concentration (SSC) estimation in the highly dynamic Koyna River basin of India. Simple AI models such as the Artificial Neural Network (ANN) and the Adaptive Neuro-Fuzzy Inference System (ANFIS) were developed by supplying the original time series data as input without pre-processing through a wavelet (W) transform. The hybrid wavelet-coupled W-ANN and W-ANFIS models were developed by supplying the decomposed time series sub-signals obtained with the Discrete Wavelet Transform (DWT). Three mother wavelets, namely Haar, Daubechies, and Coiflet, were employed to decompose the original time series data into multi-frequency sub-signals at an appropriate decomposition level. Quantitative and qualitative performance evaluation criteria were used to select the best model for daily SSC estimation, and the reliability of the developed models was also assessed using uncertainty analysis. It was revealed that data pre-processing using the wavelet transform significantly improves the models' predictive efficiency and reliability. The Coiflet wavelet-coupled ANFIS model performed better than the other models and can be applied for daily SSC estimation in highly dynamic rivers. According to the sensitivity analysis, the previous day's SSC (St-1) is the most important input variable for daily SSC estimation in the Koyna River basin.
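
A minimal sketch of a wavelet-coupled network (W-ANN) in the spirit described above, assuming PyWavelets and scikit-learn; the coif1 wavelet, decomposition level, network size, and synthetic SSC record are illustrative stand-ins for the study's tuned choices, and the ANFIS variant is not reproduced here.

```python
import numpy as np
import pywt
from sklearn.neural_network import MLPRegressor

def dwt_subsignals(series, wavelet="coif1", level=2):
    """Return per-time-step features built from the DWT sub-signals of the series."""
    x = np.asarray(series, dtype=float)
    coeffs = pywt.wavedec(x, wavelet, level=level)
    subsignals = []
    for i in range(len(coeffs)):
        keep = [np.zeros_like(c) for c in coeffs]
        keep[i] = coeffs[i]
        subsignals.append(pywt.waverec(keep, wavelet)[: len(x)])
    return np.column_stack(subsignals)   # one column per sub-signal

# Stand-in daily SSC record (mg/L), for demonstration only.
rng = np.random.default_rng(2)
ssc = np.abs(rng.normal(200, 50, 400))

# Previous day's sub-signal values (St-1 decomposed) as inputs, next day's SSC as target.
# Note: decomposing the whole series at once is a simplification; a real forecasting
# application would decompose only the data available up to each forecast time.
X, y = dwt_subsignals(ssc)[:-1], ssc[1:]
model = MLPRegressor(hidden_layer_sizes=(10,), max_iter=2000, random_state=0).fit(X, y)
print(model.predict(dwt_subsignals(ssc)[-1:]))   # estimate for the next day
```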


Entropy ◽  
2021 ◽  
Vol 23 (6) ◽  
pp. 659
Author(s):  
Jue Lu ◽  
Ze Wang

Entropy indicates the irregularity or randomness of a dynamic system. Over the decades, entropy calculated at different scales of the system through subsampling or coarse graining has been used as a surrogate measure of system complexity. One popular multi-scale entropy analysis is multi-scale sample entropy (MSE), which calculates entropy through the sample entropy (SampEn) formula at each time scale. SampEn is defined by the “logarithmic likelihood” that a small section of the data (within a window of length m) that “matches” other sections will still “match” them when the window length increases by one. A “match” is defined by a threshold of r times the standard deviation of the entire time series. A problem with the current MSE algorithm is that SampEn calculations at different scales are based on the same matching threshold, defined from the original time series, even though the data standard deviation actually changes with the subsampling scale. Using a fixed threshold therefore introduces a systematic bias into the calculation results. The purpose of this paper is to present this systematic bias mathematically and to provide methods for correcting it. Our work will help the large MSE user community avoid introducing this bias into their multi-scale SampEn calculations.
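
The bias can be made concrete with a short sketch: sample entropy of coarse-grained series computed with a threshold fixed from the original series versus a threshold recomputed from each coarse-grained series. The SampEn implementation, the white-noise test signal, and all parameters below are illustrative.

```python
import numpy as np

def sample_entropy(x, m=2, r=0.2):
    """SampEn(m, r) = -ln(A/B), where B and A count template matches of length m and m+1."""
    x = np.asarray(x, dtype=float)
    n_templates = len(x) - m
    def count(mm):
        t = np.array([x[i:i + mm] for i in range(n_templates)])
        c = 0
        for i in range(n_templates):
            # Chebyshev distance between template i and all templates; exclude the self-match.
            c += np.sum(np.abs(t - t[i]).max(axis=1) <= r) - 1
        return c
    return -np.log(count(m + 1) / count(m))

def coarse_grain(x, scale):
    """Non-overlapping averages of length `scale`, as in standard MSE."""
    n = len(x) // scale
    return x[: n * scale].reshape(n, scale).mean(axis=1)

rng = np.random.default_rng(3)
x = rng.standard_normal(2000)
r_fixed = 0.2 * x.std()   # threshold tied to the original series, as in the current MSE algorithm
for scale in (1, 2, 4, 8):
    y = coarse_grain(x, scale)
    fixed = sample_entropy(y, m=2, r=r_fixed)
    rescaled = sample_entropy(y, m=2, r=0.2 * y.std())   # threshold follows the coarse-grained SD
    # The fixed threshold is relatively looser at coarser scales (the SD shrinks),
    # so it yields more matches and a lower SampEn than the rescaled threshold.
    print(scale, round(fixed, 3), round(rescaled, 3))
```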

