Time Scale Estimation Method Based on Anisotropic Diffusion

Author(s):  
Hui Shao ◽  
Hailin Zou ◽  
Yincheng Liang ◽  
Wenjun Li ◽  
Qian Gao
IEEE Access ◽  
2020 ◽  
Vol 8 ◽  
pp. 31057-31068
Author(s):  
Xianglei Yin ◽  
Guixi Liu ◽  
Xiaohong Ma

2000 ◽  
Vol 21 (3) ◽  
pp. 237-248
Author(s):  
John Belcher ◽  
Granville Tunnicliffe Wilson

2015 ◽  
Vol 9 (2) ◽  
Author(s):  
Christian Marx

Abstract: The identification of outliers in measurement data is hindered when they occur at leverage points as well as in the rest of the data. A promising method for their identification is Monte Carlo estimation (MCE), which is the subject of the present investigation. In MCE the data are searched for subsamples without leverage outliers and with few (or no) non-leverage outliers by generating subsamples at random. The number of subsamples required so that several such subsamples are generated with a given probability is derived. Each generated subsample is rated by the residuals resulting from an adjustment. By means of a simulation it is shown that a least squares adjustment is suitable. For the rating of the subsamples, the sum of squared residuals is used as a measure of fit; it is argued that this (unweighted) sum is also appropriate when the data have unequal weights. An investigation of the robustness of a final Bayes estimation, with the result of the Monte Carlo search as prior information, reveals its inappropriateness. Furthermore, the case of an unknown variance factor is considered. A simulation of different scale estimators for the variance factor shows their impracticality. A new resistant scale estimator is introduced, based on a generalisation of the median absolute deviation. Taking the results of these investigations into account, a new procedure for MCE incorporating scale estimation is proposed. Finally, the method is tested by simulation. MCE turns out to be more reliable in the identification of outliers than a conventional resistant estimation method.
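The subsample search described in the abstract follows the same combinatorics as RANSAC-style sampling. Below is a minimal sketch in Python, assuming a linear model y ≈ Ax; the required-trials formula is the standard one for obtaining at least one clean subsample with a given probability (the paper derives a variant for several such subsamples), and all function names are illustrative, not taken from the paper.

```python
import numpy as np

def required_subsamples(p_success, outlier_frac, subsample_size):
    """Standard RANSAC-style count: number of random subsamples needed
    so that at least one is outlier-free with probability p_success."""
    p_clean = (1.0 - outlier_frac) ** subsample_size
    return int(np.ceil(np.log(1.0 - p_success) / np.log(1.0 - p_clean)))

def mc_outlier_search(A, y, subsample_size, n_trials, seed=None):
    """Rate random subsamples by the sum of squared residuals of a
    least squares adjustment of y ~ A x; return the best subsample."""
    rng = np.random.default_rng(seed)
    best_ssr, best_idx = np.inf, None
    for _ in range(n_trials):
        idx = rng.choice(len(y), size=subsample_size, replace=False)
        x, *_ = np.linalg.lstsq(A[idx], y[idx], rcond=None)
        ssr = float(np.sum((y[idx] - A[idx] @ x) ** 2))
        if ssr < best_ssr:
            best_ssr, best_idx = ssr, idx
    return best_idx, best_ssr
```

A resistant scale estimate in the spirit of the median absolute deviation could then be computed from the residuals of the winning subsample.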


Author(s):  
Josip Arnerić

Abstract: Availability of high-frequency data, in line with IT developments, enables the use of more information to estimate not only the variance (volatility), but also higher realized moments and the entire realized distribution of returns. Old-fashioned approaches use only closing prices and assume that the underlying distribution is time-invariant, which makes traditional forecasting models unreliable. Moreover, time-varying realized moments support the finding that returns are not identically distributed across trading days. The objective of the paper is to find an appropriate data-driven distribution of returns using high-frequency data. The kernel estimation method is applied to DAX intraday prices, balancing the bias and the variance of the realized moments with respect to both the bandwidth selection and the sampling frequency selection. The main finding is that the kernel bandwidth is strongly related to the sampling frequency at the slow-time scale when applying a two-scale estimator, while the fast-time-scale sampling frequency is held fixed. The realized kernel density estimation enriches the literature by providing the best data-driven proxy of the true but unknown probability density function of returns, which can be used as a benchmark in comparisons against ex-ante or implied moments.
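As a rough illustration of the two ingredients the abstract combines, the sketch below estimates a kernel density of intraday log returns at a chosen sampling frequency and computes a two-scale realized variance in the style of Zhang, Mykland and Aït-Sahalia, which the abstract's two-scale estimator appears to reference. The bandwidth here is scipy's default (Scott's rule), standing in for the paper's joint bandwidth/frequency selection; all names are illustrative.

```python
import numpy as np
from scipy.stats import gaussian_kde

def realized_kde(prices, step):
    """Kernel density of intraday log returns sampled every `step` ticks.
    gaussian_kde uses Scott's rule for the bandwidth by default."""
    r = np.diff(np.log(np.asarray(prices)[::step]))
    return gaussian_kde(r), r

def two_scale_rv(prices, k):
    """Two-scale realized variance: average the k subsampled slow-scale
    RVs and bias-correct with the every-tick (fast-scale) RV."""
    logp = np.log(np.asarray(prices))
    n = len(logp) - 1
    rv_fast = np.sum(np.diff(logp) ** 2)
    rv_slow = np.mean([np.sum(np.diff(logp[s::k]) ** 2) for s in range(k)])
    nbar = (n - k + 1) / k  # average number of slow-scale returns
    return rv_slow - (nbar / n) * rv_fast
```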


2018 ◽  
Vol 15 (1) ◽  
pp. 172988141775151 ◽  
Author(s):  
ZL Wang ◽  
BG Cai

The core component of popular tracking-by-detection trackers is the discriminative classifier, which distinguishes the tracked target from the surrounding environment. Correlation filter-based visual tracking methods have an advantage in computational efficiency over traditional methods by exploiting the properties of circulant matrices in the learning process, and significant progress in efficiency has been achieved by using the fast Fourier transform at the detection and learning stages. However, most existing correlation filter-based approaches are restricted to translation estimation and are therefore susceptible to drifting in long-term tracking. In this article, a compressed multiple-feature and adaptive scale estimation method is presented, which uses multiple features, including histogram of oriented gradients, color naming, and raw pixel values, to further improve the stability and accuracy of translation estimation. For scale estimation, another correlation filter is trained that uses the compressed histogram of oriented gradients and raw pixel values to construct a multiscale pyramid of the target; the optimal scale is obtained by exhaustive search. Translation and scale estimation are unified with an iterative search strategy. Extensive experimental results on the scale-variation benchmark data set show that the performance of the proposed compressed multiple-feature and adaptive scale estimation algorithm is competitive with state-of-the-art methods with scale estimation capabilities in terms of robustness and accuracy.
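A minimal single-channel sketch of the FFT-domain correlation filter machinery such trackers build on (in the MOSSE/KCF lineage) is given below in Python. The paper's compressed multi-feature pipeline stacks several feature channels and trains a second filter over a scale pyramid, neither of which is reproduced here; all names are illustrative.

```python
import numpy as np

def train_filter(patch, sigma=2.0, lam=1e-2):
    """MOSSE-style single-channel correlation filter learned in the
    Fourier domain: a ridge regression that the circulant structure
    reduces to an elementwise division."""
    h, w = patch.shape
    ys, xs = np.mgrid[0:h, 0:w]
    # desired response: a Gaussian peak on the target centre
    g = np.exp(-((ys - h // 2) ** 2 + (xs - w // 2) ** 2) / (2 * sigma**2))
    G, F = np.fft.fft2(g), np.fft.fft2(patch)
    return G * np.conj(F) / (F * np.conj(F) + lam)

def detect(H_conj, patch):
    """Correlate the filter with a new patch; the response peak gives
    the translation estimate within the patch."""
    resp = np.fft.ifft2(np.fft.fft2(patch) * H_conj).real
    return np.unravel_index(np.argmax(resp), resp.shape)
```

A scale filter in the same spirit would evaluate this response over a pyramid of resampled patches and keep the scale with the strongest peak.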

