bandwidth parameter
Recently Published Documents

TOTAL DOCUMENTS: 45 (five years: 8)
H-INDEX: 9 (five years: 0)

Entropy, 2022, Vol 24 (1), pp. 117
Author(s): Xuyou Li, Yanda Guo, Qingwen Meng

The maximum correntropy Kalman filter (MCKF) is an effective algorithm proposed to solve the non-Gaussian filtering problem for linear systems. Compared with the original Kalman filter (KF), the MCKF is a sub-optimal filter with a Gaussian correntropy objective function, which has been demonstrated to be highly robust to non-Gaussian noise. However, the performance of the MCKF depends on its kernel bandwidth parameter, and a constant kernel bandwidth may lead to severe accuracy degradation under non-stationary noise. To solve this problem, the mixture correntropy method is further explored in this work, and an improved maximum mixture correntropy KF (IMMCKF) is proposed. In the derivation, random variables obeying a Beta-Bernoulli distribution are taken as intermediate parameters, and a new hierarchical Gaussian state-space model is established. Finally, the unknown mixing probability and the state estimation vector at each moment are inferred via a variational Bayesian approach, which provides an effective way to improve the applicability of MCKFs under non-stationary noise. Performance evaluations demonstrate that the proposed filter significantly improves on existing MCKFs under non-stationary noise.
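As a rough illustration of the bandwidth sensitivity this abstract describes, here is a minimal sketch (not the IMMCKF itself) of a scalar measurement update in which a Gaussian correntropy kernel reweights the measurement noise. The function names and the simple fixed-point scheme are my own simplified assumptions:

```python
import numpy as np

def correntropy_weight(e, sigma):
    """Gaussian kernel weight: near 1 for small residuals, -> 0 for outliers."""
    return np.exp(-e**2 / (2.0 * sigma**2))

def mc_update(x_prior, P_prior, z, H, R, sigma, iters=5):
    """One scalar measurement update with correntropy-induced reweighting of R.

    Fixed-point iteration: the current residual sets a weight w, the effective
    measurement variance becomes R / w, and the state is re-estimated.
    """
    x = x_prior
    for _ in range(iters):
        e = (z - H * x) / np.sqrt(R)          # normalized innovation
        w = correntropy_weight(e, sigma)
        R_eff = R / max(w, 1e-12)             # outliers inflate R_eff
        K = P_prior * H / (H * P_prior * H + R_eff)
        x = x_prior + K * (z - H * x_prior)
    P = (1.0 - K * H) * P_prior
    return x, P
```

With a small bandwidth sigma, an outlying measurement is nearly ignored (the gain collapses toward zero), while a very large sigma recovers the standard KF gain; this is why a poorly chosen fixed bandwidth can degrade accuracy under changing noise.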


2021, Vol 2021, pp. 1-11
Author(s): Weijie Zhou, Huihui Tao, Feifei Wang, Weiqiang Pan

In this paper, the optimal bandwidth parameter of the GPH algorithm is investigated. Firstly, drawing on the stylized facts of financial time series, we generate long memory sequences using the ARFIMA (1, d, 1) process. Secondly, we use the Monte Carlo method to study how the bandwidth affects the GPH algorithm on three tasks: testing for the existence of long memory, judging persistence versus antipersistence, and estimating the long memory parameter. The results show that accuracy on all three tasks reaches a relatively high level for bandwidth parameters in the interval 0.5 < a < 0.7. For time series of different lengths, the bandwidth parameter a = 0.6 can be used as the optimal choice for GPH estimation. Furthermore, we report the accuracy of the GPH algorithm on existence, persistence or antipersistence, and the long memory parameter d when a = 0.6.
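A minimal sketch of the GPH log-periodogram estimator, with m = floor(n^a) Fourier frequencies so that a plays exactly the bandwidth role studied above. The function name is mine and this is not the paper's code:

```python
import numpy as np

def gph_estimate(x, a=0.6):
    """GPH log-periodogram estimate of the long-memory parameter d.

    Bandwidth: use the first m = floor(n**a) Fourier frequencies.
    """
    n = len(x)
    m = int(np.floor(n**a))
    j = np.arange(1, m + 1)
    lam = 2.0 * np.pi * j / n
    # periodogram at the first m Fourier frequencies
    fx = np.fft.fft(x - np.mean(x))
    I = (np.abs(fx[1:m + 1]) ** 2) / (2.0 * np.pi * n)
    # regress log I(lam_j) on the GPH regressor -log(4 sin^2(lam/2))
    X = np.column_stack([np.ones(m), -np.log(4.0 * np.sin(lam / 2.0) ** 2)])
    beta, *_ = np.linalg.lstsq(X, np.log(I), rcond=None)
    return beta[1]   # slope = estimated d
```

For white noise (d = 0) the estimate should hover near zero; a larger a uses more frequencies, lowering variance but risking bias from short-run dynamics, which is the trade-off the paper quantifies.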


2021, Vol 28 (3), pp. 553-574
Author(s): Silvia Bianconcini, Benoit Quenneville

Recently, reproducing kernel Hilbert spaces have been introduced to provide a common approach for studying several nonparametric estimators used for smoothing functional time series data (Dagum and Bianconcini, 2006 and 2008). The reproducing kernel representation is based on the derivation of the density function (i.e., a second order kernel) embedded in the linear filter. This is the starting point for deriving higher order kernels, which are obtained as the product of the density and its orthonormal polynomials. This paper focuses on the Henderson filter, for which two density functions and their corresponding hierarchies have been derived. The properties of the Henderson reproducing kernels are analyzed when the filters are adapted at the end of the sample period. The optimality criterion that the filters satisfy, as well as the influence of the kernel order and bandwidth parameter, is studied.
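The hierarchy idea — higher order kernels as the product of a density and its orthonormal polynomials — can be checked numerically. As an assumed stand-in for the paper's Henderson densities, the sketch below uses the standard normal density and its Hermite polynomials: K4(u) = ½(3 − u²)φ(u) still integrates to one, but its second moment vanishes, which raises the kernel order from two to four.

```python
import numpy as np

# Second-order kernel: the standard normal density phi(u).
phi = lambda u: np.exp(-u**2 / 2.0) / np.sqrt(2.0 * np.pi)

# Fourth-order kernel built from phi and its (Hermite) orthonormal
# polynomials: K4(u) = 0.5 * (3 - u**2) * phi(u).
K4 = lambda u: 0.5 * (3.0 - u**2) * phi(u)

u = np.linspace(-10.0, 10.0, 200001)
du = u[1] - u[0]
moment = lambda k, K: np.sum(u**k * K(u)) * du   # numerical integral of u^k K(u)

# phi has unit mass and unit second moment; K4 keeps unit mass but its
# second moment is zero, the defining property of a fourth-order kernel.
```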


Econometrics, 2021, Vol 9 (1), pp. 9
Author(s): J. Eduardo Vera-Valdés

Econometric studies of global heating have typically used regional or global temperature averages to study its long memory properties. One typical explanation for the long memory properties of temperature averages is cross-sectional aggregation. Nonetheless, a formal analysis of the effect that aggregation has on the long memory dynamics of temperature data has been missing. Thus, this paper studies the long memory properties of individual grid temperatures and compares them against the long memory dynamics of global and regional averages. Our results show that the long memory parameters of individual grid observations are smaller than those of regional averages. Global and regional long memory estimates are strongly affected by temperature measurements at the Tropics, where the data are less reliable. Thus, this paper supports the notion that aggregation may be exacerbating the long memory estimated in regional and global temperature data. The results are robust to the bandwidth parameter, the limit on station radius of influence, and the sampling frequency.
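The cross-sectional aggregation mechanism the paper refers to can be sketched with a simulation: averaging many AR(1) series with heterogeneous persistence yields a series whose autocorrelation decays far more slowly than a typical component's. The distribution of coefficients and all other numbers below are illustrative choices of mine, not the paper's data:

```python
import numpy as np

rng = np.random.default_rng(1)
n, N = 2000, 500                             # series length, number of units

# Heterogeneous AR(1) coefficients with mass piled up near (but below) 1.
phis = np.sqrt(rng.beta(3.0, 1.0, size=N))

x = np.zeros((N, n))
eps = rng.standard_normal((N, n))
for t in range(1, n):
    x[:, t] = phis * x[:, t - 1] + eps[:, t]

agg = x.mean(axis=0)                         # cross-sectional average

def acf(y, k):
    """Sample autocorrelation of y at lag k."""
    y = y - y.mean()
    return np.dot(y[:-k], y[k:]) / np.dot(y, y)
```

Each individual series has a geometrically decaying autocorrelation, yet the aggregate's autocorrelation stays elevated at long lags — the mechanism by which averaging can inflate estimated long memory.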


2021, Vol 27 (1), pp. 57-69
Author(s): Yasmina Ziane, Nabil Zougab, Smail Adjabi

In this paper, we consider procedures for deriving variable bandwidths in univariate kernel density estimation for nonnegative heavy-tailed (HT) data. These procedures use the Birnbaum–Saunders power-exponential (BS-PE) kernel estimator and a Bayesian approach to the adaptive bandwidths. We adapt an algorithm that subdivides the HT data set into two regions, a high-density region (HDR) and a low-density region (LDR), and we assign a bandwidth parameter to each region. The bandwidths are derived using a Markov chain Monte Carlo (MCMC) sampling algorithm. A series of simulation studies and real-data applications is carried out to evaluate the performance of the proposed procedure.
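A simplified sketch of the two-region idea, using a plain Gaussian kernel and Silverman-style per-region bandwidths instead of the paper's BS-PE kernel and MCMC-derived bandwidths; the quantile split and all names are my own assumptions:

```python
import numpy as np

def gaussian_kde(x, data, h):
    """Plain Gaussian KDE with fixed bandwidth h, evaluated at points x."""
    u = (x[:, None] - data[None, :]) / h
    return np.exp(-0.5 * u**2).sum(axis=1) / (len(data) * h * np.sqrt(2 * np.pi))

def two_region_kde(x, data, q=0.9, h_hdr=None, h_ldr=None):
    """Split heavy-tailed data at quantile q into a high-density region (HDR)
    and a low-density tail (LDR), give each region its own bandwidth
    (Silverman-style rule when none is supplied), and mix the two estimates."""
    cut = np.quantile(data, q)
    hdr, ldr = data[data <= cut], data[data > cut]
    silverman = lambda d: 1.06 * d.std() * len(d) ** (-0.2)
    h1 = h_hdr if h_hdr is not None else silverman(hdr)
    h2 = h_ldr if h_ldr is not None else silverman(ldr)
    w = len(hdr) / len(data)
    return w * gaussian_kde(x, hdr, h1) + (1 - w) * gaussian_kde(x, ldr, h2)
```

Because the tail region gets its own (typically wider) bandwidth, the estimate avoids the spurious bumps a single global bandwidth produces on heavy-tailed samples.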


Complexity, 2020, Vol 2020, pp. 1-23
Author(s): Haien Wang, Xingxing Jiang, Wenjun Guo, Juanjuan Shi, Zhongkui Zhu

Current research on variational mode decomposition (VMD) focuses mainly on selecting the number of decomposed modes and the bandwidth parameter using various optimization algorithms. Most of these methods rely on genetic-style algorithms to tune these parameters quantitatively, which introduces additional initial parameters and, by ignoring the inherent characteristics of VMD, inevitably increases the computational burden. From the perspective of locating the initial center frequency (ICF) during the VMD decomposition process, we propose an enhanced VMD guided by the envelope negentropy spectrum for bearing fault diagnosis, which effectively avoids the drawbacks of current VMD-based algorithms. First, the ICF is coarsely located by the envelope negentropy spectrum (ENS), and the fault-related modes are quickly extracted by incorporating the ICF into the VMD. Then, the fault-related modes are adaptively optimized by adjusting the bandwidth parameters. Lastly, to identify fault-related features, the Hilbert envelope demodulation technique is used to analyze the optimal mode obtained by the proposed method. Analysis of simulated and experimental data indicates that the proposed method effectively extracts the weak fault characteristics of bearings and has advantages over some advanced methods. Moreover, an extension of the proposed method to identify multiple components is discussed to broaden its scope of application.
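The final step, Hilbert envelope demodulation of the selected mode, can be sketched as below on a synthetic amplitude-modulated "fault mode" of my own construction (the fault and resonance frequencies are arbitrary): the envelope spectrum peaks at the modulation (fault) frequency rather than at the resonance carrier.

```python
import numpy as np
from scipy.signal import hilbert

fs = 12_000                                # sampling rate in Hz (assumed)
t = np.arange(0, 1.0, 1.0 / fs)
f_fault, f_res = 100.0, 3000.0             # fault repetition and resonance freqs

# Stand-in for a bearing fault mode: a resonance carrier whose amplitude
# is modulated at the fault repetition frequency.
mode = (1.0 + 0.8 * np.cos(2 * np.pi * f_fault * t)) * np.cos(2 * np.pi * f_res * t)

# Hilbert envelope demodulation: take the magnitude of the analytic signal,
# then look at the spectrum of the (demeaned) envelope.
env = np.abs(hilbert(mode))
spec = np.abs(np.fft.rfft(env - env.mean()))
freqs = np.fft.rfftfreq(len(env), 1.0 / fs)
peak = freqs[np.argmax(spec)]              # dominant envelope frequency
```

The raw spectrum of `mode` is dominated by energy around 3000 Hz, while the envelope spectrum exposes the 100 Hz repetition rate — the fault-related feature the method is after.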


2019, Vol 127 (9), pp. 507
Author(s): С.И. Зиенко, Д.С. Слабковский

To identify features that distinguish natural diamonds from artificial ones, a comparative analysis of the luminescence spectra is performed with regard to the Q factor, center of gravity, bandwidth parameter, and energy losses in the diamond crystal lattice under conditions of ohmic and dielectric relaxation of luminescence. The phenomenon of resonant luminescence in the femtosecond time range is detected in diamond. It is established that natural and artificial diamonds differ noticeably in relaxation frequency and in the energy of resonant radiation.
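For the Q factor of a spectral band, a common definition is the band's center frequency divided by its full width at half maximum (FWHM). The sketch below applies that definition to a synthetic Lorentzian line with arbitrary numbers, not measured diamond spectra:

```python
import numpy as np

# Synthetic Lorentzian luminescence line: peak 1 at f0, half-width gamma.
f = np.linspace(400.0, 600.0, 100001)          # frequency grid (THz, say)
f0, gamma = 500.0, 5.0                         # center and half-width (HWHM)
line = 1.0 / (1.0 + ((f - f0) / gamma) ** 2)

# FWHM: span of the region where the line exceeds half its maximum.
above = f[line >= 0.5]
fwhm = above[-1] - above[0]                    # = 2 * gamma for a Lorentzian

Q = f0 / fwhm                                  # Q factor of the band
```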


2018
Author(s): Taylor Oshan, Ziqi Li, Wei Kang, Levi John Wolf, Alexander Stewart Fotheringham

Geographically weighted regression (GWR) is a spatial statistical technique that recognizes that traditional 'global' regression models may be limited when spatial processes vary with spatial context. GWR captures process spatial heterogeneity via an operationalization of Tobler's first law of geography: "everything is related to everything else, but near things are more related than distant things" (1970). An ensemble of local linear models is calibrated at any number of locations by 'borrowing' nearby data. The result is a surface of location-specific parameter estimates for each relationship in the model that may vary spatially, as well as a single bandwidth parameter that provides intuition about the geographic scale of the processes. A recent extension to this framework allows each relationship to vary according to a distinct spatial scale parameter and is therefore known as multiscale (M)GWR. This paper introduces mgwr, a Python-based implementation for efficiently calibrating a variety of (M)GWR models and a selection of associated diagnostics. It reviews some core concepts, introduces the primary software functionality, and demonstrates suggested usage on several example datasets.
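The local calibration step can be sketched as a weighted least-squares fit at a single location, with a Gaussian distance kernel whose width is the bandwidth parameter. This is a bare illustration with names of my own choosing, not the mgwr implementation:

```python
import numpy as np

def gwr_local_fit(coords, X, y, loc, bw):
    """Weighted least squares at one calibration location.

    Observations are weighted by a Gaussian kernel of their distance to
    `loc`; `bw` is the bandwidth controlling how far data is 'borrowed'.
    """
    d = np.linalg.norm(coords - loc, axis=1)
    w = np.exp(-0.5 * (d / bw) ** 2)              # Gaussian distance kernel
    Xd = np.column_stack([np.ones(len(X)), X])    # add intercept column
    W = np.diag(w)
    beta = np.linalg.solve(Xd.T @ W @ Xd, Xd.T @ W @ y)
    return beta                                   # local intercept and slopes
```

Looping such fits over all calibration locations yields the surface of location-specific estimates; the mgwr package additionally handles bandwidth selection and the diagnostics the paper reviews.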

