Rate of Convergence in Density Estimation Using Neural Networks

1996 ◽  
Vol 8 (5) ◽  
pp. 1107-1122 ◽  
Author(s):  
Dharmendra S. Modha ◽  
Elias Masry

Given N i.i.d. observations {X_i}_{i=1}^N taking values in a compact subset of R^d, with common probability density function p*, we estimate p* from an exponential family of densities based on single-hidden-layer sigmoidal networks using a certain minimum complexity density estimation scheme. Assuming that p* possesses a certain exponential representation, we establish a rate of convergence, independent of the dimension d, for the expected Hellinger distance between the proposed minimum complexity density estimator and the true underlying density p*.
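For orientation, a schematic form of such a network-based exponential family and of the error metric behind the rate (the notation below is illustrative, not necessarily the authors' exact parametrization) is

$$
p_\theta(x) = \frac{\exp\big(g_\theta(x)\big)}{\int \exp\big(g_\theta(u)\big)\,du},
\qquad
g_\theta(x) = c_0 + \sum_{k=1}^{K} c_k\,\sigma\big(a_k^{\top}x + b_k\big),
$$

with $\sigma$ a sigmoidal unit, together with the squared Hellinger distance (up to the choice of constant factor)

$$
H^2(\hat p, p^*) = \int \Big(\sqrt{\hat p(x)} - \sqrt{p^*(x)}\Big)^2\,dx .
$$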

1968 ◽  
Vol 64 (2) ◽  
pp. 481-483 ◽  
Author(s):  
J. K. Wani

In this paper we give a characterization theorem for a subclass of the exponential family whose probability density function is given by

p(x; ω) = a(x) exp(ωx) / f(ω),     (1)

where a(x) ≥ 0, f(ω) = ∫ a(x) exp(ωx) dx, and ωx is to be interpreted as a scalar product. The random variable X may be an s-vector, in which case ω will also be an s-vector. For obvious reasons we will call (1) the linear exponential family. It is easy to verify that the moment generating function (m.g.f.) of (1) is given by M(t) = f(ω + t) / f(ω).
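The m.g.f. stated above follows in one line from the normalizing function f(ω); in the notation of the abstract,

$$
M_X(t) = \mathbb{E}\big[e^{tX}\big]
= \int e^{tx}\,\frac{a(x)\,e^{\omega x}}{f(\omega)}\,dx
= \frac{1}{f(\omega)}\int a(x)\,e^{(\omega+t)x}\,dx
= \frac{f(\omega+t)}{f(\omega)} .
$$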


2021 ◽  
Vol 54 (2) ◽  
pp. 99-121
Author(s):  
Yogendra P. Chaubey ◽  
Nhat Linh Vu

In this paper, we are interested in estimating the entropy of a non-negative random variable. Since the underlying probability density function is unknown, we propose the use of the Poisson smoothed histogram density estimator to estimate the entropy. To study the performance of our estimator, we run simulations on a wide range of densities and compare our entropy estimators with existing estimators based on different approaches, such as spacing estimators. Furthermore, we extend our study to residual entropy estimators, where residual entropy is the entropy of a random variable given that it has survived up to time $t$.
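As a rough illustration of the plug-in idea (a minimal sketch assuming a standard Poisson-weights smoothing of the empirical distribution for non-negative data; the function names, the truncation, and the smoothing rate lam are illustrative, not the authors' exact choices):

```python
import numpy as np
from scipy.stats import poisson

def poisson_smoothed_density(x_grid, sample, lam):
    """Poisson-weight smoothing of the empirical CDF for non-negative data
    (a common construction; the paper's exact estimator may differ)."""
    n = len(sample)
    sorted_sample = np.sort(sample)
    k = np.arange(0, int(lam * sample.max()) + 50)            # truncate the infinite sum
    # empirical CDF increments over the knots [k/lam, (k+1)/lam)
    Fk = np.searchsorted(sorted_sample, k / lam, side="right") / n
    Fk1 = np.searchsorted(sorted_sample, (k + 1) / lam, side="right") / n
    weights = (Fk1 - Fk) * lam
    # density estimate: f(x) = sum_k Poisson_k(lam * x) * weights_k
    pmf = poisson.pmf(k[None, :], lam * x_grid[:, None])
    return pmf @ weights

def entropy_plug_in(sample, lam=None, grid_size=512):
    """Plug-in entropy estimate H = -integral of f log f, using the smoothed density."""
    sample = np.asarray(sample, dtype=float)
    if lam is None:
        lam = len(sample) ** 0.8 / sample.mean()              # illustrative smoothing rate
    x = np.linspace(1e-6, 1.5 * sample.max(), grid_size)
    f = np.clip(poisson_smoothed_density(x, sample, lam), 1e-12, None)
    return -np.trapz(f * np.log(f), x)

# toy check on Exp(1) data, whose true entropy equals 1
rng = np.random.default_rng(0)
print(entropy_plug_in(rng.exponential(size=2000)))
```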


2009 ◽  
Vol 19 (05) ◽  
pp. 345-357 ◽  
Author(s):  
EZEQUIEL LÓPEZ-RUBIO

Robustness against outliers is a desirable property of any unsupervised learning scheme. In particular, probability density estimators benefit from incorporating this feature. A possible strategy to achieve this goal is to substitute the sample mean and the sample covariance matrix by more robust location and spread estimators. Here we use the L1-median to develop a nonparametric probability density function (PDF) estimator. We prove its most relevant properties, and we show its performance in density estimation and classification applications.
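The L1-median (geometric median) has no closed form; it is usually computed by Weiszfeld's iteration, which a robust estimator of this kind can substitute for the sample mean. A minimal NumPy sketch (not the paper's code):

```python
import numpy as np

def l1_median(X, tol=1e-8, max_iter=500):
    """Weiszfeld iteration for the L1 (geometric) median of the rows of X."""
    m = X.mean(axis=0)                      # start from the ordinary mean
    for _ in range(max_iter):
        d = np.linalg.norm(X - m, axis=1)
        d = np.where(d < 1e-12, 1e-12, d)   # guard against division by zero
        w = 1.0 / d
        m_new = (w[:, None] * X).sum(axis=0) / w.sum()
        if np.linalg.norm(m_new - m) < tol:
            return m_new
        m = m_new
    return m

# example: the L1-median is barely moved by a gross outlier, unlike the mean
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 2))
X[0] = [100.0, 100.0]
print("mean:     ", X.mean(axis=0))
print("L1-median:", l1_median(X))
```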


2020 ◽  
Vol 13 (9) ◽  
pp. 205
Author(s):  
Timothy Fortune ◽  
Hailin Sang

In this paper, we estimate the Shannon entropy S(f) = −E[log(f(X))] of a one-sided linear process with probability density function f(x). We employ the integral estimator S_n(f), which utilizes the standard kernel density estimator f_n(x) of f(x). We show that S_n(f) converges to S(f) almost surely and in L2 under reasonable conditions.
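A minimal sketch of such an integral-type plug-in estimator built on a Gaussian KDE (the paper's kernel, bandwidth, and integration domain may differ; gaussian_kde with its default Scott's-rule bandwidth is used here purely for illustration):

```python
import numpy as np
from scipy.stats import gaussian_kde

def integral_entropy_estimate(sample, grid_size=2000, pad=3.0):
    """Plug-in estimate S_n = -integral of f_n(x) log f_n(x) dx with a Gaussian KDE f_n."""
    sample = np.asarray(sample, dtype=float)
    kde = gaussian_kde(sample)                      # bandwidth: Scott's rule by default
    lo = sample.min() - pad * sample.std()
    hi = sample.max() + pad * sample.std()
    x = np.linspace(lo, hi, grid_size)
    f = np.clip(kde(x), 1e-300, None)               # avoid log(0)
    return -np.trapz(f * np.log(f), x)

# toy check: a causal AR(1)-type linear process X_t = 0.5 X_{t-1} + e_t
rng = np.random.default_rng(0)
e = rng.normal(size=5000)
x = np.zeros_like(e)
for t in range(1, len(e)):
    x[t] = 0.5 * x[t - 1] + e[t]
# the stationary law is N(0, 1/(1 - 0.25)); its entropy is 0.5*log(2*pi*e*var)
print(integral_entropy_estimate(x))
print(0.5 * np.log(2 * np.pi * np.e / (1 - 0.25)))
```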


2021 ◽  
pp. 107754632110201
Author(s):  
Mohammad Ali Heravi ◽  
Seyed Mehdi Tavakkoli ◽  
Alireza Entezami

In this article, autoregressive time series analysis is used to extract reliable features from vibration measurements of civil structures for damage diagnosis. To guarantee the adequacy and applicability of the time series model, the Leybourne–McCabe hypothesis test is used. Subsequently, the probability density functions of the autoregressive model parameters and residuals are obtained with the aid of a kernel density estimator. The probability density function sets are treated as damage-sensitive features of the structure, and the fast distance correlation method is used to decide whether the structure is damaged. Experimental data from a well-known three-story laboratory frame and a large-scale bridge benchmark structure are used to verify the efficiency and accuracy of the proposed method. The results indicate the capability of the method to identify the location and severity of damage, even under simulated operational and environmental variability.
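A rough sketch of the feature pipeline described above, with illustrative function names; the AR order selection, the Leybourne–McCabe adequacy test, and the fast variant of distance correlation are not reproduced (a classic O(n^2) distance correlation is used instead):

```python
import numpy as np
from scipy.stats import gaussian_kde

def ar_residuals(x, p=10):
    """Least-squares AR(p) fit; returns (coefficients, residuals)."""
    X = np.column_stack([x[p - k - 1 : len(x) - k - 1] for k in range(p)])
    y = x[p:]
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef, y - X @ coef

def kde_feature(values, grid):
    """Evaluate a Gaussian KDE of the values on a fixed grid (the PDF 'feature')."""
    return gaussian_kde(values)(grid)

def distance_correlation(a, b):
    """Classic O(n^2) sample distance correlation (not the fast algorithm)."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    A = np.abs(a[:, None] - a[None, :])
    B = np.abs(b[:, None] - b[None, :])
    A = A - A.mean(0) - A.mean(1)[:, None] + A.mean()   # double centering
    B = B - B.mean(0) - B.mean(1)[:, None] + B.mean()
    dcov2 = (A * B).mean()
    return np.sqrt(dcov2 / np.sqrt((A * A).mean() * (B * B).mean()))

# baseline vs. test signals (toy data with slightly different dynamics)
rng = np.random.default_rng(0)
base = np.convolve(rng.normal(size=4000), [1, 0.6, 0.3], mode="same")
test = np.convolve(rng.normal(size=4000), [1, 0.4, 0.3], mode="same")
grid = np.linspace(-4, 4, 200)
f_base = kde_feature(ar_residuals(base)[1], grid)
f_test = kde_feature(ar_residuals(test)[1], grid)
print(distance_correlation(f_base, f_test))
```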


2012 ◽  
Vol 460 ◽  
pp. 189-192
Author(s):  
Hong Ying Hu ◽  
Chun Ming Kan

Empirical Mode Decomposition (EMD) is a recently developed method for processing non-stationary signals and has been applied in many engineering fields. EMD shares many similarities with wavelet decomposition, but it has its own characteristics, especially for accurate trend extraction. The paper therefore first proposes an algorithm for extracting a slowly varying trend based on EMD. Then, following the wavelet-based probability density function estimation method, a new density estimation method based on EMD is presented. Simulations of single Gaussian and Gaussian mixture model density estimation demonstrate the advantages of the approach: easy computation and more accurate results.
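The core operation inside EMD's sifting is the mean of the upper and lower extrema envelopes; the sketch below uses that envelope mean as a crude slow-varying trend extractor (a single sifting-style step with assumed endpoint handling, not the full EMD algorithm or the authors' method):

```python
import numpy as np
from scipy.signal import argrelextrema
from scipy.interpolate import CubicSpline

def mean_envelope_trend(x):
    """Mean of the upper and lower cubic-spline envelopes of the local extrema,
    taken here as a crude slow-varying trend (one EMD-style sifting step)."""
    idx = np.arange(len(x))
    maxima = argrelextrema(x, np.greater)[0]
    minima = argrelextrema(x, np.less)[0]
    # include the end points so the splines cover the whole record
    maxima = np.unique(np.concatenate(([0], maxima, [len(x) - 1])))
    minima = np.unique(np.concatenate(([0], minima, [len(x) - 1])))
    upper = CubicSpline(maxima, x[maxima])(idx)
    lower = CubicSpline(minima, x[minima])(idx)
    return 0.5 * (upper + lower)

# toy signal: slow ramp plus an oscillation
t = np.linspace(0, 10, 2000)
signal = 0.5 * t + np.sin(2 * np.pi * t)
trend = mean_envelope_trend(signal)
# edge effects aside, the envelope mean recovers the ramp (correlation near 1)
print(np.corrcoef(trend, 0.5 * t)[0, 1])
```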

